Import AI 277: DeepMind builds a GPT-3 model; Catalan GLUE; FTC plans AI regs

Could crypto computation eventually compete with AI computation at the level of fab production capacity?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

FTC plans AI regulation:
…FTC brings on three AI Now people as advisors, now turns attention to algorithmic regulation…
The Federal Trade Commission announced Friday that it is considering using its rulemaking authority “to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination,” according to the Electronic Privacy Information Center (EPIC). The announcement follows the FTC bringing on three people from AI Now, including Meredith Whittaker, as advisors on AI (Import AI #275).
Read more: FTC Signals It May Conduct Privacy, AI, & Civil Rights Rulemaking (EPIC).
  Read the FTC language at RegInfo.

####################################################

Google thinks sparsity might be the route to training bigger and more efficient GPT-3 models:
…GLaM shows that mixture of experts models keep getting better…
Google has built GLaM, a 1.2 trillion parameter mixture-of-experts (MoE) model. GLaM is a big language model, like GPT-3, but with a twist: it's sparse. MoE networks are really a collection of distinct sub-networks wired together, and when you run inference on an input only a few of those sub-networks activate. This means the parameter counts of sparse and dense networks aren't directly comparable (so you shouldn't think 1.2 trillion MoE parameters = ~6X larger than GPT-3).

Why MoE is efficient: "The experts in each layer are controlled by a gating network that activates experts based on the input data. For each token (generally a word or part of a word), the gating network selects the two most appropriate experts to process the data. The full version of GLaM has 1.2T total parameters across 64 experts per MoE layer with 32 MoE layers in total, but only activates a subnetwork of 97B (8% of 1.2T) parameters per token prediction during inference."
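To make that routing concrete, here's a minimal, illustrative sketch of top-2 gating over a set of experts in plain NumPy - this is not GLaM's implementation, and the expert functions, weight shapes, and softmax-over-two normalization are assumptions chosen to mirror the description above:

```python
import numpy as np

def top2_gate(token, gate_weights):
    # One score per expert; keep the two highest and softmax over just those two.
    scores = token @ gate_weights                      # shape: (num_experts,)
    top2 = np.argsort(scores)[-2:]                     # indices of the two best experts
    w = np.exp(scores[top2] - scores[top2].max())
    return top2, w / w.sum()

def moe_layer(token, gate_weights, experts):
    # Only the two selected experts actually run; their outputs are mixed by the gate.
    top2, mix = top2_gate(token, gate_weights)
    return sum(m * experts[i](token) for i, m in zip(top2, mix))

# Toy usage: 4 experts, each a random linear map; only 2 of them run per token.
rng = np.random.default_rng(0)
d_model, num_experts = 16, 4
experts = [lambda x, W=rng.normal(size=(d_model, d_model)): x @ W for _ in range(num_experts)]
gate_weights = rng.normal(size=(d_model, num_experts))
token = rng.normal(size=d_model)
print(moe_layer(token, gate_weights, experts).shape)   # (16,)
```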

How well does it work: In tests, GLaM exceeds or is on par with the performance of GPT-3 on 80% of zero-shot tasks and 90% of one-shot tasks. As with DeepMind's Gopher, part of the improved performance comes from the size of the training dataset - 1.6 trillion tokens, in this case.

Why this matters: For a few years, various Google researchers have been pursuing 'one model to learn them all' - that is, a single model that can do a huge number of diverse tasks. Research like GLaM shows that MoE networks might be one route to building such a model.
Read more: More Efficient In-Context Learning with GLaM (Google blog).

####################################################

DeepMind announces Gopher, a 280 billion parameter language model:
...AI research firm joins the three comma language club…
DeepMind has built Gopher, a 280 billion parameter language model. Gopher is the UK AI research company's response to GPT-3, and sees DeepMind publicly announce a multi-hundred billion parameter dense model, letting it join a club that also includes companies like Microsoft, Inspur, and Huawei.

What it does: During the research, DeepMind found areas "where increasing the scale of a model continues to boost performance – for example, in areas like reading comprehension, fact-checking, and the identification of toxic language," the company writes. "We also surface results where model scale does not significantly improve results — for instance, in logical reasoning and common-sense tasks."

How well it works: Gopher outperforms GPT-3 in a broad range of areas - some of the results likely come from the dataset it was trained on, called MassiveText. MassiveText "contains 2.35 billion documents, or about 10.5 TB of text" (representing about 2.3 trillion tokens), and DeepMind notes that by curating a subset of MassiveText for data quality, it was able to substantially improve performance.

Language models - good, if you handle with care: Along with analysis on bias and other potential impacts of Gopher, DeepMind dedicates a section of the paper to safety: "We believe language models are a powerful tool for the development of safe artificial intelligence, and this is a central motivation of our work," they write. "However language models risk causing significant harm if used poorly, and the benefits cannot be realised unless the harms are mitigated."
  Given the above, how can we mitigate some of these harms? "We believe many harms due to LMs may be better addressed downstream, via both technical means (e.g. fine-tuning and monitoring) and sociotechnical means (e.g. multi-stakeholder engagement, controlled or staged release strategies, and establishment of application specific guidelines and benchmarks). Focusing safety and fairness efforts downstream has several benefits:"
Read the blog post: Language modelling at scale: Gopher, ethical considerations, and retrieval (DeepMind blog).
  Read the paper: Scaling Language Models: Methods, Analysis & Insights from Training Gopher (PDF).

####################################################

Want to evaluate a Catalan language model? Use CLUB:
...You can only build what you can measure...
Researchers with the Barcelona Supercomputing Center have built the Catalan Language Understanding Benchmark (CLUB), a benchmark for evaluating NLP systems inspired by the (English language) GLUE test. The main curation rationale they followed "was to make these datasets both representative of contemporary Catalan language use, as well as directly comparable to similar reference datasets from the General Language Understanding Evaluation (GLUE)".

What's in the CLUB? CLUB includes evals for Part-of-Speech Tagging (POS), Named Entity Recognition and Classification (NERC), Catalan textual entailment and text classification, and Extractive Question Answering (which involved work like translating and creating new Catalan datasets - XQuAD-Ca, VilaQuAD and ViquiQuAD).

Why CLUB matters: There's a phrase in business - 'you can't manage what you can't measure'. CLUB will make it easier for researchers to develop capable Catalan-language systems.
  Read more: The Catalan Language CLUB (arXiv).

####################################################

Deep learning unlocks a math breakthrough:
...The era of Centaur Math cometh...
DeepMind researchers have used an AI system to help mathematicians make two breakthroughs in topology and representation theory. The result provides yet more evidence (following various AlphaFold-inspired projects) that humans+AI systems can discover things that neither could discover on their own.

What they did: The essential idea is quite simple: get a mathematician to come up with a hypothesis about a given function, then build an ML model to estimate that function over a particular distribution of data, then have the mathematician evaluate the result and use their intuition to guide further experimentation. The best part? "The necessary models can be trained within several hours on a machine with a single graphics processing unit", DeepMind says.
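As a rough illustration of that loop (and only that - this is not DeepMind's code), the sketch below fits a simple regressor to synthetic data standing in for invariants of some mathematical object, then uses permutation importance to surface which inputs drive the prediction; in the paper, that kind of attribution signal is what pointed the mathematicians toward the quantities worth reasoning about. The data, model choice, and 'invariants' here are all made up for the example:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))   # 8 hypothetical invariants per object
# Hidden relationship the "mathematician" is trying to conjecture about:
y = 2.0 * X[:, 3] - 0.5 * X[:, 5] + rng.normal(scale=0.1, size=5000)

# Step 1: learn to predict the target quantity from the invariants.
model = GradientBoostingRegressor().fit(X, y)

# Step 2: attribution suggests which invariants matter most, guiding the next
# round of conjecture and proof by the human.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"invariant {i}: importance {imp.importances_mean[i]:.3f}")
```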

Why this matters: We're entering a world where humans will collaborate with AI systems to synthesize new insights about reality. Though DeepMind's system has limitations ("it requires the ability to generate large datasets of the representations of objects and for the patterns to be detectable in examples that are calculable," DeepMind notes), it sketches out what the future of scientific discovery might look like.
  Read the paper: Advancing mathematics by guiding human intuition with AI (Nature, PDF).
  Read more: Exploring the beauty of pure mathematics in novel ways (DeepMind blog).

####################################################

Anthropic bits and pieces:
…(As a reminder, my dayjob is at Anthropic, an artificial intelligence safety and research company)…
We've just released our first paper, focused on simple baselines and investigations: A General Language Assistant as a Laboratory for Alignment. You can read it at arXiv here.

####################################################

Tech Tales:

Real and Imagined Gains
[DoD Historical archives, 2040]

They got trained in a pretty cruel way, back then - they'd initialize the agents and place them in a room, and the room had a leak of a poisonous substance that had a certain density and a certain spread pattern. The agents had to work out how not to asphyxiate by doing fairly complicated intuitively-driven analysis of the environment. If they were able to give a correct guess at the spread pattern (and avoid it) before the room filled up, they moved on to the next stage. If they weren't able to, they asphyxiated and died - as in, felt their computational budget get cut, got put in cold storage, probably never booted up again.
  (One curious by-product of the then-popular AI techniques was that the agents would sometimes seek to preserve each other - in one case, two agents 'kissed' each other so they could more efficiently exchange their air reserves, while the room filled; unfortunately, as their attention was allocated to the act of kissing, they did not complete the requisite calculations in time, and both died.)

Things that inspired this story: Kurt Vonnegut; reinforcement learning; environmental design; moral patienthood.


Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

