Import AI 300: Google's Bitter Lesson; DOOM AGI; DALL-E's open source competition StableDiffusion

Once AI systems can retrieve from external knowledge bases, they will become true cyberlibrarians

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Google makes its robots massively smarter by swapping out one LM for a different, larger LM:
…Maybe language models really can work as world models…
Earlier this year, Google showed how it was able to use a large language model to significantly improve the performance and robustness of robots carrying out tasks in the physical world. The 'SayCan' approach (Import AI 291) basically involved taking the affordances outputted by on-robot AI systems and pairing them with a language model, looking at the high-likelihood actions generated by both systems (the on-robot models as well as the LM), then taking actions accordingly. The approach is both simple and effective. Now, Google has found a way to make the approach much, much more effective. The secret? Swapping out one LM for a far larger one.
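To make the mechanism concrete, here's a minimal sketch of the SayCan-style selection step (the skill names and scores below are made-up stand-ins for real model outputs, not Google's actual code): the LM scores how useful each candidate skill is for the instruction, the affordance model scores how likely the skill is to succeed from the robot's current state, and the robot executes whichever skill scores highest on the combination.

```python
# Illustrative sketch of SayCan-style skill selection; not Google's code, and the
# numbers below are made-up stand-ins for real model outputs.
import math

# Candidate low-level skills the robot knows how to execute.
SKILLS = ["find a sponge", "pick up the sponge", "go to the table", "put down the sponge"]

# Stand-in for the LM: log P(skill is a useful next step | instruction, history).
LM_LOGPROB = {"find a sponge": -0.4, "pick up the sponge": -2.1,
              "go to the table": -1.5, "put down the sponge": -3.0}

# Stand-in for the robot's affordance/value model: P(skill succeeds | current state).
AFFORDANCE = {"find a sponge": 0.9, "pick up the sponge": 0.2,
              "go to the table": 0.8, "put down the sponge": 0.1}

def choose_next_skill() -> str:
    # SayCan combines "is it useful?" (LM) with "can I actually do it?" (affordance);
    # here that's a sum in log space, and the highest-scoring skill gets executed.
    return max(SKILLS, key=lambda s: LM_LOGPROB[s] + math.log(AFFORDANCE[s]))

print(choose_next_skill())  # -> "find a sponge"
```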

What Google did: Google upgraded its robots by pairing them with its large-scale 540B parameter 'PaLM' language model, where the previous system used the 137B parameter 'FLAN' model. The larger model gives the robots significantly improved performance: "The results show that the system using PaLM with affordance grounding (PaLM-SayCan) chooses the correct sequence of skills 84% of the time and executes them successfully 74% of the time, reducing errors by half compared to FLAN," Google writes.

The bitter lesson - bigger is better: Though FLAN was finetuned to be good at instruction following, PaLM beats it, likely as a consequence of scale. "The broader and improved dataset for PaLM may make up for this difference in training," Google writes. This is significant as it's another sign that simply scaling up models lets them develop a bunch of capabilities naturally which beat human-engineered, finetuned approaches - chalk another point up in favor of silicon minds versus mushy minds.

   Read more: Do As I Can, Not As I Say: Grounding Language in Robotic Affordances (arXiv, read the 'v2' version).

####################################################

DOOM programmer Carmack starts AGI company:
…Keen Technologies to do AGI via 'mad science'...
"It is a truth universally acknowledged, that a man in possession of a good fortune, must be in want of an AGI company,” wrote Jane 'Cyber' Austen, and she's right: AGI companies are now proliferating left and right, and the latest is 'Keen Technologies', an AGI startup from John Carmack, the famed programmer behind the DOOM games. Keen has raised an initial seed round of $20 million (not much in the scheme of AI startups) and its mission, per Carmack, is "AGI or bust, by way of Mad Science".

Why this matters: One of the clues for impending technological progress is that a bunch of extremely smart, accomplished people go and all stack their proverbial career poker chips in the same place. That's been happening in AI for a while, but the fact it's now drawing attention from established experts in other fields (in the case of Carmack, computer graphics and general programming wizardry) is a further indication of potential for rapid progress here. 
   Read more in Carmack's tweet thread (Twitter).

####################################################

Want GPT2 to know about Covid and Ukraine? So does HuggingFace:
…Online language modeling means GPT2 and BERT are going to get better…
HuggingFace plans to continuously train and release language models (e.g., BERT and GPT-2) on new Common Crawl snapshots. This is a pretty useful community service; developers tend to pull whatever off-the-shelf models they can when starting projects, and most publicly available GPT-2 and BERT models are essentially amber-frozen records up to 2020 or so (sometimes 2021), so things like COVID or the Ukraine conflict or the current global financial meltdown elude them. By having more current models, developers can deploy things which are more accurate and appropriate to current contexts.
    Read the HuggingFace tweet thread here (Tristan Thrush, Twitter).
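From a developer's perspective, the appeal is that a fresher checkpoint should drop in with no code changes beyond the model id. Here's a minimal sketch, assuming the refreshed models get published on the Hugging Face Hub (the "olm/gpt2-latest" id below is a hypothetical placeholder, not a confirmed model name):

```python
# Sketch only: "olm/gpt2-latest" is a hypothetical placeholder for whatever
# checkpoint a continuously-trained release would actually be published under.
from transformers import AutoTokenizer, AutoModelForCausalLM

# The stock GPT-2 checkpoint: trained on pre-2020 web text, so it knows nothing
# about COVID, the war in Ukraine, and so on.
frozen_tok = AutoTokenizer.from_pretrained("gpt2")
frozen_lm = AutoModelForCausalLM.from_pretrained("gpt2")

# A continuously retrained checkpoint would slot in the same way; only the id changes.
fresh_tok = AutoTokenizer.from_pretrained("olm/gpt2-latest")        # hypothetical id
fresh_lm = AutoModelForCausalLM.from_pretrained("olm/gpt2-latest")  # hypothetical id
```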

####################################################

Want to use China's good open source language model? You'll need to agree not to attack China, first:
…Terms and conditions with a hint of geopolitics…
If you want to access the weights of GLM-130B (Import AI #299), a good new language model from Tsinghua University, you'll need to first agree that "you will not use the Software for any act that may undermine China's national security and national unity, harm the public interest of society, or infringe upon the rights and interests of human beings" - that's according to the application form people fill out to get the model weights. 
   Furthermore, "this license shall be governed and construed in accordance with the laws of People’s Republic of China. Any dispute arising from or in connection with this License shall be submitted to Haidian District People's Court in Beijing."

  Why this matters: IDK dude. I spend a lot of time in this newsletter writing about the geopolitical implications of AI. This kind of wording in a license for a big model just does my job for me. 
   Read more: GLM-130B Application Form (Google Form).

####################################################

DALL-E gets semi-open competition: Stable Diffusion launches to academics:
…Restrictions lead to models with fewer restrictions. The ratchet clicks again…
A bunch of researchers have come together to build an image model like DALL-E2 but with fewer restrictions and designed with broader distribution in mind. They also have access to a really big GPU cluster. That's the tl;dr on 'Stable Diffusion', a new family of models launched by AI research collective Stability.ai. They're making the weights available to academics via an access scheme and are planning to do a public release soon. 

What's interesting about Stable Diffusion: This model is basically a natural consequence of the restrictions other companies have placed on image models (ranging from Google which built Imagen but hasn't released it, to OpenAI which built DALL-E2, then released it with a bunch of filters and prompt-busting bias interventions). I generally think of this as being an example of 'libertarian AI' - attempts to create restrictions on some part of model usage tend to incentivize the creation of things without those restrictions. This is also, broadly, just what happens in markets. 

Big compute - not just for proprietary stuff: "The model was trained on our 4,000 A100 Ezra-1 AI ultracluster over the last month as the first of a series of models exploring this and other approaches," Stability.ai writes. Very few labs have access to even a thousand GPUs, and 4,000 puts Stability.ai in rarefied company, on par with some of the largest labs.

Aesthetic data: "The core dataset was trained on LAION-Aesthetics, a soon to be released subset of LAION 5B. LAION-Aesthetics was created with a new CLIP-based model that filtered LAION-5B based on how “beautiful” an image was, building on ratings from the alpha testers of Stable Diffusion," they write.
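Here's a rough sketch of what that kind of CLIP-based aesthetic filtering can look like, under the assumption (not stated in the announcement) that a small linear head maps CLIP image embeddings to a predicted "beauty" rating and images below a score threshold get dropped; the model id, the head, and the threshold are all illustrative, not the actual LAION-Aesthetics pipeline.

```python
# Sketch of CLIP-based aesthetic filtering; names and numbers are illustrative
# assumptions, not the actual LAION-Aesthetics pipeline.
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical aesthetic head: a linear probe that maps a CLIP image embedding to
# a 1-10 human rating (left untrained here, purely for illustration).
aesthetic_head = torch.nn.Linear(clip.config.projection_dim, 1)

def aesthetic_score(image) -> float:
    # Embed the image with CLIP, normalize, then predict a rating with the probe.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        emb = clip.get_image_features(**inputs)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        return aesthetic_head(emb).item()

def keep(image, threshold: float = 6.0) -> bool:
    # Keep only images whose predicted rating clears the (illustrative) threshold.
    return aesthetic_score(image) >= threshold
```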

Why this matters: Generative models are going to change the world in a bunch of first- and second-order ways. By releasing Stable Diffusion (and trying to do an even more public release soon), Stability.ai is able to create a better base of evidence about the opportunities and risks inherent to model diffusion.
   "This is an experiment in safe and community-driven publication of a capable and general text-to-image model. We are working on a public release with a more permissive license that also incorporates ethical considerations," Stability.ai writes. 
   Read more: Stable Diffusion launch announcement (Stability.ai).
   Apply for academic access here: Research and Academia (Stability.ai).
   Get the weights from here once you have access (GitHub).
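As a rough guide to what usage might look like once you have access, here's a hedged sketch using Hugging Face's diffusers library; the checkpoint id is an assumption, and gated weights require accepting the license terms and authenticating before the download will work.

```python
# Sketch of text-to-image generation with a Stable Diffusion checkpoint via the
# diffusers library; the checkpoint id below is assumed, and access-gated weights
# require accepting the license and logging in (e.g. `huggingface-cli login`) first.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumed checkpoint id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```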

####################################################

Tech Tales:

Superintelligence Captured by Superintelligence

After we figured out how to build superintelligence, it wasn't long before the machines broke off from us and started doing their own thing. We'd mostly got the hard parts of AI alignment right, so the machines neither eradicated nor domesticated the humans, nor did they eat the sun.

They did, however, start to have 'disagreements' which they'd settle in ways varying from debate through to taking kinetic actions against one another. I guess even superintelligences get bored. 

Fortunately, they had the decency to do the kinetic part on the outer edges of the solar system, where they'd migrated a sizable chunk of their compute to. At night, we'd watch the livefeeds from some of the space-based telescopes, staring in wonder as the machines resolved arguments through carefully choreographed icerock collisions. It was as though they'd brought the stars to the very edge of the system, and the detonations could be quite beautiful.

They tired of this game eventually and moved on to something more involved: capturing. Now, the machines would seek to outsmart each other, and the game - as far as we could work out - was a matter of sending enough robots to the opponents' central processing core that you could put a probe in and temporarily take it over. The machines had their own laws they followed, so they'd always retract the probe eventually, giving the losing machine its mind back.

Things that inspired this story: Boredom among aristocrats; perhaps the best competition is a game of mind against mind; figuring out how machines might try to sharpen themselves and what whetstones they might use.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

