Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
How much capability emergence is in a language model? Aka, how long is a piece of string:
…The capabilities overhang just gets more and more significant everywhere you look…
Here's a lovely blog by Jason Wei that pulls together 137 examples of 'emergent abilities of large language models'. Emergence is a phenomenon seen in contemporary AI research, where a model will be really bad at a task at smaller scales, then, past some scale threshold, go through a discontinuous change that leads to significantly improved performance.
Emergence is a big deal because a) it says you get pretty powerful gains from scaling models and b) it's inherently unpredictable, so large-scale models tend to have 'hidden' capabilities and safety issues as a consequence of emergence. This blog shows a bunch of examples of emergence spread across a bunch of different language models (GPT-3, LaMDA, PaLM, Chinchilla, Gopher).
Types of emergence: I won't list all 137, but some highlights: arithmetic, Swahili-English proverbs, college medicine, conceptual physics, high school microeconomics, Hinglish toxicity, word unscrambling, and more.
Why this matters - Houston, we have a Capability Overhang problem: Because language models have a large capability surface, these cases of emergent capabilities are an indicator that we have a 'capabilities overhang' - today's models are far more capable than we think, and the techniques available for exploring these models are very immature. We only know about these cases of emergence because people built benchmark datasets and tested models on them. What about all the capabilities we don't know about because we haven't thought to test for them? There are rich questions here about the science of evaluating the capabilities (and safety issues) of contemporary models.
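To make the 'discontinuous change' idea concrete, here's a toy sketch of how you might flag an emergent jump in a benchmark curve. The data, thresholds, and the `find_emergence` helper are all illustrative assumptions of mine, not anything from Wei's post: the point is just that the score sits near the random baseline across several scales and then jumps.

```python
# Illustrative sketch: flagging an 'emergent' jump in a benchmark score as
# model scale grows. The numbers below are made up for demonstration.

def find_emergence(scales, scores, baseline=0.25, jump=0.2):
    """Return the first scale where the score leaps well above the
    random baseline, or None if the curve improves smoothly."""
    for prev, curr, scale in zip(scores, scores[1:], scales[1:]):
        if prev <= baseline + 0.05 and curr - prev >= jump:
            return scale
    return None

# Hypothetical accuracy on a 4-way multiple-choice task (random = 0.25)
scales = [1e8, 1e9, 1e10, 1e11]    # parameter counts
scores = [0.24, 0.26, 0.27, 0.61]  # near-random, near-random... then a jump

print(find_emergence(scales, scores))  # prints 1e+11
```

The unsettling part, of course, is that you can only run this after you've thought to build the benchmark in the first place.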
Read more: 137 emergent abilities of large language models (Jason Wei blog).
####################################################
DeviantArt adds generative art to its website, but tries to respect human artists while doing so:
…Artists VS AI Artists VS AI Models, and on and on the controversy goes…
DeviantArt, an ancient and still thriving art website, has built DreamUp, a generative AI tool based on the popular Stable Diffusion model. In doing so, it is trying to strike a balance between respecting the human artists on its platform and letting people still generate art - by default, all 'deviations' (outputs of DreamUp) will be automatically labeled as not suitable for downstream use in other AI training datasets.
What does DeviantArt think artists want? Artists have, understandably, had mixed views about image generation. Some of them have adopted the technology and fooled around with it and integrated it into their practice. Others view the technology as inherently bad and threatening to their livelihoods. DeviantArt is clearly trying to navigate those concerns with its approach to DreamUp. "DeviantArt is the only platform giving creators the ability to tell third-party AI datasets and models whether or not their content can be used for training. This is a protection for creators to help them safeguard their content across the web," DeviantArt says.
Why this matters: The intersection of AI and art is a messy area; human emotions and soul colliding with the envisioned curve-fitting extrapolations of alien machines. Here, DeviantArt is trying to strike a balance between giving human artists agency over their work and integrating generative art into its platform.
Read more: Create AI-Generated Art Fairly with DreamUp (DeviantArt blog).
####################################################
Demoscene AI: arXiv adds interactive demo support:
…HuggingFace + arXiv partnership shows the future…
arXiv has partnered with HuggingFace to incorporate live demos into the popular paper preprint repository. This means that when you browse papers on arXiv, you might scroll down and see an option to explore a demo of the model under discussion on 'Hugging Face Spaces'.
Who cares about demos? "Demos allow a much wider audience to explore machine learning as well as other fields in which computational models are built, such as biology, chemistry, astronomy, and economics," arXiv writes in a blog post. "The demos increase the reproducibility of research by enabling others to explore the paper’s results without having to write a single line of code."
Why this matters: In my experience, a demo is worth about ten thousand words, or sixty minutes of talking. Concretely, I've found if I demo something (e.g., Stable Diffusion, a language model, something in a Colab notebook, etc) I can get a point across in five minutes that'd otherwise take an hour or more, and the demo is way more memorable and engaging. All hail the era of didactic demoscene AI.
Read more: Discover State-of-the-Art Machine Learning Demos on arXiv (arXiv blog).
####################################################
Real world reinforcement learning: DeepMind use RL to more efficiently cool buildings:
…First data centers, now offices - the genies are here, and they want to lower your electricity bill!…
DeepMind and building management company Trane have used a reinforcement learning agent to efficiently cool some buildings, yielding reductions in cooling energy use of between 9% and 13%. This is a real world application of reinforcement learning (along with other recent hits, like RL systems designing more efficient chips, and stabilizing the plasma in prototype fusion plants), and shows how a technology which ~ten years ago was most known for beating Atari games has matured to the point we're putting it in charge of buildings full of people.
What they did: The DeepMind system uses RL "to provide real-time supervisory setpoint recommendations to the chiller plant… in two commercial buildings". DeepMind constructs its approach in a similar way to the algorithm used to cool Google data centers and calls the algorithm 'BCOOLER'. BCOOLER does a daily policy re-optimization, so it continually improves. There's a lot of detail in the paper about the precise implementation details, so if you have a building and want to cool it, read the paper.
In tests, DeepMind found that BCOOLER "performs better in some conditions than others" - it did well when the outside temperature was cold and load was lower, and did less well when temperatures were high and load was higher. This makes intuitive sense - when things are hot outside "the equipment are running close to their max capacity, and there is less room for BCOOLER to make intelligent decisions". Interestingly, BCOOLER learned a policy that was pretty robust to sensor miscalibration and learned how to recalibrate them, which is a nice case of 'capability emergence' seen in a real-world RL system.
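The core pattern - a learned policy proposing setpoints, wrapped in hard safety constraints - can be sketched in a few lines. This is a minimal sketch of the supervisory-control idea, not DeepMind's code; the setpoint bounds, the `toy_policy` stand-in, and the observation format are all illustrative assumptions of mine.

```python
# A minimal sketch of a supervisory RL control loop for chiller setpoints.
# All names, bounds, and the policy here are illustrative, not from the paper.

SAFE_MIN, SAFE_MAX = 5.0, 9.0  # hypothetical chilled-water setpoint bounds (Celsius)

def recommend_setpoint(policy, observation):
    """Ask the policy for a setpoint, then clamp it to the safe range so a
    bad recommendation can never push the plant outside its limits."""
    raw = policy(observation)
    return min(max(raw, SAFE_MIN), SAFE_MAX)

def toy_policy(observation):
    # Stand-in for a learned policy: run the water cooler when load is high.
    outside_temp, load = observation  # load normalized to [0, 1]
    return SAFE_MAX - 2.0 * load

obs = (30.0, 0.5)  # hot day, moderate load
print(recommend_setpoint(toy_policy, obs))  # prints 8.0
```

The clamping step is the important design choice: in a building full of people, you bound the agent's action space with conventional engineering constraints and let the learned policy optimize only within them.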
What comes next - buildings, all watched over by machines of patient and cooling grace: In the future, DeepMind wants to explore versions of BCOOLER that get more sensor inputs and are trained on simulations of different facilities. "Another direction is to focus on the generalizability of the algorithm, because large scale impact requires deployment to new facilities without significant engineering, modeling, and problem definition work per facility." Broadly speaking, this paper is a great example of how I expect AI to begin changing the world in a quiet and significant way - all around us, things will become quietly more efficient and imbued with certain sub-sentient agentic intelligences, diligently working away in the service of humanity. How nice!
Read more: Controlling Commercial Cooling Systems Using Reinforcement Learning (arXiv).
####################################################
AlphaZero learns in a surprisingly human way:
…DeepMind's AI system learns chess in a superficially similar way to people…
Researchers with DeepMind and Google, along with a former Chess grandmaster, have published a paper analyzing how DeepMind's 'AlphaZero' system learns to play chess. "Although the system trains without access to human games or guidance, it appears to learn concepts analogous to those used by human chess players," they write.
How AlphaZero learns, versus how humans learn: To study the differences, they look at around 100,000 human games pulled from the ChessBase archive "and computed concept values and AlphaZero activations for every position in this set." In tests, they find that AlphaZero learns about chess in a similar way to people - "first, piece value is discovered; next comes an explosion of basic opening knowledge in a short time window," they write. "This rapid development of specific elements of network behavior mirrors the recent observation of “phase transition”–like shifts in the inductive ability of large language models."
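The probing methodology here - regress human chess concepts onto network activations and see how decodable they are - is worth sketching. The code below is an illustrative toy with synthetic data, not the paper's pipeline; 'material balance' as the concept, the layer size, and the R^2 scoring are my assumptions.

```python
# Illustrative sketch of concept probing: fit a linear map from a network's
# internal activations to a human chess concept (e.g. material balance) and
# measure how linearly decodable the concept is. Data here is synthetic.

import numpy as np

rng = np.random.default_rng(0)

n_positions, dim = 1000, 64
activations = rng.normal(size=(n_positions, dim))

# Pretend the concept is a linear function of the activations plus noise,
# as a stand-in for real AlphaZero layer activations and ChessBase positions.
true_w = rng.normal(size=dim)
concept = activations @ true_w + 0.1 * rng.normal(size=n_positions)

# Linear probe: least-squares fit, then R^2 as the decodability score.
w, *_ = np.linalg.lstsq(activations, concept, rcond=None)
pred = activations @ w
r2 = 1 - np.sum((concept - pred) ** 2) / np.sum((concept - concept.mean()) ** 2)
print(round(r2, 3))  # close to 1.0, since the concept is linear by construction
```

Run a probe like this on checkpoints across training and you can watch a concept 'switch on' - which is how you get curves like the piece-value-then-openings progression described above.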
One puzzling behavior: There's one way in which AlphaZero might differ from humans - AlphaZero seems to start out by considering a broad range of opening moves, then narrowing down from there, whereas humans seem to start by considering a small range of opening moves, then broadening over time. This could either be due to differences in how AlphaZero and humans approach the game, or it could potentially be an artifact of the datasets used to do the study.
Why this matters: AI systems are somewhat inscrutable but, as I regularly write, are being deployed into the world. It's interesting to know whether these systems display symptoms of intelligence that are human-like or alien-like; here, it seems like a sufficiently big neural net can learn Chess from a blank slate in a remarkably similar way to people.
Read more: Acquisition of chess knowledge in AlphaZero (PNAS).
####################################################
What is smart, strategic, and able to persuade you to work against your own best interests?
…CICERO, and it's made by Facebook!…
Facebook researchers have built CICERO, an AI system that can play the famous turn-friends-into-bitter-enemies game 'Diplomacy', and which can talk to players via a language model. CICERO builds on an earlier set of Facebook-built models named 'Diplodocus' which played Diplomacy at an expert level, albeit without conversing with humans.
How well CICERO did: "CICERO demonstrated this by playing on webDiplomacy.net, an online version of the game, where CICERO achieved more than double the average score of the human players and ranked in the top 10 percent of participants who played more than one game," Facebook wrote.
Gift of the Golden Silicon Tongue: CICERO's main advantage comes from its ability to effectively utilize a language model to reach agreements with other players, convincing them to form partnerships and so on. "CICERO is so effective at using natural language to negotiate with people in Diplomacy that they often favored working with CICERO over other human participants." The language model is comparatively modest - a 2.7 billion parameter model pre-trained on internet text and fine-tuned on over 40,000 human games on webDiplomacy.net.
Why this matters - a nice thing and a scary thing: CICERO is another achievement showing how AI systems can perform feats of strategic reasoning that experts consider very difficult. It's also an example of the sorts of capabilities which some AI researchers are afraid of - an AI system that is a) better than humans at a hard skill and b) able to persuade humans to go along with it, is basically the origin story of lots of sci-fi stories that end badly for humans. On the other hand, establishing evidence about these capabilities is probably one of the best ways to study them in-situ and accurately calibrate on the severity of the safety problem.
Read more: CICERO: An AI agent that negotiates, persuades, and cooperates with people (Facebook AI Research).
####################################################
Tech Tales:
God Complex
[Earth, 2028].
The Catholic Church was at first skeptical that it could use artificial intelligence to revitalize its religion, but after the success of its VR confessional (replete with a priest avatar based on a generative model finetuned on Catholic doctrine), it changed its mind. Thus was born 'God Complex'.
The idea behind God Complex was that it would live on people's phones and display appropriate sections from the Bible alongside any text, images, or videos that appeared on the phone. If you were taking a photo of an apple tree, it might display a pop-up about (or, later, speak about) the Garden of Eden and forbidden fruit. If you were watching a city getting leveled by missiles, it might tell you about the story of Sodom and Gomorrah.
It was all just another form of Reality Collapse, and it blended in with the various other 'ideological AI' projects that were in fashion at the time. But the Catholics were pleased - for the first time in decades, young, atheist children were converting over to Catholicism, swayed by the interactivity of God Complex, and competing with each other to find what they called 'Easter Eggs' - certain things you could photograph or say to your phone to get God Complex to quote an unexpected thing.
'Hey guys I just discovered this hack for God Complex that is guaranteed to get you a Rare Verse every time'.
'Listen, y'all, run, don't walk to your nearest Goodwill and pick up these clothing items, then take a selfie. I won't spoil it, but God Complex has a surprise for you'.
'Okay gang I've gotta tell you, I've been ADDICTED to playing this game with God Complex turned on - it triggers so many cool things and I had no idea about some of them - you even get some of Revelations!'.
The success of God Complex ultimately led to a schism in the Church, though - a faction broke off, keen to build an app they called Angels Among Us, which would fill the earth with VR angels, giving users an even closer connection to religion. Some called this blasphemy and others called this the only way to reach a youth rendered jaded by God Complex and eager for something even more entrancing.
Things that inspired this story: When religion meets gamification and social media incentives; Theistic Attention Harvesting; the role of religion in a secular, wired world.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf