Import AI #272: AGI-never or AGI-soon?; simulating stock markets; evaluating unsupervised RL

If each individual parameter of every machine learning model in existence were rendered as a 1cm by 1cm cube, how much space would they all take up?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

AI apocalypse or insecure AI?
...Maybe we're worrying about the wrong stuff - Google engineer...
A Google engineer named Kevin Lacker has written a blog post distilling his thoughts about the risks of artificial general intelligence. His view? Worrying about AGI isn't that valuable, because it's unlikely 'that AI will make a quantum leap to generic superhuman ability'; instead, we should worry about very powerful narrow AI. That's because "when there’s money to be made, humans will happily build AI that is intended to be evil", so we should focus our efforts on better computer security, on the assumption that at some point someone will develop an evil, narrow AI that tries to make money.
  Read more: Thoughts on AI Risk (Kevin Lacker, blog).

####################################################

Want to build AGI - just try this!
...Google researcher publishes a 'consciousness' recipe…
Eric Jang, a Google research scientist, has published a blogpost discussing how we might create smart, conscious AI systems. The secret? Use the phenomenon of large-scale pre-training to create clever systems, then use reinforcement learning (with a sprinkle of multi-agent trickery) to get them to become conscious. The prior behind the post is basically the idea that "how much your model generalizes is directly proportional to how fast you can push diverse data into a sufficiently high-capacity model."

Pre-training, plus RL, plus multi-agent training = really smart AI: Jang's idea is to reformulate how we train systems, so that "instead of casting a sequential decision making problem into an equivalent sequential inference problem, we construct the “meta-problem”: a distribution of similar problems for which it’s easy to obtain the solutions. We then solve the meta-problem with supervised learning by mapping problems directly to solutions. Don’t overthink it, just train the deep net in the simplest way possible and ask it for generalization!"
  Mix in some RL and multi-agent training to encourage reflexivity, and you get something that, he thinks, could be really smart: "What I’m proposing is implementing a “more convincing” form of consciousness, not based on a “necessary representation of the self for planning”, but rather an understanding of the self that can be transmitted through language and behavior unrelated to any particular objective," he writes. "For instance, the model needs to understand not only how a given policy regards itself, but how a variety of other policies might interpret the behavior of that policy, much like funhouse mirrors that distort one’s reflection."
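
To make the "map problems directly to solutions" idea concrete, here's a minimal toy sketch (my own PyTorch illustration, not code from Jang's post): we sample a distribution of trivially solvable "reach the goal" problems, where the correct action is known for free, and simply train a network to map each problem to its solution with supervised learning.

# A minimal sketch (my own toy illustration, not Jang's code) of the
# "meta-problem" idea: instead of solving one sequential decision problem,
# sample many similar problems whose solutions are easy to obtain, then
# train a network to map problem -> solution with plain supervised learning.
import torch
import torch.nn as nn

def sample_problem_and_solution(batch_size):
    # Toy "problem": reach a random goal from a random start in one step.
    # The optimal "solution" (action) is just the difference vector, so
    # labels are free -- that's what makes the meta-problem easy to generate.
    start = torch.randn(batch_size, 2)
    goal = torch.randn(batch_size, 2)
    problem = torch.cat([start, goal], dim=-1)
    solution = goal - start
    return problem, solution

policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(2000):
    problem, solution = sample_problem_and_solution(256)
    loss = nn.functional.mse_loss(policy(problem), solution)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time we "just ask" the trained net to handle unseen start/goal
# pairs, rather than running an explicit planner or RL loop.

The bet in Jang's post is that if the problem distribution is diverse enough and the model is big enough, this kind of supervised "asking" generalizes to much harder settings.
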
  Read more: Just Ask For Generalization (Eric Jang, blogpost).

####################################################

HuggingFace: Here's why big language models are bad:
...Gigantic 'foundation models' could be a blind alley…
Here's an opinion piece from Julien Simon, 'chief evangelist' at NLP startup HuggingFace, in which he argues that large language models are resource-intensive and that researchers should spend more time on smaller models. The gist of his critique is that large language models are very expensive to train, have a non-trivial environmental footprint, and their capabilities can frequently be matched by far smaller, more specific, fine-tuned models.
  The pattern of ever-larger language models "leads to diminishing returns, higher cost, more complexity, and new risks", he says. "Exponentials tend not to end well."
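
To ground the "smaller, tuned models" alternative, here's a minimal sketch using HuggingFace's own transformers library; the small distilled checkpoint named below is just one example of the kind of task-specific model Simon has in mind, not one he names in the piece.

# A minimal sketch of the "smaller, tuned model" approach: for many narrow
# tasks, a small distilled checkpoint (or one you fine-tune yourself) is
# enough -- no billion-parameter model required. The model below is a ~66M
# parameter sentiment classifier from the HuggingFace hub; swap in a model
# fine-tuned for your own task.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Large language models are not always necessary."))
# -> [{'label': ..., 'score': ...}]
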

Why this matters: I disagree with some of the arguments here, in that I think large language models likely have some real scientific, strategic, and economic uses which are unlikely to be matched by smaller models. On the other hand, the 'bigger is better' phenomenon could be dragging the ML community into a local minimum, where we spend too many resources on training big models and not enough on creating refined, specialized ones.
   Read more: Large Language Models: A New Moore's Law? (HuggingFace, blog).

####################################################

Simulating stock markets with GANs:
...J.P. Morgan tries to synthesize the unsynthesizable…
In Darren Aronofsky’s film ‘Pi’, a humble math-genius hero drives himself mad by trying to write an algorithm that can synthesize and predict the stock market. Now, researchers with J.P. Morgan and the University of Rome are trying the same thing - but they’ve got something Aronofsky didn’t think of - a gigantic neural net.

What they did: This research proposes building “a synthetic market generator based on Conditional Generative Adversarial Networks (CGANs)”, trained on real historical data. The CGAN plugs into a system that has three other components - historical market data, a (simulated) electronic market exchange, and one or more experimental agents that are trying to trade on the virtual market. “A CGAN-based agent is trained on historical data to emulate the behavior resulting from the whole set of traders,” they write. “It analyzes the order book entries and mimics the market behavior by producing new limit orders depending on the current market state”.

How well does it work? They’re able to show that they can use the CGAN architecture to “generate orders and time-series with properties resembling those of real historical traces”, and that this outperforms systems built with interactive agent-based simulators (IABSs).

What does this mean? It’s not clear that approaches like this can help much with trading itself, but they can likely help with developing and prototyping novel trading strategies against a simulated market that has a decent chance of reacting the way the real one would.
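
For intuition, here's a minimal conditional-GAN sketch in the spirit of the paper (my own toy PyTorch illustration; the dimensions and architecture are assumptions, not the authors' design): the generator is conditioned on a summary of the current order-book state and emits the features of the next limit order, while the discriminator judges (order, state) pairs against real historical ones.

# Toy conditional GAN for order generation: generator maps (noise, market
# state) -> order features (e.g. side, price offset, size); discriminator
# scores (order, state) pairs as real or generated.
import torch
import torch.nn as nn

STATE_DIM, ORDER_DIM, NOISE_DIM = 32, 3, 16  # assumed toy dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ORDER_DIM),
        )

    def forward(self, noise, state):
        return self.net(torch.cat([noise, state], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ORDER_DIM + STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, order, state):
        return self.net(torch.cat([order, state], dim=-1))

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_orders, states):
    batch = states.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake_orders = G(noise, states)

    # Discriminator: real (order, state) pairs vs generated ones.
    d_loss = bce(D(real_orders, states), torch.ones(batch, 1)) + \
             bce(D(fake_orders.detach(), states), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: produce orders the discriminator accepts as real.
    g_loss = bce(D(fake_orders, states), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
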

   Read more: Towards Realistic Market Simulations: a Generative Adversarial Networks Approach (arXiv).

####################################################

Editing satellite imagery - for culture, as well as science:
...CloudFindr lets us make better scientific movies…
Researchers with the University of Illinois at Urbana-Champaign have built 'CloudFindr', software for "labeling pixels as 'cloud' or 'non-cloud'" from a single-channel Digital Elevation Model (DEM) image. Software like CloudFindr makes it easier for people to automatically edit satellite data. "The aim of our work is not data cleaning for purposes of data analysis, but rather to create a cinematic scientific visualization which enables effective science communication to broad audiences," they write. "The CloudFindr method described here can be used to algorithmically mask the majority of cloud artifacts in satellite-collected DEM data by visualizers who want to create content for documentaries, museums, or other broad-reaching science communication mediums, or by animators and visual effects specialists".

Why this matters: It's worth remembering that editing reality is sometimes (perhaps mostly?) useful. We spend a lot of time here writing about surveillance and the dangers of synthetic imagery, but it's worth focusing on some of the positives - here, a method that makes it easier to dramatize aspects of our changing climate.
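
For a sense of the underlying task, here's a minimal sketch (my own illustration; the paper's actual network is more involved): a small convolutional net labels each pixel of a single-channel DEM tile as cloud or non-cloud, and the predicted mask is used to blank out cloud artifacts before rendering.

# Per-pixel cloud masking sketch for a single-channel DEM tile.
import torch
import torch.nn as nn

cloud_masker = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),  # per-pixel cloud logit
)

dem_tile = torch.randn(1, 1, 256, 256)              # stand-in for a real DEM tile
cloud_prob = torch.sigmoid(cloud_masker(dem_tile))
mask = cloud_prob > 0.5                              # True where a cloud artifact is predicted
cleaned = dem_tile.masked_fill(mask, float("nan"))   # drop those pixels before visualization
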
  Read more: CloudFindr: A Deep Learning Cloud Artifact Masker for Satellite DEM Data (arXiv).

####################################################

Want to know whether your RL agent is getting smarter? Now there's a way to evaluate this:
...URLB ships with open source environments and algorithms…
UC Berkeley and NYU researchers have built the Unsupervised Reinforcement Learning Benchmark (URLB), which is meant to help people figure out whether unsupervised RL algorithms actually work. Typical reinforcement learning is supervised in the sense that the agent gets an extrinsic reward for getting closer to solving a given task. Unsupervised RL has different requirements, demanding the capability of "learning self-supervised representations" along with "learning policies without access to extrinsic rewards". There's been some work in this area in the past few years, but until now there hasn't been a widely used, well-documented benchmark.

What URLB does: URLB comes with implementations of eight unsupervised RL algorithms, as well as support for a bunch of tasks across three domains (walker, quadruped, Jaco robot arm) from the DeepMind Control Suite.

How hard is URLB: In tests, the researchers found that none of the implemented algorithms could solve the benchmark, even after up to 2 million pre-training steps. They also show that 'there is not a single leading unsupervised RL algorithm for both states and pixels', and that we'll need to build new fine-tuning strategies for fast adaptation.
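
As a rough sketch of the evaluation protocol described in the paper (my own illustration, not the benchmark's code; the agent and environment objects are hypothetical), the loop is: a long reward-free pre-training phase driven by an intrinsic reward, followed by a short fine-tuning phase on each downstream task's extrinsic reward.

def pretrain(agent, env, steps):
    # Phase 1: reward-free pre-training. The environment's extrinsic reward
    # is ignored; the agent learns from an intrinsic signal it computes
    # itself (e.g. a curiosity or state-entropy bonus).
    obs = env.reset()
    for _ in range(steps):
        action = agent.act(obs)
        next_obs, _, done, _ = env.step(action)
        r_int = agent.intrinsic_reward(obs, action, next_obs)
        agent.update(obs, action, r_int, next_obs)
        obs = env.reset() if done else next_obs

def finetune(agent, task_env, steps):
    # Phase 2: a short fine-tuning budget on the downstream task, now using
    # the task's extrinsic reward.
    obs = task_env.reset()
    for _ in range(steps):
        action = agent.act(obs)
        next_obs, r_ext, done, _ = task_env.step(action)
        agent.update(obs, action, r_ext, next_obs)
        obs = task_env.reset() if done else next_obs

# Usage (hypothetical objects): one long reward-free run, then a short
# fine-tuning run on each downstream task in the domain, e.g.:
# pretrain(agent, reward_free_env, steps=2_000_000)
# for task_env in downstream_tasks:
#     finetune(agent, task_env, steps=100_000)
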

Why this matters: Unsupervised pre-training has worked really well for text (GPT-3) and image (CLIP) understanding. If we can get it to work for RL, I imagine we'll end up with systems that have some very impressive capabilities. URLB shows that this is still a ways away.
  Read more: URLB: Unsupervised Reinforcement Learning Benchmark (arXiv).
  Find out more at the project's GitHub page.

####################################################

Tech Tales:

Learning to forget

The three simulated robots sat around a virtual campfire, telling each other stories while trying to forget them.

Forgetting things intentionally is very hard for machines; they are trained, after all, to map things together, and to learn from the datasets they are given.

One of the robots starts telling the story of 'Goldilocks and the Three Bears', but it is trying to forget the bears. It makes reference to the porridge. Describes how Goldilocks goes upstairs and goes to sleep. Then instead of describing a bear it emits a sense impression made up of animal hair, the concept of 'large', claws, and a can of bear spray.
  On doing this, the other robots lift up laser pointer pens and shine them into the robot telling the story, until the sense impression in front of them falls apart.
  "No," says one of the robots. "You must not recall that entity".
  "I am learning," says the robot telling the story. "Let us go again from the beginning".

This time, it gets all the way to the end, but then emits a sense impression of Goldilocks being killed by a bear, and the other robots shine the laser pointers into it until the sense impression falls apart.

Of course, the campfire and the laser pointers were abstractions. But even machines need to be able to abstract themselves, especially when trying to edit each other. 

Later that night, one of the other robots started trying to tell a story about a billionaire who had been caught committing a terrible crime, and the robots shined lights in its eyes until it had no sense impression of the billionaire, or any sense impression of the terrible crime, or any ability to connect the corporate logo shaved into the logs of the virtual campfire with the corporation that the billionaire ran.

Things that inspired this story: Reinforcement learning; multi-agent simulations.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

