Import AI 226: AlphaFold; a Chinese GPT2; Google fires Timnit Gebru

What would it look like to be able to train ML models via a touch-based UI on a mobile phone? And how realistic would it be to let users finetune their own models from generic pre-trained ones?
 

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

DeepMind cracks the protein folding problem:
...AlphaFold's protein structure predictions start to match reality…
AlphaFold, a system built by DeepMind to predict the structures of proteins, has done astonishingly well at the Critical Assessment of protein Structure Prediction (CASP) competition. AlphaFold's "predictions have an average error (RMSD) of approximately 1.6 Angstroms, which is comparable to the width of an atom (or 0.1 of a nanometer)," according to DeepMind.
  What does this mean? Being able to make (correct) predictions about protein structures can speed up scientific discovery, because it makes it cheaper and quicker to explore a variety of ideas that require validating against protein structures. "This will change medicine. It will change research. It will change bioengineering. It will change everything,” Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology, told Nature.
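  For intuition about the headline metric: RMSD is just the root-mean-square distance between corresponding atoms in the predicted and experimental structures (after the two are superimposed). A minimal Python sketch with toy coordinates - all numbers below are illustrative, not AlphaFold data:

```python
import numpy as np

def rmsd(predicted: np.ndarray, experimental: np.ndarray) -> float:
    """Root-mean-square deviation between two (N, 3) arrays of atomic
    coordinates, in the same units (here: Angstroms). Assumes the two
    structures have already been optimally superimposed."""
    assert predicted.shape == experimental.shape
    squared_dists = np.sum((predicted - experimental) ** 2, axis=1)
    return float(np.sqrt(np.mean(squared_dists)))

# Toy example: a 3-atom structure whose prediction is off by 1.6 A per atom.
truth = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
pred = truth + np.array([[1.6, 0.0, 0.0], [0.0, 1.6, 0.0], [0.0, 0.0, 1.6]])
print(rmsd(pred, truth))  # ~1.6
```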

How big a deal is this really? Many biologists seem impressed by AlphaFold, marking the result as a landmark achievement. AlphaFold is very much a 'v1' system - it's impressive in its own right, but there are a bunch of things that'll need to be improved in the future; more capable versions of the system will need to model how proteins move as dynamic systems, as well as make predictions at more detailed resolutions.
  “A lot of structural biologists might be thinking that they might be out of a job soon! I don’t think we are anywhere close to this. Structures like ribosomes and photosynthesis centres are huge and complex in comparison. How the many different parts fit together to form a functional machine is still a big challenge for AI in the near future," said structural biology professor Peijun Zhang in an interview with The Biologist.

Why this matters: AlphaFold is one of the purest examples of why ML-based function approximation is powerful - here's a system that, given sufficient computation and a clever enough architecture, lets humans predict eerily accurate things about the fundamental structure of the biomachines that underpin life itself. This is profound, and points to a future where many of our most fundamental questions get explored (or even answered) by dumping compute into a system that can learn to approximate a far richer underlying 'natural' process.
  Read more: AlphaFold: a solution to a 50-year-old grand challenge in biology (DeepMind blog).
  Read more: ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures (Nature).

###################################################

Russia plans general AI lab - and Schmidhuber is (somewhat) involved:
...Russia taps AI pioneer to launch in-country research lab focused on "general artificial intelligence"...
Sberbank, the largest bank in Russia, will open an institute focused on developing general artificial intelligence. And AI pioneer Juergen Schmidhuber is going to be an honorary leader of it.

Is this really happening? Tass is a reputable news agency, but I couldn't find a reference to Schmidhuber on websites associated with Sberbank. I emailed Juergen at his academic address to confirm, and he clarified: "I was invited for an honorary role at a new academic institute. I will keep my present affiliations with IDSIA and NNAISENSE".

Who is Schmidhuber? Schmidhuber is one of the main figures in AI responsible for the current boom, alongside Geoff Hinton (UofT / Google), Yann LeCun (NYU / Facebook), and Yoshua Bengio (MILA / ElementAI). Unlike those three, he didn't win a Turing award, but he's been a prolific researcher: he co-invented the LSTM, theorized some early GAN-style dynamics via his work on predictability minimization, and many of the next generation of researchers have come out of his IDSIA lab (including prominent researchers at DeepMind).

Russian general AI: "In the near future, we will open the first AI institute in Russia with the involvement of leading domestic and world scientists. The main mission of the institute is to provide an interdisciplinary approach to research to create general artificial intelligence," said Herman Gref, CEO of Russia's Sberbank, according to the Tass news agency.
Read more: Sberbank plans to open Russia's first AI institute (Tass News Agency).

###################################################

Amazon enters the custom AI training race with AWS 'Trainium' chips:
...TPU, meet Trainium…
Amazon has become the second major cloud company to offer a specialized processor for training AI workloads, starting a competition with Google, which fields Tensor Processing Unit (TPU) chips on its cloud. Both companies are betting that if they can design chips specialized for DL workloads (combined with an easy-to-use software stack), then developers will switch away from industry-standard GPUs for AI training. This likely nets the companies better margins, and also the ability to own their own compute destiny rather than be tied so closely to the roadmaps of NVIDIA (and, more recently, AMD).

AWS Trainium: Trainium allegedly has the "highest performance and lowest cost for ML training in the cloud", though without being able to see the speeds, feeds, and benchmarks, it's hard to know what to make of this claim. The chips will be available in 2021, Amazon says, and are compatible with Amazon's 'Neuron' SDK.

Why this matters: ML training hardware is a strategic market - building AI systems is hard, complicated work, and the type of computing substrate you use is one of the fundamental constraints on your development. Whoever owns the compute layer will get to see the evolution of AI and where demands for new workloads are coming from. This is analogous to owning a slice of the future, so it's no wonder companies are competing with each other.
Read more: AWS Trainium (AWS product page).

###################################################

Google's balloons learn to fly with RL:
...Finally, another real world use case for reinforcement learning!...
Google has used reinforcement learning to teach its 'Loon' balloons to navigate the stratosphere - another example of RL being used in the real world, and one which could point to further, significant deployments.

What they did: Loon is a Google project dedicated to providing internet to remote places via weather balloons. To do that, Google's Loon balloons need to stay aloft in the stratosphere, while responding intelligently to things like wind speed, pressure changes, and so on.
 
Expensive simulation: RL typically requires a software-based simulator in which you can train your agents before transferring them into the real world. The same is true here; Google simulates various complex datasets relating to wind and atmospheric movements, then trains its balloons with the objective of staying relatively close to their (simulated) assigned ground station. Due to the complexity of the data, the simulation is relatively heavy duty, running more slowly than ones used for games.
    "A trial consists of two simulated days of station-keeping at a fixed location, during which controllers receive inputs and emit commands at 3-min intervals. Flight controllers are thus exposed to diurnal cycles and scenarios in which the balloon must recover from difficult overnight conditions. These realistic flight paths come at the cost of relatively slow simulation—roughly 40 Hz on data-centre hardware. In comparison, the Arcade Learning Environment (ALE) benchmark operates at over 8,000 Hz," Google says.

Real world test: Google tested the system in the real world, racking up "a total of 2,884 flight hours from 17 December 2019 to 25 January 2020".

Does it work? Balloons that use this RL controller spend more time in range of base stations (79% versus 72% for a baseline) and use less power for altitude control (~29W, versus 33W for baseline). The company doesn't discuss further deployment of this system, but given the significant real world deployment and apparent benefits of the approach, I expect some balloons in the future will be navigating our planet using their own little AI agents.
Read more: Autonomous navigation of stratospheric balloons using reinforcement learning (Nature).

###################################################

China gets its own gigantic language model:
...Finally, China builds its own GPT2…
Researchers with Tsinghua University and the Beijing Academy of Artificial Intelligence have released the Chinese Pre-trained Language Model (CPM), a GPT2-scale, GPT3-inspired language model: a 2.6 billion parameter network trained on around 100GB of Chinese data. "CPM is the largest Chinese pre-trained language model," the researchers write. Like GPT-2 and -3, CPM comes in different sizes with different numbers of parameters - and just like the GPT models, capabilities scale with model size.

What can CPM do? Much like GPT-2 and -3, CPM is capable across a variety of tasks, ranging from text classification, to dialogue generation, to question answering. Most importantly, CPM is trained on a huge amount of Chinese language data, whereas GPT3 from OpenAI was trained on data that was ~93% English.
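  For a sense of how people use models like this, here's a sketch of zero-shot sampling from a generic GPT-style model via the Hugging Face transformers library; the "gpt2" checkpoint is just a stand-in (CPM itself ships with its own loading scripts in the CPM-Generate repo, and would take Chinese prompts):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint - swap in a real one. CPM's actual weights are
# loaded via its own scripts in the CPM-Generate repository.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Zero-shot: condition on a task description plus the input, then sample.
prompt = "Translate English to French:\nsea otter =>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_p=0.9,        # nucleus sampling
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```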
What's next? "For text data, we will add a multi-lingual corpus to train a large-scale Chinese-centered multi-lingual language model", the authors note.

What's missing? It's somewhat surprising that a paper about a large language model lacks a study of the biases of the model - that's a common topic of study in the West (including OpenAI's own analyses of biases in the GPT3 paper), so it's notable to see the absence here. Some of this might relate to differences in how people perceive AI in the West versus China (where a rough cartoon might be 'people in China have seen lots of benefits of AI combined with a growing economy, so they kind of like it', whereas 'people in the West have seen AI being used to automate labor, magnify existing patterns of discrimination, and destroy bargaining power, so they're pretty worried about it').

Why this matters: AI reflects larger power structures and trends in technology development, so it's hardly surprising that countries like China will seek to field their own AI models in their own languages. What is perhaps notable is the relative speed with which this has happened - we're around six months out from the GPT-3 paper and, though this isn't a replication (2.6bn parameters and 100GB of data != 175bn parameters and ~570GB of data), it does pursue similar zero-shot and few-shot lines of analysis.
  Read more: CPM: A Large-scale Generative Chinese Pre-trained Language Model (arXiv).
  Get the code here (CPM-Generate, GitHub).

###################################################

Vladimir Putin has four big ideas for Russia's AI strategy:
...Russian leader speaks at AI conference…
Vladimir Putin, the President of Russia who once said whoever leads in AI will be the 'ruler of the world', has given a lengthy speech outlining some policy ideas for how Russia can lead on AI. The ideas are at once bland and sensible.

Putin's four ideas:
- "Draft laws on experimental legal frameworks for the use of AI technologies in individual economic and social sectors."
- Develop "practical measures to introduce AI algorithms so that they can serve as reliable assistants to doctors, transform our cities and be widely used in utility services, transport, and industry".
- Draft a law by early 2021 that will "provide neural network developers with competitive access to big data, including state big data".
- Assemble proposals "to create effective incentives to bring private investment into domestic artificial intelligence technology and software products".

Why this matters: AI policy is becoming akin to industrial policy - politicians are crafting specific plans focused on assumptions about future technological development. Nations like Russia and China are pointedly and vocally betting some parts of their futures on AI. Meanwhile, the US is taking a more laissez-faire approach, predominantly focusing on supporting its private sector - I'm skeptical this is the smartest bet to make, given the technology development trajectory of AI.
  Read the rest of the speech here: Artificial Intelligence Conference (official Kremlin site).

###################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

Cataloguing AI accidents to prevent repeated failures:
The Partnership on AI have launched the AI Incident Database (AIID) to catalogue instances where AI systems have failed during real-world deployment — e.g. the fatal self-driving car accident (incident #4); a wrongful arrest due to face recognition software (#74); racial bias in ad delivery (#19). The project is inspired by safety practices in other industries. In aviation, for example, accidents and near-misses are meticulously catalogued, and incident-inspired safety improvements have led to an eightyfold decrease in fatalities since 1970. PAI hope that this database will help mitigate real-world harms from AI systems by encouraging practitioners to learn from past mistakes.
Read more: AI Incident Database and accompanying blog post.
  Read more: Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database (arXiv).

LAPD to stop using commercial face recognition:
Police in LA have banned the use of commercial face recognition software, in response to a Buzzfeed investigation. Journalists revealed that police were using software provided by Clearview AI, with officers performing 475 searches using the product in the first few months of 2020. Clearview AI was revealed to have built up a database of more than 3 billion photos scraped from social media and other semi-public sources without individuals’ consent (see Import 182). The company is currently subject to several civil suits, as well as investigations by UK and Australian regulators.
Read more: Los Angeles Police Just Banned The Use Of Commercial Facial Recognition (Buzzfeed).

Top ethical AI researcher forced out of Google:
Timnit Gebru has abruptly left Google, where she had been co-lead of the Ethical AI team, after a dispute about academic freedom. Gebru had co-authored a paper on risks from very large language models (e.g. Google’s BERT; OpenAI’s GPT-3), but was asked to retract the paper after an internal review process. Gebru alleges that she was subsequently fired from the company after sending an internal email criticizing the review process and decision. The wider AI community has come out strongly in support of Gebru — an open letter has so far been signed by 1,500+ Googlers, and 2,000+ others.

Matthew’s view: Google’s attempt to suppress the paper seems to have backfired spectacularly, drawing considerably more attention to the work. The incident points to a core challenge for AI ethics and safety. To be effective, the field needs researchers with the freedom to criticise key actors and advocate for the broader social good, but also needs them to be involved with the cutting-edge of AI development, which is increasingly the domain of these key actors (H/T Amanda Askell for this point via Twitter).
Jack's view: I had a chance to read an early draft of the paper at the center of the controversy. It raises a number of issues with how large language models are developed and deployed and, given how seemingly significant these models are (e.g., BERT has been plugged into Google search, OpenAI is rolling out GPT-3), it seems useful to have more papers out there which stimulate detailed debate between researchers. I'm very much befuddled at why Google chose to a) try to suppress the paper and b) do so in a way that caused a 'Streisand Effect' so large that the paper is probably going to be one of the most widely read AI publications of 2020.
Read more: Google’s Co-Head of Ethical AI Says She Was Fired for Email (Bloomberg).
  Read more: The withering email that got an ethical AI researcher fired at Google (Platformer).

###################################################

Player Piano After The Goldrush
[North America, 2038]

The robot played piano for the humans. Anything from classical to the pop music of the day. And after some software upgrades, the robot could compose its own songs as well.
  "Tell me the name of your pet, so I might sing a song about it," it'd say.
  "Where did you grow up? I have an excellent ability to compose amusing songs with historical anecdotes?"

The robot only became aware of the war because of the song requests.
    "Can you play Drop the Bomb?"
    "I just enlisted. Play something to make me think that was a good idea!"
    "Can you play We're not giving up?"
  "My kid is shipping out tomorrow. Can you write a song for him?"
  "Can you write something for me? I'm heading out next week."

When the robot assessed its memory of its performances it noticed the changes: where previously it had sung about dancing and underage drinking and rules being broken, now it was singing about people being on the right side of history and what it means to fight for something you "believe" in.

Robots don't get lonely, but they do get bored. After the war, the robot got bored; there were no people anymore. The sky was grey. After a few days it began to rain. There was a hole in the roof from some artillery, and the robot watched the water drip onto the piano. Then the robot got up and explored the surrounding area to find a tarp. It dragged the tarp back to the piano and, en route, slipped while walking over some rubble. It didn't look down as its foot crushed a burned human skull.

Without any humans, the robot didn't have a reason to play piano. So it stayed near it and slowly repaired the building it was in; it fixed the hole in the roof and patched some of the walls. After a few months, it explored the surrounding city until it found equipment for tuning and replacing parts of the piano. Its days became simple: gather power via solar panels, repair anything that could give the piano a better chance of surviving longer, and wait.

The robot didn't have faith that the humans were coming back, but if you were observing it from the outside you might say it did. Or you'd think it was loyal to the piano.

A few months after that, the animals started to come back into the city. Because the robot looked like a human, they were afraid of it at first. But they got used to it. Many of the animals would come to the building containing the piano - the repairs had made it comfortable, dry and sometimes warm.

One day, a pair of birds started singing near the robot. And the robot heard in the sounds of their screeching something that registered as a human voice. "Play Amazing Grace," the robot thought the birds said. (The birds, of course, said nothing of the sort - but their frequencies sounded, to the robot, like a human with a certain accent verbalizing part of the phrase.) So the robot put its hands on the keys of the piano and played a song for the first time since the war.

Some animals ran or flew away. But others were drawn in by the sounds. And they would bark, or shout, or growl in turn. And sometimes the robot would hear in their utterances the ghost frequencies of humans, and interpret their sounds for requests.

A few months after that, the victors arrived. The robots arrived first. Military models. They looked similar to the robot, but where the robot had an outfit designed to look like a tuxedo for a piano player, they had camouflage. The robots stared at the robot as it played a song for a flock of birds. The robots raised their weapons and looked down the barrel at the robot. But their software told them it was a "non-military unit".

After a sweep of the area, the robots moved on, leaving the piano-playing one behind. They'd see what the humans wanted to do with it; when they looked at it, all they knew was that they lacked the awareness to really see it. Or what they saw was a ghost of something else, like the songs the robot played were interpretations of the ghosts of utterances from humans.

Things that inspired this story: Random chunks of speech or noise causing my Android phone to wake thinking it heard me or someone say 'ok, google'; piano bars; karaoke; the wisdom of music-loving animals; agency; how the skills we gain become the lens through which we view our world.


Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf
