Import AI 259: Race + Medical Imagery; Baidu takes SuperGLUE crown; AlphaFold and the secrets of life

What is the difference between interpolation and creativity?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Uh oh - ML systems can make race-based classifications that humans can't understand:
...Medical imagery analysis has troubling findings for people who want to deploy AI in a medical setting…
One of the reasons why artificial intelligence is challenging from a policy perspective is that AI systems tend to learn to discriminate between things using features that may not be legal to use for discrimination - for example, image recognition systems will frequently differentiate between people on the basis of protected categories (race, gender, etc). Now, a bunch of researchers from around the world have found that machine learning systems can learn to discriminate between different races using features in medical images that aren't intelligible to human doctors.
  Big trouble in big medical data: This is a huge potential issue. As the authors write: "our findings that AI can trivially predict self-reported race -- even from corrupted, cropped, and noised medical images -- in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to."

What they found: "Standard deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities." They tested out a bunch of models on datasets including chest x-rays, breast mammograms, CT scans (computed tomography), and more, and found that models were able to tell different races apart even under degraded image settings. Probably the most striking finding is that "models trained on high-pass filtered images maintained performance well beyond the point that the degraded images contained no recognisable structures; to the human co-authors and radiologists it was not even clear that the image was an x-ray at all," they write. In other words - these ML models are making decisions about racial classification (and doing it accurately) using features that humans can't even observe, let alone analyze.
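To get a feel for what "high-pass filtered" means here: you zero out an image's low spatial frequencies, keeping only fine-grained texture, so the coarse anatomy a radiologist relies on disappears. A minimal sketch with NumPy, using a synthetic image and an arbitrary cutoff (not the paper's settings):

```python
import numpy as np

def high_pass_filter(image, cutoff):
    """Zero out low spatial frequencies, keeping only fine detail.

    `cutoff` is the radius (in frequency bins) of the removed
    low-frequency disc - larger values discard more structure.
    """
    freq = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    yy, xx = np.ogrid[:rows, :cols]
    dist = np.sqrt((yy - rows / 2) ** 2 + (xx - cols / 2) ** 2)
    freq[dist < cutoff] = 0  # remove coarse structure (incl. the DC term)
    return np.real(np.fft.ifft2(np.fft.ifftshift(freq)))

# A synthetic 64x64 "scan": a smooth gradient plus fine noise texture.
rng = np.random.default_rng(0)
image = np.linspace(0, 1, 64)[:, None] * np.ones(64) \
        + 0.05 * rng.standard_normal((64, 64))
filtered = high_pass_filter(image, cutoff=8)
# The smooth gradient is gone; only fine texture survives - yet the
# paper's models still predicted race from images degraded like this.
```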

Why this matters: We're entering a world where an increasing proportion of the 'thinking' taking place in it is occurring via ML systems trained via gradient descent which 'think' in ways that we as humans have trouble understanding (or even being aware of). To deploy AI widely into society, we'll need to be able to make sense of these alien intelligences.
Read more: Reading Race: AI Recognises Patient's Racial Identity In Medical Images (arXiv)

####################################################

Google spins out an industrial robot company:

...Intrinsic: industrial robots that use contemporary AI systems…
Google has spun a new company, Intrinsic, out of Google X, its R&D arm. Intrinsic will focus on industrial robots that are easier to customize for specific tasks than those we have today. "Working in collaboration with teams across Alphabet, and with our partners in real-world manufacturing settings, we’ve been testing software that uses techniques like automated perception, deep learning, reinforcement learning, motion planning, simulation, and force control," the company writes in its launch announcement.

Why this matters: This is not a robot design company - all the images on the announcement use mainstream industrial robotic arms from companies such as Kuka. Rather, Intrinsic is a bet that the recent developments in AI are mature enough to be transferred into the demanding, highly optimized context of the real world. If there's value here, it could be a big deal - 355,000 industrial robots were shipped worldwide in 2019 according to the International Federation of Robotics, and there are more than 2.7 million robots deployed globally right now. Imagine if just 10% of these robots became really smart in the next few years?
  Read more: Introducing Intrinsic (Google X blog).

####################################################

DeepMind publishes its predictions about the secrets of life:

...AlphaFold goes online…
DeepMind has published AlphaFold DB, a database of "protein structure predictions for the human proteome and 20 other key organisms to accelerate scientific research". AlphaFold is DeepMind's system that has essentially cracked the protein folding problem (Import AI 226) - a grand challenge in science. This is a really big deal that has been widely covered elsewhere. It is also very inspiring - as I told the New York Times, this announcement (and the prior work) "shows that A.I. can do useful things amid the complexity of the real world". In a couple of years, I expect we'll see AlphaFold predictions turn up as the input priors for a range of tangible advances in the sciences.
  Read more: AlphaFold Protein Structure Database (DeepMind).

####################################################

Baidu sets new natural language understanding SOTA with ERNIE 3.0:
...Maybe Symbolic AI is useful for something after all?...

Baidu's "ERNIE 3.0" system has topped the leaderboard of the SuperGLUE natural language understanding benchmark, suggesting that by combining symbolic and learned elements, AI developers can create something more than the sum of its parts.

What ERNIE is: ERNIE 3.0 is the third generation of the ERNIE model. ERNIE models combine large-scale pre-training (e.g., similar to what BERT or GPT-3 do) with learning from a structured knowledge graph of data. In this way, ERNIE models combine the contemporary 'gotta learn it all' paradigm with a more vintage symbolic-representation approach.
  The first version of ERNIE was built by Tsinghua and Huawei in early 2019 (Import AI 148), then Baidu followed up with ERNIE 2.0 a few months later (Import AI 158), and now they've followed up again with 3.0.
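The core fusion idea can be sketched abstractly: look up entities mentioned in the text and combine their knowledge-graph embeddings with the token embeddings before the rest of the network runs. A toy illustration only - the entity table, dimensions, and random stand-in embeddings are invented, and this is not Baidu's actual architecture:

```python
import numpy as np

DIM = 8
rng = np.random.default_rng(42)

# Toy knowledge graph: entity -> learned graph embedding (invented).
kg_embeddings = {"Beijing": rng.standard_normal(DIM),
                 "China": rng.standard_normal(DIM)}

def token_embedding(token):
    """Stand-in for a pretrained model's token embedding lookup."""
    token_rng = np.random.default_rng(abs(hash(token)) % (2 ** 32))
    return token_rng.standard_normal(DIM)

def fuse(tokens):
    """Add each token's KG embedding (if the token names a known entity).

    Returns a (len(tokens), DIM) matrix of knowledge-enhanced
    representations - the input to the rest of the network.
    """
    rows = []
    for tok in tokens:
        vec = token_embedding(tok)
        if tok in kg_embeddings:  # entity found in the graph
            vec = vec + kg_embeddings[tok]
        rows.append(vec)
    return np.stack(rows)

fused = fuse(["Beijing", "is", "in", "China"])
```

The point of the design is that facts stored symbolically in the graph ("Beijing is in China") can flow into the learned representation without the model having to rediscover them from raw text alone.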

What's ERNIE 3.0 good for? ERNIE 3.0 is trained on "a large-scale, wide-variety and high-quality Chinese text corpora amounting to 4TB storage size in 11 different categories", according to the authors, including a Baidu knowledge graph that contains "50 million facts". In tests, ERNIE 3.0 does well on a broad set of language understanding and generation tasks. Most notably, it sets a new state-of-the-art on SuperGLUE, displacing Google's hybrid T5-Meena system. SuperGLUE is a suite of NLU tests which is widely followed by researchers and can be thought of as being somewhat analogous to the ImageNet of text - so good performance on SuperGLUE tends to mean the system will do useful things in reality.
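For readers unfamiliar with the leaderboard mechanics: SuperGLUE's headline number is (roughly) an average over eight per-task scores, so topping it means being strong across the board rather than on one task. A sketch with hypothetical scores - the task names are SuperGLUE's real ones, but the numbers are invented, and the real benchmark also averages multiple metrics within some tasks:

```python
# Hypothetical per-task scores (percent) - NOT ERNIE 3.0's actual results.
task_scores = {
    "BoolQ": 91.0, "CB": 98.0, "COPA": 97.0, "MultiRC": 88.6,
    "ReCoRD": 94.7, "RTE": 92.6, "WiC": 77.4, "WSC": 97.3,
}
overall = sum(task_scores.values()) / len(task_scores)
print(f"SuperGLUE overall: {overall:.1f}")
```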

Why this matters: ERNIE is interesting partially because of its fusion of symbolic and learned components, as well as being a sign of the further maturation of the ecosystem of natural language understanding and generation in China. A few years ago, Chinese researchers were seen as fast followers on various AI innovations, but ERNIE is one of a few models developed primarily by Chinese actors and now setting a meaningful SOTA on a benchmark developed elsewhere. We should take note.
  Read more: ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation (arXiv).

####################################################

Things that appear as toys will irrevocably change culture, exhibit 980: Toonify'd Big Lebowski

Here's a fun YouTube video where someone runs a scene from the Big Lebowski through SnapChat's 'Snap Camera' to warp the faces of Jeff Bridges and co from humans into cartoons. It's a fun video that looks really good (apart from when the characters turn their heads at angles not well captured by the Toon setting's data distribution). But, like most fun toys, it has a pretty significant potential for impact: we're creating a version of culture where any given artifact can be re-edited and re-made into a different aesthetic form thanks to some of the recent innovations of deep learning.
  Check it out here: Nathan Shipley, Twitter.

####################################################

Want smart robots? See if you can beat the 'BEHAVIOR' challenge:

…1 agent versus 100 tasks...
Stanford researchers have created the 'BEHAVIOR' challenge and dataset, which "tests the ability to perceive the environment, plan, and execute complex long-horizon activities that involve multiple objects, rooms, and state transitions, all with the reproducibility, safety and observability offered by a realistic physics simulation."

What is BEHAVIOR: BEHAVIOR is a challenge where simulated agents need to "navigate and manipulate the simulated environment with the goal of accomplishing 100 household activities". The challenge involves agents represented as humanoid avatars with two hands, a head, and a torso, as well as agents taking the form of a commercially available 'Fetch' robot.

Those 100 activities in full: Bottling fruit! Cleaning carpets! Packing lunches! And so much more! Read the full list here. "A solution is evaluated in all 100 activities," the researchers write, "in three different types of instances: a) similar to training (only changing location of task relevant objects), b) with different object instances but in the same scenes as in training, and c) in new scenes not seen during training."
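Those three evaluation regimes amount to testing generalization at increasing distance from the training distribution, which can be sketched as a simple instance generator. A schematic illustration only - the scene and object names are invented, not drawn from the benchmark:

```python
import itertools

# Invented scenes and object instances, for illustration.
train_scenes = ["house_a", "house_b"]
held_out_scenes = ["house_c"]
train_objects = ["mug_1", "plate_1"]
held_out_objects = ["mug_2", "plate_2"]

def evaluation_instances(activity):
    """Yield (regime, scene, obj) evaluation instances for one activity."""
    # (a) training scenes + training objects, just relocated
    for scene, obj in itertools.product(train_scenes, train_objects):
        yield ("relocated", scene, obj)
    # (b) training scenes + unseen object instances
    for scene, obj in itertools.product(train_scenes, held_out_objects):
        yield ("new_objects", scene, obj)
    # (c) entirely unseen scenes
    for scene, obj in itertools.product(held_out_scenes,
                                        train_objects + held_out_objects):
        yield ("new_scenes", scene, obj)

instances = list(evaluation_instances("packing_lunches"))
```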

Why this matters: Though contemporary AI methods can work well on problems that can be accurately simulated (e.g., computer games, boardgames, writing digitized text, programming), they frequently struggle when dealing with the immense variety of reality. Challenges like BEHAVIOR will give us some signal on how well (simulated) embodied agents can do at these tasks.
  Read more: BEHAVIOR Challenge @ ICCV 2021 (Stanford Vision and Learning Lab).

####################################################

Tech Tales:

Abstract Messages

[A prison somewhere in America, 2023]

There was a guy in here for a while who was locked down pretty tight, but could still get mail. They'd read everything and so everyone knew not to write him anything too crazy. He'd get pictures in the mail as well - abstract art, which he'd put up in his cellblock, or give to other cellmates via the in-prison gift system.

At nights, sometimes you'd see a cell temporarily illuminated by the blue light of a phone; there would be a flicker of light and then it would disappear, muted most likely by a blanket or something else.

Eventually someone got killed. No one inside was quite sure why, but we figured it was because of something they'd done outside. The prison administrator took away a lot of our privileges for a while - no TV, no library, less outside time, bad chow. You know, a few papercuts that got re-cut every day.

Then another person got killed. Like before, no one was quite sure why. But - like the time before - someone else had killed them. All our privileges got taken away for a while, again. And this time they went further - turned everyone's rooms over.
  "Real pretty stuff," one of the guards said, looking at some of the abstract art in someone's room. "Where'd you get it?"
  "Got it from the post guy."
  "Real cute," said the guard, then took the picture off the wall and tested the cardboard with his hands, then ripped it in half. "Whoops," said the guard, and walked out.

Then they found the same sorts of pictures in a bunch of the other cells, and they saw the larger collection in the room of the guy who was locked down. That's what made them decide to confiscate all the pictures.
  "Regular bunch of artist freaks aren't you," one of the guards said, walking past us as we were standing at attention outside our turned-over cells.

A few weeks later, the guy who was locked down got moved out of the prison to another facility. We heard some rumors - high-security, and he was being moved because someone connected him to the killings. How'd they do that? We wondered. A few weeks later someone got the truth out of a guard: they'd found loads of smuggled-in phones when they turned over the rooms, which they expected, but all the phones had a made-for-kids "smart camera" app that could tell you things about what you pointed your phone at. It turned out the app was a front - it was made by some team in the Philippines with some money from somewhere else, and when you turned the app on and pointed it at one of the paintings, it'd spit out labels like "your target is in Cell F:7", or "they're doing a sweep tomorrow night", or "make sure you talked to the new guy with the face tattoo".

So that's why when we get mail, they just let us get letters now - no pictures.

Things that inspired this story: Adversarial images; steganography; how people optimize around constraints; consumerized/DIY AI systems; AI security.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

Copyright © 2021 Import AI, All rights reserved.
You are receiving this email because you signed up for it. Welcome!

Our mailing address is:
Import AI
Many GPUs
Oakland, California 94609

