Import AI 251: Korean GPT-3; facial recognition industrialization; faking fingerprints with GANs

Will the 'personality' types of AI systems be more or less varied than the types of the people that create them?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
 

Want to know what the industrialization of facial recognition looks like? Read this.
...Paper from Alibaba shows what happens at the frontier of surveillance…
Researchers with Alibaba, the Chinese Academy of Sciences, Shenzhen Technology University, and the National University of Singapore are trying to figure out how to train large-scale facial recognition systems more efficiently. They've just published a paper about some of the nuts-and-bolts needed to train neural nets at scales of 10 million to 100 million distinct facial identities.

Why this matters: This is part of the broader phenomenon of the 'industrialization of AI' (#182), where as AI is going from research into the world, people are starting to invest vast amounts of brain and compute power into perfecting the tooling used to develop these systems. Papers like this give us a sense of some of the specifics required for industrialization (here: tweaking the structure of a network to make it more scalable and efficient), as well as a baseline for the broader trend - Alibaba wants to deploy 100 million-scale facial recognition and is working on the technology to do it.
Read more: An Efficient Training Approach for Very Large Scale Face Recognition (arXiv).
Related: Here's a research paper about WebFace260M, a facial recognition dataset and challenge with 4 million distinct identities, totalling 260 million photographs. WebFace260M is developed primarily by researchers at Tsinghua University, with collaborators at XForwardAI and Imperial College London.
Read more: WebFace260M: A Benchmark Unveiling the Power of Million-Scale Deep Face Recognition (arXiv).
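At these scales the final classification layer dominates the cost of training, since a full softmax over 100 million identities is prohibitively expensive, so work like this typically restructures that layer. As a hedged illustration (this is a generic sampled-softmax sketch, not necessarily the exact method in either paper), you can score the true identity against only a sampled subset of negatives each step:

```python
import numpy as np

def sampled_softmax_loss(embedding, weights, label, num_sampled, rng):
    """Approximate the full softmax loss by scoring the true identity plus
    a random subset of negative identities, instead of all N classes.

    embedding:   (d,) feature vector for one face
    weights:     (N, d) classifier matrix, one row per identity
    label:       index of the true identity
    num_sampled: number of negative identities to sample
    """
    N = weights.shape[0]
    # Uniform negative sampling, excluding the true class (real systems
    # use smarter schemes, e.g. preferring hard negatives).
    candidates = np.delete(np.arange(N), label)
    negatives = rng.choice(candidates, size=num_sampled, replace=False)
    classes = np.concatenate(([label], negatives))
    logits = weights[classes] @ embedding   # (num_sampled + 1,) scores
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                # true class sits at index 0
```

Each step touches only num_sampled + 1 rows of the weight matrix rather than all N, which is the kind of change that makes 100 million-identity training tractable.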

###################################################

Help the OECD classify AI systems:

…Improve our ability to define AI systems, and therefore improve our ability to create effective AI policy…
The OECD, a multi-national policy organization, is carrying out a project aiming to classify and define AI systems. I co-chair this initiative and after a year and a half of work, we've released a couple of things readers may find interesting: a survey people can fill out to try and classify AI systems using our framework, and a draft of the full report on classifying and defining systems (which we'd love feedback on).

Why this is worth spending time on: This is a low-effort high-impact way to engage in AI policy and comments can be anonymous - so if you work at a large tech company and want to give candid feedback, you can! Don't let your policy/lobbyists/PR folk have all the fun here - go direct, and thereby increase the information available to policymakers.
This stuff seems kind of dull but really matters - if we can make AI systems more legible to policymakers, we make it easier to construct effective regulatory regimes for them. (And for those that wholly reject the notion of government doing any kind of regulation, I'd note that it seems useful to create some 'public knowledge' re AI systems which isn't totally defined by the private sector, so it seems worthwhile to engage regardless).
Take the OECD survey here (OECD).
Read the draft report here (Google Docs).
Read more in this tweet thread from me here (Twitter).

###################################################

Facebook builds Dynaboard: a way to judge NLP models via multiple metrics:

...Dynaboard is the latest extension of Dynabench, and might help us better understand AI progress…
Facebook and Stanford researchers have built Dynaboard, software to let people upload AI models, then test them on a whole bunch of different things at once. What makes Dynaboard special is the platform it is built on - Dynabench, a novel approach to NLP benchmarking which lets researchers upload models, then has humans evaluate the models, automatically generating data in areas where models have poor performance, leading to a virtuous cycle of continuous model improvement. (We covered Dynabench earlier in Import AI #248).

What is Dynaboard: Dynaboard is software "for conducting comprehensive, standardized evaluations of NLP models", according to Facebook. Dynaboard also lets researchers adjust the weight of different metrics - want to evaluate your NLP model with an emphasis on its fairness characteristics? Great, Dynaboard can do that. Want to focus more on accuracy? Sure, it can do that as well. Want to check your model is actually efficient? Yup, can do! Dynaboard is basically a way to visualize the tradeoffs inherent to AI model development - as Facebook says, "Even a 10x more accurate NLP model may be useless to an embedded systems engineer if it’s untenably large and slow, for example. Likewise, a very fast, accurate model shouldn’t be considered high-performing if it doesn’t work well for everyone."
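The adjustable-weights idea can be sketched in a few lines. (Dynaboard's real 'Dynascore' combines metrics via a utility-based exchange rate rather than a plain weighted average; this simplified sketch, with made-up metric names, just shows how re-weighting reshuffles a ranking:)

```python
def dynascore(metrics, weights):
    """Collapse per-metric scores into one leaderboard number using
    user-chosen weights.

    metrics: dict of metric name -> score in [0, 1], higher is better
    weights: dict of metric name -> non-negative importance weight
    """
    total = sum(weights.values())
    return sum(metrics[m] * w for m, w in weights.items()) / total

model = {"accuracy": 0.92, "fairness": 0.80, "throughput": 0.40}
# An embedded-systems engineer can up-weight throughput, dragging down
# the overall score of an accurate-but-slow model:
balanced = dynascore(model, {"accuracy": 1, "fairness": 1, "throughput": 1})
speed_first = dynascore(model, {"accuracy": 1, "fairness": 1, "throughput": 4})
```

Under the speed-first weighting the same model scores lower, which is the tradeoff-surfacing behavior the Dynaboard interface exposes.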

Why this matters: We write a lot about benchmarking here at Import AI because benchmarking is the key to understanding where we are with AI development and where we're going. Tools like Dynaboard will make it easier for people to understand the state of the art and also the deficiencies of contemporary models. Once we understand that, we can build better things.
  Read more: Dynaboard: Moving beyond accuracy to holistic model evaluation in NLP (Facebook AI Research).
  Read the paper: Dynaboard: An Evaluation-As-A-Service Platform for Holistic Next-Generation Benchmarking (PDF).
  Tweet thread from Douwe Kiela with more here (Twitter).
  Check out an example use case of Dynaboard here (NLI leaderboard, Dynabench).

###################################################

What I've been up to recently - co-founding Anthropic, a new AI safety and research company:

In December 2020, I left OpenAI. Since then, I've been thinking a lot about AI policy, measuring and assessing AI systems, and how to contribute to the development of AI in an increasingly multi-polar world. As part of that, I've co-founded Anthropic with a bunch of my most treasured colleagues and collaborators. Right now, we're focused on our research agenda and hope to have more to share later this year. I'm interested in working with technical people who want to a) measure and assess our AI systems, and b) work to contribute to AI policy and increase the amount of information governments have to help them think about AI policy - take a look at the site and consider applying!
  Find out more about Anthropic at our website (Anthropic).
And… if you think you have some particularly crazy high-impact idea re AI policy and want to chat about it, please email me - interested in collaborators.

###################################################

South Korea builds its own GPT-3:

…The multi-polar generative model era arrives...
Naver Labs has built HyperCLOVA, a 204B parameter GPT-3-style generative model, trained on lots of Korean-specific data. This is notable both because of the scale of the model (though we'll await more technical details to see if it's truly comparable to GPT-3), and also because of the pattern it fits into of generative model diffusion - that is, multiple actors are now developing GPT-3-style models, ranging from Eleuther (trying to do an open source GPT-3, #241), to China (which has built PanGu, a ~200bn parameter model, #247), to Russia and France (which are training smaller-scale GPT-3 models via Sberbank and 'PAGnol' via LightOn, respectively).

Why this matters: Generative models ultimately reflect and magnify the data they're trained on - so different nations care a lot about how their own culture is represented in these models. Therefore, the Naver announcement is part of a general trend of different nations asserting their own AI capacity/capability via training frontier models like GPT-3. Most intriguingly, the Google Translated press release from Naver says "Secured AI sovereignty as the world's largest Korean language model with a scale of 204B", which further gestures at the inherently political nature of these models.
  Read more: Naver unveils Korea's first ultra-large AI 'HyperCLOVA'... “We will lead the era of AI for all” (Naver, press release).

###################################################

Fake fingerprints - almost as good as real ones, thanks to GANs:

...Synthetic imagery is getting really useful - check out these 50,000 synthetic fingerprints…
Here's some research from Clarkson University and the company Precise Biometrics which shows how to use StyleGAN to generate synthetic fingerprints. The authors train on 72,000 512x512-pixel photos of fingerprints from 250 unique individuals, then try to generate new, synthetic fingerprints. In tests, another AI model they develop classifies these fingerprints as real 95.2% of the time, suggesting that you can use a GAN to programmatically generate a synthetic copy of reality, with only a slight accuracy hit.
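That 95.2% figure is a measurement you can reproduce for any generator: feed synthetic samples to a real-vs-fake classifier and count how often they pass. A minimal sketch (the classifier interface here is hypothetical - any callable returning an estimated P(real) works):

```python
def realness_rate(classifier, samples, threshold=0.5):
    """Fraction of synthetic samples that a real-vs-fake classifier
    accepts as 'real' at the given decision threshold."""
    accepted = sum(1 for s in samples if classifier(s) >= threshold)
    return accepted / len(samples)

# Toy check with a stand-in classifier that just echoes a precomputed
# P(real) score for each synthetic fingerprint:
scores = [0.9, 0.8, 0.3, 0.95]
rate = realness_rate(lambda s: s, scores)  # 3 of 4 pass -> 0.75
```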

Why this matters: This is promising for the idea that we can use AI systems to generate data which we'll use to train other AI systems. Like any system, this is vulnerable to a 'garbage in, garbage out' phenomenon. But techniques like this hold the promise of reducing the cost of data for training certain types of AI systems.
  Read more: High Fidelity Fingerprint Generation: Quality, Uniqueness, and Privacy (arXiv).
  Get the code (and 50,000 synthetically generated fingerprints) here: Clarkson Fingerprint Generator (GitHub).

###################################################

DeepMind: Turns out robots can learn soccer from a blank(ish) slate:
...FootballZero! AlphaSoccer!...
DeepMind has shown how to use imitation learning, population-based training, and self-play to teach some simulated robots how to play 2v2 football (soccer, to the American readers). The research is interesting because it smooshes together a bunch of separate lines of research that have been going on at DeepMind and elsewhere (population based training and self-play from AlphaStar! Imitation learning from a ton of projects! Reinforcement learning, which is something a ton of people at DM specialize in! And so on). The project is also a demonstration of the sheer power of emergence - through a three-stage training procedure, DeepMind teaches agents to pilot some simulated humanoid robots sufficiently well that they can learn to play football - and, yes, learn to coordinate with each other as part of the process.

How they did it: "In a sequence of training stages, players first learn to control a fully articulated body to perform realistic, human-like movements such as running and turning; they then acquire mid-level football skills such as dribbling and shooting; finally, they develop awareness of others and learn to play as a team, successfully bridging the gap between low-level motor control at a time scale of milliseconds, and coordinated goal-directed behaviour as a team at the timescale of tens of seconds," DeepMind writes.

Hardware: "Learning is performed on a central 16-core TPU-v2 machine where one core is used for each player in the population. Model inference occurs on 128 inference servers, each providing inference-as-a-service initiated by an inbound request identified by a unique model name. Concurrent requests for the same inference model result in automated batched inference, where an additional request incurs negligible marginal cost. Policy environment interactions are executed on a large pool of 4,096 CPU actor workers," DeepMind says.
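The "automated batched inference" trick is simple to sketch: group concurrent requests by model name so each model runs once per batch, making an extra request for an already-requested model nearly free. (Toy sketch with hypothetical model names; a real server would also handle timeouts and maximum batch sizes:)

```python
from collections import defaultdict

def batch_requests(requests):
    """Group pending inference requests by model name, so each model is
    invoked once on a whole batch instead of once per request.

    requests: list of (model_name, input) pairs
    """
    batches = defaultdict(list)
    for model_name, x in requests:
        batches[model_name].append(x)
    return dict(batches)

# Two requests for 'striker_v3' become one batched call:
reqs = [("striker_v3", [0.1]), ("keeper_v1", [0.2]), ("striker_v3", [0.3])]
batches = batch_requests(reqs)
```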

Why this matters: While this project is a sim-only one (DeepMind itself notes that the technique is unlikely to transfer), it serves as a convincing example of how simple ML approaches can, given sufficient data and compute, yield surprisingly rich and complex behaviors. I wonder if at some point we'll use systems like this to develop control policies for robots which eventually transfer to the real world?
  Read more: From Motor Control to Team Play in Simulated Humanoid Football (arXiv).
  Check out a video of DeepMind's automatons playing the beautiful game here (YouTube).

###################################################

Tech Tales:

Electric Sheep Dream of Real Sheep: "Imagination" in AI Models
Norman Searle, The Pugwash Agency for Sentience Studies

Abstract:

Humans demonstrate the ability to imagine a broad variety of scenarios, many of which cannot be replicated in reality. Recent advances in generative models combined with advances in robotics have created opportunities to examine the relationship between machine intelligences, machine imaginations, and human imaginations. Here, we examine the representations found within an agent trained in an embodied form on a robotic platform, then transferred into simulated mazes where it sees a copy of itself.

Selected Highlights:

After 10^8 environment steps, we note the development of representations in the agent that activate when it travels in front of a mirror. After 10^50 steps, we note these representations are used by the agent to help it plan paths through complex environments.

After 10^60 steps, we conduct 'Real2Sim' transfer to port the agent into a range of simulated environments that contain numerous confounding factors not encountered in prior real or simulated training. Agents which have been exposed to mirrors and subsequently demonstrate 'egocentric planning' tend to perform better in these simulated environments than those which were trained in a traditional manner.

Most intriguingly, we can meaningfully improve performance in a range of simulated mazes by creating a copy of our agent using the same robot morphology it trained on in the world, then exposing our agent to a copy of itself in the maze. Despite having never been trained in a multi-agent environment, we find that the agent will naturally learn to imitate its copy - despite no special communication being enforced between them.

In future work, we aim to more closely investigate the 'loss' circuits that light up when we remove the copy of an agent from a maze within the perceptual horizon of the agent. In these situations, our agent will typically continue to solve the maze, but it will repeatedly alternate between activations of the neurons associated with a sense-impression of an agent, and neurons associated with a combinatorial phenomenon we believe correlates to 'loss' - agents may be able to sense the absence of themselves.

Things that inspired this story: The ongoing Import AI series I'm writing involving synthetic AI papers (see recent prior issues of Import AI); robotics; notions of different forms of 'representation' leading to emergent behavior in neural networks; ego and counterego; ego.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me: @jackclarksf

Copyright © 2021 Import AI, All rights reserved.
You are receiving this email because you signed up for it. Welcome!

Our mailing address is:
Import AI
Many GPUs
Oakland, California 94609

