Import AI 246: Generating data via game engines; the FTC weighs in on AI fairness; Waymo releases a massive self-driving car dataset.

In the same way 'just-in-time' manufacturing revolutionized global capitalism, how much could 'just-in-time' automatic data gathering speed up the OODA loop of model development and deployment?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Use this dataset to get a Siri-clone to hear you better:
...Timers and Such is a specific dataset for a common problem…
Whenever you yell at a speech recognition system like Google Assistant or Siri and it doesn't hear you, that can be frustrating. One area where I've personally encountered this is when I yell various numbers at my (digital) assistant, which sometimes struggles to hear my accent. Now, research from McGill University, Mila, the University of Montreal, Paul Sabatier University, and Avignon University aims to make this easier with Timers and Such, a dataset of utterances people have spoken to their smart devices.

What's it for? The dataset is designed to help AI systems understand people when they describe four basic things: setting a timer, setting an alarm, doing simple mathematics (e.g., some hacky recipe math), and unit conversion (e.g., using a recipe from the US in Europe, or vice versa).

What does the dataset contain? The dataset contains around 2,200 spoken audio commands from 95 speakers, representing 2.5 hours of continuous audio. This is augmented by a larger dataset consisting of around 170 hours of synthetically generated audio.
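
What does the task look like in code? Here's a minimal, hypothetical sketch of the spoken language understanding (SLU) problem Timers and Such poses - mapping an utterance to one of the four intents plus numeric slots. The schema and the toy rule-based parser below are my own illustration, not the dataset's actual label format or the authors' SpeechBrain recipes.

    # Toy illustration of the SLU task in Timers and Such: map a
    # transcribed utterance to one of four intents plus numeric slots.
    # The schema here is illustrative, not the dataset's exact format.
    import re

    INTENTS = ["SetTimer", "SetAlarm", "SimpleMath", "UnitConversion"]

    def parse_command(transcript: str) -> dict:
        """Very naive rule-based stand-in for a trained SLU model."""
        text = transcript.lower()
        numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]
        if "timer" in text:
            return {"intent": "SetTimer", "slots": {"minutes": numbers}}
        if "alarm" in text:
            return {"intent": "SetAlarm", "slots": {"time": numbers}}
        if "convert" in text or " in " in text:
            return {"intent": "UnitConversion", "slots": {"values": numbers}}
        return {"intent": "SimpleMath", "slots": {"operands": numbers}}

    print(parse_command("set a timer for 20 minutes"))
    # {'intent': 'SetTimer', 'slots': {'minutes': [20.0]}}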

Why this matters: Google, Amazon, Microsoft, and other tech companies have vast amounts of the sort of data found in 'Timers and Such'. Having open, public datasets will make it easier for researchers to develop their own assistants, and provides a useful additional dataset to test modern ASR systems against.
  Read more: Timers and Such: A Practical Benchmark for Spoken Language Understanding with Numbers (arXiv).
  Get the code here (SLU recipes for Timers and Such v1.0, GitHub).

###################################################

Want to regulate AI? Send ideas to this conference:
...EPIC solicits papers…
EPIC, the Electronic Privacy Information Center, a DC-based research think tank, is hosting a conference later this year about AI regulation - and it wants people to submit regulatory ideas to it. The deadline is June 1, 2021, and successful proposals will be presented at a September 21 symposium. "Submissions can include academic papers, model legislation with explanatory memoranda, and more", EPIC says.
  Find out more here (EPIC site).

###################################################

European Commission tries to regulate AI:
...AI industrialization begets AI regulation…
The European Commission has proposed a wide-ranging regulatory approach to AI and, much like its prior work on consumer privacy via GDPR, it's likely that these regulations will become a template that other countries use to regulate AI.

High-risk systems: The most significant aspect of the regulation is the decision to treat "high-risk" AI systems differently from other ones. This introduces a regime where we'll need to figure out how to classify AI systems into different categories and then, once systems count as high-risk, be able to assess and measure their behavior once deployed. High-risk systems will need to use 'high quality' datasets that 'minimise risks and discriminatory outcomes', will need to be accompanied by detailed documentation, will need human oversight during operation, and will need a few other traits. All of these things are difficult to do today and it's not clear we even know how to do some of them - what does an appropriately fair and balanced dataset look like in practice, for example?

Why this matters: This legislation introduces a chicken-and-egg problem - our ability to accurately measure the capabilities of AI systems for various policy traits is underdeveloped today, but to conform to European legislation, companies will need to be able to do this. Therefore, this legislation might create more of an incentive for companies, academia, and governments to invest in this area. The multi-billion dollar question is who gets to define the ways we measure and assess AI systems for risk - whoever does will shape some of the most sensitive deployments of AI.
  Read more: Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence (European Commission press site).
  Read more: Proposal for a Regulation on a European approach for Artificial Intelligence (European Commission site).

###################################################

US regulator says AI companies need to ensure their datasets are fair:
...The Federal Trade Commission talks about how it might approach AI regulation…
The FTC has published a blog post about how companies should develop their AI systems so as not to fall afoul of (rarely enforced) FTC rules. The title gives away the FTC's view: Aiming for truth, fairness, and equity in your company's use of AI.

How does the FTC think companies should approach AI development? The FTC places a huge emphasis on fairness, which means it cares a lot about data. Therefore, the FTC writes in its blog post that "if a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups". It says AI developers should test their algorithms to ensure they don't discriminate on the basis of race, gender, or other protected classes (this will be a problem - more on that later). Other good practices include documenting how data was gathered, ensuring deployed models don't cause "more harm than good", and ensuring developers are honest about model capabilities.
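
What might 'testing your algorithm' actually involve? One common check is the 'four-fifths rule' for disparate impact - comparing rates of favorable outcomes across groups. The sketch below is a minimal illustration of that idea; it's a heuristic borrowed from US employment law, not a test the FTC prescribes.

    # Minimal disparate-impact check: compare rates of favorable model
    # outcomes across groups. The 4/5ths threshold is a common rule of
    # thumb from US employment law, not an FTC-mandated test.
    from collections import defaultdict

    def selection_rates(outcomes, groups):
        """outcomes: 0/1 model decisions; groups: parallel group labels."""
        favorable, total = defaultdict(int), defaultdict(int)
        for y, g in zip(outcomes, groups):
            favorable[g] += y
            total[g] += 1
        return {g: favorable[g] / total[g] for g in total}

    def disparate_impact_ratio(outcomes, groups):
        """Lowest group selection rate over highest; < 0.8 is a red flag."""
        rates = selection_rates(outcomes, groups)
        return min(rates.values()) / max(rates.values())

    outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
    groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(disparate_impact_ratio(outcomes, groups))  # 0.25/0.75 = 0.33, a red flag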

Here's how not to advertise your AI system: "For example, let’s say an AI developer tells clients that its product will provide “100% unbiased hiring decisions,” but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an FTC law enforcement action," the FTC writes.

The big problem at the heart of all of this: As we move into an era of AI development where models gain capabilities through semi-supervised or unsupervised learning, we can expect models to learn internal 'features' that correlate to things like legally protected groups. This isn't an academic hypothesis: a recent analysis of multimodal neurons, published on Distill, discovered AI systems that learned traits relating to specific religions, specific ages for humans, gender traits, and so on - all things that our existing regulatory structure says you can't discriminate on. Unfortunately, neural networks just kind of learn everything and make fine-grained discriminative choices from there. Therefore, rock: meet hard place. It'll be interesting to see how companies build their AI systems to respond to concerns like these from the FTC.
   The other problem at the heart of this: The FTC seems to presume that fair, balanced datasets exist. This is untrue. Additionally, it's not even clear how to build datasets that meet some universal standard for fairness. In reality, we're always going to ask 'fair for whom? Unfair for whom?' about everything that gets deployed. Therefore, if the FTC wants to actually enforce this stuff, it'll need to come up with metrics for assessing the diversity of datasets and then ensure developers build systems that conform to them - not remotely easy, and not something the existing FTC is well set up to do.

Who wrote this? The author of the post is listed as Elisa Jillson, whose LinkedIn identifies them as an attorney in the Division of Privacy and Identity Protection, Bureau of Consumer Protection, at the FTC.

Why this matters: Fitting together our existing policy infrastructure with the capabilities of AI models is a little bit like trying to join two radically different plumbing connections - it's a huge challenge and, until we solve it, there'll be lots of randomness in how existing laws are (and aren't) enforced with regard to AI systems. Posts like this give us a sense of how traditional regulators view AI systems; AI developers would do well to pay attention - the best way to deal with regulation is to get ahead of it.
  Read more: Aiming for truth, fairness, and equity in your company’s use of AI (Federal Trade Commission, blog).

###################################################

Waymo releases a gigantic self-driving car dataset:
...Want to train systems that think about how different AI things will interact with each other? Use this…
Waymo, Google's self-driving car spinoff, has announced the Waymo Open Motion Dataset (OMD), "a large scale, diverse dataset with specific annotations for interacting objects to promote the development of models to jointly predict interactive behaviors". OMD is meant to help developers train AI systems that can not only predict things from the perspective of a single self-driving car, but also model the broader interactions between self-driving cars and other objects, like pedestrians.

Waymo Open Motion Dataset: Waymo's dataset contains more unique roadways and covers a greater number of cities than other datasets from Lyft, Argo, and so on. The dataset consists of 574 hours of driving time in total. More significant than the length is the complexity: more than 46% of the thousands of individual 'scenes' in the dataset contain more than 32 agents; in the standard OMD validation set, 33.5% of scenes require predicting the actions of at least one pedestrian, and 10.4% require predicting the actions of a cyclist. Each scene has a time horizon of around 8 seconds, meaning AI systems will need to predict over a longer time horizon than is standard (3-5 seconds), which makes this a more challenging dataset to test against.
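
To get a feel for the task, here's a sketch of the simplest possible baseline for this kind of motion forecasting: extrapolating each agent's most recent velocity across the full 8-second horizon. The 10Hz sampling rate is my assumption for illustration; serious entries will use learned, interaction-aware models that beat this handily.

    import numpy as np

    # Constant-velocity baseline for multi-agent motion forecasting:
    # extrapolate each agent's last observed velocity across the horizon.
    # Assumes trajectories sampled at 10Hz (an illustrative choice).
    def constant_velocity_forecast(history, horizon_s=8.0, hz=10):
        """history: (num_agents, num_past_steps, 2) x,y positions.
        Returns (num_agents, horizon_steps, 2) predicted positions."""
        velocity = history[:, -1, :] - history[:, -2, :]   # per-step displacement
        steps = np.arange(1, int(horizon_s * hz) + 1)      # future steps 1..80
        return history[:, -1:, :] + steps[None, :, None] * velocity[:, None, :]

    # Two agents, two past observations each:
    past = np.array([[[0.0, 0.0], [1.0, 0.0]],    # moving +x
                     [[5.0, 5.0], [5.0, 4.5]]])   # moving -y
    print(constant_velocity_forecast(past).shape)  # (2, 80, 2)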

Why this matters: Self-driving car data is rare, though that is changing with releases like this one. By publishing this dataset, Google has made it easier for people to get a handle on the challenges of modelling multiple objects that all self-driving cars will need to surmount prior to deployment.
  Read more: Large Scale Interactive Motion Forecasting for Autonomous Driving: The Waymo Open Motion Dataset (arXiv).

###################################################

The future of AI is planet-scale satellite surveillance (and it'll be wonderful):
...What 'minicubes' have to do with the future of the world…
In the future, thousands of satellites are going to be recording pictures of the earth at multiple resolutions, multiple times a day, and much of this data will be available for civil use, as well as military. Now, the question is how we can make sense of all that data. That's the challenge at the heart of 'EarthNet', a new earth sensing dataset and competition put together by researchers with the Max-Planck-Institute for Biogeochemistry, the German Aerospace Center, the University of Jena, and the Technische Universität Berlin.

The EarthNet dataset contains 32,000 'minicubes' made up of Sentinel 2 satellite imagery, spread across Northern Europe. Each minicube contains 30 satellite frames sampled at five-day intervals at a resolution of 20m, along with 150 daily frames of five meteorological variables at a resolution of 1.28km. Assembling EarthNet was tricky: "we had to gather the satellite imagery, combine it with additional predictors, generate individual data samples and split these into training and test sets," the researchers write.
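
In code, a minicube is essentially a pair of aligned arrays: high-resolution satellite frames plus coarser daily weather drivers. The sketch below shows shapes consistent with the description above; the spatial extents (128x128 and 80x80), the four spectral bands, and the 10-frame context split are my assumptions for illustration, not the paper's exact specification.

    import numpy as np

    # Sketch of one EarthNet-style 'minicube'. Spatial extents, band
    # count, and context split are illustrative assumptions.
    minicube = {
        # 30 satellite frames at 5-day intervals, 4 bands, 20m/pixel
        "satellite": np.zeros((30, 4, 128, 128), dtype=np.float32),
        # 150 daily frames of 5 meteorological variables, 1.28km/pixel
        "meteo": np.zeros((150, 5, 80, 80), dtype=np.float32),
    }

    # The forecasting task, roughly: given the first chunk of the cube
    # plus the weather record, predict the remaining satellite frames.
    context = minicube["satellite"][:10]
    target = minicube["satellite"][10:]
    print(context.shape, target.shape)  # (10, 4, 128, 128) (20, 4, 128, 128)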

What's the point of EarthNet? EarthNet is ultimately meant to stimulate research into sensing and forecasting changes on the surface of the earth. To that end, there's also a challenge where teams can compete on various forecasting tasks, publishing their results to a leaderboard. If the competition gets enough entries, it'll give us a better sense of the state of the art here. More broadly, EarthNet serves as a complement to other aspects of AI research concerned with modelling the Earth - DeepMind recently did research into making fine-grained predictions for weather over the UK across a two-hour predictive horizon (Import AI 244); EarthNet, by comparison, is concerned with making predictions that span days. Combined, advances in short-term and long-term forecasting could give us a better understanding of planet earth and how it is evolving.
  Read more: EarthNet2021: A Large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task (arXiv).
  Find out more information about the competition here (EarthNet2021, official site).

###################################################

Could synthetic data be a new source of revenue for game companies? (Yes).
...Unity gets into data augmentation…
Game engine company Unity wants to use its technology to help companies build synthetic datasets to augment their real data. The initial business model for this seems to be consulting, with no specific prices listed. Companies might want to pay Unity to help them create more synthetic data because it's cheaper than gathering data from reality, and because once you've built your 3D environments and models, you can generate even more data in the future as the capabilities of the Unity engine advance.

Data - "At any scale": "The number of images you need for training depends on the complexity of your scene, the variety of objects you are using, and the requirements for accuracy in your solution. We will work with you to understand your needs, help scope the number of frames for your project, and iterate with you to ensure that the synthetic dataset meets your requirements," Unity writes. "In the future, we plan to provide a simple self-service interface so you can generate additional data at your convenience, without having to rely on the Unity team."

Why this matters: Game engines are one of the main environments humans currently use to digitize, replicate, play with, and extend reality. Now, we're building the interface from game engines back into reality by having them serve as simulators for proto-AI-brains. The better we get at this, the less it will cost to generate data in the future, and the more data will be available to train AI systems. (Another nice example of the sort of thing I'm thinking of is Epic's 'MetaHumans' creator, which I expect will ultimately be the fuel for the creation of entirely synthetic people with bodies to match.)
  Read more: Supercharge your computer vision models with synthetic datasets built by Unity (Unity).

###################################################

Everything You Want and Nothing That You Need
[2055, transcribed audio interview for a documentary relating to the 'transition years' of 2025-2050. The interviewer's questions are signified via a 'Q' but are not recorded here.]

Yeah, so one day I was scrolling my social feeds and then an old acquaintance posted that they had recently become rich off of a fringe cryptocurrency that had 1000X'd overnight, as things had a tendency to do back then, so I felt bad about it, and I remember that was the day I think the ANFO started seeming strange to me.
Q
The Anticipatory Need Fulfillment Object. A-N-F-O. It sounds crazy to say it now but back then people didn't really get what they wanted. We didn't have enough energy. Everyone was panicking about the climate. You probably studied it in school. Anyway, I was feeling bad because my friend had got rich so I went over to the A-N-F-O and I spoke into it.
Q
Yeah, I guess it was like confession booths back when religion was bigger. If you've heard of them. You'd lean into the ANFO and you tell it what's on your mind and how you're feeling and what is bothering you, then it'd try and figure out what you needed, and as long as you had credits in it, it'd make it for you.
Q
This will sound funny, but it was a bear. Like, a cuddly toy bear. I picked it out of the hopper and I held it up and I guess I started crying because it reminded me of my childhood. But I guess the ANFO had the right idea because though I got sad I stopped thinking about my friend with the money.
Q
So, that was normal to me, right? But I went back to my computer and I was watching some streams to distract myself, and then I saw on a stream - and I don't know what the chances of this are. You kids have better math skills than us phone-calculator types, but, call it one in a million - anyway, this person on the stream had a bear that looked pretty similar to what the ANFO made for me. She was talking about how she had got sad that day and that it was because of something to do with her relationship, so the ANFO had made this for her. And her bear hadn't made her cry, but it had made her smile, so it was different, right, both in how it looked and how she reacted to it.
Q
So that's the thing - I went back and I stared at the ANFO. I thought about telling it what had happened and seeing what it would make. But that felt kind of… skeezy?... to me? I don't have the words. I remember thinking 'why'd I react to the bear like that' after I saw the girl talking about her bear. And then I was wondering what all the other ANFOs were doing for everyone else.
Q
Oh, nothing dramatic. I just didn't use it again. But, in hindsight, pretty strange, right? I guess that's where some of that straightjacket theory stuff came from - seeing too many people spending too much time with their ANFOs, or whatever.

Things that inspired this story: The notion that one day we'll be able to use an app like thislifedoesnotexist.com to help us simulate different lives; applying predictive-life systems to rapid-manufacturing flexible 3D printers; interview format via David Foster Wallace's 'Brief Interviews with Hideous Men'; the relationship between AI and culture.


Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

