Import AI 247: China makes its own GPT3; the AI hackers have arrived; four fallacies in AI research.

How might different alien intelligences conceive of AI? If - or perhaps when - we meet aliens, will they have also developed things that seem like neural networks? How much diversity is possible in the space of software design for different intelligent beings?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Finally, China trains its own GPT3:
...Now the world has two (public) generative models, reflecting two different cultures…
A team of Chinese researchers has created 'PanGu', a large-scale pretrained language model with roughly 200 billion parameters, making it comparable to GPT-3 (175 billion parameters) in terms of parameter count. PanGu is trained on 1.1TB of Chinese text (versus 570GB of text for GPT-3), though in the paper they train the 200B model for far less time (on far fewer tokens) than OpenAI did for GPT-3. PanGu is the second GPT-3-esque model to come out of China, following the Chinese Pre-trained Language Model (CPM, Import AI 226), which was trained on 100GB of text and had only a few billion parameters, compared to PanGu's couple of hundred billion.

Is it good? Much like GPT-3, PanGu does extraordinarily well on a range of challenging, Chinese-language benchmarks for tasks as varied as text classification, keyword recognition, common sense reasoning, and more.
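
How you'd poke at it: PanGu itself is trained and released in Huawei's MindSpore framework, but the basic interaction pattern for any GPT-style generative model is the same: feed in a prompt, sample a continuation. Here's a minimal sketch using the Hugging Face 'transformers' library with a small open Chinese GPT-2 checkpoint as a stand-in - the checkpoint name and sampling settings are my own assumptions, not the PanGu weights or recipe.

    # Minimal autoregressive generation sketch. The checkpoint below is a small,
    # openly available Chinese GPT-2 stand-in, NOT the 200B-parameter PanGu model.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "uer/gpt2-chinese-cluecorpussmall"  # assumed stand-in checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "上海是"  # "Shanghai is ..."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))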

Things that make you go hmmmm - chiplomacy edition: In this issue's example of chiplomacy, it's notable that the researchers trained the model on processors from Huawei, specifically the company's "Ascend" processors, using the 'MindSpore' framework (also developed by Huawei).
  Read more: PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation (arXiv).

###################################################

The AI hackers are here. What next?
...Security expert Bruce Schneier weighs in…
Bruce Schneier has a lengthy publication out at the Belfer Center about 'the coming AI hackers'. It serves as a high-level introduction to the various ways AI can be misused, abused, and wielded for negative purposes. What might be most notable about this publication is its discussion of raw power - who has it, who doesn't, and how this intersects with hacking: "Hacking largely reinforces existing power structures, and AIs will further reinforce that dynamic," he writes.
  Read more: The Coming AI Hackers (Belfer Center website).

###################################################

What does it take to build an anti-COVID social distancing detector?
...Indian research paper shows us how easy this has become…
Here's a straightforward paper from Indian researchers about how to use various bits of AI software to build something that can surveil people, work out whether they're too close to each other, and provide warnings - all in the service of encouraging social distancing. India, for those not tuning into global COVID news, is currently facing a deepening crisis, so this may be of utility to some readers.

What it takes to build a straightforward AI system: Building a system like this basically requires an input video feed, an ability to parse its contents and isolate people, and a way to work out whether those people are too close to each other. What does it take to do this? For person detection, they use YOLOv3, a tried-and-tested object detector, with a darknet-53 network pre-trained on the MS-COCO dataset as a backbone. They then use an automated camera calibration technique (though note that you can do this manually with OpenCV) to estimate real-world spaces in the video feed, which they then use to perform distance estimation between people. "To achieve ease of deployment and maintenance, the different components of our application are decoupled into independent modules which communicate among each other via message queues," they write.
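
To make the detect-then-measure step concrete, here's a minimal sketch of the core loop. To keep it short it swaps the paper's YOLOv3 detector for OpenCV's built-in HOG person detector and uses a naive pixel-space distance threshold instead of the paper's calibrated real-world distances - the file name and threshold are illustrative assumptions.

    # Minimal social-distancing check: detect people in each frame, then flag
    # any pair that is closer than a threshold. Uses OpenCV's HOG person
    # detector rather than the paper's YOLOv3, and pixel distances rather than
    # calibrated real-world distances.
    import itertools
    import cv2

    MIN_PIXEL_DISTANCE = 150  # assumed threshold; a real system calibrates to ~2m

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture("input_feed.mp4")  # hypothetical input video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        # Use the bottom-centre of each bounding box as a rough ground position.
        points = [(x + w // 2, y + h) for (x, y, w, h) in boxes]
        for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
            dist = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            if dist < MIN_PIXEL_DISTANCE:
                print(f"Warning: people {i} and {j} are too close ({dist:.0f}px)")
    cap.release()
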
  In a similar vein, back in January, some American researchers published a how-to guide (Import AI 231) for using AI to detect whether people are wearing anti-COVID masks on construction sites, and Indian company Skylark Labs said in May 2020 that it was using drones to observe crowds for social distancing violations (Import AI 196).

A word about ethics: Since this is a surveillance application, it has some ethical issues. The authors note they've built the system so it doesn't need to store data, which may help address some privacy concerns, and it also automatically blurs the faces of the people it does see, providing a measure of privacy during deployment.
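
Face blurring of this kind is easy to bolt on with standard tools. Here's a minimal sketch using OpenCV's bundled Haar cascade face detector - the detector choice and blur strength are my own assumptions, not details from the paper.

    # Blur any detected faces in a frame before it is displayed or logged.
    import cv2

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def blur_faces(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                frame[y:y + h, x:x + w], (51, 51), 30)
        return frame
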
  Read more: Computer Vision-based Social Distancing Surveillance Solution with Optional Automated Camera Calibration for Large Scale Deployment (arXiv).

###################################################

Want some earnings call data? Here's 40 hours of it:
...Training machines to listen to earnings calls…
Researchers with audio transcription company Rev.com and Johns Hopkins University have released Earnings-21, a dataset of 39 hours and 15 minutes of transcribed speech from 44 earnings calls. The individual recordings range from 17 minutes to an hour and 34 minutes. This data will help researchers develop their own automatic speech recognition (ASR) systems - though to put the size of the dataset in perspective, Kensho recently released a dataset of 5,000 hours of earnings call speech (Import AI 244). On the other hand, you need to register to download the Kensho data, whereas you can pull this ~40 hour lump directly from GitHub, which might be preferable.
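
For a rough sense of how a dataset like this gets used, here's a minimal sketch that transcribes one call recording with an off-the-shelf speech recognizer and scores it against a reference transcript via word error rate. The file names, the model checkpoint, and the 'jiwer' dependency are illustrative assumptions, not part of the Earnings-21 release.

    # Transcribe one recording and compute word error rate against a reference.
    # Checkpoint and file paths are illustrative assumptions.
    from transformers import pipeline
    import jiwer

    asr = pipeline("automatic-speech-recognition",
                   model="facebook/wav2vec2-base-960h")

    hypothesis = asr("earnings_call_01.wav", chunk_length_s=30)["text"]
    reference = open("earnings_call_01_reference.txt").read()

    print("WER:", jiwer.wer(reference.lower(), hypothesis.lower()))
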
  Read more: Earnings-21: A Practical Benchmark for ASR in the Wild (arXiv).
  Get the data here (rev.com, GitHub).

###################################################

Want to test out your AI lawyer? You might need CaseHOLD:
...Existing legal datasets might be too small and simple to measure progress…
Stanford University researchers have built a new multiple choice legal dataset, so they can better understand how well existing NLP systems can deal with legal questions.
  One of the motivations to build the dataset has come from a peculiar aspect of NLP performance in the legal domain - specifically, techniques we'd expect to work don't work that well: "One of the emerging puzzles for law has been that while general pretraining (on the Google Books and Wikipedia corpus) boosts performance on a range of legal tasks, there do not appear to be any meaningful gains from domain-specific pretraining (domain pretraining) using a corpus of law," they write.

What's in the data? CaseHOLD contains 53,000 multiple choice questions; each pairs a prompt from a judicial decision with multiple potential holdings, only one of which is the correct holding that could be cited. You can use CaseHOLD to test how well a model grasps this aspect of the law by seeing which of the multiple choice answers it selects as most likely.
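
A common way to evaluate this kind of setup is to score each (prompt, candidate holding) pair with a model and pick the highest-scoring candidate. Here's a minimal sketch of that scoring loop using a generic BERT-style multiple-choice head - the checkpoint and example text are illustrative assumptions, not the paper's models or data, and the untrained choice head would need fine-tuning on CaseHOLD before its scores meant anything.

    # Score candidate holdings against one citing prompt and pick the best.
    import torch
    from transformers import AutoTokenizer, AutoModelForMultipleChoice

    model_name = "bert-base-uncased"  # generic stand-in, not a legal-domain model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMultipleChoice.from_pretrained(model_name)

    prompt = "The court held that ... (citing text with the holding removed)"
    candidates = ["holding A", "holding B", "holding C", "holding D", "holding E"]

    enc = tokenizer([prompt] * len(candidates), candidates,
                    return_tensors="pt", padding=True, truncation=True)
    # Multiple-choice models expect tensors of shape (batch, num_choices, seq_len).
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**enc).logits  # shape (1, num_choices)
    print("Predicted holding:", candidates[logits.argmax(-1).item()])
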
  Read more: When Does Pretraining Help? Assessing Self-Supervised Learning for Law and The CaseHOLD Dataset of 53,000+ Legal Holdings (Stanford RegLab, blog).
  Read more: When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset (arXiv).
  Get the data here: CaseHOLD (GitHub).

###################################################

AI research has four fallacies - we should be aware of them:
...Making explicit some of the implicit assumptions or beliefs among researchers...
Imagine that the field of AI research is a house party - right now, the punch bowls are full of alcohol, people are excitedly showing each other what tricks they can do, and there's a general sense of joie de vivre and optimism (though these feelings aren't shared by the people outside the party who experience its effects, nor by the authorities who are dispatching some policy-police cars to go and check the party doesn't get out of hand). Put simply: the house party is a real rager!
    But what if the punchbowl were to run out - and what would make it run out? That's the question explored in a research paper from Melanie Mitchell, a researcher at the Santa Fe Institute, who argues our current optimism could lead us to delude ourselves about the future trajectory of AI development, and that this delusion stems from four fallacies (as Mitchell terms them) that researchers fall into when thinking about AI.

Four fallacies: Mitchell identifies four ways in which contemporary researchers could be deluding themselves about AI progress. The four fallacies are:
- Believing narrow intelligence is on a continuum with general intelligence: Researchers assume that progress in one part of the field of AI must necessarily lead to future, general progress. This isn't always the case. 
- Easy things are easy and hard things are hard: Some parts of AI are counterintuitively difficult and we might not be using the right language to discuss these challenges. "AI is harder than we think, because we are largely unconscious of the complexity of our own thought processes," Mitchell writes.
- The lure of wishful mnemonics: Our own language that we use to describe AI might limit or circumscribe our thinking - when we say a system has a 'goal' we imbue that system with implicit agency that it may lack; similarly, saying a system 'understands' something connotes a more sophisticated mental process than what is probably occurring. "Such shorthand can be misleading to the public," Mitchell says.
- Intelligence is all in the brain: Since cognition is embodied, might current AI systems have some fundamental flaws? This feels, from my perspective, like the weakest point Mitchell makes, as one can achieve embodiment by loading an agent into a reinforcement learning environment and providing it with actuators and a self-discoverable 'surface area', all of which can be done in digital form. On the other hand, it's certainly true that being embodied yields the manifestation of different types of intelligence.

Some pushback: Here's some discussion of the paper by Richard Ngo, which I found helpful for capturing some potential criticisms.
  Read more: Why AI is Harder Than We Think (arXiv).

###################################################

Tech Tales

Just Talk To Me In The Real

[2035: Someone sits in a bar and tells a story about an old partner. The bar is an old fashioned 'talkeasy' where people spend their time in the real and don't use augments].

"Turn off the predictions for a second and talk to me in the real," she said. We hadn't even been on our second date! I'd never met someone who broke PP (Prediction Protocol) so quickly. But she was crazy like that.

Maybe I'm crazy too, because I did it. We talked in the real, both naked. No helpful tips for things to say to each other to move the conversation forward. No augments. It didn't even feel awkward because whenever I said something stupid or off-color she'd laugh and say "that's why we're doing this, I want to know what you're really like!".

We got together pretty much immediately. When we slept together she made me turn off the auto-filters. "Look at me in the real", she said. I did. It was weird to see someone with blemishes. Like looking at myself in the mirror before I turn the augments on. Or how people looked in old pornography. I didn't like it, but I liked her, and that was a reason to do it.

The funny thing is that I kept the habit even after she died. Oh, sure, on the day I got the news I turned all my augments on, including the emotional regulator. But I turned it off pretty quickly - I forget, but it was a couple of days or so. Not the two weeks that the PP mandates. So I cried a bunch and felt pretty sad, but I was feeling something, and just the act of feeling felt good.

I even kept my stuff off for the funeral. I did that speech in the real and people thought I was crazy because of how much pain it caused me. And as I was giving the speech I wanted to get everyone else to turn all their augments off and join me naked in the real, but I didn't know how to ask. I just hoped that people might choose to let themselves feel something different to what is mandated. I just wanted people to remember why the real was so bitter and pure it caused us to build things to escape it.

Things that inspired this story: Prediction engines; how technology tends to get introduced as a layer to mediate the connections between people.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

