Import AI 223: Why AI systems break; how robots influence employment; and tools to 'detoxify' language models

What will be the last job that humans _need_ to do, versus have a particular _desire_ to do? 

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

UK Amazon competitor adds to its robots:
...Ocado acquires Kindred…
Ocado, the Amazon of the UK, has acquired robotics startup Kindred, whose technology it plans to use in its semi-automated warehouses.
  "Ocado has made meaningful progress in developing the machine learning, computer vision and engineering systems required for the robotic picking solutions that are currently in production at our Customer Fulfilment Centre (“CFC”) in Erith," said Tim Steiner, Ocado CEO, in a press release. "Given the market opportunity we want to accelerate the development of our systems, including improving their speed, accuracy, product range and economics".

Kindred was a robotics startup that tried to train its robots via reinforcement learning (Import AI 87) and tried to standardize how robot experimentation works (#113). It was founded by some of the people behind quantum computing startup D-Wave and spent a few years trying to find product-market fit (which is typically challenging for robot companies).

Why this matters: As companies like Amazon have shown, a judicious investment in automation can have surprisingly significant payoffs for the company that bets on it. But those companies are few and far between. With its slightly expanded set of robotics capabilities, it'll be interesting to check back in on Ocado in a couple of years and see if there've been surprising changes in the economics of the fulfilment side of its business. I'm just sad Kindred never got to stick around long enough to see robot testing get standardized.
  Read more: Ocado acquires Kindred and Haddington (Ocado website).
  View a presentation for Ocado investors about this (Ocado website, PDF).

###################################################

Google explains why AI systems fail to adapt to reality:
...When 2+2 = Bang…
When AI systems get deployed in the real world, bad things happen. That's the gist of a new, large research paper from Google, which outlines the issues inherent to taking a model from the rarefied, controlled world of 'research' into the messy and frequently contradictory data found in the real world.

Problems, problems everywhere: In tests across systems for vision, medical imaging, natural language processing, and health records, Google found that all these applications exhibit issues that have "downstream effects on robustness, fairness, and causal grounding".
  In one case, when analyzing a vision system, they say "changing random seeds in training can cause the pipeline to return predictors with substantially different stress test performance".
  Meanwhile, when analyzing a range of AI-infused medical applications, they conclude: "one cannot expect ML models to automatically generalize to new clinical settings or populations, because the inductive biases that would enable such generalization are underspecified".

What should researchers do? Test systems in their deployed context rather than assuming they'll work out of the box, and test more thoroughly for robustness during development of AI systems, the authors say.
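As an illustration of the seed-sensitivity finding, here's a minimal sketch (mine, not from the paper) of how you might probe it: train several identically-configured models that differ only in their random seed, then compare them on a deliberately shifted 'stress test' set. The data, model, and shift here are all hypothetical stand-ins.

```python
# Hypothetical sketch of a seed-sensitivity stress test: identically
# configured models that differ only in training seed can score alike
# on in-distribution data yet diverge under distribution shift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate deployment-time shift by perturbing the held-out features.
rng = np.random.default_rng(0)
X_stress = X_test + rng.normal(scale=0.5, size=X_test.shape)

for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed={seed}  test acc={model.score(X_test, y_test):.3f}  "
          f"stress acc={model.score(X_stress, y_test):.3f}")
```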

Why this matters: It's no exaggeration to say that a non-trivial slice of future economic activity will depend on how well AI systems generalize from training into reality; papers like this highlight problems that need to be solved to unlock broader AI deployment.
  Read more: Underspecification Presents Challenges for Credibility in Modern Machine Learning (arXiv).   

###################################################

How do robots influence employment? U.S. Census releases FRESH DATA!
...Think AI is going to take our jobs? You need to study this data...
In America, some industries are already full of robots, and in 2018 companies spent billions on acquiring robot hardware, according to new data released by the U.S. Census Bureau.

Robot exposure: In America, more than 30% of the employees in industries like transportation equipment and metal and plastic products work alongside robots, according to data from the Census's Annual Capital Expenditure Survey (ACES). Additionally, ACES shows that the motor vehicle manufacturing industry spent more than $1.2 billion in CapEx on robots in 2018, followed by food (~$500 million), non-store retailers ($400+ million), and hospitals (~$400 million).
  Meanwhile, the Annual Survey of Manufacturers shows that establishments that adopt robots tend to be larger and that "there is evidence that most manufacturing industries in the U.S. have begun using robots".

Why this matters: If we want to change our society in response to the rise of AI, we need to make the changes brought about by AI and automation legible to policymakers. One of the best ways to do that is by producing data via large-scale, country-level surveys, like these Census projects. Perhaps in a few years, this evidence will contribute to large-scale policy changes to help create a thriving world.
  Read more: 2018 Data Measures: Automation in U.S. Businesses (United States Census Bureau).

###################################################

Want to deal with abusive spam and (perhaps) control language models? You might want to 'Detoxify':
...Software makes it easy to run some basic toxicity, multilingual toxicity, and bias tests…
AI startup Unitary has released 'Detoxify', a collection of trained AI models along with supporting software for predicting toxic comments. The release covers three types of toxicity, each tied to a Jigsaw competition dataset: the Toxic Comment Classification Challenge (based on Wikipedia comments), plus two further Jigsaw challenges covering unintended bias and multilingual toxicity.
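Using it is a two-liner - here's a minimal sketch based on the project's README (install via pip install detoxify; the 'original', 'unbiased', and 'multilingual' model names map to the three challenges above):

```python
# Minimal sketch, per the Detoxify README: load a pretrained model and
# score a piece of text; predict() returns a dict of per-label scores.
from detoxify import Detoxify

results = Detoxify('original').predict('you are a lovely person')
print(results)
```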

Why this matters: Software like Detoxify can help developers characterize some of the toxic and biased traits of text, whether that text comes from an online forum or a language model. These measures are very high-level and coarse today, but in the future I expect we'll develop more specific ones and ensemble them into things that look like 'bias testing suites', or something similar.
  Read more: Detoxify (Unitary AI, GitHub).
  More in this tweet thread (Laura Hanu, Twitter).

###################################################

Tired and hungover? Xpression camera lets you deepfake yourself into a professional appearance for your Zoom meeting:
...The consumerization of generative models continues...
For a little more than half a decade, AI researchers have been using deep learning approaches to generate convincing, synthetic images. One of the frontiers of this has been consumer technology, like Snapchat filters. Now, in the era of COVID, there's even more demand for AI systems that can augment, tweak, or transform a person's digital avatar.
  The latest example of this is xpression camera, an app you can download for smartphones or Apple Macs, which makes it easy to turn yourself into a talking painting, someone of the opposite gender, or just a fancier-looking version of yourself.

From the department of weird AI communications: "Expression camera casts a spell on your computer", is a thing the company says in a video promoting the technology.

Why this matters - toys change culture: xpression camera is a toy - but toys can be extraordinarily powerful, because they tend to be things that lots of people want to play with. Once enough people play with something, culture changes in response - like how smartphones have warped the world around them, or instant Polaroid photography before that, or pop music before that. I wonder what the world will look like in twenty years, when people who have grown up entirely with fungible, editable versions of their own digital selves start to enter the workforce.
  Watch a video about the tech: xpression camera (YouTube).
  Find out more at the website: xpression camera.

###################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

What do AI practitioners think about working with the military?
CSET, at Georgetown University, has conducted a survey of US-based AI professionals on working with the DoD. Some of the key findings:

  • US AI professionals are split in attitudes to working with the DoD (38% positive, 24% negative, 39% neutral)
  • When asked about receiving DoD grants for research, attitudes were somewhat more favourable for basic research (57% positive vs. 7% negative) than applied research (40% vs 7%)
  • Among reasons for taking DoD grants and contracts, ‘working on interesting problems’ was the most commonly cited and top-ranked upside; ‘discomfort with how DoD will use the work’ was the most cited and top-ranked downside.
  • Among domains for DoD collaboration, attitudes were most negative towards battlefield projects: ~70–80% would consider taking action against their employer if it took on such a contract - most frequently by expressing concern to a superior or avoiding working on the project. Attitudes towards humanitarian projects were the most positive: ~80–90% would support their employer’s decision.

Matthew’s view: It’s great to see some empirical work on industry attitudes to defence contracting. The supposed frictions between Silicon Valley and DoD in the wake of the Project Maven saga seem to have been overplayed. Big tech players are forging close ties with the US military, to varying degrees: per analysis from Tech Inquiry, IBM, Microsoft, and Amazon lead the pack (though SpaceX deserves special mention for building weapon-delivery rockets for the Pentagon). As AI becomes an increasingly important input to military and state capabilities, and demand for talent continues to outstrip domestic and imported supply, AI practitioners will naturally gain more bargaining power with respect to DoD collaborations. Let’s hope they’ll use this power wisely.
  Read more: “Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?” (CSET).

How transformative will machine translation be?
Transforming human cooperation by removing language barriers has been a persistent theme in myths across cultures. Until recently, serious efforts to realize this goal have focussed more on the design of universal languages than on powerful translation. This paper argues that machine translation could be as transformative as the shipping container, railways, or information technology.

The possibilities: Progress in machine translation could yield large productivity gains by reducing the substantial cost to humanity of communicating across language barriers. On the other hand, removing some barriers can create new ones: multilingualism, for example, has long been a marker of elite status, and undermining it would increase demand for new differentiation signals, which could introduce new (and greater) frictions. One charming benefit could be to romantic possibilities - ‘linguistic homogamy’ is a desirable characteristic in a partner, but it constrains the range of candidates. Machine translation could radically increase the relationships open to people, just as advances in transportation have increased our freedom to choose where we live (albeit unequally).

Default trajectory: The author argues that with ‘business as usual’, we’ll fall short of realizing most of the value of these advances. For example, economic incentives will likely lead to investment in a small set of high-demand language pairs - e.g. (Korean, Japanese) and (German, French) - with very little investment in the long tail of other languages. This could create and exacerbate inequalities by concentrating the benefits among an already fortunate subset of people, and seems clearly suboptimal for humanity as a whole.

What to do: Important actors should think about how to shape progress towards the best outcomes, e.g. by using subsidies to achieve wide and fair coverage across languages, and by designing mechanisms to distribute the benefits (and harms) of the technology.
  Read more: The 2020s Political Economy of Machine Translation (arXiv).

###################################################

Instructions for operating your Artificial General Intelligence
[Earth - 2???]

Hello! In this container you'll find the activation fob, bio-interface, and instruction guide (that's what you're reading now!) for Artificial General Intelligence v2 (Consumer Edition). Please read these instructions carefully - though the system comes with significant onboard safety capabilities, it is important that users familiarize themselves deeply with the system before exploring its more advanced functions.

Getting Started with your AGI

Your AGI wants to get to know you - so help it out! Take it for a walk by pairing the fob with your phone or other portable electronic device, then go outside. Show it where you like to hang out. Tell it why you like the things you like.

Your AGI is curious - it's going to ask you a bunch of questions. Eventually, it'll be able to get answers from your other software systems and records (subject to the privacy constraints you set), but at the beginning it'll need to learn from you directly. Be honest with it - all conversations are protected, secured, and local to the device (and you).

Dos and Don'ts

Do:
- Tell your friends and family that you're now 'Augmented by AGI', as that will help them understand some of the amazing things you'll start doing.

Don't:
- Trade 'Human or Human-Augment Only' (H/HO) financial markets while using your AGI - such transactions are a crime and your AGI will self-report any usage in this area.

Do:
- Use your AGI to help you; the AGI can, especially after you spend a while together, make a lot of decisions for you. Try using it to help you make some of the most complicated decisions in your life - you might be surprised by the results.

Don't:
- Have your AGI speak on your behalf in a group setting where other people can poll it for a response; it might seem like a fun idea to do standup comedy via an AGI, but neither audiences nor club proprietors will appreciate it.

Things that inspired this story: Instruction manuals for high-tech products; thinking about the long-term future of AI; consumerization of frontier technologies; magic exists in instruction manuals.



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

