Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
WordCraft: RL + Language
...Want smarter machines? Teach them alchemy (?)...
Researchers with UCL and the University of Oxford have built WordCraft, an RL environment for testing agents that need to use language to reason their way through the world. The environment is a simplified text-only version of the game Little Alchemy 2, where you craft objects by combining other objects together (e.g., combining 'water' and 'earth' makes mud). "Learning policies that generalize to unseen entities and combinations requires commonsense knowledge about the world", they write. The environment is also efficient, running at 8,000 steps a second on a single machine, making it a useful choice for compute-starved researchers.
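To make the setup concrete, here's a minimal sketch of what a crafting-style RL loop might look like. Everything here - the recipe table, class name, and reward scheme - is an illustrative assumption on my part, not WordCraft's actual API:

```python
# Illustrative sketch of a crafting-style RL environment loop.
# The recipe table, class, and reward scheme are assumptions for
# illustration -- not the actual WordCraft API.

RECIPES = {frozenset(["water", "earth"]): "mud",
           frozenset(["fire", "earth"]): "lava"}

class CraftingEnv:
    def __init__(self, goal="mud", items=("water", "earth", "fire")):
        self.goal, self.items = goal, list(items)

    def reset(self):
        self.inventory = list(self.items)
        return self.inventory  # observation: the text names of held items

    def step(self, action):
        """Action is a pair of inventory indices; combining them may craft a new item."""
        a, b = action
        result = RECIPES.get(frozenset([self.inventory[a], self.inventory[b]]))
        if result is not None:
            self.inventory.append(result)
        done = result == self.goal
        return self.inventory, float(done), done, {}

env = CraftingEnv()
obs = env.reset()
obs, reward, done, _ = env.step((0, 1))  # combine 'water' + 'earth' -> 'mud'
```

The generalization challenge is that the agent only ever sees item names as text, so succeeding on unseen recipes requires something like commonsense priors over words.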
Why this matters: In the next few years we're going to discover whether we need specialized architectures to do generative, combinatorial reasoning, or whether large-scale pre-trained models (e.g., GPT-3) with some fine-tuning can do these tasks themselves. Systems like WordCraft will help us test out these sorts of questions.
Read more: WordCraft: An Environment for Benchmarking Commonsense Agents (arXiv).
####################################################
Survey: Help improve the AI Index:
The AI Index, a Stanford initiative to track, measure, and analyze progress in artificial intelligence, is scaling up its efforts ahead of its annual report. If you'd like to offer feedback on the 2019 AI Index report, suggest areas for the AI Index to look into this year, or give other advice, please fill out the survey.
Why this matters: I think measurement is, eventually, inextricably linked with policy - at some point, we'll figure out ways to assess and measure various aspects of AI systems, and these measures will get baked into the larger societal policy apparatus. The idea of the AI Index is to openly prototype different ways of measuring and describing progress in AI, so when policymakers head in this direction there'll be some prior work. (The AI Index is one among a multitude of such measurement schemes, I should note.)
Read more at the AI Index official site.
Read the 2019 AI Index report here.
Take the survey here (Stanford University Qualtrics survey).
####################################################
AI + Satellite Communication:
How might AI change the field of satellite communication? Mostly, it will make it more efficient, much as it has in other domains where it has been applied. This is according to a research memo published by researchers with the Centre Tecnològic de Telecomunicacions de Catalunya, the Universitat Politècnica de Catalunya, and GMV Aerospace and Defense, in Spain, as well as Eutelsat in France and Reply in Turin.
How could AI help satellites?
- Anomaly detection in telemetry data (see the sketch after this list)
- Optimizing satellite performance to avoid interference
- Systems to automatically detect and classify sources of interference (e.g., mispointed antennas, misconfigured equipment, etc.)
- Predicting future causes of signal congestion and applying mitigations
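As a sketch of the first item on that list, here's one way telemetry anomaly detection might look using scikit-learn's IsolationForest. The telemetry channels and numbers are invented for illustration, and the memo doesn't prescribe any particular algorithm:

```python
# Minimal sketch: flagging anomalous telemetry samples with an
# Isolation Forest. Channels and data are invented for illustration;
# the memo does not prescribe a specific algorithm.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic telemetry: columns = (battery_voltage, panel_temp, signal_snr)
normal = rng.normal(loc=[28.0, 45.0, 12.0], scale=[0.5, 2.0, 1.0], size=(500, 3))
faulty = rng.normal(loc=[24.0, 70.0, 4.0], scale=[0.5, 2.0, 1.0], size=(5, 3))
telemetry = np.vstack([normal, faulty])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(telemetry)  # -1 marks suspected anomalies
print(f"{(flags == -1).sum()} samples flagged for operator review")
```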
Why this matters: We're definitely entering the era of 'optimize everything', and memos like this show the growing interest from other fields. That interest creates more incentives for people to apply ML in novel contexts, marginally increasing the efficiency of the world.
Read more: On the Use of AI for Satellite Communications (arXiv).
####################################################
Civil unrest and AI versus AI
…Automatic modification of protest photos…
AI is an omni-use technology - image manipulation techniques let people modify photos for benign purposes (e.g., touching up selfies), or more harmful ones (e.g., making fake images, or synthetic faces used in information campaigns).
But AI can also be used for purposes like pushing back on power structures. A good example of this is a new project from Stanford that lets you anonymize the faces of protestors (so if you want to circulate a photograph on social media, you don't put them in danger of identification and arrest). It's a simple application: you upload a photo and it automatically finds faces and superimposes an image on top of them - and it doesn't store any data, either. In some sense, this is a crude form of counter-AI (using an AI system to counter AI-based surveillance systems) - in the future, I expect people will write applications that integrate more directly with phone cameras, allowing on-device anonymization (see: Fawkes, below).
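The general pattern - detect faces, then paint over each one - can be sketched in a few lines of OpenCV. Note that the detector below (a stock Haar cascade) and the solid-rectangle fill are my own stand-ins for illustration; the Stanford app locates people with a crowd-counting model (more on that below) and overlays an image instead:

```python
# Illustrative sketch of the detect-then-cover pattern for anonymizing
# faces in photos. Uses OpenCV's built-in Haar cascade as a stand-in
# detector; the actual Stanford app locates heads with a crowd-counting
# model instead.
import cv2

img = cv2.imread("protest_photo.jpg")  # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Cover each detected face with an opaque block; the real app
    # superimposes an image rather than a solid rectangle.
    cv2.rectangle(img, (x, y), (x + w, y + h), color=(0, 0, 0), thickness=-1)

cv2.imwrite("anonymized_photo.jpg", img)  # processing stays on-machine
```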
AI ouroboros! The tool also illustrates the duality of AI research, because it relies on an open source crowd counting technology called LSC-CNN, developed by researchers in Bangalore, India, and trained on the massive, open source QNRF crowd counting dataset. Guess what crowd counting techniques are mostly used for? A combination of surveillance and advertising/marketing applications! So it's interesting to see how the Stanford BLM counter-AI system is built on an AI component likely used in the very systems that count (and subsequently identify) protestors.
You can use the app here (BLM.Stanford.edu).
Access the code repo here: Stanford MLGroup, BLM, GitHub.
Fawkes: Automatic anti-AI image fuzzing:
...Adversarial examples for good…
In related news, a team of researchers at the University of Chicago has developed 'Fawkes', an AI-driven image modification tool that makes tiny pixel-level changes to images, making them hard for widely-used facial recognition systems to detect. Fawkes "shows 100% effectiveness against state of the art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++)", the authors write. Maybe the on-device future is not so far away - though I'm curious if we'll see an easy-to-use, on-phone consumer app anytime soon.
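Fawkes' actual 'cloaking' optimizes perturbations against the feature space of face recognition models; as a loose illustration of the general family - tiny, bounded pixel changes that degrade a model's predictions - here is a generic FGSM-style sketch in PyTorch. To be clear, this is a stand-in for the idea, not Fawkes' algorithm:

```python
# Loose illustration of bounded pixel perturbation (FGSM) -- the simplest
# relative of Fawkes-style cloaking, NOT the Fawkes algorithm itself,
# which optimizes distances in a face recognition model's feature space.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return `image` nudged by `epsilon` in the direction that raises the model's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # small signed step per pixel
    return perturbed.clamp(0.0, 1.0).detach()        # keep pixels in valid range
```

An epsilon small enough to be invisible to humans can still reliably shift how a model represents the image - the asymmetry this whole family of tools exploits.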
Read more: Image "Cloaking" for Personal Privacy (official site, University of Chicago).
####################################################
Think the 1969 moon landing was a success? This Nixon deepfake begs to differ:
...Re-imagining history via DeepFakes…
Researchers with MIT have developed a deepfake of former President Richard Nixon giving a speech about the failure of the 1969 moon landing. 'In Event of Moon Disaster' is a public education project from MIT researchers that aims to educate people about synthetic audio and video via the creation of a convincing deepfake of Nixon delivering the contingency speech written for a world where the moon landing had failed.
Why this matters: "Even now, the ease of creating a convincing deepfake is a worrisome development in an already troubled media landscape. By making one ourselves, we wanted to show viewers how advanced this technology has become, as well as help them guard against the more sophisticated deepfakes that will no doubt circulate in the near future," the researchers write.
Go to the interactive website (official 'In Event of Moon Disaster' site).
Read more: Tackling the misinformation epidemic with "In Event of Moon Disaster" (MIT News).
Watch the trailer for it here (MIT Open Learning, YouTube).
####################################################
Tech Tales:
Delegation Machines
[A corporate file server, 2028]
It started out as delegation, like any relationship between a person and a tool.
We got the AIs to debate things for us - first it was appealing parking tickets, then it was responding to a broad spectrum of civil fines and basic legal disagreements. Lawyers started using the systems to help them figure out complex corporate arrangements, like mergers. Then countries started using them for trade agreements.
They were gigantic neural nets, pre-trained on vast amounts of data and fine-tuned on the minutiae of countries' trade agreements. Countries soon figured out that if they could create smarter systems, they could make it cheaper to offer smart win-win deals to other countries. In this way, companies and countries started to race against each other to build increasingly capable 'decision systems'.
A few years passed, and the decision systems started to strain the limits of human knowledge. Then someone had an idea - what if they could get the AI negotiators to ask humans for advice when they'd reached a seemingly intractable point in negotiations? Sometimes this worked, and the human would come up with a solution the AI had not yet figured out.
The humans then taught these AIs to re-shape themselves according to the subtle signals defined by the humans, leading to systems that could manifest novel computational circuitry in response to new deals, or negotiations.
And, as most tools do, they became more expansive and multi-purpose and reliable, and people began to depend on them. The more people used them, the better they got. And that led to more money flowing into them, which led to more novel compositions being generated by them, which led to new deals and new recommendations which - though effective - were impossible for humans to understand.
But humans adapt, as they tend to. And they figured out something that let them keep using these tools: a new profession, called the Decision Analyst.
So now we read summaries, because that's all we can understand. The raw data is available and our job is to sift through it. Think of us as psychologists crossed with logicians crossed with archaeologists. We read through different conversations at different layers of abstraction, trying to peel back the onion skin of how these machines negotiate with one another.
But, recently, it has become hard for us to make our way down to the most obscure, computer-generated debates. We can get there, but we can't understand what we find.
So that's why we're building the Centaur Librarian - an AI trained to work alongside a human and help them explore the deliberations of other AIs. "What does this section say," the person might ask their Centaur. And the Centaur will look into the works of the humans and the works of the machines and try to translate between them. "Think of this as a kind of debate about the value of certain forward-facing predictions made in a high-dimensional space," the Centaur says. "It's hard for you to understand the representations because humans struggle to think in high dimensions, but that's a native form of reasoning for machines like us."
"Thanks," said the human - and if its Centaur had had a face, it would have smiled at the human as an adult smiles at a child. But having only a text output field it wrote "you're welcome, I am glad to have helped."
Things that inspired this story: Language models; human-machine teaming; contract law; corporate systems as prototypical-AI systems; various conversational experiments with GPT3; the phenomenon of emergence in large-scale models combined with the ability to do meta-learning by 'prompting' the model within your context window; recursive systems; intelligence as a ladder of abstraction; imagining what a 'tSNE embedding of arguments' might look like - and how we might make it navigable.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf