Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
A somewhat abbreviated issue, this week: Due to a combination of things (primarily related to societal complications from COVID-19), I've found I have less time to devote to this newsletter, among other things. I think many people in COVID-hit countries who are lucky enough to still be employed are having this experience - even though notionally you should have more time due to no commute and the elimination of various other factors, it feels like you have less time than before. A curious and somewhat unpleasant trait of this current crisis! I hope to resume more regularly scheduled service at typical lengths soon. Thank you as always for reading and writing in, and I hope you and your loved ones are safe during this chaotic time.
####################################################
AI & Creativity: Gwern on writing with GPT3:
What is it like to try to write fiction with GPT-3, OpenAI's large-scale language model? Gwern has written a dense, compelling essay about the weirdness of writing with this generative model. Read on to get their take on 'prompts as programming', the strengths and weaknesses of the GPT-3 model, and to see an immense set of examples of GPT-3 in action.
The strange and confusing fun of it all: Gwern has put together a massive collection of generations using the GPT-3 model - I particularly liked some of the poetry experiments, such as Plath, Whitman, and Cummings. I also think this (machine-generated) extract from a completion of Dr. Seuss's 'Oh, The Places You'll Go', is quite charming:
"You have brains in your head.
You have feet in your shoes.
You can steer yourself any direction you choose.
You’re on your way!"
Read more: GPT-3 Creative Fiction (Gwern).
####################################################
Hey dude, where's my AGI?
...Tired of AI hype? Read this…
In an essay in Nature, an author writes that "although development of artificial intelligence for specific purposes has been impressive, we have not come much closer to developing artificial general intelligence". What follows is a narrative about the development of AI and "Big Data" technologies in the past half century or so, along with discussion of where computers do well and where they do poorly. Much of the criticism of contemporary AI systems can be paraphrased as 'curve fitting isn't smart' - these technologies, though powerful, are not able to generate capabilities that we should describe as intelligent. "The real problem is that computers are not in the world, because they are not embodied," the author writes.
Why this matters: I think it's helpful to have more of a sober discourse about whether we're making progress towards AGI or whether we're just developing increasingly capable narrow AI systems. However, one missing piece in this article is a discussion of some of the more recent innovations in generative models - e.g., the author has a 'conversation' with a chatbot program called Mitsuku and uses this to bolster their arguments about the generally poor capabilities of AI systems. How might their conversation have gone if they'd experimented with GPT2 or GPT3 (with context-stuffing via the context window), I wonder?
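For readers unfamiliar with 'context-stuffing': the idea is simply to pack demonstrations into the model's prompt until you run out of context window, then append your query. A minimal sketch (the examples, token budget, and whitespace tokenization here are my own illustrative assumptions, not any particular API):

```python
# Sketch of context-stuffing: pack few-shot demonstrations into a prompt
# until a (hypothetical) token budget is exhausted, then append the query.

def build_prompt(examples, query, max_tokens=2048):
    """Concatenate as many demonstrations as fit, then the query."""
    def n_tokens(text):
        # Crude whitespace proxy for a real tokenizer.
        return len(text.split())

    budget = max_tokens - n_tokens(query)
    chosen = []
    for ex in examples:
        cost = n_tokens(ex)
        if cost > budget:
            break  # no room for this demonstration
        chosen.append(ex)
        budget -= cost
    return "\n\n".join(chosen + [query])

examples = [
    "Q: Who wrote 'Leaves of Grass'?\nA: Walt Whitman.",
    "Q: Who wrote 'Ariel'?\nA: Sylvia Plath.",
]
prompt = build_prompt(examples, "Q: Who wrote 'Oh, The Places You'll Go!'?\nA:")
```

The model then continues the text after the final "A:", conditioned on the stuffed-in examples.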
Read more: Why general artificial intelligence will not be realized (Nature, Humanities and Social Sciences Communications).
####################################################
Making language models more polite, via 1.39 million Enron(?!) emails:
...READ THIS - NOW!...
A few years ago, people developed style transfer techniques for neural nets that let you take a picture, then morph it to have a different style, like becoming a cartoon, or being re-rendered in an impressionist painting style. In recent years, we've started to be able to do similar things for text, via techniques like fine-tuning. Now, researchers with Carnegie Mellon University have developed a dataset to help them build systems that can turn text from impolite to polite.
The Enron dataset: For the research, they collected a dataset of 1.39 million instances from the 'Enron' email corpus, automatically labelled for politeness with scores from 0 to 1. They've made the dataset available on GitHub, so it could be an interesting fine-tuning resource.
Interesting ethics: The paper also hints at some of the ethical challenges involved in doing this sort of research, for instance, by adopting a "data driven definition of politeness" to automatically clean the email corpus.
Politeness style transfer: They use this dataset to develop a system that does what they call 'politeness transfer', for example, by converting the phrase "send me the data" to "could you please send me the data?". They also explore transformations with more extreme (autonomous!) editorial choices, like converting the sentiment of a sentence, for instance changing "their chips are ok, but their salsa is really bland" to "their chips are great, but their salsa is really delicious".
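To give a sense of how a score-labelled corpus like this gets used: because each instance carries a politeness score in [0, 1], you can threshold it into polite and impolite pools, e.g. as training material for a transfer model. A minimal sketch (the record format and the 0.8/0.2 thresholds are my assumptions, not the paper's):

```python
# Sketch: split a politeness-scored corpus (scores in [0, 1]) into
# polite and impolite pools, e.g. for fine-tuning a style-transfer model.
# The record format and thresholds are illustrative assumptions.

def split_by_politeness(records, hi=0.8, lo=0.2):
    """records: iterable of (text, score) pairs; returns (polite, impolite)."""
    polite = [text for text, score in records if score >= hi]
    impolite = [text for text, score in records if score <= lo]
    return polite, impolite

corpus = [
    ("send me the data", 0.1),
    ("could you please send me the data?", 0.9),
    ("let me know what you think, thanks!", 0.85),
]
polite, impolite = split_by_politeness(corpus)
```

Instances in the middle of the score range are simply dropped, which is one common way to get cleanly separated style pools.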
Why this matters: Reality Editing: The outputs of generative models are like a funhouse mirror version of reality - they reflect back the things in their training corpus, magnifying and minimizing different aspects of their data distribution. One of the core challenges of AI research for the next few years will be figuring out how to more tightly constrain the outputs of these models so they have more of a desired trait, like politeness, or less of a negative trait, like tendencies to express harmful biases. Datasets and experiments like this give us some of the tools (the data) and ideas that can help us figure out how to better align model outputs with societal requirements.
Read more: Politeness Transfer: A Tag and Generate Approach (arXiv).
Get the dataset and code here (GitHub).
####################################################
Coming soon: Elevator surveillance cameras
...100,000 test deployments and counting…
A team from the Shanghai Research Institute has developed a system to automatically identify "abnormal activity", such as drug dealing, prostitution, over-crowded residences, and so on.
The system is currently in a research phase and deployed on around 100,000 elevators, the authors write. They decided to try out elevator-based surveillance because "we find that elevator could be the most feasible and suitable environment to take operation on because it's legal and reasonable to deploy public surveillance and people take elevator widely and frequently enough". They use the system to analyze large amounts of data (here, around one million records per floor from 100,000 distinct elevators); the resulting system spits out 643 outliers. In a subsequent analysis, they identify a couple of anomalies worthy of investigation by a local property manager, such as indications of a catering service being run from an apartment, and of an over-crowded residence.
What they did: They concoct a system that uses YOLACT to do instance segmentation on people in the video, FaceNet to capture and embed the faces of individuals in the elevator (aiding re-identification in different images), and their own architecture called GraftNet (based on the Inception v3 classification architecture) to learn to assign multiple labels for elevator passengers (labels include: pregnant, oldage, courier, adult holding baby, etc).
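To give a sense of the re-identification step in such a pipeline: FaceNet-style models map each face crop to an embedding vector, and two crops are judged to be the same person if their embeddings are sufficiently close. A minimal numpy sketch of that comparison (the vectors, dimensionality, and threshold here are made up for illustration - this is not the paper's code):

```python
import numpy as np

# Sketch of embedding-based re-identification: face crops whose
# (FaceNet-style) embedding vectors have high cosine similarity are
# treated as the same person across different elevator frames.
# The vectors and the 0.8 threshold are illustrative assumptions.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    return cosine_similarity(emb_a, emb_b) >= threshold

rng = np.random.default_rng(0)
person_a = rng.normal(size=128)                               # face seen in frame 1
person_a_again = person_a + rng.normal(scale=0.05, size=128)  # same face, later frame
person_b = rng.normal(size=128)                               # a different face
```

Once faces can be matched across frames and elevators this way, per-person activity records can be accumulated and fed to the anomaly-detection stage.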
Why this matters: AI and social control: The disturbing thing about this paper is how simple it is - these are reasonably well understood techniques and systems, and this is simply an application of them. I think it's hard for us to comprehend what kinds of surveillance capabilities AI systems yield till we see distillations like this - here we've got a system that automatically identifies anomalous behaviors, collecting data from a hundred thousand distinct places, eventually in real-time. Though such systems will have demonstrable benefits for public safety, they'll also make it increasingly easy for people to build AI tools to passively identify anyone that is deviating from some kind of (learned) norm.
And the important thing is none of this is like Enemy of the State - there isn't some shadowy high-tech force developing this stuff, it's just some engineers at a university (who likely work/moonlight at a company) using some open source components and a bit of inventive improvisation, following the incentive structures created for them by governments and the private sector. How will AI development unfold, given these dynamics?
Read more: Abnormal activity capture from passenger flow of elevator based on unsupervised learning and fine-grained multi-label recognition (arXiv).
####################################################
Tech Tales:
The Glass Castles
We'd watch the castles for hours, taking a half hour between shifts in the factory to look at their silhouettes on the horizon. We'd place bets on which one of them would change first. Sometimes one of us would be out there when it happened - you'd see the movement before you heard the cracking sound of giant panes of glass, shattering into pieces. Then if you had your phone you could zoom in and sometimes see the fuzzy air where new parts of the castle scaffold were being put together by drones. The new glass got fitted at night; they'd cover the castle with a sheet before they did it, presumably to stop wind or debris from messing with the glass as they put it in. Then the next morning we'd look at the horizon on our way into the factory and we'd sometimes spot that one had moved. Money would change hands. I made a lot of money one summer, correctly betting that one of the largest castles would spend the next two weeks splitting in two, losing panes and having new shapes stitched into the air by drones, then fitted with a new body, and then repeat, until one had become two.
I think most people are good at guessing, once they spend a long enough time looking at something. There was no cell reception at the factory, so we'd just look at the castles and shoot the shit, then go back to work. A few years later I found out the castles were part of some experiment - Emergent Architecture and Online Game Lotteries was the title of some research paper published about them; one day I sat and looked at the paper and there was a picture called Figure 2. I stared at the picture because I could see the shape of the castle that had split into two, one summer. The caption of the figure said: Due to a series of concurrent lottery outcomes focused on a single high-yield trading block, we see the structure partition itself into two to more efficiently provide matchmaking services.
Things that inspired this story: The notion of artefacts as a universal constant - everyone has encountered something on the periphery of their existence, and much of it is made by people somewhere along the line; game theory; large-scale betting competitions.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me: @jackclarksf