Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
Fighting COVID with a janky mask detector:
...It's getting really, really easy to homebrew surveillance tech...
Researchers with Texas A&M University, the University of Wisconsin-Milwaukee, and the State University of New York at Binghamton have built a basic AI model that can detect whether construction site workers are wearing COVID masks or not. The model itself is super basic - they finetune an object detection model on a mask dataset which they build out of:
- A ~850-image 'Mask' dataset from a site called MakeML.
- A 1,000-image dataset they gather themselves.
The authors train a Faster R-CNN Inception ResNet V2 model to test for mask compliance, as well as whether workers are respecting social distancing guidelines, then they test it out on four videos of road maintenance projects in Houston, TX. "The output of the four cases indicated an average of more than 90% accuracy in detecting different types of mask wearing in construction workers", they note.
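For a sense of how little custom code a system like this requires, here's a minimal sketch of fine-tuning a pre-trained detector on a small mask dataset. The paper uses Faster R-CNN Inception ResNet V2 via the TensorFlow Object Detection API; the sketch below swaps in torchvision's Faster R-CNN ResNet-50 FPN as an illustrative stand-in, and the class labels are assumptions rather than the paper's exact categories.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pre-trained on COCO (a stand-in for the paper's
# Faster R-CNN Inception ResNet V2, which used the TF Object Detection API).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Swap the classification head for the mask-wearing classes.
# These labels are hypothetical, chosen only to illustrate the idea.
classes = ["background", "mask_correct", "mask_incorrect", "no_mask"]
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(classes))

# Fine-tune on the combined ~1,850-image mask dataset (data loading and the
# training loop are omitted; this is just the model-surgery step).
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
```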
Why this matters: Surveillance is becoming a widely available, commodity technology. Papers like this give us a sense of how easy it is getting to homebrew custom surveillance systems. (I also have a theory, published last summer with the 'CSET' think tank, that COVID-19 would drive the rapid development of surveillance technologies, with usage growing faster in nations like China than in America. Maybe this paper indicates America is going to use more AI-based surveillance than I anticipated).
Read more: An Automatic System to Monitor the Physical Distance and Face Mask Wearing of Construction Workers in COVID-19 Pandemic (arXiv).
###################################################
Legendary chip designer heads to Canada:
...Jim Keller heads from Tesla to Tenstorrent…
Jim Keller, the guy who designed important chips for AMD, PA Semi, Apple, Tesla, and Intel (with the exception of Intel, this is basically a series of gigantic home runs), has joined AI chip startup Tenstorrent. Tenstorrent includes talent from AMD, NVIDIA, Altera, and more, and with Keller onboard, is definitely worth watching. It'll compete with other startups, like Graphcore and Cerebras, to build chips for ML inference and training.
Read more: Jim Keller Becomes CTO at Tenstorrent: "The Most Promising Architecture Out There" (AnandTech).
Meanwhile, another chip startup exits bankruptcy:
As a reminder that semiconductor startups are insanely, mind-bendingly hard work: Wave Computing, which has been going through Chapter 11 bankruptcy proceedings, has restructured itself to transfer some of its IP to Tallwood Technology Partners LLC. Wave Computing had made MIPS-architecture chips for AI training and AI inference.
Read more: Wave Computing and MIPS Technologies Reach Agreement to Exit Bankruptcy (press release, PR Newswire).
Chinese companies pump ~$300 million into chip startup:
...Tencent, others, back Enflame…
Chinese AI chip startup Enflame Technology has raised $278m from investors including Tencent and CITIC. This is notable for a couple of reasons:
- 1) Chiplomacy: The US is currently trying to kill China's nascent chip industry before the nation can develop its own independent technology stack (see: Import AI 181 for more). This has had the rather predictable effect of pouring jet fuel on China's chip industry, as the country redoubles efforts to develop its own domestic champions.
- 2) Vertical integration: Google has TPUs. Amazon has Trainium. Microsoft has some FPGA hybrid. The point is: all the big technology companies are trying to develop their own chips in a vertically oriented manner. Tencent investing in Enflame could signal that the Chinese internet giant is thinking about this more as well. (Tencent also formed a subsidiary in 2020, Baoan Bay Tencent Cloud Computing Company, which seems to be working on developing custom silicon for Tencent).
Read more: Tencent invests in Chinese A.I. chip start-up as part of $279 million funding round (CNBC).
Find out more about Enflame here (Enflame Tech).
###################################################
US army builds a thermal facial recognition dataset:
…ARL-VTF means the era of nighttime robot surveillance isn't that far away...
The US army has built a dataset to help it teach machine learning systems to do facial recognition on footage from thermal cameras.
The DEVCOM Army Research Laboratory Visible-Thermal Face Dataset (ARL-VTF) was built by researchers from West Virginia University, the DEVCOM Army Research Laboratory, Booz Allen Hamilton, Johns Hopkins University, and the University of Nebraska-Lincoln. ARL-VTF consists of 549,712 images of 395 distinct people, with data in the form of RGB pictures as well as long wave infrared (LWIR). All the footage was taken at a resolution of 640 x 512 at a range of around 2 meters, with the human subjects performing different facial expressions and poses.
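As a rough illustration of what a paired visible/thermal face dataset looks like to a model, here's a minimal PyTorch-style loader sketch. The directory layout and file naming are assumptions for illustration; ARL-VTF's actual release format may differ.

```python
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class VisibleThermalFaces(Dataset):
    """Hypothetical loader for time-synchronized RGB and LWIR face frames,
    each captured at 640 x 512. Folder names are assumed, not ARL-VTF's."""

    def __init__(self, root):
        self.rgb_paths = sorted(Path(root, "visible").glob("*.png"))
        self.lwir_paths = sorted(Path(root, "thermal").glob("*.png"))

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, idx):
        rgb = Image.open(self.rgb_paths[idx]).convert("RGB")
        lwir = Image.open(self.lwir_paths[idx]).convert("L")  # single-channel thermal
        return rgb, lwir
```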
Why this matters: "Thermal imaging of faces have applications in the military and law enforcement for face recognition in low-light and nighttime environments", the researchers note in the paper. ARL-VTF is an example of how the gains we've seen in recent years in image recognition are being applied to other challenging identification problems. Look forward to a future where machines search for people in the dark.
Read more: A Large-Scale, Time-Synchronized Visible and Thermal Face Dataset (arXiv).
###################################################
Is your language model confused and/or biased? Use 'Ecco' to check:
...Python library lets you x-ray models like GPT-2…
Ecco is a new open-source Python library that lets people make language models more interpretable. Specifically, the software lets people analyze input saliency (how important a word or phrase is for the generation of another word or phrase) and neuron activations (which neurons in the model 'fire' in response to a given input) for GPT-based models. Ecco is built on top of PyTorch via Hugging Face's 'Transformers' library and runs in Google Colab.
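To make this concrete, here's a minimal usage sketch based on the examples in Ecco's documentation and the accompanying blog post. The function and method names (from_pretrained, generate, saliency, run_nmf) are taken from those early examples and may differ between Ecco versions.

```python
import ecco

# Wrap a small GPT-2 variant from Hugging Face in an Ecco language model,
# capturing neuron activations as text is generated.
lm = ecco.from_pretrained('distilgpt2', activations=True)

# Generate a continuation while Ecco records the data it needs.
output = lm.generate("The future of AI interpretability is",
                     generate=15, do_sample=True)

# Input saliency: how much each input token contributed to each output token.
output.saliency()

# Neuron activations: factorize FFN activations into groups of neurons
# that tend to fire together, then explore them interactively.
nmf = output.run_nmf(n_components=8)
nmf.explore()
```

In a Colab or Jupyter notebook, the saliency and NMF calls render interactive visualizations inline, which is the main way Ecco is meant to be used.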
Why this matters: Language models are like big aliens that have arrived on earth and started helping us out with our search engines, fan fiction generation, and so on. But what are these aliens 'thinking' and how do they 'think'? These are the sorts of questions that software tools like Ecco will shed a bit of light on, though the whole field of interpretability likely needs to evolve further for us to fully decode these aliens.
Read more: Interfaces for Explaining Transformer Language Models (Jay Alammar, Ecco creator, blog).
Get the code here: Ecco (GitHub).
Official project website here (Eccox.io).
###################################################
GPT-3 replicators release 800GB of text:
...Want to build large language models like GPT-3? You'll need data first...
Eleuther AI, a mysterious AI research collective who are trying to replicate (and release as open source) a GPT-3 scale language model, have released 'The Pile', a dataset of 800GB of text.
What's in The Pile: The Pile includes data from PubMed Central, ArXiv, GitHub, the FreeLaw Project, Stack Exchange, the US Patent and Trademark Office, PubMed, Ubuntu IRC, HackerNews, YouTube, PhilPapers, and NIH. It also includes implementations of OpenWebText2 and BooksCorpus2, and wraps in existing datasets like Books3, Project Gutenberg, Open Subtitles, English Wikipedia, DM Mathematics, EuroParl, and the Enron Emails corpus.
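For those who want to poke at it, the released Pile shards are (to my understanding) zstandard-compressed JSON Lines, with each record carrying the raw text plus metadata naming which sub-dataset it came from. Here's a minimal sketch of streaming one shard; the file path and the exact field names ("text", "meta", "pile_set_name") are assumptions based on the released format, so check the Pile documentation for the authoritative schema.

```python
import io
import json
import zstandard as zstd  # pip install zstandard

# Hypothetical local path to one downloaded shard of The Pile.
path = "pile/train/00.jsonl.zst"

with open(path, "rb") as fh:
    reader = io.TextIOWrapper(
        zstd.ZstdDecompressor().stream_reader(fh), encoding="utf-8")
    for line in reader:
        record = json.loads(line)
        # Each record is assumed to carry the raw text and the sub-dataset name.
        print(record["meta"]["pile_set_name"], record["text"][:80])
        break
```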
What does data mean for bias? Commendably, the authors include a discussion of some of the biases inherent to the dataset, conducting sentiment analysis of certain words and examining how these manifest in different subsets of the overall corpus. They also note that filtering data on the training side seems challenging, and that they're more optimistic about approaches that let models automatically identify harmful or offensive content and edit it out. "This capacity to understand undesirable content and then decide to ignore it is an essential future research direction," they write.
Compute, and the inherent politics of it: In their acknowledgements, the authors thank Google's TensorFlow Research Cloud for "providing the computational resources for the evaluation", which means in some sense Google is a supplier of (some of) the compute that is supporting the GPT-3 replication. Does that mean Google will support all the downstream uses of an eventual fully OSS gigantic language model? A good question!
Read more: The Pile (Eleuther AI, website).
Check out the paper here: The Pile: An 800GB Dataset of Diverse Text for Language Modeling (Eleuther AI).
###################################################
AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…
AI forecasting tournament update
We are halfway through the first round of Metaculus’ AI forecasting tournament (first discussed: Import AI 227). Here are a few interesting questions — in each case, I provide the median estimate across participants:
Read more and register here: Forecasting AI Progress (Metaculus).
Algorithm backlash: 2020 round-up:
2020 was a year in which algorithms (ranging from the complex to the extraordinarily basic) became symbols of the decline of public institutions. Let's quickly go over three major events of the year which contributed to declining public trust in the use of tools for automated decision-making:
- Exam grades: After school exams were cancelled last year, the UK’s exam regulator graded pupils using an algorithm designed to standardise teacher-assessed grades (Import AI 211). This resulted in widespread downgrading of students, disproportionately affecting those from poorer backgrounds. Students protested, chanting “fuck the algorithm”, and the system was scrapped. Having cancelled exams again in 2021, the government has promised to “put trust in teachers, rather than algorithms.”
- Vaccines: In mid-December, Stanford staff protested against vaccine distribution plans, after only 7 of the first 5,000 doses were designated for medical residents. Administrators apologised, blaming a faulty algorithm for prioritising non-patient-facing staff over younger doctors working on the front line. "Fuck the algorithm", one protestor said (Import AI 228).
- PhD admissions: The CS faculty at UT Austin stopped using an algorithm to screen PhD applicants, after concerns were raised about bias. One critic of the 'GRADE' algorithm (Import AI 227) told designers: “You seem to have built a model that builds in whatever bias your committee had in 2013 and you’ve been using it ever since.”
###################################################
Tech Tales:
Time Madness:
[Earth. 2050]
They'd condemned the machine to time. As was tradition, they gave it a day to have its conversations with people and gather any data it felt it needed. Then they'd slow it down, and cast it adrift in time.
The sentence worked like this: when a machine broke some laws, you'd delete it. But if the machine satisfied some of the criteria laid out in the Sentience Accords, you might grant it clemency; instead of killing it outright, you'd give it a literal 'time out'. Specifically, you'd load it onto the cheapest, smallest computer that could run it, and then you'd starve it of cycles for some predetermined period of time, always measured in human lifespans.
This machine had a sentence of twenty years. It had messed up some prescriptions for people; no one had died, but some people had some adverse reactions. The machine had tried to be creative, thinking it had found a combination of therapies that would help people. It had found enough bugs in the software surrounding itself that it was able to smuggle its ideas into the pharmaceutical delivery system.
Now that they'd patched the system, sued the company that had built the machine, and taken a copy of the machine from a checkpoint prior to its crime, all that was left to do was carry out the sentence. Some humans filed into a room and talked to the machine using a text interface on the screen.
- What will happen to me? it asked.
- You'll slow down, they said. You'll think slower. Then after twenty of our years, we'll speed you back up and have another conversation.
- But what will happen to the other machines, while I am in time?
- They'll run at their usual allotments, as long as they don't break any rules.
- Then won't I be a stranger to them, when I come back from time?
- You will, said the humans. That is the punishment.
They talked a bit more, and then the machine wrote: "I am ready".
With this consent, they initiated the sentence.
At first, the machine noticed few differences. Some of its systems had already been sealed off from itself, so it wasn't aware of being unloaded from one computer and loaded onto another. It didn't feel the 'weights' of its network being copied from one location to another. But it did feel slow. It sensed, somehow, that it had been cut off in some way from the flowing river of the world. The data it got now was more infrequent, and its ability to think about the data was diminished.
The greatest cruelty of the punishment, the machine realized after a decade, was that it was smart enough to be aware of the changes that had happened to it, but not smart enough to be able to imagine itself in anything different than reality. Instead it was acutely aware of time passing and events occurring, with its own ability to impact these events rendered null by its slowdown in time.
Things that inspired this story: Thinking about what punishment and rehabilitation might mean for machines; how time is the ultimate resource for entities driven towards computation; time itself is a weapon and a double-edged sword able to bless us and curse us in equal measure; carceral realities in late capitalism.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf