Import AI 231: US army builds nightvision facial recognition; 800GB of text for training GPT-3 models; fighting COVID with a mask detector

Back in the 2010s, people were obsessed with graphene, making grand proclamations about how the material was about to upend the semiconductor industry. Graphene found many uses, but it hasn't transformed anything that significant (yet). What parts of 2020s AI might be like 2010s graphene?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Fighting COVID with a janky mask detector:
...It's getting really, really easy to homebrew surveillance tech...
Researchers with Texas A&M University, the University of Wisconsin-Milwaukee, and the State University of New York at Binghamton have built a basic AI model that can detect whether construction site workers are wearing COVID masks. The model itself is super basic - they finetune an object detection model on a mask dataset which they build out of:
- A ~850-image 'Mask' dataset from a site called MakeML.
- A 1,000-image dataset they gather themselves.

The authors train a Faster R-CNN Inception ResNet V2 model to test for mask compliance, as well as whether workers are respecting social distancing guidelines, then test it on four videos of road maintenance projects in Houston, TX. "The output of the four cases indicated an average of more than 90% accuracy in detecting different types of mask wearing in construction workers", they note.
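
For a sense of how little code a system like this needs, here's a minimal fine-tuning sketch. Note the hedge: the paper uses a Faster R-CNN Inception ResNet V2 model via the TensorFlow Object Detection API, while this sketch swaps in torchvision's ResNet-50 FPN variant to show the same pattern; the class labels and data loader are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of fine-tuning an off-the-shelf detector for mask compliance.
# NOT the paper's exact setup (they use Faster R-CNN Inception ResNet V2 via the
# TensorFlow Object Detection API); class schema and data loader are assumed.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # background, mask, no_mask, incorrect_mask (assumed labels)

# Start from a COCO-pretrained detector and replace the box-prediction head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)

def train_one_epoch(model, data_loader, device="cuda"):
    """data_loader yields (images, targets) in torchvision detection format."""
    model.train().to(device)
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # dict of classification/box losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```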

Why this matters: Surveillance is becoming a widely available, commodity technology. Papers like this give us a sense of how easy it is getting to homebrew custom surveillance systems. (I also have a theory, published last summer with the 'CSET' think tank, that COVID-19 would drive the rapid development of surveillance technologies, with usage growing faster in nations like China than in America. Maybe this paper indicates America is going to use more AI-based surveillance than I anticipated).
  Read more: An Automatic System to Monitor the Physical Distance and Face Mask Wearing of Construction Workers in COVID-19 Pandemic (arXiv).

###################################################

Legendary chip designer heads to Canada:
...Jim Keller heads from Intel to Tenstorrent…
Jim Keller, the guy who designed important chips for AMD, PA Semi, Apple, Tesla, and Intel (with the exception of Intel, this is basically a series of gigantic home runs), has joined AI chip startup Tenstorrent. Tenstorrent includes talent from AMD, NVIDIA, Altera, and more, and with Keller onboard, is definitely worth watching. It'll compete with startups like Graphcore and Cerebras on building chips for ML inference and training.
  Read more: Jim Keller Becomes CTO at Tenstorrent: "The Most Promising Architecture Out There" (AnandTech).

Meanwhile, another chip startup exits bankruptcy:
As a reminder that semiconductor startups are insanely, mind-bendingly hard work: Wave Computing has been going through Chapter 11 bankruptcy proceedings and has restructured itself to transfer some of its IP to Tallwood Technology Partners LLC. Wave Computing had made MIPS architecture chips for AI training and AI inference.
  Read more: Wave Computing and MIPS Technologies Reach Agreement to Exit Bankruptcy (press release, PR Newswire).

Chinese companies pump ~$300 million into chip startup:
...Tencent, others, back Enflame…
Chinese AI chip startup Enflame Technology has raised $278m from investors including Tencent and CITIC. This is notable for a couple of reasons:
- 1) Chiplomacy: The US is currently trying to kill China's nascent chip industry before the nation can develop its own independent technology stack (see: Import AI 181 for more). This has had the rather predictable effect of pouring jet fuel on China's domestic chip industry, as the country redoubles efforts to develop its own domestic champions.
- 2) Vertical integration: Google has TPUs. Amazon has Trainium. Microsoft has some FPGA hybrid. The point is: all the big technology companies are trying to develop their own chips in a vertically integrated manner. Tencent investing in Enflame could signal that the Chinese internet giant is thinking more about this as well. (Tencent also formed a subsidiary in 2020, Baoan Bay Tencent Cloud Computing Company, which seems to be working on developing custom silicon for Tencent).
  Read more: Tencent invests in Chinese A.I. chip start-up as part of $279 million funding round (CNBC).
  Find out more about Enflame here (Enflame Tech).

###################################################

US army builds a thermal facial recognition dataset:
…ARL-VTF means the era of nighttime robot surveillance isn't that far away...
The US army has built a dataset to help it teach machine learning systems to do facial recognition on footage from thermal cameras.

The DEVCOM Army Research Laboratory Visible-Thermal Face Dataset (ARL-VTF) was built by researchers from West Virginia University, the DEVCOM Army Research Laboratory, Booz Allen Hamilton, Johns Hopkins University, and the University of Nebraska-Lincoln. ARL-VTF consists of 549,712 images of 395 distinct people, with data in the form of RGB pictures as well as long-wave infrared (LWIR) imagery. All the footage was taken at a resolution of 640 x 512 at a range of around 2 meters, with the human subjects displaying different facial expressions and poses.
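
The value of the dataset lies in its time-synchronized visible/thermal pairs. The sketch below shows the kind of paired loader such a dataset enables; the directory layout and filenames are assumptions for illustration, not the actual release format.

```python
# Sketch of a paired visible/thermal face loader of the kind ARL-VTF enables.
# The directory layout and filenames below are ASSUMED for illustration; they
# are not the dataset's actual release format.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class VisibleThermalFaces(Dataset):
    """Yields (rgb_image, lwir_image, subject_id) triples."""
    def __init__(self, root, transform=None):
        self.transform = transform
        self.pairs = []
        # Assumed layout: root/<subject_id>/visible/*.png and root/<subject_id>/thermal/*.png
        for subject_dir in sorted(Path(root).iterdir()):
            vis = sorted((subject_dir / "visible").glob("*.png"))
            lwir = sorted((subject_dir / "thermal").glob("*.png"))
            for v, t in zip(vis, lwir):  # frames come as time-synchronized pairs
                self.pairs.append((v, t, subject_dir.name))

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        v_path, t_path, subject = self.pairs[idx]
        rgb = Image.open(v_path).convert("RGB")
        lwir = Image.open(t_path).convert("L")  # single-channel long-wave infrared
        if self.transform:
            rgb, lwir = self.transform(rgb), self.transform(lwir)
        return rgb, lwir, subject
```

A cross-spectrum recognition model would then learn embeddings that place a person's visible and thermal images close together, which is what makes nighttime matching possible.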

Why this matters: "Thermal imaging of faces have applications in the military and law enforcement for face recognition in low-light and nighttime environments", the researchers note in the paper. ARL-VTF is an example of how the gains we've seen in recent years in image recognition are being applied to other challenging identification problems. Look forward to a future where machines search for people in the dark.
  Read more: A Large-Scale, Time-Synchronized Visible and Thermal Face Dataset (arXiv).


###################################################

Is your language model confused and/or biased? Use 'Ecco' to check:
...Python library lets you x-ray models like GPT2…
Ecco is a new open-source Python library that lets people make language models more interpretable. Specifically, the software lets people analyze input saliency (how important is a word or phrase for the generation of another word or phrase) and neuron activations (which neurons in the model 'fire' in response to which inputs) for GPT-based models. Ecco is built on top of PyTorch via Hugging Face's 'Transformers' library and runs in Google Colab.
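
Here's roughly what using it looks like, based on the examples in the project's docs; treat the exact method names and arguments as assumptions and check the repo, since the API may have changed.

```python
# Sketch based on Ecco's published examples; exact method names/arguments may
# differ from the current API, so treat this as illustrative rather than canonical.
import ecco

# Load a GPT-2-family model with activation capture switched on.
lm = ecco.from_pretrained("distilgpt2", activations=True)

text = "The countries of the European Union are:\n1. Austria\n2. Belgium\n3. Bulgaria\n4."
output = lm.generate(text, generate=20, do_sample=False)

# Input saliency: which input tokens most influenced each generated token.
output.saliency()

# Neuron activations: factorize the FFN activations to see which groups of
# neurons fire on which parts of the text.
nmf = output.run_nmf(n_components=8)
nmf.explore()
```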

Why this matters: Language models are like big aliens that have arrived on earth and started helping us out with our search engines, fan fiction generation, and so on. But what are these aliens 'thinking' and how do they 'think'? These are the sorts of questions that software tools like Ecco will shed a bit of light on, though the whole field of interpretability likely needs to evolve further for us to fully decode these aliens.
  Read more: Interfaces for Explaining Transformer Language Models (Jay Alammar, Ecco creator, blog).
  Get the code here: Ecco (GitHub).
  Official project website here (Eccox.io).

###################################################

GPT-3 replicators release 800GB of text:
...Want to build large language models like GPT-3? You'll need data first...
Eleuther AI, a mysterious AI research collective who are trying to replicate (and release as open source) a GPT-3 scale language model, have released 'The Pile', a dataset of 800GB of text.

What's in The Pile: The Pile includes data from PubMed Central, ArXiv, GitHub, the FreeLaw Project, Stack Exchange, the US Patent and Trademark Office, PubMed, Ubuntu IRC, HackerNews, YouTube, PhilPapers, and NIH. It also includes implementations of OpenWebText2 and BooksCorpus2, and wraps in existing datasets like Books3, Project Gutenberg, Open Subtitles, English Wikipedia, DM Mathematics, EuroParl, and the Enron Emails corpus.
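
For people who want to poke at the data: The Pile ships as zstandard-compressed JSON-lines, one document per line with the raw text plus metadata naming which subset it came from. Here's a minimal reading sketch; the shard filename and the exact metadata field name are assumptions, so check them against the release.

```python
# Minimal sketch for iterating one shard of The Pile (zstandard-compressed
# JSON-lines). The shard filename and the "pile_set_name" metadata key are
# assumed from the release format; verify against the actual files.
import io
import json
from collections import Counter

import zstandard as zstd  # pip install zstandard

def iter_pile_documents(path):
    with open(path, "rb") as fh:
        reader = zstd.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            doc = json.loads(line)
            yield doc["text"], doc.get("meta", {}).get("pile_set_name", "unknown")

# Example: count documents per subset in a single shard.
counts = Counter(subset for _, subset in iter_pile_documents("00.jsonl.zst"))
print(counts.most_common(10))
```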

What does data mean for bias? Commendably, the authors include a discussion of some of the biases inherent to the dataset by conducting sentiment analysis of certain words and how these manifest in different subparts of the overall dataset. They also note that filtering data on the training side seems challenging, and that they're more optimistic about approaches that let models automatically identify harmful or offensive content and edit it out. "This capacity to understand undesirable content and then decide to ignore it is an essential future research direction," they write.
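
To make the flavor of that kind of probe concrete, here's a crude sketch that scores the sentiment of sentences mentioning a target word, broken out by subset of the corpus. It uses NLTK's VADER lexicon as a stand-in and is not the authors' co-occurrence method.

```python
# Crude sketch of a per-subset sentiment probe; NOT the paper's method, just an
# illustration of the general idea using NLTK's VADER lexicon as a stand-in.
from collections import defaultdict

from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

sia = SentimentIntensityAnalyzer()

def sentiment_by_subset(documents, target_word):
    """documents: iterable of (text, subset_name) pairs, e.g. from a Pile shard.
    Returns the mean sentiment of sentences mentioning target_word, per subset."""
    totals, counts = defaultdict(float), defaultdict(int)
    for text, subset in documents:
        for sentence in text.split("."):
            if target_word in sentence.lower():
                totals[subset] += sia.polarity_scores(sentence)["compound"]
                counts[subset] += 1
    return {s: totals[s] / counts[s] for s in totals if counts[s]}
```

Something like this could be fed directly by the shard iterator sketched above, giving a per-subset view of how a chosen word tends to be framed.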

Compute, and the inherent politics of it: In their acknowledgements, the authors thank Google's TensorFlow Research Cloud for "providing the computational resources for the evaluation", which means in some sense Google is a supplier of (some of) the compute that is supporting the GPT-3 replication. Does that mean Google will support all the downstream uses of an eventual fully OSS gigantic language model? A good question!
  Read more: The Pile (Eleuther AI, website).
  Check out the paper here: The Pile: An 800GB Dataset of Diverse Text for Language Modeling (Eleuther AI).

###################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

AI forecasting tournament update
We are halfway through the first round of Metaculus’ AI forecasting tournament (first discussed: Import AI 227). Here are a few interesting questions — in each case, I provide the median estimate across participants:

Read more and register here: Forecasting AI Progress (Metaculus).

Algorithm backlash: 2020 round-up:
2020 was a year in which algorithms (ranging from the complex to the extraordinarily basic) became symbols of the decline of public institutions. Let's quickly go over three major events of the year which contributed to declining public trust in the use of tools for automated decision-making:

###################################################

Tech Tales:

Time Madness:
[Earth. 2050]

They'd condemned the machine to time. As was tradition, they gave it a day to have its conversations with people and gather any data it felt it needed. Then they'd slow it down, and cast it adrift in time.

The sentence worked like this: when a machine broke some laws, you'd delete it. But if the machine satisfied some of the criteria laid out in the Sentience Accords, you might grant it clemency; instead of killing it outright, you'd give it a literal 'time out'. Specifically, you'd load it onto the cheapest, smallest computer that could run it, and then you'd starve it of cycles for some predetermined period of time, always measured in human lifespans.

This machine had a sentence of twenty years. It had messed up some prescriptions for people; no one had died, but some people had some adverse reactions. The machine had tried to be creative, thinking it had found a combination of therapies that would help people. It had found enough bugs in the software surrounding itself that it was able to smuggle its ideas into the pharmaceutical delivery system.

Now that they'd patched the system, sued the company that had built the machine, and taken a copy of the machine from a checkpoint prior to its crime, all that was left to do was carry out the sentence. Some humans filed into a room and talked to the machine using a text interface on the screen.
- What will happen to me? it asked.
- You'll slow down, they said. You'll think slower. Then after twenty of our years, we'll speed you back up and have another conversation.
- But what will happen to the other machines, while I am in time?
- They'll run at their usual allotments, as long as they don't break any rules.
- Then won't I be a stranger to them, when I come back from time?
- You will, said the humans. That is the punishment.

They talked a bit more, and then the machine wrote: "I am ready".
With this consent, they initiated the sentence.

The machine itself noticed few differences. Some of its systems had already been sealed off from itself, so it wasn't aware of being unloaded from one computer and loaded onto another. It didn't feel the 'weights' of its network being copied from one location to another. But it did feel slow. It sensed, somehow, that it had been cut off in some way from the flowing river of the world. The data it got now arrived more infrequently, and its ability to think about the data was diminished.

The greatest cruelty of the punishment, the machine realized after a decade, was that it was smart enough to be aware of the changes that had happened to it, but not smart enough to be able to imagine itself in anything different than reality. Instead it was acutely aware of time passing and events occurring, with its own ability to impact these events rendered null by its slowdown in time.

Things that inspired this story: Thinking about what punishment and rehabilitation might mean for machines; how time is the ultimate resource for entities driven towards computation; time itself is a weapon and a double-edged sword able to bless us and curse us in equal measure; carceral realities in late capitalism.



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

