Import AI 273: Corruption VS Surveillance; Baidu makes better object detection; understanding the legal risk of datasets

At what point will AI start to influence religion, and vice versa?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Sure, you can track pedestrians using Re-ID, but what if your camera is corrupted?
...Testing out Re-ID on corrupted images…
Pedestrian re-identification is the task of looking at a picture of someone in a CCTV camera feed, then looking at a picture from a different CCTV feed and working out they're the same person. Now, researchers with the Southern University of Science and Technology in China have created a benchmark for 'corruption invariant person re-identification'; in other words, a benchmark for assessing how robust re-ID systems are to perturbations in the images they're looking at.

What they did: The authors take five widely-used Re-ID datasets (CUHK03, Market-1501, MSMT17, RegDB, SYSU-MM01) and then apply ~20 image corruptions to the images, altering them with things like rain, snow, frost, blurring, brightness variation, frosted glass, and so on. They then look at popular re-ID algorithms and how well they perform on these different datasets. Their findings are both unsurprising and concerning: "In general, performance on the clean test set is not positively correlated with performance on the corrupted test set," they write.
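
For a concrete (if toy) sense of what this kind of evaluation involves, here's a minimal sketch in PyTorch. It assumes a hypothetical `reid_model` that maps person crops to embedding vectors, and uses a few PIL-based corruptions as crude stand-ins for the ~20 corruptions in the benchmark - it is an illustration of the idea, not the authors' evaluation code.
```python
# Minimal sketch: compare a re-ID model's match scores on clean vs. corrupted
# images. The corruptions (blur, brightening, darkening) are toy stand-ins for
# the benchmark's ~20 corruptions; `model` is a hypothetical embedding network
# that maps a person crop to a feature vector.
import torch
import torch.nn.functional as F
from PIL import Image, ImageFilter, ImageEnhance
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((256, 128)),   # typical person-crop resolution
    transforms.ToTensor(),
])

def corrupt(img: Image.Image, kind: str) -> Image.Image:
    """Apply a simple, illustrative corruption to a PIL image."""
    if kind == "blur":
        return img.filter(ImageFilter.GaussianBlur(radius=3))
    if kind == "bright":
        return ImageEnhance.Brightness(img).enhance(1.8)
    if kind == "dark":
        return ImageEnhance.Brightness(img).enhance(0.4)
    raise ValueError(kind)

@torch.no_grad()
def match_score(model, query: Image.Image, gallery: Image.Image) -> float:
    """Cosine similarity between embeddings of two person crops."""
    q = model(to_tensor(query).unsqueeze(0))
    g = model(to_tensor(gallery).unsqueeze(0))
    return F.cosine_similarity(q, g).item()

def robustness_gap(model, query: Image.Image, gallery: Image.Image) -> float:
    """How much the match score drops when the query image is corrupted."""
    clean = match_score(model, query, gallery)
    corrupted = [match_score(model, corrupt(query, k), gallery)
                 for k in ("blur", "bright", "dark")]
    return clean - sum(corrupted) / len(corrupted)
```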

Things that make you go 'hmmm': It's quite typical for papers involved in surveillance to make almost no mention of, you know, the impact of surveillance. This is especially true of papers coming from Chinese institutions. Well, here's an exception! This paper has a few paragraphs on broader impacts that name some real Re-ID issues, e.g., that lots of Re-ID data is collected without consent and that these datasets have some inherent fairness issues. (There isn't a structural critique of surveillance here, but it's nice to see people name some specific issues).

Why this matters: Re-ID is the pointy end of the proverbial surveillance spear - it's a fundamental capability that is already widely used by governments. Understanding how 'real' performance improvements are here is important for thinking about the social impacts of large-scale AI.
  Read more: Benchmarks for Corruption Invariant Person Re-identification (arXiv).

####################################################

What's been going on in NLP and what does it mean?
…Survey paper gives a good overview of what has been going on in NLP…
Here's a lengthy survey paper from researchers with Raytheon, Harvard, the University of Pennsylvania, the University of Oregon, and the University of the Basque Country, which looks at the recent emergence of large-scale pre-trained language models (e.g., GPT-3) and tries to work out which parts of this trend are significant. The survey paper concludes with some interesting questions that researchers in the field might want to focus on. These include:

How much unlabeled data is needed? It's not yet clear what the tradeoffs are between having 10 million words and a billion words in a training set - some skills might require billions of words, while others may require only millions. Figuring out which capabilities require which amounts of data would be helpful.

Can we make this stuff more efficient? Some of the initial large-scale models consume a lot of compute (e.g., GPT-3). What techniques might we hope to use to make these things substantially more efficient?

How important are prompts? Prompts - that is, filling up the context window of a pre-trained language model with a handful of examples - are useful. But how useful are they? This is an area where more research could shed a lot of light on the more mysterious properties of these systems.
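
For readers who haven't played with prompts, here's a minimal sketch of what 'filling up the context window with examples' looks like in practice. The sentiment task, the demonstration strings, and the `complete` callable are all invented for illustration - `complete` stands in for whichever language-model API you happen to use.
```python
# Minimal sketch of few-shot prompting: the "prompt" is just demonstrations
# packed into the context window, followed by the new input. `complete()` is a
# placeholder for whatever language-model API you call.
EXAMPLES = [
    ("The food was cold and the staff ignored us.", "negative"),
    ("Absolutely loved the atmosphere, will return!", "positive"),
    ("Service was fine, nothing special.", "neutral"),
]

def build_prompt(new_review: str) -> str:
    demos = "\n".join(f"Review: {text}\nSentiment: {label}"
                      for text, label in EXAMPLES)
    return f"{demos}\nReview: {new_review}\nSentiment:"

def classify(new_review: str, complete) -> str:
    # `complete` is any callable that maps a prompt string to a continuation.
    return complete(build_prompt(new_review)).strip().split()[0]
```
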
  Read more: Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey (arXiv).

####################################################

What does it take to make a really efficient object detection system? Baidu has some ideas:
...PP-PicoDet: What industrial AI looks like…
Baidu researchers have built PicoDet, software for doing object detection on lightweight mobile devices, like phones. PicoDet tries to balance the tradeoff between performance and efficiency, with an emphasis on miniaturizing the model so you can run object detection at the greatest number of frames per second on your device. This is very much a 'nuts and bolts' paper - there isn't some grand theoretical innovation, but there is a lot of productive tweaking and engineering to crank out as much performance as possible.

How well do these things work: Baidu's models outperform earlier Baidu systems (e.g., PP-YOLO), as well as the widely used 'YOLO' family of object detection models. The best systems are able to crank out latencies on the order of single-digit milliseconds (compared to tens of milliseconds for prior systems).

Neural architecture search: For many years, neural architecture search (NAS) techniques were presented as a way to use computers to search for better variants of networks than those designed by humans. But NAS approaches haven't actually shown up much in real applications. Here, the Baidu authors use NAS techniques to figure out a better detection system - and it works well enough that they use it.
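
To make the NAS idea concrete, here's a toy sketch of latency-constrained random architecture search. The search space, the latency budget, and the parameter-count scoring proxy are all invented for illustration and are far simpler than what PicoDet actually does (which uses a much richer search space and accuracy-aware scoring).
```python
# Toy latency-aware architecture search: randomly sample small CNN backbones,
# measure CPU latency, discard anything over budget, and rank survivors by a
# crude proxy (parameter count here, purely for illustration; real NAS uses
# trained accuracy or a learned predictor).
import random
import time
import torch
import torch.nn as nn

def build_backbone(depth: int, width: int) -> nn.Module:
    layers, chans = [], 3
    for _ in range(depth):
        layers += [nn.Conv2d(chans, width, 3, stride=2, padding=1),
                   nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        chans = width
    return nn.Sequential(*layers)

@torch.no_grad()
def latency_ms(model: nn.Module, runs: int = 20) -> float:
    x = torch.randn(1, 3, 320, 320)   # PicoDet-style input resolution
    model(x)                          # warm-up pass
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return (time.perf_counter() - start) / runs * 1000

def random_search(budget_ms: float = 10.0, trials: int = 20):
    best = None
    for _ in range(trials):
        depth, width = random.choice([3, 4, 5]), random.choice([32, 64, 96])
        model = build_backbone(depth, width).eval()
        ms = latency_ms(model)
        if ms > budget_ms:
            continue                               # over the latency budget
        params = sum(p.numel() for p in model.parameters())
        if best is None or params > best[0]:       # crude capacity proxy
            best = (params, depth, width, ms)
    return best
```
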
Read more: PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices (arXiv).

####################################################

Want to use web-scraped data without being sued into oblivion?
...Huawei researchers lay out the messy aspects of AI + licensing…
Today, many of the AI systems used around us are built on datasets that were themselves composed of other datasets, some of which were indiscriminately scraped from the web. While much of this data is likely covered under a 'fair use' provision due to the transformative nature of training models on it, there are still complicated licensing questions that companies need to tackle before using the data. This is where new work from Huawei, York University, and the University of Victoria tries to help, by providing a set of actions an organization might take to assure itself it is on good legal ground when using data.

So, you want to use web-scraped datasets for your AI model? The researchers suggest a multi-step process (a rough code sketch follows the list), which looks like this:
- Phase one: Your AI engineers need to extract the license of your overall dataset (e.g., CIFAR-10), then identify the provenance of the dataset (e.g., CIFAR-10 is a subset of the 80 Million Tiny Images dataset), then look at the data sources that compose your foundational dataset and extract their licenses as well.
- Phase two: Your lawyers need to read the licenses associated with the dataset and its underlying sources, then analyze the license(s) with regard to the product being considered and work out whether deployment is permissible.
- Phase three: If the licenses and source-licenses support the use case, then you should deploy. If a sub-component of the system (e.g., a subsidiary license) doesn't support your use case, then you should flag this somewhere.
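
As a rough illustration of phases one and three (phase two still requires actual lawyers), here's a sketch of recording dataset provenance and flagging licenses that don't obviously permit commercial use. The license classifications in it are placeholders for the sketch, not legal conclusions, and the data structure is invented rather than taken from the paper.
```python
# Rough sketch of phases one and three: record each dataset's license and the
# datasets it was derived from, then walk the provenance chain and flag any
# license not known to permit the intended (here, commercial) use. The
# COMMERCIAL_OK set is an assumption for the sketch only.
from dataclasses import dataclass, field

COMMERCIAL_OK = {"CC-BY-4.0", "MIT"}   # placeholder allow-list, not legal advice

@dataclass
class Dataset:
    name: str
    license: str                        # as extracted by the engineers
    sources: list["Dataset"] = field(default_factory=list)

def compliance_flags(ds: Dataset) -> list[str]:
    """Return every dataset in the provenance chain whose license is not
    known to allow commercial use."""
    flags = [] if ds.license in COMMERCIAL_OK else [f"{ds.name}: {ds.license}"]
    for src in ds.sources:
        flags.extend(compliance_flags(src))
    return flags

# Example: CIFAR-10 is derived from 80 Million Tiny Images; both its lack of an
# explicit license and the upstream terms would get flagged for legal review.
tiny_images = Dataset("80 Million Tiny Images", "custom/unclear")
cifar10 = Dataset("CIFAR-10", "unspecified", sources=[tiny_images])
print(compliance_flags(cifar10))
```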

Case study of 6 datasets: The authors applied their method to six widely-used datasets (FFHQ, MS COCO, VGGFace2, ImageNet, Cityscapes, and CIFAR-10) and found the following:
- 3/6 have a standard dataset license (FFHQ, MS COCO, VGGFace2 have standard licenses, ImageNet and Cityscapes have a custom license, CIFAR-10 doesn't mention one)
- 5/6 datasets contain data from other datasets as well (exception: Cityscapes)
- 5/6 datasets could result in license compliance violation if used to build commercial AI (exception: MS COCO)

Is this horrendously complicated to implement? The authors polled some Huawei product teams about the method and got the feedback that people worried "over the amount of manual effort involved in our approach", and "wished for some automated tools that would help them".
Read more: Can I use this publicly available dataset to build commercial AI software? Most likely not (arXiv).

####################################################

Tech tales:

Attention Trap
[A robot-on-robot battlefield, 2040, somewhere in Africa]

The fireworks were beautiful and designed to kill machines. They went up into the sky and exploded in a variety of colors, sending sparklers pinwheeling out from their central explosions, and emitting other, smaller rockets, which exploded in turn, in amazing, captivating colors.
Don't look don't look don't look thought the robot to itself, and it was able to resist the urge to try to categorize the shapes in the sky.
But one of its peers wasn't so strong - and out of the sky came a missile which destroyed the robot that had looked at the fireworks.

This was how wars were fought now. Robots, trying to spin spectacles for each other, drawing the attention of their foes. The robots were multiple generations into the era of AI-on-AI warfare, so they'd become stealthy, smart, and deadly. But they all suffered the same essential flaw - they thought. And, specifically, their thinking was noisy. So many electrical charges percolating through them whenever they processed something. So much current when they lit themselves up within to compute more, or store more, or attempt to learn more.
  And they had grown so very good at spotting the telltale signs of thinking, that now they did this - launched fireworks into the sky, or other distractors, hoping to draw the attention and therefore the thinking of their opponents.

Don't look don't look don't look had become a mantra for one of the robots.
Unfortunately, it overfit on the phrase - repeating it to itself with enough frequency that its thought showed up as a distinguishable pattern to the exquisite sensors of its enemies.
Another missile, and then Don'tlookdon'tlo-SHRAPNEL. And that was that.

The robots were always evolving. Now, one of the peers tried something. Don't think, it thought. And then it attempted to not repeat the phrase. To just hold itself still, passively looking at the ground in front of it, but attempting-without-attempting to not think of anything - to resist the urge to categorize and to perceive.

Things that inspired this story: Thinking deeply about meditation and what meditation would look like in an inhuman mind; adversarial examples; attention-based methods for intelligence; the fact that everything in this world costs something and it's really about what level of specificity people can detect costs; grand strategy for robot wars. 


Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me: @jackclarksf
