Import AI 268: Replacing ImageNet; Microsoft makes self-modifying malware; and what ImageNet means

If three different species developed computer systems, how different would those systems be?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Want to generate Chinese paintings and poems? This dataset might help:
...Another brick in the synthetic everything wall…
Researchers with the University of Amsterdam and the Catholic University of Leuven have built a dataset of ~90,000 Chinese paintings paired with poems and captions. The dataset could be a useful resource for people trying to develop machine learning systems that synthesize Chinese paintings from text prompts (or Chinese poems from painting prompts).

What they did specifically: They gathered a dataset of 301 poems paired with paintings by Feng Zikai (called Zikai-Poem), a dataset of 3,648 caption-painting pairs (Zikai-Caption), and 89,204 paintings paired with prose and poems (named TCP-Poem). They then ran some experiments, pre-training a MirrorGAN on TCP-Poem and finetuning it on the smaller datasets, with good but not great results.
  "The results indicate that it is able to generate paintings that have good pictorial quality and mimic Feng Zikai’s style, but the reflection of the semantics of given poems is limited", they write. "Achieving high semantic relevance is challenging due to the following characteristics of the dataset. A classical Chinese poem in our dataset is composed of multiple imageries and the paintings of Feng Zikai often only portray the most salient or emotional imageries. Thus the poem imageries and the painting objects are not aligned in the dataset, which makes it more difficult than CUB and MS COCO," they write.  
  Read more: Paint4Poem: A Dataset for Artistic Visualization of Classical Chinese Poems (arXiv).
  Get the dataset here (paint4poem, GitHub).

####################################################

Want 1.4 million (relatively) non-problematic images? Try PASS:
...ImageNet has some problems. Maybe PASS will help...
ImageNet is a multi-million image dataset that is fundamental to many computer vision research projects. But ImageNet also has known problems, like including lots of pictures of people along with weird labels to identify them, and gathering images with a laissez-faire approach to copyright. Now, researchers with Oxford University have built PASS, a large-scale image dataset meant to avoid many of the problems found in ImageNet.

What it is: PASS, short for Pictures without humAns for Self-Supervision, contains 1.4 million distinct images. It only includes images with a CC-BY license and contains no images of people at all; it also excludes images with other personally identifiable information, such as license plates, signatures, and handwriting, and edits out NSFW images. PASS was created by filtering down a 100-million-Flickr-image corpus called YFCC100M: first cutting it according to the licenses of the images, then running a face detector over the remaining images to throw out ones with people, then manually filtering to cut out remaining people and personal information.
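  As a rough sketch, that filtering pipeline looks something like the below - detect_faces and passes_manual_review are hypothetical helpers standing in for the authors' actual tooling, not functions from the released codebase:

    def build_pass(yfcc_records):
        # Stage 1: keep only images carrying a CC-BY license.
        ccby = [r for r in yfcc_records if r["license"] == "CC-BY"]
        # Stage 2: drop any image where an automated face detector fires.
        no_people = [r for r in ccby if not detect_faces(r["image_path"])]
        # Stage 3: manual review catches what the detector missed -
        # people, license plates, signatures, handwriting, NSFW content.
        return [r for r in no_people if passes_manual_review(r)]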

Does PASS work? Given that PASS is meant to replace ImageNet for certain uses, we should ask how well it performs. The authors find that pretraining on PASS can match or exceed the performance you get from pretraining on ImageNet. They find similar trends for finetuning, where there isn't much of a difference.
  Read more: PASS: An ImageNet replacement for self-supervised pretraining without humans (arXiv).

####################################################

Microsoft uses reinforcement learning to make self-modifying malware:
...What could possibly go wrong?...
Today, the field of computer security is defined by a cat-and-mouse game between attackers and defenders. Attackers make ever-more sophisticated software to hack into defenders' systems, and defenders study the attacks and build new defenses, forcing the attackers to come up with new strategies. Now, researchers with Microsoft and BITS Pilani have shown that contemporary AI techniques can give attackers new ways to trick defenders.

What they did, specifically: They built software called ADVERSARIALuscator, short for Adversarial Deep Reinforcement Learning based obfuscator and Metamorphic Malware Swarm Generator. This is a complex piece of software that pairs an intrusion detection system (IDS) with some malware samples, then uses reinforcement learning to generate malware variants that get past the IDS. To do this, they use a GAN-style approach in which an RL agent takes the role of the 'generator' and the IDS takes the role of the 'discriminator'. The agent gets a malware sample, then needs to obfuscate its opcodes such that the sample still works but fools the IDS into tagging it as benign software rather than malware. The RL agent is trained with PPO, a widely-used RL algorithm.
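  In outline, the training loop looks something like the sketch below. To be clear, this is just the shape of the GAN-style RL framing, not a working obfuscator - agent.policy, apply_rewrite, ids.predict, and ppo_update are all abstract stand-ins I've invented:

    MAX_EDITS = 32  # cap on rewrites per episode (arbitrary)

    def training_episode(agent, malware_sample, ids):
        # The RL agent plays the 'generator': it applies opcode-level,
        # behavior-preserving rewrites. The IDS plays the 'discriminator'.
        trajectory = []
        sample = malware_sample
        for _ in range(MAX_EDITS):
            action = agent.policy(sample)
            sample = apply_rewrite(sample, action)  # must keep the program working
            # Reward fires when the IDS now labels the rewritten sample benign.
            reward = 1.0 if ids.predict(sample) == "benign" else 0.0
            trajectory.append((sample, action, reward))
            if reward > 0:
                break
        ppo_update(agent, trajectory)  # standard PPO policy-gradient update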

Does it work? Kind of. In tests, the researchers showed that "the resulting trained agents could obfuscate most of the malware and uplift their metamorphic probability of miss-classification error to a substantial degree to fail or evade even the best IDS which were even trained using the corresponding original malware variants", they write. "More than 33% of metamorphic instances generated by ADVERSARIALuscator were able to evade even the most potent IDS and penetrate the target system, even when the defending IDS could detect the original malware instance."

Why this matters: Computer security, much like high-frequency trading, is a part of technology that moves very quickly. Both attackers and defenders have incentives to automate more of their capabilities, so they can more rapidly explore their opponents and iterate in response to what they learn. If approaches like ADVERSARIALuscator work (and they seem, in a preliminary sense, to be doing quite well), then we can expect the overall rate of development of offenses and defenses to increase. This could mean nothing changes - things just get faster, but there's a stability as both parties grow their capabilities equally. But it could mean a lot - if over time, AI approaches make certain capabilities offense- or defense-dominant, then AI could become a tool that changes the landscape of cyber conflict.
  Read more: ADVERSARIALuscator: An Adversarial-DRL Based Obfuscator and Metamorphic Malware Swarm Generator (arXiv).

####################################################

Chinese government tries to define 'ethical norms' for use of AI:
...Command + control, plus ethics...
A Chinese government ministry has published a new set of ethics guidelines for the use of AI within the country. (Readers will likely note that the terms 'ethics' and 'large government' rarely go together, and China is no exception here - the government uses AI for a range of things that many commentators judge to be unethical). The guidelines were published by the Ministry of Science and Technology of the People's Republic of China, and are interesting because they give us a sense for how a large state tries to operationalize ethics in a rapidly evolving industry.

The norms say a lot of what you'd expect - the AI sector should promote fairness and justice, protect privacy and security, strengthen accountability, invest in AI ethics, and so on. They also include a few more unusual things, such as an emphasis on avoiding the misuse and abuse of AI tools, and the need for companies to (translated from the Chinese) "promote good use" and "fully consider the legitimate rights and interests of various stakeholders, so as to better promote economic prosperity, social progress and sustainable development".

Why this matters: There's tremendous signalling value in these sorts of docs - they tell us there are a bunch of people thinking about AI ethics in a government agency in China, and given the structure and controlling nature of the Chinese state, this document carries more clout than ones emitted by Western governments. I'm imagining that in a few years we'll see China seek to push its own notion of AI ethics internationally, and I'm wondering whether Western governments will have made similar state-level investments to counterbalance this.
  Read more: The Ethical Norms for the New Generation Artificial Intelligence, China (China-UK research Centre for AI Ethics and Governance, blog).
  Read the original "New Generation of Artificial Intelligence Ethics Code" here (Ministry of Science and Technology of the People's Republic of China).

####################################################

ImageNet and What It Means:
...Can we learn something about the problems in AI development by studying one of the more widely used datasets?...
ImageNet is a widely-used dataset (see: PASS) with a bunch of problems. Now, researchers with Google and the Center for Applied Data Ethics have taken a critical look at the history of ImageNet, writing a research paper about its construction and the implicit politics of the way it was designed.

Data - and what matters: Much of the critique rests on the centrality of data to getting more performance out of machine learning systems. Put simply, the authors think the 'big data' phenomenon is bad and naturally leads to the creation of large-scale datasets that contain problematic elements. They also think the need for this data means most ethics arguments devolve into data arguments - for example, they note that "discursive concerns about fairness, accountability, transparency, and explainability are often reduced to concerns about sufficient data examples."

Why this matters: As the size of AI models has increased, researchers have needed to use more and more data to eke out better performance. This has led to a world where we're building datasets far larger than any single human could hope to analyze themselves - ImageNet is an early, influential example here. While it seems unlikely there's another path forward (unless we fundamentally alter the data efficiency of AI systems - which would be great, but also seems extremely hard), it's valuable to see people think through different ways to critique these things. I do, however, feel a bit grumpy that many critiques act as though there's a readily explorable way to build far more data-efficient systems - this doesn't seem to be the case.
  Read more: On the genealogy of machine learning datasets: A critical history of ImageNet (SAGE journals, Big Data & Society).

####################################################

AI Ethics, with Abhishek Gupta
…Here’s a new Import AI experiment, where Abhishek from the Montreal AI Ethics Institute and the AI Ethics Brief writes about AI ethics, and Jack will edit them. Feedback welcome!…

The struggle to put AI ethics into practice is significant
…Maybe we can learn from known best practices in audits and impact assessments…
Where we are: A paper from researchers with the University of Southampton examines how effective governance mechanisms, regulations, impact assessments, and audits are at achieving responsible AI. The authors looked through 169 publications focused on these areas and narrowed them down to 39 that offered practical tools usable in the production and deployment of AI systems. By building detailed typologies of these tools - impact assessments versus audits, internal versus external processes, design versus technical focus, and the stakeholders involved - the authors identified patterns in areas like information privacy, human rights, and data protection that can help make impact assessments and audits more effective.

Why it matters: There has been a Cambrian explosion of AI ethics publications, so the fact that fewer than 25% of them offered anything practical is shocking. The paper provides a comprehensive list of relevant stakeholders, but very few of the analyzed publications cover the entire AI lifecycle in their recommendations - a problem, because without a full lifecycle view the needs of some stakeholders may be left unarticulated and unmet. One heartening trend: a third of the impact assessments in the shortlist focus on procurement, which is good because a lot more organizations are going to be buying off-the-shelf systems than building their own. Looking ahead, one gap that remains is developing systems that can monitor deployed AI systems for ethics violations.
  Read more: Putting AI ethics to work: are the tools fit for purpose?

####################################################

Tech Tales:

Inside the Mind of an Ancient Dying God
[Sometime in the future]

The salvage crew nicknamed it 'the lump'. It was several miles across, heavily armored, and most of the way towards being dead. But some life still flickered within it - various sensors pinged the crew as they approached it in their little scout salvage rigs, and when they massed around what they thought was a door, a crack appeared and the door opened. They made their way inside the thing and found it to be like so many other alien relics - vast and inscrutable and clearly punishingly expensive.
  But it had some differences. For one thing, it had lights inside it that flashed colors in reds and blues and greens, and didn't penetrate much outside human perceptive range. It also seemed to be following the humans as they went through its innards, opening doors as they approached them, and closing them behind them. Yet they were still able to communicate with the salvage ships outside the great structure - something not naturally afforded by their comms gear, suggesting they were being helped in some way by the structure.

There was no center to it, of course. Instead they walked miles of corridors, until they found themselves back around where they had started. And as they'd walked the great structure, more lights had come on, and the lights had started to form images reflecting the spacesuited-humans back at themselves. It appeared they were being not only watched, but imagined.

Their own machines told them that the trace power in the structure was not easily accessible - nor was the power source obvious. And some preliminary tests on the materials inside it found that, as with most old alien technology, it resisted any attempt to cut it for samples. Put plainly: they couldn't take any of the structure with them when they left, unless they wanted to ram one of their ships into it to see if that released enough energy to break the structure.
  So they did what human explorers had been doing for millennia - left a mark on a map, named the thing they didn't understand for others (after much discussion, they called it 'The Great Protector' instead of 'the lump'), and then they left the system, off to explore anew. As they flew away from the great structure, they felt as though they were being watched. And they were.

Things that inspired this story: Thinking about AI systems whose main job is to model and imagine the unusual; theism and computation; space exploration; the inscrutability of others; timescales and timelines.



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me: @jackclarksf

