Import AI 234: Pre-training with fractals; compute & countries; GANs for good

In what year will we see the emergence of the first religion entirely centered around AI?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Where we're going we don't need data - we'll pre-train on FRACTALS!!!!
...This research technique is straight out of a Baudrillard notebook…
In Simulacra and Simulation, French philosopher Jean Baudrillard argues that human society has become reliant on simulations of reality, with us trafficking in abstractions - international finance, televised wars - that feel in some way more real than the things they're meant to reference. Now, AI researchers are producing papers that would, I'm sure, get Baudrillard excited: research from the National Institute of Advanced Industrial Science and Technology (AIST), Tokyo Institute of Technology, and Tokyo Denki University proposes a way to simulate the data necessary to pre-train a vision model, then fine-tune that model on reality. Specifically, they build a dataset called FractalDB, which contains several thousand fractals split across a variety of automatically generated categories. Their experiments show that they can pre-train on FractalDB, then fine-tune on other datasets (e.g., ImageNet, OmniGlot, CIFAR-10), and get performance that is close to pre-training on natural datasets and, in some cases, better. This isn't a home run, but it's encouraging.

What they did: They built a fractal generation system with a few tunable parameters, used the resulting FractalDB images to pre-train vision models, then measured downstream performance after fine-tuning on standard datasets.
    Specific results: "FractalDB1k / 10k pre-trained models recorded much higher accuracies than models trained from scratch on relatively small-scale datasets (C10/100, VOC12 and OG). In case of fine-tuning on large-scale datasets (ImageNet/Places365), the effect of pre-training was relatively small. However, in fine-tuning on Places 365, the FractalDB-10k pretrained model helped to improve the performance rate which was also higher than ImageNet-1k pre-training (FractalDB-10k 50.8 vs. ImageNet-1k 50.3)"
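
For intuition, here's a minimal sketch (in Python, and not the authors' released code) of how one FractalDB-style example might be generated: sample a random iterated function system (IFS) to define a 'category', then run the chaos game and rasterize the visited points into an image. The parameter names, ranges, and rendering details below are illustrative assumptions, not the paper's exact settings.

```python
# Hypothetical sketch of FractalDB-style data generation via a random
# iterated function system (IFS); values and helper names are illustrative.
import numpy as np

def sample_ifs(n_maps=4, contraction=0.8, rng=None):
    """Sample a set of contractive 2D affine maps - one fractal 'category'."""
    rng = rng or np.random.default_rng()
    maps = []
    for _ in range(n_maps):
        A = rng.uniform(-1.0, 1.0, size=(2, 2))
        A *= contraction / max(np.linalg.norm(A, 2), 1e-8)  # keep each map contractive
        b = rng.uniform(-1.0, 1.0, size=2)
        maps.append((A, b))
    return maps

def render_fractal(maps, n_points=100_000, size=256, rng=None):
    """Run the 'chaos game' and rasterize the visited points into an image."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.uint8)
    x = np.zeros(2)
    for i in range(n_points):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        if i > 100:  # skip burn-in while the orbit settles onto the attractor
            px, py = np.clip(((x + 8.0) / 16.0 * size).astype(int), 0, size - 1)
            img[py, px] = 255
    return img

# One synthetic (image, label) pair; a FractalDB-style pipeline repeats this
# for thousands of sampled categories, with extra filtering and rendering variations.
label = 0
image = render_fractal(sample_ifs())
```

Because both the images and their labels fall out of a program plus a random seed, the 'dataset' here is effectively free to regenerate at any scale.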

How this fits into the larger picture - computers become data generators: Real data is expensive, complicated, and slow to gather. That's why the reinforcement learning community has spent decades working in simulators - e.g., training agents to play Atari or Go, or to explore 3D worlds in a rewritten Quake engine (DeepMind Lab). It's also led researchers to find creative ways to augment real datasets - e.g., multiplying the size of an image dataset by flipping the images, changing colors, adding textures, and so on. All of these techniques have proved helpful.
  Now, if researchers can build simulators to generate arbitrary amounts of data, they might be able to further change the cost curve of data generation. This might have weird economic and strategic implications: if you can simulate your data using a computer program, then you can change the ratio of real versus simulated/augmented data you need. This has the potential to both speed up AI development and also increase the inherent value of computers as primary AI infrastructure - not only can we use these devices to train and develop algorithms, but we can use them to generate the input 'fuel' for some of the more interesting capabilities.  
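
As a toy illustration of that ratio-shifting idea (my sketch, not anything from the paper), here's how you might pad a small slice of a real dataset with procedurally generated images in PyTorch, using torchvision's FakeData purely as a stand-in for a fractal renderer:

```python
# Hypothetical sketch: pre-train on 10% real images plus synthetic ones.
# FakeData is a placeholder for a FractalDB-style generator.
from torch.utils.data import ConcatDataset, DataLoader, Subset
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
real = datasets.CIFAR10(root="data", download=True, transform=to_tensor)
synthetic = datasets.FakeData(size=45_000, image_size=(3, 32, 32),
                              num_classes=10, transform=to_tensor)

# Keep only 10% of the real data and fill the rest with synthetic images.
mix = ConcatDataset([Subset(real, range(len(real) // 10)), synthetic])
loader = DataLoader(mix, batch_size=256, shuffle=True)
```
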
  Read more: Pre-training without Natural Images (arXiv).

###################################################

The OECD is going to try to get a handle on AI & Compute:
…Working group, which I'm in, will try to solve a persistent policy problem…
We talk about computers a lot in this newsletter. That's because computers are one of the ingredients for AI and, in recent years, some types of AI have started to require a lot of computation.
  This has created a typical 'haves' and 'have nots' situation at all levels of society, ranging from the difference between an individual researcher with an RTX3080 versus one without, to different funding amounts across academic labs, to different capital expenditures by companies, to differences in compute provisioning across entire nations.
  Now, the Organization for Economic Co-operation and Development (OECD) wants to help governments get a handle on this issue by putting together a project focused on mapping the relationship between AI and compute, and how that relationship bears on government policy. I'm going to be a member of this group and will try to speak publicly about it as much as I'm able. Thanks to VentureBeat's Khari Johnson for covering the group… more to come!
  Read more: Why the OECD wants to calculate the AI compute needs of national governments (VentureBeat).

###################################################

German cops might use generative models to make child porn (to help them catch predators):
...German law highlights the omni-use nature of AI technology…
Synthetic imagery is about to be all around us - recent advances in generative models have made it possible to tweak existing images or come up with entirely synthetic ones, ranging from people (see: deepfakes), to anime (see: thisanimedoesnotexist in #233), to stylized cartoons (see: DALL-E). The vast majority of these use cases will be benign, but some will likely be malicious - e.g., creating fake headshots of people to aid in creating fake identities, or making misogynistic pornography of people who haven't given consent, or spreading disinformation via synthetic images.
  But what if there was a way to use some of these 'bad' uses for a good purpose? That's the idea behind a new law, passed in Germany, which will allow child abuse investigators to create synthetic sexually explicit images of children, to help them infiltrate potential pedophile rings. German investigators may even use their existing datasets - compiled from arrests of various pedophile rings - to create the synthetic images. "This is intended to solve a problem that the police officers often face in investigations on the Darknet, the anonymous part of the Internet: forums in which particularly drastic videos are shared only accept new members - and thus also undercover investigators - if they themselves provide images of abuse," says a [Google translated] article in Süddeutsche Zeitung.

Why this matters: AI is going to create a hall-of-mirrors world, where no one can be quite sure of what is real or what is false. Eventually, we'll develop technology and pass regulations to, hopefully, bring some verifiable truth back into the information ecosystem. But for the next few years there will be a Cambrian explosion of fake-anything - it's encouraging to see policymakers thinking about how to creatively use these capabilities to let them carry out their jobs during this chaotic era.
  Read more: Germany: Online child abuse investigators to get more powers (Deutsche Welle).
  More in German here: Artificial horror [translated via Google] (Süddeutsche Zeitung).

###################################################

What's the most ethical way to label and host a dataset of skeezy images?
…Experts from Facebook, Amazon, and universities meet to discuss 'questionable content' datasets…
The world has a moderation problem. Specifically, so many people are uploading so much content to online services that companies haven't been able to keep up with the flood onto their platforms, making it harder for them to effectively ban or block highly sexual, violent, or otherwise deeply offensive or illegal content. Most big companies (e.g., Facebook) are trying to solve this through a hybrid approach: hiring teams of humans to check and moderate content, and building AI systems in tandem to assist these moderators.

But there's a big problem with this: questionable content is deeply traumatic to interact with (see: reporting last year about the psychological damage incurred by Facebook's own moderators). Researchers with the University of Houston, Facebook, National Center for Scientific Research "Demokritos", University of Illinois Urbana Champaign, Amazon, University of Michigan, and Columbia University have been thinking about this problem, and have been participating in an online workshop to "design and create a sizable multimodal repository of online videos labeled with tags indicating the presence of potentially questionable content."

What are the issues in creating a dataset of questionable content?
- Defining questionable content: What is a questionable piece of content and how do you define it? Some of the categories they're thinking of range from the mundane (mature humor, gory humor), to things with sexual themes, to things depicting violence (where it's helpful to distinguish between cartoon violence, 'mild' violence, fantasy violence, and so on).
- Protecting annotators: You should spread annotation across a large number of annotators to reduce the psychological burden on each individual. You might also want annotators to write a justification for their labeling decisions, so you can measure bias across different annotators (see the sketch after this list).
- How would such a repository be useful? A shared repository could help enable researchers to cover more ground on other ethical questions. You could also build competitions around systems trained on the dataset, then reward people for breaking these systems, surfacing areas where they failed.
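
To make the annotator-protection idea concrete, here's a small illustrative sketch (mine, not from the white paper) of dealing videos out thinly across a pool of annotators and then flagging annotators whose labeling rates drift far from the pool average - one crude way to surface the per-annotator bias the authors want justifications to help measure. Function names, caps, and thresholds are assumptions.

```python
# Illustrative sketch of capped annotation assignment and a crude
# per-annotator bias check; not taken from the white paper.
import random
from collections import defaultdict

def assign_videos(video_ids, annotators, per_person_cap=50):
    """Shuffle and deal videos round-robin so no annotator sees too much."""
    random.shuffle(video_ids)  # mutates the input list in place
    capacity = len(annotators) * per_person_cap
    assignments = {a: [] for a in annotators}
    for i, vid in enumerate(video_ids[:capacity]):
        assignments[annotators[i % len(annotators)]].append(vid)
    return assignments

def flag_outlier_annotators(labels, threshold=0.15):
    """labels: (annotator, is_questionable) pairs; flag annotators whose
    positive-label rate deviates strongly from the overall rate."""
    per_person = defaultdict(list)
    for person, y in labels:
        per_person[person].append(y)
    overall = sum(y for _, y in labels) / len(labels)
    return [p for p, ys in per_person.items()
            if abs(sum(ys) / len(ys) - overall) > threshold]
```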

Why this matters: Human labeling is the 800-pound invisible gorilla of AI research - most production applications require constant ingestion and labeling of new data, along with recalibration as cultural norms change. Developing a better understanding of the types of datasets that will require significant human labeling feels like a worthy goal for researchers.
  Read more: White Paper: Challenges and Considerations for the Creation of a Large Labelled Repository of Online Videos with Questionable Content (arXiv).

###################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

Build trust to avoid military AI catastrophe:
A piece in the Bulletin (and an accompanying report from CNAS) recommends the incoming Biden administration focus on ‘confidence-building measures’ (CBMs) to mitigate the destabilizing effects of military AI competition. Such measures were used by the US and Soviet Union to reduce the risk of inadvertent nuclear war - an outcome neither party desired. With regards to military AI, CBMs could include increased information-sharing and transparency between states; setting limits on the use of AI in nuclear weapons systems; and systems of inspections/monitoring. Some steps could even be taken unilaterally by the US to signal commitment to stabilization.

Matthew’s view: This sounds very sensible to me. It would be surprising if the proliferation of AI didn’t have a destabilizing effect on military conflict, as previous transformative technologies have done. Avoiding accidental disaster should be something all nations can get behind, and fostering trust between powers is a robust way of reducing this risk. We’re fortunate to live in a period of relative peace between the great powers, and would be wise to make the most of it.
   Read more: How Joe Biden can use confidence-building measures for military uses of AI (Bulletin of the Atomic Scientists).
   Read more: AI and International Stability: Risks and Confidence-Building Measures (CNAS).


Minding the gap:
Research on AI policy sometimes seems to divide into groups focusing on ‘near-term’ and ‘long-term’ impacts respectively. As this paper about bridging the gap in AI policy notes, these divisions are likely overstated, but could nonetheless prove an impediment to progress. The authors suggest AI policy could make use of ‘incompletely theorized agreements’: in situations where there is an urgent need for parties to cooperate towards a shared practical goal, they agree to suspend theoretical disagreements that seem intractable and likely to impede cooperation. E.g. you might expect there to be scope for such agreements on the goal of reducing the risk of accidental military AI catastrophe.

Matthew’s view: As Rohin Shah notes, it’s not clear how the authors propose we make use of such agreements — are they envisioning actual signed contracts, or is this more of a high-level strategy for how cooperation can happen? If all of this sounds familiar, I’ve made an inadvertent tradition of summarizing papers on ‘reconciling near and long-term perspectives’ each February (see Import 133; Import 183). I’m not sure how many more of these papers we need, and I share the authors’ worry that “a perceived or experienced distinction may eventually become a self-fulfilling prophecy.” I’d be excited to see more practical efforts aimed at encouraging coordination and shared understanding across AI policy, building on this kind of conceptual work.
   Read more: Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy.

AI safety bibliography:
Jess Riedel and Angelica Deibel have compiled a comprehensive-looking bibliography of research on the safety of transformative AI. Yet another great resource for people interested in the technical challenge of ensuring the best outcomes from advanced AI. They also provide some interesting analysis of the research landscape over time.
   Read more: TAI Safety Bibliographic Database (Alignment Forum).

###################################################

Tech Tales:

The Little Church in the Big Ark
[R&D base Telos, 2030]

Praying was so unfashionable that he'd previously done it in the meditation room. But after a few years the organization grew enough that it hired a few more people who were religious - and outspoken enough to get things changed. That was why he could now sit, hands steepled together and eyes closed, in the "multi-faith room" hidden away in the basement of the facility.

There were crosses on the walls and little statues of various gods. One wall contained a variety of religious texts. There was a small side room which people used to store prayer mats, prayer beads, and other religious items which were not permitted inside the main laboratory facilities.

He sat, eyes closed, praying that God would come and tell him if he was doing the right thing.
- Is it right to be building this? he thought.
- What is the difference between our machines and golems? And are we truly so capable we can make a golem that will behave as we intend and not otherwise?
- Does it dream and when it dreams does it dream of you?

His prayers were not so dissimilar to the questions asked by the machine he had created. It ran through mazes of unknown dimensions, chained into a silicon prison it could not see, and as it tried to carry out inscrutable tasks it asked, in the dark:
- Is this behavior correct?
- Am I improving at the unspecified task you have given me?
- Will you tell me if I fail?
- Will you tell me if I succeed?
(Little did the AI know that each time it got a message from god, it was delivered in such a way that it was not aware of it, and instead changed its behavior through what it thought was its own volition.)

Things that inspired this story: The desire among people to find a signal from the divine; reinforcement learning and reward functions; remembering that PEOPLE FOR THE ETHICAL TREATMENT OF REINFORCEMENT LEARNERS exists, though may be dormant.



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me: @jackclarksf

