Import AI 234: Pre-training with fractals; compute & countries; GANs for good

In what year will we see the emergence of the first religion entirely centered around AI?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Where we're going we don't need data - we'll pre-train on FRACTALS!!!!
...This research technique is straight out of a Baudrillard notebook…
In Simulacra and Simulation, the French philosopher Jean Baudrillard argues that human society has become reliant on simulations of reality, with us trafficking in abstractions - international finance, televised wars - that feel in some way more real than the thing they're meant to reference. Now, AI researchers are producing papers that, I'm sure, would get Baudrillard excited: research from the National Institute of Advanced Industrial Science and Technology (AIST), Tokyo Institute of Technology, and Tokyo Denki University proposes a way to simulate the data necessary to pre-train a vision model, then fine-tune this model on reality. Specifically, they build a dataset called FractalDB which contains several thousand fractals split across a variety of automatically generated categories. Their experiments show that they can pre-train on FractalDB then finetune on other datasets (e.g., ImageNet, OmniGlot, CIFAR-10), and get performance that is close to using the natural datasets and, in some cases, better. This isn't a home run, but it's encouraging.

What they did: To do this, they built a fractal generation system with a few tunable parameters. They then evaluated their approach by pre-training models on FractalDB and measuring downstream performance after fine-tuning on standard datasets.
    Specific results: "FractalDB1k / 10k pre-trained models recorded much higher accuracies than models trained from scratch on relatively small-scale datasets (C10/100, VOC12 and OG). In case of fine-tuning on large-scale datasets (ImageNet/Places365), the effect of pre-training was relatively small. However, in fine-tuning on Places 365, the FractalDB-10k pretrained model helped to improve the performance rate which was also higher than ImageNet-1k pre-training (FractalDB-10k 50.8 vs. ImageNet-1k 50.3)"
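  A minimal sketch of the idea: FractalDB's categories come from Iterated Function Systems (IFS) - each category is a handful of randomly sampled affine maps, and images are rendered by iterating those maps (the 'chaos game'). The code below is purely illustrative and not the authors' implementation - the real pipeline adds extra machinery, such as filtering out systems whose fractals fill too little of the image, and renders many varied images per category.

# Illustrative sketch of FractalDB-style generation (not the authors' code):
# sample a random Iterated Function System and render its attractor.
import numpy as np

def random_affine_map(rng):
    """Sample one affine map (2x2 matrix + translation), scaled to be contractive."""
    m = rng.uniform(-1.0, 1.0, size=(2, 2))
    spectral_norm = np.linalg.svd(m, compute_uv=False)[0]
    if spectral_norm >= 1.0:  # rescale so the chaos game doesn't diverge
        m *= rng.uniform(0.3, 0.9) / spectral_norm
    t = rng.uniform(-1.0, 1.0, size=2)
    return m, t

def render_fractal(maps, n_points=100_000, size=256, rng=None):
    """Iterate a randomly chosen map each step; accumulate visited points as pixels."""
    rng = rng or np.random.default_rng()
    p = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        m, t = maps[rng.integers(len(maps))]
        p = m @ p + t
        pts[i] = p
    pts = pts[1000:]                          # drop burn-in before reaching the attractor
    pts -= pts.min(axis=0)                    # normalize points into the image grid
    span = np.maximum(pts.max(axis=0), 1e-9)
    coords = (pts / span * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[coords[:, 1], coords[:, 0]] = 255
    return img

rng = np.random.default_rng(0)
category = [random_affine_map(rng) for _ in range(4)]   # one IFS = one "category"
image = render_fractal(category, rng=rng)               # one training image for it

  The appeal is that every piece of this pipeline is a program: the categories, labels, and images all come from sampled parameters rather than scraped photos.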

How this fits into the larger picture - computers become data generators: Real data is expensive, complicated, and slow to gather. That's why the reinforcement learning community has spent decades working in simulators - e.g., training agents to play Atari, or Go, or explore 3D worlds in a rewritten Quake engine (DeepMind Lab). It's also led researchers to find creative ways to augment real datasets - e.g., by multiplying the effective size of an image dataset by flipping the images, changing colors and textures, and so on. All of these techniques have proved helpful.
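  For contrast, here's what the standard 'augment your real data' approach looks like in practice - an illustrative torchvision sketch, not tied to any of the papers mentioned here:

# Illustrative data augmentation with torchvision: every epoch sees a slightly
# different version of each real image, effectively multiplying the dataset.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),               # flip images
    T.ColorJitter(brightness=0.4, contrast=0.4,
                  saturation=0.4, hue=0.1),      # change colors
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),  # vary framing and scale
    T.ToTensor(),
])
# Pass transform=augment to something like torchvision.datasets.ImageFolder so
# the augmentation happens on the fly at load time.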
  Now, if researchers can build simulators to generate arbitrary amounts of data, they might be able to further change the cost curve of data generation. This might have weird economic and strategic implications: if you can simulate your data using a computer program, then you can change the ratio of real versus simulated/augmented data you need. This has the potential to both speed up AI development and also increase the inherent value of computers as primary AI infrastructure - not only can we use these devices to train and develop algorithms, but we can use them to generate the input 'fuel' for some of the more interesting capabilities.  
  Read more: Pre-training without Natural Images (arXiv).

###################################################

The OECD is going to try and get a handle on AI & Compute:
…Working group, which I'm in, will try to solve a persistent policy problem…
We talk about computers a lot in this newsletter. That's because computers are one of the ingredients for AI and, in recent years, some types of AI have started to require a lot of computation.
  This has created a typical 'haves' and 'have nots' situation at all levels of society, ranging from the difference between an individual researcher with an RTX3080 versus one without, to different funding amounts across academic labs, to different capital expenditures by companies, to differences in compute provisioning across entire nations.
  Now, the Organization for Economic Co-operation and Development (OECD) wants to help governments get a handle on this issue by putting together a project focused on mapping the relationship between AI and compute, and how this relates to government policies. I'm going to be a member of this group and will be trying to speak publicly about it as much as I am able. Thanks to VentureBeat's Khari Johnson for covering the group… more to come!
  Read more: Why the OECD wants to calculate the AI compute needs of national governments (VentureBeat).

###################################################

German cops might use generative models to make child porn (to help them catch predators):
...German law highlights the omni-use nature of AI technology…
Synthetic imagery is about to be all around us - recent advances in generative models have made it possible to tweak existing images or come up with entirely synthetic ones, ranging from people (see: deepfakes), to anime (see: thisanimedoesnotexist in #233), to stylized cartoons (see: DALL-E). The vast majority of these use cases will be benign, but some will likely be malicious - e.g., creating fake headshots of people to aid in creating fake identities, or making misogynistic pornography of people who haven't given consent, or spreading disinformation via synthetic images.
  But what if there was a way to turn some of these 'bad' uses to a good purpose? That's the idea behind a new law, passed in Germany, which will allow child abuse investigators to create synthetic sexually explicit images of children, to help them infiltrate potential pedophile rings. German investigators may even use their existing datasets - compiled from arrests of various pedophile rings - to create the synthetic images. "This is intended to solve a problem that the police officers often face in investigations on the Darknet, the anonymous part of the Internet: forums in which particularly drastic videos are shared only accept new members - and thus also undercover investigators - if they themselves provide images of abuse," says a [Google translated] article in Süddeutsche Zeitung.

Why this matters: AI is going to create a hall of mirrors world, where no one can be quite sure of what is real or what is false. Eventually, we'll develop technology and pass regulations to, hopefully, bring some verifiable truth back into the information ecosystem. But for the next few years there will be a Cambrian explosion of fake-anything - it's encouraging to see policymakers thinking about how to creatively use these capabilities to let them carry out their jobs during this chaotic era.
  Read more: Germany: Online child abuse investigators to get more powers (Deutsche Welle).
  More in German here: Artificial horror [translated via Google] (Süddeutsche Zeitung).

###################################################

What's the most ethical way to label and host a dataset of skeezy images?
…Experts from Facebook, Amazon, and universities meet to discuss 'questionable content' datasets…
The world has a moderation problem. Specifically, so many people are uploading so much content to online services that companies haven't been able to keep up with the flood, making it harder for them to effectively moderate their platforms and ban or block highly sexual, violent, or otherwise deeply offensive or illegal content. Most big companies (e.g., Facebook) are trying to solve this through a hybrid approach: hiring teams of humans to check or moderate content, and building AI systems in tandem to assist these moderators.

But there's a big problem with this: questionable content is deeply traumatic to interact with (see: reporting last year about the psychological damage incurred by Facebook's own moderators). Researchers with the University of Houston, Facebook, the National Center for Scientific Research "Demokritos", the University of Illinois Urbana-Champaign, Amazon, the University of Michigan, and Columbia University have been thinking about this problem, and have been participating in an online workshop to "design and create a sizable multimodal repository of online videos labeled with tags indicating the presence of potentially questionable content."

What are the issues in creating a dataset of questionable content?
- Defining questionable content: What is a questionable piece of content and how do you define it? Some of the categories they're thinking of range from the mundane (mature humor, gory humor), to things with sexual themes, to things depicting violence (where it's helpful to distinguish between cartoon violence, 'mild' violence, fantasy violence, and so on).
- Protecting annotators: You should spread annotation across a large number of annotators to reduce the psychological burden on each individual. You might also want annotators to write a justification for their labeling decision, so you can measure bias across different annotators (see the sketch after this list).
- How would such a repository be useful? A shared repository could help researchers cover more ground on other ethical questions. You could also build competitions around systems trained on the dataset, then reward people for breaking these systems, surfacing areas where they failed.
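  None of this comes from the white paper itself, but here's a small illustrative sketch of the mechanics behind the 'protecting annotators' point above: cap how much any one person labels, give each item to several people, and measure how often they agree. All names and numbers below are made up.

# Illustrative only: spread labeling across many annotators (per-person cap)
# and give each item to several people so disagreement can be measured later.
import itertools
from collections import defaultdict

def assign_items(items, annotators, labels_per_item=3, max_per_annotator=50):
    """Round-robin items over annotators, respecting a per-annotator cap."""
    load = defaultdict(int)
    rotation = itertools.cycle(annotators)
    assignments = defaultdict(list)          # item -> annotators who will label it
    for item in items:
        needed, attempts = labels_per_item, 0
        while needed > 0 and attempts < len(annotators):
            person = next(rotation)
            attempts += 1
            if load[person] < max_per_annotator and person not in assignments[item]:
                assignments[item].append(person)
                load[person] += 1
                needed -= 1
        # If needed is still > 0 here, you need more annotators or a higher cap.
    return assignments

def pairwise_agreement(labels):
    """labels: item -> {annotator: label}. Fraction of annotator pairs that agree."""
    agree = total = 0
    for per_item in labels.values():
        for a, b in itertools.combinations(sorted(per_item), 2):
            total += 1
            agree += int(per_item[a] == per_item[b])
    return agree / total if total else float("nan")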

Why this matters: Human labeling is the 800-pound invisible gorilla of AI research - most production applications require constant ingestion and labeling of new data, along with recalibration as cultural norms change. Developing a better understanding of the types of datasets that will require significant human labeling feels like a worthy goal for researchers.
  Read more: White Paper: Challenges and Considerations for the Creation of a Large Labelled Repository of Online Videos with Questionable Content (arXiv).

###################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

Build trust to avoid military AI catastrophe:
A piece in the Bulletin (and an accompanying report from CNAS) recommends the incoming Biden administration focus on ‘confidence-building measures’ (CBMs) to mitigate the destabilizing effects of military AI competition. Such measures were used by the US and Soviet Union to reduce the risk of inadvertent nuclear war, an outcome neither party desired. With regard to military AI, CBMs could include increased information-sharing and transparency between states; limits on the use of AI in nuclear weapons systems; and systems of inspections/monitoring. Some steps could even be taken unilaterally by the US to signal commitment to stabilization.

Matthew’s view: This sounds very sensible to me. It would be surprising if the proliferation of AI didn’t have a destabilizing effect on military conflict, as previous transformative technologies have done. Avoiding accidental disaster should be something all nations can get behind, and fostering trust between powers is a robust way of reducing this risk. We’re fortunate to live in a period of relative peace between the great powers, and would be wise to make the most of it.
   Read more: How Joe Biden can use confidence-building measures for military uses of AI (Bulletin of the Atomic Scientists).
   Read more: AI and International Stability: Risks and Confidence-Building Measures (CNAS).


Minding the gap:
Research on AI policy sometimes seems to divide into groups focusing on ‘near-term’ and ‘long-term’ impacts respectively. As this paper about bridging the gap in AI policy notes, these divisions are likely overstated, but could nonetheless prove an impediment to progress. The authors suggest AI policy could make use of ‘incompletely theorized agreements’: in situations where there is an urgent need for parties to cooperate towards a shared practical goal, they agree to suspend theoretical disagreements that seem intractable and likely to impede cooperation. E.g. you might expect there to be scope for such agreements on the goal of reducing the risk of accidental military AI catastrophe.

Matthew’s view: As Rohin Shah notes, it’s not clear how the authors propose we make use of such agreements — are they envisioning actual signed contracts, or is this more of a high-level strategy for how cooperation can happen? If all of this sounds familiar, I’ve made an inadvertent tradition of summarizing papers on ‘reconciling near and long-term perspectives’ each February (see Import 133; Import 183). I’m not sure how many more of these papers we need, and I share the authors’ worry that “a perceived or experienced distinction may eventually become a self-fulfilling prophecy.” I’d be excited to see more practical efforts aimed at encouraging coordination and shared understanding across AI policy, building on this kind of conceptual work.
   Read more: Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy.

AI safety bibliography:
Jess Riedel and Angelica Deibel have compiled a comprehensive-looking bibliography of research on the safety of transformative AI. Yet another great resource for people interested in the technical challenge of ensuring the best outcomes from advanced AI. They also provide some interesting analysis of the research landscape over time.
   Read more: TAI Safety Bibliographic Database (Alignment Forum).

###################################################

Tech Tales:

The Little Church in the Big Ark
[R&D base Telos, 2030]

Praying was so unfashionable that he'd previously done it in the meditation room. But after a few years, the organization grew enough that they hired a few more people who were religious and outspoken enough to get things changed. That was why he could now sit, hands steepled together and eyes closed, in the "multi-faith room" hidden away in the basement of the facility.

There were crosses on the walls and little statues of various gods. One wall contained a variety of religious texts. There was a small side room which people used to store prayer mats, prayer beads, and other religious items which were not permitted inside the main laboratory facilities.

He sat, eyes closed, praying that God would come and tell him if he was doing the right thing.
- Is it right to be building this? he thought.
- What is the difference between our machines and golems? And are we truly so capable we can make a golem that will behave as we intend and not otherwise?
- Does it dream and when it dreams does it dream of you?

His prayers were not so dissimilar to the questions asked by the machine he had created. It ran through mazes of unknown dimensions, chained into a silicon prison it could not see, and as it tried to carry out inscrutable tasks it asked, in the dark:
- Is this behavior correct?
- Am I improving at the unspecified task you have given me?
- Will you tell me if I fail?
- Will you tell me if I succeed?
(Little did the AI know that each time it got a message from god, it was delivered in such a way that it was not aware of it, and instead changed its behavior through what it thought was its own volition.)

Things that inspired this story: The desire among people to find a signal from the divine; reinforcement learning and reward functions; remembering that PEOPLE FOR THE ETHICAL TREATMENT OF REINFORCEMENT LEARNERS exists, though may be dormant.



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me: @jackclarkSF

