Import AI 278: Can we ever trust an AI?; what the future of semiconductors looks like; better images of AI

Given the pace of progress in generative AI, how long until people will be able to generate their own customized feature-length films on command? 

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Writing a blog about AI? Use these images:
…No more galaxy brain!…
Here's a cool project: Better Images of AI, an initiative to create CC-licensed stock images that journalists and others can use to give people a more accurate sense of AI and how it works. "Together we can increase public understanding and enable more meaningful conversation around this increasingly influential technology," says the website.
  Check out the gallery (Better Images of AI).

####################################################

Deepfake company raises $50m in Series B round:
…Synthetic video company Synthesia…
Synthetic video startup Synthesia has raised $50m. Remember, a few years ago we could barely create crappy 32x32 pixelated images using GANs. Now, there are companies like this making production-quality videos using fake video avatars with synthetic voices, able to speak in ~50 languages. "Say goodbye to cameras, microphones and actors!" says the copy on the company's website. The company will use the money to continue its core R&D, building what the founder terms the "next generation of our AI video technology w/ emotions & body language control." It's also going to build a studio in London to "capture detailed 3D human data at scale."

Why this matters: The world is filling up with synthetic content. It's being made for a whole bunch of reasons, ranging from propaganda, to advertising, to creating educational materials. There's also a whole bunch of people doing it, ranging from individual hobbyists, to researchers, to companies. The trend is clear: in ten years, our reality will be perfectly intermingled with a synthetic reality, built by people according to economic (and other) incentives.
  Read the Twitter thread from Synthesia's CEO here (Twitter).
  Read more: Synthesia raises $50M to leverage synthetic avatars for corporate training and more (TechCrunch).

####################################################

Do language models dream of language models?
…A Google researcher tries to work out if big LMs are smart - their conclusions may surprise you…
A Google researcher is grappling with the question of whether large language models (e.g., Google's LaMDA) understand language and have some level of sentience. In an entertaining blog post, he wrestles with this question, interspersing the post with conversations with a LaMDA agent. One of his conclusions is that the model is essentially bullshitting - but the paradox is that we trained it to give a convincing facsimile of understanding us, so perhaps bullshitting is the logical outcome?

Do language models matter? I get the feeling that the author thinks language models might be on the path to intelligence. "Complex sequence learning may be the key that unlocks all the rest," they write. "Large language models illustrate for the first time the way language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals."

Why this matters: I think large language models, like GPT-3 or LaMDA, are like extremely dumb brains in jars with really thick glass - they display some symptoms of cognition and are capable of surprising us, but communicating with them feels like talking to something through a hard barrier, and sometimes it'll do something so dumb you remember it's a dumb brain in a weird jar, rather than a precursor to something super smart. But the fact that we're here in 2021 is pretty amazing, right? We've come a long way from ELIZA, don't you think?
  Read more: Do large language models understand us? (Blaise Aguera y Arcas, Medium).

####################################################

What the frontier of safety looks like - getting AIs to tell us when they're doing things we don't expect:
…ARC's first paper tackles the problem of 'Eliciting Latent Knowledge' (ELK)…
Here's a new report from ARC, an AI safety organization founded this year by Paul Christiano (formerly of OpenAI). The report, 'Eliciting latent knowledge: How to tell if your eyes deceive you', tackles the problem of building AI systems we can trust even when they do things far more complicated than a human can understand.

What the problem is: "Suppose we train a model to predict what the future will look like according to cameras and other sensors," ARC writes. "We then use planning algorithms to find a sequence of actions that lead to predicted futures that look good to us. But some action sequences could tamper with the cameras so they show happy humans regardless of what's really happening. More generally, some futures look great on camera but are actually catastrophically bad. In these cases, the prediction model 'knows' facts (like 'the camera was tampered with') that are not visible on camera but would change our evaluation of the predicted future if we learned them. How can we train this model to report its latent knowledge of off-screen events?"
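
To make the shape of the problem concrete, here's a minimal toy sketch (my illustration, not ARC's method) of the two-model setup the report reasons about: a 'predictor' that models future camera frames, plus a separate 'reporter' head trained to surface what the predictor internally knows. Every name, shape, and the synthetic 'tampering' signal below is hypothetical:

```python
# Minimal toy sketch of the ELK setup; all names/shapes are hypothetical.
import torch
import torch.nn as nn

OBS_DIM, LATENT_DIM = 16, 8

class Predictor(nn.Module):
    """Predicts the next camera observation from the current one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, LATENT_DIM), nn.ReLU())
        self.decoder = nn.Linear(LATENT_DIM, OBS_DIM)

    def forward(self, obs):
        z = self.encoder(obs)  # internal state: may encode off-screen facts
        return self.decoder(z), z

class Reporter(nn.Module):
    """Reads the predictor's internal state and answers a yes/no question,
    e.g. 'was the camera tampered with?'"""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(LATENT_DIM, 1)

    def forward(self, z):
        return torch.sigmoid(self.head(z))

predictor, reporter = Predictor(), Reporter()
opt = torch.optim.Adam(
    list(predictor.parameters()) + list(reporter.parameters()), lr=1e-3)

for step in range(1000):
    obs = torch.randn(32, OBS_DIM)                   # stand-in for camera frames
    tampered = (obs[:, 0] > 0).float().unsqueeze(1)  # hypothetical hidden fact
    next_obs = obs + 0.1                             # stand-in world dynamics

    pred, z = predictor(obs)
    answer = reporter(z.detach())  # reporter reads, but doesn't steer, the predictor
    prediction_loss = nn.functional.mse_loss(pred, next_obs)
    # We can only label 'tampered' in easy cases a human can check; the open
    # problem ELK names is getting honest answers in the cases we can't label.
    report_loss = nn.functional.binary_cross_entropy(answer, tampered)
    loss = prediction_loss + report_loss
    opt.zero_grad(); loss.backward(); opt.step()
```

The sketch captures the setup but dodges the hard part: nothing here forces the reporter to stay honest on the cases humans can't verify, which is exactly the gap the report is about.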

Why this matters: Problems like ELK aren't going to be solved immediately, but they're sufficiently complicated and broad that any approach which makes progress on them will probably also help us build far more reliable, powerful AI systems.
  Read more: ARC's first technical report: Eliciting Latent Knowledge (Alignment Forum).

####################################################

Check out the future of semiconductors via HotChips:
…After a decade of homogeneity, the future is all about heterogeneous compute training increasingly similar AI models…
What do NVIDIA, Facebook, Amazon, and Google all have in common? They all gave presentations at the premier semiconductor get-together, Hot Chips. The Hot Chips 33 site has just been updated with copies of the presentations and, in some cases, videos of the talks, so take a look if you want to better understand how the tech giants are thinking about the future of chips.

Some Hot Chips highlights: Facebook talks about its vast recommendation models and their associated infrastructure (PDF); Google talks about how it is training massive models on TPUs (PDF); IBM talks about its 'Z' processor chip (PDF); and Skydio talks about how it has made a smart and semi-autonomous drone (PDF).

Why this matters: One side-effect of the AI revolution has been a vast increase in demand from AI models for large amounts of fast, cheap compute. Though companies like NVIDIA have done a stellar job of adapting GPUs to the sorts of parallel computation required by deep learning, there are more gains to be had from creating specialized architectures.
  Right now, the story seems to be that all the major tech companies are building out their own distinct compute 'stacks' which use custom inference and training accelerators and increasingly baroque software for training large models. One of the surprising things is that all this heterogeneity is happening while these companies train increasingly similar neural nets to one another. Over the next few years, I expect the investments being made by these tech giants will yield some high-performing, non-standard compute substrates to support the next phase of the AI boom.
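
As a concrete illustration of how one codebase can target heterogeneous hardware, here's a minimal sketch using JAX (my example, not taken from any of the talks): the same Python model function gets compiled through XLA for whichever backend - CPU, GPU, or TPU - happens to be attached. All names and shapes are made up:

```python
# Minimal sketch: one model definition, compiled for whatever backend exists.
import jax
import jax.numpy as jnp

def tiny_mlp(params, x):
    # One hidden layer; the kind of dense math all these accelerators target.
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    "w1": jax.random.normal(k1, (128, 256)) * 0.02,
    "b1": jnp.zeros(256),
    "w2": jax.random.normal(k2, (256, 10)) * 0.02,
    "b2": jnp.zeros(10),
}

# jit compiles via XLA for whichever device is present; the model code
# itself doesn't change across heterogeneous hardware.
fast_mlp = jax.jit(tiny_mlp)
x = jnp.ones((32, 128))
print(jax.devices())               # e.g. [CpuDevice(id=0)], or GPU/TPU devices
print(fast_mlp(params, x).shape)   # (32, 10)
```

The same tiny_mlp runs unmodified on any of those backends; the divergence the tech giants are investing in lives below this layer, in the compilers and the silicon.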
  Check out the Hot Chips 33 presentations here (Hot Chips site).

####################################################

Tech Tales:

Noah's Probe
[Christmas Day, ~2080]

Humans tended to be either incompetent or murderous, depending on the length of the journey and the complexity of the equipment.

Machines, however, tended to disappear. Probes would just stop reporting after a couple of decades. Analysis said the chance of failure wasn't high enough to justify the number of disappeared probes. So, we figured, the machines were starting to decide to do something different from what we asked them to do.

Human and machine hybrids were typically more successful than either lifeform alone, but they still had problems; sometimes, the humans would become paranoid and destroy the machines (and therefore destroy themselves). Other times, the computers would become paranoid and destroy the humans - or worse; there are records of probes full of people in storage which then went off the grid. Who knows where they are now.

So that's why we're launching the so-called Noah's Probes. This series of ships tries to fuse human, animal, and machine intelligence into single systems. We've incorporated some of the latest mind imaging techniques to encode some of the intuitions of bats and owls into the ocular sensing systems; humans, elephants, whales, and orangutans for the mind; octopi and hawks for navigation; various insects and arachnids for hull integrity analysis; and so on.

Like all things in the history of space, the greatest controversy with Noah's Probes relates to language. Back when it was just humans, the Americans and the Russians had enough conflict that they just decided to make both their languages the 'official' languages of space. That's not as easy to do with hybrid minds, like the creatures on these probes.

Because we have no idea what will work and what won't, we've done something that our successors might find distasteful, but we think is a viable strategy: each probe has a device that all the intelligences aboard can access. The device can output a variety of wavelengths of energy across the light spectrum, as well as giving access to a small sphere of reconfigurable matter that can be used to create complex shapes and basic machines.

Our hope is that, somewhere out in that great darkness, some of the minds adrift on these probes will find ways to communicate with each other, and become more than the sum of their parts. Our ancestors believed that we were once visited by angels who communicated with humans, and in doing so helped us humans be better than we otherwise would've been. Perhaps some of these probes will repeat that phenomenon, and create something greater than the sum of their parts.

Things that inspired this story: Peter Watts' Blindsight; Christmas; old stories about angels and aliens across different religions/cultures; synesthesia; multi-agent learning; unsupervised learning.



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

