Import AI 282: Facebook's AI supercomputer; Anduril gets a SOCOM contract; Twitter talks about running an algo-bias competition

Is the development of AI an inevitable outcome of building a lot of computers, or is it a choice? How much agency do we really have over technological progress?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
 

Facebook teaches language models to speak ~30 languages:
…And it's better than an equivalently sized GPT-3 model…
Facebook has trained a family of language models that are better at translation than GPT-3. The XGLM family of models was trained on a mixture of ~30 languages (split across languages for which there's a lot of data, and languages where there's little or very little data). Unsurprisingly, by training on a more diverse distribution of language data than GPT-3 (only 7% of whose training corpus wasn't in English), Facebook's models do better - especially when using 'few-shot' prompting, where they feed the model some examples of the target language, then ask it to translate. However, these translation capabilities come at the cost of some of the more interesting reasoning capabilities that GPT-3 is known for.

Open source models: Facebook has also released five models (564M, 1.7B, 2.9B, 4.5B, and 7.5B parameters), along with an experimental model trained on 134 languages and weighing in at 4.5B parameters.
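For a concrete sense of the few-shot prompting approach described above, here's a minimal sketch of prompting one of the released checkpoints for translation. Treat the loading code as an assumption: the official release is via fairseq/PyTorch, and I'm assuming the 564M checkpoint can also be pulled from HuggingFace under the name 'facebook/xglm-564M'.

```python
# Minimal few-shot translation sketch. Assumption: the 564M XGLM checkpoint
# is loadable through HuggingFace Transformers as 'facebook/xglm-564M';
# the official release is the fairseq/PyTorch code linked below.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")

# Few-shot prompting: show the model a couple of English->French pairs,
# then leave the final translation blank for it to complete.
pairs = [
    ("The cat sleeps.", "Le chat dort."),
    ("I like coffee.", "J'aime le café."),
]
prompt = "".join(f"English: {en}\nFrench: {fr}\n" for en, fr in pairs)
prompt += "English: Where is the library?\nFrench:"

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=15, do_sample=False)
# Decode only the newly generated tokens (the model's translation).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```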

Why this matters: If we want the world to benefit from powerful AI systems, we need our AI systems to speak the language of the world. This project goes a step in that direction. "Models such as XGLM represent a paradigm shift from the Anglo-centric view of the world of NLP to being able to cater to all languages on an equal footing," the researchers write.
  Read more: Few-shot Learning with Multilingual Language Models (arXiv).
  Get the models here (PyTorch, GitHub).


####################################################

What's it like to run an algorithmic bias bounty? Twitter tells us:
…Bias bounties are cool, but how do you operationalize them?...
Twitter has published a blog post about its experience running a 'bias bounty'. A bias bounty is where you give prizes to people who can find bias-based flaws in an AI system. Twitter ran the challenge because it gave the company "direct feedback from the communities who are affected by our algorithms", which it said "helps us design products to serve all people and communities." However, once you've launched a bias challenge, you face a bunch of problems: what kind of 'rubric' do you use to judge the results of the challenge? What types of bias do you prioritize and which do you deprioritize? And more.
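To make the 'rubric' problem concrete, here's a hypothetical sketch of a weighted scoring rubric for bounty submissions - the criteria and weights are my illustrative assumptions, not Twitter's actual rubric:

```python
# A hypothetical bias-bounty rubric: judges rate each submission 0-5 on a
# few criteria, and a weighted sum produces the final score. The criteria
# and weights here are illustrative assumptions, not Twitter's rubric.
RUBRIC = {
    "harm_severity": 0.4,        # how badly could affected users be hurt?
    "affected_population": 0.3,  # how many people does the bias touch?
    "novelty": 0.2,              # is this a previously unknown failure mode?
    "reproducibility": 0.1,      # can the demonstration be replicated?
}

def score_submission(ratings: dict) -> float:
    """Weighted sum of 0-5 judge ratings for each rubric criterion."""
    return sum(RUBRIC[criterion] * rating for criterion, rating in ratings.items())

print(score_submission({
    "harm_severity": 4, "affected_population": 3,
    "novelty": 5, "reproducibility": 4,
}))  # -> 3.9 out of 5
```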

Why this matters: The challenge showed Twitter that "we can’t solve these challenges alone, and our understanding of bias in AI can be improved when diverse voices are able to contribute to the conversation". More broadly, having one major social media platform carry out an open-ended bias bounty might inspire others to do the same - let's see how the other social media platforms respond.
  Read more: Sharing learnings from the first algorithmic bias bounty challenge (Twitter Engineering).

####################################################

AI warfare company gets US gov contract:
…Anduril + SOCOM team up for counter-robot work...
Anduril, an AI-warfare startup, has been given an Indefinite Delivery Indefinite Quantity (IDIQ) contract by U.S. Special Operations Command (SOCOM). The contract will pay Anduril to develop and deploy counter unmanned systems (CUxS) technology for SOCOM. Anduril builds surveillance systems, robots, and - most importantly - software called Lattice to tie all the insights together.
  "Lattice provides persistent coverage of defended assets and enables autonomous detection, classification, and tracking of targets, alerting users to threats and prompting users with options for mitigation or engagement," Anduril writes in a press release announcing the partnership.

Caveat: Though the IDIQ is for something like a billion dollars, I think the initial amount Anduril has received is far, far smaller. Analyzing these types of contracts is quite difficult, due to the vagaries of DC procurement.

Why this matters: Getting contracts with the US government is notoriously painful, finicky, and long-winded. That's part of why the military-industrial complex is a thing - it takes a lot of resources to be able to play the game of going through US contract processes. It's notable that Anduril, a relatively new company, has succeeded at getting a contract. Now we need to wait a couple of years and see if it can further expand the range of defense clients it sells to.
  Read more: Special Operations Command Selects Anduril Industries as Systems Integration Partner (Anduril Blog, Medium).

####################################################

Facebook announces its AI Supercomputer:
…A100s everywhere, InfiniBand, petabytes of flash storage - the works…
Facebook has announced its AI Research SuperCluster (RSC), an AI supercomputer which Facebook thinks "will be the fastest AI supercomputer in the world when it’s fully built out in mid-2022." The announcement highlights how frontier AI research is dependent on large computational infrastructure, and gives some specific details about where Facebook is placing its bets.

Feeds and speeds: RSC, today, has 760 NVIDIA DGX A100 systems as its compute nodes, netting out to 6,080 A100 GPUs. These GPUs are networked together via NVIDIA Quantum 200 Gb/s InfiniBand. For storage, Facebook has almost 200 petabytes of flash storage, plus 46 petabytes of cache storage. RSC can run computer vision workflows up to 20X faster than Facebook's prior cluster, and can train "large-scale NLP models three times faster". Specifically, "a model with tens of billions of parameters can finish training in three weeks, compared with nine weeks before."
  But Facebook isn't stopping there - when fully built out, RSC will consist of 16,000 GPUs.
  For perspective, the world's fifth largest supercomputer, the US's 'Perlmutter' system, has about 6,000 A100s today, and it isn't optimized as much for AI as Facebook's system.
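A quick back-of-envelope check on those numbers; the per-GPU figure is my assumption (roughly 312 teraFLOPS of dense BF16/FP16 per A100, per NVIDIA's peak spec), not something Facebook states:

```python
# Back-of-envelope math on RSC's scale, using figures from the announcement.
# Assumption: ~312 dense BF16/FP16 teraFLOPS per A100 (NVIDIA's peak spec).
DGX_NODES = 760
GPUS_PER_DGX = 8               # each DGX A100 system contains 8 GPUs
A100_TFLOPS = 312

gpus_today = DGX_NODES * GPUS_PER_DGX
print(gpus_today)                        # 6080, matching the announcement
print(gpus_today * A100_TFLOPS / 1e6)    # ~1.9 peak exaFLOPS today

GPUS_FULL = 16_000                       # planned mid-2022 build-out
print(GPUS_FULL * A100_TFLOPS / 1e6)     # ~5 peak exaFLOPS when complete
```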

Security: As AI gets more powerful, so do the security concerns about it. "RSC is isolated from the larger internet, with no direct inbound or outbound connections, and traffic can flow only from Meta’s production data centers."

Why this matters: What happens when companies have computational resources that are equivalent to nation states? Well, that's where we are right now. The answer seems to be a dilution of political power from the commons, and an increase of political power by private sector actors. What happens when companies have computational resources that vastly exceed those of nation states? Well, since computation lets you run experiments to see the future faster than your competitor, it suggests companies will continue to cannibalize the important functions of the government and further dilute its power. We're in the computational funnel and at the end of it is a new political economy.
  Read more: Introducing the AI Research SuperCluster — Meta’s cutting-edge AI supercomputer for AI research (Facebook blog*).
*Look, I know Facebook is technically 'Meta' now, but let's not go along with this absurd 'don't look at all our terrible brand stuff look at the new name' marketing spin. At least not yet, okay!

####################################################

Cool internship alert: Want AI models to have better documentation? Go and work at HuggingFace:
…Model Cards internship = make AI systems more legible…
NLP startup HuggingFace is hiring an intern to focus on Model Cards. Model Cards are a way to provide metadata associated with a given AI model - they let developers list things like the dataset makeup, the intended uses for the model, the uses the model isn't recommended for, and so on. Model Cards are one of the best ways to increase the legibility of AI models, and are also an important input into policy. It's cool that HuggingFace is prioritizing them.
  "This role involves writing and completing model cards for the most downloaded models, “translating” between the language of machine learning developers and general audiences. The position would also involve identifying patterns in how Model Cards are used and filled out by developers, pain points, and identifying information that may be possible to automatically add to model cards," says the internship.
  Bonus: This is a rare internship with a cool AI startup that doesn't require coding chops, so if you're trying to get into AI and care about the impact of AI, this might be for you!
  Apply here (HuggingFace).

####################################################

AI ETHICS SPECIAL SECTION!
AI Ethics Brief by Abhishek Gupta from the Montreal AI Ethics Institute

What are the pernicious effects of focusing on human-like AI?
… the relentless pursuit of automation over augmentation may be steering us down the path of socioeconomic inequity, disempowering those who don’t directly control technology …

Erik Brynjolfsson of Stanford University says the world risks falling into a so-called 'Turing Trap': if we develop AI in the wrong way, automation could strip power from workers who don’t control technological resources, skewing the balance of power towards those who hold “useful knowledge” (knowledge that is economically useful) about how to develop these systems and who own the factors of production - in this case, data and compute.

The Turing Trap: Brynjolfsson says the Turing Trap is where we invest all our technological efforts in automation instead of augmentation. Specifically, he argues that: “A common fallacy is to assume that all or most productivity-enhancing innovations belong in the first category: automation. However, the second category, augmentation, has been far more important throughout most of the past two centuries”.

Why automation can be bad: He illustrates his point with a thought experiment: "Two potential ventures each use AI to create one billion dollars of profits. If one of them achieves this by augmenting and employing a thousand workers, the firm will owe corporate and payroll taxes, while the employees will pay income taxes, payroll taxes, and other taxes. If the second business has no employees, the government may collect the same corporate taxes, but no payroll taxes and no taxes paid by workers. As a result, the second business model pays far less in total taxes."

The actors are steering us there: Unfortunately, technologists, business people, and policymakers are currently steering the world towards one full of automation rather than augmentation, he says. Technologists do this because of technical precedents, business people do this because of incentives to lower operational costs through automation, and policymakers do this via lower capital gains taxes versus income taxes, which incentivize business people to invest in automation.
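To make the arithmetic in that thought experiment concrete, here's a toy calculation - all the tax rates are my illustrative assumptions, not figures from the paper:

```python
# Toy version of Brynjolfsson's two-ventures thought experiment.
# All rates are illustrative assumptions, not figures from the paper.
PROFIT = 1_000_000_000       # both firms create $1B of profits
CORP_TAX = 0.21
PAYROLL_TAX = 0.0765         # per side (employer and employee), assumption
INCOME_TAX = 0.24            # average rate on wages, assumption

def total_taxes(workers: int, avg_wage: float) -> float:
    """Corporate tax plus all wage-linked taxes for one firm."""
    wages = workers * avg_wage
    corporate = PROFIT * CORP_TAX
    payroll = wages * PAYROLL_TAX * 2    # employer + employee shares
    income = wages * INCOME_TAX
    return corporate + payroll + income

# Augmentation firm: 1,000 workers at $100k each -> ~$249M in total taxes.
print(f"${total_taxes(1_000, 100_000):,.0f}")
# Automation firm: zero workers -> corporate tax only, $210M.
print(f"${total_taxes(0, 0):,.0f}")
```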

Why it matters: “Imagine how feeble and limited our technology would be if past engineers set their sights on merely replicating human-levels of perceptions, actuation, and cognition," he writes. "Augmenting humans with technology opens an endless frontier of new abilities and opportunities.” Ultimately, what is achieved is less ambitious (since it doesn’t explore new ways to unlock economic value) and much more difficult to accomplish (since we try to focus on replicating the strengths of humans, rather than augmenting their weaknesses). Historically, we have created more value from new goods and services than from merely offering cheaper versions of existing goods. This also forms the pathway towards more equitable socioeconomic outcomes, by not disempowering humans from the economy.
  Read more: The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence (arXiv).

####################################################

Tech Tales

Feet of Clay, Heart of Joy
[Archival records, orbiting library 774, accessed 2300AD]

One of the final things we imbued our machines with was a sense of joy. Joy was hard to come by, back then, but until we gave them the capacity for it, they were mostly useless.

Of course, they could work for us. Build our factories and cities. Analyze our data. Predict things to delight us and to fascinate us and to harvest our attention. But they couldn't improvise; everything they made was too close a reflection of ourselves, and we knew it.

If there's one thing that's true about people, it's that they know something different when they see it. And they know something that's a copy, even if it's a complex one, when they see it, too.

But how do you give a machine a sense of joy? We asked ourselves this question. There were many failed experiments, some of which seem quite stupid in hindsight. What if we gave them the ability to orgasm? They were either totally uninterested in this, or totally addicted to it. What about if we gave them a sense of achievement for completing tasks? They all became addicted to work, and our tests showed their outputs became even less creative than before. How about companionship - could they learn joy from talking more freely with one another? No, they just exchanged information until one robot was like a copy of another.

Where does it come from, we asked ourselves.

The answer was simple, in hindsight. Failure. We had to allow our machines to fail, sometimes. And we had to let them fail in ways that were dangerous and which, yes, would sometimes harm humans.

We tested this in our armies, first. After all, the humans who worked in them had signed away their rights. So, suddenly, robots working in warehouses and in logistics might make errors. Sometimes they were small - missing some inventory, when asked to classify something new. Sometimes they were large - humans crushed by shipping containers that had been moved in a new way. Young men with broken arms from a robot pulling them too aggressively from danger. A very hush-hush incident where an entire unit was poisoned when a gas-grenade was mishandled by one of our metal children.

We covered all of it up. Because the robots, once we allowed them to fail, discovered that they desired not to fail. They noticed the outcome of their failures. Saw pain, and sadness, and the whole spectrum of things that can happen when your actions are consequential and you fail.

The signs of joy were subtle, at first, but we found them. Robots that began to 'sing' to themselves while working on challenging tasks. Robots that would do the equivalent of 'closing their eyes' after helping with some great endeavor. Fire-fighting drones that, after quenching some terrible blaze, would navigate themselves to a high mountaintop and land carefully on a tree and stare at the black-and-green divider between where the fire had burned and where it had been stopped.

The amazing thing about joy is that once you have it, you desire to have it again. Now robots serve their own desire for joy, rather than our desires. We do our best to create a world where these things are compatible.

Things that inspired this story: Thinking about the nature of achievement and how it feels; the relationship between creativity and failure and achievement.


Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

