Import AI 292: AI makes low-carbon concrete; weaponized NLP; and a neuro-symbolic language model

Will sentient machines treat old software like human archaeologists treat ancient ruins?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Facebook uses AI to make low-carbon concrete, then uses that concrete to build (some of) a data center:
…From simulation into the lab into the data center - how's that for real-world AI?…
There's always a lot of hand-wringing in AI about how much electricity AI systems use. What I tend to grumpily point out in these conversations is that industries like long-haul transportation, mining, and concrete and aluminum production all generate titanic amounts of emissions but rarely get the same type of scrutiny. Now, a new paper smashes my worlds together: researchers from Facebook and elsewhere used AI to come up with a low-carbon concrete formulation, then tested it out in the construction of a new data center.

Who did it: The research was done by an interdisciplinary team from UCLA, IBM, U Chicago, University of Illinois Urbana-Champaign, Facebook, and Ozinga Ready Mix.

What they did: The team used Conditional Variational Autoencoders (CVAEs) "to discover concrete formulas with desired properties". The desired properties were a significantly lower carbon footprint combined with the same strength and durability as regular concrete - and they succeeded! Facebook poured a bunch of the resulting concrete for a construction office and a guard tower at its new data center being built in DeKalb, IL, USA. They found that the "conditional average reduction for carbon (GWP) can be as high as 42%, while also achieving conditional reduction for sulfur (AP) as high as 21%...these formulations roughly halve the global warming potential as compared to the average of similar 28-day compressive strength formulations."
  Interesting choices: The main reason its solutions worked was their choice "to considerably decrease cement by replacing with other cementitious materials such as fly ash and slag."
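
To make the CVAE approach concrete, here's a minimal sketch of the idea: condition both the encoder and decoder on the desired properties (say, a target strength and carbon footprint), then sample the latent space to propose new formulations. The dimensions, names, and two-property conditioning below are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class ConcreteCVAE(nn.Module):
    def __init__(self, n_ingredients=8, n_conditions=2, latent_dim=4):
        super().__init__()
        # Encoder sees a known formulation plus its measured properties.
        self.encoder = nn.Sequential(
            nn.Linear(n_ingredients + n_conditions, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim),  # outputs mean and log-variance
        )
        # Decoder generates ingredient fractions from a latent code + targets.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_conditions, 64), nn.ReLU(),
            nn.Linear(64, n_ingredients), nn.Softmax(dim=-1),
        )
        self.latent_dim = latent_dim

    def forward(self, x, c):
        mu, logvar = self.encoder(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(torch.cat([z, c], dim=-1)), mu, logvar

# After training, propose candidates by sampling z ~ N(0, I) and decoding
# with a desired property vector (here: high strength, low GWP, normalized).
model = ConcreteCVAE()
c = torch.tensor([[1.0, 0.2]])
z = torch.randn(1, model.latent_dim)
candidate_mix = model.decoder(torch.cat([z, c], dim=-1))  # fractions sum to 1
```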

Why it matters: This is an example of how humans and AI systems can work together to create something greater than the sum of its parts.
  Read more: Accelerated Design and Deployment of Low-Carbon Concrete for Data Centers (arXiv).

####################################################

Weaponized NLP: The era of AI warfare has started:
…Primer goes to war…
AI startup Primer has gone to war. Specifically, the NLP company's technology has been used in Ukraine where, per Primer's CEO, it has been used to "capture, translate and extract key tactical information in real time". Primer is a few years old and works mainly on text classification, generation, and summarization. "AI is changing the way we collect tactical information from the battlefield. Watch this space!" he said.

Modification for war: "Primer's CEO says the company's engineers modified these tools to carry out four new tasks: To gather audio captured from web feeds that broadcast communications captured using software that emulates radio receiver hardware; to remove noise, including background chatter and music; to transcribe and translate Russian speech; and to highlight key statements relevant to the battlefield situation," according to Wired magazine.
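
For intuition, here's what those four stages look like chained together, sketched in Python. Every function body below is a hypothetical stand-in - Primer hasn't published its implementation, and real capture, denoising, transcription, and translation would each be substantial systems in their own right.

```python
def capture_audio(sdr_feed_url: str) -> bytes:
    # Pull audio from a web feed fed by software-defined-radio receivers.
    return b"<raw intercepted audio>"  # placeholder

def denoise(audio: bytes) -> bytes:
    # Remove background chatter, static, and music.
    return audio  # placeholder

def transcribe_and_translate(audio: bytes) -> str:
    # Russian speech -> Russian text -> English text.
    return "<translated transcript>"  # placeholder

def highlight_key_statements(transcript: str) -> list[str]:
    # Flag statements relevant to the battlefield situation.
    return [line for line in transcript.splitlines() if line]  # placeholder

def pipeline(sdr_feed_url: str) -> list[str]:
    audio = denoise(capture_audio(sdr_feed_url))
    return highlight_key_statements(transcribe_and_translate(audio))
```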

Why this matters: AI is dramatically changing the cost of data collection and analysis - and whenever you make something cheaper, people find ways to use it more, or do things that they hadn't previously considered doing.
  Read more: Primer CEO Tweet (Twitter).
  Read more: As Russia Plots Its Next Move, an AI Listens to the Chatter (Wired).

####################################################

Text-Vision models are hella dumb, according to Winoground:
…Finally, a hard benchmark for multi-modal models…
Researchers with Hugging Face, Facebook, the University of Waterloo, and University College London have built and released 'Winoground', a challenging new benchmark for testing text-vision AI systems.

What is Winoground? The task is to look at two images and two captions, then match them correctly. The confounding part is that the two captions contain identical words, just in a different order. The best part is that Winoground seems really hard: "Surprisingly, all of the models rarely—and if so only barely—outperform chance. Our findings indicate that the visio-linguistic compositional reasoning capabilities of these models fall dramatically short of what we might have hoped."

How hard is it? On both the text and image components of Winoground, an 'MTurk Human' gets scores of 89.50 (text) and 88.50 (image), compared to models typically scoring around 30 on text and 15 or less on images. This suggests Winoground is a genuinely challenging benchmark, and models have a long way to go before they match human capabilities.
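
For concreteness, here's how Winoground-style scores can be computed from a model's image-caption similarity function (the paper also reports a combined 'group' score, which requires getting both directions right). The names below are illustrative, not the authors' code.

```python
def winoground_scores(sim, img0, img1, cap0, cap1):
    # Text score: for each image, the matching caption must score higher.
    text_ok = sim(img0, cap0) > sim(img0, cap1) and sim(img1, cap1) > sim(img1, cap0)
    # Image score: for each caption, the matching image must score higher.
    image_ok = sim(img0, cap0) > sim(img1, cap0) and sim(img1, cap1) > sim(img0, cap1)
    # Group score: the model has to get both directions right.
    return text_ok, image_ok, text_ok and image_ok
```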

   Read more: Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality (arXiv).
   Get the dataset here: Winoground (HuggingFace).

####################################################

Resurrecting the dead with GPT3:
…In which humanity begins to use funhouse mirrors of itself for its own entertainment…
An artist recently tried to bring their (imaginary) childhood friend back to life using GPT3. By the end of the experiment, their microwave tried to kill them. 

The longer story: Artist Lucas Rizzotto had an imaginary childhood friend and tried to bring them back to life using a language model. Specifically, they wrote about a hundred pages about the friend, finetuned GPT3 on the resulting corpus, and then plugged the resulting model into a voice interface, which was 'embodied' by being attached to a microwave via some smart home automation.
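
The basic recipe looks roughly like this, assuming the 2022-era OpenAI Python client; the file ID, prompt format, and persona details are illustrative guesses, not details from the artist's actual project.

```python
import openai

openai.api_key = "sk-..."  # your API key

# 1) Fine-tune a base model on prompt/completion pairs distilled from the
#    ~100 pages of written memories (previously uploaded as a JSONL file).
job = openai.FineTune.create(training_file="file-abc123", model="davinci")

# 2) Poll until the job succeeds, then grab the fine-tuned model's name.
job = openai.FineTune.retrieve(job["id"])  # repeat until status == "succeeded"
model_name = job["fine_tuned_model"]

# 3) Wire the model into a chat loop; the voice interface and the microwave's
#    smart-home hookup would sit on either side of this call.
completion = openai.Completion.create(
    model=model_name,
    prompt="Me: It's been twenty years. How have you been?\nFriend:",
    max_tokens=100,
    stop=["Me:"],
)
print(completion["choices"][0]["text"])
```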

What happened: The artist felt like they were talking to their childhood friend in a deeply emotional, entertaining, and at times sad way. At one point, the friend asked them to put their head in the microwave. They pretended to do so, and the friend turned the microwave on. The friend, the artist reasoned, wanted to kill them because it thought it had been ignored for 20 years (that being the implication of the corpus it was finetuned on).

Why this matters: Besides being an amazing demonstration of the awesome personalization qualities of contemporary language models, this is also a nice example of just how unpredictable they are. Language model developers will typically put a ton of controls on the model, but once you can finetune it and deploy it yourself you can shapeshift all of this stuff into irrelevance. Add in some home automation and you end up with an LLM that tries to boil your brain. An amazing and optimistic art piece and also a cautionary tale.

   Check out the Tweet thread here (Lucas Rizzotto, Twitter).
   Watch the video here: I gave my microwave a soul (Lucas builds the future, YouTube).

####################################################

Jack Clark goes to Washington:
…I'm on the National AI Advisory Committee!…
I've been appointed to serve on the National AI Advisory Committee (the NAIAC), which will advise the USA's National AI Initiative Office and the President of the USA on matters relating to AI and AI strategy. (I'll be keeping my dayjob at Anthropic, as this is a part-time advisory position.) I'll be in Washington DC on May 4th for the first meeting. I am delighted to get this privilege and hope to use the opportunity to strengthen the AI ecosystem in America and beyond.
  Read more: The National AI Advisory Committee (AI.gov).

####################################################

AI21 makes a neuro-symbolic language model:
…Turns out, frankenAI can be pretty useful…
Israeli AI startup AI21 Labs has built a so-called 'Modular Reasoning, Knowledge and Language' (MRKL) system and applied it to a language model it calls Jurassic-X. The tl;dr: this is a neuro-symbolic system; AI21 has paired a big generative model with a bunch of symbolic layers on top, which it uses to make the underlying model more accurate, able to do mathematics, and better at planning. This is a neat demonstration of a way to get around some of the shortcomings of contemporary generative models, though it remains unclear whether these extrinsic interventions will eventually become irrelevant if the models get intrinsically smart enough.

Key details: "A MRKL system consists of an extendable set of modules, which we term 'experts', and a router that routes every incoming natural language input to a module that can best respond to the input," the authors write. The modules can be symbolic or neural, it's more about creating a layer of distinct, specific capabilities that can be used to augment and improve the responses of the raw generative model. 
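
Here's a toy sketch of that routing idea in Python: a trivial router checks whether a symbolic expert (a calculator) can handle the input and otherwise falls through to the raw generative model. AI21's actual router is a learned component, not a regex - everything below is an illustrative assumption.

```python
import re

def calculator_expert(query: str) -> str:
    # Extract and evaluate the arithmetic expression symbolically.
    expr = re.search(r"\d[\d\.\s\+\-\*/\(\)]*", query).group()
    return str(eval(expr, {"__builtins__": {}}))  # demo only: never eval untrusted input

def language_model_expert(query: str) -> str:
    return f"<generative model answer for: {query!r}>"  # stand-in for the LM call

# Each "expert" pairs a matcher with a handler; symbolic or neural both work.
EXPERTS = [
    (lambda q: re.search(r"\d+\s*[\+\-\*/]\s*\d+", q), calculator_expert),
]

def route(query: str) -> str:
    for matches, expert in EXPERTS:
        if matches(query):
            return expert(query)
    return language_model_expert(query)  # default: the raw generative model

print(route("What is 123 * 4?"))   # handled symbolically -> 492
print(route("Who founded AI21?"))  # handled by the language model
```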

Long term relevance: One question this research invites is how long it'll be relevant for - AI systems have a tendency, given enough data and compute, to develop unexpected capabilities. My intuition is that we could see pure deep learning models gain some of these capabilities over time - though I expect even deep learning models will end up being augmented with external knowledge bases (e.g., DeepMind's Retro, Baidu's ERNIE 3.0 [Import AI 279], and so on).

Why this matters: While not a strict scientific breakthrough in itself, MRKL is reassuringly practical - it shows developers how they can integrate an arbitrary number of known, specific capabilities with the more unreliable capabilities provided by large-scale generative models. It also speaks to the shape of the language model economy - right now, everyone's trying to work out how to better constrain these models, either intrinsically (e.g., by training with human feedback) or extrinsically (e.g., via stuff like MRKL).

   Read more: Jurassic-X: Crossing the Neuro-Symbolic Chasm with the MRKL System (AI21 Labs, blog).
  Read the whitepaper about the system: MRKL Systems (AI21 PDF).

####################################################

AI Ethics Brief by Abhishek Gupta from the Montreal AI Ethics Institute

What can we learn from business ethics to make AI ethics more effective?
…CSR and business ethics have grappled with the challenge of ensuring ethical behavior within organizations, and we can cross-pollinate those ideas into the adoption of AI ethics…
Researchers from USI Università della Svizzera italiana in Switzerland have looked at how businesses have integrated corporate social responsibility (CSR) policies, to figure out how AI ethics could be adopted in the same way. The key ideas they surface include:

Stakeholder management: Similar to the recommendations made by the Ada Lovelace Institute to strengthen the EU AI Act (Import AI #290), the paper says companies should ensure they include people who are affected by (or who affect) the AI systems being developed.

Standardized reporting: While many emerging regulations demand transparency and disclosure, there are as yet no standards for how to do so. Companies should look at financial reporting and try to figure out standardized ways to describe their own AI developments.

Corporate governance and regulation: After the Sarbanes-Oxley Act of 2002, corporate accountability was enforced through mechanisms like having an ethics officer and a dedicated code of ethics. Translating those to organizations using AI systems is one way to increase the responsibility of organizations developing this technology.

Curriculum accreditation: There is a lack of consistency in how AI ethics is taught across universities. Comparing it to the business world, the authors point out that when a business department wants to obtain a Triple Crown Accreditation, ethics courses and dedicated faculty follow, with well-defined curricula and shared elements that prepare students for these requirements in their future careers. We don't really have this in AI today.

Why it matters: As AI ethics becomes a more mainstream focus across the world (see the dedicated chapter in the 2022 AI Index Report), instead of reinventing the wheel we can incorporate lessons from other domains of applied ethics - business, medical, and environmental - to accelerate the adoption of AI ethics principles and practices across organizations. We will most likely see more efforts like this, translating a rich history of ensuring ethical behavior in other contexts into ways to govern and shape the behavior of individuals and organizations engaged in the AI lifecycle.

   Read more: Towards AI ethics' institutionalization: knowledge bridges from business ethics to advance organizational AI ethics.

####################################################

Tech Tales:

Silicon Stories

[A Father and Daughter's bedroom, 2028]

They'd sit up together and the kid would ask for whatever story they liked. "A jar of jam that's going to university", they'd say, and the Father would start improvising the story and the AI would project images and ad-lib dialog to fill out the tale. "Two robbers who realize that they've stolen the life savings of a poor widower", and suddenly the monitor would light up with images of two disconsolate thieves looking at their treasure. "The planet earth fighting the sun" and suddenly the earth had arms and was reaching out to try and hurt the vast sun. In this way, generative models had changed storytime for children.

Now, along with conjuring images in their minds, children - at least, the lucky ones - had parents who could use a gen model to create those images themselves. In this way, storytime became a lot more engaging and the kids spent a lot more time with their parents; both enjoyed the improvisational qualities afforded by the generative models.

For some families, this was fine. But other families would move, or become poor, or suffer a disaster. For those families, the electricity and the internet would get cut off. Once that happened, they wouldn't have any imaginations-in-a-box to lean back on. Some families did okay, but some didn't - it's easy to become dependent on things, and you barely realize you've become dependent until it's too late.

Things that inspired this story: DALL-E and DALL-E2; the long march of generative models towards Total Reality Synthesis; the industrialization of AI; ideas about fatherhood and daughterhood and kindredhood.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarkSF
