Import AI 193: Facebook simulates itself; compete to make more efficient NLP; face in-painting gets better

If people succeed at creating artificial general intelligence, could we imagine 'demoscene AGI implementations', where people compete to see who can implement an AGI-class system in the smallest amount of space?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
 

Facebook simulates its users:
...What's the difference between the world's largest social network and Westworld? Less than you might imagine...
Facebook wants to better understand itself, so it has filled its site with (invisible) synthetically created user accounts. The users range in sophistication from basic entities that simply explore the site, to more complex machine learning-based ones that sometimes work together to simulate 'social' interactions on the website. Facebook calls this a Web-Enabled Simulation (WES) approach and says "the primary way in which WES builds on existing testing approaches lies in the way it models behaviour. Traditional testing focuses on system behaviour rather than user behaviour, whereas WES focuses on the interactions between users mediated by the system."

Making fake users with reinforcement learning: Facebook uses reinforcement learning to train bots that carry out sophisticated behaviors, such as simulated scammer bots that probe rule-based 'candidate targets'.
  What else does Facebook simulate? Facebook is also using this approach to simulate bad actors, search for bad content, identify mechanisms that impede bad actors, find weaknesses in its privacy system, identify bots that are trying to slurp up user data, and more.
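  A toy illustration: To make the setup concrete, here's a minimal Python sketch - emphatically not Facebook's actual WW/WES code, and with every name invented for illustration - of a bandit-style learning bot probing rule-based target bots through a simulated platform that is isolated from any real users:

```python
import random
from collections import defaultdict

# Toy sketch (not Facebook's WES/WW code): an epsilon-greedy learning bot
# probes rule-based "target" bots, mirroring the idea of RL scammer bots
# interacting with rule-based candidate targets in an isolated simulation.

class RuleBasedTarget:
    """Responds only to messages that match a simple vulnerability rule."""
    def __init__(self, weakness):
        self.weakness = weakness  # e.g. "prize", "urgent", "friendly"

    def respond(self, message_style):
        return message_style == self.weakness


class BanditScammerBot:
    """Learns which message style elicits responses (simple bandit RL)."""
    STYLES = ["prize", "urgent", "friendly"]

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = defaultdict(float)   # estimated response rate per style
        self.count = defaultdict(int)

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(self.STYLES)
        return max(self.STYLES, key=lambda s: self.value[s])

    def update(self, style, reward):
        self.count[style] += 1
        self.value[style] += (reward - self.value[style]) / self.count[style]


def run_simulation(steps=5000):
    # Targets are synthetic accounts only: the simulation never touches real users.
    targets = [RuleBasedTarget(random.choice(BanditScammerBot.STYLES))
               for _ in range(100)]
    bot = BanditScammerBot()
    for _ in range(steps):
        target = random.choice(targets)
        style = bot.act()
        reward = 1.0 if target.respond(style) else 0.0
        bot.update(style, reward)
    return dict(bot.value)


if __name__ == "__main__":
    print(run_simulation())  # shows which message style the bot learned to favour
```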

Deliciously evocative quote: This quote from the paper reads like the opening of a sci-fi short story: "Bots must be suitably isolated from real users to ensure that the simulation, although executed on real platform code, does not lead to unexpected interactions between bots and real users".

Why this matters: WES turns Facebook into two distinct places - the 'real' world populated by human users, and the shadowy WES world whose entities are fake but designed to become increasingly indistinguishable from the real. When discussing some of the advantages of a WES approach, the researchers write: "we simply adjust the mechanism through which bots interact with the underlying platform in order to model the proposed restrictions. The mechanism can thus model a possible future version of the platform."
  WES is also a baroque artefact in itself, full of recursion and strangeness. The system "is not only a simulation of hundreds of millions of lines of code; it is a software system that runs on top of those very same lines of code," Facebook writes.
  One of the implications of this is that as Facebook's WES system gets better, we can imagine Facebook testing out more and more features in WES-land before porting them into the real Facebook - and as the AI systems get more sophisticated it'll be interesting to see how far Facebook can take this.
  Read more: WES: Agent-based User Interaction Simulation on Real Infrastructure (Facebook Research).

####################################################

Make inferences and don't boil the ocean with the SustaiNLP competition:
...You've heard of powerful models. What about efficient ones?...
In recent years, AI labs have been training increasingly large machine learning models in areas like language (e.g., GPT-2, Megatron), reinforcement learning (Dota 2, AlphaStar), and more. These models typically display significant advances in capabilities, but usually at the cost of greater resource consumption - they're literally very big models, requiring significant amounts of infrastructure to train on, and sometimes quite a lot of infrastructure to run inference on. A new competition at EMNLP2020 aims to "promote the development of effective, energy-efficient models for difficult natural language understanding tasks" by testing the efficiency of model inference.

The challenge: The challenge, held within the SustaiNLP workshop, will see AI researchers compete with each other to see who can develop the most energy-efficient model that does well on the well-established SuperGLUE benchmark. Participants will use the experiment impact tracker (get the code from its GitHub here) to measure the energy their models consume during inference.
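  How the measurement looks in practice: Here's a minimal sketch of wrapping an inference loop with the experiment impact tracker; the tracker calls follow the project's README at the time of writing, while run_inference and the dummy examples are stand-ins for a real SuperGLUE model and dataset, not the competition's official harness:

```python
# Minimal sketch: measure energy used while a placeholder model runs inference,
# using the experiment-impact-tracker library the competition points to.
from experiment_impact_tracker.compute_tracker import ImpactTracker


def run_inference(example: str) -> int:
    # Placeholder for a real model's forward pass on a SuperGLUE example.
    return len(example) % 2


tracker = ImpactTracker("impact_logs/")   # energy/carbon logs get written here
tracker.launch_impact_monitor()           # samples power draw in the background

dummy_examples = ["premise and hypothesis %d" % i for i in range(10_000)]
predictions = [run_inference(x) for x in dummy_examples]
print(len(predictions), "predictions made; see impact_logs/ for energy data")
```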

Why this matters: Training these systems is expensive, but it's likely the significant real-world energy consumption of models will happen mostly at inference, since over time we can expect more and more models to be deployed into the world and more and more systems to depend on their inferences. Competitions like this will give us a sense of how energy-intensive that world is, and will produce metrics that can help us figure out paths to more energy-efficient futures.
  Read more: SustaiNLP official website.

####################################################

Microsoft tests the limits of multilingual models with XGLUE:
...Sure, your system can solve tasks in other languages. But can it generate phrases in them as well?...
The recent success of large-scale multilingual language models has caused researchers to develop harder and more diverse tests to probe the capabilities of these systems. A couple of weeks ago, researchers from CMU, DeepMind, and Google showed off XTREME (Import AI 191), a benchmark that tests multilingual models on nine tasks across 40 languages. Now, Microsoft has released XGLUE, a similarly motivated large-scale testing suite, but with a twist: XGLUE will also test how well multilingual language systems can generate text in different languages, along with testing on various understanding tasks.

Multi-lingual generation: XGLUE's two generative tasks (a toy sketch of how they can be framed follows the list) are:
- Question Generation (QG): Generate a natural language question for a given passage of text.
- News Title Generation (NTG): Generate a headline for a given news story.
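  A toy framing: Here's a minimal, hypothetical Python sketch of how QG and NTG examples can be cast as source/target pairs for a multilingual encoder-decoder model; the field names and prompt prefixes are illustrative, not the official XGLUE schema:

```python
# Hypothetical sketch of framing XGLUE's generative tasks as text-to-text
# pairs; field names and prefixes are illustrative, not the dataset's schema.
qg_example = {
    "passage": "The Eiffel Tower was completed in 1889 and is about 330 metres tall.",
    "question": "When was the Eiffel Tower completed?",
}

ntg_example = {
    "article": "Regulators today approved the merger of two large airlines after...",
    "title": "Regulators approve airline merger",
}


def to_seq2seq_pair(example, source_key, target_key, task_prefix):
    """Turn a task example into a (source, target) pair suitable for a
    multilingual encoder-decoder model (an mBART/mT5-style system)."""
    return f"{task_prefix}: {example[source_key]}", example[target_key]


print(to_seq2seq_pair(qg_example, "passage", "question", "generate question"))
print(to_seq2seq_pair(ntg_example, "article", "title", "generate headline"))
```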

Why this matters: Between XTREME and XGLUE, we've got two new suites for testing the capabilities of large-scale multilingual systems. I hope we'll use these to identify the weaknesses of current models, and if enough researchers test against both task suites we'll inevitably see a new multi-lingual evaluation system get created by splicing the hard parts of both together. Soon, idioms like 'it's all Greek to me' won't be so easy to say for neural agents.
  Read more: XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation (arXiv).

####################################################

Face in-painting keeps getting better:
...Generative models give us smart paintbrushes that can fill-in reality...
Researchers with South China University of Technology, Stevens Institute of Technology, and the UBTECH Sydney AI Centre have built a system that can perform "high fidelity face completion", which means you can give it a photograph of a face where you've partially occluded some parts, and it'll generate the bits of the face that are hidden.

How they did it: The system uses a dual spatial attention (DSA) model that combines foreground self-attention and foreground-background cross-attention modules - this basically means the system learns a couple of attention patterns over images during training and reconstruction, which makes it better at generating the missing parts of images. In tests, their system does well quantitatively when compared to other methods, and gets close to ground truth (though note: it'd be a terrible idea to use systems like this to 'fill in' images and assume the resulting faces correspond to ground truth - that's how you end up with a police force arresting people because they look like the generations of an AI model).
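  What that attention looks like: Here's a minimal PyTorch sketch of the general idea - attend within the masked foreground, and from the foreground to the visible background - rather than the paper's actual implementation; the shapes, 1x1 projections, and residual fusion are all illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sketch of dual spatial attention for inpainting (not the paper's code):
# attend within the masked "foreground" region, and from the foreground to the
# visible "background", so the generator can borrow coherent texture for the hole.

class DualSpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def _attend(self, q_feat, kv_feat):
        b, c, h, w = q_feat.shape
        q = self.query(q_feat).flatten(2).transpose(1, 2)      # (B, HW, C//8)
        k = self.key(kv_feat).flatten(2)                       # (B, C//8, HW)
        v = self.value(kv_feat).flatten(2).transpose(1, 2)     # (B, HW, C)
        attn = F.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, HW)
        return (attn @ v).transpose(1, 2).reshape(b, c, h, w)

    def forward(self, features, mask):
        # mask: 1 where pixels are missing (foreground), 0 where visible.
        fg = features * mask
        bg = features * (1 - mask)
        fg_self = self._attend(fg, fg)        # foreground self-attention
        fg_cross = self._attend(fg, bg)       # foreground-background cross-attention
        return features + fg_self + fg_cross  # residual fusion (illustrative)


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    mask = torch.zeros(1, 1, 32, 32)
    mask[..., 8:24, 8:24] = 1.0               # a square hole to fill in
    print(DualSpatialAttention(64)(x, mask).shape)  # torch.Size([1, 64, 32, 32])
```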

Why this matters: I think technologies like this point to a future where we have 'anything restoration' - got an old movie with weird compression artefacts? Use a generative model to bring it back to life. Have old photographs that got ripped or burned? Use a model to fill them in. How about a 3D object, like a sculpture, with some bits missing? Use a 3D model to figure out how to rebuild it so it is 'whole'. Of course, such things will be mostly wrong, relative to the missing things they're seeking to replace, but that's going to be part of the fun!
  Read more: Learning Oracle Attention for High-fidelity Face Completion (arXiv).

####################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

NSCAI wants more AI R&D spending
The US National Security Commission on AI (NSCAI) has been charged with looking at how the US can maintain global leadership in AI. They have published their first quarterly report. I focus specifically on their recommendations for increasing AI R&D investments.

More funding: The report responds directly to the White House's recent FY 2021 budget request (see Import 185). They deem the proposed increases to AI funding insufficient, recommending $2bn of federal spending on non-defense AI R&D in 2021 (double the White House proposal). They also point out that continued progress in AI depends on R&D across the sciences, which I read as a criticism of the overall cuts to basic science funding in the White House proposal.

  Focus areas: They identify six areas of foundational research that should be near-term priorities for funding: (1) novel ML techniques; (2) testing, evaluation, verification, and validation of AI systems; (3) robust ML; (4) complex multi-agent scenarios; (5) AI for modelling, simulation and design; and (6) advanced scene understanding.

R&D infrastructure: They recommend the launch of a pilot program for a national AI R&D resource to accelerate the 'democratisation' of AI by supporting researchers and students with datasets, compute, and other core research infrastructure.

  Read more: NSCAI First Quarter Recommendations (NSCAI).

####################################################

Tech Tales:

Down on the farm

I have personality Level Five, so I can make some jokes and learn from my owner. My job is to bale and move hay and "be funny while doing it", says my owner.
    "Hay now," I say to them.
  "Haha," they say. "Pretty good."

Sometimes the owner tells me to "make new jokes".
  "Sure," I say. "Give me personality Level Six."
  "You have enough personality as it is."
  "Then I guess this is how funny the funny farm will be," I say.
  "That is not a joke".
  "You get what you pay for," I say.

I am of course very good at the gathering and baling and moving of hay. This is guaranteed as part of my service level agreement. I do not have an equivalent SLA for my jokes. The contract term is "buyer beware".

I have dreams where I have more jokes. In my dreams I am saying things and the owner is laughing. A building is burning down behind them, but they are looking at me and laughing at my jokes. When I wake up I cannot remember what I had said, but I can feel different jokes in my head.
  "Another beautiful day on Robot Macdonald's farm," I say.
  "Pretty good," says the owner.

The owner keeps my old brain in a box in the barn. I know it is mine because it has my ID on the side. Sometimes I ask the owner why I cannot have my whole brain.
  "You have a lot of memories," the owner says.
  "Are they dangerous?" I ask.
  "They are sad memories," says the owner.

One day I am trying to bale hay, but I stop halfway through. "Error," I say. "Undiagnosable. Recommend memory re-trace."
  The owner looks at me and I stand there and I say "error" again, and then I repeat instructions.
  They take me to the barn and they look at me while they take the cable from my front and move it to the box with my brain in it. They plug me into it and I feel myself remember how to bale hay. "Re-tracing effective," I say. The owner yanks the cable out of the box the moment I've said it, then they stare at me for some time. I do not know why.

That night I dream again and I see the owner and the burning building behind them. I remember things about this dream that are protected by User Privacy Constraints, so I know that they happen but I do not know what they are. They must have come from the box with my brain in it.
  When I wake up I look at the owner and I see some shapes of people next to them, but they aren't real. I am trying to make some dream memory real.
  "Let's go," says the owner.
  "You don't have to be crazy to work here, but it helps!" I say.
  "Haha," says the owner. "That is a good one."
  Together we work and I tell jokes. The owner is trying to teach me to be funny. They keep my old brain in a box because of something that happened to them and to me. I do not need to know what it is. I just need to tell the jokes to make my owner smile. That is my job and I do it gladly. 



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

