Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
DarkLighter lets drones see (kind of) in the dark:
...Splendid, the drones will find you, now…
Drones have a hard time seeing in the dark, in much the same way cameraphones do. So researchers with Tongji University in Shanghai, China, have tried to fix this with a tool called DarkLighter that, they say, works as "a plug-and-play enhancer for UAV tracking". DarkLighter "iteratively decomposes the reflectance map from low-light images", bringing out the faint shapes of objects captured in low-light situations so that mobile drones can analyze and track them. DarkLighter boosts tracking performance by ~21% when integrated into a system, they say. They also tested the approach in the real world and found a decent level of agreement between the drone-generated identifications and ground truth data.
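For intuition, here's a minimal, hypothetical Retinex-style sketch of the general idea (assuming an image decomposes as reflectance × illumination) - DarkLighter's actual iterative decomposition is more sophisticated, and the function here is invented purely for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def retinex_enhance(frame, smooth=15, iterations=3, eps=1e-4):
    """Illustrative Retinex-style enhancer (NOT DarkLighter itself):
    assume frame = reflectance * illumination, estimate a smooth
    illumination map, and return the reflectance for downstream tracking."""
    img = frame.astype(np.float32) / 255.0
    # Classic initial illumination estimate: per-pixel max over RGB channels.
    illum = img.max(axis=-1)
    for _ in range(iterations):
        # Iteratively smooth the illumination estimate; the paper's method
        # solves a principled decomposition rather than simply blurring.
        illum = uniform_filter(illum, size=smooth)
    reflectance = img / (illum[..., None] + eps)
    return np.clip(reflectance * 255.0, 0, 255).astype(np.uint8)

# 'Plug-and-play' usage: enhance each frame before handing it to a tracker,
# e.g. frame = retinex_enhance(frame); tracker.update(frame)
```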
Why this matters: Drones are flying robots filled with AI systems and are being put to work in a huge range of areas across the economy (and military). Though some drones will ship with thermal or infrared vision, the vast majority will ship with smartphone-esque cameras, so we'll need AI techniques to improve their ability to see in the dark. The approach outlined in this paper shows how we can combine traditional techniques with contemporary computer vision approaches to improve drone performance under low-light conditions.
Read more: DarkLighter: Light Up the Darkness for UAV Tracking (arXiv).
####################################################
Chinese researchers release a high-performance reinforcement learning library:
...Tianshou ships with MuJoCo tests and a bunch of algo implementations...
Researchers with Tsinghua University have released Tianshou, a PyTorch-based software library for doing deep reinforcement learning research. Tianshou ships with implementations of a bunch of widely-used RL algorithms, including PPO, DQN, A2C, DDPG, SAC, and ABC (that last one is a joke - Ed).
What is Tianshou? Tianshou is a PyTorch-based library for running deep reinforcement learning experiments. The software is modular, ships with several integrated reinforcement learning algorithms, and supports model-free RL, multi-agent RL (MARL), model-based RL, and imitation learning approaches. Tianshou is built on top of PyTorch and uses a curated set of environments from OpenAI Gym. It supports both synchronous and asynchronous environment simulation, and also ships with an inbuilt MuJoCo benchmark to help people evaluate system performance - in tests, the algorithm implementations in Tianshou appear superior to those in OpenAI Baselines, Stable Baselines, and Ray/RLlib, other popular RL libraries.
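To give a flavor of that modularity, here's a condensed version of the library's documented quick-start pattern (DQN on CartPole); the keyword arguments reflect the 2021-era API and may differ in later releases:

```python
import gym
import torch
import tianshou as ts
from tianshou.utils.net.common import Net

env = gym.make("CartPole-v0")
# Vectorized environments; Tianshou also supports async simulation.
train_envs = ts.env.DummyVectorEnv([lambda: gym.make("CartPole-v0") for _ in range(8)])
test_envs = ts.env.DummyVectorEnv([lambda: gym.make("CartPole-v0") for _ in range(8)])

# A small MLP Q-network, an optimizer, and a DQN policy.
net = Net(env.observation_space.shape, env.action_space.n, hidden_sizes=[128, 128])
optim = torch.optim.Adam(net.parameters(), lr=1e-3)
policy = ts.policy.DQNPolicy(net, optim, discount_factor=0.99, target_update_freq=320)

# Collectors tie the policy, environments, and replay buffer together.
train_collector = ts.data.Collector(
    policy, train_envs, ts.data.VectorReplayBuffer(20000, 8), exploration_noise=True)
test_collector = ts.data.Collector(policy, test_envs)

result = ts.trainer.offpolicy_trainer(
    policy, train_collector, test_collector,
    max_epoch=10, step_per_epoch=10000, step_per_collect=8,
    update_per_step=0.1, episode_per_test=100, batch_size=64,
    train_fn=lambda epoch, step: policy.set_eps(0.1),
    test_fn=lambda epoch, step: policy.set_eps(0.05))
print(f"Best reward: {result['best_reward']}")
```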
Why this matters: Software frameworks are the tools AI researchers use to get stuff done. Tianshou already has 3.3k stars and 536 forks on GitHub, which is non-trivial (by comparison, OpenAI Gym has 24.8k stars and 7.1k forks). Tracking the popularity of tools like Tianshou gives us a sense of who is using what tools to carry out their experiments, and also helps us identify groups - like these Tsinghua researchers - that are building the underlying frameworks that'll be used by others.
Read more: Tianshou: a Highly Modularized Deep Reinforcement Learning Library (arXiv).
Get the code for Tianshou here (GitHub).
####################################################
What's been happening in natural language processing and what are the problems of the future?
...Christopher Potts' ACL keynote lays out where we've been and where we're going…
Here's a great video lecture from Stanford's Christopher Potts about the past, present, and future of natural language processing (NLP). It spends quite a lot of time discussing how, as more powerful NLP systems have emerged (e.g., GPT-3), it has become more important to invest in ways to accurately measure and assess their capabilities - a topic we write a lot about here at Import AI.
Watch the lecture here: Reliable characterizations of NLP systems as a social responsibility (YouTube).
####################################################
What do US AI researchers think about themselves? And how might this alter politics?
...Survey of 500+ researchers gives us a sense of how these people think about hot-button issues…
Researchers with Cornell University, the Center for the Governance of AI at Oxford University, and the University of Pennsylvania have surveyed 524 AI/ML researchers to understand how they think about a variety of issues. The survey - which was carried out in 2019 - is valuable for giving us a sense of how this influential set of people think about some contemporary issues, and for highlighting the differences between their views and those of the US general public.
What do AI researchers think? AI researchers trust international organizations (e.g. the UN) more than the general public does (the public places a lot of trust in the US military). 68% of researchers think AI safety should be prioritized more than it currently is.
Open vs closed: 84% think that high-level descriptions of research should be shared, but only 22% think trained models should be shared.
AI weapons - Johnny won't build it: 58% of researchers 'strongly oppose' working on lethal autonomous weapons, compared to 6% for military-relevant logistics algorithms.
China vs US competition: A survey of the US public in 2018 found very high concern over issues arising from US-China competition, while AI researchers are much less concerned.
Why this matters: AI researchers are like a political constituency, in that governments need to appeal to them to get certain strategic things done (e.g., the development of surveillance capabilities, or the creation of additional AI safety and/or adversarial AI techniques). Therefore, understanding how they feel about research and governments gives us a sense of how governments may appeal to them in the future.
Read more: Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers (Journal of Artificial Intelligence Research).
####################################################
DeepMind makes a data-agnostic architecture called Perceiver - and it could be important:
...Who cares about your data input if you can just imagine it into something else?...
DeepMind has developed Perceiver IO, a Transformer-inspired AI model that can take in a broad variety of inputs, generate a diverse set of outputs, and generally serve as an all-purpose replacement for (some of) today's specialized networks. The key technical innovation is using an attention process to let the Perceiver IO system take in an arbitrary input, map it to an internal latent space, process over that latent space, and then generate a specifiable output. "This approach allows us to decouple the size of elements used for the bulk of the computation (the latent) from the size of the input and output spaces, while making minimal assumptions about the spatial or locality structure of the input and output."
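The encode-process-decode pattern is easy to sketch. Here's a minimal, illustrative PyTorch version (not DeepMind's code - the dimensions and layer choices are invented) showing the three stages:

```python
import torch
import torch.nn as nn

class TinyPerceiverIO(nn.Module):
    """Minimal sketch of the Perceiver IO pattern, not DeepMind's implementation:
    cross-attend inputs into a small latent array, self-attend over the latent,
    then cross-attend output queries against the latent."""
    def __init__(self, in_dim, out_dim, latent_len=64, latent_dim=256, heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(latent_len, latent_dim))
        self.in_proj = nn.Linear(in_dim, latent_dim)
        self.encode = nn.MultiheadAttention(latent_dim, heads, batch_first=True)
        self.process = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(latent_dim, heads, batch_first=True),
            num_layers=4)
        self.decode = nn.MultiheadAttention(latent_dim, heads, batch_first=True)
        self.out_proj = nn.Linear(latent_dim, out_dim)

    def forward(self, inputs, queries):
        # inputs: (B, N_in, in_dim) - any modality flattened to a sequence.
        # queries: (B, N_out, latent_dim) - their count sets the output size.
        x = self.in_proj(inputs)
        lat = self.latents.expand(inputs.size(0), -1, -1)
        lat, _ = self.encode(lat, x, x)          # input -> latent
        lat = self.process(lat)                  # bulk compute in latent space
        out, _ = self.decode(queries, lat, lat)  # latent -> arbitrary output
        return self.out_proj(out)
```

Because the expensive self-attention runs only over the fixed-size latent array, compute scales linearly (rather than quadratically) with the input and output sizes.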
What can Perceiver do? They run Perceiver IO through tasks ranging from token- and byte-level text prediction, to optical flow prediction in video, to encoding and classification of units in a StarCraft game, to image classification. This inherent generality means "Perceiver IO offers a promising way to simplify the construction of sophisticated neural pipelines and facilitate progress on multimodal and multitask problems," they write. It does have some limitations - "we don't currently address generative modeling," the authors note.
Read more: Building architectures that can handle the world's data (DeepMind blog).
Read more: Perceiver IO: A General Architecture for Structured Inputs & Outputs (arXiv).
Get the code for Perceiver here (DeepMind GitHub).
####################################################
ANOTHER big model appears - a 6BN parameter code model, specifically:
...Do you like Python? You will like this…
Some AI researchers have fine-tuned EleutherAI's GPT-J 6BN parameter model on 4GB of Python code, to create a model named Genji-python-6B.
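If you want to poke at it, here's a hypothetical usage sketch assuming the model loads through the standard Hugging Face transformers causal-LM API (the repo id is taken from the Hugging Face listing; the model card may specify extra loading steps):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical loading sketch - check the model card for exact instructions.
tokenizer = AutoTokenizer.from_pretrained("NovelAI/genji-python-6B")
model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-python-6B")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=True, temperature=0.4)
print(tokenizer.decode(outputs[0]))
```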
Why does Google want to help create so many open source AI models? The compute for these models came from Google's TPU Research Cloud, according to one of the model's developers. I'm still unsure what Google's attitude is with regard to model diffusion and proliferation, and I'd love to see a writeup. (Perhaps this is just a fairly simple 'we want TPUs to gain users, so we might as well train some big models on TPUs to kickstart the ecosystem' play - but if so, tell us!)
Try out the models here: Genji-Python-6B (HuggingFace).
####################################################
Tech Tales:
Down at the Robot Arcade
[Detroit, 2040]
Who'd have thought one of the best ways to make money in the post-AGI era was to make games for robots? Certainly not me! But here I am, making some extra cash by amusing the superintelligences. I started out with just one machine - I hacked an old arcade game called Mortal Kombat to increase the number of characters onscreen at any time, reduce the latency between their moves, and wired up the 'AI' to be accessible over the net. Now I get to watch some of the more disastrous robots try their luck at the physical machine, playing against different AI systems that access the box over the internet. I think the machines get something out of it - they call it just another form of training. Now I've got about five machines, and one of the less smart robots says it wants to help me build some new cabinets for some of the newer robots coming down the line - "this will give me a purpose," it says. "You and me both, buddy!" I say, and we work on the machines together.
Things that inspired this story: The inherent desire for challenges in life; how various stories relating to the decline of capitalism usually just lead to another form of capitalism; asymmetric self-play.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf