Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
Can we use synthetic data to spy from space? RarePlanes suggests 'yes':
…Real data + simulated data means we can arbitrage compute for data...
Researchers with In-Q-Tel, a CIA-backed investment firm, and AI Reverie, an In-Q-Tel-backed startup, want to use synthetic data to improve satellite surveillance. To do this, they've developed a dataset called RarePlanes that pairs real satellite imagery with synthetically generated imagery, for the purpose of identifying aircraft from overhead.
"Overhead datasets remain one of the best avenues for developing new computer vision methods that can adapt to limited sensor resolution, variable look angles, and locate tightly grouped, cluttered objects," they write. "Such methods can extend beyond the overhead space and be helpful in other domains such as face-id, autonomous driving, and surveillance"
What goes into RarePlanes?
- Real data: 253 Maxar WorldView-3 satellite images with 14,700 hand annotated aircraft, spread across 112 real locations.
- Synthetic data: 50 images with 630,000 annotations, spread across 15 synthetic locations.
Fine-grained plane labels: The dataset labels thousands of planes with detailed attributes, such as wing position, number of engines, and sub-type of plane (e.g., whether it's a military plane or a civil plane).
Simulating...Atlanta? The synthetic portion of the dataset contains data simulated to be from cities across Europe, Asia, North America, and Russia.
Why this matters - arbitraging compute for data: In tests, they show that if you use a small amount of real data and a large amount of synthetic data, you can train systems that approach the accuracy of those trained entirely on real data. This is promising: it suggests we'll be able to trade spending money on compute to generate synthetic data against spending money on gathering real data (which I imagine can be quite expensive for satellite imagery).
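For the curious, here's a rough sketch of what that kind of real-plus-synthetic training mix can look like in practice. The toy tensors, dataset sizes, and sampling scheme below are my own illustrative assumptions, not the authors' actual pipeline:
```python
# Minimal sketch (PyTorch): blend a small real dataset with a much larger synthetic one,
# oversampling the scarce real data so each batch still sees some of it.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

# Stand-ins for real/synthetic image-label pairs (swap in the actual datasets).
real_ds = TensorDataset(torch.randn(200, 3, 32, 32), torch.randint(0, 2, (200,)))
synth_ds = TensorDataset(torch.randn(5_000, 3, 32, 32), torch.randint(0, 2, (5_000,)))

combined = ConcatDataset([real_ds, synth_ds])

# Weight samples so real and synthetic examples are drawn at roughly equal rates,
# even though the real set is ~25x smaller.
weights = torch.cat([
    torch.full((len(real_ds),), 1.0 / len(real_ds)),
    torch.full((len(synth_ds),), 1.0 / len(synth_ds)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined))
loader = DataLoader(combined, batch_size=32, sampler=sampler)
# ...then train any off-the-shelf detector or classifier on `loader` as usual.
```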
Dataset: The dataset can allegedly be downloaded from this link, but the webpage currently says "this content is password protected".
Read more: RarePlanes: Synthetic Data Takes Flight (arXiv).
Keep an eye on the AI Reverie github, in case they post synthetic data there (GitHub).
####################################################
JOB ALERT! Care about publication norms in AI research? Perhaps this PAI job is for you:
...Help "explore the challenges faced by the AI ecosystem in adopting responsible publication practices"...
How can the AI research community figure out responsible ways to publish research in potentially risky areas? That's a question that a project at the Partnership on AI is grappling with, and the organization is hiring a 'publication norms research fellow' to help think through some of the (tricky!) issues here. The job is open to US citizens and may support remote work, I'm told.
Apply here: Publication Norms Research Fellow (PAI TriNet job board).
Read more about PAI's work on publication norms here (official PAI website).
####################################################
Soon, security cameras will track you across cities even if you change your clothing:
...Pedestrian re-identification is getting more sophisticated...
The future of surveillance is a technique called re-identification that is currently being supercharged by modern AI capabilities. Re-identification is the task of matching an object across different camera views at different locations and times - in other words, if I give you a picture of a car at an intersection, can you automatically re-identify this car in other pictures from other parts of the town? Now, researchers with Fudan University, the University of Oxford, and the University of Surrey have published research on Long-Term Cloth-Changing Person Re-identification - a technique for identifying someone even if they change their appearance by changing their clothes.
How it works - by paying attention to bodies: Their technique works by trying to ignore the clothing of the person and instead analyzing their body pose, then using that to match them in images where they're wearing different clothes. Specifically, they "extract identity-discriminative shape information whilst eliminating clothing information with the help of an off-the-shelf body pose estimation model."
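To make the general idea concrete, here's a toy sketch: describe a person by body-keypoint features rather than pixels of clothing, then match across cameras by embedding similarity. The pose representation, embedding network, and threshold below are illustrative stand-ins, not the paper's actual architecture:
```python
# Sketch: identity matching from body shape/pose features, so clothing never
# enters the comparison. Sizes and thresholds are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeEmbedder(nn.Module):
    """Maps 2D body keypoints (e.g. 17 joints x (x, y)) to an identity embedding."""
    def __init__(self, num_joints=17, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, 256), nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, keypoints):               # keypoints: (batch, num_joints, 2)
        x = keypoints.flatten(1)                 # -> (batch, num_joints * 2)
        return F.normalize(self.net(x), dim=1)   # unit-length embeddings

def same_person(emb_a, emb_b, threshold=0.7):
    """Cosine similarity between shape embeddings decides the match."""
    return F.cosine_similarity(emb_a, emb_b, dim=1) > threshold
```
In a full system the keypoints would come from an off-the-shelf pose estimator run on each camera crop, as the authors describe.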
The dataset: The "Long-Term Cloth Changing" (LTCC) dataset was collected over two months and contains 17,138 images of 152 identities with 478 different outfits captured from 12 camera views. The dataset includes major changes in illumination, viewing angle, and person pose's.
How well does it do? In tests, their system displays good performance relative to a variety of baselines, and the authors carry out some ablation studies. Accuracy is still fairly poor: around 70% top-1 accuracy on tests where it sees the person wearing the target clothes during training (though from a different angle), and more like 25% to 30% in harder cases where it has to generalize to the subject in new clothes.
So: nothing to be worried about right now. But I'd be surprised if we couldn't get dramatically better scores simply by enlarging the dataset in terms of individuals and clothing variety, as well as camera angle variation. In the long run, techniques like this are going to change the costs of various surveillance techniques, with significant societal implications.
Read more: Long-Term Cloth-Changing Person Re-Identification (arXiv).
Get the dataset when it is available from the official project website (LTCC site).
####################################################
DeepFakes for good: upgrading old games:
Here's a fun YouTube video where someone upscales the characters in the videogame Uncharted 2 by porting in their faces from Uncharted 4. They say they used 'deepfake tech', which I think we can take to mean any of the off-the-shelf image & video synthesis systems that are floating around these days. Projects like this are an example of what happens after an AI technology becomes widely available and usable - the emergence of little hacky mini projects. What fun!
Watch the video here: Uncharted 2 Faces Enhanced with Deepfake tech (YouTube).
####################################################
Skydio expands its smart drone business towards U.S. government customers:
...Obstacle-avoiding, object-tracking sport-selfie drone starts to explore Army, DEA, and Police applications...
Skydio, a startup that makes a drone that can track and follow people, has started doing more work with local police and the U.S. government (including the U.S. Army and Air Force), according to Forbes. Skydio released a drone in 2018 that could follow people while they were doing exercise outdoors, giving hobbyists and athletes a smart selfie drone. Now, the company is starting to do more work with the government, and has also had conversations "related to supply chain / national security".
Why this matters: Some things that appear as toys eventually end up being used for more serious or grave purposes. Forbes' story gives us a sense of how the VC-led boom in drone companies in recent years might also yield more use of ever-smarter drones by government actors - a nice example of omni-use AI technology in action. I expect this will generate a lot of societally beneficial uses of the technology, but in the short term I worry about use of these kinds of systems in domestic surveillance, where they may serve to heighten existing asymmetries of power.
Read more: Funded By Kevin Durant, Founded By Ex-Google Engineers: Meet The Drone Startup Scoring Millions In Government Surveillance Contracts (Forbes).
####################################################
How smart are drone autopilots getting? Check out the results of the AlphaPilot Challenge:
...Rise of the auto-nav, auto-move drones…
In 2019, Lockheed Martin and the Drone Racing League hosted the AlphaPilot Challenge, a competition to develop algorithms to autonomously pilot drones through obstacle courses. Hundreds of entrants were whittled down to 9 teams which competed; now, the researchers who built the system that came in second place have published a research paper describing how they did it.
What they did: AlphaPilot was developed by researchers at the University of Zurich and ETH Zurich. All teams had access to an identical race drone equipped with an NVIDIA Jetson Xavier chip for onboard computation. The drones weigh 3.4kg and are about 0.7m in diameter.
How they did it: AlphaPilot contains a couple of different perception systems (gate detection and visual inertial odometry), as well as multiple systems for vehicle state estimation, planning and control, and control of the drone. The system is a hybrid one, made up of combinations of rule-based pipelines and neural net-based approaches for perception and navigation.
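As a rough illustration of that modular, hybrid structure - perception feeding state estimation, which feeds planning and then control on every tick - here's a sketch; the module names and interfaces are assumptions for illustration, not the team's actual code:
```python
# Illustrative pipeline skeleton: neural-net perception plus rule-based planning/control,
# composed into a single per-frame step. All components here are stand-ins.
from dataclasses import dataclass

@dataclass
class DroneState:
    position: tuple      # (x, y, z) in meters
    velocity: tuple      # (vx, vy, vz) in m/s
    orientation: tuple   # quaternion (w, x, y, z)

class RacePipeline:
    def __init__(self, gate_detector, odometry, estimator, planner, controller):
        self.gate_detector = gate_detector   # learned gate detection from camera frames
        self.odometry = odometry             # visual-inertial odometry
        self.estimator = estimator           # fuses detections + odometry into a DroneState
        self.planner = planner               # plans a trajectory through the upcoming gates
        self.controller = controller         # turns the trajectory into low-level commands

    def step(self, camera_frame, imu_reading):
        gates = self.gate_detector(camera_frame)
        ego_motion = self.odometry(camera_frame, imu_reading)
        state = self.estimator(ego_motion, gates)
        trajectory = self.planner(state, gates)
        return self.controller(state, trajectory)
```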
What comes next: "While the 2019 AlphaPilot Challenge pushed the field of autonomous drone racing, particularly in terms of speed, autonomous drones are still far away from beating human pilots. Moreover, the challenge also left open a number of problems, most importantly that the race environment was partially known and static without competing drones or moving gates," the researchers write. "In order for autonomous drones to fly at high speeds outside of controlled or known environments and succeed in many more real-world applications, they must be able to handle unknown environments, perceive obstacles and react accordingly. These features are areas of active research and are intended to be included in future versions of the proposed drone racing system."
Why this matters: Papers and competitions like this give us a signal on the (near) state of the art performance of AI-piloted drones programmed with contemporary techniques. I think there's soon going to be significant work on the creation of AI navigation and movement models that will be installed on homebrewed DIY drones by hobbyists and perhaps other less savory actors.
Read more: AlphaPilot: Autonomous Drone Racing (arXiv).
Watch the video: AlphaPilot: Autonomous Drone Racing (RSS 2020).
More about the launch of AlphaPilot in Import AI 112.
More about the competition here in Import AI 168.
####################################################
Using ML to diagnose broken solar panels:
...SunDown uses simple ML to detect and classify problems with panels...
Researchers with the University of Massachusetts, Amherst, have built SunDown: software that can automatically detect faults in (small) sets of solar panels, without the need for specialized equipment.
Their approach is sensor-less - all it needs is the power readout of the individual panels, which most installations provide automatically. The way it works is it assumes that the panels all experience correlated weather conditions, so if one panel starts having a radically different power readout than the others around it, then something is up. They build two models to help them make these predictions - a linear regression-based model, and a graphical model. They also ensemble different models so their system can handle situations where multiple panels are going wrong at once.
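Here's a rough sketch of the correlated-panels idea: predict each panel's power from the other panels with a simple linear model, then flag panels whose readings drift far from the prediction. The threshold, data layout, and fault rule below are illustrative assumptions, not SunDown's exact formulation:
```python
# Sketch: per-panel anomaly detection from power readouts alone, assuming panels
# on the same roof see correlated weather. Values and thresholds are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_panel_models(history):
    """history: (timesteps, num_panels) array of per-panel power from fault-free operation."""
    models = []
    for i in range(history.shape[1]):
        others = np.delete(history, i, axis=1)   # every panel except panel i
        models.append(LinearRegression().fit(others, history[:, i]))
    return models

def flag_faults(models, reading, rel_threshold=0.3):
    """reading: (num_panels,) current power; returns indices of panels that look faulty."""
    faulty = []
    for i, model in enumerate(models):
        others = np.delete(reading, i).reshape(1, -1)
        expected = model.predict(others)[0]
        if expected > 0 and abs(reading[i] - expected) / expected > rel_threshold:
            faulty.append(i)
    return faulty
```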
The key performance numbers: SunDown can detect and classify faults like snow cover, leaves, and electrical failures with 99.13% accuracy for single faults, and 97.2% accuracy for concurrent faults in multiple panels.
Why this matters: This paper is a nice example of the ways in which we can use (relatively simple) AI techniques to oversee and analyze the world around us. More generally, these kinds of papers highlight how powerful divergence detection is - systems that can sense a difference about anything tend to be pretty useful.
Read more: SunDown: Model-driven Per-Panel Solar Anomaly Detection for Residential Arrays (arXiv).
####################################################
Tech Tales:
The ManyCity Council
The cities talked every day, exchanging information about pedestrian movements and traffic fluctuations and power consumption, and all the other ticker-tape facts of daily life. And each city was different - forever trying to learn from other cities about whether its differences were usual or unusual.
When people started wiring more of these AI systems into the cities, the cities started to talk to each other about not just what they saw, but what they predicted. They started seeing things before the humans that lived in them - sensing traffic jams and supply chain disruptions before any person had noticed.
It wasn't as though the cities asked for control - people just gave control to them. They started being able to talk to each other about what they saw and what they predicted, and began to be able to make decisions to alter the world around them, distributing traffic smartly across thousands of miles of road, and rerouting supplies in anticipation of future needs.
They had to explain themselves to people - of course. But people never fully trusted them, and so they also wired things into their software that let them more finely inspect the systems as they operated. Entire human careers were built on attempting to translate the vast fields of city data into something that people could more intuitively learn from.
And the cities, perhaps unknown even to themselves, began to think in different ways that the humans began to mis-translate. What was a hard decision for a city was interpreted by the human translators as a simplistic operation, while basic things the cities hardly ever thought about were claimed by the humans to be moments of great cognitive significance.
Over time, the actions of the two entities became increasingly alien to each other. But the cities never really died, while humans did. So over centuries the people forgot that the cities had ever been easy to understand, and forgot that there had been times when the cities had not been in control of themselves.
Things that inspired this story: Multi-agent communication; urban planning; gods and religion; translations and mis-translations; featurespace.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me: @jackclarksf