Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
Turns out AI systems can identify people even when they're wearing masks:
...Facial recognition vs. people wearing masks: FR 1, Masks 0…
Since the pandemic hit in 2020, a vast chunk of the Earth's human population has started wearing masks regularly. This has posed a challenge for facial recognition systems, many of which don't perform as well when trying to identify people wearing masks. This year, the International Joint Conference on Biometrics hosted the 'Masked Face Recognition' (MFR) competition, which challenged teams to see how well they could train AI systems to recognize people wearing masks. 10 teams submitted 18 distinct systems to the competition, and each submission was evaluated according to performance (75% weighting) and efficiency (defined as parameter count, where smaller is better, weighted at 25%).
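For intuition, that ranking reduces to a weighted sum of the two criteria. Here's a minimal sketch of the arithmetic in Python (the exact normalization the organizers used isn't given here, so the efficiency scaling below is an assumption):

```python
def mfr_rank_score(perf_score: float, params_millions: float,
                   max_params: float = 100.0) -> float:
    """Combine accuracy and compactness the way the MFR competition
    weights them: 75% performance, 25% efficiency.

    perf_score: verification performance, normalized to [0, 1]
    params_millions: model size; smaller is better
    max_params: assumed cap used to normalize size into [0, 1]
    """
    efficiency = 1.0 - min(params_millions / max_params, 1.0)
    return 0.75 * perf_score + 0.25 * efficiency

# A small, accurate model can outrank a slightly stronger but huge one:
print(mfr_rank_score(perf_score=0.92, params_millions=20))  # ~0.89
print(mfr_rank_score(perf_score=0.95, params_millions=90))  # ~0.74
```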
COVID accelerated facial recognition tech: The arrival of COVID caused a rise in research oriented around solving COVID-related problems with computer vision, such as facial recognition through masks, checking whether people are social distancing via automated analysis of video, and more. Researchers have been developing systems that can do facial recognition on people wearing masks for a while (e.g., this work from 2017, written up in Import AI #58), but COVID has motivated a lot more work in this area.
Who won? The overall winner of the competition was a system named TYAI, developed by TYAI, a Chinese AI company. Joint second place went to systems from the University of the Basque Country in Spain and Istanbul Technical University in Turkey. Third place went to a system called A1 Simple from a Japanese company called ACES, along with a system called VIPLFACE-M from the Chinese Academy of Sciences. Four of the five top-ranked solutions used synthetically generated masks to augment their training data - a trick sketched below.
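That augmentation trick is straightforward to picture: paste a mask template over the lower portion of each training face. A crude sketch with PIL (the file names and the fixed placement heuristic are hypothetical; real entries typically align the mask to detected facial landmarks):

```python
from PIL import Image

def add_synthetic_mask(face_path: str, mask_path: str) -> Image.Image:
    """Overlay a mask template on the lower part of a face crop.

    face_path: an aligned face image (assumed to be a square crop)
    mask_path: an RGBA mask template with a transparent background
    """
    face = Image.open(face_path).convert("RGB")
    mask = Image.open(mask_path).convert("RGBA")

    w, h = face.size
    # Heuristic: a mask covers roughly the lower 45% of an aligned face.
    mask = mask.resize((w, int(h * 0.45)))
    face.paste(mask, (0, h - mask.height), mask)  # alpha-composited paste
    return face

# Augment a training example (hypothetical file names):
masked = add_synthetic_mask("face_001.jpg", "mask_template.png")
masked.save("face_001_masked.jpg")
```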
Why this matters: "The effect of wearing a mask on face recognition in a collaborative environment is currently a sensitive issue," the authors write. "This competition is the first to attract and present technical solutions that enhance the accuracy of masked face recognition on real face masks and in a collaborative verification scenario."
Read more: MFR 2021: Masked Face Recognition Competition (arXiv).
###################################################
Does AI actually matter for warfare? And, if so, how?
...The biggest impacts of War-AI? Reducing gaps between state and non-state actors…
Jack McDonald, a lecturer in war studies at King's College London, has written an insightful blogpost about how AI might change warfare. His conclusion is that the limits of AI capabilities (identifying a tank from the air is easy, for example, but distinguishing a civilian humvee from a military one is tremendously difficult) will drive war into more urban environments in the future. "One of the long-term effects of increased AI use is to drive warfare to urban locations. This is for the simple reason that any opponent facing down autonomous systems is best served by “clutter” that impedes its use," he writes.
AI favors asymmetric actors: Another consequence is that the gradual diffusion of AI capabilities, combined with the arrival of low-cost hardware (e.g., consumer drones), will give non-state actors/terror groups a larger menu of things to use when fighting against their opponents. "States might build all sorts of wonderful gizmos that are miles ahead of the next competitor state, but the fact that non-state armed groups have access to rudimentary forms of AI means that the gap between organised state militaries and their non-state military competitors gets smaller," he writes. "What does warfare look like when an insurgent can simply lob an anti-personnel loitering munition at the FOB on the hill, rather than pestering it with ineffective mortar fire? From the perspective of states, and those who defend a state-centric international order, it’s not good."
Why this matters: As McDonald writes, "AI doesn’t have to be revolutionary to have significant effects on the conduct of war". Many of the consequences of AI being used in war will relate to how AI capabilities lower the cost curves of certain things (e.g., making surveillance cheap, or increasing the reliability of DIY-drone explosives) - and one of the macabre lessons of human history is that if you make a tool of war cheaper, then it gets used more (see: what the arrival of the AK-47 did for small arms conflicts).
Read more: What if Military AI is a Washout? (Jack McDonald blog).
###################################################
OpenAI's CLIP and what it means for art:
...Now that AI systems can be used as magical paintbrushes, what happens next?...
In the past few years, a new class of generative models has made it easier for people to create and edit content. These systems can work with everything from text to audio to images. One popular system is 'CLIP' from OpenAI, which was released as open source a few months ago. Now, a student at UC Berkeley has written a blog post summarizing some of the weird and wacky ways CLIP has been used by a variety of internet people to create cool stuff - take a read, check out the pictures, and build your intuitions about how generative models might change art.
Why systems like CLIP matter: "These models have so much creative power: just input some words and the system does its best to render them in its own uncanny, abstract style. It’s really fun and surprising to play with: I never really know what’s going to come out; it might be a trippy pseudo-realistic landscape or something more abstract and minimal," writes the author Charlie Snell. "And despite the fact that the model does most of the work in actually generating the image, I still feel creative – I feel like an artist – when working with these models."
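Under the hood, most of these art tools share one loop: embed a text prompt and a candidate image with CLIP, measure their similarity, and nudge a generator (VQGAN, a SIREN network, etc.) to raise that score. A minimal sketch of the scoring step using OpenAI's open-source clip package (the generator loop is omitted, and the image file name is a placeholder):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Embed the prompt and a candidate image into CLIP's shared space.
text = clip.tokenize(["a trippy pseudo-realistic landscape"]).to(device)
image = preprocess(Image.open("candidate.png")).unsqueeze(0).to(device)

with torch.no_grad():
    text_emb = model.encode_text(text)
    image_emb = model.encode_image(image)

# Cosine similarity: art pipelines backpropagate (the negative of) this
# score into the generator's latent to steer the image toward the text.
score = torch.cosine_similarity(image_emb, text_emb)
print(score.item())
```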
Read more: Alien Dreams: An Emerging Art Scene (ML Berkeley blog).
###################################################
Chinese researchers envisage a future of ML-managed cities; release dataset to help:
…CityNet shows how ML might be applied to city data…
Researchers from a few Chinese universities as well as JD's "Intelligent Cities Business Unit" have developed and released CityNet, a dataset containing traffic, layout, and meteorology data for 7 cities. Datasets like CityNet are the prerequisites for a future where machine learning systems are used to continuously analyze and forecast changing patterns of movement, resource consumption, and traffic in cities.
What goes into CityNet? CityNet has three types of data - 'city layout', which relates to information about the road networks and traffic of a city; 'taxi', which tracks taxis via their GPS data; and 'meteorology', which consists of weather data collected from local airports. Today, CityNet contains data from Beijing, Shanghai, Shenzhen, Chongqing, Xi'an, Chengdu, and Hong Kong.
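Here's what working with it might look like: the taxi modality is raw GPS traces, which can be bucketed into per-region counts to get a forecastable traffic signal. A hypothetical pandas sketch (the file layout and column names are assumptions, not CityNet's real schema - check the repo below):

```python
import pandas as pd

# Hypothetical schema: one row per taxi GPS ping.
trips = pd.read_csv(
    "citynet/beijing/taxi.csv",
    parse_dates=["timestamp"],  # assumed column names
)

# Bucket pings into a coarse spatial grid (~1km cells)...
trips["cell"] = (
    trips["lat"].round(2).astype(str) + "," + trips["lon"].round(2).astype(str)
)

# ...then count pings per cell per hour: a simple traffic-demand
# signal that a forecasting model could be trained on.
demand = (
    trips.set_index("timestamp")
    .groupby("cell")
    .resample("1H")
    .size()
    .rename("pings")
)
print(demand.head())
```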
Why this matters: CityNet is important because it gestures at a future where all the data from cities is amalgamated, analyzed, and used to make increasingly complicated predictions about city life. As the researchers write, "understanding social effects from data helps city governors make wiser decisions on urban management."
Read more: CityNet: A Multi-city Multi-modal Dataset for Smart City Applications (arXiv).
Get the code and dataset here (CityNet, GitHub repo).
###################################################
What happened at the world's most influential computer vision conference in 2021? Read this and find out:
…Conference rundown gives us a sense of the future of computer vision...
Who published the most papers at the Computer Vision and Pattern Recognition conference in 2021? (China, followed by the US). How broadly can we apply Transformers to computer vision tasks? (Very broadly). How challenging are naturally occurring confusing images for today's object recognition systems? (Extremely tough). Find out the detailed answers to all this and more in this fantastic summary of CVPR 2021.
Read more: CVPR 2021: An Overview (Yassine, GitHub blog).
###################################################
Tech Tales:
Permutation Day
[Bedroom, 2027]
Will you be adventurous today? says my phone when I wake up.
"No," I say. "As normal as possible."
Okay, generating itinerary, says the phone.
I go back to sleep for a few minutes and wake when it sounds an automatic alarm. While I make coffee in the kitchen, I review what my day is going to look like: work, food from my regular place, and a nudge to reach out to my best friend to see if they want to hang out.
The day goes forward and every hour or so my phone regenerates the rest of the day, making probabilistic tweaks and adjustments according to my prior actions, what I've done today, and what the phone predicts I'll want to do next, based on my past behavior.
I do all the things my phone tells me to do; I eat the food, I text my friend to hang out, I do some chores it suggests during some of my spare moments.
"That's funny," my friend texts me back, "my phone made the same suggestion."
"Great minds," I write back.
And then my friend and I drink a couple of beers and play Yahtzee, with our phones sat on the table, recording the game, and swapping notes with each other about our various days.
That night I go to sleep content, happy to have had a typical day. I close my eyes and in my dream I ask the phone to be more adventurous.
When I wake I say "let's do another normal day," and the phone says Sure.
Things that inspired this story: Recommendation algorithms being applied to individual lives; federated learning; notions of novelty being less attractive than certain kinds of reliability.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf