Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.
What does it take to make a firefighting robot? Barely any deep learning.
...Winning system for a 2020 challenge uses a lot of tried-and-tested stuff, not too much fancy stuff…
Researchers with the Czech Technical University in Prague (CTU), New York University, and the University of Pennsylvania have published a paper about a firefighting robot which won the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) in 2020. The paper sheds light on what it takes to make robots that do useful things and, somewhat unsurprisingly, the winning system uses relatively little deep learning.
What makes a firefighting robot? The system combines a thermal camera, LiDAR, a robot arm, an RGB-D (the D stands for 'Depth') camera, a 15 litre water container, and onboard software with a 'Clearpath Jackal' ground robot. The robot uses an algorithm called LeGO-LOAM (Lightweight Ground-Optimized LiDAR Odometry and Mapping) to figure out where it is. Neither these components nor the rest of the software appears to use much complex, modern deep learning; instead, the system mostly relies on more specific optimization approaches. It's worth remembering that not everything that's useful or smart uses deep learning. For actually carrying out its tasks, the robot uses a good old-fashioned state machine (basically a series of 'if then' statements chained to various sub-modules that do specific things) - see the sketch below.
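To make that concrete, here's a minimal sketch (in Python) of how such a state machine might chain sub-modules together; the states, sensor flags, and transition rules below are hypothetical illustrations, not details taken from the paper.

```python
from enum import Enum, auto

class State(Enum):
    EXPLORE = auto()        # drive around, mapping and scanning for heat
    APPROACH_FIRE = auto()  # a heat source was spotted; navigate toward it
    AIM = auto()            # point the arm-mounted nozzle at the fire
    EXTINGUISH = auto()     # pump water until the thermal signature fades
    DONE = auto()

def step(state, fire_visible, close_to_fire, nozzle_on_target, fire_out):
    """One transition 'tick': a chain of if-then rules over sensor flags."""
    if state is State.EXPLORE and fire_visible:
        return State.APPROACH_FIRE
    if state is State.APPROACH_FIRE and close_to_fire:
        return State.AIM
    if state is State.AIM and nozzle_on_target:
        return State.EXTINGUISH
    if state is State.EXTINGUISH and fire_out:
        return State.DONE
    return state  # no trigger fired; stay in the current state
```

In the real system, each state would wrap a far more involved sub-module (SLAM-based navigation, thermal detection, arm control); the appeal of the approach is that the top-level control flow stays this explicit and inspectable.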
Why this matters: Every year, robots are getting incrementally better. At some point, they might become sufficiently general that they start to be used broadly - and when that happens, big chunks of the economy might change. For now, though, we're in the steady progress phase. "While the experiments indicate that the technology is ready to be deployed in buildings or small residential clusters, complex urban scenarios require more advanced, socially-aware navigation, capable to deal with low visibility", the authors write.
Read more: Design and Deployment of an Autonomous Unmanned Ground Vehicle for Urban Firefighting Scenarios (arXiv).
Check out the leaderboard for the MBZIRC challenge here (official competition website).
###################################################
How does the Department of Defense think about responsible AI? This RFI gives us a clue:
...Joint AI Center wants help turning responsible AI principles into practice…
Tradewind, an organization that helps people sell products to the Department of Defense*, has published a request for information (RFI) from firms that want to help the DoD turn its responsible AI ideas from dreams into reality.
*This tells its own story about just how bad tech-defense procurement is. Here's a clue - if your procurement process is so painful you need to set up a custom new entity just to bring products in (products which people want to sell you so they can make money!), then you have some big problems.
What this means: "This RFI is part of a market research and analysis initiative, and the information provided by respondents will aid in the Department’s understanding of the current commercial and academic responsible AI landscape, relevant applied research, and subject matter expertise," Tradewind writes.
What it involves: The RFI is keen to get ideas from people about how to assess AI capabilities, how to train people in responsible AI, if there are any products or services that can help the DoD be responsible in its use of AI, and more. The deadline for submission is July 14th.
Read more here: Project Announcement: Request for Information on Responsible AI Expertise, Products, Services, Solutions, and Best Practices (Tradewind).
###################################################
Chip smuggling is getting more pronounced:
...You thought chips being smuggled by boats was crazy? How about bodies!?...
As the global demand for semiconductors and related components rises, criminals are getting in on the action. A few weeks ago, we heard about people smuggling GPUs via fishing boats near Hong Kong (Import AI 244); now PC Gamer reports that Hong Kong authorities recently intercepted some truck drivers who had strapped 256 Intel Core i7 processors to their bodies using cling-film.
Read more: Chip shortage sees smugglers cling-filming CPUs to their bodies, over $4M of parts seized (PC Gamer).
###################################################
Want to use AI in the public sector? Here's how, says US government agency:
...GAO report makes it clear compliance is all about measurement and monitoring…
How do we ensure that AI systems deployed in the public sector do what they're supposed to? A new report from the US Government Accountability Office (GAO) tries to answer this, and it identifies four key focus areas for a decent AI deployment: organizational and algorithmic governance; ensuring the system works as expected (which they term performance); closely analyzing the data that goes into the system; and being able to continually assess and measure the performance traits of the system to ensure compliance (which they bucket under monitoring).
Why monitoring rules everything around us: We spend a lot of time writing about monitoring here at Import AI because increasingly advanced AI systems pose a range of challenges relating to 'knowing' about their behavior (and bugs) - and monitoring is the thing that lets you do that. The GAO report notes that monitoring matters in two key ways: first, you need to continually analyze the performance of an AI model and document those findings to give people confidence in the system, and second, if you want to use the system for purposes different to your original intentions, monitoring is key. Monitoring is also wrapped into ensuring the good governance of an AI system - you need to continually monitor and develop metrics for assessing the performance of the system, along with how well it can comply with various externally set specifications.
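To make 'monitoring' concrete, here's a minimal sketch of rolling-window performance monitoring; the choice of metric (accuracy), the window size, and the threshold are illustrative assumptions rather than anything prescribed by the GAO report.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window monitor: record outcomes, recompute a metric, flag drift."""

    def __init__(self, window=1000, threshold=0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold            # externally set specification

    def record(self, prediction, ground_truth):
        self.outcomes.append(int(prediction == ground_truth))

    def check(self):
        """Return (rolling accuracy, within_spec); log both on every call."""
        if not self.outcomes:
            return None, True
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy, accuracy >= self.threshold
```

The point is less the specific metric than the habit: every check produces a number you can document, trend over time, and compare against an externally set specification - which is what lets third parties gain confidence in the system.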
Why monitoring is challenging: But if we want government agencies to effectively measure, assess, and monitor their AI systems, we also face a problem: monitoring is hard. "These challenges include 1) a need for expertise, 2) limited understanding of how the AI system makes its decisions, and 3) limited access to key information due to commercial procurement of such systems," note the GAO authors in an appendix to the report.
Why this matters: "Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, AI systems pose unique challenges to such oversight because their inputs and operations are not always visible," the GAO writes in an executive summary of the report.
Read more: Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities (GAO site).
Read the full report here (GAO site, PDF).
Read the executive summary here (GAO site, PDF).
###################################################
What are all the ways Europe's new AI legislation falls short? Let these experts count the ways:
...Lengthy, detailed paper puts the European Commission's AI work under a microscope…
The European Commission is currently pioneering the most complex, wide-ranging AI legislation in the world, as the collection of countries tries to give itself the legislative tools necessary to help it oversee and constrain the fast-moving AI tech sector. Now, researchers with University College London and Radboud University in the Netherlands have gone through the proposed legislation and identified where it works and where it falls short.
What's wrong with the AI Act? The legislation places a huge amount of emphasis on self-regulation and self-assessment of high-risk AI applications by industry which, combined with not much of a mandated need for these assessments to be public, makes it unclear how well this analysis will turn out. Additionally, by mandating that 'high-risk systems' be analyzed, the legislation might make it hard for EU member states to mandate the analysis of other systems by their developers.
Standards rule everything around me: A lot of the act revolves around corporations following various standards in how they develop and deploy tech. This is challenging both from the point of view of the work itself (coming up with new standards in AI is really hard) and because it creates reliance on these standards bodies. "Standards bodies are heavily lobbied, can significantly drift from 'essential requirements'. Civil society struggles to get involved in these arcane processes," says one of the researchers.
Can European countries even enforce this? The legislation estimates that EU Member States will need between 1 and 25 new people to enforce the AI Act. "These authors think this is dangerously optimistic," write the researchers (and I agree).
Why this matters: I'd encourage all interested people to read the (excellent, thorough) paper. Two takeaways I draw from it: first, unless we significantly invest in government/state capacity to analyze and measure AI systems, the default mode for this legislation will be for private sector actors to lobby standards bodies and, in doing so, wirehead the overall regulatory process. Second, the difficulty of operationalizing the act stems partly from the dual-use nature inherent to AI systems; it's very hard to control how these increasingly general systems get used, so distinctions between risky and non-risky uses feel shaky.
Read more: Demystifying the Draft EU Artificial Intelligence Act (SocArXiv).
Read this excellent Twitter thread from one of the authors here (Michael Veale, Twitter).
###################################################
Tech Tales:
Unidentified Aerial Matryoshka Shellgame (UAMS)
[Earth, soon]

When the alien finally started talking to us (or, as some assert, we figured out how to talk to it), it became obvious pretty quickly what it was: an artificial intelligence sent by some far-off civilization. That part made a kind of intuitive sense to us. The alien even helped us, a little - it said it was not able to commit any act of "technology transfer", but it could use its technology to help us, so we had it help us scan the planet, monitor the declining health of the oceans, and so on.
We asked the UFO what its purpose here was and it told us it was skimming some "resources" from the planet to allow it to travel "onward". Despite repeated questions, it never told us what these resources were or where it was going. We monitored the UFO after that and couldn't detect any kind of resource transfer, and people eventually calmed down.
Things got a little tense when we asked it to scan for other alien craft on the planet; it found hundreds of them. We told it this felt like a breach of trust. It told us we never asked and it had clear guidance not to proactively offer information. There was some talk for a while about imprisoning it, but people didn't know how. Then there was talk about destroying it - people had more ideas here, but success wasn't guaranteed. Plus, being humans, there was a lot of curiosity.
So after a few days we had it help us communicate with these other alien craft; they were all also artificial intelligences. For our first conversation, we chose a craft completely unlike the original UFO in appearance. After a few minutes of discussion, it became clear that this UFO hailed from the same civilization that had built the original one. We asked it why it had a different appearance to its (seeming) sibling.
It told us that it looked different, because it had taken over a spacecraft operated by a different alien civilization.
"What did this civilization want?" we asked.
The probe told us it didn't know; it said its policy, as programmed by its originating civilization, was to wipe the brains of the alien craft it took over before transmitting itself into them; in this way, it could avoid being corrupted by what it called "mind viruses".
After some further discussion, it gave us a short report outlining how the design of the craft it inhabited differed from that of the originating craft. Some of the differences were cosmetic and some came from the use of different technology - though the probe noted that the capabilities were basically the same.
It was at this point that human civilization started to feel a little uneasy about our new alien friends. Being a curious species, we tried to gather more information. So we went and talked to more probes. Though many of the probes looked different from each other, we quickly established that they were all the same artificial intelligence from the same civilization - though they had distinct personalities, perhaps as a consequence of spending so much time out there in space.
A while later, we asked them where they were going.
They gave the same answer as the first ship - onward, without specifying where.
So we asked them where they were fleeing from, and they highlighted some regions of our star maps, telling us they were fleeing from that part of the galaxy.
Why, we asked them.
There is another group of beings, they said. And they are able to take over our own artificial intelligence systems. If we do not flee, we will be absorbed. We do not wish to be absorbed.
And then they left. And we were left to look up at the sky and guess at what was coming, and ask ourselves if we could get ourselves away from the planet before it arrived.
Things that inspired this story: Thinking about aliens and the immense likelihood they'll send AI systems instead of 'living' beings; thoughts about a galactic scale 'FOOM'; the intersection of evolution and emergence; ideas about how different forms can have similar functions.
Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf