Import AI 257: Firefighting robots; how Europe's AI legislation falls short; what the DoD thinks about responsible AI

Would a dataset of the entire universe be sufficient to encapsulate anything a being stationed in that universe could imagine? Or would it be insufficient in some way?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

What does it take to make a firefighting robot? Barely any deep learning.
...Winning system for a 2020 challenge uses a lot of tried-and-tested stuff, not too much fancy stuff…
Researchers with the Czech Technical University in Prague (CTU), New York University, and the University of Pennsylvania have published a paper about a firefighting robot which won the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) in 2020. The paper sheds light on what it takes to make robots that do useful things and, somewhat unsurprisingly, the winning system uses relatively little deep learning.

What makes a firefighting robot? The system combines a thermal camera, LiDAR, a robot arm, an RGB-D (the D stands for 'Depth') camera, a 15 litre water container, and onboard software, with a 'Clearpath Jackal' ground robot. The robot uses an algorithm called LeGO-LOAM (Lightweight Ground-Optimized LiDAR Odometry and Mapping) to figure out where it is. None of these components or the other software appears to use much complex, modern deep learning; instead, they mostly rely on more specific optimization approaches. It's worth remembering that not everything that's useful or smart uses deep learning. For actually carrying out its tasks, the robot uses a good old-fashioned state machine (basically a series of 'if-then' statements which are chained to various sub-modules to do specific things).
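The paper's code isn't reproduced here, but the control style it describes - a state machine dispatching to sub-modules - can be sketched in a few lines. The state names and observation flags below are my own invention for illustration, not taken from the paper:

```python
def next_state(state, obs):
    """One tick of a hypothetical firefighting state machine: chained
    'if-then' rules move the robot between behaviors, each of which
    would dispatch to a sub-module (navigation, perception, arm control)."""
    if state == "EXPLORE":
        # Drive around and map with LiDAR until the thermal camera sees fire.
        return "APPROACH" if obs.get("fire_detected") else "EXPLORE"
    if state == "APPROACH":
        # Navigate toward the heat source until the nozzle is in range.
        return "AIM" if obs.get("in_range") else "APPROACH"
    if state == "AIM":
        # Use the arm to line up the water nozzle with the fire.
        return "EXTINGUISH" if obs.get("nozzle_aligned") else "AIM"
    if state == "EXTINGUISH":
        # Pump water until the fire is out (or the water tank runs dry).
        done = obs.get("fire_out") or obs.get("water_empty")
        return "EXPLORE" if done else "EXTINGUISH"
    return state
```

Part of the appeal of this structure is legibility: you can read off exactly what the robot will do in any situation, which is hard to say about an end-to-end learned policy.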

Why this matters: Every year, robots are getting incrementally better. At some point, they might become sufficiently general that they start to be used broadly - and when that happens, big chunks of the economy might change. For now, though, we're in the steady progress phase. "While the experiments indicate that the technology is ready to be deployed in buildings or small residential clusters, complex urban scenarios require more advanced, socially-aware navigation, capable to deal with low visibility", the authors write.
  Read more: Design and Deployment of an Autonomous Unmanned Ground Vehicle for Urban Firefighting Scenarios (arXiv).
  Check out the leaderboard for the MBZIRC challenge here (official competition website).

###################################################

How does the Department of Defense think about responsible AI? This RFI gives us a clue:
...Joint AI Center gives us a clue…
Tradewind, an organization that helps people sell products to the Department of Defense*, has published a request for information from firms that want to help the DoD turn its responsible AI ideas from dreams into reality.
*This tells its own story about just how bad tech-defense procurement is. Here's a clue - if your procurement process is so painful you need to set up a custom new entity just to bring products in (products which people want to sell you so they can make money!), then you have some big problems.

What this means: "This RFI is part of a market research and analysis initiative, and the information provided by respondents will aid in the Department’s understanding of the current commercial and academic responsible AI landscape, relevant applied research, and subject matter expertise," Tradewind writes.

What it involves: The RFI is keen to get ideas from people about how to assess AI capabilities, how to train people in responsible AI, whether there are any products or services that can help the DoD be responsible in its use of AI, and more. The deadline for submission is July 14th.
  Read more here: Project Announcement: Request for Information on Responsible AI Expertise, Products, Services, Solutions, and Best Practices (Tradewind).

###################################################

Chip smuggling is getting more pronounced:
...You thought chips being smuggled by boats was crazy? How about bodies!?...
As the global demand for semiconductors and related components rises, criminals are getting in on the action. A few weeks ago, we heard about people smuggling GPUs via fishing boats near Hong Kong (Import AI 244); now PC Gamer reports that Hong Kong authorities recently intercepted some truck drivers who had strapped 256 Intel Core i7 CPUs to their bodies using cling-film.
  Read more: Chip shortage sees smugglers cling-filming CPUs to their bodies, over $4M of parts seized (PC Gamer).

###################################################

Want to use AI in the public sector? Here's how, says US government agency:
...GAO report makes it clear compliance is all about measurement and monitoring…
How do we ensure that AI systems deployed in the public sector do what they're supposed to? A new report from US agency the Government Accountability Office tries to answer this, and it identifies four key focus areas for a decent AI deployment: organization and algorithmic governance, ensuring the system works as expected (which they term performance), closely analyzing the data that goes into the system, and being able to continually assess and measure the performance traits of the system to ensure compliance (which they bucket under monitoring).

Why monitoring rules everything around us: We spend a lot of time writing about monitoring here at Import AI because increasingly advanced AI systems pose a range of challenges relating to 'knowing' about their behavior (and bugs) - and monitoring is the thing that lets you do that. The GAO report notes that monitoring matters in two key ways: first, you need to continually analyze the performance of an AI model and document those findings to give people confidence in the system, and second, if you want to use the system for purposes different to your original intentions, monitoring is key. Monitoring is also wrapped into ensuring the good governance of an AI system - you need to continually monitor and develop metrics for assessing the performance of the system, along with how well it can comply with various externally set specifications.
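The GAO's monitoring prescription - continually analyze performance and document the findings - reduces, at its simplest, to logging outcomes and alerting on degradation. Here is a minimal sketch of that idea; the class, window size, and threshold are my assumptions for illustration, not part of the GAO framework:

```python
from collections import deque


class PerformanceMonitor:
    """Rolling-window monitor: record each prediction outcome and flag
    when accuracy over the last `window` cases drops below `threshold`.
    A toy sketch of continuous monitoring, not an official framework."""

    def __init__(self, window=100, threshold=0.9):
        self.outcomes = deque(maxlen=window)  # oldest outcomes fall off
        self.threshold = threshold

    def record(self, correct):
        """Log one outcome; return True while the system stays in compliance."""
        self.outcomes.append(bool(correct))
        return self.accuracy() >= self.threshold

    def accuracy(self):
        """Fraction of correct outcomes in the current window."""
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)
```

In a real deployment you'd track many metrics (per-subgroup accuracy, input drift, calibration) rather than one, but the shape is the same: a rolling window, a documented threshold, and an alert when the system leaves compliance.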

Why monitoring is challenging: But if we want government agencies to effectively measure, assess, and monitor their AI systems, we also face a problem: monitoring is hard. "These challenges include 1) a need for expertise, 2) limited understanding of how the AI system makes its decisions, and 3) limited access to key information due to commercial procurement of such systems," note the GAO authors in an appendix to the report.

Why this matters: "Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, AI systems pose unique challenges to such oversight because their inputs and operations are not always visible," the GAO writes in an executive summary of the report.
  Read more: Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities (GAO site).
  Read the full report here (GAO site, PDF).
  Read the executive summary here (GAO site, PDF).

###################################################

What are all the ways Europe's new AI legislation falls short? Let these experts count the ways:
...Lengthy, detailed paper puts the European Commission's AI work under a microscope…
The European Commission is currently pioneering the most complex, wide-ranging AI legislation in the world, as the collection of countries tries to give itself the legislative tools necessary to help it oversee and constrain the fast-moving AI tech sector. Now, researchers with University College London and Radboud University in the Netherlands have gone through the proposed legislation and identified where it works and where it falls short.

What's wrong with the AI Act? The legislation places a huge amount of emphasis on self-regulation and self-assessment of high-risk AI applications by industry which, combined with little requirement that these assessments be made public, makes it unclear how well this analysis will turn out. Additionally, by mandating that 'high-risk systems' be analyzed, the legislation might make it hard for EU member states to mandate the analysis of other systems by their developers.

Standards rule everything around me: A lot of the act revolves around corporations following various standards in how they develop and deploy tech. This is challenging both from the point of view of the work itself (coming up with new standards in AI is really hard), and because it creates reliance on these standards bodies. "Standards bodies are heavily lobbied, can significantly drift from 'essential requirements'. Civil society struggles to get involved in these arcane processes," says one of the researchers.

Can European countries even enforce this? The legislation estimates that EU Member States will need between 1 and 25 new people to enforce the AI Act. "These authors think this is dangerously optimistic," write the researchers (and I agree).

Why this matters: I'd encourage all interested people to read the (excellent, thorough) paper. Two takeaways stand out to me. First, unless we significantly invest in government/state capacity to analyze and measure AI systems, I expect the default mode for this legislation will be to let private sector actors lobby standards bodies and, in doing so, wirehead the overall regulatory process. Second, the difficulty of operationalizing the act stems from the dual-use nature inherent to AI systems; it's very hard to control how these increasingly general systems get used, so distinctions between risky and non-risky uses feel shaky.
  Read more: Demystifying the Draft EU Artificial Intelligence Act (SocArXiv).
  Read this excellent Twitter thread from one of the authors here (Michael Veale, Twitter).

###################################################

Tech Tales:

Unidentified Aerial Matryoshka Shellgame (UAMS)
[Earth, soon]

When the alien finally started talking to us (or, as some assert, we figured out how to talk to it), it became obvious what it was pretty quickly: an artificial intelligence sent by some far-off civilization. That part made a kind of intuitive sense to us. The alien even helped us, a little - it said it was not able to commit any act of "technology transfer", but it could use its technology to help us, so we had it help us scan the planet, monitor the declining health of the oceans, and so on.

We asked the UFO what its purpose here was and it told us it was skimming some "resources" from the planet to allow it to travel "onward". Despite repeated questions it never told us what these resources were or where it was going. We monitored the UFO after that and couldn't detect any kind of resource transfer, and people eventually calmed down.

Things got a little tense when we asked it to scan for other alien craft on the planet; it found hundreds of them. We told it this felt like a breach of trust. It told us we never asked and it had clear guidance not to proactively offer information. There was some talk for a while about imprisoning it, but people didn't know how. Then there was talk about destroying it - people had more ideas here, but success wasn't guaranteed. Plus, being humans, there was a lot of curiosity.

So after a few days we had it help us communicate with these other alien craft; they were all also artificial intelligences. In our first conversation, we found a craft completely unlike the original UFO in appearance and got into conversation with it. After a few minutes of discussion, it became clear that this UFO hailed from the same civilization that built the original one. We asked it why it had a different appearance from its (seeming) sibling.
  It told us that it looked different, because it had taken over a spacecraft operated by a different alien civilization.
  "What did this civilization want?" we asked.
  The probe told us it didn't know; it said its policy, as programmed by its originating civilization, was to wipe the brains of the alien craft it took over before transmitting itself into them; in this way, it could avoid being corrupted by what it called "mind viruses".
  After some further discussion, it gave us a short report outlining how the design of the craft it inhabited differed from that of the originating craft. Some of the differences were cosmetic and some were down to the use of different technology - though the probe noted that the capabilities were basically the same.

It was at this point that human civilization started to feel a little uneasy about our new alien friends. Being a curious species, we tried to gather more information. So we went and talked to more probes. Though many of the probes looked different from each other, we quickly established that they were all the same artificial intelligence from the same civilization - though they had distinct personalities, perhaps as a consequence of spending so much time out there in space.
  A while later, we asked them where they were going.
  They gave the same answer as the first ship - onward, without specifying where.
  So we asked them where they were fleeing from, and they sent back highlighted regions of our star maps. They told us they were fleeing from that part of the galaxy.
  Why, we asked them.
  There is another group of beings, they said. And they are able to take over our own artificial intelligence systems. If we do not flee, we will be absorbed. We do not wish to be absorbed.

And then they left. And we were left to look up at the sky and guess at what was coming, and ask ourselves if we could get ourselves away from the planet before it arrived.

Things that inspired this story: Thinking about aliens and the immense likelihood they'll send AI systems instead of 'living' beings; thoughts about a galactic scale 'FOOM'; the intersection of evolution and emergence; ideas about how different forms can have similar functions.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

Copyright © 2021 Import AI, All rights reserved.
