| | Good morning. Today is Election Day in the U.S. | Don’t trust anything you see regarding the election or local polling locations on social media. Verify everything through trusted sources (local municipalities are a good one here). | Probably a good day to get into a bottle of wine and settle in for a long night. Or, better yet, ignore it and check the scores tomorrow (that’s what I do when the Knicks are playing the Celtics). | — Ian Krietzberg, Editor-in-Chief, The Deep View | In today’s newsletter: | 🌌 AI for Good: Meteor monitoring 💻 Perplexity launches AI election hub ⚡️ UAE to deploy AI in the energy sector 🏛️ Interview: AIPI director on what comes after SB 1047 veto
| AI for Good: Meteor monitoring | | Source: NASA |
| We’ve talked before about the automated ways in which NASA keeps itself apprised of potential future asteroid impacts here on Earth; what we haven’t talked about is meteors: whether they’re being tracked, and why they should be. | Let’s start with some definitions: asteroids are space rocks, smaller than planets, that orbit the sun. Meteoroids are chunks of rock that break off of larger asteroids. And meteors are the bright flares these meteoroids produce as they burn up in Earth’s atmosphere (fragments that survive to the ground are called meteorites). | We get millions of those each year. | The trouble with tracking: Since meteors are small and are only visible when they flare up as a shooting star, the only way to track them is through camera networks. This is where the AI comes in. | In 2023, a team of scientists developed a deep learning model to classify meteor detections, a laborious process that was previously done manually. The model proved highly accurate, achieving a precision of 98%.
Why track meteors: Automated, accurate tracking means scientists can follow more meteors, more quickly. That is a boon to the study of meteors, enabling faster recovery of fallen meteorites for examination and giving scientists a better understanding of our solar system. |
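That 98% figure is a precision score: of all the detections the model flagged as meteors, the fraction that really were meteors, or TP / (TP + FP). Here is a minimal sketch of the metric in plain Python; the labels and counts below are hypothetical illustrations, not data from the study:

```python
def precision(y_true, y_pred, positive="meteor"):
    """Precision: of everything the model labeled 'meteor',
    what fraction actually was a meteor? TP / (TP + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical batch: the model calls 50 detections "meteor";
# 49 are real meteors, 1 is camera noise.
y_true = ["meteor"] * 49 + ["noise"]
y_pred = ["meteor"] * 50
print(precision(y_true, y_pred))  # 0.98
```

Note that precision says nothing about missed meteors (false negatives); that is what recall measures, which is why classifier studies typically report both.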
| | Simplify complex tasks with AI Workers from MindStudio | | From customer service to financial reporting, MindStudio’s platform offers robust tools to help you deploy tailored AI solutions for any department. | It doesn’t require coding knowledge, and it’s used by teams at Fortune 500 companies and governments alike. | MindStudio users have deployed over 100,000 AI workers. Will you be next? | Get started for free |
| UAE to deploy AI in the energy sector | | Source: ADNOC |
| ADNOC — the state-owned oil company of the United Arab Emirates — on Monday announced that it will apply AI tools and agents to its processes in collaboration with Microsoft, G42 and AIQ. | The details: The system, called EnergyAI, combines Large Language Models (LLMs) with so-called “agentic” AI, which, in this case, refers to highly autonomous systems trained to perform specific functions within ADNOC. | Trained on 80 years of historical ADNOC data, the company expects the system to dramatically accelerate the construction of geological models, reduce planning processes from years to weeks and integrate cost-and-emission-saving efficiencies across its ecosystem. CEO Sultan Al Jaber said that the tech will “future-proof ADNOC, reinforce our position at the forefront of AI deployment and ensure we continue to provide secure and sustainable energy to the world.”
| In its statement, ADNOC made no mention of how it plans to mitigate the impact of hallucinations on its processes here. | The UAE, in an effort to become a global leader in AI, has already spent billions of dollars on the tech, and it plans to spend more. Earlier this year, the country established MGX, a tech investment firm focused solely on AI, with a target of surpassing $100 billion in assets under management. |
| | Not every project can be a P0. Snap your roadmap into focus with AI-powered prioritization. | | Consistently align your people to the most strategic priorities, discover product opportunities from deep customer insights, and gain total visibility on execution with Airtable ProductCentral, the complete operating system for Product teams, built on Airtable's powerful platform. | Learn More |
| Elon Musk and America PAC running ‘scam’ and ‘illegal’ voter lottery, DA says (CNBC). Corporate spending on OpenAI threatens Salesforce (The Information). In a frank internal meeting, The New York Times wrestled with its political role (Semafor). Meta AI is ready for war (The Verge). Meta’s plan for nuclear-powered AI data centre thwarted by rare bees (FT).
| If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here. |
| Perplexity launches AI election hub | | Source: Perplexity |
| AI search startup Perplexity, just days before the U.S. presidential election, unveiled an election hub designed to help voters get access to necessary information and track election results. The hub is the result of partnerships with AP News and Democracy Works, which also provides election-related information to Google. | The hub allows users to search for information by geographical location, serving up ballot details — complete with AI-generated summaries — based on those searches. Perplexity says the information comes from a “curated set of the most trustworthy and informative sources,” and includes footnotes to those sources throughout its summaries.
| The issue: While I didn’t come across any blatant inaccuracies while poking around the hub, there is a bit of fine print that reads: “For all election-related questions, we recommend verifying voting information in the cited sources.” | The problem here is complacency with a technology that, according to computational linguistics expert Dr. Emily Bender, shouldn’t be used for search at all. Bender wrote recently that since the LLMs that power AI search remain nothing more than “statistical models of the distribution of word forms in text,” correct output is a byproduct of chance. | Even with hallucination mitigation practices in place, these systems are not reliable 100% of the time, something The Verge discovered with Perplexity’s election hub, prompting changes from the company. “A system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time,” Bender said. “People will be more likely to trust the output, and likely less able to fact-check the 5%.” | Even if accuracy could be guaranteed, Bender said that such an approach reduces information literacy, encouraging people to just “accept answers as given, especially when they are stated in terms that are both friendly and authoritative.” |
| Interview: AIPI director on what comes after SB 1047 veto | | Source: Gov. Gavin Newsom |
| At the end of September, California Gov. Gavin Newsom vetoed the state’s highly contentious SB 1047, a piece of legislation that had battled its way through the state legislature with the basic promise to hold tech companies accountable if their AI models cause “catastrophic” harm. | Newsom defended his decision in a note to the senate, saying that, while SB 1047 was “well-intentioned,” it didn’t “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” | He took issue with the bill’s attempt to proactively regulate a hypothetical catastrophic scenario, saying that regulation ought to “keep pace with the technology itself.” “Let me be clear — I agree with the author — we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom said. “California will not abandon its responsibility.”
| For weeks, the bill existed at the center of a maelstrom of aggression and massive lobbying efforts from tech venture capitalists, tech corporations and some AI researchers. Phrases like ‘it would stifle innovation’ became a common refrain, even as the actual requirements of the bill were massively misunderstood by the bulk of its extremely vocal critics (which included a16z, Y Combinator, Google, OpenAI and several members of Congress). | This all despite broad public approval, both specifically for SB 1047 as well as for AI legislation in general, according to repeated polling from the Artificial Intelligence Policy Institute (AIPI). | Daniel Colson, the executive director of the AIPI, called Newsom’s decision to veto “misguided, reckless and out of step with the people he’s tasked with governing.” | I caught up with Colson to break down the 1047 veto and the next steps in the battle for AI regulation. | “The industry opposition was in bad faith in a way that I think the support wasn’t,” he told me, saying that the surprising dynamic in the battle for SB 1047 was that the “fight was dirty.” | He said that the whole reaction to SB 1047 highlights the ethos of the entire tech industry; since the tech sector has historically been economically productive and super light on regulation, the people who are “most disinclined to regulation” have wound up at the center of it. Going forward, Colson said that this first battle for regulation was vital, even though the bill didn’t get a signature: “These same players are going to be at the table for the next 10 years, and this was the first time, basically, that everyone met.”
| The key at the center of all of this is the simple fact that, according to a wide swath of polling, the public supports proactive regulation. An overwhelming majority of the public supports mandatory safety measures, liability for harm and emergency shutdown capabilities; Colson said that this is driven by “general risk-aversion.” | “People generally don’t feel excited about insane, transformative technologies,” he said. | But despite the public’s perspective, AI is not yet a major political issue. And regulators are still learning how the public feels about AI, two elements that Colson expects to soon change. | “What's really going to happen, though, is as AI capabilities advance, AI political salience will massively increase. And if AI capabilities massively advance, the political salience will massively increase, and that's when everyone will know about where the public is at, because it's going to be people's top political issue,” Colson said. “I'm kind of expecting that because it seems like that's where capabilities are going.” | The problem with reactive regulation — which is how Congress tends to work — is that it enables the entrenchment of a given technology. Think of the internet, or social media; according to Colson, “once the technology gets entrenched, it becomes, in many ways, dramatically harder to pass rules on it, because it already has all these vested economic interests.” | “I think that's a lot of the reason why these companies are pushing so hard against regulating now is because now is actually the opportunity where, if some guardrails were passed, it would dramatically change the way that the technology develops,” he said. “Technology is super path dependent, especially when you have large institutions developing very centralized pieces of technology. And so, whichever version they decide to deploy is, in many ways, the one we're going to get. And they have a lot of versatility in what they can choose to deploy, and can be held to certain guardrails.”
| SB 1047 isn’t dead yet: Colson said that the reintroduction of SB 1047 next session is “beyond likely,” adding that “Californians believe that this issue is too critical to let die.” | But even if the bill still has legs, there remains a mismatch between state and federal regulatory efforts, with Congress largely stalled out while California’s regulatory approaches continue to make waves. The interesting dichotomy here, according to Colson, is that the lab leaders want clear, universal rules that apply to everyone. | The venture capitalists don’t want any rules at all. | “The labs see the technology that they're building as very powerful,” Colson said. “What's interesting is the VCs are really opposed, and that's where so much of the opposition has come … this really feels like round one of what's going to be a lasting political dynamic.” | Which image is real? | 🤔 Your thought process: | Selected Image 2 (Left): | “A great example of an LLM being able to trick non-experts.” | Selected Image 1 (Right): |
| 💭 A poll before you go | Thanks for reading today’s edition of The Deep View! | We’ll see you in the next one. | Here’s your view on Disney’s AI plans: | 32% think it’s great, 23% said AI and the arts shouldn’t mix and 18% said they’re canceling Disney+. | Something else: | “So many technical, legal, ethical and people issues to work through; we are in the nascent stages of AI development with the future uncertain and, therefore, business AI application and ROI uncertain. A slippery slope as Disney does not want to be left behind but not sure of how AI will impact its entertainment business and result in acceptable ROI, now or in the future.”
| If reintroduced, do you think SB 1047 will become law next year? | |