Good morning. HarperCollins has made a deal with an unknown AI company to license the work of its authors for training purposes, 404 Media reported.

Unlike other such arrangements, this one is limited in scope, with the publisher allowing authors to opt in to the program if they so choose. The opt-in requirement, at least, is perhaps something of a bright spot in a landscape that otherwise operates largely without consent.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🚨 AI for Good: Detecting lithium battery fires
🏛️ Elon Musk sues to block California deepfake law
🤖 Interview: Microsoft VP on agentic push and Ignite
AI for Good: Detecting lithium battery fires

Source: Jian Chen/Xi'an University of Science and Technology
Our gadgets today, from phones to laptops and smart bikes, are often powered by lithium-ion batteries. Damage to these batteries can result in an almost instantaneous, ultra-hot, explosive fire; in 2023 alone, the New York City Fire Department responded to more than 200 residential fires, some of them deadly, sparked by lithium-ion batteries in e-bikes.

What happened: Researchers at the National Institute of Standards and Technology (NIST) have developed an algorithm that uses sound to detect when a lithium battery is about to explode.

Right before one of these batteries catches fire, it emits a sharp click-hiss noise, the result of pressure building up inside the battery. The researchers recorded audio from 38 exploding batteries, turning that sound into 1,000 audio samples, which they then used to train a machine learning algorithm. Paired with a microphone, the completed algorithm detected an impending explosion in 94% of cases.
Why it matters: While the team plans to continue its research into microphones, algorithms and explosive batteries, the technology, according to NIST, could eventually be built into smart smoke alarms in homes and buildings, giving people the critical few minutes of early warning they need to evacuate before a fire spreads.
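NIST hasn't published the exact pipeline here, but the general shape of this kind of system, turning short audio clips into frequency features and classifying them, can be sketched in a few lines. Everything below is illustrative: the function names, the nearest-centroid classifier and the synthetic "click-hiss" stand-in signals are all invented for the sketch, not taken from the research.

```python
import numpy as np

def spectral_features(clip, n_bands=8):
    """Summarize a mono audio clip as mean log-energy in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

def train_centroids(clips, labels):
    """Toy 'model': the average feature vector of each class."""
    feats = np.array([spectral_features(c) for c in clips])
    labels = np.array(labels)
    return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels)}

def predict(model, clip):
    """Label a clip by its nearest class centroid in feature space."""
    f = spectral_features(clip)
    return min(model, key=lambda lab: np.linalg.norm(f - model[lab]))

# Synthetic stand-ins for the real recordings: a broadband noise burst
# (crudely imitating a click-hiss) vs. a low-frequency electrical hum.
rng = np.random.default_rng(0)
sr, dur = 16000, 0.5
t = np.arange(int(sr * dur)) / sr

def hiss():
    return rng.normal(0.0, 1.0, t.size)          # flat, broadband spectrum

def hum():
    return np.sin(2 * np.pi * 120 * t) + rng.normal(0.0, 0.05, t.size)

clips = [hiss() for _ in range(20)] + [hum() for _ in range(20)]
labels = ["warning"] * 20 + ["normal"] * 20
model = train_centroids(clips, labels)

print(predict(model, hiss()))  # expected: warning
print(predict(model, hum()))   # expected: normal
```

A production detector would replace the centroid step with a trained classifier and run continuously on streaming microphone frames, but the feature-then-classify structure is the same.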
Missed Out on Shark Tank’s Big Wins? Don’t Miss RYSE

The hit show Shark Tank has introduced the world to some of today’s most successful brands:

Bombas – raised $200M in follow-on investment
Scrub Daddy – over $300M in sales
Ring – valued at $7M on Shark Tank, later acquired by Amazon for $1.2B, after all the Sharks passed!
Now, Dragons’ Den is proving to be another launchpad for promising brands, and RYSE Smart Shades secured not just one but two offers from the Dragons. For a limited time, you have the chance to invest alongside the Dragons in a brand that may become the next household name.

With their breakthrough smart shade technology and distribution already in 127 Best Buy stores, RYSE is poised to become the next big thing in tech.

This is your chance to invest early in a smart home company with big momentum.

Invest Now and Earn Bonus Shares!
Elon Musk sues to block California deepfake law

Source: California Gov. Gavin Newsom
In September, California Gov. Gavin Newsom signed into law 17 bills that promised AI protections and regulations (while also vetoing SB 1047). One of these bills, AB 2655, which you can read here, would require large social media platforms to block, remove and/or clearly label deceptive election-related content in the 120 days before and after an election in California.

The law, which will allow election officials to sue platforms over deepfakes, will take effect next year.

What happened: X, Elon Musk’s social media platform, is suing to block the law. The suit, which you can read here, alleges that the law would violate the First Amendment and Section 230, among other things.

On the First Amendment front: the whole point of that amendment is to protect speech, not necessarily truth, according to the Harvard Law Review. However, not all speech is protected; calls to violence and illegal activity, for instance, are not. And, as the Review points out, private media platforms can take legally protected measures to block or reduce the dissemination of propaganda, especially bot-driven propaganda. Further, private media companies are legally allowed to censor their content, and in fact do so all the time; it’s when the government gets involved in apparent censorship efforts that things get complicated. So this claim is anything but clear-cut.
Section 230, which you can read here, holds that online platforms cannot be held liable for third-party content posted to their platforms, even if that content is illegal. Section 230, a highly controversial law whose repeal has been oft-discussed by Congress, also incentivizes platforms to moderate and regulate harmful content, according to the Department of Justice.

So that claim, too, seems anything but clear-cut.

Some context: Where other GenAI platforms recently kicked into overdrive to tamp down election-related deepfakes, with OpenAI saying it blocked more than a quarter-million attempts to generate such content during the election, Musk’s X and xAI have taken a different tack. Grok, the GenAI system available to X premium users, has no discernible guardrails; Musk himself, a prominent ally of President-elect Donald Trump, has posted deepfake videos of Kamala Harris and Joe Biden to his 200 million followers.

Newsom took notice of this, saying at the time: “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.”
Secure Your Passwords with Confidence – Keeper Has You Covered

Forgot your password? That’ll never happen again with Keeper’s easy-to-use password manager, which saves you time, increases your security and streamlines your online experience.

Trusted by millions of individuals and thousands of organizations, Keeper Security is transforming how people secure their passwords, passkeys and confidential information against growing online threats.

Don't get hacked. Get Keeper for 50% off today!
OpenAI is paying Dotdash Meredith a minimum of $16 million per year to license its content for training, according to AdWeek. Aside from News Corp, which OpenAI is paying around $250 million over a five-year term, OpenAI has kept quiet regarding the details of its ever-lengthening list of content-licensing deals.

OpenAI CEO Sam Altman will serve as a co-chair on San Francisco mayor-elect Daniel Lurie’s transition team, according to TechCrunch. He will be one of 10 people providing Lurie’s team with guidance on “ways the city can innovate.”
Buzzfeed’s AI ads take a dark turn (404 Media).
European tech CEOs urge ‘Europe-first’ mentality to counter U.S. dominance after Trump victory (CNBC).
Welcome to Elontown, USA: an unlikely Texas home base for Musk’s business empire (Fortune).
There’s no longer any doubt that Hollywood writing is powering AI (The Atlantic).
Mistral brings web search, other capabilities to le Chat chatbot (Mistral).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Interview: Microsoft VP on agentic push and Ignite

Source: Microsoft
Microsoft’s annual developer conference, Microsoft Ignite, got started in Chicago today. The four-day event, which has more than 200,000 registered attendees (14,000 of whom are in person), will include more than 800 sessions, including keynote speeches, demos and plenty of announcements.

Unsurprisingly, artificial intelligence is at the forefront of this year’s Microsoft Ignite. I caught up with Marco Casalaina, a VP of Product on Microsoft’s Azure AI Platform team, before the conference kicked off to dive into the nitty-gritty details.

But first, an overview of the announcements: Microsoft’s core focus here is on AI agents, a term with a vague, flexible definition that loosely describes AI models that complete actions at the behest of their human users, rather than just spitting out text or image files. (As professors Arvind Narayanan and Ethan Mollick recently pointed out, most of the so-called ‘agents’ we see today are a simple rebrand of traditional automation. But it sounds good.)

In this vein, Microsoft announced a series of out-of-the-box “agents,” in addition to a number of ways for enterprise customers to custom-build agents within Copilot Studio. The idea is that enterprise customers can build a tool that performs a specific function “without having to prompt the agent each time.” Part of this effort involves the launch of Azure AI Foundry, a portal that brings together generative AI tooling, monitoring and safety tech.
Part of its enterprise push involves an expansion of Microsoft’s existing partnership with enterprise AI firm C3.AI; all of C3.AI’s software is now available on Microsoft’s Commercial Cloud portal.

Microsoft also revealed the results of a study it commissioned, which reportedly found that the top leaders using GenAI are seeing a 10x return on their investment; the average company is seeing a 3.7x return.

Properly selling its generative AI technology has become a paramount focus for the company, which recently said that its generative AI business will soon reach a $10 billion annual run rate. It remains unclear how exactly these projects are helping Microsoft's bottom line; at the moment, all we know is that AI is fueling massive and growing capital expenditures, while the details of the resulting revenue are shrouded in general ‘cloud growth,’ and so remain murky.

And Copilot, Microsoft’s flagship GenAI product, has frustrated users and insiders alike, with some users recently pausing their costly subscriptions to the generative service due to privacy, security and reliability concerns, according to The Information and Business Insider.

But Casalaina told me that “Microsoft has seen immense adoption of AI.” He said that the most recent quarter saw 60% quarter-over-quarter growth in Copilot sign-ups, which he ascribed to gradually growing awareness of generative AI and Copilot, adding: “part of it is also the ever-increasing capabilities … certainly that's going to continue.”

Casalaina, whose team works on the next generation of AI at Microsoft, said several times that “you would not believe” some of the things he’s working on. When asked to elaborate, he said that today’s models are capable but limited. “I guess my point is: think beyond just what the model can do and think toward what the framework above it can do.”
Adoption and ROI numbers remain abstract; as Microsoft itself mentioned, only about 30% of enterprise AI projects have gone into full production and implementation. And when we’re talking about email, scheduling and writing automators, the thing that has remained consistently unclear is how, exactly, businesses are using this technology.

In response to this question, Casalaina told me that “we are beyond the experimentation stage, now.”

He added that he recently used Outlook Copilot to find and summarize an email that had gotten lost in his inbox, saying: “So, I mean, in that sense, that's how people are actually using it today.”

This push toward action-taking generative AI also poses additional trust and safety risks; a model restricted to output in a chat interface can cause far less accidental damage than one that has permission to, say, purchase airline tickets.

To that end, Microsoft announced new evaluation and monitoring tools.

“We do need to evolve our processes, our policies and our products to ensure that these things are safe and trustworthy,” Casalaina said. The key thing, according to Casalaina, is that these “agents” can only function with the “powers that you give them …. I think one thing that most people don't realize is that these AI agents, they can only do what you give them the ability to do.”

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on the valuations of AI companies:

35% said yes; 22% said no and 25% would like these companies to go public so they can invest.

What do you think of CA's deepfake election law?