Good morning.

Yesterday, Apple held its big iPhone event, unveiling the iPhone 16 and the generative AI features that are slated to come to iOS next month. Analyst Gene Munster called it the “most important event in a decade.”

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

- AI for Good: Predicting tsunamis
- US proposes mandatory reporting for AI developers
- 100+ AI employees publish letter in support of SB 1047
- South Korea seeks to create a ‘responsible AI regime’ for the military
AI for Good: Predicting tsunamis

Source: Created with AI by The Deep View
Existing tsunami warning systems are far from perfect. One existing tool, the DART buoy network, can measure tsunamis accurately, but doesn’t give much warning time. Seismometers can detect earthquakes, but run into difficulty when it comes to the tsunamis those earthquakes cause.

UNESCO’s Intergovernmental Oceanographic Commission has been working to develop alternatives that leverage AI.

The details: The new approach combines two models (an AI model and an analytical model) with sound signals recorded through underwater microphones.

Why it matters: This system, layered on top of existing approaches, produces more accurate predictions with greater warning time, which can enable timely evacuations and any necessary preparations.
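To make that two-stage idea concrete, here is a minimal sketch of how an AI classifier and an analytical model might be fused on hydrophone data. Every function, formula and threshold below is a hypothetical stand-in; UNESCO’s actual models have not been published in this form.

```python
import numpy as np

def quake_risk_score(signal: np.ndarray) -> float:
    """Hypothetical AI stage: score how much a hydroacoustic signature
    resembles a tsunami-generating earthquake. A real system would run
    a trained neural network here; this placeholder just measures the
    fraction of energy in the lowest tenth of the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    return float(spectrum[: len(spectrum) // 10].sum() / spectrum.sum())

def estimated_wave_height(magnitude: float, source_depth_km: float) -> float:
    """Hypothetical analytical stage: a toy closed-form estimate
    standing in for a real hydrodynamic model."""
    return max(0.0, 0.5 * (magnitude - 6.5)) * float(np.exp(-source_depth_km / 100.0))

def should_alert(signal, magnitude, source_depth_km,
                 risk_threshold=0.5, height_threshold_m=0.3) -> bool:
    """Fuse both stages: alert only when the AI score and the
    analytical wave-height estimate both exceed their thresholds."""
    return bool(quake_risk_score(signal) > risk_threshold
                and estimated_wave_height(magnitude, source_depth_km) > height_threshold_m)
```

The appeal of the hybrid design is that each stage covers the other’s weakness: the acoustic classifier reacts within minutes, while the analytical model keeps the fast signal physically grounded.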
Better GenAI Results With a Knowledge Graph

Combine vector search, knowledge graphs, and RAG into GraphRAG. Make your GenAI results more:

- Accurate
- Explainable
- Relevant

Read this blog for a walkthrough of how GraphRAG works, with examples.

Get an overview of building a knowledge graph for better GenAI results.
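If you’re curious how those three pieces fit together, here is a minimal sketch of the GraphRAG pattern. The embeddings, graph and prompt format are illustrative stand-ins, not the sponsor’s actual API.

```python
import numpy as np
import networkx as nx

def entry_nodes(query_emb: np.ndarray, node_embs: dict, top_k: int = 3) -> list:
    """Vector-search stage: rank graph nodes by cosine similarity
    to the query embedding and keep the best matches."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(node_embs, key=lambda n: cosine(query_emb, node_embs[n]), reverse=True)
    return ranked[:top_k]

def expand_context(graph: nx.Graph, seeds: list, hops: int = 1) -> list[str]:
    """Knowledge-graph stage: pull in each seed node's neighborhood so
    the model sees relationships, not just isolated text chunks."""
    keep, frontier = set(seeds), set(seeds)
    for _ in range(hops):
        frontier = {n for f in frontier for n in graph.neighbors(f)}
        keep |= frontier
    return [graph.nodes[n].get("text", str(n)) for n in keep]

def build_prompt(question: str, chunks: list[str]) -> str:
    """RAG stage: ground the LLM's answer in the retrieved facts."""
    facts = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"
```

The graph hop is what distinguishes GraphRAG from plain vector RAG: related facts arrive together even when their text never co-occurs in a single chunk.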
US proposes mandatory reporting for AI developers

Source: Unsplash
The U.S. Commerce Department’s Bureau of Industry and Security (BIS) on Monday unveiled a notice of proposed rulemaking focused on AI developers and cloud providers.

The details: The rule would require developers of the “most powerful” AI systems to provide detailed reports to the U.S. government. These would include details about development efforts, cybersecurity measures and the outcomes of red-teaming efforts, which are meant to test the dangerous ways models can be used or misused.

“As AI is progressing rapidly, it holds both tremendous promise and risk,” Secretary of Commerce Gina M. Raimondo said in a statement. “This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”
The context: AI firms have largely chafed against the idea of regulation, even while constantly reiterating its importance. California’s SB 1047 includes similar language regarding reporting requirements, but this marks the first federal attempt at such a requirement.
You'll need this to launch your business

You’ve brainstormed long and hard, and now you’re ready to launch your business. Just one problem: you need a website.

Fortunately, there’s Porkbun (USA Today’s Number 1 domain registrar for 2023 and 2024).

Porkbun offers the best pricing on lots of its domains, plus you get FREE features like WHOIS privacy, SSL certificates, URL forwarding, email forwarding and web & email hosting trials with each domain. Porkbun avoids upsells like the plague: no pushing made-up products you’ll never use, and no constant recommendations of other domains to buy.

And if you need help, you can reach a real human expert (not an automated system) 365 days a year.

Get $1 off your next domain name if you sign up for Porkbun right now!
- New OpenAI investors don’t get equity; they get a share of the profits, once OpenAI starts making them (The Information).
- Amazon is allowing Audible narrators to clone themselves with AI (The Verge).
- Google’s blockbuster antitrust trial begins (WSJ).
- Here’s everything Apple just announced (CNBC).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
100+ AI employees publish letter in support of SB 1047

Source: Created with AI by The Deep View
Even after being watered down, California’s headline-grabbing first attempt to regulate the AI industry, SB 1047, remains enormously controversial, with many opponents (including Meta, OpenAI, Google, a16z and Y Combinator) saying that it would stifle innovation.

What happened: A statement signed Monday by more than 100 current and former employees of leading AI companies, including OpenAI, DeepMind, Meta, xAI and Anthropic, calls for California Gov. Gavin Newsom to sign SB 1047 into law. The bill recently passed both houses of California’s legislature and has moved to the governor’s desk, where he has until Sept. 30 to sign it.

“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure,” the statement reads. “It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”

“Despite the inherent uncertainty in regulating advanced technology, we believe SB 1047 represents a meaningful step forward,” the signatories added. “We recommend that you sign SB 1047 into law.”

Why it matters: The statement exposes a rift over regulation between some AI engineers and their executive leadership. Half of the current employees who signed did so anonymously.
South Korea seeks to create a ‘responsible AI regime’ for the military

Source: Unsplash
AI has gone to war.

Ukrainian soldiers are experimenting with, and deploying, advanced automated weaponry in a number of ways, including AI-enabled attack drones and machine guns equipped with computer vision-based autonomous targeting and firing systems. The country needs “maximum automation,” Ukraine’s minister of digital transformation, Mykhailo Fedorov, told the New York Times. “These technologies are fundamental to our victory.”

Similar tech is on display in Gaza, where the Israeli military has said that it is using AI to select bombing targets in real time. NPR reported that one of the IDF’s algorithms, the Gospel, suggests targets roughly 50 times faster than teams of human analysts used to be able to manage.

With the ethics of such deployments fraught, even as integration rapidly ramps up, South Korea is aiming to establish an international blueprint for the responsible use of AI in the military.

What happened: South Korea on Monday convened a two-day international summit on responsible AI in the military. Representatives from more than 90 countries, including the U.S. and the U.K., were in attendance. This is the second international meeting on the subject; the first summit, held last year in Amsterdam, resulted in a non-legally-binding call to action issued by the attendees.

“AI will drastically impact the future of military affairs, and this impact may well be devastating if unchecked,” South Korean Foreign Minister Cho Tae-yul said in his opening remarks. “The evolution of AI and its integration in the military domain demands checks-and-balances in the form of rules, norms and principles on responsible use.”

The goal of the summit, a senior official told Reuters, is to establish a blueprint that would lay out minimum guardrails for the use of AI in the military. Even if it is agreed to, such a document likely would not be legally enforceable.

Here’s where the U.S. stands: The U.S. Department of Defense has been studying and deploying AI in the military for years; last year, the Pentagon unveiled its latest AI strategy. “As we focused on integrating AI into our operations responsibly and at speed, our main reason for doing so has been straightforward: because it improves our decision advantage,” Deputy Defense Secretary Kathleen Hicks said in a statement.

Last year, following the first summit, the U.S. released a political declaration on the use of AI in the military, saying that it should include proper oversight to ensure any application remains within the bounds of international and humanitarian law.

The ethics of AI in the military: As I mentioned above, the ethical implications of this application are fraught. We know AI models to be both biased and unreliable; autonomous target selection without human oversight could easily result in the deaths of innocents misidentified by an algorithm. The combination of a lack of explainability in model decision-making and the high pressure of combat situations, experts told NPR, could lead human analysts to trust AI-derived targets more than they should, while also complicating the chain of accountability when lethal systems fail.
The United Nations and the International Committee of the Red Cross called on world leaders last year to establish legally binding restrictions on the use of autonomous weapons. The main concern is fully autonomous decision-making: “Human control must be retained in life and death decisions,” they said. The UN later said that even if an algorithm could reliably determine the legalities of international warfare, it can “never determine what is ethical.”

This is an instance where model reliability matters less than the moral principle: decision-making must stay human, especially when it concerns acts of warfare. We must remain in control. We must not sacrifice diligence and human morality for speed.

I worry that military leaders have come to a different conclusion.
💭 A poll before you go

Here’s your view on moving to Mars:

After my own heart, some 40% of you said you’d rather the money be spent on fixing climate change than on jetting off to another planet; 23% would be down to go and 20% would rather stay on Planet Earth.

“Something else”:

“I don't think Mars will be livable in my lifetime, and if Elon Musk says it is, I'd run the other way. But far in the future when it's safe and regulated, sure I would.”

What do you think about the use of AI in the military?

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.
|
|
|