Good morning. I don’t know about you, but it feels like Friday …

Anyway, Nvidia unsurprisingly smashed through earnings expectations, yet again. But its stock fell slightly regardless. We break it down below.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🌎 AI for Good: Autonomous ocean mapping
💻 Microsoft, OpenAI ask NYT for evidence of harm
📊 Nvidia stock falls despite record-setting revenue numbers
🏛️ US commission recommends ‘Manhattan Project’ for AI
AI for Good: Autonomous ocean mapping

Source: XOcean
Ocean data provider XOcean combines uncrewed surface vessels (USVs) with artificial intelligence to autonomously collect maritime datasets.

The details: These datasets can then be used by researchers, surveyors, governments and companies to monitor oceanic environments.

In 2021, XOcean partnered with Marine AI to bring that company’s Guardian Vision algorithms to the edge. The Guardian Vision system improves the USVs’ “situational awareness,” processing data from onboard cameras and classifying potential hazards.

The result is a lower cost of operation, which means more data can be gathered more quickly.
Hiring Studio by Metaview: Collaborate with AI to create the perfect job description, in seconds

Craft job descriptions that perfectly align with your role, culture, and style—without staring at a blank page. Metaview's free AI writes the content for you; you simply guide it in the right direction.

Try it now, at no cost!
Microsoft, OpenAI ask NYT for evidence of harm

Source: Unsplash
OpenAI and Microsoft earlier this week demanded that the New York Times provide clear documentation of any harm its business has sustained as a result of their generative AI technology. Specifically, they’re after information regarding subscription losses, advertising revenue and web traffic data.

The details: The letters requesting this information claim that it’s vital to the companies’ “fair use” argument — the fourth factor of the fair use analysis examines whether the alleged infringement harms the market for the original work. This was first reported by Bloomberg Law.

But that fourth factor calls for an examination of harm to the “potential market,” not necessarily the current one. And as Bloomberg reported, the Times has agreed to produce documents demonstrating a reduction in traffic related to generative AI. Further, the burden of proving fair use rests on the defendants, not the Times.

This marks only the latest escalation in the months-long litigation between the Times and OpenAI/Microsoft, with the Times alleging sweeping copyright infringement.

An increasing number of legal, academic and copyright experts have said that the training of generative AI models does not appear to be protected by the fair use doctrine.
Save 1 hour every day with FyxerAI

FyxerAI organizes your inbox, drafts extraordinary emails in your tone of voice, and writes better-than-human meeting notes.

Start your 14-day free trial — no credit card required, and set up in just 30 seconds. Designed for teams using Gmail, Outlook, Slack, Teams, Google Meet, or Zoom.
Elon Musk’s Neuralink has received approval to begin recruiting for human trials of its Prime brain-computer-interface study in Canada.

OpenAI shipped an update to GPT-4o, saying that the model can now produce more relevant, natural-sounding text output, or “creative writing,” to use OpenAI’s words. As AI researcher Chomba Bupe noted, these claims are unverified and unfounded.
Inside the booming ‘AI Pimping’ industry (404 Media).
Explicit deepfake scandal shuts down Pennsylvania school (Ars Technica).
Google AI chatbot responds with threatening message: ‘human … please die’ (CBS News).
Anyone can buy data tracking US soldiers and spies (Wired).
Anthropic CEO calls for mandatory AI safety testing (Bloomberg).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Nvidia stock falls despite record-setting revenue numbers

Source: Created with AI by The Deep View
Shares of Nvidia — which have risen nearly 200% this year alone — fell around 2% in after-hours trading following the company’s release of its third-quarter earnings. Nvidia’s report caps off the round of decidedly mixed Q3 Big Tech earnings that got started last month.

Here’s how they did:

Nvidia reported record quarterly revenue of $35.1 billion, above the $33.16 billion expected by analysts. That represents a 94% year-over-year increase — significant, but a sign that Nvidia’s rate of growth is slowing: revenue in the previous three quarters rose 122%, 262% and 265%, respectively. The firm reported earnings of 81 cents per share, above the 75 cents expected by analysts.

Nvidia’s data center business alone brought in $30.8 billion in revenue, a 112% year-over-year increase.

The company’s CFO said that "both Hopper and Blackwell systems have certain supply constraints, and the demand for Blackwell is expected to exceed supply for several quarters in fiscal 2026."

“I think what matters is what’s demand gonna look like in 2026. The reason why that’s so important is this is a boom or bust business,” Deepwater’s Gene Munster said. “I believe that for 2026, this is going to be a 30%-plus grower. I do believe we will see a day where Nvidia rapidly hits the wall; I don’t think it’s in the foreseeable future.”
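For readers who want to sanity-check the growth math: given a quarter’s revenue and its year-over-year growth rate, the implied year-ago figure falls out by simple division. A minimal sketch using the figures quoted above (results are approximate because the published numbers are rounded):

```python
def implied_prior(current: float, yoy_growth: float) -> float:
    """Year-ago value implied by a current value and a YoY growth rate.

    A 94% increase means current = prior * 1.94, so prior = current / 1.94.
    """
    return current / (1 + yoy_growth)

total_q3 = 35.1        # $B, record quarterly revenue, up 94% YoY
data_center_q3 = 30.8  # $B, data center revenue, up 112% YoY

print(round(implied_prior(total_q3, 0.94), 1))        # ~18.1 ($B a year earlier)
print(round(implied_prior(data_center_q3, 1.12), 1))  # ~14.5 ($B a year earlier)
```

In other words, Nvidia’s total quarterly revenue has roughly doubled from about $18 billion a year ago — a staggering jump in absolute terms even as the percentage growth rate cools from the 200%-plus quarters that preceded it.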
US commission recommends ‘Manhattan Project’ for AI

Source: Created with AI by The Deep View
The Manhattan Project “was built on fear: fear that the enemy had the bomb, or would have it before we could develop it,” Robert Furman, assistant to General Leslie Groves and the Chief of Foreign Intelligence for the Manhattan Project, said in a recorded interview more than 60 years after the U.S. deployed nuclear weapons in the Second World War.

The reality was that, though Germany was studying nuclear weapons, it was making little progress — by 1942, Germany’s nuclear program had lost much of its funding, according to the U.S. Department of Energy, as Germany redirected funds to more immediate needs.

That same year, the Manhattan Project — the American effort to build a nuclear bomb before Germany could — began. By 1944, the evidence was clear that Germany had made no progress in this endeavor. In 1945, the U.S. dropped nuclear weapons on Hiroshima and Nagasaki.

The U.S.-China Economic and Security Review Commission (USCC), hearkening back to those days of lethal arms races, is now calling for the U.S. to “establish and fund a Manhattan Project-like program,” this time dedicated to “racing to and acquiring artificial general intelligence (AGI) capability.”

What’s going on: The USCC’s annual report — a roughly 800-page document — dedicates a whole chapter to the technology dynamics between the two countries. A major focus of that chapter is artificial intelligence, which, according to the document, “will” add enormous value to global economies, reshape industries and has the potential to “transform the military balance” between the U.S. and China through enhanced data collection and “battlefield decision-making.”

Aside from calling for an unfettered race to AGI, the report calls for additional funding for U.S.-based AI companies, as well as legislation that would bar China-based investors from gaining board seats in certain technological arenas.
At the same time, one of the USCC commissioners — Jacob Helberg — told Reuters that “China is racing toward AGI.”
Similar to the Manhattan Project of the 1940s, there’s no clear evidence listed in the report that China is, indeed, racing toward AGI. Reuters didn’t challenge Helberg — who serves as a senior advisor to Palantir, an AI company that has made its name providing technology and AI solutions to the U.S. government and its allies — on the claim, nor did it ask him to elaborate.

The only evidence listed in the report is a reference to a 2017 document, the Next Generation Artificial Intelligence Development Plan, translated into English by the New America think tank. But that document makes no reference to AGI — it just outlines a goal to lead the world in AI tech by 2030, which, as journalist Garrison Lovely pointed out, isn’t necessarily the same thing.

“Third, by 2030, China’s AI theories, technologies, and applications should achieve world-leading levels, making China the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power.”

— Next Generation AI Development Plan
That same 2017 plan also calls for the development and introduction of comprehensive AI regulations — regulations that are already coming together; China expert Matt Sheehan wrote in 2023 that “Beijing is leading the way in AI regulation.”

These regulations are motivated, according to Sheehan, by three main goals and one auxiliary goal: to shape and control the technology so it serves the government’s agenda; to address the social and ethical problems posed by the technology; to help China become a global leader in AI; and to lead the world in AI governance and regulation.

Chinese President Xi Jinping also recently met with U.S. President Joe Biden; he called for greater “dialogue and cooperation” between the two countries, and likened artificial intelligence to climate change, saying both issues must be faced together. The Biden Administration has already established export controls and investment restrictions against China to blunt the flow of U.S. tech to the country.
To sum up: We have no clear evidence that this AGI-specific race is even being run. The USCC did not return a request for comment on this point.

The AGI of it all: The Commission defines AGI as “systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task.”

Aside from the fact that this definition is internally inconsistent — “as good as” human capability and “surpassing the sharpest human minds at every task” are not the same bar — this rather confident stab at defining AGI misses quite a lot; for instance, that there is no unified, scientifically accepted definition of AGI, because it is an entirely hypothetical technology. Researchers, as I’ve mentioned often, don’t know when or if it is even technically achievable, and they sure as hell don’t know how they would quantify it, or what a supposed AGI might look like.
Despite optimistic comments from executives, and despite clear efforts from Big Tech firms to build AGI, it remains entirely unclear how current AI technology — mainly, the large language models behind generative AI — could lead to a system that would legitimately mimic human intelligence.

Some scientists have said that AGI will never be possible, owing to a combination of energy constraints and an ongoing lack of understanding of human cognition and of the careful design of the organic structures that produce conscious intelligence.

The report makes no mention of the numerous ethical issues — alignment, control, resource intensity, transparency in utilization, civil rights and work protections, etc. — tied to the hypothetical success of this Manhattan Project, and so makes no mention of how they would be addressed. It also doesn’t explain how this hypothetical AGI would be used.

The intention of nuclear weaponry was clear, if grisly. Here, the intention is vague at best.

The frightening truth behind all of this is that we don’t know. We don’t know if AGI is possible, or if it would be a good thing to have. We don’t know if China is working to build it. If so, we don’t know if that actually constitutes a threat that would require us to get there first.

But we know where evidence, both scientific and geopolitical, remains lacking.

I would add — to echo a Twitter user — that the framing of this as a Manhattan-like project, as opposed to an Apollo-like project, feels purposeful and significant. And therein lies my concern: this recommendation feels rooted in an effort toward blatant, unrestrained, opaque weaponization.

It feels significant that the immediate beneficiaries of this would be the very AI companies that have been trying so hard to wring a bit of return on their enormous investments.
The murky realities behind the hypothetical legitimacy of AGI hardly justify a concerted, government-funded effort to attempt to achieve it.
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on Nvidia post-earnings:

A quarter of you were expecting a huge earnings beat and a subsequent stock spike. A quarter also anticipated an earnings beat and a stock fall; 16% expected an earnings miss and a stock slide, and 18% expected CEO Jensen Huang to say something on the call that would set off a stock boost.

If you use GPT-4o, have you noticed the update? Do you like it?