Good morning. Alrighty, today's the day.

Nvidia's highly anticipated earnings report arrives in a challenging economic environment; stocks had another rough day Tuesday, with the S&P 500 falling for its fourth straight day. Nvidia fell more than 2%, and Tesla tumbled around 9%, all against a landscape of tariff threats, weakening consumer confidence and highly inflated valuations.

We'll see if Nvidia makes things — as my eye doctor says — better, or worse.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today's newsletter:

⚕️ AI for Good: Enhanced protein understanding
🏛️ Chegg challenges Google over AI-driven, 'unfair' competition
🚨 Report: The misguided race to regulation
🎙️ Podcast: IBM's Chief Scientist on the state of AI

In the latest episode of The Deep View: Conversations, I sat down with Dr. Ruchir Puri, the Chief Scientist of IBM Research. We talk about everything from AGI and X-Risk to his perspective on the idea of 'useful,' rather than 'general,' intelligence.

Check it out below:

IBM Chief Scientist talks 'useful intelligence'
AI for Good: Enhanced protein understanding

Source: MIT

We've talked before about how predictive models for protein folding alone aren't enough; researchers need to take things several steps further, understanding where proteins move within our cells and how they interact with other molecules. The more complete that picture of protein behavior becomes, the greater the impact will be.

What happened: Researchers at MIT recently made a significant stride in fleshing out that picture with the launch of a biological AI model called FragFold.

The model, which is built on the back of Google DeepMind's AlphaFold, is able to predict how protein fragments bind to and inhibit full-length proteins in E. coli. It has the potential, according to the researchers, to act as a "generalizable approach" to finding molecules that are likely to inhibit targeted protein functions. Importantly, the model can apply this level of behavioral prediction to "proteins without known functions, without known interactions, without even known structures," meaning researchers can probe molecular behavior, including that of disordered proteins that have historically been difficult to study, without first pinning down the structural details.

Why it matters: Unsurprisingly, this could have significant implications for biological research, specifically for the study of disease and the design of therapeutics. Gene-Wei Li, an associate professor of biology at MIT, said that "we can imagine delivering functionalized fragments that can modify native proteins, change their subcellular localization, and even reprogram them to create new tools for studying cell biology and treating diseases."
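For the technically curious, here's the shape of that fragment-screening idea in code. This is a toy sketch, not FragFold's actual implementation: every function name here is invented for illustration, and the hydrophobicity score is a crude stand-in for the AlphaFold-style co-folding and interface scoring a real pipeline would use.

```python
# Toy sketch of fragment-based inhibitor screening (illustrative only,
# not FragFold's actual code). A real pipeline would replace
# predict_binding_score with structure-prediction-based interface scoring.

def generate_fragments(sequence: str, length: int = 8, step: int = 1):
    """Slide a window along a protein sequence to enumerate candidate fragments."""
    for start in range(0, len(sequence) - length + 1, step):
        yield start, sequence[start:start + length]

def predict_binding_score(fragment: str) -> float:
    """Placeholder score: the fraction of hydrophobic residues in the fragment.
    A real system would co-fold the fragment with the full-length target
    and score the predicted binding interface instead."""
    hydrophobic = set("AVLIMFWY")
    return sum(aa in hydrophobic for aa in fragment) / len(fragment)

def rank_candidates(target_sequence: str, top_k: int = 3):
    """Score every fragment tiled from the target and return the top-k."""
    scored = [
        (predict_binding_score(frag), start, frag)
        for start, frag in generate_fragments(target_sequence)
    ]
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    # Made-up sequence, for demonstration only.
    target = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
    for score, start, frag in rank_candidates(target):
        print(f"fragment {frag} at residue {start}: score {score:.2f}")
```

The interesting work in the real thing all lives in that scoring step, which is exactly where AlphaFold comes in.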
30+ Senior Experts — For the Price of One Hire

Awesomic is a subscription-based app that connects companies with vetted designers, developers, and other specialists—ready to start immediately.

With the new Super Plan, you get a full team of senior experts for just $3,000 per month. Start with one specialist and seamlessly switch between different experts as your project evolves—from UI/UX designers to developers, from copywriters to marketers.

✅ An entire team (design, development, marketing, and more) for the price of one hire
✅ At least 60% savings compared to agencies, in-house hiring, and freelancers
✅ Vetted senior talent, broad expertise, no hiring or management hassle

Why pay more when you can get all the talent you need for a flat monthly fee?

Hire Smarter with Super Plan
Chegg challenges Google over AI-driven, 'unfair' competition

Source: Unsplash

Online education platform Chegg, which has been struggling mightily since the launch of generative AI tools in 2023, said Monday that it had filed a lawsuit against Google, alleging that Google's launch of AI Overviews — which has "transformed Google from a 'search engine' into an 'answer engine'" — is at least partially responsible for crippling the platform's business.

The details: The lawsuit comes alongside an affirmation from CEO Nathan Schultz that Chegg is undertaking a strategic review, one that has the company exploring a number of options, including being acquired or going private. The company reported a net loss of $837 million for 2024 amid a decrease in subscribers and a massive reduction in non-subscriber web traffic.

Chegg, which was trading Tuesday at a 52-week low of around $1, has a market cap of around $118 million. At the end of 2022, the company's market cap was $3.7 billion.

Google "forces companies like Chegg to supply our proprietary content in order to be included in Google's search function," according to Schultz, then "unfairly exercises its monopoly power within search and other anti-competitive conduct to muscle out companies like Chegg."

"We believe this isn't just about Chegg — it's about students losing access to quality, step-by-step learning in favor of low-quality, unverified AI summaries. It's about the digital publishing industry. It's about the future of internet search," Schultz said.

At the same time, Chegg has been rushing to integrate generative AI and machine learning into its own processes, an approach that has reduced its content creation costs by 70%.

"Every day, Google sends billions of clicks to sites across the web, and AI Overviews send traffic to a greater diversity of sites," Google spokesperson José Castañeda said in an emailed statement. "We will defend against these meritless claims."
Supercharge Your Revenue Teams with 200+ Expert AI Prompts

Momentum is an enterprise listening platform for revenue teams that uses advanced AI agents to transform your customer conversations—emails, calls, support tickets—into actionable insights for revenue success.

Now, exclusively through Momentum Partners like Deepview, we're excited to launch our Prompt Library—a curated collection of 200+ high-impact prompts engineered by our lead prompt expert to supercharge Sales, Customer Success, Marketing, RevOps, and Leadership teams.

Get started now to unlock these powerful tools and take your go-to-market strategy to the next level for free.
An AI protest album: More than 1,000 musicians, including Kate Bush, banded together to release a silent protest album entitled 'Is This What We Want?' The tracklist spells out the message: "The British Government Must Not Legalise Music Theft To Benefit AI Companies." It is specifically protesting the British government's plan to upend existing copyright law in an attempt to boost AI efforts. British newspapers participated in the protest at the same time, blanketing their covers with the phrase 'Make it Fair.'

Open-source video generation: Alibaba on Tuesday announced the launch and open-sourcing of its latest video-generation model, Wan 2.1. The company will release code and weights for the model, but not training data, so it's not truly open-source.
Fired federal workers share the crucial jobs no longer being done (Science News).
Can Google's new research assistant AI give scientists 'superpowers'? (New Scientist).
Perplexity AI launching $50 million venture fund to back early-stage startups (CNBC).
Apple isn't jumping on the anti-DEI bandwagon (BI).
Signs of economic doubt emerge in the US (Semafor).
Report: The misguided race to regulation

Source: Unsplash
From the moment Sam Altman uttered the words "if this technology goes wrong, it can go quite wrong," the world has found itself witness to several increasingly tense races — one, between companies racing to develop slightly more powerful models on the road to a hypothesized general intelligence; two, between countries competing to develop and control powerful generative AI models; and three, between legislators urgently attempting to regulate the tech, both to prevent things from going wrong and to ensure that things go (economically, at least) right.

While the first two races are heating up, the third seems to have changed recently. Governments are increasingly signaling a greater interest in 'winning the race' than in coming up with rules for it, with even the European Union promising to boost investment and roll back red tape. In the U.S., of course, federal regulation has remained pretty much nonexistent, despite the popularity of such laws.

But even in the midst of this shift in perspective, U.S. states have been proposing and implementing AI-related legislation for some time now, while their federal counterparts keep trying to figure out the right approach. The European Union's AI Act, meanwhile, has begun entering into force, even as the U.K. weighs the right regulatory approach (one that keeps businesses happy) and China cements its own AI-related legislation.

But the idea of oversight in AI is fundamentally flawed, owing in part to the frustratingly vague, intensely misleading term itself. Artificial intelligence, according to Dr. Milton Mueller, a professor in the School of Public Policy at Georgia Tech, doesn't exist. At least, not in the way it is presented. The term instead refers to a number of related but distinct technologies, mainly machine learning.

"The evidence suggests that the boom in legislative and regulatory initiatives rests on deeply flawed understandings of what they are trying to govern," Mueller wrote in a recent paper, adding that "what we now lump under the unitary label 'artificial intelligence' is not a single technology, but a highly varied set of software applications enabled and supported by a globally ubiquitous digital ecosystem."

It is not a new technology, according to Mueller; rather, it represents "at best an inflection point in computing capabilities that have been developing since the 1960s." The applications that fit under that umbrella term are so diverse that they "render the concept of 'AI governance' practically meaningless."

He proposes a different regulatory approach, one that focuses on addressing specific problems caused by specific machine learning applications, rather than governing the wider bundle of tech all at once.

"Governing AI, in other words, means governing everything in information and communications technology," Mueller said.

The details: Mueller lays out an examination of our digital ecosystem, one with four main components: computing devices to generate and process information; networks to transmit that information; data, which acts as a manifestation of that information; and the software that ties it all together.

What we today call "AI," according to Mueller, is a product of this ecosystem; it requires enormous amounts of computing power and data in order to run. As such, Mueller argues that "AI" isn't an emerging technology at all, since today's models are deeply rooted in decades-old precursors (theorized in the 1940s and implemented in the 1960s). What changed, enabling the more powerful large language models (LLMs) we have today, was access to more data and more powerful chips.

And when it comes to policy, according to Mueller, the problems we face today are problems, tied to the rise of the internet and its digital ecosystem, that we've been facing since the 1990s. The most obvious of these involve the recommendation algorithms that make social media platforms and search engines tick, algorithms that have faced plenty of criticism for their goal of maximizing user engagement, even (and sometimes especially) when that engagement is harmful to the user.
Aside from claims that AI poses an existential risk to humanity (a claim that scientists don't agree on, and one that isn't supported by any sort of rigorous scientific evidence), Mueller argues that today's AI doesn't pose any truly novel policy questions.

For Europe's AI Act to actually accomplish what it claims to accomplish, for instance, Mueller said that the EU would have to extend control over both networks and data "in order to maintain regulatory authority over AI applications," something that is not happening, and something that, he argues, shouldn't happen.

I am reminded of what Dr. Seena Rejal, CCO of the AI startup NetMind, told me recently: that policymakers can't keep up with AI because they don't "really understand it at its core."

Some of the legislation that's come out of the states is really promising — laws that protect artists from generative impersonation, or that protect victims of sexual harassment, are good laws. And the good laws share a theme: they target highly specific applications. To Mueller's point, it is impossible to regulate the umbrella term here, because that necessarily involves regulating an entire ecosystem whose breadth is largely being ignored.

Which image is real?
💭 A poll before you go

Thanks for reading today's edition of The Deep View! We'll see you in the next one.

Here's your view on Claude 3.7:

Most of you haven't tried it. 18% of you think it's awesome, though 22% said it's not good. 9% think it's just okay.

Do you think the UK's protest will sway the government?

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.