Good morning. Global markets took a harsh tumble Monday, with the S&P 500, Nasdaq and Dow Jones Industrial Average all landing in the red.
The “Magnificent 7” tech companies — notably Nvidia and Apple — led the selloff, erasing hundreds of billions of dollars in value.
The reasons behind Monday’s market reaction are numerous and complex; the way AI factors into it is simple: for Big Tech (which makes up a heavily weighted portion of these indices), pressure has been mounting for returns on massive AI spending. And last week’s earnings run told investors they might have to wait a little longer.
ALSO: Google lost the U.S. government’s antitrust case against it on Monday, with a federal judge ruling that the company is a “monopolist.” It’s unclear at this point what that means for the tech giant.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
AI for Good: Matching cancer drugs to patients
Source: National Cancer Institute
While there are cancer treatments, such as chemotherapy and radiation, that attack all rapidly dividing cells in the body, there are also precision treatments that target specific mutations and block the signals that tell cancer cells to divide.
But patients need to be biologically matched with these drugs for them to be effective, an approach that currently requires the bulk sequencing of tumor DNA and RNA.
Researchers at the National Cancer Institute in April successfully tested an AI tool that uses “data from individual cells inside tumors to predict whether a person’s cancer will respond to a specific drug.”
The machine learning model was built using bulk RNA sequencing data, then fine-tuned with single-cell RNA sequencing data, a newer approach that provides higher-resolution data. According to the NCI, “the AI models accurately predicted how individual cells would respond to both single drugs and combinations of drugs.”
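For readers curious about the mechanics, here is a minimal, purely illustrative sketch of that pretrain-then-fine-tune pattern: a small network is first trained on synthetic stand-ins for bulk RNA-sequencing profiles, then fine-tuned on single-cell profiles to predict a drug-response score. The architecture, the data and the training helper below are assumptions for illustration, not the institute’s actual model or code.

# Conceptual sketch only: pretrain on synthetic "bulk" expression profiles,
# then fine-tune on "single-cell" profiles to predict a drug-response score.
# Sizes, data and the training loop are illustrative, not the NCI's model.
import torch
import torch.nn as nn

n_genes = 2000  # number of gene-expression features (illustrative)

model = nn.Sequential(
    nn.Linear(n_genes, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),  # predicted drug-sensitivity score
)
loss_fn = nn.MSELoss()

def train(model, X, y, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()

# 1) Pretrain on bulk tumor RNA-seq profiles (plentiful, lower resolution).
bulk_X, bulk_y = torch.randn(500, n_genes), torch.randn(500)
train(model, bulk_X, bulk_y, lr=1e-3, epochs=20)

# 2) Fine-tune on single-cell RNA-seq profiles (scarcer, higher resolution),
#    with a smaller learning rate so the pretrained weights shift only gently.
sc_X, sc_y = torch.randn(50, n_genes), torch.randn(50)
train(model, sc_X, sc_y, lr=1e-4, epochs=10)

# The fine-tuned model can then score individual cells against a given drug.
print(model(sc_X[:3]).squeeze(-1))

The design choice worth noting is the smaller learning rate in the second stage, which is a common way to let the scarcer, higher-resolution single-cell data refine rather than overwrite what the model learned from bulk data.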
Meta AI launched SAM 2 this week and showed that it's 8.4x faster for data labeling tasks compared to the original SAM model. Just 24 hours after the release, Encord became the first platform to integrate SAM 2 for automated data labeling.
But does SAM 2 work on your domain-specific data?
Join their upcoming webinar to learn how to fine-tune SAM on YOUR data, to accelerate your data labeling processes and slash the time to production-ready AI.
Register Now
Take 2: Elon Musk tries to sue OpenAI again
Source: Tesla, Senate Judiciary Committee
After withdrawing his lawsuit against Sam Altman and OpenAI in June — which alleged that OpenAI violated its founding agreement by not being very … open — Elon Musk has sued the startup again. The new complaint, which alleges many of the same things as the first one, takes a slightly different tack.
The details: The new complaint, which was filed in federal court on Monday, alleges that Altman — in addition to Greg Brockman, another OpenAI co-founder — “intentionally courted and deceived Musk, preying on Musk’s humanitarian concern about the existential danger posed by artificial intelligence.”
It goes on to allege that Altman and Brockman “assiduously manipulated” Musk into co-founding the company by promising to be safer and more open than “profit-driven tech giants” like Google. The suit alleges that this was the “hook for Altman’s long con,” adding that the “perfidy and deceit are of Shakespearean proportions.”
The suit ties in OpenAI’s single largest investor, saying that, “in partnership with Microsoft, Altman established an opaque web of for-profit OpenAI affiliates, engaged in rampant self-dealing, seized OpenAI, Inc.’s Board and systematically drained the non-profit of its valuable technology and personnel.”
Microsoft has invested billions of dollars in the startup, which was recently valued at more than $80 billion. Musk is seeking a judicial determination that OpenAI’s license to Microsoft is “null and void.”
Musk’s lawyer told the New York Times that “this is a much more forceful lawsuit.”
OpenAI didn’t respond to a request for comment.
U.S. stocks fell sharply on Monday as part of a global market rout (CNBC).
Secretaries of state urge Musk to fix AI chatbot spreading false election info (The Washington Post).
Leaked Documents Show Nvidia Scraping ‘A Human Lifetime’ of Videos Per Day to Train AI (404 Media).
The AI safety debate is all wrong (Daron Acemoglu).
Once high-flying software firms confront sluggish growth (The Information).
Actors’ union strikes video game companies over AI protections
Source: SAG-AFTRA
Nearly a year after the Hollywood writers’ and actors’ unions successfully struck production companies, SAG-AFTRA’s members have once again joined the picket lines. This time, though, the actors are striking against the video game industry.
The main sticking point? Artificial intelligence.
The details: The union noted that, although it was able to make agreements with the studios on many points throughout their lengthy negotiation process, “the employers refuse to plainly affirm, in clear and enforceable language, that they will protect all performers” from AI.
The specific sticking point, according to The Verge, has to do with whom these protections extend to. Employers eventually agreed to extend protections to motion performers, but only if “the performer is identifiable in the output of the AI digital replica.” The union rejected that proposal.
“We’re not going to consent to a contract that allows companies to abuse AI to the detriment of our members. Enough is enough,” union President Fran Drescher said in a statement.
Some context: This coincides with massive layoffs in the video game industry (10,000+ in 2023 and 11,000+ so far in 2024). And as Brian Merchant reported recently for Wired, generative AI has already seeped into the industry, with major studios employing genAI systems to compensate for their suddenly smaller staffs.
The industry is valued at nearly $200 billion.
Imagine your calendar filling with qualified sales meetings, on autopilot. That's Ava's job. She's an AI BDR who automates your entire outbound demand generation.
Ava operates within the Artisan platform, which consolidates every tool you need for outbound:
300M+ High-Quality B2B Prospects
Automated Lead Enrichment With 10+ Data Sources Included
Full Email Deliverability Management
Personalization Waterfall using LinkedIn, Twitter, Web Scraping & More
Book a demo and supercharge your sales team today.
Interview: When incorporating generative AI, start with a problem
Source: Unsplash
Despite all of its hype, generative AI has become increasingly regarded as a solution in search of a problem.
As a recent Goldman Sachs report put it, with the industry expected to spend more than a trillion dollars on AI technology, the question — “What $1 trillion problem will AI solve?” — becomes much more important.
The answer to that question — at best — remains to be seen. That report argued that genAI is fundamentally dissimilar to the internet: it represents a high-cost solution still searching for a genuine problem, whereas the internet represented a low-cost solution to a real problem.
As researcher and software engineer Molly White recently pointed out: Large language models (LLMs) “do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might and many of the things they are well suited to do may not be altogether that beneficial.”
That’s not to say that generative AI is useless, or that, if the bubble bursts, AI will just disappear. As White herself pointed out, genAI is helpful as a proofreader and coding assistant. It’s decent at summarizing meetings and is good at finding patterns in large swaths of data.
Diane Gutiw, CGI’s global AI research lead, told me that the key to realizing a return on genAI starts by flipping the equation. Start with a problem, she said — specifically, a small problem — and then look for a tool that might serve that problem.
Sometimes, the answer might involve an AI tool of some kind. Sometimes it won’t. But “if you start with the problem, you're always going to get more value out of the tool in the end.”
She said several times that, despite recent advancements in genAI, it’s “just a tool,” one that can be “really powerful” if pointed at actual problems. A few applications of the tech that Gutiw thinks are really promising involve genAI in contact centers, diagnostic imaging and internal productivity platforms.
But the difference between genAI and other tools involves ethics and reliability; transparency, data privacy and algorithmic bias are all considerations that must be kept top-of-mind for users of genAI. And issues with reliability make it necessary to keep a “human on the loop” to audit and monitor a system’s output.
I asked her if the combination of rampant AI hype, ethical considerations and shaky reliability has made genAI “tools” a tough sell.
She said it hasn’t. And it all has to do with framing real problems against actual solutions.
“We don't take the angle of ‘here's a thing that’ll do everything,’ we take an angle of ‘okay, what do you need something to do? What is the thing you're trying to solve?’ And then how can we do that in a way that it's working, again, within those guardrails and those boundaries.”
Which image is real?
A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Your view on how deepfakes impact you:
Around half of you said that experiencing a deepfake firsthand changes your impression of it.
The other half were undecided.
Absolutely:
How do you actually use genAI?
|
|
|