Good morning. Last year, when Sam Altman was fired and then re-hired, the OpenAI saga began. It has continued relatively unabated, with board members and safety researchers leaving the company over the past few months. This week, a few more key members of OpenAI’s leadership staff departed as well. We break it all down for you below.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

AI for Good: Enhanced cancer research
YouTuber files class action lawsuit against OpenAI
Figure unveils ‘world’s most advanced’ AI hardware
OpenAI goes through another massive leadership shake-up
AI for Good: Enhanced cancer research

Source: Unsplash
Earlier this year, researchers at the Mayo Clinic introduced a new class of artificial intelligence systems they call “hypothesis-driven AI,” which they say will improve cancer research and treatment strategies.

What happened: Conventional AI systems analyze statistical probabilities based on their training data; the researchers argue that such systems aren’t flexible enough to aid in knowledge discovery.

The hypothesis-driven AI they have begun to explore instead incorporates established scientific knowledge, along with specific hypotheses, into the design of the algorithm itself. The result, they say, is more interpretable output and closer collaboration between researchers and algorithms. The new approach is meant to offer a “targeted and informed approach” to several shortcomings of current AI systems, from dataset collection and reliability to the interpretability problems posed by their “black box” nature.
“This new class of AI opens a new avenue for better understanding the interactions between cancer and the immune system and holds great promise not only to test medical hypotheses but also predict and explain how patients will respond to immunotherapies,” Dr. Daniel Billadeau, a co-author of the study, said in a statement.
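The study doesn’t spell out its math at newsletter depth, but the core idea, building a domain hypothesis directly into the objective a model optimizes, can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the toy features, the monotonicity hypothesis and the penalty form are our own assumptions, not the Mayo Clinic team’s actual method.

```python
# Minimal sketch of "hypothesis-driven" fitting: a standard data-fit term
# plus a penalty encoding an (invented, illustrative) scientific hypothesis:
# the predicted response should not decrease as feature 0 increases,
# i.e., that feature's coefficient should be non-negative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))             # toy features (column 0 = hypothesized driver)
y = 2.0 * X[:, 0] + rng.normal(size=200)  # toy response

def loss(w, lam=10.0):
    data_fit = np.mean((X @ w - y) ** 2)  # ordinary least-squares term
    violation = max(0.0, -w[0]) ** 2      # hypothesis penalty: w[0] >= 0
    return data_fit + lam * violation

# A crude random search keeps the sketch dependency-free.
best = min((rng.normal(size=3) for _ in range(5000)), key=loss)
print("fitted weights:", np.round(best, 2))  # w[0] stays non-negative
```

In a real system the hypothesis would be a biological relationship, say between immune-cell activity and immunotherapy response, and the model far richer, but the structure is the same: the data term is weighed against how badly a prediction violates what scientists already believe, which is also what makes the result easier to interpret.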
Are you struggling to solve problems for your business? 🤯 Or want to create new strategies to grow it? 🚀

Join this 4-Hour AI Business Growth & Strategy Masterclass to learn the secrets that top professionals at McKinsey & Company use to solve business problems. Here’s the gist of what you’ll learn:

🚀 Use 10+ AI tools curated exclusively for starting a consulting business
🚀 Make over $20,000 in passive income using AI-powered strategies
🚀 Get data insights that increase your business revenue by 5 times
🚀 Reduce your 40-hour work week to a 20-hour work week

👉 Claim your free slot now (free for only the first 100) 🎁
YouTuber files class action lawsuit against OpenAI

Source: Unsplash
YouTuber David Millette has filed a class-action lawsuit against OpenAI, alleging that the ChatGPT maker has been “covertly transcribing YouTube videos to create training datasets that they then use to train their AI products.”

Millette is the first to bring a YouTube-related lawsuit into a field already full of copyright-infringement suits from media companies, authors, artists and musicians. Though he is the first to file, other major YouTube creators, including Marques Brownlee, have voiced their frustration as more details about YouTube scraping have come out.
The complaint, which you can read here, is quite similar to the other lawsuits in this arena; it alleges that OpenAI’s models are built on content scraped without “consent, without credit and without compensation.”

The context: It has long been assumed that artificial intelligence companies have been training their generative systems on YouTube videos, in addition to, well, the rest of the internet. Lately, we’ve been getting pieces of confirmation: 404 Media reported last month that Runway trained its models on thousands of scraped YouTube videos without permission, and reported this week that Nvidia did the same thing. Other investigations have found that the transcripts of hundreds of thousands of YouTube videos have been scraped to train text models.
Cybersecurity company Abnormal Security has raised $250 million in a Series D funding round, valuing the firm at more than $5 billion. Chinese genAI startup Moonshot raised $300 million at a $3.3 billion valuation.
Samsung’s upcoming solid-state EV batteries promise 9-minute charging and 600-mile range (Tech Spot).
Elon Musk slammed by British government after comments on UK riots (CNBC).
Elon Musk’s Twitter sues advertisers for boycotting the social platform (Reuters).
Worldcoin may not be legal in Colombia, but that’s not stopping it (Rest of World).
Now that Google is a monopolist, what’s next? (The Verge).
Figure unveils ‘world’s most advanced’ AI hardware

Source: Figure
OpenAI-backed robotics company Figure on Tuesday unveiled its second-generation robot, the Figure 02, which the company’s founder Brett Adcock called the “world’s most advanced AI hardware.”

The details: The Figure 02 is a full redesign of the original, according to Adcock, featuring six cameras, an upgraded battery, integrated wiring, three times the computing power of the original, an onboard vision-language model and speech-to-speech capabilities.

It’s not clear how much the Figure 02 costs to build, when it might become available to a wider customer base or how much Figure plans to charge for it. Figure’s “Master Plan” involves addressing global labor shortages by filling dangerous and undesirable jobs with robots. It also sees an opportunity for in-home robots, specifically to care for the elderly; Figure’s timeline here remains unclear.

Machine learning researcher Filip Piekniewski commented on the unveiling: “It has everything, except a brain. This has been the theme in this genre for the past 25+ years. More impressive robots lacking a brain. This one is no different. A body without a brain is well, just a body.”

Figure @Figure_robot (11:59 AM, Aug 6, 2024): “Meet Figure 02 - the world's most advanced AI hardware”
Our friends at Innovating with AI just welcomed 170 new students into The AI Consultancy Project, their new program that trains you to build a business as an AI consultant. Here are some highlights:

- The tools and frameworks to find clients and deliver top-notch services
- A 6-month plan to build a 6-figure AI consulting business
- A chance for AI Tool Report readers to get early access to the next enrollment cycle

Click here to get early access to The AI Consultancy Project
OpenAI goes through another massive leadership shake-up

Source: OpenAI
This week has been a significant one for the AI sector. A number of Big Tech giants, led by Nvidia and Apple, suffered relatively severe stock market corrections on Monday; the same day, Google lost its antitrust trial, a ruling with implications for its ability to win the genAI race; and OpenAI said we won’t see GPT-5 at its next developer conference.

And amid all of this, which plays out against a backdrop of mounting skepticism of the industry and growing pressure from Wall Street for companies to produce returns on their massive AI expenditures, OpenAI’s leadership has once again thinned out.

The Information reported that Greg Brockman, OpenAI’s president and one of its co-founders, is taking an extended leave of absence. John Schulman, another co-founder, who ran post-training for ChatGPT, is leaving OpenAI to join Anthropic. And product leader Peter Deng has also left.

In a post on Twitter, Brockman said that he’s taking a sabbatical through the rest of the year; the mission, he said, is “far from complete.” Also on Twitter, Schulman said that he’s leaving OpenAI to focus on technical alignment research, adding: “To be clear, I'm not leaving due to lack of support for alignment research at OpenAI.”
In the context of OpenAI’s long-running Game of Thrones, leadership at the company has been rocky since Sam Altman’s removal and subsequent reinstatement. Since then, OpenAI’s board has been largely replaced, with Helen Toner and Tasha McCauley leaving shortly after Altman’s failed ouster. More recently, OpenAI has bled safety researchers; most notably, Jan Leike and another co-founder, Ilya Sutskever, departed the company.

AI researcher and cognitive scientist Gary Marcus said that these departures “would be inconceivable if AGI or even just major profitability were close.”

“Your regular reminder that chaos at OpenAI (itself a regular occurrence since last November) is a strong argument for strong AI regulation,” Marcus said. “If they can’t govern themselves, how can we expect them to keep AI safe?”

As it stands today, the generative AI industry is fueled by trust, faith and confidence. Faith in the impact AI might one day have, and confidence in the ability of OpenAI and its peers to achieve it, is what fueled the massive hype cycle that has been running since the launch of ChatGPT in November 2022.

The generative AI “bubble” of highly inflated stocks, enormous levels of VC funding and investor excitement is built on the back of that hype cycle, and so it rests on faith. That faith is beginning to fade. This latest leadership shakeup might further erode the trust and confidence investors have placed in OpenAI to actually pull this off; it signals, as Marcus said, that the company is not in as strong a position as its $80 billion valuation might imply.
For this bubble to do anything but burst, the industry needs to see some evidence that investing in AI is the right call; massive profits would do it, as would the achievement of artificial general intelligence. If OpenAI were really on the brink of that kind of critical, possibly game-changing evidence, I find it highly unlikely that key leaders would leave.

Without faith, and without trust or confidence, the hype will fade and the bubble will most certainly burst. Right now, it’s not clear what a burst bubble looks like here, and it definitely won’t spell the end of AI, or even of generative AI. But it will bring a heavy dose of reality to an industry that has been flying high on cunningly packaged fictions for months.

Which image is real?
A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s how you actually use generative AI: around a third of you don’t use it; 25% use it for writing help, 19% as a coding assistant and 20% as a research assistant.

“I don’t use it”:

“So many tools and yet on everyday work, mundane tasks are easier to do by oneself. If it would be more proactive, propose and then work alongside the day, that would be awesome.”
“Writing help”:

Would you pay for an in-home robot, either as a general assistant or to care for elderly/sick relatives?