Good morning. Former President Donald Trump posted AI-generated images of Taylor Swift endorsing him over the weekend, once again highlighting genAI’s role in the spread of online misinformation.

And Stability AI brought in a new CTO, Hanno Basse, a veteran of the entertainment industry.

Welcome to your week.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:
AI for Good: Identifying rare immune disorders

Source: UCLA
Researchers at UCLA recently leveraged machine learning tools to help speed up the diagnosis of rare diseases and disorders. They found that, by employing such tech, patients can be diagnosed “years earlier.”

The details: The study focused on common variable immunodeficiency (CVID), a cluster of rare disorders with varying symptoms, a combination that makes diagnosis very difficult.

The researchers developed a machine learning tool called PheNet, which was trained on the electronic health records of verified CVID patients in order to recognize similar patterns in the health records of new patients. The system then ranks the likelihood that a patient has CVID; those with high scores are referred to an immunology specialist (a rough sketch of that score-and-rank pattern follows below).
Dr. Manish Butte, one of the authors of the paper, said that CVID symptoms often intersect with many other medical specialties; patients might be treated for sinus infections or pneumonia while the root cause goes overlooked.

“This fragmentation of care across multiple specialists leads to long delays in diagnosis and treatment,” Butte said. “We had to find a better way to find these patients.”
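For readers curious what that score-and-rank step looks like in practice, here is a minimal, hypothetical sketch in Python. It is not PheNet’s actual code or methodology; the feature encoding, the logistic-regression model, the synthetic data and the 0.9 referral threshold are all assumptions made purely for illustration.

```python
# Hypothetical illustration of a "score and rank" diagnostic workflow.
# This is NOT PheNet's actual implementation; data, features and the
# referral threshold below are invented for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Pretend each patient's EHR has been reduced to 40 binary phenotype flags
# (e.g., recurrent sinus infections, pneumonia, low antibody counts).
X_train = rng.integers(0, 2, size=(500, 40)).astype(float)
y_train = rng.integers(0, 2, size=500)  # 1 = verified CVID patient, 0 = control

# Train a simple classifier on the labeled records.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new batch of patients and rank them by predicted CVID likelihood.
X_new = rng.integers(0, 2, size=(100, 40)).astype(float)
scores = model.predict_proba(X_new)[:, 1]
ranking = np.argsort(scores)[::-1]  # highest-risk patients first

# Flag the highest-scoring patients for referral to an immunology specialist.
REFERRAL_THRESHOLD = 0.9  # assumed cutoff, purely illustrative
referrals = [i for i in ranking if scores[i] > REFERRAL_THRESHOLD]
print(f"{len(referrals)} of {len(X_new)} patients flagged for specialist review")
```

The real system presumably works from curated phenotype features and validated labels rather than random data; the point here is only the general pattern of scoring every patient and surfacing the highest-ranked cases for specialist review.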
Save Time & Scale Video Creation with AI

Say goodbye to video production bottlenecks. PlayPlay's AI Video Assistant removes the hassle of video editing — helping teams create professional-looking videos in seconds.

Simply describe your video needs in a sentence and watch the Assistant create your video using the perfect template, text, images, and music.

Cut down time and money, while delivering consistent video comms. Boost engagement and conversions with captivating videos. Streamline content creation workflows across your entire organization.

Trusted by 3,000+ brands like Dell, CVS Health and L'Oréal.

Create engaging videos now — start your 14-day free trial.
$16,000 robot getting ready for mass production

Source: Unitree
Chinese robotics company Unitree on Monday unveiled an update to its $16,000 humanoid robot, the G1, saying it has been “upgraded into a mass production version.”

The company did not confirm that mass production has actually begun, just that the robot is now “in line with mass production requirements.”

The details: Until December of 2023, Unitree was focused on developing four-legged, vaguely dog-shaped robots. But in May, the company unveiled its entry into humanoid robotics, a field crowded with well-funded competitors including Tesla and Figure.

Monday’s unveiling included a video demonstration of the G1 jumping, twisting, kicking, leaping and walking up and down stairs. The robot also stayed upright as a member of the development team repeatedly tried to shove it over. The robot is loaded with cameras, lidar and deep learning tech.
Unitree (@UnitreeRobotics), Aug 19, 2024: “Unitree G1 mass production version, leap into the future! Over the past few months, Unitree G1 robot has been upgraded into a mass production version, with stronger performance, ultimate appearance, and being more in line with mass production requirements. We hope you like it.🥳…”
The toughest part about onboarding new employees is teaching them how to use the software they’ll need.

Guidde makes it easy.

How it works: Guidde’s genAI-powered platform enables you to quickly create high-quality how-to guides for any software or process. And it doesn’t require any prior design or video editing experience.

With Guidde, teams can quickly and easily create personalized internal (or external) training content at scale, efficiently sharing knowledge across organizations while saving tons of time for everyone involved.

Transform your organization today with Guidde.
GM lays off over 1,000 salaried software, services employees (Reuters).
Investors Undaunted by Spate of AI Acqui-Hires (The Information).
Google IPO banker tracks two-decade journey from Silicon Valley upstart to $2 trillion (CNBC).
Donald Trump posts a fake AI-generated Taylor Swift endorsement (The Verge).
Elon Musk Is No Climate Hero (Wired).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Waymo unveils cheaper, next-gen self-driving taxi

Source: Waymo
Self-driving car company Waymo on Monday unveiled the sixth generation of its robotaxi, saying it had “significantly reduced” the cost of the system while simultaneously making it more capable.

The details: With 13 cameras, four lidar sensors, six radar units and an array of external audio receivers, the new robotaxi has fewer sensors than previous generations.

Still, Waymo said that the car retains several layers of redundancy, meaning safety isn’t being compromised for cost savings. Waymo said the vehicle is designed to handle harsh weather conditions, including fog and rain.
The context: Tesla’s driver-assistance approach to self-driving relies only on cameras (Elon Musk is famously anti-lidar). But some researchers have told me that redundancies in the form of lidar, radar and other sensors are essential for more advanced systems to actually operate safely and reliably.

Some researchers have also told me that self-driving cars will never be safe — and so will never scale — because it is impossible to train models to respond to edge cases (dangerous situations that aren’t in the training data) that haven’t happened yet.
California’s controversial SB 1047 gets watered down

Source: Created with AI by The Deep View
Since last we spoke about California’s SB 1047 — a bill that has become infamous for its attempts to impose real regulations on the AI industry — things have shifted. First, the bill was amended somewhat significantly by its lead author, California state Sen. Scott Wiener.

Following this latest round of amendments, the bill was passed by the Assembly Appropriations Committee and has advanced to the Assembly floor.

The amendments: Wiener said that the changes largely follow the core principles laid out in Anthropic’s recent proposal for the legislation.

The bill no longer includes criminal penalties; those who commit perjury regarding the safety tests of their models will now face only civil penalties.
The proposed Frontier Model Division agency has been eliminated; some of its functions have instead been moved to existing government agencies.
Instead of requiring “reasonable assurance” that a model is safe before release, the bill would now require “reasonable care.”
The bill now explicitly states that models that cost less than $10 million to train are not covered by its requirements.
And the state attorney general is now only able to sue companies for civil liability if harm has occurred, not beforehand.
“While the amendments do not reflect 100% of the changes requested by Anthropic … we accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” Wiener said in a statement.

But still, the opposition continues: Members of Congress have fired off letters of opposition to the bill, even as elements of the industry remain staunchly opposed.

Speaker Emerita Nancy Pelosi, in a statement of opposition, suggested that regulators draw inspiration from Fei-Fei Li and her recent letter against the bill. Nearly every point Li made, as we pointed out at the time, was inaccurate.

Mozilla and Hugging Face on Monday also came out against the bill, though their criticisms were more specifically targeted at ensuring open-source protections. Google and Meta have lobbied against the bill, as have Andreessen Horowitz and Y Combinator, both of which own stakes in OpenAI. An OpenAI spokesperson told The New York Times the bill would stifle innovation.
And, according to Transformer, tech organizations including TechNet, the Consumer Technology Association and Chamber of Progress (whose members include the Magnificent Seven) have lobbied against the bill.

Note: Five of the eight members of Congress who asked California Gov. Gavin Newsom to veto the bill if necessary are funded by the Consumer Technology Association’s PAC (Zoe Lofgren, Anna Eshoo, Scott Peters, Tony Cárdenas and Ami Bera), according to Open Secrets.

Lofgren’s top contributor is Google, but Apple and Amazon are in the top 10 and Meta came in at number 12, according to Open Secrets. (The money comes through the organizations’ PACs.) And Andreessen Horowitz — whose PAC is the seventh-largest campaign contributor (at nearly $52 million in 2024) that Open Secrets tracks — has spent $940,000 on lobbying efforts this year so far. Last year, the firm spent $950,000. There are four months left in 2024.
Daniel Colson, the executive director of the AIPI — a think tank whose polling has found that the vast majority of U.S. voters support SB 1047 (even before the amendments) — told me that members of Congress coming out against the bill are “on the opposite side of the general public.”

“Americans strongly prefer a more tightly regulated environment to a more unregulated one,” he said. “Even a complete ban on the technology is more popular than no regulation; 48% prefer a ban, while 18% prefer no regulation.”

What’s going on with SB 1047 is a good test for how AI will be regulated in the U.S. It points to the idea that, like social media, it probably won’t be. The original intent of this bill, as Bruna de Castro e Silva, an AI governance specialist at Saidot, told ITPro, was to “establish a proactive, risk-based framework.”

“This revised bill encourages a reactive, ex-post approach that addresses safety only after damage has occurred.”

The antitrust regulators of the early 1900s didn’t care if the companies they were regulating approved of the regulation; corporations have, after all, never exactly begged to be regulated. That’s why it made such a splash when this industry, led by its figurehead, OpenAI’s Sam Altman, asked Congress to regulate them, and quickly.
That plea now sounds extraordinarily thin.

The reaction (and the wide-reaching lobbying effort) to one state’s attempt to legally balance safety with innovation, rather than through voluntary commitments, does not bode well for any broader legislative efforts.

Which image is real?
A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI-generated books:

More than half of you said that we just need disclosures. The rest were evenly split between “ban them,” “do nothing,” “I don’t know” and “something else.”

Something else:

“Ban them on online platforms.”

Would you prefer a ban on AI over no regulation at all?