California's controversial AI bill - Sync #481
I hope you enjoy this free post. If you do, please like ❤️ or share it, for example by forwarding this email to a friend or colleague. This post took around eight hours to write. Liking or sharing it takes less than eight seconds and makes a huge difference. Thank you!

Plus: Neuralink shares a progress update on the second clinical trial; Windows Recall is coming back in October; a $16,000 humanoid robot from Unitree; a lot about drones; and more!

Hello and welcome to what was previously known as “Weekly News Roundup,” but will now be called Sync. The previous name was too long and too generic, so I spent some time thinking about a new name that draws from the language of technology and better reflects the purpose of these articles: to bring together all the essential news and stories from AI, robotics, biotech, and the bleeding edge of technology that promises to make us more than human. I believe Sync is that name. Please let me know what you think of the new name. And now, let’s dive into Sync #481!

The main story this week is California's controversial AI bill, SB 1047: how it was declawed by Silicon Valley, and what this reveals about tech companies.

In other news, Neuralink shared a progress update on their second clinical trial. In the AI space, Microsoft is not giving up on Windows Recall yet, Eric Schmidt speaks the quiet part aloud, and Nvidia’s AI NPCs will debut in a game next year. There’s been a lot of activity in robotics this week, with Unitree announcing it's ready to mass-produce its cheaper, $16,000+ humanoid robot. Meanwhile, Boston Dynamics shows off how good the new all-electric Atlas is at doing push-ups, and a company with ties to the legendary IHMC reveals its own humanoid robot. We also have a mini-section about drones. We’ll wrap up this week’s issue with a look at a team turning plastic-eating bacteria into food and how to win a bike race by hacking opponents’ gear shifters. Enjoy!

California's controversial AI bill

The global race to regulate AI is intensifying. The EU has the AI Act, which is now in force, and the Chinese government is also regulating the use of AI. Meanwhile, the US does not yet have its own equivalent of the EU’s AI Act at the federal level, but that hasn’t stopped individual states from putting forward their own AI regulations. That is exactly what California is doing.

In February 2024, Senator Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB 1047. This landmark bill is designed to prevent AI systems from causing catastrophic harm, such as mass casualties or large-scale cyberattacks, by imposing strict safety requirements on the developers of the most advanced AI models. However, the proposed safety requirements were met with fierce opposition from Silicon Valley and showed what tech companies will do to protect their interests.

The original SB 1047

The original version of SB 1047 introduced a comprehensive framework aimed at preventing catastrophic harm from advanced AI systems. The bill specifically targeted the largest AI models: those that cost at least $100 million to train and use 10^26 floating-point operations (FLOPs) of training compute (thresholds that set the bar roughly at the level of resources needed to train GPT-4 and could be raised as needed).
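To make the coverage criteria concrete, here is a minimal sketch, in Python, of how the two thresholds described above could be expressed. The dollar and FLOP values come from the bill as summarised here; the class, function, and field names are purely illustrative and not part of the legislation.

```python
# A minimal, illustrative sketch of the original bill's coverage test, assuming
# the thresholds as summarised above ($100M training cost and 1e26 FLOPs of
# training compute). The names below are hypothetical and not from the bill.

from dataclasses import dataclass

COST_THRESHOLD_USD = 100_000_000      # "at least $100 million" to train
COMPUTE_THRESHOLD_FLOPS = 1e26        # 10^26 floating-point operations

@dataclass
class TrainingRun:
    cost_usd: float       # estimated cost of the training run
    compute_flops: float  # total floating-point operations used in training

def is_covered_model(run: TrainingRun) -> bool:
    """Would this training run fall under the original bill's thresholds?"""
    return (run.cost_usd >= COST_THRESHOLD_USD
            and run.compute_flops >= COMPUTE_THRESHOLD_FLOPS)

# A GPT-4-scale run clears both thresholds; a smaller run does not.
print(is_covered_model(TrainingRun(cost_usd=150e6, compute_flops=3e26)))  # True
print(is_covered_model(TrainingRun(cost_usd=20e6, compute_flops=5e24)))   # False
```

The thresholds are kept as plain constants rather than hard-coded into the function because, as noted above, the bill allowed them to be raised as needed.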
Developers of these systems were required to implement stringent safety protocols, including an "emergency stop" feature that would allow for the immediate shutdown of an AI model if it posed a significant risk. Additionally, developers had to undergo annual third-party audits to ensure compliance with these safety measures.

To enforce these rules, SB 1047 proposed the creation of the Frontier Model Division (FMD), a new government agency responsible for overseeing AI safety. The FMD, governed by a five-person board with representatives from the AI industry, the open-source community, and academia, would set safety guidelines and advise the California attorney general on potential violations. Developers were also required to submit safety certifications under penalty of perjury, providing legal assurance that their AI systems adhered to the bill’s stringent requirements.

Non-compliance with these regulations could result in severe penalties, with fines reaching up to $10 million for the first violation and up to $30 million for subsequent offences. The bill also empowered the attorney general to bring civil actions against developers who failed to comply. To encourage transparency, SB 1047 included whistleblower protections for employees who reported unsafe AI practices to the authorities. It is also worth noting that the proposed rules would apply to any company doing business in California, regardless of where it is based.

Silicon Valley did not like SB 1047

However, the bill faced strong opposition from Silicon Valley and the broader AI community. Critics argued that the bill’s regulations could stifle innovation, particularly for startups and open-source projects.

In an open letter opposing SB 1047, OpenAI warned that the proposed bill could significantly hinder AI innovation and drive companies out of California. a16z, one of the largest venture capital firms in Silicon Valley, also strongly opposed SB 1047, arguing that it would burden startups with its arbitrary and shifting thresholds. Yann LeCun, Meta's Chief AI Scientist, has been vocal against the bill, arguing that it would harm research efforts and could effectively "kill" open-source AI. The Chamber of Progress, a tech industry trade group representing companies like Google, Apple, and Amazon, stated that SB 1047 would restrain free speech and push tech innovation out of California. Other opponents included Fei-Fei Li, a prominent AI researcher and Stanford professor, who criticised the bill for potentially harming California’s AI ecosystem, and Andrew Ng, who argued that the bill makes a fundamental mistake by regulating AI technology instead of specific AI applications, which he believes would be a more effective approach.

The amended SB 1047

The strong opposition from the tech industry and the AI community led California’s lawmakers to change the original bill.

One of the most notable changes was the reduction of the attorney general's power. Initially, the bill allowed the attorney general to sue AI developers for negligent safety practices before a catastrophic event occurred. The amended bill limits this power, permitting the attorney general to seek injunctive relief to stop potentially dangerous activities, but lawsuits can only be filed after a harmful event has taken place.
Another major amendment was the removal of the Frontier Model Division. Instead, the bill establishes the Board of Frontier Models within the existing Government Operations Agency and expands the board from five to nine members. This board will still be responsible for setting thresholds, issuing safety guidelines, and regulating auditors, but within a more integrated governmental structure.

Additionally, the amendments brought leniency to the bill’s safety certification requirements. Developers are no longer required to submit safety certifications under penalty of perjury. Instead, they will be required to provide public statements about their safety practices. The bill also introduced protections for open-source AI projects, ensuring that smaller developers who spend less than $10 million fine-tuning a model are not held liable, shifting the responsibility back to the original developers.

The weakened rules, however, still do not satisfy everyone. Martin Casado, general partner at a16z, wrote in a tweet that “the edits are window dressing. They don’t address the real issues or criticisms of the bill.” Additionally, eight US Congress members representing California wrote a letter asking Governor Gavin Newsom to veto SB 1047.

What’s next for SB 1047?

Despite the remaining opposition, on 15th August 2024 the amended SB 1047 passed through California’s Appropriations Committee and is now heading to the Assembly floor for a final vote. If it passes, the bill will return to the Senate for approval of the latest amendments before potentially being signed into law by Governor Newsom.

If SB 1047 is signed into law, it could significantly impact how AI is regulated in the US. Other states may follow California’s example and enact their own AI laws. Furthermore, any future federal law governing AI might borrow ideas from SB 1047.

However, the whole SB 1047 story showcases the tech industry’s reluctance to be regulated. Publicly, these companies claim to be in favour of regulation, but when new regulations are proposed, they do whatever they can to either repel or weaken them, as seen with SB 1047. If anything, SB 1047 is a “mask off” moment for the AI industry, revealing how far tech companies will go to protect their technology and interests from being regulated and held accountable.

If you enjoy this post, please click the ❤️ button or share it.

Do you like my work? Consider becoming a paying subscriber to support it. For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous contribution towards the work put into this newsletter. Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.

🦾 More than a human

PRIME Study Progress Update — Second Participant

Designer Babies Are Here — So Why Aren't We Talking About It?

🔮 Future visions

▶️ The Next Technological Revolution (45:02)

Steam power, electricity, computers, the internet: these are some of the technologies that have revolutionised our daily lives. But what technology will be next to join this list? There are many contenders, and in this video, Isaac Arthur takes a closer look at some of them, from 3D printing to biotech, nanotech to advanced robotics, and new energy sources, examining how they could revolutionise our lives in the near future.

🧠 Artificial Intelligence

Microsoft will release controversial Windows Recall AI search feature to testers in October

3x3 AI Video Matchup: US vs China

Ex-Google CEO says successful AI startups can steal IP and hire lawyers to ‘clean up the mess’

Brands should avoid AI. It’s turning off customers
White House says no need to restrict ‘open-source’ artificial intelligence — at least for now

Nvidia’s AI NPCs will debut in a multiplayer mech battle game next year

If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?

🤖 Robotics

$16,000 humanoid robot ready to leap into mass production

Chinese robotics company Unitree announced that their more affordable humanoid robot, G1, is ready for mass production, with pricing starting at $16,000. The G1 boasts advanced features such as 3D LiDAR, a RealSense depth camera, noise-cancelling microphones, and a quick-release battery. The company also released a new video showing the G1 jumping and performing other acrobatic tricks.

In the first video since the announcement of the new all-electric Atlas, Boston Dynamics shows how good their robot is at doing push-ups.

Meet Boardwalk Robotics' Addition to the Humanoid Workforce

A new humanoid robot joins the party! Named Alex, it is made by Boardwalk Robotics, a company related to the legendary IHMC, the Institute for Human and Machine Cognition in Pensacola, Florida. The new robot consists only of a torso with two arms, with legs expected in the future. The release video shows how Alex can handle various tasks with great dexterity and speed. Boardwalk is currently selecting commercial partners for a few more pilots, and for researchers, the robot is available right now.

Tesla is hiring people to do the robot

Ikea expands its inventory drone fleet

What’s next for drones

Amazon's delivery drones are so loud they are like a ‘giant hive of bees’

🧬 Biotechnology

How we could turn plastic waste into food

💡 Tangents

Want to Win a Bike Race? Hack Your Rival’s Wireless Shifters

Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.

Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.

A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!

My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"