How OpenAI and Anthropic Plan to Maintain US Dominance in AI - Sync #510
I hope you enjoy this free post. If you do, please like ❤️ or share it, for example by forwarding this email to a friend or colleague. Writing this post took around eight hours. Liking or sharing it takes less than eight seconds and makes a huge difference. Thank you!

Plus: Manus; Gemini Robotics; inside Google's investments in Anthropic; the world's first "Synthetic Biological Intelligence"; Ozempic shows promise in preventing age-related conditions; and more!
Hello and welcome to Sync #510! In response to the AI Action Plan, OpenAI and Anthropic have submitted their AI policy proposals to the Office of Science and Technology Policy, which we will take a closer look at in this week's issue of Sync. Elsewhere in AI, Manus, a new AI agent from China, is making waves on the internet, The New York Times examines Google's investments in Anthropic, and OpenAI releases new tools for developing AI agents. In robotics, Google DeepMind released Gemini Robotics, a vision-language-action model trained for robotics, Boston Dynamics shows how it designs, builds, and tests new Atlas robots, and DARPA learns how many drones a single person can control at once. Additionally, we have the world's first "Synthetic Biological Intelligence," which uses real human neurons for computation, insights into how Ozempic could be used to prevent age-related conditions, and many more updates from the world of AI, robotics, and biotech this week. Enjoy!

How OpenAI and Anthropic Plan to Maintain US Dominance in AI
One of President Trump's first actions after assuming power was to revoke President Biden's Executive Order on AI, which the White House called "dangerous" and accused of hindering AI innovation by imposing onerous and unnecessary government control over the development of AI. Trump's administration views AI as one of the key technologies for maintaining US global dominance. To support this objective, President Trump issued an Executive Order on AI, affirming the United States' commitment to sustaining and enhancing America's leadership in AI to promote human flourishing, economic competitiveness, and national security. Part of the Executive Order is the launch of an AI Action Plan, with the public invited to share their AI policy ideas.

In response to the AI Action Plan, OpenAI and Anthropic have submitted their policy proposals to the Office of Science and Technology Policy. Both proposals align on key priorities in national security, AI governance, infrastructure, regulation, and workforce development. However, there are some differences between the two proposals, which we will examine in more detail later in this article.

After reading both documents, it is clear that AI is not just a powerful technology; it is also a geopolitical issue. Both proposals, as well as the Trump administration's push to advance AI, offer an idea of what the new regulatory AI landscape in the US could look like. OpenAI and Anthropic recognise the threat to US dominance in technology coming from China, whose companies can operate in ways that US companies cannot. OpenAI explicitly frames AI as a struggle between democratic and authoritarian models, advocating for "exporting democratic AI." In contrast, Anthropic focuses on AI security and containment, warning that adversarial states could misuse advanced AI capabilities. This strategic outlook translates into their policy recommendations, which include building new gigawatt-scale AI-focused data centres, expanding energy and AI infrastructure, reducing bureaucratic hurdles, expanding access to training data (including governmental data and copyrighted materials), and streamlining regulations to accelerate AI adoption and create a favourable environment for the broader AI industry in the US.

National Security and Export Controls

Both OpenAI and Anthropic recognise AI as a critical national security asset, but their policy recommendations differ significantly. Anthropic advocates a defensive strategy, emphasising strict security measures to protect US AI models from adversarial misuse. The company proposes mandatory risk assessments for powerful AI models, ensuring they are thoroughly tested for potential security vulnerabilities before deployment. Additionally, Anthropic supports strong export controls on AI-related assets, such as semiconductors and model weights, to prevent China and other foreign actors from acquiring cutting-edge AI hardware or models. The company also recommends closer collaboration between AI developers and intelligence agencies to give the US government real-time insights into adversarial AI developments.

Meanwhile, OpenAI proposes a three-tiered export control system that categorises countries based on their alignment with democratic AI principles. In this framework, Tier I countries (US allies) would have unrestricted access to American AI, Tier II countries (moderate-risk nations) would be subject to additional security measures, and Tier III countries (China and other adversaries) would be completely restricted from accessing US AI models and infrastructure.
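To make the proposed structure concrete, here is a minimal, hypothetical sketch of how such a tiering scheme might be encoded. The tier levels and access rules follow OpenAI's framing above, but the country assignments, names, and function are illustrative assumptions of mine, not taken from either proposal.

```python
from enum import Enum

class ExportTier(Enum):
    """Hypothetical encoding of the three tiers described in OpenAI's proposal."""
    TIER_I = "US ally: unrestricted access to American AI"
    TIER_II = "Moderate-risk nation: access with additional security measures"
    TIER_III = "Adversary: no access to US AI models or infrastructure"

# Illustrative assignments only -- neither proposal publishes a country list.
TIER_BY_COUNTRY: dict[str, ExportTier] = {
    "United Kingdom": ExportTier.TIER_I,   # assumption: treated as a close ally
    "Singapore": ExportTier.TIER_II,       # assumption: moderate-risk example
    "China": ExportTier.TIER_III,          # explicitly named in the proposal
}

def export_policy(country: str) -> str:
    """Look up the access rule a country would fall under in this sketch."""
    tier = TIER_BY_COUNTRY.get(country)
    return tier.value if tier else "Unlisted: would require case-by-case review"

print(export_policy("China"))  # -> "Adversary: no access to US AI models or infrastructure"
```

The point of the sketch is simply that the framework reduces to a lookup from country to tier; the real policy questions are who maintains the list and how countries move between tiers.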
OpenAI argues that this strategy ensures US AI spreads globally on democratic terms, countering China's push for its own AI solutions.

Another key distinction between the two proposals is their stance on government oversight and industry collaboration. While Anthropic calls for strict AI security protocols, direct intelligence collaboration, and mandatory government evaluations, OpenAI suggests a voluntary partnership model in which AI companies share security insights in exchange for regulatory relief from complex state-level AI laws. OpenAI also warns that overly restrictive AI export controls could backfire, allowing China to gain an advantage through regulatory arbitrage and state-subsidised AI development. By contrast, Anthropic prioritises containment, arguing that even a single breach of AI security, such as the theft of model weights, could severely undermine US national security.

AI Infrastructure and Energy

As AI models become more advanced, their energy and computational demands are skyrocketing, making infrastructure and energy policy a key area of focus for both OpenAI and Anthropic. Anthropic emphasises the need to rapidly scale the domestic energy supply to prevent AI companies from relocating overseas in search of cheaper power. The company warns that foreign nations are actively offering incentives, such as low-cost energy, to attract AI research and infrastructure, which could pose a national security risk if US-developed AI technology is deployed or stored abroad. To counter this risk, Anthropic proposes an ambitious goal of expanding US energy capacity by 50 gigawatts dedicated to the AI industry by 2027. This would ensure that AI companies have the power needed to train and deploy cutting-edge models while keeping critical AI infrastructure within US borders.

OpenAI frames AI infrastructure development as a massive economic opportunity. The company proposes a National AI Transmission Highway Act, modelled after the 1956 Interstate Highway Act, to modernise energy transmission infrastructure and ensure AI data centres have reliable access to power. OpenAI supports government-backed financial incentives, such as tax credits, loans, and investment programmes, to accelerate the construction of AI-specific energy projects. The company argues that rather than focusing solely on preventing AI firms from leaving the US, policymakers should view AI infrastructure as a driver of American economic growth, creating high-paying jobs, revitalising local economies, and reinforcing US industrial leadership.

Anthropic sees government intervention as necessary to secure AI development, calling for faster permitting processes, direct federal investments, and strategic partnerships to ensure AI firms can access reliable energy. OpenAI proposes more market-driven solutions, such as the creation of AI Economic Zones: regions with special tax breaks and regulatory incentives designed to attract private investment in AI infrastructure. OpenAI also recommends using the Defense Production Act (DPA) to prioritise AI-related energy projects, ensuring that AI data centres and computing clusters receive the necessary resources without bureaucratic delays.
Government AI Adoption

Both OpenAI and Anthropic agree that AI should play a significant role in transforming government operations, but their approaches to federal AI adoption reflect their broader priorities: OpenAI emphasises efficiency and rapid deployment, while Anthropic focuses on systematic, structured AI integration with security safeguards.

Anthropic proposes a government-wide initiative to identify every workflow that involves text, image, audio, or video processing and systematically integrate AI tools to enhance productivity. This approach envisions AI as a government-wide assistant, helping federal employees automate repetitive tasks, streamline services, and improve efficiency across agencies. Anthropic believes this strategy will maximise the public sector's productivity while ensuring AI benefits are widely distributed.

OpenAI, in contrast, focuses on eliminating bureaucratic obstacles that slow AI adoption, advocating a modernised, fast-track procurement system that allows government agencies to quickly deploy cutting-edge AI tools. The company highlights that current cybersecurity accreditation processes, such as the Federal Risk and Authorization Management Program (FedRAMP), take 12–18 months to approve AI applications for government use, whereas commercial deployments often take just one to three months. OpenAI proposes streamlining this process, enabling agencies to test AI tools under continuous security monitoring rather than waiting for full accreditation before use. By reducing these barriers, OpenAI argues, the government can adopt AI at the pace of the private sector, making agencies more responsive and effective.

Another key distinction is their stance on AI's role in national security and defence applications. Anthropic's recommendations focus primarily on AI's role in improving general government functions, such as public services, document processing, and interagency communications. OpenAI, however, emphasises AI's strategic role in national security, calling for specialised AI models tailored for defence and intelligence applications. OpenAI proposes that the US government fund and develop custom AI models trained on classified datasets, ensuring that AI tools are optimised for tasks such as geospatial intelligence, cyber defence, and military applications. To support this, OpenAI calls for expedited security clearances for AI labs, allowing private-sector AI developers to collaborate more closely with national security agencies.

AI Regulation

Anthropic advocates a stricter regulatory framework, emphasising the need for AI risk assessments and compliance measures to ensure AI systems are safe and reliable before deployment. The company supports mandatory third-party evaluations of powerful AI models, arguing that developers should be required to test their systems for potential security risks before they are widely deployed. Anthropic also calls for greater transparency from AI companies, urging the government to establish standardised reporting practices that require AI labs to disclose information about their models' capabilities, risks, and security safeguards.

OpenAI warns against excessive regulation that could stifle AI innovation and weaken America's global leadership in AI. The company strongly opposes fragmented state-by-state AI laws, arguing that a patchwork of regulations creates uncertainty for AI developers and could slow down progress.
Instead, OpenAI proposes a federal pre-emption framework that would establish a single national AI regulatory standard, overriding conflicting state-level rules. OpenAI believes that regulatory consistency will let AI companies focus on innovation rather than navigating complex legal requirements. The company also advocates a voluntary partnership model, where AI developers collaborate with the federal government on security and safety initiatives in exchange for regulatory relief from state laws.

Additionally, OpenAI takes a strong stance in favour of AI's ability to train on copyrighted materials. OpenAI argues that copyright laws should allow AI models to learn from existing content, similar to how humans learn from books, music, and art. The company warns that if AI developers in the US face restrictive copyright regulations while China and other competitors do not, American AI firms will be at a significant disadvantage. OpenAI also criticises recent European copyright laws that limit AI training data, arguing that such policies could slow AI progress and benefit adversarial nations.

AI's Economic and Workforce Impact

Anthropic focuses on tracking and analysing AI's influence on jobs and productivity, ensuring that policymakers have the data needed to make informed decisions about workforce transitions. The company has launched the Anthropic Economic Index, an initiative designed to monitor how AI affects labour markets by analysing AI adoption patterns and correlating them with employment trends. Anthropic advocates government-led efforts to measure AI's economic effects in real time, including integrating AI-related questions into the Census Bureau's American Time Use Survey and Annual Business Survey. This would allow the government to closely track how AI is being used across industries and identify early signs of economic disruption.

OpenAI, on the other hand, takes a more proactive approach, focusing on workforce development and AI education initiatives. Instead of just tracking AI's impact, OpenAI proposes preparing American workers for AI-driven jobs through training programmes and policy incentives. The company suggests expanding 529 education savings plans to cover AI-related training programmes, making it easier for workers to gain new skills in AI and emerging technologies. OpenAI also encourages public-private workforce training partnerships, where AI companies collaborate with universities, community colleges, and vocational training centres to equip workers with the skills needed for an AI-driven economy. OpenAI argues that proactively training workers will help mitigate job displacement and ensure that AI contributes to economic growth rather than exacerbating inequality.

Another key difference between the two companies' proposals is their view of AI's role in productivity and economic growth. Anthropic remains cautious about AI's economic effects, emphasising the need for continuous monitoring to ensure that AI-driven productivity gains do not disproportionately benefit certain segments of society while leaving others behind. The company supports policies aimed at ensuring AI-driven economic gains are widely shared and suggests that government intervention may be necessary to manage large-scale workforce disruptions. OpenAI is more optimistic about AI's ability to drive long-term prosperity, arguing that AI will create more jobs than it displaces if the workforce is adequately prepared.
OpenAI views AI as a tool for economic acceleration, pushing for policies that incentivise AI adoption in businesses while simultaneously preparing workers for the new opportunities it creates.

The full text of OpenAI's proposals is available here and Anthropic's here.

If you enjoy this post, please click the ❤️ button or share it.

Do you like my work? Consider becoming a paying subscriber to support it. For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter. Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.

🦾 More than a human

Ozempic's New Frontier: The War on Aging

Australian man survives 100 days with artificial heart in world-first success

Stem cell therapy trial reverses "irreversible" damage to cornea

Seeing with the Hands

AI Model Measures Pace of Brain Aging, Could Aid Prediction of Cognitive Decline

🧠 Artificial Intelligence

There is a new Chinese AI making waves on the internet. Manus is a general AI agent capable of independently analysing, planning, and executing complex tasks from start to finish. It operates as a multi-agent system, assigning tasks to specialised sub-agents. According to benchmarks provided by the company, Manus outperforms OpenAI's Deep Research and other state-of-the-art AI agents. Manus is currently in private beta and requires an invitation code for access.

New tools for building agents

Inside Google's Investment in the A.I. Start-Up Anthropic

The Assistant experience on mobile is upgrading to Gemini

▶️ The Government Knows A.G.I. Is Coming (1:03:40)

In this video, Ezra Klein speaks with Ben Buchanan, the top adviser on AI in the Biden White House, about the emergence of artificial general intelligence (AGI): AI systems capable of performing almost any cognitive task a human can do. According to many experts, this could happen within the next two to five years. Buchanan discusses how the US government has been preparing for this shift, particularly in national security and competition with China. The conversation explores the challenges of regulating AI, its impact on labour markets, the profound economic and societal changes AI will bring, and the contrasting approaches of the Biden and Trump administrations: one emphasising safety and oversight, the other focused on rapid acceleration.

CoreWeave inks $11.9 billion contract with OpenAI ahead of IPO

OpenAI claims it made a new AI model that's good at creative writing

Introducing Gemma 3

Meta begins testing its first in-house AI training chip

OpenAI reportedly plans to charge up to $20,000 a month for specialized AI 'agents'

CoreWeave to Acquire Weights & Biases - Industry Leading AI Developer Platform for Building and Deploying AI Applications

▶️ The Genius of DeepSeek's 57X Efficiency Boost (18:08)

In this video, Welch Labs explains, in an easy-to-understand way, the key innovation the DeepSeek team applied to their R1 model: Multi-Head Latent Attention (MLA). Together with key-value caching (KV caching) and some clever computational tricks, MLA improved the performance of the R1 model, allowing it to generate tokens six times faster than a vanilla Transformer while reducing the KV cache size by a factor of 57. (For readers unfamiliar with KV caching, a minimal sketch follows at the end of this section.)

How AI Takeover Might Happen in 2 Years

If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
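As promised above, here is a minimal sketch of plain KV caching, the mechanism MLA builds on: during decoding, the keys and values computed for earlier tokens are stored and reused, so each new token only attends against the cache instead of recomputing everything. This is a single-head toy in NumPy under simplifying assumptions (no batching, no positional encoding), and it illustrates vanilla KV caching rather than DeepSeek's MLA.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class KVCache:
    """Single-head toy KV cache: keys/values are computed once and reused."""
    def __init__(self, d_model: int):
        self.keys = np.empty((0, d_model))    # one row per generated token
        self.values = np.empty((0, d_model))

    def step(self, q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
        """Append this token's key/value, then attend over the whole cache."""
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])
        scores = q @ self.keys.T / np.sqrt(q.shape[-1])  # (1, tokens_so_far)
        return softmax(scores) @ self.values             # (1, d_model)

# Toy decoding loop: each step computes attention only for the newest token;
# everything already in the cache is reused rather than recomputed.
d_model = 8
cache = KVCache(d_model)
rng = np.random.default_rng(0)
for _ in range(5):
    q, k, v = rng.normal(size=(3, 1, d_model))  # stand-ins for projected states
    out = cache.step(q, k, v)
```

As I understand the video, MLA goes further by caching a compressed latent instead of the full per-head keys and values, reconstructing them on the fly; that compression is where the 57× cache-size reduction comes from.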
🤖 Robotics

Gemini Robotics brings AI into the physical world

Google DeepMind has unveiled Gemini Robotics, a vision-language-action (VLA) model based on Gemini 2.0 designed to bring advanced AI capabilities to robotics. Alongside it, DeepMind has also released Gemini Robotics-ER, which enhances spatial reasoning and enables roboticists to run their own programs using Gemini's embodied reasoning (ER) abilities. These models allow robots to perform complex tasks with greater generality, interactivity, and dexterity, adapting to various environments and robot types. DeepMind is partnering with Apptronik, Boston Dynamics, Agility Robotics, and other companies to advance humanoid robotics. These models can learn from human demonstrations, generate real-time code, and improve task success rates by two to three times, paving the way for more capable and adaptive robots. More details can be found in the paper describing Gemini Robotics.

▶️ Pick, Carry, Place, Repeat | Inside the Lab with Atlas (6:11)

In this video, Boston Dynamics takes us inside its lab, where it designs, builds, and tests the next-generation all-electric Atlas humanoid robot. While engineers from Boston Dynamics explain how the new Atlas works, we get a look at the robot's current capabilities as it performs tasks not so dissimilar to those it would carry out in a real factory or warehouse.

Just How Many Robots Can One Person Control at Once?

New York targets weaponized machines in landmark robotics bill

Prosthetic robot hand 'knows' what it's touching

Sony's aibo dog could soon walk quietly and perform elaborate dance routines

Aibo from Sony is a popular robot dog, but it has one problem: it can be quite loud when it moves. Researchers from ETH Zurich and Sony have presented a new method that uses a reinforcement learning (RL)-based model to reduce footstep noise, making the robot quieter. This approach reduced the noise from 32.9 dB to 22.7 dB. Additionally, the researchers introduced a second RL model that taught Aibo to dance. The results are adorable.

🧬 Biotechnology

World's first "Synthetic Biological Intelligence" runs on living human cells

The Making of a Gene Circuit

💡 Tangents

▶️ What Would an Alien Robot Look Like? (11:41)

In this video, John Michael Godier explores what alien robots might look like. In some ways, those machines would be different from us, just as the robots and space probes we build are different from us. However, because the laws of physics are the same everywhere in the universe, robot designs might converge on similar ideas. As always, John presents an interesting and thought-provoking question.

Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.

Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.

A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!

My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"