Good morning, and welcome to the final part of our five-part special edition series.

2024 was a long, crazy year. 2025 promises to just be, well, more. And we’ll be in the thick of it the whole way, with more podcasts, videos, interviews, reports, analyses and stories. I’ve got a lot of gratitude for all of you for sticking around, reading in and just thinking hard about this technology. Just by being here, you make this possible, so thank you.

Let me know how you liked this week of special editions, if it’s something you want to see more of every now and then, or if you never want to see something like this again! It’s impossible to fit the contents of a year into five brief pieces, but I had a lot of fun preparing this novella of recent history and murky tomorrows.

Today, to close things out, we’re diving into the question of regulation and oversight, a dynamic that began to evolve in 2024 and promises to remain complex in 2025, particularly with the incoming presidential administration in the U.S.

— Ian Krietzberg, Editor-in-Chief, The Deep View
Fastest Test Coverage with Minimum Maintenance

testRigor is the #1 generative AI-based codeless test automation tool that lets you spend 99.5% less time in test script maintenance. With testRigor, you can write or generate comprehensive end-to-end tests in plain English from a real user’s perspective. This remarkable technology lets everyone on the team contribute to testing processes, not just the automation engineers.

Quick and simple test creation with Generative AI
Single tool to test web, mobile (hybrid/native), desktop, and API
Enables you to create tests 15X faster than Selenium
Near-zero maintenance with no locators
Ultra-stable tests without CSS/XPath reliance

See how your team can save time now: Request a Demo
The backdrop

Source: Created with AI by The Deep View
At the beginning of the AI frenzy — May of 2023, to be specific — OpenAI CEO Sam Altman came before Congress and asked the government to move quickly to regulate the technology.

At the time, he said a few things that certainly captured the attention of lawmakers, such as: “If this technology goes wrong, it can go quite wrong.” He has since changed his tune; in 2024, he came out against proposed legislation and has since downplayed the risks he spent 2023 playing up. Appearing at the New York Times DealBook summit a few weeks ago, he said the following: “my guess is we will hit AGI sooner than most people in the world think and it will matter much less.”

As regulatory efforts in the U.S. grew more serious in 2024, Big Tech went on the offensive, parroting the idea that regulation would “stifle” innovation; this despite the fact that overwhelming numbers of Americans heavily support the idea of AI regulation, even if it comes at the cost of so-called innovation.

In the following, I’m going to focus on broader legislative efforts. This piece won’t include a look at the many, many lawsuits that have been filed against AI developers, and where those might go in 2025.

A look back at related stories we’ve done in the past year:

Regulation in 2024

The world

As the rest of the world grappled with the idea of AI regulation, the European Union launched it. The EU’s AI Act was finalized in March of 2024 and began to enter into force in August; it will be fully in force by 2026.

The Act takes a risk-based approach to the regulation of AI, classifying different types and applications of the technology based on how risky they may be, and creating a regulatory regime around that. Certain highly risky applications were banned outright; others now come with a bunch of regulatory caveats, such as transparency and oversight requirements.

Other countries have yet to assemble something quite so comprehensive, though some seem to be getting closer. An Australian senate committee, for instance, recently called for the establishment of a dedicated, risk-based regulatory framework for AI. The U.K., meanwhile, seems to be taking a bit of a different approach, with Prime Minister Keir Starmer vowing in October to “rip up the bureaucracy that blocks investment … Are we leaning in and seeing this as an opportunity, as I do? Or are we leaning out, saying ‘this is rather scary, we better regulate it,’ which I think will be the wrong approach.”
And in September, a number of countries — including the U.S. and the U.K. — signed an international treaty on AI, but the treaty itself is super broad and bogged down with caveats. What we don’t have, and what we’re nowhere near achieving, is the kind of international agency committed to the oversight of artificial intelligence that cognitive scientist Gary Marcus has so passionately called for.

The U.S.

On the federal level, there exists no legislation specifically designed to address AI. What we do have is President Joe Biden’s circa-2023 AI executive order, in addition to Biden’s more recent National Security Memorandum on AI, which sought to build upon the order.

The focus of both of these efforts was to establish ethical and moral safeguards around AI systems, while also boosting their adoption in the government. Among other things, the executive order called for the creation of the U.S. AI Safety Institute, whose fate is now uncertain with President-elect Donald Trump expected to repeal Biden’s executive order. It remains unclear whether the order will be replaced, or with what; Trump has not talked about his plans for regulating AI, though a deregulatory environment seems highly likely.

A number of bills, meanwhile, were introduced in Congress, all seeking to address specific gaps in AI safety. This includes bills that would require companies to get consumer consent before training an AI model on their data, alongside bills that would address the environmental impacts of AI and bills that would create remedies for victims of deepfake abuse and harassment. None of these bills have made any progress beyond their introduction. You can track them all here.
A look back at related stories we’ve done in the past year:

State governments, meanwhile, have been much more active on the AI legislation front than their federal counterparts. The result, however, is a highly varied patchwork of different laws across the country, each with a different approach, which makes compliance challenging for corporations.

Many of the bills passed at the state level in 2024 sought to address issues of algorithmic discrimination, among other things. You can track state legislative progress here and here.

You can’t talk about state legislation without talking about California, which is home to many of the Big Tech companies that are developing AI. Though the state passed a number of AI-related bills, including ones that would require transparency, California Gov. Gavin Newsom also vetoed SB 1047, a proposed bill that would have held companies accountable if their AI products wreaked “catastrophic” harm.

The bill spent months at the center of a maelstrom of anti-regulatory lobbying campaigns conducted by Big Tech and its allies, and so acts as a pretty good barometer of Big Tech’s willingness to be regulated.

Daniel Colson, the executive director of the Artificial Intelligence Policy Institute (AIPI), called Newsom’s veto “misguided, reckless and out of step with the people he’s tasked with governing.”

Regulation in 2025

On that front, though, Colson expects some version of SB 1047 to be reintroduced next year; since Newsom had specific criticisms of the bill, Colson thinks it is possible that a version of 1047 might become law, although it would have to battle through a maze of lobbyists to do so.

Colson further said that the AI regulatory landscape could see “significant evolution across multiple fronts” next year. He expects a Trump administration to “push to streamline AI infrastructure development through executive action,” adding, however, that “much substantial deregulation would require congressional support.”

“National security concerns are likely to drive bipartisan support for targeted AI policy, particularly around export controls and security requirements. An interesting development may emerge in the form of novel political coalitions between AI safety advocates and more pragmatic deregulation proponents, potentially leading to compromise legislation that increases oversight in critical areas while reducing barriers in others.”

“Meanwhile, state legislatures are poised to pass a wave of AI bills varying widely in their sophistication and regulatory burden — tech companies may find themselves regretting their concentrated opposition to California's SB 1047, especially given Governor Newsom's justification for vetoing it rested partly on other AI legislation he supported, demonstrating how politicians increasingly need to show some appetite for AI oversight.”
Computer scientist and AI expert Dr. Srinivas Mukkamala told me that 2025 will be an “interesting” year for the U.S.

“It'll be a less regulatory regime. I mean, these are capitalists. The President-elect is a capitalist … (Elon Musk) is a capitalist. Is humanity a top priority? I don't know. It's all about innovation. If you have to put humanity versus innovation, I think they'll take innovation … which is (not necessarily) wrong, by the way. Humanity can thrive in an innovative environment,” he said. “The current environment will push more innovation, more disruption and break the status quo. Whatever it takes.”
Liran Hason, a computer scientist and the co-founder/CEO of Aporia, a guardrails and observability platform for AI applications, told me that, like Mukkamala, he expects to see a less regulatory environment next year. He’s not sure that it’s a good thing.

"President-elect Trump has voiced plans to roll back Biden’s executive order on AI — a decision that might fast-track AI innovation, but also introduces serious risks if left unchecked. It's becoming clear that, as the Trump administration considers AI regulation shifts, we need more than guidelines — we need strong, enforceable guardrails to ensure AI develops responsibly. This isn’t about hindering AI’s potential. The real danger lies in developing AI without the protections needed to prevent misuse in critical areas like suicide prevention and child protection,” he said.

“We understand the incredible potential of AI, but we’ve also seen firsthand the risks. Without guardrails, the unintended consequences can be disastrous. This isn’t a partisan issue — it’s a matter of American leadership and responsibility in technology. If we want the U.S. to lead globally in AI, we need to ensure it’s done safely and thoughtfully. Guardrails don’t restrict AI; they make sure it enhances human well-being instead of undermining it. President Trump has the opportunity to adopt a responsible path that will keep AI innovation moving forward while safeguarding American lives."
A look back at related stories we’ve done in the past year:

Galileo co-founder Yash Sheth said that, “even if Trump doesn't want any regulations, the big tech corporations are gonna lobby the hell out of him.”

“There are hundreds of billions of dollars invested in AI already, and if the world doesn't see enough ROI from it, there's going to be huge implications. The only way you can see true value through AI is if you can deploy it free, and you can deploy it freely only if you have the right set of regulations around deploying it. I'm optimistic overall as a person, and even through uncertain times. I do think that, you know, common sense will prevail.”
|
Lucas Hansen, co-founder of the non-profit CivAI, told me that “there's some weird, conflicting influences that are going on.” Deregulation seems likely, according to Hansen, though he expects that the content of Biden’s AI order will, in some fashion, remain in place.

“I think there is a chance that Trump would want more direct control over what's going on there, and so might want to move it to what would be directly underneath his influence; the Office of Science and Technology might be a place to put some of those responsibilities and research, and that's mostly how Trump exerted his influence on the trajectory of AI during his administration,” Hansen said. “So I think it's possible that things will move towards that. But the overall stance of, if not the AI executive order, then at least the national security memo on AI, feels like it aligns pretty closely with what (Trump) wants … But who knows. It’s an odd situation.”
Harry Muncey, Senior Director of Data Science and Responsible AI at Elsevier, expects to see more global clarity “on how AI regulation will be implemented in practice. I think we'll also see maturing of standards and 3P certifications to support transparency around which organizations & systems are compliant.”

Dr. John Licato, an associate professor of computer science at the University of South Florida and the founder/director of its Advancing Machine and Human Reasoning lab, told me that one of his greatest concerns for 2025 involves this environment of regulatory uncertainty.

“It’s always been the case the past two years that we’ve been talking about regulation and no one knows what it looks like. The worry was the regulation might be too heavy too fast,” he said. “Now, people are worried about the opposite, that the regulations are going to go away too fast, at a time when we do need some guardrails on malicious uses of this technology.”

“I guess it’s just the uncertainty, the uncertainty and the idea that the people making the decisions may not necessarily be tech experts, perhaps.”
“Really, the technology itself is super promising, so much potential. The technology itself isn’t what worries me. It’s all the things around the technology,” he added. “How do you make sure people aren’t going to use it to sway public opinion? Even tangential issues, the fact that so many research labs depend on high-quality foreign students that are super talented, but all these issues about students that can’t emigrate to the U.S., all these additional burdens. These things are things that, I think, for people in AI who are doing the research, it’s something that worries us more than the actual technology.”

Like everyone I talked to, I expect Trump to repeal Biden’s executive order almost immediately. What’s unclear is what happens next. Historically, we would expect a deregulatory environment, but, considering Trump’s close association with the movers and shakers of this business — Elon Musk, for one — a lack of federal regulation might actually pose a challenge.

When these developers are thinking about deployment across the country, where each state is now developing its own particular rules of the road, compliance becomes confusing, challenging and expensive. A solution to this, as this 2018 Axios piece points out, is a little bit of federal regulation — perhaps not a lot of it — which would offer, at the minimum, universal guidelines that would make compliance and deployment far less complicated.

It’s hard to tell how much of a concern this is for AI developers. Depending on how much they care, we might well see lobbying efforts for some degree of federal regulation. (It’s a big ‘might,’ I know.) I would also be surprised if Trump kills Biden’s order without some sort of replacement — during his first term, Trump himself issued an executive order on AI that became the foundation for Biden’s order …

That aside, my baseline expectation is a blind push for ‘innovation’ and a rollback of whatever regulatory momentum has been building, specifically in those vital places where sustainability and AI meet.

Then there’s Congress, which has expressed bipartisan interest in developing AI regulation since 2023.

I would, however, be very surprised if any of it comes to fruition. What is far more likely is that the states — led by California — will create the regulatory bar to which these companies will be held.

And I think we’ll see that bar solidify in 2025 (though it won’t be especially high).

Which image is real?
Mark Zuckerberg gave Meta’s Llama team the OK to train on copyrighted works, filing claims (TechCrunch).
Nvidia’s tiny $3,000 computer for AI developers steals the show at CES (CNBC).
'Entirely foreseeable': The L.A. fires are the worst-case scenario experts feared (NBC News).
Supreme Court won’t block Trump’s sentencing in hush money case (WSJ).
Deel accused of laundering, sanctions failures in Ponzi Scheme lawsuit (The Information).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Hire Your Next Stack Engineer for 70% Less!

Find Out More!
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on investing trends in AI:

36% of you think things will keep going higher in 2026; 26% think it’ll be the year the bubble bursts.

What do you think regulation will look like next year?
|
|
|