Good morning. I’ve only got two stories for you today. But we’re going in-depth into what seems to be a regulatory dynamic on the brink of dramatic change.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:
OpenAI secures major legal victory

Source: Created with AI by The Deep View
A New York federal judge last week dismissed one of the many copyright infringement lawsuits that have been filed against OpenAI. The complaint — filed by media outlets Raw Story and AlterNet in February — alleged that OpenAI had violated the Digital Millennium Copyright Act (DMCA) of 1998, which makes it illegal to strip copyright information (author, title, etc.) from an article with the intent of concealing potential infringement.

These alleged DMCA violations constituted the main pillar of the case, which claimed that OpenAI designed ChatGPT “not to acknowledge or respect copyright, not to notify ChatGPT users when the responses they received were protected by journalists’ copyrights and not to provide attribution when using the works of human journalists."

The details: The judge, agreeing with several points made by OpenAI’s lawyers, granted OpenAI’s motion to dismiss the case.

Importantly, the dismissal was granted without prejudice, giving the plaintiffs the opportunity to refile their complaint. Judge Colleen McMahon wrote in her decision that the complaint failed to provide evidence of damages related to OpenAI’s removal of copyright information from the plaintiffs’ articles.
What’s next? Matt Topic, a partner at the law firm representing Raw Story, told Wired that the group is “confident that we can address the court’s concerns in an amended complaint.” Raw Story’s CEO told Wired that he intends to “continue the case.”

Finally, a developer tool that makes digging around for the answer a thing of the past.

Other developer tools can’t tell you how your codebase works and why. Unblocked can. We augment your source code with context from Slack, Confluence, Jira (and more), so your team gets helpful answers without having to search for them.

Unblocked gives you:
- Instant answers about every aspect of your codebase
- Relevant, historical context for any code open in an IDE
- Automated answers to questions asked in Slack, removing interruptions to you
⌛ Engineers who use Unblocked save an hour or more a day.

“Every developer now has the ability to tap into past discussions and decisions to fill their knowledge gaps, regardless of their tenure. We are moving faster and making more accurate decisions as a team as a result.” - Alex Mallet, EVP of Engineering at Forto

Start your 21-day trial today

What does the dismissal mean?

Source: OpenAI
At this stage, the outright dismissal of a major copyright infringement case against OpenAI might seem to signal a death knell for the many other copyright-related lawsuits — notably including one brought by the New York Times — that have been filed against the company. But that is not necessarily true, and the reason has to do with the specificity of this particular case.

Unlike other cases, this complaint did not argue that the training of economically valuable generative AI models without the knowledge, permission or compensation of the copyright holders constituted copyright infringement. (On that point, OpenAI believes its actions are protected by the ‘fair use’ doctrine, a question that has yet to be legally decided.) Instead, this case specifically argued a violation of the DMCA due to OpenAI’s removal of identifiable copyright information.
This is a point that McMahon herself acknowledged, writing: “Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants' training sets, but rather Defendants' use of Plaintiffs' articles to develop ChatGPT without compensation to Plaintiffs. Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today.”

OpenAI, as McMahon noted, has acknowledged that it has trained on copyrighted materials, going so far as to say that it would "be impossible to train today's leading AI models without using copyrighted materials." The company has also inked an ever-increasing number of licensing deals — for undisclosed millions — with news organizations that have opted against taking legal action against the startup.

As this dynamic continues to play out, a growing number of academics and legal experts have argued that the training of generative AI models does, in fact, constitute copyright infringement. Jacqueline Charlesworth, former general counsel of the U.S. Copyright Office, wrote in a recent paper that the ‘fair use’ defense does not apply to AI training, since the (monetized) output is a “function of the copied materials.”
While it remains to be seen how this case will shake out — and while it seems unlikely that it will cripple the other ongoing cases — this dismissal is certainly a major early win for OpenAI and other AI labs.
Automate competitive and market research with Klue AI

Klue AI automates competitive and market intelligence to provide real-time insights and recommendations you can action today:
- Noise Reduction: Filters out 90% of noise to surface actual intel
- Summarize Alerts: Summarize any article in alerts to get to “why it matters” faster
- Review Insights: Summarizes competitor reviews for positive or negative sentiment
See Klue in action today.
- AI slop is flooding Medium (Wired).
- Amazon may up its investment in Anthropic (The Information).
- Tesla’s social media posts falsely implied that its cars are robotaxis, NHTSA warns (CNBC).
- Deforestation in Brazil’s Amazon falls to lowest level in nine years (Semafor).
- India’s ambitious lithium dreams have stalled (Rest of World).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

In the lead-up to the U.S. presidential election last week, OpenAI said that it had implemented a number of safeguards across ChatGPT. The startup estimated that ChatGPT rejected more than 250,000 requests to generate images of the candidates on the ballot.

The Information reported that OpenAI’s new flagship model — Orion — represents a far smaller increase in performance quality than the jump between previous flagships GPT-3 and GPT-4. Some OpenAI researchers reportedly believe Orion isn’t “reliably better than its predecessor in handling certain tasks,” such as coding.
What a Trump presidency might mean for AI

Source: Created with AI by The Deep View
With former President Donald Trump months away from reclaiming the White House, an obvious question is how his incoming administration will affect the established regulatory dynamic around artificial intelligence. The answer is not at all clear.

First, the established dynamic: In the two years since ChatGPT entered the scene, regulatory efforts in the U.S. remain nascent. A handful of bills related to AI — covering security, civil rights and consumer protections — have been introduced during the 118th Congress (tracked here), but none have made much progress. This, despite the fact that the vast majority of American voters earnestly crave regulatory intervention in the space.

Differing layers of legislation (tracked here) have been proposed and signed into law across the states, but it’s a patchwork approach that makes compliance challenging and consumer protections unclear.

The clearest element of the current regulatory environment is President Joe Biden’s executive order and his recent national security memorandum, both of which are designed to establish civil and consumer rights and protections related to AI while enhancing education and government-wide adoption of the technology.

On this front, Trump has promised to repeal the order. This promise is in line with the Republican Party’s official platform, which claims that the order “hinders AI innovation.” The platform says that the executive order will be replaced by one loosely “rooted in free speech and human flourishing.”
To that end, the Washington Post reported in July that the America First Policy Institute had drafted a potential replacement for Biden’s executive order that would establish a “Manhattan Project” for AI while simultaneously addressing “unnecessary and burdensome regulations.” However, this is not yet an official policy position; there is no mention of AI at all on Trump’s official campaign website.

Since Biden’s order called for the creation of the AI Safety Institute (AISI), there is a chance that organization gets axed as well (though some Big Tech firms are pushing for Congress to make it permanent before Trump takes office).

“The emphasis will shift away from the regulatory environment,” creating more space for companies to make their own decisions on safety, transparency and civil rights, Dr. Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination and Redesign at Brown University, told Nature. Venkatasubramanian has previously told me that regulation is vital in AI, arguing that it would act as a boon for innovation rather than a hindrance, since it would “trigger market creation” by mandating responsible innovation.

Here’s where it gets complicated: While the general expectation — reinforced by Trump’s circle of backers, explicitly including J.D. Vance and Marc Andreessen — is that a Trump administration will roll back or otherwise stymie AI regulation, a possible wild card in those plans is Elon Musk.

Musk has, in recent months, become a staunch Trump supporter and campaign donor, contributing more than $100 million while also appearing at numerous events and rallies. Trump has said he plans to create a Department of Government Efficiency for Musk to head; the mission would be to cut federal spending by gutting federal agencies.
This, combined with Musk’s regular complaints about regulatory efforts at his other companies — notably SpaceX — seems to indicate an environment of forcibly fewer regulations by way of crippled regulatory bodies. (Many legal experts I’ve spoken with have expressed far more confidence in regulatory bodies reining AI in than in Congress, due to polarization and the slow pace of lawmaking.)
But, as much as he hates federal regulation, Musk has also been sounding the alarm around so-called existential risk for several years now. He came out in support of California’s SB 1047, which, though ultimately vetoed by Gov. Gavin Newsom, would have held companies accountable for “catastrophic” harms caused by AI.

The other element here involves some proposed policies that, while not specifically pertaining to AI, will certainly impact the sector.

Looking to the past to predict the future: Despite not talking much about AI, the same Biden-era executive order that Trump has promised to repeal actually builds on an AI executive order that Trump himself issued during his first term. That order called for an expansion of AI innovation and growth, while also highlighting a number of factors that have carried over to Biden’s approach, such as risks to privacy and civil liberties, which the order said could negatively impact public trust in AI.

Though things remain very far from certain, my expectation here is that we’ll see a very light, hands-off approach to AI regulation, coming at a time when clear regulation — enshrining civil rights, worker rights and environmental protections — is sorely needed.

Daron Acemoglu, who recently won the Nobel Prize in economics, said that “we are likely to get a lot of bad AI, and not so much good AI, under Trump’s approach to AI regulation.” The biggest cost of this, according to Acemoglu, is “more automation, bypassing the development of pro-worker technology. More automation will be costly for American workers.”

The reality is that no one really knows what’s going to happen next. What is clear, though, is that Big Tech — and the billionaires that fill its ranks — has never been closer to controlling its own regulatory environment, and regulation has never been further away. As Gary Marcus noted, the “time to get this right is limited … grassroots action is likely the only possible way in which U.S. citizens might get meaningful protection from tech oligarchs. Even then, the odds aren’t great.”

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on Anthropic’s partnership with Palantir:
- 30% don’t like the idea of genAI in the government.
- 21% think it’s fine if handled responsibly.
- 21% think it’s great.
- The rest aren’t sure.

What do you think of the judge's decision re the copyright lawsuit?