Good morning, and happy Friday. Tesla had a very good day yesterday.

The stock soared nearly 22% following its better-than-expected earnings report Wednesday night. The time to buy would’ve been Wednesday. The time to sell … well, that’s up to you.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🌍 AI for Good: Ocean cleanup
🏛️ White House pushes AI adoption amid national security concerns
💻 Ex-OpenAI researcher says company violated copyright law
👁️🗨️ The OpenAI exodus continues; the ‘world is not ready’ for AGI
AI for Good: Ocean cleanup

Source: The Ocean Cleanup
Our oceans have a plastic problem. As of 2023, scientists estimated that there were more than 171 trillion pieces of plastic floating in the oceans, a reality that is harming ecosystems, marine life and humans.

These pieces of plastic will continue to break down into smaller pieces, which are harder to remove, unless they are manually cleared out of the water. A few groups are aiming to do exactly that.

One of them, a nonprofit called The Ocean Cleanup, uses AI to help.

Because of the sheer quantity of plastic in the ocean, it’s important for the project to understand the optimal locations to deploy its cleanup hardware. In 2022, it trained an AI model on thousands of photos of marine debris that, when combined with GPS-tagging, “creates a remote sensing approach to detect and map the dynamic behavior of floating ocean plastic more efficiently.”
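To make the general idea concrete, here is a minimal sketch of detect-and-geotag: run an object detector over survey photos, read each photo’s GPS EXIF tags, and emit geotagged detections that can be plotted as a density map. This is not The Ocean Cleanup’s actual pipeline; it uses torchvision’s COCO-pretrained Faster R-CNN as a stand-in for a model trained on marine-debris imagery, and the helper names are illustrative.

```python
# A sketch of detect-and-geotag, NOT The Ocean Cleanup's actual system.
# Assumptions: torchvision's COCO-pretrained Faster R-CNN stands in for a
# detector trained on marine-debris photos; helper names are hypothetical.
import torch
from PIL import Image
from PIL.ExifTags import GPSTAGS
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

GPSINFO_TAG = 34853  # standard EXIF tag ID for the GPSInfo block


def dms_to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) to signed decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -degrees if ref in ("S", "W") else degrees


def gps_from_exif(path):
    """Return (lat, lon) from a photo's EXIF GPS tags, or None if untagged."""
    exif = Image.open(path)._getexif() or {}
    raw = exif.get(GPSINFO_TAG)
    if not raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in raw.items()}
    try:
        return (dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
                dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))
    except KeyError:
        return None  # partial GPS data; treat as untagged


model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
to_tensor = transforms.ToTensor()


def geotagged_detections(photo_paths, score_threshold=0.5):
    """Yield (lat, lon, score) for each confident detection in each photo."""
    for path in photo_paths:
        coords = gps_from_exif(path)
        if coords is None:
            continue  # no GPS tag, so a detection here can't be mapped
        image = to_tensor(Image.open(path).convert("RGB"))
        with torch.no_grad():
            output = model([image])[0]  # dict with "boxes", "labels", "scores"
        for score in output["scores"].tolist():
            if score >= score_threshold:
                yield (*coords, score)
```

Aggregating those (lat, lon) points across many survey photos is what turns individual detections into a density map, which is the kind of signal that tells a cleanup crew where to deploy hardware first.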
Why it matters: As with all things in this section, to solve a problem, you must first understand its scope: “Knowing how much and what kind of plastic has accumulated in the ocean garbage patches is especially important. This knowledge determines the design of cleanup systems, the logistics of hauling plastic back to shore, the methods for recycling plastic and the costs of the cleanup.”
Protect yourself from data breaches

We live in an online world with little privacy and tons of threats. Surfshark is the best way to ensure that you’re protected.

Surfshark provides real-time credit and ID alerts, plus instant email data breach notifications and regular personal data security reports.

Protect yourself online with Surfshark. Get 80% off when you sign up today.
White House pushes AI adoption amid national security concerns

Source: Official White House Photo
President Joe Biden on Thursday issued a National Security Memorandum (NSM) on artificial intelligence. The memo is predicated on the idea that the “frontier of AI will have significant implications for national security and foreign policy in the near future.”

The details: It focuses on three major areas: ensuring that the U.S. leads in AI, harnessing AI to advance national security and advancing AI governance.

It is specifically “designed to galvanize federal government adoption of AI to advance the national security mission, including by ensuring that such adoption reflects democratic values and protects human rights, civil rights, civil liberties and privacy.” It also makes it a “top-tier intelligence priority” to ensure that U.S.-based AI companies maintain their edge over foreign competition; the government will provide these companies with cybersecurity and counterintelligence information.
The NSM builds upon Biden’s AI executive order, which was unveiled last year.
Time is Running Out to Own a Piece of the Next Disney-Level Disruption!

The clock is ticking: only 5 days left to invest in Elf Labs before the window closes on October 30th. Don’t miss your chance to be part of history in the $2 trillion global entertainment + licensing industry.

Elf Labs has already achieved 100+ historic trademark victories, securing the rights to some of the most iconic characters in history, including Cinderella and Rapunzel. Now, they’re taking these timeless stories into the future, using revolutionary patented VR technology and AI-powered toys to bring them to life like never before.

This opportunity is too big to ignore. With the entertainment and licensing industry primed for disruption, Elf Labs is positioned to be the next major player. Imagine being able to say you invested before they exploded onto the scene.

The last chance to invest is almost here. Don’t let this opportunity slip away.

Invest in Elf Labs today, before it’s too late.
Elon Musk floats tackling robotaxi regulations in potential government role (Bloomberg).
American creating deepfakes of Harris works with Russian intelligence (Washington Post).
TSMC’s Arizona chip production yields surpass Taiwan’s in win for US (Bloomberg).
The Information’s 50 most popular startups for 2024 (The Information).
Nvidia supplier SK Hynix posts record quarterly profit as AI boom drives demand (CNBC).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Hiring the best machine learning engineer just got 70% cheaper. *

Today we are highlighting AI talent for you, courtesy of our partner, Athyna. If you are looking for the best bespoke tech talent, these stars are ready to work with you today! Reach out here if we can make an introduction, and get a $1,000 discount because you’re a reader of The Deep View!
Ex-OpenAI researcher says company violated copyright law

Source: Created with AI by The Deep View
After spending close to four years as a researcher at OpenAI, including more than a year working on ChatGPT, Suchir Balaji started thinking harder about the impact of the company’s technology.

He came to the conclusion that OpenAI’s widespread scraping of the bulk of internet content wasn’t right, telling the New York Times: “this is not a sustainable model for the internet ecosystem as a whole … If you believe what I believe, you have to just leave the company.”

Balaji believes that, contrary to OpenAI’s position, ‘fair use’ is a “pretty implausible defense” for most generative AI services. Fair use is a specific section of copyright law that protects certain uses of copyrighted material; the U.S. Copyright Office has not yet weighed in on whether the training of AI models constitutes fair use. It’s a question that a number of lawsuits are aiming to answer. He wrote that, “for the basic reason that they can create substitutes that compete with the data they're trained on,” the training of generative AI models does not seem to be protected by fair use.
Legal experts and academic studies have come to the same conclusion; what makes Balaji’s stance notable is that he came from OpenAI, and left for this reason.
The OpenAI exodus continues; the ‘world is not ready’ for AGI

Source: Sam Altman
OpenAI, a company explicitly attempting to build, deploy and monetize human-level artificial intelligence, or artificial general intelligence (AGI), has become something of a hot seat in recent months.

This week, Miles Brundage, an OpenAI researcher and senior advisor to the company’s AGI Readiness team, became the latest in a long line of researchers to resign. Brundage had worked as a policy researcher at the company for six years.

He said that he’s leaving the company to better “impact and influence AI's development from outside the industry.”

Toward the end of Brundage’s lengthy farewell essay, he said that the AGI Readiness team is being “distributed among other teams,” a disbandment reminiscent of the Superalignment team’s dissolution several months ago.

He said that the “opportunity cost” of remaining at OpenAI had become “very high.” Explaining that OpenAI’s high-profile status was partly to blame, Brundage said that he wasn’t able to work on the research he thought was important. He also said that he’s already accomplished much of what he wanted to at OpenAI. “I’ve begun to think more explicitly about two kinds of AGI readiness — OpenAI’s readiness to steward increasingly powerful AI capabilities, and the world’s readiness to effectively manage those capabilities.”
“On the former,” he said, “I’ve already told executives and the board a fair amount about what I think OpenAI needs to do and what the gaps are, and on the latter, I think I can be more effective externally.”

When it comes to AGI readiness, Brundage claimed that “neither OpenAI nor any other frontier lab is ready, and the world is also not ready.”

Noting that he doesn’t view this as a “controversial statement among OpenAI’s leadership,” Brundage acknowledged that the term “AGI readiness” is a loaded one, adding: “when I say ‘ready for AGI,’ I am using this as shorthand for something like ‘readiness to safely, securely and beneficially develop, deploy and govern increasingly capable AI systems.’”

A few interesting points from Brundage’s essay:

Brundage said that government action is both urgent and critical (the majority of civilians agree with him on this, though the federal government is moving incredibly slowly). “I think AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, non-profits, civil society and industry, and this needs to be informed by robust public discussion.”

He noted that quantitative AI evaluation is vital, adding that “there isn’t actually a large gap between what capabilities exist in labs and what is publicly available to use.”

Brundage said that “there needs to be more debate about the big picture of how humanity will ensure AI and AGI benefit all of humanity.” He called this the “AI grand strategy,” an effort to inform action based on the real risks and real benefits of the technology.
Since the boardroom drama last year in which CEO Sam Altman was suddenly fired (and then very quickly rehired), the startup, which began as a humble nonprofit research lab, has bled a mix of founders and safety researchers. The non-profit board that fired Altman has, of course, been completely replaced.

Ilya Sutskever, a co-founder and former chief scientist at OpenAI, left in May. At the same time, Jan Leike, who co-led OpenAI’s since-disbanded Superalignment team with Sutskever, quit, saying: “safety culture and processes have taken a backseat to shiny products.” Around the same time, safety and governance researchers Daniel Kokotajlo and William Saunders quit the company, expressing concern about how OpenAI would handle AGI. In August, co-founder John Schulman and product lead Peter Deng left the company; at the same time, co-founder and President Greg Brockman announced a lengthy sabbatical.
Last month, CTO Mira Murati announced her departure, alongside chief research officer Bob McGrew and research VP Barret Zoph.

These departures notably came alongside rumors that OpenAI was planning to restructure into a for-profit organization. Now, of course, we know that said restructuring is imminent; as a term of its recent $6.6 billion fundraise (at a $157 billion valuation), OpenAI has guaranteed investors it will transition into a for-profit. If it doesn’t do so within two years, investors can ask for their money back.

Former OpenAI researcher Carroll Wainwright said at the time: “The non-profit mission was a promise to do the right thing when the stakes got high. Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.”

The legitimate, hypothetical and dubious potential of AGI is one thing; the clear and present harms posed by current technology are another thing entirely. The risks of both, however, can be addressed at the same time and in the same way: thoughtful regulation that incentivizes companies to deploy AI in ways that protect and benefit people.

With generative AI becoming ever more integrated into different layers of society, OpenAI’s pivot away from safety and caution in the interest of its investors seems darkly obvious, yet another sign of just how important regulation is.

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on FRAPPE:

A third said you’d use it; a third aren’t sure. 12% said they wouldn’t use it, and 12% said maybe if it ran in the background.

Did you know that the government is interested in adopting AI? How do you feel about that?
|
|
|