Good Morning. Our Real or AI section yesterday featured a photo (taken by yours truly) of Toledo, Spain. Kudos to the reader who’s going to Toledo and found me out (you will love it!).

In our poll yesterday, a little more than half of you said you were excited about AI Overviews in Google Search; around 48% would rather Google just not. One reader doesn’t trust Google anymore. Another would prefer the option to toggle AI summaries on and off as desired.

In today’s newsletter:

🍃 OpenAI’s safety exodus
🏦 European Central Bank considers regulating the use of AI in finance
🥀 Microsoft’s carbon emissions 30% higher today than in 2020 (due to AI)
🏛️ Chuck Schumer’s AI roadmap is a ‘complete whiff’
OpenAI’s got a bit of an exodus on its hands

Ilya Sutskever, former OpenAI chief scientist (OpenAI).
Ilya Sutskever, an OpenAI co-founder and its chief scientist, announced this week that he is officially departing the company. Sutskever most recently co-led OpenAI’s Superalignment team alongside Jan Leike, who announced his own resignation mere hours later.

First a trickle, then a flood:

Earlier this week, I wrote about the departure of two safety and governance researchers, Daniel Kokotajlo and William Saunders, from OpenAI. Kokotajlo “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI.” The Information recently reported that safety researchers Leopold Aschenbrenner and Pavel Izmailov had been fired for allegedly leaking information; OpenAI’s VP of people, Diane Yoon, and its head of nonprofit & strategic initiatives, Chris Clarke, also left the company in early May.
Jan Leike @janleike
“I resigned”
May 15, 2024
You can read some of Leike’s musings on AI safety and alignment here.

Are you concerned about OpenAI’s loss of Jan & Ilya?

European Central Bank considers regulating the use of AI in finance

European Central Bank (Unsplash).
Though it might be moving a bit more slowly than other industries, parts of the financial industry, in everything from AI platforms to robo-advisors, have lately been adopting AI technology.

In a post Wednesday, the European Central Bank acknowledged the growing interest in AI and the ways the technology could improve the financial and banking sectors. But it also noted that greater integration of AI in the financial sector poses a number of risks (that might call for a dash of regulation).

The gist: Misuse of AI models “could have an impact on public trust in financial intermediation, which is a cornerstone of financial stability … The implementation of AI across the financial system needs to be closely monitored as the technology evolves. Additionally, regulatory initiatives may need to be considered if market failures become apparent that cannot be tackled by the current prudential framework.”
The details: The bank cited known issues of bias, hallucination and explainability, which it said make LLMs “less robust,” adding that there is a clear potential for “misuse or overreliance.” The bank said that if financial institutions allow AI models to make decisions, it could result in economic losses and “disorderly” market moves.
Together with Enquire Pro

Enquire PRO is designed for entrepreneurs and consultants who want to make better-informed decisions, faster, by leveraging AI. Think of us as the best parts of LinkedIn Premium and ChatGPT. We built a network of 20,000 vetted business leaders, then used AI to connect them for matchmaking and insight gathering.
Our AI co-pilot, Ayda, is trained on their insights, and can deliver a detailed, nuanced brief in seconds. When deeper context is needed, use a NetworkPulse to ask the network, or browse for the right clients, collaborators, and research partners.
Right now, Deep View readers can get Enquire PRO for just $49 for 12 months, our best offer yet. Click the link, sign up for the annual plan, and use code DISCOUNT49 at checkout for the AI co-pilot you deserve.

Microsoft’s carbon emissions 30% higher today than in 2020 (due to AI)

Image source: Unsplash
One of the oft-repeated promises of AI is simple: it will help us “solve” climate change. The reality of the past year, however, has been less magical; both training and running AI have proven to demand exorbitant amounts of energy, spiking data center emissions and water consumption alike.

In 2020, Microsoft pledged to be carbon-negative by 2030. Since then, its emissions have increased by about 30%, according to its recent environmental report.
Brad Smith, president of Microsoft, told Bloomberg that the culprit is AI.

Microsoft said in the report that increasing investments in AI will do more for the climate than scaling things back.

AI researchers I’ve spoken with have said that, although AI can be leveraged to help combat climate change, that work must be more narrowly focused through small models (which emit less). The current race among Big Tech companies to outdo one another with genAI-enhanced products isn’t helping the environment by any stretch of the imagination; it’s just hurting it.
💰 AI Jobs Board:

Artificial Intelligence Engineer: The CRM Corporation · United States · Remote · Full-time · (Apply here)
Senior Research Scientist: Google · United States · Multiple Locations · Full-time · (Apply here)
Generative AI Data Scientist: Deepwatch · United States · Remote · Full-time · (Apply here)
🗺️ Events:*

Ai4, the world’s largest gathering of artificial intelligence leaders in business, is coming to Las Vegas, August 12-14, 2024.
🌎 The Broad View:

U.S. efforts to curb China’s access to AI tech are working, and Chinese AI firms are falling behind (Semafor).
Singaporean writers are reacting fiercely to their government’s request to allow LLMs to be trained on their work (Rest of World).
Google CEO Sundar Pichai told CNBC that the company would “sort it out” if OpenAI did indeed train its Sora model on YouTube data (CNBC).
*Indicates a sponsored link
Together with Bright Data

Unlock the web and collect web data at scale with Bright Data, your ultimate web scraping partner. Access complex sites built with the most advanced JavaScript frameworks using our robust web scraping solutions.

Award-Winning Proxy Network: bypass IP blocks & CAPTCHAs
Advanced Web Scrapers: from low-code environments to high-level programming
Ready-Made and Custom Datasets: access extensive datasets directly or request customized collections
AI and LLM Ready: equip your AI and language models with the diverse data they require
Experience the Unstoppable – Try Bright Data Today
Chuck Schumer’s AI roadmap is a ‘complete whiff’

Created with AI by The Deep View.
After hosting a series of nine forums on artificial intelligence (notably attended by the biggest tech executives of the moment), the Senate’s AI Working Group, led by Majority Leader Chuck Schumer, on Wednesday published its roadmap for AI-related policy. You can read the report here.

The report’s priorities, in brief:

Boosting AI innovation
Addressing AI and the workforce
Addressing safety and transparency in AI
Safeguarding elections from AI
Increasing national security with AI

The report makes no actual legislative proposals.
The reaction:

Dr. Suresh Venkatasubramanian, an AI researcher who co-authored the White House’s AI Bill of Rights, said the report has “strong ‘I had to turn in a final report to get a passing grade so I won't think about the issues and will just copy old stuff and recycle it’ vibes.”

And Dr. Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, told Fast Company that the report is “striking for its lack of vision.”

“The Senate roadmap doesn’t point us toward a future in which patients, workers and communities are protected from the current and future excesses of the use of AI tools and systems,” she said. “What it does point to is government spending, not on accountability, but on relieving the private sector of their burdens of accountability.”

My thoughts:

The word “consider” appears 24 times in the 21-page report. The report feels too vague. This was a great opportunity to take a strong stance on AI governance, which could have informed comprehensive regulation and, in turn, supercharged an economy of responsible AI. That might still happen, but this report does not inspire confidence.
Image 1

Which image is real?

Image 2
Parea.ai: A developer toolkit for monitoring & debugging LLMs.
Canyon: An all-in-one platform for job-seekers that helps you perfect your resume and track your applications.
Workflow: A simple workflow-management platform designed for visual work and powered by AI.
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link
SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.
One last thing 👇

Rim Turkmani @Rim_Turkmani
“So refreshing to listen to Gary Marcus @GaryMarcus bringing values, human rights and dignity to the discussion on AI in a very powerful and inspiring talk at Starmus today @StarmusFestival”
May 14, 2024
That’s a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today’s email?

We appreciate your continued support! We’ll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View