TheSequence - The Controversial AI Moratorium Letter
Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
📝 Editorial: The Controversial AI Moratorium Letter

Last week, the AI community found itself divided by a controversial letter during a crucial phase of innovation. Over 1,400 leaders and researchers in the industry signed an open letter urging all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months. The letter was published by the Future of Life Institute, a non-profit based in Pennsylvania. It warns that contemporary AI systems are now competing with humans at general tasks and that advanced AI risks flooding the media with false information and automating away many jobs. The signatories requested that the pause be public, verifiable, and include all key actors, and called on governments to institute a moratorium if the pause could not be enacted quickly.

Emad Mostaque, CEO of Stability AI, signed the letter but later tweeted that he didn't think a six-month pause was the best idea. On the other hand, Yann LeCun, chief AI scientist at Meta, declined to sign because he disagreed with its premise, although he later deleted his tweet. David Deutsch, a visiting professor of physics at the Centre for Quantum Computation at the University of Oxford, also declined to sign, stating that the letter read like a suggestion to stop developing anything whose effects we can't prophesy, which would give totalitarian governments and organized criminals a chance to catch up.

The argument posed by the letter is a divisive one. One side sees foundation AI models as something like nuclear weapons that could unleash massive harm. The other side believes that an AI moratorium would be the equivalent of the restrictions once imposed on technologies like the printing press, the telegraph, or electricity, technologies that enabled massive leaps forward, created enormous wealth, and improved the quality of life for generations.

At TheSequence, we avoid getting involved in controversial debates that we believe do not directly contribute to the progress of AI. Today, I will make an exception given the level of debate caused by the letter. In my opinion, a moratorium on AI development is not only a bad idea but also an impractical one. The risks posed by large AI models are real, and a thoughtful path toward regulation and safety controls is certainly needed. But the research behind many of these models is publicly available, and many implementations are open source without any constraints. Authoritarian governments, terrorist organizations, and bad actors already have access to this technology, whether we like it or not. The only way to mitigate the risk is to continue advancing research and improving the alignment and safety of these models. Foundation AI models represent the biggest technological breakthrough in many generations and, as such, should not be restricted but carefully nurtured to align with the "better angels of our nature". In Meditations, published after his death, Marcus Aurelius wrote: "The mind adapts and converts to its purposes the obstacle to our acting. The impediment to action advances action. What stands in the way becomes the way." The most popular paraphrase of this passage certainly applies to the current state of foundation AI models: "The obstacle is the way."

🔎 ML Research

PRESTO
Google Research published a paper and open sourced a version of PRESTO, a dataset for task-oriented dialogues. PRESTO is based on over 550,000 conversations between users and virtual assistants —> Read more.
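For readers who want a concrete picture of what task-oriented dialogue data looks like, here is a minimal, purely illustrative sketch. The field names and example records below are our own assumptions for illustration, not PRESTO's actual schema.

```python
# Purely illustrative sketch of task-oriented dialogue records of the kind
# PRESTO targets. The fields (utterance, intent, context) and the examples
# are hypothetical, not the dataset's real schema.
from collections import Counter

dialogues = [
    {"utterance": "Add milk to my shopping list", "intent": "lists.add_item",
     "context": ["shopping list is open"]},
    {"utterance": "Remind me to call mom at 5 pm", "intent": "reminders.create",
     "context": []},
    {"utterance": "No, make that 6 pm", "intent": "reminders.update",
     "context": ["previous turn created a reminder"]},
]

# Simple exploratory pass: which assistant intents appear most often?
intent_counts = Counter(d["intent"] for d in dialogues)
print(intent_counts.most_common())
```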
Reflexion
Researchers from MIT and Northeastern University published a paper presenting Reflexion, a technique for identifying mistakes in LLMs. Reflexion simulates human self-reflection by asking the model to find possible mistakes in its own outputs and to optimize the corresponding prompts —> Read more.

ART
Researchers from the University of Washington, Microsoft, Meta AI, the Allen Institute for AI and the University of California, Irvine published a paper detailing ART, a tool that uses frozen LLMs to generate intermediate reasoning steps. ART uses a few-shot technique to decompose a task into multi-step micro-tasks that simulate reasoning —> Read more.

22 Billion Parameter Vision Transformer
Google Research published a paper detailing a dense vision transformer with 22 billion parameters. This is a significant size improvement over previous architectures, which averaged low single-digit billions of parameters —> Read more.

Robots that Learn from Videos
Meta AI published two papers about techniques that allow embodied agents to learn from videos. One of the papers proposes a technique called VC-1 that masters sensorimotor skills from videos. The other details a method called ASC for object manipulation —> Read more.

📌 EVENT: Join us at LLM in Production conference – the first of its kind

How can you actually use LLMs in production? There are still so many open questions: cost, latency, trust. How are the best teams navigating them? The MLOps community decided to create the first free virtual conference to go deep into these unknowns. Come hear technical talks from over 30 speakers working at companies like Notion, You.com, Adept.ai, and Intercom. You will also get the opportunity to join workshops that will teach you how to set up your use cases and skip over all the headaches.

🤖 Cool AI Tech Releases

TensorFlow and Keras 2.12
The new releases of TensorFlow and Keras are now available with interesting features, including a new model exporting format (see the short code sketch at the end of this issue) —> Read more.

Kubeflow 1.7
The new release of Kubeflow is now available —> Read more.

Dolly
Databricks open sourced Dolly, a ChatGPT-like model that can follow instructions —> Read more.

🛠 Real World ML

Airbnb discusses its ML + human-in-the-loop architecture for its Categories platform —> Read more.

📡 AI Radar
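As a quick illustration of the new exporting path mentioned in the TensorFlow and Keras 2.12 item above, here is a minimal sketch, assuming the model.export() API and the default serve endpoint described in the 2.12 release notes; the toy two-layer model is ours and purely for illustration.

```python
# Minimal sketch of the Keras 2.12 SavedModel export path (assumed from the
# 2.12 release notes). The toy model below is purely illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Write an inference-only SavedModel artifact to disk.
model.export("exported_model")

# Reload the artifact the way a serving system would (no Keras metadata needed)
# and call the default "serve" endpoint on a random batch.
reloaded = tf.saved_model.load("exported_model")
print(reloaded.serve(tf.random.normal((2, 8))))
```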