TheSequence - The Llama 2 Effect
Was this email forwarded to you? Sign up here.

The Llama 2 Effect

Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases, and VC funding deals in the artificial intelligence space.

Next Week in The Sequence:
Go Subscribe!

📝 Editorial: The Llama 2 Effect

The debate between open-source and closed-source foundation models has become more interesting than ever, and the open-source space has found an unlikely champion: Meta. The "accidental leak" of the weights of the original Llama model sparked a tremendous wave of innovation in open-source foundation models, triggering the creation of models such as Vicuna, Koala, Red Pajama, MPT, Alpaca, Gorilla, and many others.

Last week, Meta announced the open-source release and commercial availability of Llama 2, along with a distribution partnership with none other than Microsoft. Llama 2 was trained on a dataset over 40% larger than its predecessor's, using 2 trillion pretraining tokens. The model was released in three main versions with 7B, 13B, and 70B parameters, respectively. Another solid improvement was the use of reinforcement learning from human feedback (RLHF) with proximal policy optimization (PPO) to improve the helpfulness of the responses. The model was evaluated across many LLM benchmarks and performed very strongly relative to the recent generation of open-source LLMs.

And then there is the partnership with Microsoft. As part of their strategic alliance, Microsoft announced support for Llama 2 on Azure and Windows. The Azure support includes the ability to deploy and fine-tune all versions of Llama 2 from the Azure AI Model Catalog, while the Windows support enables local execution of Llama 2 models using DirectML. Beyond the initial set of capabilities, Microsoft's endorsement of Llama 2 represents a strong validation of the viability of open-source foundation models. Together with Databricks' acquisition of MosaicML and the recent funding rounds by companies like Stability AI, this event signals to the market that open-source foundation models are a force to be reckoned with.

The Llama effect was about unlocking innovation in the open-source LLM space.
The Llama 2 effect is about robustness and commercial readiness at the highest level.

💡 Report: State of Applied Machine Learning 2023

We surveyed over 1,700 ML practitioners for this inaugural report on the state of applied machine learning. It provides a comprehensive overview of applied ML and shares the challenges and opportunities in the space, along with common trends across a diverse set of ML initiatives. Download the full report for key findings, recommendations, and a deeper dive into the trends that will shape the future of applied ML!

🔎 ML Research

CM3leon
Meta AI Research published a paper introducing CM3leon, a text-to-image and image-to-text foundation model. CM3leon was trained with a large-scale retrieval-augmented pre-training stage followed by a multitask supervised fine-tuning (SFT) stage, and achieves state-of-the-art results in both modalities —> Read more.

Diffusion Model Fine-Tuning with RL
Researchers from the Berkeley AI Research (BAIR) lab published a paper detailing a reinforcement learning method for fine-tuning diffusion models. The method fine-tunes Stable Diffusion on different objectives such as image compressibility, human-perceived aesthetic quality, and prompt-image alignment —> Read more.

SimPer
Google Research published a paper detailing SimPer, a self-supervised model for periodic data. SimPer uses contrastive learning to learn the temporal properties of periodic targets —> Read more.

Consistent Reasoning in LLMs
Amazon Science published a paper outlining a new chain-of-thought reasoning method for LLMs. The core idea is a teacher-student setup that leverages knowledge distillation over question-answer pairs to improve the reasoning chain —> Read more.

FlashAttention-2
Researchers from Stanford University and Princeton published a paper introducing FlashAttention-2, an IO-aware attention mechanism.
FlashAttention-2 builds on its predecessor with several optimizations that reduce FLOPs and parallelize attention computations —> Read more.

🤖 Cool AI Tech Releases

Llama 2
Meta AI released Llama 2, the next version of its marquee LLM, now with commercial support —> Read more.

ChatGPT Custom Instructions
OpenAI released ChatGPT Custom Instructions, which let users set preferences that ChatGPT should consider when producing outputs —> Read more.

MPT-7B-8k
MosaicML unveiled MPT-7B-8k, a new LLM with an 8k context window —> Read more.

🛠 Real World ML

Prompt Engineering at GitHub
The GitHub engineering team discusses prompt engineering best practices —> Read more.

Time Series Analysis at Pinterest
The Pinterest engineering team shares details about its architecture and techniques for time series analysis —> Read more.

📡 AI Radar
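For readers who want to try the newly released chat-tuned Llama 2 models locally, it helps to know that the RLHF-tuned chat variants expect their instructions wrapped in specific tags. As a minimal sketch, assuming the `[INST]`/`<<SYS>>` tag layout from Meta's published reference code (the `build_prompt` helper here is hypothetical, not part of any official library):

```python
# Sketch of the single-turn prompt layout expected by the Llama 2 chat
# models. The tags mirror Meta's reference code; build_prompt is an
# illustrative helper, not an official API.

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user message in Llama 2 chat tags."""
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

prompt = build_prompt("You are a concise assistant.", "What is RLHF?")
print(prompt)
```

The resulting string is what you would feed to the tokenizer of a `Llama-2-*-chat` checkpoint; the base (non-chat) models take plain text and need no such wrapping.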
You’re on the free list for TheSequence Scope and TheSequence Chat. For the full experience, become a paying subscriber to TheSequence Edge. Trusted by thousands of subscribers from the leading AI labs and universities.
Older messages
How OpenAI Uses GPT-4 to Interpret the Functions of Neurons in Other Language Models
Thursday, July 20, 2023
A new interpretability method based on GPT-4 can derive explanations about specific neurons in LLMs.
Luca Beurer-Kellner: ETH Zürich, Creator, Language Model Query Language
Wednesday, July 19, 2023
LMQL, language model programming and the future of LLMs.
Edge 309: What is Active Prompting?
Tuesday, July 18, 2023
Understanding one of the most effective techniques to improve the effectiveness of prompts in LLM applications.
The Sequence Chat: Emmanuel Turlay – CEO, Sematic
Sunday, July 16, 2023
Model orchestration, Airflow limitations in ML, and new ideas about MLOps.
Meet LMQL: An Open Source Query Language for LLMs
Sunday, July 16, 2023
Developed by ETH Zürich, the language explores new paradigms for LLM programming.