1/12/2024
9:14
Meet Ghostbuster: An AI Technique for Detecting LLM-Generated Content
Created by researchers at UC Berkeley, the new method analyzes token probability distributions to estimate how likely a document is to contain AI-generated text.
1/9/2024
12:14
Edge 359: Understanding Tree-Of-Thoughts in LLM Reasoning
A variation of chain-of-thought for evaluating different reasoning paths.
1/7/2024
12:14
The Transformer Robots are Here, Just a Different Kind
An impressive week in robotics models from both DeepMind and Stanford University, and much more...
1/4/2024
12:04
Edge 358: Inside AGENTS: An Open Source Framework for Autonomous Language Agents
The framework includes the core building blocks for applications based on autonomous agents.
1/2/2024
12:14
Edge 357: Understanding Chain-of-Thought Prompting
A deep dive into the most popular LLM reasoning technique.
12/31/2023
12:14
My Five Favorite AI Papers of 2023
LLM interpretability, small language models, autonomous agents, API fine-tuning, discovering new algorithms
12/28/2023
12:14
Inside Orca 2: Microsoft's Small Language Model that Outperforms Models 10x Larger in Reasoning Capabilities
The model innovates in its training procedure to improve reasoning abilities in small language models.
12/26/2023
12:14
Edge 355: A Taxonomy to Understand LLM Reasoning Methods
Not all LLM reasoning methods are created equal. Here are the main categories to understand the different types of LLM reasoning techniques.
12/24/2023
12:14
Apple GPT is Coming!
A new research breakthrough outlines the path to running LLMs on iPhones and iPads.
12/21/2023
12:24
Inside Mixtral 8x7B: One of the Most Exciting Open Source LLM Releases of the Year
The model follows Mistral 7B with an innovative mixture-of-experts architecture that deviates a bit from monolithic transformer models.
12/19/2023
12:14
Edge 353: A New Series About Reasoning in Foundation Models
We dive into the most important research and technology frameworks in the LLM reasoning space.
12/17/2023
12:14
Four Releases from Google DeepMind in a Single Week!
An impressive week by Google DeepMind, plus a summary of the top research papers, tech releases and news in the AI space.
12/15/2023
13:55
The Sequence Chat: Hugging Face's Lewis Tunstall on Zephyr, RLHF and LLM Innovation
One of the creators of Zephyr discusses ideas and lessons learned building LLMs at scale.
12/15/2023
13:44
Edge 352: Inside the Embeddings Architecture Powering Job Recommendations at LinkedIn
Some insights about one of the largest embedding architectures ever built.
12/15/2023
13:26
💡 Discover key GenAI trends from the annual ML Insider report
Remember participating in the ML Insider Survey? Now it's time to get your copy of the ML Insider 2023 Report! Discover insights on the state of machine learning and generative AI, and find out
12/12/2023
12:04
Edge 351: A Summary of Our Series About Fine-Tuning in Foundation Models
This series explored PEFT, LoRA, QLoRA, RLHF, RLAIF, Constitutional AI and many more of the top fine-tuning methods for foundation model apps.
12/11/2023
15:04
📝 Guest Post: Do We Still Need Vector Databases for RAG with OpenAI's Built-In Retrieval?
In this guest post, Jael Gu, an algorithm engineer at Zilliz, will delve into the constraints of OpenAI's built-in retrieval and walk you through creating a customized retriever using Milvus, an
12/10/2023
12:14
Gemini and Mistral MoE: Both Impactful Although Very Different Releases
Next Week in The Sequence: Edge 351: Presents a detailed summary of our series about fine-tuning in foundation models. Edge 352: Will dive into LinkedIn's embedding architecture that powers its
12/8/2023
12:44
📝 Guest Post: How to Maximize LLM Performance
In this post, Jordan Burgess, co-founder and Chief Product Officer at Humanloop, discusses the techniques for going from an initial demo to a robust production-ready application and explains how tools
12/7/2023
12:14
Meet Zephyr: How Hugging Face's Instruction Fine Tuned LLM Outperformed Models 10 Times Its Size
A fine-tuned version of Mistral, Zephyr applied some very clever techniques that led it to outperform LLaMA 70B and other much larger models.