Edge 392: Meet RAFT: UC Berkeley's New Method to Improve RAG Patterns in LLMs
The method brings the best of RAG and supervised fine-tuning.

Pretraining Large Language Models (LLMs) on massive text datasets has become the norm. When these LLMs are applied to specific tasks, it is often necessary to integrate additional information, such as the latest news or specialized knowledge, into the already trained model. This can be achieved either by prompting the model with the new data or by fine-tuning it. Yet the best way to incorporate new knowledge into these models is still under debate. A recent paper from UC Berkeley proposes RAFT, a new technique that addresses precisely this issue.

One of the key challenges in enhancing LLMs with new information is how to adapt them for Retrieval-Augmented Generation (RAG) in specialized domains. The two main strategies are in-context learning through RAG and supervised fine-tuning. RAG lets an LLM consult external documents for answers, but it does not fully exploit the learning opportunity offered by a fixed domain setting or by documents that are available in advance. Supervised fine-tuning, on the other hand, learns broader patterns from the documents, which can improve task performance and alignment with user needs; however, it may fail to make use of documents at test time or to cope with errors in document retrieval...
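RAFT's training setup, per the UC Berkeley paper, fine-tunes the model on questions paired with a mix of an "oracle" document (the one containing the answer) and distractor documents, deliberately omitting the oracle from some fraction of examples so the model learns the domain rather than leaning entirely on retrieval. A minimal, illustrative sketch of that data construction follows; the function name and the `p_oracle` parameter are assumptions for illustration, not names from the paper:

```python
import random

def make_raft_example(question, oracle_doc, distractor_pool,
                      num_distractors=3, p_oracle=0.8):
    """Build one RAFT-style training example (illustrative sketch).

    With probability p_oracle the context contains the oracle document
    (which holds the answer) plus distractors; otherwise it contains
    distractors only, which pushes the model to internalize domain
    knowledge instead of relying solely on the retrieved context.
    """
    distractors = random.sample(distractor_pool, num_distractors)
    if random.random() < p_oracle:
        context = distractors + [oracle_doc]
    else:
        # Oracle withheld: pad with one more distractor instead.
        context = distractors + [random.choice(distractor_pool)]
    random.shuffle(context)
    prompt = "\n\n".join(f"[Document {i + 1}]\n{d}"
                         for i, d in enumerate(context))
    return {"prompt": f"{prompt}\n\nQuestion: {question}",
            "context": context}
```

Each example would then be paired with a chain-of-thought answer grounded in the oracle document as the fine-tuning target.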