📝 Guest Post: Meet LoRAX: The Open Source System that Serves 1000s of Fine-Tuned LLMs on a Single GPU*
In this guest post, Travis Addair, CTO and Co-founder of Predibase, introduces LoRAX, their open-sourced solution to the challenges of serving fine-tuned LLMs. He provides an in-depth exploration of LoRAX's inner workings and explains how you can start using LoRAX in your own projects.

Building with large language models (LLMs) is at the top of every developer's to-do list, and teams that have progressed beyond simple experimentation are quickly realizing that smaller open-source LLMs like LLaMA-2 can outperform costly general-purpose commercial models like GPT-4 when fine-tuned for a specific task. But even though these fine-tuned models are relatively small compared to GPT-4, existing LLM inference systems require each one to be hosted on its own dedicated GPU hardware. That can quickly add up to tens of thousands of dollars per month in cloud costs for just a handful of fine-tuned models. In contrast, one of the most popular commercial LLM APIs – OpenAI's gpt-3.5-turbo – charges just $6 per million tokens for fine-tuned models. The future is faster, cheaper, fine-tuned open-source models, but to get there, the cost of serving such models must become competitive with commercial APIs.

LoRAX (or LoRA eXchange) was created by Predibase to eliminate the cost barrier to serving fine-tuned models. Unlike existing LLM inference solutions, LoRAX is optimized for productionizing many fine-tuned models with a single set of GPU resources. Leveraging state-of-the-art optimizations from the AI research community, LoRAX allows users to pack upwards of 1,000 fine-tuned LLMs into a single deployment, dramatically reducing serving costs. LoRAX is open source and free to use commercially under the Apache 2.0 license. It comes batteries-included with pre-built Docker images, Helm charts for deploying on Kubernetes, and numerous optimizations, including continuous batching, Paged Attention v2, Flash Attention v2, SGMV multi-adapter fusion, asynchronous adapter prefetching and offloading, and support for quantization techniques such as bitsandbytes and GPT-Q.

Fine-Tuning and Serving LLMs with LoRA

The conventional approach to fine-tuning a deep neural network is to update all of the model's parameters as a continuation of the training process. For LLMs with billions of parameters, this requires a massive amount of GPU memory (each trainable parameter carries roughly 4x additional memory overhead during training for gradients and optimizer state) and wastes storage (tens of gigabytes per model checkpoint). To make fine-tuning less resource-hungry, parameter-efficient fine-tuning techniques like Low Rank Adaptation (LoRA) introduce adapters: a small set of new parameters that are trained while the original model parameters remain frozen. LoRA achieves performance comparable to full fine-tuning with much less overhead.

At serving time, both the original model parameters and the new adapter parameters are loaded together as a single deployment. Treating the base model and the adapter as a single deployable unit makes sense when you only have one fine-tuned model. But as soon as you deploy a second fine-tuned model built on the same base model, the problem becomes clear: the majority of the GPU resources are spent serving additional copies of the same base model parameters for every fine-tuned model!
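To make the split between frozen base weights and trainable adapter weights concrete, here is a minimal sketch using the Hugging Face peft library; the model name and LoRA hyperparameters are illustrative placeholders, not recommendations from the LoRAX team.

# Minimal LoRA fine-tuning sketch with Hugging Face peft.
# The base model and hyperparameters below are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling applied to the update
    target_modules=["q_proj", "v_proj"],   # which base layers receive adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
# Only the adapter parameters are trainable; the base weights stay frozen.
model.print_trainable_parameters()

Running print_trainable_parameters() on a configuration like this makes the imbalance between the base model and the adapter obvious.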
The part of the deployment that is unique to the fine-tuned model – the adapter weights – accounts for less than 1% of the total parameters, meaning that in most cases many of these adapters could fit together within a single GPU's memory. This raises the question: what if we could pack multiple fine-tuned models into a single deployment by reusing the common base model parameters?

Introducing LoRA eXchange (LoRAX)

LoRA eXchange (LoRAX) is a new approach to LLM serving infrastructure designed specifically for serving many fine-tuned models at once on a shared set of GPU resources. Compared with a conventional dedicated LLM deployment, LoRAX consists of three novel components: dynamic adapter loading, which fetches adapter weights just-in-time as requests arrive; tiered weight caching, which exchanges adapters between GPU, CPU, and disk; and continuous multi-adapter batching, which schedules requests for many adapters into the same decoding batch.
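Before walking through how a request flows through these components, the shared-base-model idea itself can be illustrated with a purely conceptual sketch (this is not LoRAX's actual implementation): the base weights are loaded once, each fine-tuned model contributes only its small low-rank factors, and each request selects an adapter by ID.

# Conceptual sketch of one shared base weight plus many tiny LoRA adapters.
# Illustration only; the adapter names are made up and the real system is
# far more sophisticated.
import torch

d, r = 4096, 16                            # hidden size and LoRA rank (illustrative)
W_base = torch.randn(d, d)                 # shared base weight, loaded once

# Each fine-tuned model adds only its low-rank factors B (d x r) and A (r x d).
adapters = {
    "customer-support": (torch.randn(d, r), torch.randn(r, d)),
    "sql-generation":   (torch.randn(d, r), torch.randn(r, d)),
}

def forward(x: torch.Tensor, adapter_id: str) -> torch.Tensor:
    B, A = adapters[adapter_id]
    # The base projection is shared by every request; only the low-rank
    # correction B @ A differs per fine-tuned model.
    return x @ W_base.T + x @ (B @ A).T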
When a request is sent to LoRAX for inference, the corresponding adapter immediately begins downloading in the background, and its weights are loaded into host (CPU) memory. Once the scheduler decides the adapter is eligible to begin serving its requests, the weights are prefetched onto the GPU; once they have been fully loaded, the scheduler starts incorporating requests for the newly loaded adapter into the continuously updated batch sent for decoding.

Once an adapter's requests are in the batch being decoded, LoRAX ensures that only the single associated adapter is applied to each row of the batch, using a technique developed by researchers at the University of Washington and Duke University called Segmented Gather Matrix Vector multiplication (SGMV). Using SGMV, we observe that even with 128 concurrent adapters per batch, LoRAX maintains near-constant throughput and latency scaling. In cases where LoRA ranks differ between rows within a batch, LoRAX falls back to a simpler loop-based approach that applies a mask to the output of each adapter, zeroing its contribution to rows associated with a different adapter (a sketch of this fallback appears at the end of this section). We compare this worst-case throughput against a baseline we call the "break-even threshold": the throughput scaling at which it would become more cost effective to spin up a dedicated deployment per adapter. Even at 128 concurrent adapters, all with different ranks, LoRAX's worst-case throughput sits well above the break-even threshold.

After a configurable amount of time, if other adapters are waiting to be loaded onto the GPU and processed, the scheduler begins offloading the adapter so that it can be exchanged for another. This ensures that LoRAX can scale to thousands of adapters, well beyond the number that can fit on a single GPU at once.
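To ground the loop-and-mask fallback mentioned above, here is a purely illustrative sketch; it is not the SGMV kernel or LoRAX's actual code, just the shape of the idea.

# Illustrative loop-and-mask fallback for a batch whose rows use different
# adapters (and different ranks). Each adapter's update is computed for the
# whole batch, then masked so it only contributes to its own rows.
import torch

def loop_mask_lora(x: torch.Tensor, adapter_ids: list, adapters: dict) -> torch.Tensor:
    # x: (batch, d); adapter_ids: one adapter name per row of the batch.
    out = torch.zeros_like(x)
    for name, (B, A) in adapters.items():
        update = x @ (B @ A).T                                  # (batch, d)
        mask = torch.tensor([row == name for row in adapter_ids],
                            dtype=x.dtype).unsqueeze(1)         # (batch, 1)
        out = out + update * mask                               # zero other rows
    return out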
Getting Started with LoRAX

LoRAX ships pre-built Docker images that include optimized CUDA kernels for fast GPU-accelerated inference, including Flash Attention v2, Paged Attention, and SGMV. LoRAX can be launched serving a Llama or Mistral base model with a single command:

# The GPU and shared-memory flags here follow the LoRAX README.
docker run --gpus all --shm-size 1g \
    -e PORT="8080" \
    -p 8080:8080 \
    ghcr.io/predibase/lorax:latest \
    --model-id mistralai/Mistral-7B-Instruct-v0.1
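Once the server is running, any fine-tuned model that shares this base can be queried by passing its adapter_id with the request. Here is a minimal sketch in Python; the prompt and adapter name are placeholders, and the /generate endpoint with its adapter_id parameter follows the LoRAX documentation.

# Sketch of prompting a specific fine-tuned adapter on a running LoRAX server.
# The adapter name below is a placeholder for any LoRA adapter trained on the
# same base model (e.g. one pushed to the Hugging Face Hub).
import requests

resp = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "[INST] Summarize this support ticket: ... [/INST]",
        "parameters": {
            "max_new_tokens": 64,
            "adapter_id": "your-org/your-lora-adapter",   # selects the fine-tuned model
        },
    },
)
print(resp.json()["generated_text"])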