📝 Guest Post: Local Agentic RAG with LangGraph and Llama 3*
Was this email forwarded to you? Sign up here

In this guest post, Stephen Batifol from Zilliz discusses how to build agents capable of tool-calling using LangGraph with Llama 3 and Milvus. Let's dive in.

LLM agents use planning, memory, and tools to accomplish tasks. Agents can empower Llama 3 with important new capabilities: in particular, we will show how to give Llama 3 the ability to perform a web search and call custom, user-defined functions.

Tool-calling agents in LangGraph use two nodes. An LLM node decides which tool to invoke based on the user input and outputs the tool name and tool arguments. The tool name and arguments are passed to a tool node, which calls the tool with the specified arguments and returns the result to the LLM.

Milvus Lite allows you to use Milvus locally without Docker or Kubernetes. It will store the vectors generated from the different websites we navigate to.

Introduction to Agentic RAG

Language models can't take actions themselves; they just output text. Agents are systems that use LLMs as reasoning engines to determine which actions to take and which inputs to pass them. After executing actions, the results can be fed back into the LLM to determine whether more actions are needed or whether it is okay to finish. Agents can perform actions such as searching the web, browsing your emails, or correcting RAG with self-reflection or self-grading on retrieved documents, and many more.

Setting things up
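Before wiring up real models, the two-node loop described above can be sketched in plain Python with a stubbed "LLM", so the control flow is easy to follow without any model running. The tool name, the stub's decision logic, and the example question are illustrative assumptions, not the post's actual code:

```python
# A minimal sketch of the LLM-node / tool-node loop, with a stubbed LLM.
def llm_node(state):
    """Decide which tool to call, or finish, based on the current state."""
    question = state["messages"][-1]
    if "search" in question and "tool_result" not in state:
        return {"tool": "web_search", "args": {"query": question}}
    return {"tool": None, "answer": state.get("tool_result", "no tool needed")}

def web_search(query):
    # Stand-in for a real web-search tool; a real agent would call an API here.
    return f"results for: {query}"

TOOLS = {"web_search": web_search}

def run_agent(question):
    state = {"messages": [question]}
    while True:
        decision = llm_node(state)            # LLM node picks tool name + args
        if decision["tool"] is None:          # LLM chose to finish
            return decision["answer"]
        tool = TOOLS[decision["tool"]]        # tool node executes the call
        state["tool_result"] = tool(**decision["args"])

print(run_agent("search llama 3"))  # → results for: search llama 3
```

In the real system, `llm_node` is Llama 3 emitting a structured tool call, and the tool node dispatches to the registered tools the same way this loop does.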
Using LangGraph and Milvus

We use LangGraph to build a custom, local, Llama 3-powered RAG agent that uses different approaches, each implemented as a control flow in LangGraph.
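One such control flow is routing: deciding whether a question should be answered from the Milvus vector store or sent to web search. In LangGraph this would be a conditional edge; the sketch below uses a plain function with a keyword heuristic as an illustrative stand-in for the LLM router (the topic list is an assumption):

```python
# Sketch of a router node: picks the next node for a question.
def route_question(question, indexed_topics=("agents", "llama", "milvus")):
    """Return the name of the next graph node for this question."""
    if any(topic in question.lower() for topic in indexed_topics):
        return "vectorstore"   # answerable from documents already indexed
    return "web_search"        # fall back to fresh web results

print(route_question("How do agents use Milvus?"))      # → vectorstore
print(route_question("What's the weather in Paris?"))   # → web_search
```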
General ideas for Agents
Examples of Agents

To showcase the capabilities of our LLM agents, let's look at two key components: the Hallucination Grader and the Answer Grader. While the full code is available at the bottom of this post, these snippets provide a better understanding of how these agents work within the LangChain framework.

Hallucination Grader

The Hallucination Grader addresses a common challenge with LLMs: hallucinations, where the model generates answers that sound plausible but lack factual grounding. This agent acts as a fact-checker, assessing whether the LLM's answer aligns with a provided set of documents retrieved from Milvus.
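The pattern can be sketched as a prompt that asks the LLM for a JSON verdict, whose "score" field is then parsed. The prompt wording and the stubbed LLM below are illustrative assumptions standing in for the post's actual grader chain and the local Llama 3 call:

```python
import json

# Illustrative grader prompt; the real post uses its own wording.
GRADER_PROMPT = """You are a grader assessing whether an answer is grounded in
a set of retrieved documents.
Documents: {documents}
Answer: {generation}
Reply with a JSON object: {{"score": "yes"}} if grounded, {{"score": "no"}} if not."""

def grade_hallucination(llm, documents, generation):
    """Ask the LLM for a verdict and parse the JSON 'score' field."""
    reply = llm(GRADER_PROMPT.format(documents=documents, generation=generation))
    return json.loads(reply)["score"] == "yes"

# Stub LLM standing in for a locally served Llama 3.
fake_llm = lambda prompt: '{"score": "yes"}'

print(grade_hallucination(fake_llm, "Milvus is a vector database.",
                          "Milvus stores vectors."))  # → True
```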
Answer Grader

Following the Hallucination Grader, another agent steps in. This agent checks another crucial aspect: ensuring the LLM's answer directly addresses the user's original question. It uses the same LLM but with a different prompt, specifically designed to evaluate the answer's relevance to the question.
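A companion sketch shows how little changes between the two graders: the same JSON-verdict pattern, but the prompt now judges relevance to the question rather than grounding in documents. Again, the prompt text and stub LLM are assumptions, not the post's exact code:

```python
import json

# Illustrative relevance prompt; only the criterion differs from the
# hallucination grader's prompt.
ANSWER_PROMPT = """You are a grader assessing whether an answer addresses the
user's question.
Question: {question}
Answer: {generation}
Reply with a JSON object: {{"score": "yes"}} or {{"score": "no"}}."""

def grade_answer(llm, question, generation):
    """Return the LLM's 'yes'/'no' relevance verdict."""
    reply = llm(ANSWER_PROMPT.format(question=question, generation=generation))
    return json.loads(reply)["score"]

fake_llm = lambda prompt: '{"score": "yes"}'  # stand-in for the local Llama 3 call
print(grade_answer(fake_llm, "What is Milvus?",
                   "Milvus is a vector database."))  # → yes
```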
You can see in the code above that we check the predictions of the LLM that we use as a classifier.

Compiling the LangGraph Graph

This compiles all the agents we defined and makes it possible to use different tools in your RAG system.
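Conceptually, compiling the graph wires the nodes and conditional edges into a single runnable object. Real LangGraph code does this with `StateGraph(...).add_node(...)`, `add_conditional_edges(...)`, and `.compile()`; the pure-Python sketch below mimics that wiring with a dict of nodes and a tiny runner, and its node names and grader stub are illustrative assumptions:

```python
def generate(state):
    """Draft an answer; in the real graph this node calls Llama 3."""
    state["generation"] = "answer to: " + state["question"]
    return "grade"                      # edge: always go to the grader next

def grade(state):
    """Conditional edge: finish if grounded, otherwise regenerate."""
    state["grounded"] = True            # stand-in for the hallucination grader
    return "end" if state["grounded"] else "generate"

NODES = {"generate": generate, "grade": grade}

def run_graph(entry, state):
    node = entry
    while node != "end":
        node = NODES[node](state)       # each node returns the next node's name
    return state

final = run_graph("generate", {"question": "What is Milvus?"})
print(final["generation"])  # → answer to: What is Milvus?
```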
Conclusion

In this blog post, we showed how to build a RAG system using agents with LangChain/LangGraph, Llama 3, and Milvus. These agents give LLMs planning, memory, and tool-use capabilities, which can lead to more robust and informative responses. Feel free to check out the code available in the Milvus Bootcamp repository. If you enjoyed this blog post, consider giving us a star on GitHub, and share your experiences with the community by joining our Discord. This post is inspired by the GitHub repository from Meta with recipes for using Llama 3.

*This post was written by Stephen Batifol and originally published on Zilliz.com here. We thank Zilliz for their insights and ongoing support of TheSequence.

You're on the free list for TheSequence Scope and TheSequence Chat. For the full experience, become a paying subscriber to TheSequence Edge. Trusted by thousands of subscribers from the leading AI labs and universities.