📝 Guest Post: Creating your first Data Labeling Agent*
In this guest post, Jimmy Whitaker, Data Scientist in Residence at HumanSignal, guides readers through building an agent with the Adala framework. He covers the integration of Large Language Model-based agents for automating data pipelines, particularly for tasks like data labeling. The article details the process of setting up an environment, implementing an agent, and the iterative learning approach that improves the agent's efficiency at data categorization. This approach combines human expertise with AI scalability, making these agents more effective in precise data tasks.

LLM-based agents have remarkable problem-solving capabilities, leading to a surge in their application across industries. They can adapt their instructions to accomplish tasks, all through human-generated prompts. Unsurprisingly, channeling these capabilities reliably is becoming a crucial task. Adala is a framework for creating LLM-based agents to automate data pipelines, including tasks like data labeling and data generation. Its primary function is to provide these agents with guided learning opportunities, enabling them to act within the confines of a ground truth dataset and learn through dynamic feedback. The concept behind Adala is to combine human precision and expertise with AI model scalability. By doing so, these agents become more efficient in tasks where accurate data categorization is paramount.

This article aims to guide you through building your first data labeling agent using the Adala framework. The focus will be on understanding the underlying principles of these agents, setting up the necessary environment, and implementing a simple yet effective agent capable of classifying data based on a provided ground truth dataset. Along the way, you will gain insight into both the technical aspects of creating such an agent and the practical applications and benefits it offers.

Getting started with Adala

In this example, we will work through the Classification Skill Example notebook provided by Adala, using the pre-built classification skill. We aim to develop an agent that aids in data labeling for text classification, specifically categorizing product descriptions. Adala agents are autonomous and have the novel ability to teach themselves, acquiring skills through iterative learning. As their environment evolves, agents continuously refine these skills. The agent will teach itself by comparing its predictions to the ground truth dataset, using trial and error to refine its labeling instructions.

In many cases, having an LLM perform these tasks directly can be sufficient. However, relying solely on LLMs comes at a high operational cost. Curating a dataset to distill this prior knowledge into a simpler model is more cost-effective over time.

Creating an Initial Dataset

We begin by creating an initial dataset for the agent to learn from. To show the learning process, we'll start with a labeled dataset in a pandas DataFrame (df) that will serve as our ground truth data.
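A minimal sketch of such a ground truth DataFrame is shown below; the product descriptions and category labels are illustrative placeholders, not the exact rows from the Adala example notebook:

```python
import pandas as pd

# Illustrative ground truth data: short product descriptions paired with
# their correct categories. The real notebook uses its own products and
# label set; these rows simply mirror the same two-column structure.
df = pd.DataFrame([
    {"text": "Leather hiking boots with waterproof lining.", "category": "Footwear/Clothing"},
    {"text": "Wireless earbuds with noise cancellation.", "category": "Electronics"},
    {"text": "Cold-brew coffee concentrate, 32 oz bottle.", "category": "Food/Beverages"},
    {"text": "Velvet accent chair for the living room.", "category": "Furniture/Home Decor"},
])
```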
This code generates a DataFrame with product descriptions and their corresponding categories.

Building Your First Adala Agent

Building an Adala agent involves integrating two critical components: skills and the environment.
We start with the pre-built `ClassificationSkill`. This skill restricts the LLM output to the data labels. When run, this skill generates predictions in a new column within our DataFrame, enriching our environment with valuable insights. In practical scenarios, the environment can be set up to gather ground truth signals from actual human feedback, further enhancing the learning phase of the agent. Here's how to set up your Adala agent:
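The sketch below shows one way the agent setup might look. The class names, parameter names, and label set follow Adala's public classification example and README, but they are assumptions that may differ between Adala releases, so treat this as a template rather than the exact notebook code:

```python
from adala.agents import Agent
from adala.environments import StaticEnvironment
from adala.skills import ClassificationSkill

# Assumed names: the skill name, instructions, labels, templates, and the
# ground-truth column mapping are placeholders modeled on the example
# notebook; check the Adala docs for the exact signature in your version.
agent = Agent(
    skills=ClassificationSkill(
        name="product_category_classification",
        instructions="Label the product description with its category.",
        labels=["Footwear/Clothing", "Electronics", "Food/Beverages", "Furniture/Home Decor"],
        input_template="Text: {text}",
        output_template="Category: {predicted_category}",
    ),
    # The environment wraps the ground truth DataFrame so the agent can
    # compare its predicted column against the labeled `category` column.
    environment=StaticEnvironment(
        df=df,
        ground_truth_columns={"predicted_category": "category"},
    ),
    # A runtime (e.g. an OpenAI-backed one) can also be passed explicitly;
    # omitted here to rely on the framework's defaults.
)
```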
As we continually add data to our ground truth dataset, the agent gains access to more sophisticated and diverse information, enhancing its learning and predictive capabilities.

Agent Learning

The learning process of the agent involves three distinct steps:

1. Apply the skill to the input data to produce predictions.
2. Analyze the predictions against the ground truth dataset to identify errors.
3. Improve the skill by refining its instructions based on those errors.
The agent autonomously cycles through these steps when the `learn` function is called. This iterative process of applying skills, analyzing results, and making improvements enables the agent to align its predictions more closely with the ground truth dataset. The cycle can repeat until the agent achieves a state where errors are minimized or eliminated.
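A sketch of kicking off learning and then inspecting the result; the argument names follow the public example and are assumptions that may differ across versions:

```python
# Run the iterative learning loop: apply the skill, compare predictions to
# the ground truth, and refine the instructions. `learning_iterations` and
# `accuracy_threshold` are assumed parameter names from the example.
agent.learn(learning_iterations=3, accuracy_threshold=0.95)

# Display the refined skill set; after learning, the classification skill's
# instructions typically include examples discovered from the ground truth.
print(agent.skills)
```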
This command displays the enhanced classification skill. For instance, the skill that categorizes products is now fine-tuned with specific examples, showcasing the agent's improved understanding and classification accuracy. These examples illustrate the agent's ability to accurately label products based on their primary function or purpose, demonstrating the effectiveness of the learning process.

Testing the Agent's Skill

With the agent's skills refined, it's time to assess its categorization capability on new product descriptions. The following example showcases a test DataFrame:
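The rows below are illustrative placeholders with the same structure as the ground truth data, not the exact descriptions from the notebook:

```python
# Unlabeled product descriptions for the agent to categorize.
test_df = pd.DataFrame([
    {"text": "Stainless steel coffee maker with a built-in timer."},
    {"text": "Running shoes with a breathable mesh upper."},
    {"text": "Oak bookshelf with five adjustable shelves."},
])

# Apply the learned classification skill to the new data; the result is a
# DataFrame with a predicted category column added for each description.
predictions = agent.run(test_df)
print(predictions)
```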
The `run` command enables the agent to apply its learned classification skill to the new dataset, predicting the most appropriate category for each product description. These predictions can be incorporated into data labeling platforms like Label Studio for human verification. This is one of the key aspects of Adala: incorporating human feedback by way of ground truth data. Once reviewed, we can optionally add this data to our environment to iteratively improve our classification skill.

Where does Adala fit?

Adala, as a framework for creating autonomous data agents, occupies a unique niche in the landscape of LLM-based applications. Comparing it with other notable LLM implementations, such as OpenAI GPTs and AutoGPT, highlights its distinctive role and capabilities. Unlike the broad applicability of OpenAI GPTs in generating text and engaging in conversation, Adala's focus is narrower yet deeper in its domain of data processing. Another differentiator is that Adala is a framework for building "data-centric" agents guided by human feedback, whether from a ground truth dataset or from reviewing predictions via different channels. This specialization makes Adala better suited for tasks that require precision and reliability, a critical aspect of machine learning and data analysis. Although it can utilize the same GPT models from OpenAI, Adala can support multiple runtimes depending on the domain-specific use case, economics, or even requirements for data privacy.

Conclusion

Adala distinguishes itself in today's generative AI arena by enhancing data labeling accuracy and efficiency, and the community will continue working to automate the data pipelines that fuel AI models and applications. Adala's focused approach to data labeling makes it a vital tool for combining human-like meticulousness with AI scalability.

Adala is under active development, with new releases every two weeks. The latest version, 0.3.0, includes additional skills and environments along with a number of other enhancements. To keep abreast of these developments, follow the Adala repository on GitHub. Also, try these features and capabilities on your own data and share your feedback in the Adala Discord!

*This post was written by Jimmy Whitaker, Data Scientist in Residence at HumanSignal. We thank HumanSignal for their insights and ongoing support of TheSequence.