🏗 Edge#186: From Feature Stores to Feature Platforms
On Thursdays, we do deep dives into one of the freshest research papers or technology frameworks that is worth your attention. Our goal is to keep you up to date with new developments in AI to complement the concepts we debate in other editions of our newsletter.

💥 Deep Dive: From Feature Stores to Feature Platforms

Feature stores have emerged as a central piece of the MLOps stack and, in 2021, became a consolidated category. MLOps platforms have started to incorporate feature storage and lifecycle management capabilities as first-class citizens. The feature store originated with Uber Michelangelo, the platform that allowed Uber to go from zero ML models in production to thousands (check Edge#77 about How Feature Stores Were Started). After Michelangelo, Uber was using ML in every aspect of its business: pricing, demand forecasting, predicting ETAs, matching, and more. Michelangelo was focused on solving the end-to-end ML workflow: transforming raw data into features, serving those features to models, deploying the models, making predictions, and monitoring those predictions.

What are Feature Stores and What Problem Do They Solve?

The team at Uber Michelangelo recognized that, out of the entire end-to-end ML platform, the most difficult part of putting ML applications into production was managing and transforming the data.
Building production-grade data pipelines was often the main bottleneck in getting models to production. In addition, providing better data was the single most effective way of improving a model's performance.
So why are data pipelines the main bottleneck to getting models into production? Without a dedicated solution, teams first create features in local notebooks; once those features are ready for production, teams need to build bespoke pipelines by piecing together disparate tools: pulling data from a warehouse, processing it with a compute engine and/or a stream processor, setting up an orchestrator, storing results in low-latency stores, and implementing serving infrastructure. In addition, teams compromise on using real-time data. Managing real-time data is too difficult without the right tools, so only the most sophisticated teams build the infrastructure required to feed real-time data into machine learning models.

To solve these challenges, the Michelangelo team built the industry's first feature store. Internal feature stores have since emerged at every other major ML player: Facebook, Google, LinkedIn, Netflix, Twitter, etc. A feature store aims to be the central backbone of an ML application. It gives data scientists and data engineers a way to define features using SQL or Python, and it automatically sets up the compute to transform data from the data warehouse into features, keep those features fresh as new data comes in, and make those features available to train models or to serve during online inference.

The Evolution of Feature Stores

The Uber Michelangelo team decided to found Tecton to make production-grade ML accessible to every organization. Tecton unveiled its feature store in 2020, and large players soon followed with their own products in the category: AWS SageMaker introduced its feature store in December 2020, and in 2021, Databricks and Google launched their own feature stores. Snowflake and Azure partnered with Tecton and Feast (the most popular open-source feature store – read more in Edge#78) to bring feature stores to their own customers. The questions now are: can one feature store serve them all? What are the important differences in capabilities between offerings?
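To make the idea of "defining features in SQL or Python" concrete, here is a minimal sketch in plain pandas. This is illustrative only and not the API of Tecton, Feast, or any other product; the function and column names are hypothetical. It shows the kind of batch transformation a feature store would register, orchestrate, and keep fresh as new raw data arrives.

```python
import pandas as pd

def user_order_features(orders: pd.DataFrame) -> pd.DataFrame:
    """Transform raw order events into one feature row per user.

    A feature platform would run this transformation on a schedule
    against the warehouse and materialize the results for training
    and low-latency serving; here we just run it in-process.
    """
    return (
        orders.groupby("user_id")
        .agg(
            order_count=("order_id", "count"),  # how many orders per user
            total_spend=("amount", "sum"),      # lifetime spend per user
        )
        .reset_index()
    )

# Toy raw event data standing in for a warehouse table.
raw_orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "user_id": ["a", "a", "b", "a"],
    "amount": [10.0, 5.0, 7.5, 2.5],
})

features = user_order_features(raw_orders)
```

The transformation itself is ordinary dataframe code; what the platform adds is the orchestration, freshness guarantees, and serving around it.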
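Making features "available to train models" hinges on point-in-time correct retrieval: each training example must see only the feature values that existed at or before its own timestamp, so no future data leaks into the training set. A minimal sketch of those semantics using `pandas.merge_asof` (real platforms do this at warehouse scale; the column names here are hypothetical):

```python
import pandas as pd

# Feature values as they were computed over time (must be sorted by "ts").
feature_history = pd.DataFrame({
    "user_id": ["a", "b", "a"],
    "ts": pd.to_datetime(["2022-01-01", "2022-01-05", "2022-01-10"]),
    "order_count": [3, 2, 5],
}).sort_values("ts")

# Labeled events we want to build a training set from.
events = pd.DataFrame({
    "user_id": ["b", "a"],
    "ts": pd.to_datetime(["2022-01-06", "2022-01-07"]),
    "label": [0, 1],
}).sort_values("ts")

# For each event, join the latest feature value at or before the
# event's timestamp ("backward"), per user.
training_set = pd.merge_asof(
    events, feature_history, on="ts", by="user_id", direction="backward"
)
# User "a" at Jan 7 sees order_count=3; the Jan 10 value (5) is
# future data and is correctly excluded.
```

The same join done naively (latest value regardless of time) would leak future information and inflate offline metrics, which is why feature platforms build this logic in.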
What will the evolution of feature stores bring us?

Understanding the Market

Every feature store needs to provide storing, sharing, and re-using of features; this is the least common denominator of feature store capabilities. But storing, sharing, and re-using features solves only part of the problem. Teams still have to build feature pipelines to generate the feature values, and this is often the main bottleneck experienced by data teams. Feature stores have an opportunity to do more: to solve the end-to-end feature lifecycle by also automating the data pipelines that transform raw data into features.

Tecton, the Feature Platform for ML

Tecton, built by veterans of Uber Michelangelo, goes beyond the capabilities of a regular feature store, aiming to be a complete feature platform. With a feature platform, users define features using simple SQL or Python. Tecton automatically transforms raw data into features by running and orchestrating data pipelines. It allows for large-scale retrieval for training and low-latency retrieval for online serving. In addition, Tecton goes beyond batch data and supports the transformation of streaming and real-time data, allowing teams to use the freshest data available to make predictions. With a full feature platform, data engineers don't need to re-build data pipelines, allowing teams to put ML applications into production in a matter of days.

Conclusion

While the AI/ML industry has put a heavy emphasis on model training, optimization, and serving, it's important to remember that high-quality data is the single most important factor in increasing model accuracy. The best model in the world can't make a prediction without the right data. Getting high-quality data to our models is hard and requires complicated data engineering work. Data teams are often resigned to using sub-optimal data to simplify their data engineering challenges.
Feature stores have become an essential part of the MLOps stack because they are purpose-built to solve this data challenge of ML. However, feature stores alone are not enough. There's an opportunity to expand into a complete feature platform that solves the end-to-end data problem for ML, managing the entire process from data source to models.