🎙 Piotr Niedzwiedz, neptune.ai’s CEO, on Ideas About Machine Learning Experimentation
It’s so inspiring to learn from practitioners and thinkers. Getting to know the experience gained by researchers, engineers, and entrepreneurs doing real ML work is an excellent source of insight and inspiration.

👤 Quick bio / Piotr Niedzwiedz
Piotr Niedzwiedz (PN): I am Piotr, and I am the CEO of neptune.ai. Day to day, apart from running the company, I focus on the product side of things: strategy, planning, ideation, and getting deep into user needs and use cases. I really like it. My path to ML started with software engineering. I always liked math and started programming when I was 7. In high school, I got into algorithmics and programming competitions and loved competing with the best. That got me into the best CS and maths program in Poland, which, funnily enough, today specializes in machine learning. I did internships at Facebook and Google and was offered to stay in the Valley. But something about being a FAANG engineer didn’t feel right. I had this spark to do more, to build something myself. When I came to the ML space from software engineering, I was surprised by the messy experimentation practices, the lack of control over model building, and a missing ecosystem of tools to help people deliver models confidently. It was a stark contrast to the software development ecosystem, where you have mature tools for DevOps, observability, or orchestration to execute efficiently in production. And then, one day, some ML engineers from Deepsense.ai came to me and showed me a tool for tracking experiments they had built during a Kaggle competition (which we won, btw), and I knew this could be big. I asked around, and everyone was struggling with managing experiments. I decided to spin it off as a VC-funded product company, and the rest is history.

🛠 ML Work
PN: While most companies in the MLOps space try to go wider and become platforms that solve all the problems of ML teams, Neptune.ai’s strategy is to go deeper and become the best-in-class tool for model metadata storage and management. In the more mature software development space, there are almost no end-to-end platforms. So why should ML, which is even more complex, be any different? I believe that by focusing on providing the best developer experience for experiment tracking and model registry, we can become the foundation of any MLOps tool stack. Today we have a super flexible data model that allows people to log and organize model metadata in any way they want, as the sketch below illustrates.
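For a concrete feel of that flexibility, here is a minimal sketch using Neptune’s Python client (API names as of the neptune.new client circa 2022 and subject to change; the project name, fields, and values are placeholders):

```python
import neptune.new as neptune

# Connect to a project (placeholder name; the API token is read from the environment).
run = neptune.init(project="my-workspace/my-project")

# Assigning a dict creates an arbitrary namespace hierarchy of fields.
run["parameters"] = {"lr": 0.001, "batch_size": 64, "optimizer": "adam"}

# Series fields accumulate values, e.g. a per-epoch metric.
for loss in [0.9, 0.6, 0.4]:
    run["train/loss"].log(loss)

# Files such as model weights or plots can be attached too.
run["model/weights"].upload("model.pt")

run.stop()
```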
But we still see a lot to do when it comes to developer experience tailored for specific use cases. So in 2022, we will focus on three things:
PN: Great question. In my opinion, it is:
So, in many ways, it is exactly the same as many other observability solutions like the ELK stack (Elasticsearch, Logstash, Kibana). I actually think that a lot of things in MLOps are very much the same as in traditional software development, but there are some differences. Those differences come from the various personas and the jobs they want to solve with your tool. You have data scientists, ML engineers, DevOps people, software engineers, and subject matter experts working together on ML projects. While all of them may need “ML observability”, the things they want to observe are completely different. So, for example, in experiment tracking, the main needs are:
If you really want to deliver a good developer experience here, you need to go deep and really understand how people work with different data and model types (vision, NLP, forecasting). You need to make it easy for them to use their tools, and try to enhance, not change, their workflow. For the model registry, you need to make the handover of the production-ready model from data scientists to ML engineers smooth, and then make it easy for the ML engineer to deploy, roll back, or retrain that model.
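As a hedged illustration of that handover, here is a sketch using Neptune’s model registry API (names as of the neptune.new client; the model key, project, file names, and metric are placeholders):

```python
import neptune.new as neptune

# The data scientist registers a candidate version of an existing model
# (placeholder model key and project name).
model_version = neptune.init_model_version(
    model="PROJ-MOD", project="my-workspace/my-project"
)
model_version["weights"].upload("model.pt")
model_version["validation/accuracy"] = 0.92

# After review, the ML engineer promotes the version to production.
model_version.change_stage("production")
model_version.stop()
```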
PN: From my perspective, it is actually not that different. There is metadata about those processes that you want to compare, debug, organize, find, and share. Because of that, last year I spent a lot of time rethinking our underlying data model to make those things easy regardless of the ML use case. If you think about it “from first principles”, the things that matter most, regardless of your use case, are flexibility and expressiveness. And we build our product on those pillars. But to give you an example, time series forecasting is a use case that is hard to solve with a rigid solution. In forecasting, you rarely train one model. You actually train and test models on various time series, for example, one model per product line or per physical shop location. And then you want to visualize and evaluate your models across all of those locations. And you want to update the evaluation charts when new data comes in. To do that comfortably, you may need a very custom way to log and display model metadata, but the underlying job you solve is the same: evaluating models.
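For instance, a namespace-per-location layout could look roughly like this (a sketch with made-up locations and scores, not a prescribed structure):

```python
import neptune.new as neptune

run = neptune.init(project="my-workspace/forecasting")  # placeholder project

# Dummy per-location RMSE scores standing in for real backtest results.
scores = {"warsaw": 12.3, "berlin": 9.8, "paris": 11.1}

# One run, many models: each location's metadata lives under its own
# namespace, so evaluation views can be filtered and refreshed per location.
for location, rmse in scores.items():
    run[f"evaluation/{location}/rmse"] = rmse

run.stop()
```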
PN: Well, I think they could. But then it just moves a layer of abstraction higher, IMHO. You still have hyper-hyperparameters to optimize, NAS or AutoML models to compare, etc. I don’t think that will go away any time soon, as it seems very dangerous to leave your production models to “do their thing” with no visibility into how they work (yes, that is hard) or at least how they were built.
PN: Yeah, I believe there will be standalone components that you can plug into your deep learning frameworks and MLOps stacks. But both frameworks and end-to-end platforms will probably have some basic logging/tracking functionality in there as well. Something to get people started. For example, let’s take data warehouses: do they come with inbuilt BI/visualization components? No, we have a few market-standard standalone platforms, because the problem of data visualization is big/challenging enough that it requires a product team focused on it. And some teams don’t even need any BI/visualization. Model metadata management is similar. You should be able to plug it into your MLOps stack. I think it should be a separate component that integrates, rather than a part of a platform. When you know you need solid experiment tracking capabilities, you should be able to look for a best-in-class point solution and add it to your stack. It happened many times in software, and I believe it will happen in ML as well. We’ll have companies providing point solutions with great developer experience. It won’t make much sense to build it yourself unless you have a custom problem. Look at Stripe (payments), Algolia (search and recommendations), Auth0 (authentication and authorization). But even in ML today, imagine how weird it would be if every team was building their own model training framework like PyTorch. Why is experiment tracking, orchestration, or model monitoring any different? I don’t think it is. And so, I think we’ll see more specialization around those core MLOps components. Perhaps at some point, adjacent categories will merge into one, just as we are seeing experiment tracking and model registry merge into one metadata storage and management category.

💥 Miscellaneous – a set of rapid-fire questions
Decision-making paradox: Selecting the best decision-making method is a decision problem in itself.
“Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps” by Valliappa Lakshmanan, Sara Robinson, Michael Munn.
It seems that with GPT-3, GANs, and other generative models, it is becoming harder and harder to tell AI-generated content from reality. We are not quite there yet, but almost. When it comes to alternatives, maybe... I would like to see something more objective, e.g., AlphaCode getting to the Google Code Jam World Finals – I have been there once, and it is a very challenging task!
Hey, if I knew, I would have reinvested this $1M into Neptune :)