TheSequence - ✅ Edge#193: ML Training at Scale Recap
TheSequence is the best way to build and reinforce your knowledge about machine learning and AI.

Last week we finished our mini-series on high-scale ML training, one of our most popular series so far. Here is a full recap so you can catch up on the topics we covered. As the proverb (and many ML people) says: repetition is the mother of learning ;)

💡 The challenges of ML training at scale

Training is one of the most important aspects of the lifecycle of ML models. In an ecosystem dominated by supervised learning techniques, having proper architectures for training is paramount for building robust ML systems. Training is relatively simple to master at a small scale, but its complexity grows exponentially (really) with the size and complexity of a neural network. Over the last few years, the ML community has made significant advancements in both the research and implementation of high-scale ML training methods. Over the past few weeks, TheSequence Edges explored the latest ML training methods and architectures powering some of the largest ML models in production. Forward this email to those who might benefit from reading it or give a gift subscription.

→ In Edge#181 (read it without a subscription), we discuss the complexity of ML training architectures; explain SeedRL, an architecture for massively scaling the training of reinforcement learning agents; and overview Horovod, an open-source framework created by Uber to streamline the parallelization of deep learning training workflows.

→ In Edge#183, we explore data vs. model parallelism in distributed training; discuss how AI training scales; and overview Microsoft DeepSpeed, a training framework powering some of the largest neural networks in the world.

→ In Edge#185, we overview centralized vs. decentralized distributed training architectures; explain GPipe, an architecture for training large-scale neural networks; and explore TorchElastic, a distributed training framework for PyTorch.

→ In Edge#187, we overview the different types of data parallelism; explain TF-Replicator, DeepMind’s framework for distributed ML training; and explore FairScale, a PyTorch-based library for scaling the training of neural networks.

→ In Edge#189, we discuss pipeline parallelism; explore PipeDream, an important Microsoft Research initiative to scale deep learning architectures; and overview BigDL, Intel’s open-source library for distributed deep learning on Spark.

→ In Edge#191, finalizing the distributed ML training series, we discuss the fundamental enabler of distributed training: the message passing interface (MPI); overview Google’s paper about General and Scalable Parallelization for ML Computation Graphs; and share the most relevant technology stacks that enable distributed training in TensorFlow applications.

Next week we are going back to deep learning theory. Our next mini-series will cover graph neural networks (GNNs). Super interesting!

Remember: by reading TheSequence Edges regularly, you become smarter about ML and AI 🤓

You’re on the free list for TheSequence Scope and TheSequence Chat. For the full experience, become a paying subscriber to TheSequence Edge. Trusted by thousands of subscribers from the leading AI labs and universities.
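To make the recurring theme of the series concrete: a minimal, purely illustrative sketch of the data-parallelism pattern that Horovod, DeepSpeed, and MPI-based stacks implement at scale. Each worker holds a full model replica, computes gradients on its own shard of the batch, and the gradients are averaged across workers (the "allreduce" step) before the shared update. Everything here (the toy model, shards, and function names) is hypothetical and not taken from any of the frameworks above.

```python
# Toy data parallelism: full model replica per worker, different data shard
# per worker, gradients averaged across workers before one shared update.

def local_gradients(weights, shard):
    # Toy gradient of a squared-error loss for the model y = w * x,
    # computed on one worker's shard of (x, y) pairs.
    return [
        sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
        for w in weights
    ]

def allreduce_mean(per_worker_grads):
    # Average each gradient coordinate across workers -- the role played
    # by MPI_Allreduce (followed by a divide) in real distributed training.
    n = len(per_worker_grads)
    return [
        sum(g[i] for g in per_worker_grads) / n
        for i in range(len(per_worker_grads[0]))
    ]

# Two workers with identical replica weights, disjoint shards of y = 3x data.
weights = [1.0]
shards = [
    [(1.0, 3.0), (2.0, 6.0)],   # worker 0's shard
    [(3.0, 9.0), (4.0, 12.0)],  # worker 1's shard
]
grads = [local_gradients(weights, s) for s in shards]
avg = allreduce_mean(grads)
lr = 0.01
weights = [w - lr * g for w, g in zip(weights, avg)]
```

Because every worker applies the same averaged gradient, all replicas stay bit-identical after each step, which is what lets data parallelism scale out without diverging copies of the model.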