🤔🤯 Addressing One of the Fundamental Questions in Machine Learning
Weekly news digest curated by industry insiders

📝 Editorial

The frantic pace of machine learning (ML) research keeps pushing neural networks to new levels of complexity. Larger networks define the state of the art these days, reaching new milestones in areas such as natural language processing (NLP), computer vision, and speech analysis. Despite this progress, one of the most significant obstacles to adopting big ML models remains how little we know about the way they generalize knowledge. Arguably, one of the biggest mysteries of ML is why the functions learned by neural networks generalize to unseen data. We are all impressed with the performance of GPT-3, but we can’t quite explain it. The future of ML has to be built on explainable ML and, to get there, we might have to go back to first principles.

A few days ago, researchers from Berkeley AI Research (BAIR) quietly published what I think could be one of the most important ML papers of the last few years. Behind the unassuming title “Neural Tangent Kernel Eigenvalues Accurately Predict Generalization,” BAIR attempts to formulate a first-principles theory of generalization. In a nutshell, the research replaces subjective why-questions with a quantitative one: given a network architecture, a target function f, and a training set of n random examples, can we efficiently predict the generalization performance of the function the network learns? The work shows that the seemingly incomprehensible complexity of neural networks is governed by relatively simple rules. Even though BAIR’s research can’t be considered a complete theory of neural network generalization, it is certainly an encouraging step in that direction.

🔺🔻 TheSequence Scope – our Sunday edition with the industry’s development overview – is free.
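For readers who want to poke at the idea behind the editorial, here is a toy numeric sketch (my own reconstruction, not the authors’ code). In the kernel-regression theory the paper builds on, each eigenmode of the kernel is learned to degree L_i = λ_i/(λ_i + κ), where κ solves Σ_i λ_i/(λ_i + κ) = n for a training set of n examples. A Laplacian kernel stands in for the actual Neural Tangent Kernel, which the paper derives from the network architecture; the data and parameters below are illustrative assumptions.

```python
import numpy as np

def laplacian_kernel(X, length_scale=0.5):
    # Gram matrix of a Laplacian kernel -- a stand-in for the NTK,
    # chosen here because its spectrum decays slowly (toy assumption).
    d = np.abs(X[:, None, 0] - X[None, :, 0])
    return np.exp(-d / length_scale)

def mode_learnabilities(eigvals, n, iters=200):
    """Solve n = sum_i lam_i / (lam_i + kappa) for kappa by bisection,
    then return per-mode learnabilities L_i = lam_i / (lam_i + kappa)."""
    lo, hi = 1e-12, eigvals.sum()          # kappa is bracketed in here
    for _ in range(iters):
        kappa = 0.5 * (lo + hi)
        if (eigvals / (eigvals + kappa)).sum() > n:
            lo = kappa                     # too many effective modes: raise kappa
        else:
            hi = kappa
    return eigvals / (eigvals + kappa)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))             # toy 1-D inputs
eigvals = np.linalg.eigvalsh(laplacian_kernel(X))[::-1]  # descending spectrum
eigvals = np.clip(eigvals, 0.0, None) / len(X)        # estimate of operator eigenvalues

n = 50                                   # hypothetical training-set size
L = mode_learnabilities(eigvals, n)
print(L[:3])   # leading (smooth) modes: learnability near 1
print(L[-3:])  # tail modes: learned only slightly
```

The qualitative prediction matches the paper’s story: with a fixed sample budget, the top eigenmodes of the kernel are learned almost perfectly while the tail is barely learned at all, and the learnabilities sum to n.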
To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻

🗓 Next week in TheSequence Edge:
Edge#137: Detailed recap of our self-supervised learning (SSL) series.
Edge#138: Deep dive into Toloka App Services.

Now, let’s review the most important developments in the AI industry this week.

🔎 ML Research
Grouping Tasks in Multi-Task Models – Which types of tasks should a neural network learn together? Google Research published a paper proposing a task-grouping technique for multi-task networks →read more on Google Research blog
Learning and Evolution – Researchers from Stanford University published a paper proposing a technique called “deep evolutionary reinforcement learning” (DERL), which uses complex virtual environments to simulate evolutionary dynamics and improve agents’ learning →read more in the original paper in Nature
A First-Principles Theory of Generalization – Berkeley AI Research (BAIR) published a fascinating paper outlining a quantitative theory of neural network generalization →read more on BAIR blog
Advances in Model-Based Optimization – BAIR also published a blog post summarizing recent advances in model-based optimization methods, which are widely used in design problems →read more on BAIR blog

🛠 Real World ML
Grammar Correction in Pixel 6 – Google Research shared details about the models powering grammar-correction capabilities in Gboard on Pixel 6 →read more on Google Research blog
Adapting LinkedIn’s ML Talent Solutions to COVID Times – The LinkedIn engineering team published a blog post detailing some of the building blocks used to adapt its ML solutions to the dynamics of the job market during the pandemic →read more on LinkedIn blog

🤖 Cool AI Tech Releases
Metaflow UI – Netflix open-sourced a new UI for its Metaflow ML platform →read more on Netflix blog
EC2 DL1 Instances – AWS announced the general availability of DL1 instances, which improve the training of deep learning models by up to 40% →read more in the AWS press release

💎 We recommend
Watch a powerhouse panel of executives from Starbucks, HubSpot, and WestCap talk about how today’s low-code/no-code trends, including automated predictive analytics, apply to their unique business cases and how they are using them to drive business strategy.

💸 Money in AI
ML&AI:
AI-powered:
Acquisitions:
You’re on the free list for TheSequence Scope and TheSequence Chat. For the full experience, become a paying subscriber to TheSequence Edge. Trusted by thousands of subscribers from the leading AI labs and universities.