🤔🤯 Addressing One of the Fundamental Questions in Machine Learning
Was this email forwarded to you? Sign up here

Weekly news digest curated by industry insiders

📝 Editorial

The frantic pace of machine learning (ML) research is pushing the complexity of neural networks to new highs. Larger neural networks dominate the state of the art these days, reaching new milestones in areas such as natural language processing (NLP), computer vision, and speech analysis. Despite this progress, one of the most significant obstacles to adopting big ML models remains how little we know about the way they generalize knowledge. Arguably, one of the biggest mysteries of ML is understanding why the functions learned by neural networks generalize to unseen data. We are all impressed by the performance of GPT-3, but we can't quite explain it. The future of ML has to be based on explainable ML and, to get there, we might have to go back to first principles.

A few days ago, researchers from Berkeley AI Research (BAIR) quietly published what I think could be one of the most important ML papers of the last few years. Behind the unusual title "Neural Tangent Kernel Eigenvalues Accurately Predict Generalization," BAIR attempts to formulate a first-principles theory of generalization. In a nutshell, the research replaces subjective why-questions with a quantitative problem: given a network architecture, a target function f, and a training set of n random examples, can we efficiently predict the generalization performance of the network's learned function f? The research shows that the seemingly incomprehensible complexity of neural networks is governed by relatively simple rules. Even though BAIR's research can't be considered a complete theory of neural network generalization, it's certainly an encouraging step in that direction.

🔺🔻 TheSequence Scope – our Sunday edition with the industry's development overview – is free.
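The intuition behind the paper can be illustrated with a toy experiment (my own illustrative sketch, not code from the paper): in kernel regression, which describes infinitely wide networks through the neural tangent kernel, target functions aligned with the kernel's top eigenvectors are learned from far fewer samples than those aligned with low-eigenvalue directions. A minimal numpy sketch, using an RBF kernel as a stand-in for a real NTK:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs on a 1-D grid; an RBF Gram matrix stands in for a network's NTK.
X = np.linspace(-1, 1, 200)[:, None]
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1)

# Eigendecomposition of the kernel (np.linalg.eigh returns ascending order,
# so reverse to get eigenvalues/eigenvectors sorted largest-first).
eigvals, eigvecs = np.linalg.eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

def learned_function(f, n):
    """Kernel regression fit to f on n random training points."""
    idx = rng.choice(len(X), size=n, replace=False)
    K_nn = K[np.ix_(idx, idx)] + 1e-8 * np.eye(n)  # tiny ridge for stability
    alpha = np.linalg.solve(K_nn, f[idx])
    return K[:, idx] @ alpha  # prediction on the full grid

# A smooth target aligned with a top eigenvector vs. an oscillatory one
# aligned with a low-eigenvalue eigenvector.
f_easy, f_hard = eigvecs[:, 1], eigvecs[:, 50]
n = 40
err_easy = np.mean((learned_function(f_easy, n) - f_easy) ** 2)
err_hard = np.mean((learned_function(f_hard, n) - f_hard) ** 2)
print(err_easy, err_hard)  # expect err_easy to be far smaller than err_hard
```

Under these (hypothetical) settings, the high-eigenvalue target is recovered almost exactly from 40 of 200 points, while the low-eigenvalue target is essentially unlearnable — the kind of eigenvalue-driven prediction of generalization the paper makes quantitative.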
To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻

🗓 Next week in TheSequence Edge:

Edge#137: Detailed recap of our self-supervised learning (SSL) series.
Edge#138: Deep dive into Toloka App Services.

Now, let's review the most important developments in the AI industry this week.

🔎 ML Research

Grouping Tasks in Multi-Task Models
Which types of tasks should a neural network learn together? Google Research published a paper proposing a task-grouping technique for multi-task networks → read more on the Google Research blog

Learning and Evolution
Researchers from Stanford University published a paper proposing a technique called "deep evolutionary reinforcement learning" (DERL), which uses complex virtual environments to simulate evolutionary dynamics and improve the learning of agents → read more in the original paper in Nature

A First-Principles Theory of Generalization
Berkeley AI Research (BAIR) published a fascinating paper outlining a quantitative theory of neural network generalization → read more on the BAIR blog

Advances in Model-Based Optimization
Berkeley AI Research (BAIR) published a blog post summarizing recent advances in model-based optimization methods, which are actively used in design problems → read more on the BAIR blog

🛠 Real World ML

Grammar Corrections in Pixel 6
Google Research published details about the models powering grammar correction in Gboard on Pixel 6 → read more on the Google Research blog

Adapting LinkedIn's ML Talent Solutions to COVID Times
The LinkedIn engineering team published a blog post detailing some of the building blocks used to adapt its ML solutions to the changing dynamics of the job market during the pandemic → read more on the LinkedIn blog

🤖 Cool AI Tech Releases

Metaflow UI
Netflix open-sourced a new UI for its Metaflow ML platform → read more on the Netflix blog

EC2 DL1 Instances
AWS announced the general availability of DL1 instances, which improve the training of deep learning models by up to 40% → read more in the press release from AWS

💎 We recommend

Watch a powerhouse panel of executives from Starbucks, HubSpot, and WestCap talk about how today's low-code/no-code trends, including automated predictive analytics, apply to their unique business cases and how they're using them to drive business strategy.

💸 Money in AI

ML&AI:
AI-powered:
Acquisitions: