TheSequence - ♟♟ Chess Learning Explainability
📝 Editorial

Chess has long been considered a solved problem in machine learning (ML): many chess programs have achieved superhuman performance. Yet chess continues to contribute to the ML field in surprising ways. One of those new areas concerns understanding how deep learning models build knowledge representations in complex domains such as chess. Traditional chess engines typically start from extensive collections of games as well as established knowledge pools of openings and mid-game and end-game tactics. That approach was challenged by recent models like DeepMind's AlphaZero, which mastered chess simply by playing games against itself. AlphaZero quickly became the strongest chess engine in the world and also discovered a whole set of new opening lines that challenged conventional wisdom. Despite AlphaZero's success and popularity, we still know very little about how it builds its knowledge.

This is starting to change thanks to a collaboration between DeepMind, Google Brain, and one of the brightest minds in chess history. In a paper released this week, DeepMind and Google Brain worked with former chess world champion Vladimir Kramnik to evaluate how AlphaZero develops knowledge representations of chess positions. This level of analysis is incredibly relevant for adding a layer of interpretability to superhuman neural networks. The general assumption is that complex neural networks build opaque, nearly impossible to interpret knowledge representations. However, some recent empirical evidence challenges that belief, suggesting that such networks develop plenty of human-understandable concepts. The study of AlphaZero added more evidence to this thesis, illustrating how the network developed several widely understood human chess concepts during its learning process. Furthermore, the research showed exactly when AlphaZero developed these concepts during training, helping us understand how explainable knowledge representations are built in complex neural networks. Certainly one of the most fascinating papers of this year.
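To make that kind of analysis concrete, here is a minimal sketch of the general technique this line of work relies on: train a simple linear probe to predict a human-defined concept (say, material balance) from a network's internal activations, and repeat the exercise across training checkpoints to see when the concept becomes decodable. Everything below is illustrative only: the get_activations and material_balance helpers and the synthetic data are hypothetical stand-ins, not DeepMind's code or AlphaZero's actual activations.

```python
# Sketch of linear concept probing across training checkpoints (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split


def material_balance(positions: np.ndarray) -> np.ndarray:
    """Hypothetical human-defined concept: one scalar label per position.
    Faked here with random scores in place of real chess positions."""
    rng = np.random.default_rng(0)
    return rng.normal(size=len(positions))


def get_activations(checkpoint: int, positions: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for reading a network's internal activations at a
    given training checkpoint. Faked so the concept signal grows with training."""
    rng = np.random.default_rng(checkpoint)
    acts = rng.normal(size=(len(positions), 256))
    concept = material_balance(positions).reshape(-1, 1)
    acts[:, :1] += (checkpoint / 10.0) * concept  # later checkpoints encode more signal
    return acts


positions = np.arange(2000)  # placeholder for a set of chess positions

for checkpoint in range(0, 11, 2):
    X = get_activations(checkpoint, positions)
    y = material_balance(positions)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    probe = Ridge(alpha=1.0).fit(X_tr, y_tr)   # the linear probe
    print(f"checkpoint {checkpoint:2d}: concept R^2 = {probe.score(X_te, y_te):.2f}")
```

Plotting the probe score against training time is, in spirit, how one can read off when a human concept like material emerges inside a network.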
🍂🍁 TheSequence Scope is our free Sunday digest. To receive high-quality educational content about the most relevant concepts, research papers and developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🍂🍁

🗓 Next week is Thanksgiving Week in TheSequence Edge: we will share a few important content series, deep dives, and our best interviews.

Now, let's review the most important developments in the AI industry this week.

🔎 ML Research

Understanding Chess Knowledge Acquisition: DeepMind collaborated with former world chess champion Vladimir Kramnik on a fascinating paper about how AlphaZero acquires and develops chess knowledge →read more in this article from Chessbase

Self-Supervised Speech in 128 Languages: Facebook AI Research (FAIR) published a paper detailing XLS-R, a self-supervised model that can master speech tasks in 128 languages →read more on FAIR blog

Evaluation and Reporting in Reinforcement Learning: Google Research published a paper and open-sourced RLiable, a method for quantifying uncertainty in RL models →read more on Google Research blog

Predicting Text Readability: Google Research published a paper proposing a method to predict text readability based on screen interactions, such as scrolls →read more on Google Research blog

🛠 Real World ML

DataOps vs. MLOps: Walmart Labs published a blog post explaining their ideas about DataOps and its relevance in MLOps pipelines →read more on Walmart Global Tech blog

🤖 Cool AI Tech Releases

GNNs in TensorFlow: TensorFlow open-sourced TensorFlow Graph Neural Networks (GNNs), a new framework designed to streamline GNN implementation and graph data processing in deep learning models →read more on TensorFlow blog

SynapseML: Microsoft Research open-sourced SynapseML (formerly MMLSpark), a library that enables the implementation of massively parallel machine learning pipelines →read more on Microsoft Research blog

OpenAI API General Availability: OpenAI removed the waitlist requirement to access its popular API, which includes models like GPT-3 and Codex →read more on OpenAI blog (a minimal usage sketch follows below)
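Since many readers have been waiting on that API waitlist, here is a minimal usage sketch, assuming the openai Python client and an API key set in the OPENAI_API_KEY environment variable; the engine name, prompt, and parameters are illustrative rather than a recommendation, so check OpenAI's documentation for the current interface.

```python
# Minimal sketch of calling the now-generally-available OpenAI API (illustrative).
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = openai.Completion.create(
    engine="davinci",  # one of the GPT-3 engines; pick per your use case
    prompt="Explain what a graph neural network is in one sentence.",
    max_tokens=60,
    temperature=0.3,
)

print(response["choices"][0]["text"].strip())
```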
💸 Money in AI

For ML&AI: