🔮 Edge#249: Model-Intrinsic vs. Post-Hoc Interpretability Methods
Model-intrinsic vs. post-hoc interpretability, activation atlas visualizations, and TensorBoard. In this issue:
Have fun ML geeking!

💡 ML Concept of the Day: Model-Intrinsic vs. Post-Hoc Interpretability Methods

In a previous edition of this series, we explored a taxonomy for understanding different ML interpretability methods. Some models, such as linear regression or decision trees, are intrinsically explainable. These models are typically analyzed using interpretability techniques optimized for their specific structure, which is what we called model-specific interpretability. In general, model-intrinsic interpretability methods look to leverage the unique characteristics of explainable models:
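One such characteristic can be made concrete with linear regression: the learned coefficients themselves are the explanation, because the prediction is a transparent weighted sum of the inputs. A minimal sketch (the dataset, feature names, and coefficients below are illustrative, not from this issue):

```python
import numpy as np

# Tiny synthetic dataset: a target driven by two features plus an intercept.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 2))          # columns: "size", "age"
y = X @ np.array([3.0, -1.5]) + 0.5 + rng.normal(0, 0.01, 100)

# Fit ordinary least squares with an explicit intercept column.
X1 = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Intrinsic interpretability: each learned weight directly answers
# "how does this feature move the prediction?" -- no extra tooling needed.
for name, coef in zip(["size", "age", "intercept"], w):
    print(f"{name}: {coef:+.2f}")
```

The same transparency holds for a decision tree, where the explanation is the path of threshold tests from root to leaf.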
From an ML interpretability standpoint, model-intrinsic methods are very simple but also quite restrictive. It is more common to interact with ML models whose algorithmic decisions are not easily explainable. Most neural network architectures lack the simulatability and decomposability properties. Also, the non-convex optimization problems that are common in neural networks cannot guarantee convergence under gradient-based methods. Enabling interpretability in models such as neural networks requires post-hoc interpretability techniques, which try to explain the overall behavior of the model without understanding its internal intricacies. Post-hoc interpretability can be classified into two main groups: ...
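The core post-hoc idea can be sketched without committing to any particular taxonomy: treat the model as a black box and probe it from the outside. Below is a hedged sketch of permutation importance, one well-known post-hoc technique; the `black_box` function and the data are illustrative stand-ins, not anything from this issue:

```python
import numpy as np

# A model we pretend we cannot inspect: all we get is a predict function.
def black_box(X):
    return 4.0 * X[:, 0] + 0.1 * X[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))     # feature 2 is pure noise the model ignores
y = black_box(X)

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

baseline = mse(black_box(X), y)

# Permutation importance: shuffle one feature at a time and measure how much
# the error grows. No access to weights, gradients, or architecture required.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(black_box(Xp), y) - baseline)

print(importance)  # feature 0 dominates; feature 2 contributes ~nothing
```

Because the probe only uses inputs and outputs, the same procedure applies unchanged to a linear model, a gradient-boosted ensemble, or a deep network — which is exactly the appeal of post-hoc methods.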