Messages
1/24/2023
12:15
Edge 263: Local Model-Agnostic Interpretability Methods: Counterfactual Explanations
Counterfactual explanations as an ML interpretability method, Google's StylEx and Microsoft's DiCE implementation
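The core idea behind counterfactual explanations is to find the smallest change to an input that flips the model's decision. A minimal stdlib-only sketch of that search (the toy credit model, thresholds, and grid are illustrative assumptions, not DiCE's actual API):

```python
from itertools import product

def predict(income, debt):
    """Toy credit model: approve (1) when the score clears a threshold."""
    return 1 if 0.05 * income - 0.8 * debt > 10 else 0

def counterfactual(income, debt, steps=range(0, 201, 10)):
    """Smallest L1 change (raise income, pay down debt) that flips a rejection."""
    best, best_cost = None, float("inf")
    for d_inc, d_debt in product(steps, steps):
        if d_debt > debt:
            continue  # debt cannot go negative
        if predict(income + d_inc, debt - d_debt):
            cost = d_inc + d_debt  # L1 distance as a simple proximity measure
            if cost < best_cost:
                best, best_cost = (income + d_inc, debt - d_debt), cost
    return best

# a rejected applicant and the nearest approved counterpart
print(counterfactual(300, 10))
```

Libraries like DiCE additionally optimize for diversity across several counterfactuals; this sketch returns only the single closest one.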
1/22/2023
12:14
The Most Exciting Alliance in AI
Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
1/20/2023
6:54
📌 Event: Robust & Responsible AI Summit with Andrew Ng & industry leaders
Jan 26! Connect with AI builders, leaders, and industry experts
1/20/2023
5:54
Edge 260: Data2vec 2.0 is Meta AI's New Self-Supervised Learning Model for Vision, Speech and Text
The model is one of the most impressive achievements in self-supervised learning research to date.
1/20/2023
5:04
📝 Guest Post: Winning the AI Game as a Medium-Sized Business*
How to overcome five challenges of AI adoption with Managed AI
1/20/2023
4:15
New Generative AI Innovations from Google and Salesforce
Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
1/20/2023
3:44
Edge 261: Local Model-Agnostic Interpretability Methods: LIME
LIME, Meta AI research on interpretable neurons and the Alibi Explain framework.
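LIME explains a single prediction by fitting an interpretable linear surrogate to the black-box model in a weighted neighborhood around the instance. A simplified one-dimensional sketch of that procedure (the quadratic black box, kernel, and sampling scheme are illustrative assumptions, not the LIME library itself):

```python
import math
import random

def black_box(x):
    """Opaque model we want to explain locally (illustrative)."""
    return x * x

def lime_1d(x0, n=500, width=0.5, seed=0):
    """Simplified 1-D LIME: fit a weighted linear surrogate around x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n)]   # perturbed samples
    ys = [black_box(x) for x in xs]                     # black-box labels
    ws = [math.exp(-((x - x0) ** 2) / width ** 2) for x in xs]  # proximity kernel
    # weighted least squares for y ~ a + b*x; b is the local feature importance
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    return (sw * sxy - sx * sy) / (sw * sxx - sx * sx)

# for black_box(x) = x**2 the local slope at x0 should approach 2 * x0
print(lime_1d(3.0))
```

The real library handles tabular, text, and image inputs and selects a sparse set of features; the weighted-surrogate fit above is the common core.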
1/20/2023
3:04
Edge 262: NVIDIA’s Get3D is a Generative AI Model for 3D Shapes
The model is actively used in NVIDIA's Omniverse platform.
1/10/2023
12:14
Edge 259: Local Model-Agnostic Interpretability Methods: SHAP
SHAP method, MIT taxonomy for ML interpretability and BAIR's iModels framework.
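SHAP attributes a prediction to individual features using Shapley values from cooperative game theory. For a handful of features the values can be computed exactly by enumerating feature coalitions, which the shap library approximates at scale; a stdlib-only sketch (the linear toy model and zero baseline are assumptions for illustration):

```python
from itertools import combinations
from math import factorial

def shapley(predict, x, baseline):
    """Exact Shapley values; features absent from a coalition take baseline values."""
    n = len(x)
    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # classic Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

model = lambda z: 2 * z[0] + 3 * z[1]  # linear toy model
print(shapley(model, [1.0, 2.0], [0.0, 0.0]))
```

For a linear model each Shapley value reduces to coefficient × (feature − baseline), and the values sum to the gap between the prediction and the baseline prediction.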
1/8/2023
12:14
NVIDIA's Latest Push in Generative AI and the Metaverse
Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
1/5/2023
12:14
Edge 258: Inside OpenAI's Point-E: The New Foundation Model Able to Generate 3D Representations from Language
The new model combines GLIDE with image-to-3D generation models in a very clever and efficient architecture.
1/3/2023
12:14
Edge 257: Local Model-Agnostic Interpretability Methods
Local model-agnostic interpretability, IBM's ProfWeight research and the InterpretML framework.
1/1/2023
12:14
2023: The Year The Value Shifted from Infrastructure to Applications
Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
12/29/2022
12:14
Edge 256: The Architecture and Methods Powering ChatGPT
An overview of the AI techniques behind OpenAI's new supermodel
12/27/2022
12:14
Edge 255: Interpretability Methods: Accumulated Local Effects (ALE)
ALE method, OpenAI Microscope and IBM's AI 360 Explainability Toolkit.
12/25/2022
12:14
OpenAI Gets Into the Text-to-3D Game with Point-E
Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
12/22/2022
12:14
Edge 254: InstructGPT is the Model that Inspired the Famous ChatGPT
The model fine-tuned GPT-3 to improve its ability to follow instructions.
12/20/2022
12:14
Edge 253: Interpretability Methods: Partial Dependence Plots
Partial dependence plots, interpretable time series forecasting and Google's fairness indicators.
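A partial dependence plot shows how the average model prediction changes as one feature is swept over a grid while all other features keep their observed values. The computation fits in a few lines (the toy model, dataset, and grid below are illustrative assumptions):

```python
def partial_dependence(predict, data, feature, grid):
    """PD of `feature`: average prediction with that feature forced to each grid value."""
    pd = []
    for v in grid:
        # override the feature in every row, keep all other features as observed
        preds = [predict({**row, feature: v}) for row in data]
        pd.append(sum(preds) / len(preds))
    return pd

model = lambda r: 2 * r["x"] + r["z"]          # illustrative model
data = [{"x": 0, "z": 1}, {"x": 5, "z": 3}]    # tiny dataset
print(partial_dependence(model, data, "x", [0, 1, 2]))
```

Plotting the returned averages against the grid gives the familiar PD curve; for this linear model it is a straight line with slope 2.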
12/18/2022
12:14
Security: The Most Ignored Area of MLOps
Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.