TheSequence

Messages

4/30/2023
11:14

The Generative AI Cyber Security Week

Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
4/28/2023
12:54

📌 Meet Elemeta: Metafeature Extraction for Unstructured Data*

LLMs are everywhere, left, right, and center of any and all AI discourse these days. But we've got to be honest here: it's hard to understand how they make decisions and to explain and monitor…
4/27/2023
11:14

Edge 286: Vicuna, the LLaMA-Based Model that Matches ChatGPT Performance

Created by researchers from UC Berkeley, CMU, Stanford, and UC San Diego, Vicuna is part of the new wave of models that use Meta's LLaMA as their foundation.
4/26/2023
11:27

The Sequence Chat: Microsoft's Evan Chaki on Semantic Kernel and Combining LLMs with Conventional Programming Lang…

A veteran innovator in areas such as low-code is now working on one of the most innovative projects in the LLM space.
4/25/2023
11:14

Edge 285: A Recap Of Our Series About Federated Learning

A summary of the topics discussed in the last 8 weeks.
4/24/2023
13:14

📝 Guest post: Elemeta: Metafeature extraction for unstructured data*

In this guest post, Lior Durahly, data & ML engineer @Superwise, introduces Elemeta, a brand-new open-source library, currently in beta, for metafeature extraction from unstructured data. What is…
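
For readers who want a concrete feel for what "metafeature extraction" looks like, here is a minimal, self-contained sketch of the idea in Python. It is not Elemeta's actual API; every name below is illustrative, and it only shows the kind of structural signals such a library can surface from raw text.

```python
# Illustrative sketch of metafeature extraction from raw text.
# NOT Elemeta's API: the class and function names here are hypothetical,
# meant only to show the kind of signals such a library computes.
import re
from dataclasses import dataclass

@dataclass
class TextMetafeatures:
    char_count: int
    word_count: int
    avg_word_length: float
    digit_ratio: float
    contains_url: bool

def extract_metafeatures(text: str) -> TextMetafeatures:
    words = text.split()
    chars = len(text)
    digits = sum(c.isdigit() for c in text)
    return TextMetafeatures(
        char_count=chars,
        word_count=len(words),
        avg_word_length=sum(len(w) for w in words) / len(words) if words else 0.0,
        digit_ratio=digits / chars if chars else 0.0,
        contains_url=bool(re.search(r"https?://", text)),
    )

print(extract_metafeatures("LLMs are everywhere: see https://thesequence.substack.com"))
```

Metafeatures like these can then be logged alongside model inputs for monitoring and drift analysis.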
4/23/2023
11:24

Open Source Generative AI is Experiencing a "Linux Moment" but it Needs an "Apache Moment"

Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
4/21/2023
11:14

💡 The Buyer’s Guide to Evaluating ML Feature Stores & Feature Platforms

If you're looking to adopt a feature store or platform, but don't know where or how to start your research, Tecton created this helpful guide for you. Download this free guide to: Access a…
4/20/2023
11:14

Edge 284: Meet Dolly 2.0: One of the First Open Source Instruction Following LLMs

Dolly applies the principles of InstructGPT to the GPT-J model.
4/19/2023
11:14

The Sequence Chat: Consensys's Lex Sokolin on Generative Art and Philosophical Principles of Generative AI

A conversation about the history, current state and foundations of generative art.
4/19/2023
7:34

The Sequence Chat: Salesforce Research's Junnan Li on Multimodal Generative AI

One of the creators of the famous BLIP-2 model shares his insights about the current state of multimodal generative AI.
4/19/2023
6:55

Inside LangChain: The Super Popular LLM Framework You Need to Know About

LangChain is part of a generation of new frameworks that are integrating LLMs into mainstream software development lifecycles.
4/19/2023
6:25

📌 Webinar: Improving search relevance with ML monitoring

Let's take a dive into ML systems for ranking and search relevance and what it means to monitor them for quality, edge cases, and corrupt data.
4/19/2023
5:55

Big vs. Small, Open Source vs. API-Based: The Philosophical Frictions of Foundation Models

Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
4/19/2023
5:35

📝 Guest Post: How to Enhance the Usefulness of Large Language Models*

In this guest post, Filip Haltmayer, a Software Engineer at Zilliz, explains how LangChain and Milvus can enhance the usefulness of Large Language Models (LLMs) by allowing for the storage and…
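
To make the storage-and-retrieval pattern behind that combination concrete, here is a toy sketch in plain Python. It deliberately avoids the LangChain and Milvus APIs; the word-overlap score stands in for real embedding similarity, and the document snippets are invented.

```python
# Toy sketch of retrieval-augmented prompting: store text chunks, retrieve the
# most relevant ones for a question, and fold them into the LLM prompt.
# Plain Python only; word overlap stands in for real vector similarity.
documents = [
    "Vicuna is a LLaMA-based model fine-tuned on shared conversations.",
    "Dolly 2.0 is one of the first open source instruction-following LLMs.",
    "LangChain is a framework for composing LLM calls with external data.",
]

def score(question: str, doc: str) -> int:
    # Toy relevance score: number of shared lowercase words.
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, k: int = 2) -> list[str]:
    return sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]

question = "Which framework composes LLM calls with external data?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # this augmented prompt would then be sent to the LLM
```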
4/19/2023
5:24

Edge 283: Federated Learning and Differential Privacy

Applying differential privacy to federated learning (FL) scenarios, Meta AI's research, and the best open source frameworks in this area.
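
As a rough illustration of how differential privacy is usually wired into federated averaging, the sketch below clips each client update and adds Gaussian noise to the aggregate. The clip norm and noise multiplier are illustrative values, not parameters taken from the research covered in this issue.

```python
# Minimal sketch of differentially private federated averaging: clip each
# client update to bound its sensitivity, then add Gaussian noise to the sum
# before averaging. Parameter values are illustrative only.
import numpy as np

def dp_fedavg(client_updates: list[np.ndarray], clip_norm: float = 1.0,
              noise_multiplier: float = 1.1,
              rng: np.random.Generator = np.random.default_rng(0)) -> np.ndarray:
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]                      # per-client clipping
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)             # noisy average update

updates = [np.array([0.5, -1.2, 3.0]), np.array([0.1, 0.4, -0.2])]
print(dp_fedavg(updates))
```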
4/11/2023
11:14

Edge 281: Cross-Device Federated Learning

Cross-device federated learning (FL), Google's work on FL with differential privacy, and the FedLab framework.
4/10/2023
19:14

📝 Guest Post: Caching LLM Queries for Improved Performance and Cost Savings*

If you're looking for a way to improve the performance of your large language model (LLM) application while reducing costs, consider utilizing a semantic cache to store LLM responses. By caching…
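
The core idea of a semantic cache is to match a new query against embeddings of previously answered queries and reuse the stored response when the similarity clears a threshold. Below is a minimal sketch of that loop; embed() and call_llm() are hypothetical stand-ins, not the API of any particular caching library.

```python
# Minimal sketch of a semantic cache for LLM calls: before calling the model,
# look for a previously answered query whose embedding is close enough and
# return its cached response. embed() and call_llm() are toy stand-ins.
import math
from collections import Counter

def embed(text: str) -> Counter:           # stand-in for a real embedding model
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def call_llm(prompt: str) -> str:          # stand-in for the actual API call
    return f"<model answer to: {prompt}>"

cache: list[tuple[Counter, str]] = []      # (query embedding, cached response)

def cached_completion(prompt: str, threshold: float = 0.8) -> str:
    q = embed(prompt)
    for vec, response in cache:
        if similarity(q, vec) >= threshold:
            return response                # cache hit: skip the expensive call
    response = call_llm(prompt)
    cache.append((q, response))
    return response

print(cached_completion("What is a semantic cache"))
print(cached_completion("What is a semantic cache exactly"))  # near-duplicate hits the cache
```

In a real deployment, the threshold trades cost savings against the risk of serving a stale or subtly wrong answer.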
4/9/2023
11:14

The LLaMA Effect: How an Accidental Leak Sparked a Series of Impressive Open Source Alternatives to ChatGPT

Sundays, The Sequence Scope brings a summary of the most important research papers, technology releases and VC funding deals in the artificial intelligence space.
4/8/2023
12:24

📌 EVENT: Join us at LLMs in Production conference – the first of its kind

How can you actually use LLMs in production? There are still so many questions. Cost. Latency. Trust. What are the real use cases? What are the challenges in productionizing them? The MLOps community decided…