Amazon's Shovels, Anthropic's Castle: A Symbiotic AI Moat
It’s not even an AI race anymore; it seems that what really matters is the moat a company has (thanks, SemiAnalysis, for popularizing the term). The Usual Suspects©, including Google, Meta, Amazon, Apple, Microsoft, and genAI darlings OpenAI and Anthropic (and a few smaller ones), compete every week on whose moat is moater.
On August 7, we wrote: “While Amazon's aspirations are vast, they notably lack access to OpenAI's models and are yet to get their hands on Meta's Llama 2. This AI tug-of-war among tech behemoths is fascinating, to say the least.”
This week, Amazon and Anthropic opted for a tried-and-true formula: LLM developer + provider of 'picks and shovels' = a strong moat.
Amazon is going to invest up to $4 billion in Anthropic (an initial investment of $1.25 billion for a minority stake, with the option to increase the total to $4 billion). AWS gains a strategic edge by incorporating Anthropic's foundation models into Amazon Bedrock. On the flip side, Anthropic benefits from AWS's massive scale, security, and performance capabilities.
Here are a few other key points of this collaboration:
- Anthropic selects AWS as its primary cloud provider for mission-critical workloads, safety research, and future foundation model development.
- Anthropic will use AWS's Trainium and Inferentia chips for building, training, and deploying its future foundation models.
- Both will collaborate on the development of future chip technology.
|
This is a solid (or should we say deep?) moat: AWS solidifies its standing in the AI space, while Anthropic gets an opportunity to scale rapidly and reach a wider audience via AWS's extensive network. Both Amazon and Anthropic say they are committed to the responsible deployment of AI technologies and have made public commitments to that effect. The best part (at least for Anthropic) is that Amazon has an immense amount of user data that it is free to use however it wants. It’s like having not only a moat but also a source of the water of life. Which is always important when you live in an AI castle.
Additional Info: this week Anthropic also published its Responsible Scaling Policy, Version 1.0 (key elements include model evaluations, iteratively defined safety levels, security measures, an evaluation protocol, transparency, and pausing scaling if necessary).
“I have tremendous respect for Dario, the Anthropic team, and their foundation models,” tweeted Andy Jassy, “and believe that together we can help improve many customer experiences.” And Amazon’s moat.
You are currently on the free list. Support our work by upgrading to Premium. Join the top companies, AI labs, and universities that trust our historical deep dives, keen eye for detail, innovative ideas, and timely updates.
|
|
News from The Usual Suspects ©
OpenAI goes Multimodal
OpenAI never stops innovating and delivering. Google is still promising Gemini, while ChatGPT is already introducing voice and image capabilities, expanding its utility and accessibility. For voice, both iOS and Android users can opt in, while image capabilities will be accessible on all platforms.
On the application side, there's already a partnership with Be My Eyes to assist visually impaired users, and an interesting collaboration with Spotify for voice translation features.
We agree with Sasha Luccioni from Hugging Face, though: it’s always important to remember why we shouldn’t anthropomorphise AI:
Sasha Luccioni, PhD 💻🌎🦋✨🤗 (@SashaMTL): "The always and forever PSA: stop treating AI models like humans. No, ChatGPT cannot 'see, hear and speak'. It can be integrated with sensors that will feed it information in different modalities. Don't fan the flames of hype, y'all."
Quoting OpenAI (@OpenAI): "ChatGPT can now see, hear, and speak. Rolling out over next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms)." (Sep 25, 2023)
|
|
Not public yet but coming soon: the very promising DALL·E 3, which can translate nuanced requests into extremely detailed and accurate images.
Introducing DALL·E 3
|
|
Also OpenAI:
- partners with Tools Competition to launch the Learning Impact Prize, offering $100,000 and API credits for innovative AI education projects.
- opens a call for experts to join its Red Teaming Network, aiming to enhance the safety of its machine learning models.
|
Google's AI Endeavors
Google is trying to keep up: it has updated its PaLM 2 model and is now introducing Bard Extensions, which connect Bard to Gmail and other Google apps and services, along with a "Double-Check" feature for checking information accuracy.
Additional Info: How Google taught its AI to doubt itself by Platformer and An Overview of Google’s AI Product Strategy by AI Supremacy.
Also Google:
|
Microsoft: from "a PC on every desk" to Copilot on every PC and laptop
Microsoft’s commitment to AI innovation manifests through Copilot, an AI assistant integrated across the Windows ecosystem, including Bing, Edge, and Microsoft 365. Designed as an everyday assistant woven into the whole Microsoft universe, Copilot aims to be a lifestyle companion.
The tech giant also rolled out AI enhancements for Bing and Edge, offering features like personalized answers and DALL·E-powered image creation. On the enterprise front, Microsoft 365 Copilot will debut on November 1, bringing AI functionalities to Outlook, Word, and Excel.
Additional Info: an interesting write-up about the old and new Microsoft strategy in Stratechery.
Also Microsoft:
- announces new Surface devices, designed to showcase the firm's advances in AI and fortify the belief that hardware, software, and AI should work in harmony.
- presents Kosmos-2.5, a model adept at machine reading of text-intensive images, which adds another layer to Microsoft's AI strategy with a flexible approach to text and image understanding.
|
The moats are being hastily dug around these newly built AI castles, often in such a rush that it leads to invented information, factual errors, and mistakes in demos. Just a note that we should be aware of this and race responsibly.
Can’t wait to see what comes next: a strategic moat between Apple and Inflection AI?
Practical Implementation (it’s also a cool video)
Tesla Optimus (@Tesla_Optimus): "Optimus can now sort objects autonomously 🤖 Its neural network is trained fully end-to-end: video in, controls out. Come join to help develop Optimus (& improve its yoga routine 🧘) → tesla.com/AI" (Sep 23, 2023)
|
|
Jim Fan from Nvidia engaged in some mental reverse engineering and distilled his insights into a fascinating long-read article.
|
Twitter Library
10 AI Code Companions: Models and Tools: Avoid hype and explore AI & ML in depth and width. Get the full perspective: news, company profiles, AI history, global AI policies, and investment trends from industry experts → www.turingpost.com/p/10-code-assistants
|
|
We recommend
Bot Eat Brain: AI is eating the world and your brain is on the menu. Join 20,000+ readers staying one step ahead of the bots with our daily AI newsletter. → Subscribe
|
|
Fresh Research Papers, categorized for your convenience (all links lead to the original papers)
Vision Transformers and Computer Vision
- RMT: Retentive Networks Meet Vision Transformers: proposes the Retentive Transformer Model (RMT) by adapting Retentive Networks to computer vision tasks. Its 2D Retentive Self-Attention mechanism promises superior performance over current state-of-the-art models in image classification and other downstream vision tasks (see the sketch after this list) →read more
- DualToken-ViT: Position-aware Efficient Vision Transformer with Dual Token Fusion: combines local information from convolution layers and global information from downsampled self-attention, using position-aware global tokens for enrichment and enhanced efficiency →read more
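For intuition only, here is a minimal sketch of the spatial-decay idea behind a 2D retentive-style self-attention: attention weights are damped by a factor gamma raised to the Manhattan distance between patch positions on the image grid. This is our simplified reading, not the paper's exact formulation; the function and parameter names (manhattan_decay_mask, gamma) are our own illustrative choices.

```python
import torch

def manhattan_decay_mask(h: int, w: int, gamma: float = 0.9) -> torch.Tensor:
    """Decay matrix D[i, j] = gamma ** (Manhattan distance between patches i and j)."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (h*w, 2)
    dist = (coords[:, None, :] - coords[None, :, :]).abs().sum(-1)      # (h*w, h*w)
    return gamma ** dist

def retentive_style_attention(q, k, v, h, w, gamma: float = 0.9):
    """Softmax attention whose weights are damped by spatial distance (simplified)."""
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)       # (..., h*w, h*w)
    attn = attn * manhattan_decay_mask(h, w, gamma).to(q.dtype)
    attn = attn / attn.sum(dim=-1, keepdim=True)                        # re-normalize rows
    return attn @ v

# toy usage: a 4x4 grid of patch tokens with embedding dim 8
q = k = v = torch.randn(1, 16, 8)
out = retentive_style_attention(q, k, v, h=4, w=4)
print(out.shape)  # torch.Size([1, 16, 8])
```

The design intuition is simple: nearby patches keep most of their attention weight, while far-away patches are progressively discounted, which injects a spatial prior without changing the rest of the transformer block.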
|
Fine-Tuning and Efficiency in LLMs
|
Data Compression
|
Fact-Checking, Verification in LLMs, and Model Limitations
- The Reversal Curse: LLMs trained on “A is B” fail to learn “B is A": highlights limitations in the logical inference capabilities of LLMs, focusing on their failure to deduce "B is A" after being trained on "A is B" →read more
- Chain-of-Verification (CoVe) Reduces Hallucination in LLMs: proposes CoVe to reduce factual hallucinations in LLM responses. CoVe works by drafting an initial response, generating verification questions, answering those questions independently, and then producing a final, fact-checked response (see the sketch after this list) →read more
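To make that pipeline concrete, here is a minimal sketch of the Chain-of-Verification loop as described above. The `llm` argument is a placeholder for whatever prompt-to-completion callable you use; the prompts and names are our illustrative assumptions, not the paper's exact prompts.

```python
def chain_of_verification(question: str, llm) -> str:
    """Minimal CoVe loop: draft -> verification questions -> independent answers -> revised answer."""
    # 1. Draft an initial (possibly hallucinated) response.
    draft = llm(f"Answer the question:\n{question}")

    # 2. Plan verification questions that probe the factual claims in the draft.
    plan = llm(
        "List short verification questions (one per line) that would check "
        f"the factual claims in this answer:\n{draft}"
    )
    verification_questions = [q for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently of the draft,
    #    so earlier hallucinations are not simply repeated.
    verifications = [(q, llm(q)) for q in verification_questions]

    # 4. Produce the final, verified response using the checked facts.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in verifications)
    return llm(
        f"Original question:\n{question}\n\nDraft answer:\n{draft}\n\n"
        f"Verified facts:\n{evidence}\n\n"
        "Rewrite the answer, keeping only claims supported by the verified facts."
    )

# usage sketch: pass any callable that maps a prompt string to a completion string
# answer = chain_of_verification("Name some politicians born in New York.", llm=my_model)
```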
|
In other newsletters
|
Thank you for reading; please share it with your friends and colleagues! 🤍
|
Another week with fascinating innovations! We call this overview “Froth on the Daydream", or simply FOD. It’s a reference to the surrealistic and experimental novel by Boris Vian; after all, AI is experimental and feels quite surrealistic, and a lot of the writing on this topic is just froth on the daydream.
Leave a review!
|
|
|