| Next Week in Turing Post: |
Wednesday, AI 101: a deep dive into JEPA (Joint Embedding Predictive Architecture), a framework designed to improve self-supervised learning by predicting future observations from current embeddings.
Friday: a new investigation into the world of AI Unicorns.
| If you like Turing Post, consider becoming a paid subscriber. You’ll immediately get full access to all our articles, investigations, and tech series → |
| "People who are really serious about software should make their own hardware." This famous quote, often attributed to Steve Jobs but originally from computer scientist Alan Kay, encapsulates Apple's philosophy: a deep integration of hardware and software is key to delivering the best user experiences. Apple's journey in chip design, from outsourcing to developing its own silicon like the A-series and M-series, underscores this philosophy. It began with Steve Jobs' vision and was realized through significant investments in research and development, strategic acquisitions (like PA Semi in 2008), and a close partnership with TSMC for chip fabrication. By making the right move with designing its own chips back then, Apple has set the stage for its current AI strategy. Everyone is now watching to see how Apple will play its AI game, and they answered this question at their annual Worldwide Developers Conference (WWDC) in Cupertino. | The AI Leak and Apple's Strategy | While Microsoft and Nvidia have embargoed pre-briefings to keep one big thing confidential until the very event, Apple involuntarily had to take a different strategy. Everything they were going to announce on Monday was leaked to Bloomberg's Mark Gurman the day before. How that happened – nobody knows, but the company decided to ignore it and calmly announced the old news. Apple has been steadily leaking iPhones since almost the very beginning, so who knows, maybe it is their strategy. Nonetheless, their announcements prove the original strategy of seamless integration of its existing hardware and software ecosystem. This is where the "Apple Intelligence" announcement comes in. | Apple Intelligence and The Power of Integration (with OpenAI) | Two main things from the today’s keynote: Apple’s AI system called "Apple Intelligence," and its collaboration with OpenAI. By waiting and not developing its own conversational LLM like GPT, Apple achieved at least two things: it observed several rounds of GPT developments and saved hundreds of millions of dollars on training the model. Now it can weave the existing and working model into the fabric of the Apple experience. They introduced AI-powered Siri, which will be able to address everything on your Apple device. AI features will include summarizing articles, emails, and messages, and auto-reply suggestions. Big hits are AI-created custom emojis and auto-transcription in Voice Memos. Enhanced photo editing in the Photos app is finally here also. A nod to developers is AI-infused Xcode for automatic code completion, similar to GitHub Copilot. | Apple's advantage is its control over the entire stack, from the chip to the operating system to the apps. This allows for a level of optimization and integration that competitors can't easily match. | Why Apple's AI Matters | Apple's entry into the AI arena is significant for several reasons: | Timing: Apple isn't early, but it's not late either. The company is entering the AI landscape at a time when it can learn from others' mistakes and focus on delivering AI features that genuinely enhance the user experience. Hardware Boost: AI is computationally intensive, and Apple's latest devices are well-equipped to handle it. This could drive a wave of upgrades, benefiting Apple's bottom line. Ecosystem Lock-In: By integrating AI into its ecosystem, Apple makes its devices even more indispensable to users. This strengthens its platform and makes it harder for users to switch to competitors. 
Privacy Focus: Apple has a strong reputation for privacy, and its AI features are designed with this in mind, favoring on-device processing where possible.
| The Road Ahead |
While WWDC didn't bring new hardware, it did offer a glimpse into Apple's AI-powered future. The timing is well measured for Apple to become a major player in the AI landscape. As AI continues to evolve, Apple's ability to seamlessly blend it with its hardware and software could set a new standard for user experiences. The company is betting that when it comes to AI, the whole is greater than the sum of its parts. And if history is any indication, Apple's bet is likely to pay off.
| Additional reading: a great analysis by Stratechery. |
| Click the link below to support our partners 🙂 They’ve created a new standard for enterprise GenAI evaluation 🌕 We highly recommend → |
Why it’s great:
💰 Cost-effective: 97% cheaper than GPT-3.5
⚡ Fast: 11x faster than GPT-3.5
🎯 Accurate: 18% more accurate than GPT-3.5
📈 No ground-truth data needed: simplifies deployment and maintenance
🔧 Customizable: quickly fine-tune for your specific evaluation requirements
Fortune 500 teams already use Luna to detect hallucinations, prevent prompt attacks, and enforce data privacy. Get started with Luna today!
| 10 Free Books to Master Machine Learning for Every Level | Explore foundational concepts, advanced techniques, and practical guides | www.turingpost.com/p/free-machine-learning-books |
| News from The Usual Suspects © |
Yann LeCun @ylecun
- Regulators should regulate applications, not technology.
- Regulating basic technology will put an end to innovation.
- Making technology developers liable for bad uses of products built from their technology will simply stop technology development.
- It will certainly stop the… x.com/i/web/status/1…

Andrew Ng @AndrewYNg
The effort to protect innovation and open source continues. I believe we’re all better off if anyone can carry out basic AI research and share their innovations. Right now, I’m deeply concerned about California's proposed law SB-1047. It’s a long, complex bill with many parts… x.com/i/web/status/1…
| The day after Nvidia introduced its new chips ahead of the official Computex schedule, AMD announced the MI325X accelerator, available in Q4 2024, and plans for the MI350 series in 2025, boasting 35x better AI-inference performance than the MI300 series. AMD also detailed the MI400 series, expected in 2026. Like Nvidia, AMD will now ship new chips annually.
| Microsoft faced backlash over its "Recall" feature, which captures screenshots every five seconds and poses significant security risks. Instead of shipping the feature enabled by default, Microsoft switched it to opt-in, requiring facial recognition or fingerprint ID and encrypting the search database. From the very beginning, Recall didn't seem well thought through.
On a positive note: Microsoft just introduced Aurora (1.3B parameters), the first large-scale foundation model of the atmosphere. It uses 3D Swin Transformers and Perceiver-based encoders and achieves remarkable accuracy, outperforming traditional models at predicting atmospheric dynamics, air pollution levels, and extreme weather events with a computational speed-up of roughly 5,000x.
| Google details in its blog how its AI-based code completion now assists with 50% of code characters, significantly boosting developer productivity. Recent updates include AI resolving over 8% of code review comments and adapting pasted code to its new context. Future goals involve expanding AI to testing, code understanding, and maintenance.
| Misinformation has been one of the main topics in AI discourse, and AI practitioners like Oren Etzioni are starting companies (check our interview) to prevent harmful misinformation. Though it remains a problem, a paper by Ceren Budak et al. suggests that misinformation is more manageable than previously thought.
Additional read: Platformer discusses how misinformation's impact is smaller than expected, challenging the notion that it severely undermines democracy. A small number of individuals is responsible for most misinformation spread, and its influence is limited. Platforms like Twitter reduced misinformation significantly by banning habitual spreaders. While concerns remain about the information environment, there is hope that platforms can control misinformation effectively.
| Image Credit: CBInsights |
| In other newsletters |
| The freshest research papers, categorized for your convenience |
| Our top |
"The idea here is that just as we've built foundation models to help us better classify and generate human language, we might seek to do the same with animals," say researchers from the Max Planck Institute of Animal Behavior, who, together with their collaborators, introduced animal2vec and MeerKAT: a self-supervised transformer for rare-event raw audio input and a large-scale reference dataset for bioacoustics. animal2vec, a self-supervised transformer model, addresses the challenges of analyzing sparse and unbalanced bioacoustic data by learning from raw audio waveforms. The MeerKAT dataset, the largest publicly available labeled dataset on non-human terrestrial mammals, contains over 1,068 hours of audio, including 184 hours with detailed vocalization labels. Isn't that incredible? I told my dog that I would soon understand his language, and he rolled his eyes.
Researchers from OpenAI introduced the paper "Scaling and Evaluating Sparse Autoencoders," which presents a methodology for interpreting and extracting concepts from GPT-4. Using scalable techniques, they decomposed its internal representations into 16 million patterns. Despite the challenge of capturing the full behavior of large models, their sparse autoencoders identify human-interpretable features. The research aims to enhance model trustworthiness and steerability, and OpenAI is open-sourcing the findings, including papers, code, and visualizations. A minimal sketch of the core mechanism follows below.
In the paper "Open-Endedness is Essential for Artificial Superhuman Intelligence," researchers from Google DeepMind argue that achieving artificial superhuman intelligence (ASI) requires open-ended AI systems: those capable of continuous self-improvement and discovery. They provide a formal definition of open-endedness based on novelty and learnability, illustrate the potential of foundation models combined with open-ended systems to make human-relevant discoveries, and highlight the safety implications of developing such AI.
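The core trick in the OpenAI paper is easy to sketch: train a wide, sparsely activated autoencoder on a model's internal activations so that each latent ideally corresponds to an interpretable feature. Below is a minimal top-k sparse autoencoder in PyTorch; the layer sizes, the k value, the random stand-in activations, and the training loop are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class TopKSparseAutoencoder(nn.Module):
    """Minimal top-k sparse autoencoder: encode activations into a wide,
    mostly-zero latent space, then reconstruct the input from it."""
    def __init__(self, d_model: int, n_latents: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode, then keep only the k largest activations per example.
        z = torch.relu(self.encoder(x))
        topk = torch.topk(z, self.k, dim=-1)
        sparse_z = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        return self.decoder(sparse_z), sparse_z

# Toy training loop on random "activations" (stand-ins for residual-stream
# activations captured from a language model).
model = TopKSparseAutoencoder(d_model=512, n_latents=8192, k=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(100):
    acts = torch.randn(64, 512)          # hypothetical captured activations
    recon, sparse_z = model(acts)
    loss = ((recon - acts) ** 2).mean()  # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()
```

The top-k constraint is what makes the latents sparse: only k of the 8,192 latents may be non-zero for any input, which pushes each latent to specialize, ideally into a human-interpretable feature.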
| Reinforcement Learning and Agent Development |
Advancing DRL Agents in Commercial Fighting Games: Training, Integration, and Agent-Human Alignment explores developing and deploying a DRL agent system in a commercial game using Heterogeneous League Training to balance competence and efficiency, aligning agent behavior with human expectations. Read the paper
Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning investigates how RL agents can accumulate culture across generations, enhancing capabilities through episodic in-context learning and in-weights learning, inspired by human cultural evolution. Read the paper
Self-Improving Robust Preference Optimization introduces an offline RLHF framework to enhance alignment with human preferences, treating learning as a self-improvement process optimized through a min-max objective. Read the paper
AGENTGYM: Evolving LLM-based Agents across Diverse Environments presents a framework for training LLM-based agents to handle diverse tasks, integrating behavioral cloning and a novel evolution method to achieve performance comparable to state-of-the-art models; a behavioral-cloning sketch follows this list. Read the paper
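Behavioral cloning, one of AGENTGYM's ingredients, is the simplest to sketch: treat expert trajectories as a supervised dataset and train the policy to imitate the expert's actions. Everything below, from the dimensions to the random "expert" data, is an illustrative stand-in, not AGENTGYM's actual setup.

```python
import torch
import torch.nn as nn

# Behavioral cloning: supervised learning on expert (state, action) pairs.
obs_dim, n_actions = 16, 4
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

expert_obs = torch.randn(1024, obs_dim)                 # hypothetical expert states
expert_actions = torch.randint(0, n_actions, (1024,))   # hypothetical expert actions

for _ in range(50):
    logits = policy(expert_obs)
    loss = loss_fn(logits, expert_actions)  # push the policy toward the expert's choices
    opt.zero_grad(); loss.backward(); opt.step()
```

The appeal is that no reward signal or environment interaction is needed; the known weakness is that the policy drifts once it reaches states the expert never visited, which is one reason frameworks like AGENTGYM pair cloning with further evolution.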
| Language Models and Natural Language Processing |
MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark presents an enhanced benchmark that evaluates LLMs with more difficult questions, showing a significant drop in model accuracy and highlighting its effectiveness at distinguishing model capabilities. Read the paper
Show, Don’t Tell: Aligning Language Models with Demonstrated Feedback introduces Demonstration Iterated Task Optimization to align LLMs with user preferences using fewer examples, significantly outperforming other methods. Read the paper
To Believe or Not to Believe Your LLM explores uncertainty quantification in LLMs, developing an information-theoretic metric to detect high epistemic uncertainty and identify unreliable responses and hallucinations; a simple uncertainty sketch follows this list. Read the paper
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models proposes a framework enhancing LLMs' reasoning abilities using a meta-buffer to store high-level thought-templates, achieving significant performance improvements across diverse tasks. Read the paper
PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs introduces a framework for distilling LLMs using preference data, leveraging quality discrepancy for a ranking loss that improves model performance and generation quality. Read the paper
Item-Language Model for Conversational Recommendation combines a text-aligned item encoder with a frozen LLM to integrate user interaction signals, enhancing recommendation performance while maintaining language and reasoning capabilities. Read the paper
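The exact information-theoretic metric in "To Believe or Not to Believe Your LLM" is built on iterative prompting and is more involved than a snippet allows, but a much simpler proxy from the same family conveys the intuition: sample the same question several times and measure the entropy of the answers. The toy answers below are made up for illustration.

```python
from collections import Counter
import math

def answer_entropy(samples: list[str]) -> float:
    """Shannon entropy (in bits) of the empirical answer distribution.
    High entropy -> the model's sampled answers disagree, a rough
    proxy for epistemic uncertainty."""
    counts = Counter(s.strip().lower() for s in samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical: ask the same question N times at temperature > 0, then score.
consistent = ["Paris", "Paris", "Paris", "paris", "Paris"]
scattered  = ["Paris", "Lyon", "Marseille", "Paris", "Nice"]
print(answer_entropy(consistent))  # 0.0  -> low uncertainty
print(answer_entropy(scattered))   # ~1.9 -> high uncertainty
```

The paper's contribution goes further, separating epistemic uncertainty (the model doesn't know) from aleatoric uncertainty (several answers are genuinely valid), which a plain entropy score like this one cannot do.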
| Model Scalability and Efficiency |
Will we run out of data? Limits of LLM scaling based on human-generated data analyzes constraints on LLM scaling due to the finite stock of human-generated text, suggesting synthetic data, transfer learning, and improved data efficiency to sustain progress. Read the paper
µLO: Compute-Efficient Meta-Generalization of Learned Optimizers develops a method for improving the generalization of learned optimizers using Maximal Update Parametrization, enabling zero-shot generalization of optimizer hyperparameters. Read the paper
Block Transformer: Global-to-Local Language Modeling for Fast Inference introduces an architecture that optimizes inference speed by separating global and local context modeling, drastically reducing key-value cache retrieval needs and increasing throughput. Read the paper
Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms analyzes reward overoptimization in the DAAs used to train LLMs, proposing mitigations for performance degradation at higher KL-divergence budgets; a minimal DPO sketch follows this list. Read the paper
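For readers unfamiliar with direct alignment algorithms (DAAs): the canonical one is Direct Preference Optimization (DPO), whose loss is compact enough to show. This is a generic DPO sketch with made-up log-probabilities, not the scaling-laws paper's experimental setup; beta is the knob that sets the implicit KL budget the paper studies.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """Direct Preference Optimization loss. Inputs are per-example sequence
    log-probabilities of the chosen/rejected responses under the trainable
    policy and a frozen reference model. Larger beta keeps the policy closer
    to the reference (a tighter implicit KL budget)."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with made-up numbers: loss shrinks as the policy prefers the
# chosen response over the rejected one more strongly than the reference does.
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss)
```

Overoptimization, in this framing, is what happens when the policy drifts far from the reference (a large KL budget) and exploits quirks of the preference signal; the paper measures how badly that degrades performance as the budget grows.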
| Leave a review! |
| Please send this newsletter to your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve. You will get a 1-month subscription! |