An October Cornucopia of AI Prognostications
For some reason, October is ripe with AI reports. In the same week, the AI community received both the Kaggle AI Report and Nathan Benaich's State of AI Report. No joke, the world is indeed in turmoil. Halloween is upon us. Zombies haven't started munching on brains – yet – but self-aware AIs could well decide to channel their inner utilitarian philosopher and maximize global happiness. That scenario might be as concerning as listening to Sam Altman's musings on AGI. The point being: not only does AI need benchmarks, but we also deserve support. So the reasoning might have been: why wait for year-end retrospectives when we're all still here and, tentatively at least, still listening?
Missing the Forest for the Trees
While the annual reports from State of AI and Kaggle offer a kaleidoscope of predictions for the next year, they may not capture the seismic shifts truly dictating the future of the field. The State of AI report posits bold expectations for the coming year: a generative AI media company will come under investigation for election meddling; tech IPOs will thaw; and AI companies will face antitrust scrutiny. Kaggle, on the other hand, zooms in on technical specifics, discussing ethical considerations in generative AI and the complexities of computer vision and ML for tabular data. What these reports might overlook, however, is the bigger picture. There are deeper currents, not always obvious, that merit our collective contemplation: the herculean undertaking in neuroscience to map human brain cell types, and the leap in energy-efficient machine learning hardware, for example.
Nature Speaks to Us
One can argue that we're overlooking developments that could fundamentally change our understanding of intelligence, human or artificial. Take, for example, a monumental feat in neuroscience: the creation of a comprehensive atlas showcasing over 3,000 human brain cell types. The atlas, detailed in a Nature article, promises to decode the brain's complexity, opening doors to AI models informed not just by computational data but by a richer biological context. This work, which spans 21 papers, will “aid the study of diseases, cognition and what makes us human, among other things.”
Similarly, engineers at Northwestern University have developed nanoelectronic devices that make machine learning 100 times more energy efficient. Detailed in a Nature Electronics paper, these devices can perform AI tasks in real time without relying on the cloud, thereby improving data privacy. Not only do they contribute to sustainable AI, but they also offer real-world applications far removed from the speculative games we often play when predicting the future of AI.
Looming Questions
Before the end of the year, these are the questions we keep contemplating:
AI is both a culprit and a solution in the climate crisis. Is it a yin-yang scenario where AI contributes to massive energy consumption but also holds the promise of optimizing energy grids, predictive maintenance for renewables, and more?

With multimodality becoming a trend, how can we build ML models that not only ingest multimodal data but also interpret it in a context-sensitive manner to produce more nuanced outputs?

We are also about to figure out Reinforcement Learning from Human Feedback (RLHF) for open source, with people such as Yann LeCun suggesting that “Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style. It is the only way for LLMs to become the repository of all human knowledge and cultures.” How will access to the repository of all human knowledge, combined with neuroscience discoveries, enhance us?

In a society where opinions are often shaped by headlines, how is the layperson's perception of AI evolving? Are we looking at a future where "AI literacy" becomes as fundamental as reading and math, especially given the influence of AI in decision-making processes from healthcare to finance?

How could advancements in AI radically alter our social fabric?

What other historical research and ideas (like those mentioned here) were overlooked and/or forgotten?
|
As we ponder these questions, maybe we'll find the answers we didn't even know we were seeking. Please email us with your thoughts and questions.
Or… we can just use Mistral Trismegistus: a new model designed for a niche audience interested in esoteric and spiritual topics 🙂
|
|
|
News from The Usual Suspects ©
Andreessen vs Marcus: The Tug of War in Techno-Optimism
Marc Andreessen paints a vivid, unflinching techno-optimist future, advocating an unbridled embrace of technology and markets as harbingers of prosperity. Gary Marcus counters with a more nuanced approach, scrutinizing Andreessen's 11,000-word essay for not substantiating its claims with data and for failing to address the proverbial elephants in the room – like climate change and misinformation. While both agree that technology is pivotal for the future, they diverge on how blind or calculated that optimism should be →get popcorn
OpenAI: The Evolving Ethos
Meanwhile, as the ether is occupied with word fights, OpenAI has quietly shuffled its core values, setting its sights squarely on AGI. Gone are words like 'Audacious' and 'Unpretentious,' replaced by 'AGI focus' leading the charge, alongside 'Intense and scrappy.' The update to its published core values offers a transparent lens into how the mission is shifting gears. General intelligence is a term that's bandied about often, but could it be more myth than reality? The so-called race to Artificial General Intelligence (AGI) feels like tilting at windmills, diverting resources from more immediate, tangible problems.
(side note: instead of our usual Midjourney, we used DALL-E 3 to create the cover for this newsletter. Being able to chat with an image generator and refine the picture in real time is incredible!)
Google's AI-Powered Artist and Wordsmith
Google isn't just satisfied with answering queries; it wants to inspire our imagination. The Search Generative Experience (SGE) now comes with capabilities to generate images based on user prompts and offers a draft function that enables users to alter the length and tone of the content it produces. It's not only assisting you in finding information but also in shaping it.
The Steering Wheel for Language Models
NVIDIA’s SteerLM is billed as a democratizing force in the world of LLMs. With real-time "knobs" for tweaking model behavior, it transitions from a one-size-fits-all model to a bespoke tool. Moreover, a customized 13B Llama 2 model is already up for grabs for real-world testing.
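To make the "knobs" idea concrete, here is a minimal Python sketch of attribute-conditioned prompting in the SteerLM spirit. The attribute names, value ranges, and prompt template are our assumptions for illustration, not an official snippet; the released model expects NVIDIA's own template and is meant to be served through the NeMo stack.

```python
# A minimal, illustrative sketch of the "knobs" idea behind SteerLM-style
# attribute-conditioned steering. The attribute names, value ranges, and the
# prompt template below are assumptions for illustration only -- the released
# model expects NVIDIA's own template and is served via the NeMo stack.

def build_steered_prompt(user_message: str, knobs: dict[str, int]) -> str:
    """Append attribute 'knobs' so the model can condition its answer on them."""
    knob_str = ",".join(f"{name}:{value}" for name, value in knobs.items())
    return (
        "System: A chat between a curious user and a helpful assistant.\n"
        f"User: {user_message}\n"
        f"Assistant (steering attributes: {knob_str}):"
    )

prompt = build_steered_prompt(
    "Explain flash storage to a ten-year-old.",
    # Turning the dials: high quality and creativity, a little humor, no toxicity.
    knobs={"quality": 4, "creativity": 4, "humor": 2, "toxicity": 0},
)
print(prompt)  # feed this string to whichever endpoint serves the steerable model
```

The appeal is that the same checkpoint can behave differently at inference time simply by changing these values, instead of fine-tuning a separate model per persona.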
Adobe Fires Up the AI Ring
Adobe isn’t one to stand idly by. Adobe MAX saw a flurry of announcements, including three new Adobe Firefly models for images, vectors, and design. The sheer number of AI features and tools was staggering, signaling Adobe's eagerness to become a heavyweight contender in the AI arena.
|
Turing Post as a Guest
AI, Don’t Mimic Us: Expand Beyond Our Human-Like Thinking (with Turing Post Newsletter Founder Ksenia Se)
|
|
In this podcast, I argue that expanding AI literacy will broaden people's creativity, similar to how increased literacy rates during the industrial revolution accelerated progress. Instead of limiting ourselves by anthropomorphizing AI or fearing it will replace us, we should use AI as a tool to enhance our abilities and evolve as humans →subscribe to Creativity Squared to listen to more podcasts about creativity and AI.
Twitter Library
15 AI image generators: Discover the top text-to-image tools in 2023, from Midjourney and Jasper AI to DALL-E 2 and Stable Diffusion →www.turingpost.com/p/15-image-generators
|
|
|
|
Tech news, categorized for your convenience:
Task-Specific Enhancements for LLMs
Meta-CoT: Generalizable CoT Prompting in Mixed-task Scenarios: Presents a generalized prompting method for mixed-task scenarios, specifically improving generalization across multiple reasoning tasks (a rough sketch of the idea follows below) →read more

LEMUR: Harmonizing Natural Language and Code: Introduces open-source models proficient in both text and code, aiming to enhance the capabilities of language agents →read more
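To give a flavor of what CoT prompting in a mixed-task scenario can look like, here is a heavily simplified Python sketch: first ask the model which kind of task the question is, then prompt it with scenario-matched chain-of-thought demonstrations. The `llm` helper, the scenario labels, and the tiny demo pool are placeholders, not the paper's actual pipeline.

```python
# A rough, hypothetical sketch of a mixed-task CoT pipeline: classify the
# question's scenario first, then prompt with scenario-matched chain-of-thought
# demonstrations. `llm` is a placeholder for any text-completion callable.

from typing import Callable

# Tiny demo pool keyed by scenario; a real pool would hold several worked examples each.
DEMOS = {
    "arithmetic": "Q: A bag has 3 red and 5 blue marbles. How many marbles in total?\n"
                  "A: Let's think step by step. 3 + 5 = 8. The answer is 8.",
    "commonsense": "Q: Can a pencil fit inside a backpack?\n"
                   "A: Let's think step by step. A pencil is small and a backpack is large. Yes.",
}

def answer(question: str, llm: Callable[[str], str]) -> str:
    # Step 1: let the model decide which kind of task this is.
    scenario = llm(
        "Classify the question as 'arithmetic' or 'commonsense'. "
        f"Reply with one word.\nQuestion: {question}"
    ).strip().lower()

    # Step 2: build a CoT prompt using demonstrations that match the scenario.
    demo = DEMOS.get(scenario, DEMOS["commonsense"])
    return llm(f"{demo}\n\nQ: {question}\nA: Let's think step by step.")
```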
|
Performance and Efficiency
HyperAttention: Long-context Attention in Near-Linear Time: Introduces an attention mechanism that handles long contexts in near-linear time, significantly improving performance →read more

Flash-Decoding for Long-context Inference: Introduces a method to make the attention mechanism more efficient during inference, especially for longer sequences →read more
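For intuition on why decoding over long sequences benefits from splitting the work, here is a small NumPy sketch of the split-KV idea behind Flash-Decoding as we understand it: the single decoding query attends to chunks of the key/value cache independently (the part that gets parallelized on the GPU), and the partial softmax statistics are merged at the end. Shapes, the chunk size, and the helper function are illustrative assumptions, not the actual kernels.

```python
# A minimal NumPy sketch of the split-KV idea behind Flash-Decoding: attention for
# a single decoding step is computed over chunks of the key/value cache (which can
# run in parallel on a GPU), then the partial results are merged using their softmax
# statistics. An illustration of the principle, not the library's kernels.

import numpy as np

def chunked_decode_attention(q, K, V, chunk=1024):
    """q: (d,), K/V: (seq_len, d). Returns softmax(q K^T / sqrt(d)) @ V."""
    d = q.shape[0]
    partials = []  # (chunk_max, chunk_sum_of_exps, chunk_weighted_values)
    for start in range(0, K.shape[0], chunk):
        scores = K[start:start + chunk] @ q / np.sqrt(d)           # (c,)
        m = scores.max()                                           # chunk-local max
        e = np.exp(scores - m)                                     # stable exponentials
        partials.append((m, e.sum(), e @ V[start:start + chunk]))  # partial numerator
    # Merge chunks: rescale each partial by exp(local_max - global_max).
    g = max(m for m, _, _ in partials)
    denom = sum(s * np.exp(m - g) for m, s, _ in partials)
    numer = sum(v * np.exp(m - g) for m, _, v in partials)
    return numer / denom

# Sanity check against the naive single-pass computation (8.0 == sqrt(64)).
rng = np.random.default_rng(0)
q = rng.standard_normal(64)
K, V = rng.standard_normal((4096, 64)), rng.standard_normal((4096, 64))
w = np.exp(K @ q / 8.0 - (K @ q / 8.0).max()); w /= w.sum()
assert np.allclose(chunked_decode_attention(q, K, V), w @ V)
```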
|
Resource Efficiency and Deployment
|
Industry Contributions
|
Simulation and Real-World Interaction
|
In other newsletters
|
Thank you for reading! Please feel free to share with your friends and colleagues 🤍
|
Another week with fascinating innovations! We call this overview “Froth on the Daydream” – or simply, FOD. It’s a reference to the surrealistic and experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just froth on the daydream.
Leave a review!
|
|
|