Good morning. The idea that deep learning might be hitting a wall is one that has been collectively derided by the bulk of the AI sector for years now.

Increasingly, though, the evidence suggests that it is, at the very least, reaching a point of diminishing returns. Today, we’re getting into a bit more evidence on this point that surfaced in recent days.

As Gary Marcus told me when we chatted a few weeks ago, a new paradigm is likely needed.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

💊 AI for Good: ALS drug discovery
🔔 Vatican unveils AI project
📊 OpenAI, others dealing with ‘diminishing returns’ in current architecture
AI for Good: ALS drug discovery

Source: Unsplash
While some treatments have the potential to slow the deadly progress of amyotrophic lateral sclerosis (ALS) in some patients, there is no cure for the disease. Some pharmaceutical companies have deployed generative AI and deep learning to help identify potential treatments.

The details: Insilico Medicine, a pharmaceutical research company that has developed an integrated suite of AI-based drug discovery software, has been collaborating with academics and patient research organizations for years. In 2016, the firm partnered with Above & Beyond, a company that manages and funds research into ALS.

Insilico said that its systems are able to identify personalized novel targets in patients with ALS; once those targets are discovered, the system can generate potential drug candidates designed to respond to them. The risk here lies in the sharing of highly sensitive genetic data, which raises serious privacy concerns.

Though it’s still early days, Insilico’s goal is to develop personalized therapies for ALS patients. Some early signs are hopeful on this front; in one instance, a new therapeutic agent was able to reverse damage in an ALS patient.
Today’s fastest-growing software company might surprise you 🤳

🚨 Heads up! It's not the publicly traded tech giant you might expect… Meet $MODE, the disruptor turning phones into potential income generators. Investors are buzzing about the company's pre-IPO offering.1

📲 Mode saw 32,481% revenue growth from 2019 to 2022, ranking them the #1 overall software company on Deloitte’s most recent fastest-growing companies list2 by aiming to pioneer "Privatized Universal Basic Income" powered by technology, not government. Their flagship product, EarnPhone, has already helped consumers earn & save $325M+.

🫴 Mode’s Pre-IPO offering1 is live at $0.25/share, and 20,000+ shareholders already participated in its previous sold-out offering. There’s still time to get in on Mode’s pre-IPO raise and even lock in 100% bonus shares3… but only until their current raise closes for good. Claim this exclusive bonus while you can!4
Vatican unveils AI project

Source: Vatican
The Vatican on Monday unveiled AI-powered services for St. Peter’s Basilica that will allow visitors to explore the Basilica virtually.

The details: The launch is the result of weeks of work with Microsoft and Iconem, a tech firm that specializes in the digitization of historical sites.

The team scanned the Basilica with lasers, cameras and drones; AI algorithms were then used to reassemble those scans into a digital twin of the structure. The digital twin will become available on Dec. 1, and beyond offering visitors a virtual experience, it will also aid the Vatican in ongoing preservation efforts.
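For readers curious about the mechanics: the pipeline behind Microsoft and Iconem's digital twin isn't detailed here, but merging laser, camera and drone scans into one model typically involves aligning overlapping point clouds. Below is a minimal, purely illustrative sketch using the open-source Open3D library; the file names and the 5 cm correspondence threshold are assumptions, and this is not the actual Vatican workflow.

```python
# Illustrative sketch only -- not Microsoft/Iconem's actual pipeline.
# Aligns two overlapping scans (e.g., a drone photogrammetry export and a
# laser scan) with ICP registration, then merges them into one point cloud.
# File names and the 5 cm threshold are assumptions for this example.
import open3d as o3d

source = o3d.io.read_point_cloud("scan_drone.ply")   # hypothetical input file
target = o3d.io.read_point_cloud("scan_laser.ply")   # hypothetical input file

result = o3d.pipelines.registration.registration_icp(
    source,
    target,
    0.05,  # max correspondence distance, in meters
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# Apply the estimated transform in place, then merge the aligned scans.
source.transform(result.transformation)
merged = source + target
o3d.io.write_point_cloud("merged_basilica.ply", merged)
```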
“If the people entering the Basilica in some way have intuited the Mystery that inspired and radiates it, our mission will have been accomplished,” Cardinal Mauro Gambetti wrote.
It’s time to change the way we build digital products

Airtable ProductCentral is a complete operating system for Product teams. Synthesize customer feedback, visualize investments across all product teams, allocate resources based on real-time data, and translate every initiative into financial impact.

Learn More
DNA firm holding highly sensitive data 'vanishes' without warning (BBC).
Tesla shares pop 8% as post-election rally continues (CNBC).
A tale of two jets: The old media grapples with its new limits (Semafor).
Google DeepMind open-sourced AlphaFold 3 for academics (Google).
'I was moderating hundreds of horrific and traumatizing videos' (BBC).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
OpenAI, others dealing with ‘diminishing returns’ in current architecture

Source: Unsplash
There is a prominent belief in the AI sector in something called “scaling laws.” The basic idea is that if you keep increasing (or scaling) the amount of training data and computing power, you get increasingly more powerful AI models.

OpenAI and its CEO Sam Altman have been vocal proponents of this approach, with Altman recently likening scaling laws to the discovery of a “new square in the periodic table.”

“When we started, the core beliefs were that deep learning works and it gets better with scale,” Altman said. “A religious level belief was that that wasn't going to stop. At some point, you have to just look at the scaling laws and say ‘we’re going to keep doing this.’ There was something really fundamental going on.”

Despite what Altman is suggesting publicly, the approach doesn’t seem to be going so well internally. You might say that it has … hit a wall.

What happened: The Information reported that OpenAI’s new flagship model, Orion, isn’t improving at the expected rate, prompting the company to begin exploring new methods of improving models after training.

According to the report, Orion represents only a small increase in performance compared to the massive jump between GPT-3 and GPT-4, and some OpenAI researchers said the model isn’t reliably better than GPT-4.
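To make “diminishing returns” concrete: published scaling laws take the form of power laws, in which loss keeps falling as compute and data grow, but each additional order of magnitude buys a smaller absolute improvement. The constants and exponent in the sketch below are made up for illustration; this shows the shape of such a curve, not OpenAI’s numbers.

```python
# Hypothetical power-law scaling curve: loss(C) = E + A / C**ALPHA.
# E (irreducible loss), A and ALPHA are made-up constants for illustration,
# not fitted values from any real model family.
E, A, ALPHA = 1.7, 3.0, 0.3

def loss(compute: float) -> float:
    """Predicted loss at a given multiple of baseline compute."""
    return E + A / compute ** ALPHA

previous = loss(1.0)
for exponent in range(1, 7):  # 10x, 100x, ... 1,000,000x baseline compute
    current = loss(10.0 ** exponent)
    # Each factor-of-10 jump in compute yields a smaller absolute gain.
    print(f"{10 ** exponent:>9,}x compute -> loss {current:.3f} "
          f"(gain {previous - current:.3f})")
    previous = current
```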
At the same time, AI researcher Yam Peleg said that he had heard a leak from an unnamed lab (not OpenAI) that “they reached an unexpected HUGE wall of diminishing returns trying to brute-force better results by training longer & using more and more data.”

Peleg added that the wall here is “more severe than what is published publicly.”

“I think it is safe to assume that all major players have reached the limits of training longer and collecting more data already,” Peleg wrote. “It is all about data quality now, which takes time.”

Go deeper: Scaling laws, according to cognitive scientist Gary Marcus, “are not physical laws. They are merely empirical generalizations that held for a certain period of time, when there was enough fresh data and compute.”

Marcus, who caught plenty of industry flak for saying in 2022 that deep learning was hitting a wall, said that there has never been any clear argument that scaling laws would “solve hallucinations or reasoning, or edge cases in open-ended worlds — or that synthetic data would suffice indefinitely to keep feeding the machine.”

That skepticism is in line with Princeton computer science professor Arvind Narayanan’s view of scaling laws, which is that nothing can continue in a true exponential fashion forever: “Emergence is not governed by any law-like behavior. It is true that so far, increases in scale have brought new capabilities. But there is no empirical regularity that gives us confidence that this will continue indefinitely.”
What’s in play here is the idea that developers can fake intelligence and reasoning until they make it; that, trained on enough data, Large Language Models (LLMs) can keep outputting the illusion of intelligence until the line between the illusion and the real thing becomes so blurry as to not matter.

Jill Nephew, an AI researcher and the founder of software company Inqwire, said that the challenge of using scaled-up datasets to forge a path toward genuine artificial reasoning has been clear from the beginning because of Zipf’s law. The statistical law holds that if we rank every word in a given language by how often it is used, the frequency of a word’s use is proportional to the inverse of that rank. Based on this law, we would expect to see the second-most common word in a language 5,000 times for each occurrence of the 10,000th most common word (the arithmetic is sketched in the code at the end of this section).

“My bet is the sky is actually falling. This whole mania was obvious for anyone that understands that 95% of conversation uses the first 1000 words and Zipf's law,” Nephew said. “To do reasoning, you are touching on millions of words used in diminishing frequencies. This is death for statistical models because there won't ever be enough data to handle these long tails.”

“A fundamentally different approach is needed AND has been needed for decades, this is nothing new,” she added.

When labs say that artificial general intelligence is imminent, all I think about are the myriad obstacles lining the uneven, likely mountainous path between LLMs and a hypothetical AGI.

This is a massive one. To even mimic genuine reasoning would require a quantity of data and computing power that doesn’t exist; if we haven’t achieved it with current (massive) amounts of data, scale is not the solution.

LLMs remain fundamentally flawed.
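Nephew’s Zipf’s-law point lends itself to a quick check. Here is a minimal sketch of that arithmetic; the one-million-word vocabulary is an assumption, and real usage only roughly follows the idealized 1/rank distribution, so this illustrates the shape of the long tail rather than any measured corpus.

```python
# Idealized Zipf's law: the word at rank r occurs with frequency ~ 1/r.
# The vocabulary size V is an assumption for illustration only.
V = 1_000_000
weights = [1.0 / rank for rank in range((1), V + 1)]
total = sum(weights)

# Ratio cited above: the 2nd most common word vs. the 10,000th.
ratio = weights[1] / weights[9_999]          # (1/2) / (1/10_000) = 5_000
print(f"rank 2 appears {ratio:,.0f}x as often as rank 10,000")

# Head vs. tail: share of all occurrences covered by the 1,000 most
# common words under this idealized distribution.
head_share = sum(weights[:1_000]) / total
print(f"top 1,000 words cover {head_share:.0%} of occurrences; "
      f"the remaining {V - 1_000:,} words split the rest")
```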
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on OpenAI’s copyright win:

37% of you think that the media companies need to refile a far stronger case; 22% think it was the right call and another 22% think it was the wrong call.

Something else:

“Law tends to move slowly and specifically … AI is new, a lot of people don't understand it, and a theme in the justice world is ‘legal precedent’ — aka what can we argue here based on past decisions made already? This case sounds like a building block. It's up to the next case (and legal team/judge) to decide whether to lay another brick down, to clarify boundaries/protections around AI. I'll be honest, I'm worried it will not come quickly enough.”
What do you think about the Vatican's new AI project?

*Advertiser’s disclosure:

1 Mode Mobile recently received their ticker reservation with Nasdaq ($MODE), indicating an intent to IPO in the next 24 months. An intent to IPO is no guarantee that an actual IPO will occur.

2 A minimum investment of $1,950 is required to receive bonus shares. 100% bonus shares are offered on investments of $9,950+.

3 Please read the offering circular and related risks at invest.modemobile.com. This is a paid advertisement for Mode Mobile’s Regulation A+ Offering.

Past performance is no guarantee of future results. Start-up investments are speculative and involve a high degree of risk. Those investors who cannot afford to lose their entire investment should not invest in start-ups. Companies seeking startup investment tend to be in earlier stages of development and their business model, products and services may not yet be fully developed, operational or tested in the public marketplace. There is no guarantee that the stated valuation and other terms are accurate or in agreement with the market or industry valuations. Further, investors may receive illiquid and/or restricted stock that may be subject to holding period requirements and/or liquidity concerns.

DealMaker Securities LLC, a registered broker-dealer and member of FINRA/SIPC, located at 105 Maxess Road, Suite 124, Melville, NY 11747, is the Intermediary for this offering and is not an affiliate of or connected with the Issuer. Please check our background on FINRA's BrokerCheck.