Good morning. The European Commission yesterday said that Meta's 'pay or consent' advertising model does not comply with the European Union's Digital Markets Act. What will now ensue is a possibly lengthy legal process that could result in Meta paying a hefty fine.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today's newsletter:
AI for Good: Lighting one nonprofit's path to find a cure for ALS

Source: Major Tom Agency on Unsplash
Among the biggest promises of artificial intelligence technology is its integration with biotech, based on the hope that AI can be leveraged to accelerate and drive drug discovery and innovation.

One nonprofit — Everything ALS — has made AI and machine learning the backbone of its plan to find a cure for Amyotrophic Lateral Sclerosis (ALS) by 2030.

The details: The organization in March launched its Vision 2030 AI Hub, a platform that combines AI, machine learning and data science with the goal of dramatically accelerating ALS research.

Everything ALS's approach centers on a newer area of study called Repair and Regeneration, whose ambition is to "develop therapies that can effectively halt and prevent the progression of the disease" and "foster nerve regeneration, empowering individuals with ALS to recover muscle strength and functionality." The role of AI here is to process and analyze data in order to identify insights and paths forward that "are beyond human comprehension."
Why it matters: A 2016 study indexed in the National Library of Medicine predicted that the prevalence of ALS could increase by about 70% between 2015 and 2040.

Bill Nuti, chairman of Everything ALS, said in a statement that ALS research has long been hindered by limited resources due to the disease's relatively low prevalence. "This stark reality underscores a pressing need for a radical change in approach to ensure that no promising research is left on the sidelines, and we accelerate the process of developing a cure. Vision 2030 is the right operating model to address these challenges," he said.
Good prompts and fine-tuning aren't enough if you're building an AI SaaS application or feature. It's best to feed the LLM customer-specific context.

But this context lives across the dozens of apps and hundreds of files your customers use: in their emails, call transcripts, Slack conversations, internal knowledge documents, CRM data… the list goes on.

So to get access to this data, your team will need to build dozens, if not hundreds, of integrations. But integrations shouldn't be your core competency; that's why AI21, Copy.ai, Tavus, Writesonic and other leading AI companies use Paragon to ship integrations for their products with 70% less engineering.

A few key highlights:

- Managed authentication for all integrations
- Third-party webhook triggers
- Serverless ingestion infrastructure with robust error handling and observability
- 100+ connectors and a custom connector builder
| Try Paragon for free or get a demo tailored to your use case |
Study: LLMs aren't good at abstract reasoning

Source: Unsplash
Last year, the New York Times launched another word game — Connections — which might well be part of your daily brain-befuddlement routine (I've never played a game more annoying than Connections, but my parents love it).

Recent research tested a lineup of four state-of-the-art large language models (LLMs) against human players across 200 rounds of Connections. This sounds to me like an elevated form of medieval torture, but the results factor heavily into the ongoing debate over the reasoning ability (or lack thereof) of said models.

The details: The researchers found that since Connections requires a wide variety of knowledge types in addition to "orthogonal thinking" (the practice of exploring diverse and seemingly unrelated fields of knowledge), it serves as a powerful test of abstract reasoning in both LLMs and humans.

The research found that while all four models were able to accurately solve Connections some of the time, their performance was not great — GPT-4o, the highest-performing model, accurately solved only 8% of games played. While novice humans (who had never played before) performed only marginally better than GPT-4o, expert (AKA consistent) players performed "significantly better" at achieving perfect games.
The researchers found that LLMs are "fairly deficient" in many of the reasoning types required to be good at Connections. The models struggled with multi-word expressions and combined knowledge categories, and were often unable to identify red herrings or "use step-by-step logic to work around them."

"Ultimately, we find that excelling in Connections means having a breadth of different knowledge types, and LLMs are unfortunately not yet suited for the task," the researchers wrote.

Cognitive scientist Dr. Gary Marcus said that this — abstract reasoning — is the wall that deep learning has been unable to conquer over the past 12 years.
- Tesla deliveries set to fall for the second straight quarter (Reuters).
- Report: Gen Z's shopping decisions are heavily influenced by TikTok and influencers (CNBC).
- Deepfake tools are getting more powerful. One company wants to catch them all (Wired).
- Protect open-source AI from attacks by Big Tech (The Information).
- Giant firms push Japan to accelerate renewables adoption (Semafor).
Nvidia to be charged by French antitrust regulator

Source: Created with AI by The Deep View
Reuters reported Monday that Nvidia, the semiconductor giant that has powered the generative AI craze, is set to be charged by the French antitrust regulator for alleged anti-competitive practices.

The regulator has cited concerns of potential abuse by chip providers, saying in a recent report that it is worried about the industry's reliance on Nvidia's CUDA chip programming software, the only system perfectly compatible with Nvidia's GPUs.
In the U.S., the Department of Justice is taking the lead in investigating Nvidia. Nvidia declined a request for comment.

The only AI crash course you need to master 20+ AI tools, multiple hacks and prompting techniques to work faster and more efficiently. In just three hours, you can become a pro at automating your workflow and save up to 16 hours a week.

Get the crash course here for free (valid for the next 24 hours only!)

This course on AI has been taken by 1 million people across the globe, who have been able to:

- Build no-code apps using UI-ZARD in minutes
- Write and launch ads for your business (no experience needed, and you save on cost)
- Create solid content for 7 platforms with voice commands and level up your social media
- And 10 more insane hacks and tools that you're going to LOVE!
| Register & save your seat now (100 free seats only) 🎁 |
AI won't take your job; employers using AI will

Source: Created with AI by The Deep View
Among technological innovations, AI might be unique in the sheer length of the list of fears it inspires (horse-and-buggy drivers likely didn't expect automobiles to end human civilization in addition to taking their jobs).

But prominent among the AI-inspired fears of misinformation, bias, enhanced surveillance and unsustainability is the fear of job loss, which has been prevalent since ChatGPT entered the zeitgeist in 2022. A 2023 APA survey found that nearly 40% of Americans are afraid that AI will make some or all of their job duties obsolete.
Early reports on the potential impacts of AI on the labor market have done much to stoke those fears; a 2023 Goldman Sachs report claimed that "if generative AI delivers on its promised capabilities, the labor market could face significant disruption." The report estimated that around 66% of current jobs are "exposed to some degree of automation," adding that generative AI could "substitute up to 25% of current work." A 2023 report by Accenture likewise estimated that around 40% of all working hours could be affected by language models like ChatGPT.

Each of these reports, however, has couched its rather dire predictions in historical assurances and promises of productivity boosts.

On the first point, the Goldman report said that, historically, worker displacement from automation has been followed by the creation of new jobs. And indeed, this has been the case. New jobs have always followed the erasure of old ones — the internet, for instance, created a long list of jobs that didn't exist before, from web developers to data scientists to social media influencers.
The issue with this, as pointed out by the Harvard Business Review, is that in the past, innovation moved slowly enough to avoid mass worker displacement; factories took a long time to switch from steam to electric motors, allowing the economy to gradually adjust to the new paradigm. With AI, there is no such luxury of time.

You won't be replaced by AI, you'll be replaced by someone using AI (I really dislike this phrase): Goldman said that most industries are only partially exposed to automation and are "more likely to be complemented rather than substituted by AI."

Despite these assertions, AI-powered job replacement is already being felt in certain industries. A 2023 report found a 21% decrease in job listings related to writing and coding, compared to more manual-intensive jobs, in the eight months following ChatGPT's release. And every freelance artist and writer I've spoken to has noted a significant decrease in jobs over the past year and change.

Companies, meanwhile, have been detailing the early stages of their job-removal efforts; IBM, for instance, told Business Insider that it was able to shrink its HR department from 800 workers to just 60 due to automation. Klarna similarly said this year that its AI assistant is doing the work of 700 full-time people. Last year, Dukaan replaced around 90% of its support staff with an AI chatbot.
A recent survey of business executives conducted by the U.S. Federal Reserve Bank of Richmond found that, of the 60% of businesses that are integrating AI, 47% are using it to reduce labor costs and 33% are using it to substitute for workers.

The Dallas Fed, however, recently found that, thus far, there has been minimal impact on employment among the Texas firms that have adopted generative AI tech.
A separate MIT study, meanwhile, found that worker-replacing AI adoption will take far longer than people think, a finding that aligns with the enormous expense of building AI and the tech's inherent reliability issues.

As author and journalist Brian Merchant has so eloquently pointed out, many companies are not interested in good work; they're interested in work that's "good enough." And on this front — something supported by the Richmond Fed's survey — commercialized generative AI might well be a tool that allows executives to raise their productivity expectations (and their profits) while reducing their labor costs.

I — perhaps in a streak of foolish optimism — believe that, in the creative industries, displaced human workers will eventually be called back. This belief is based on the steady rise of anti-AI sentiment in the arts, where backlash against authors whose books bear AI-generated covers, or movies that employ a few AI-generated frames, has been extreme. In other industries, the enormous cost of compute might well make it more worthwhile for many companies to continue employing humans.
That said, there will be displacement. Its scope, scale and timing will depend heavily on specific industries and regulations.

"There will almost certainly be no AI jobs apocalypse," Merchant said. "That doesn't mean your boss isn't going to use AI to replace jobs, or, more likely, going to use the specter of AI to keep pay down and demand higher productivity."

Which image is real?
A poll before you go

Thanks for reading today's edition of The Deep View! We'll see you in the next one.

Your view on whether AI deployment is moving too fast:

It's complicated: "Limited uses can serve as a beta testing test bed with consent but release into the wild may conceivably be a different thing. Genetically altered foods prompted similar concerns. Some AI risks seem greater."

Keep it in the labs until it's more trustworthy: "Great potential for both positive and dangerous outcomes. Until we can solve the hallucination issue and also ensure that undesirable biased input in the data sets is reliably excluded the risk of harm outweighs the benefit."

Are you worried that AI will be used to make your job obsolete?
|
|
|