Good morning. I spoke with Liran Hason, the CEO and co-founder of Aporia, a company that specializes in AI guardrails. We spoke about AGI and the evolving need for guardrails in the space. Read on for the full story.

— Ian Krietzberg, Editor-in-Chief, The Deep View
AI for Good: Fighting spotted lanternflies

Source: Unsplash
If you live on the East Coast, you’ve probably heard of the spotted lanternfly, a red-winged bug everyone on this side of the U.S. has been encouraged to kill on sight. For several years now, the invasive species has proliferated across New York, New Jersey and Pennsylvania, posing a severe threat to local environments.

A New Jersey teenager recently designed a lanternfly trap in an effort to fight the infestation. The device is powered by AI.

The details: Selina Zhang built a device she dubbed ArTreeficial, a solar-powered, self-cleaning, AI-powered synthetic tree designed to look like the Tree of Heaven, which the lanternfly is attracted to.

“I want to demonstrate that we can use artificial resources as an essential tool for how we help protect our natural resources, especially in agriculture,” Zhang told Smithsonian Magazine. “As humans, it’s our responsibility to protect what we have and cultivate these resources responsibly to make sure that they’re here for us to enjoy, as well as generations after us.”
Better GenAI Results With a Knowledge Graph

Combine vector search, knowledge graphs and RAG into GraphRAG. Make your GenAI results more accurate, explainable and relevant.
Read this blog for a walkthrough of how GraphRAG works, with examples, and get an overview of building a knowledge graph for better GenAI results.
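For a rough sense of the pattern, here is a minimal, illustrative sketch of the GraphRAG idea in Python: vector search picks an entry point, a knowledge graph supplies connected facts, and both feed the final prompt. The graph, the toy embeddings and every helper name below are hypothetical stand-ins for illustration, not the blog’s actual code.

```python
# Minimal GraphRAG sketch (illustrative only): vector search finds an entry
# point, a knowledge graph supplies connected facts, both feed the prompt.
import networkx as nx
import numpy as np

# Toy knowledge graph: nodes are entities, edges carry relation labels.
kg = nx.Graph()
kg.add_edge("Aporia", "AI guardrails", relation="builds")
kg.add_edge("AI guardrails", "hallucination", relation="mitigates")
kg.add_edge("GPT-4o", "hallucination", relation="exhibits")

# Toy embeddings; in practice these come from an embedding model.
vectors = {
    "Aporia": np.array([0.9, 0.1]),
    "AI guardrails": np.array([0.8, 0.3]),
    "GPT-4o": np.array([0.1, 0.9]),
}

def vector_search(query_vec, k=1):
    """Return the k nodes whose embeddings are most similar to the query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(vectors, key=lambda n: cos(query_vec, vectors[n]),
                  reverse=True)[:k]

def graph_context(node, hops=2):
    """Collect relation facts within `hops` edges of the entry node."""
    nodes = nx.ego_graph(kg, node, radius=hops).nodes
    return [f"{u} --{kg[u][v]['relation']}--> {v}"
            for u, v in kg.edges(nodes) if u in nodes and v in nodes]

entry = vector_search(np.array([0.85, 0.2]))[0]   # vector-search step
facts = graph_context(entry)                      # knowledge-graph step
prompt = "Answer using these facts:\n" + "\n".join(facts)  # RAG step
print(prompt)
```

The value of the graph step is that related facts a pure vector search would miss (here, that guardrails mitigate hallucination) still make it into the model’s context.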
Privacy advocate files 9 complaints against Twitter for AI training

Source: Unsplash
Nearly a year ago, Twitter — like just about every other social media platform — updated its terms of service to include this one, highly impactful line: “We may use the information we collect … to help train our machine learning or artificial intelligence models.”

More recently, it became apparent that all users had been opted in by default to training for its AI model Grok, leading to this post from Twitter’s Safety account, saying that “all X users have the ability to control whether their public posts can be used to train Grok.”

What happened: European privacy advocate NOYB on Monday said it had filed General Data Protection Regulation (GDPR) complaints in nine European Union countries to protest the practice.

The organization said that Twitter “never proactively informed its users that their personal data is being used to train AI.” Though Twitter has agreed to pause training on European user content in light of a recent suspension order from Ireland’s Data Protection Commission, the damage has already been done: that content has already been consumed for training.
Max Schrems, the chairman of NOYB, said that companies should simply ask users for consent before training on their content.

Some context: Meta tried the same thing a few months ago and, after facing massive backlash, was forced to suspend its AI rollout in Europe and Brazil.
If you're looking to leverage AI in your investment strategy, you need to check out Public.

The all-in-one investing platform allows you to build a portfolio of stocks, options, bonds, crypto and more, while incorporating the latest AI technology for high-powered performance analysis to help you achieve your investment goals.

Join Public, and build your primary portfolio with AI-powered insights and analysis.
The growing threat to the hidden network of cables that power the internet (The Guardian).
The New Fake Math of AI Startup ARR: Not So Annual, Not So Recurring (The Information).
US SEC sues over alleged $650 million global crypto fraud (Reuters).
SpaceX repeatedly polluted waters in Texas this year, regulators found (CNBC).
Is the US finally getting ‘all aboard’ with electric trains? (The Verge).
If you want to get in front of an audience of 200,000+ developers, business leaders and AI enthusiasts, get in touch with us here.
Paper: GenAI can harm learning

Source: Created with AI by The Deep View
We’ve talked before about the danger of integrating generative AI systems like ChatGPT into the classroom; in that environment, the risks around hallucination, bias (including performance bias), misinformation and data privacy are severe. But there’s another risk, one that involves overreliance and learning.

What happened: A recent paper — published by researchers at the University of Pennsylvania — studied the impact of genAI in the specific context of math classes at a high school in Turkey.

In their experiment, the researchers provided two groups of students with two different iterations of a GPT-4-based tutor; one mimicked ChatGPT (called GPT Base) and the other (GPT Tutor) included guardrails designed to safeguard learning. The experiment found that access to the tutors improved performance by a significant margin (a 48% improvement for GPT Base and 127% for GPT Tutor).
But when the genAI tutors were taken away, students performed “statistically significantly worse” than those who never had access (a 17% reduction for GPT Base). This negative effect was eliminated in the Tutor group, but the researchers still didn’t “observe a positive effect.”

Why it matters: “While generative AI tools such as ChatGPT can make tasks significantly easier for humans, they come with the risk of deteriorating our ability to effectively learn some of the skills required to solve these tasks,” the researchers said.
Exclusive Interview: We’ll always need AI guardrails

Source: Aporia
We’ve talked before about the dangerous limitations of the large language models (LLMs) that power the world’s leading chatbots and generative AI systems. Namely, these boil down to bias and hallucination, a combination that makes these systems fundamentally unreliable and untrustworthy, something that is presenting quite a challenge to mass enterprise adoption. And while hallucination — the poorly named propensity of a genAI model to essentially make up fiction and relay it confidently as fact — is part and parcel of the architecture, there are non-native solutions out there designed to mitigate the risk.

One of these is Aporia, a company that offers third-party guardrails aimed at observing LLM performance and isolating, identifying and mitigating instances of bias and hallucination.

In May — shortly after OpenAI launched GPT-4o — Aporia unveiled a new set of guardrails for multimodal AI systems. The company said that its guardrails, operating at an “unnoticeable latency,” can mitigate around 94% of hallucinations in real time, before they reach the end user.

Aporia’s CEO and co-founder Liran Hason recently told me that he doesn’t ever see a world where such guardrails won’t be necessary. In fact, he said that they become more vital as AI gets more powerful.
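To make the general idea concrete, here is a minimal sketch of the intercept-and-check pattern that runtime guardrails follow. This is not Aporia’s actual system: the grounding check below is a deliberately crude word-overlap stand-in for the trained verifier models real products use, and all names are hypothetical.

```python
# Illustrative guardrail wrapper (not Aporia's system): intercept a model's
# draft answer, check each sentence against the source context, and block
# unsupported sentences before anything reaches the end user.
import re

def supported(sentence: str, context: str, threshold: float = 0.5) -> bool:
    """Crude grounding check: what fraction of the sentence's words appear
    in the context? Real guardrails use trained verifiers instead."""
    words = set(re.findall(r"\w+", sentence.lower()))
    ctx = set(re.findall(r"\w+", context.lower()))
    return bool(words) and len(words & ctx) / len(words) >= threshold

def guarded_answer(draft: str, context: str) -> str:
    """Return only the sentences of `draft` that are grounded in `context`."""
    kept = [s for s in re.split(r"(?<=[.!?])\s+", draft.strip())
            if supported(s, context)]
    return " ".join(kept) or "[blocked: answer not supported by sources]"

context = "Aporia launched guardrails for multimodal AI in May."
draft = "Aporia launched guardrails in May. It was founded on the moon."
print(guarded_answer(draft, context))
# -> "Aporia launched guardrails in May."
```

The point of the pattern is placement: the check sits between the model and the user, so an unsupported claim can be blocked or rewritten before anyone sees it.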
The two sides of AGI: Hason said that one part of this conversation involves the real thing, a true, peer-reviewed, fully verified AGI à la the Star Trek computer. The other part involves the illusion of the real thing: an AI agent that is widespread and reliable enough to integrate with a variety of tools. The latter, because of its integration with your phone, for instance, or your car, or house, requires guardrails to ensure it uses these tools safely.

And in the case of the former — a “GPT-AGI,” for instance — Hason believes guardrails would remain vital for a simple reason: “Would we feel comfortable putting all of our questions, all of the decisions at hand, in one type of personality?”

Guardrails, he said, would be needed to restrict this hypothetical system, telling it what it can and cannot do and tracking what happens in so-called edge cases, because, as Hason noted, “human beings also have edge cases. We have prison for that, right?”

“I also hope that regulation will be in place that will require companies to implement appropriate guardrails, whether moral ones or more for safety and reliability, for the sake of everyone,” Hason said.

Today, AGI is a hyper-focused example of the hype that has run rampant in this industry, hype that has been leveraged to win VC funding and inflate stock prices. So why talk about it at all?

The idea of AI/AGI ought to be rooted in sociology, psychology and philosophy; we should be thinking about what we want our society to look like if these for-profit labs do — somehow — achieve AGI, or a passable illusion of it. If we can figure out the philosophy before the tech gets there, we might be able to mitigate harm and actually unlock some of those benefits we’ve been hearing so much about.

Which image is real?
A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on an AI assistant in hospitals: around 60% of patients surveyed said it sounds cool; the rest said they don’t want an AI listening to them. The doctors who responded were flipped: 55% of you said you don’t like it, while the remainder said it would be game-changing.

What do you want society to look like if AGI is achieved?

*Public disclosure: All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options and bonds in a self-directed account are offered by Public Investing, Inc., member FINRA & SIPC. Cryptocurrency trading services are offered by Bakkt Crypto Solutions, LLC (NMLS ID 1828849), which is licensed to engage in virtual currency business activity by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC.

Alpha is an experiment brought to you by Public Holdings, Inc. (“Public”). Alpha is an AI research tool powered by GPT-4, a generative large language model. Alpha is experimental technology and may give inaccurate or inappropriate responses. Output from Alpha should not be construed as investment research or recommendations, and should not serve as the basis for any investment decision. All Alpha output is provided “as is.” Public makes no representations or warranties with respect to the accuracy, completeness, quality, timeliness, or any other characteristic of such output. Your use of Alpha output is at your sole risk. Please independently evaluate and verify the accuracy of any such output for your own use case.