Good morning. As we experiment and iterate here at TDV, all while exploring the fascinating world of AI, we'd love to get your thoughts on what's working, what isn't and what you'd like to see from us every day.

You can schedule a one-on-one meeting with me here; looking forward to chatting with you!

In today's newsletter:

⛑️ OpenAI forms new safety team as it begins training a new frontier model
🇺🇸 Survey: 47% of the American public has never heard of ChatGPT
📊 Google researchers say AI-generated disinformation spiked in 2023
🛜 What even is AGI, anyway?
OpenAI begins training new frontier model, forms new safety team

Sam Altman at Microsoft Build, 2024.
OpenAI on Tuesday announced the formation of a new "Safety & Security" committee, led by board directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman and CEO Sam Altman.

Zoom in: The creation of this new committee comes, according to OpenAI, as the company has begun training its next frontier model, a system that OpenAI thinks will "bring us to the next level of capabilities on our path to AGI."

Zoom out: The announcement of a new model in training and a new safety team (led by Altman, no less) follows the departure of a number of safety researchers from the company, including former Chief Scientist Ilya Sutskever and Jan Leike. Leike recently said that "over the past years, safety culture and processes have taken a backseat to shiny products" at OpenAI.
This new safety team also comes just days after two former OpenAI board members said that the board's ability to uphold the company's nonprofit mission was hindered by Altman's behavior. They added: "Self-governance cannot reliably withstand the pressure of profit incentives."

Survey: 47% of the American public has never heard of ChatGPT

Photo by Growtika (Unsplash).
A survey conducted by the Reuters Institute of 12,000 people across Argentina, Denmark, France, Japan, the U.K. and the U.S. found that, though ChatGPT is by far the most recognized AI tool out there, only 53% of U.S. respondents have heard of it. Recognition was slightly higher in most of the other countries surveyed, though significantly lower in Argentina.

Interesting findings:

- Only 7% of American respondents use ChatGPT daily; 11% use it weekly, 4% monthly and 10% "once or twice," while 20% have never used it. 47% have never even heard of ChatGPT.
- The numbers for Google's Gemini and Microsoft's Copilot were even starker: more than 85% of American respondents have either never used or never heard of either tool.
A screenshot of one part of the report's findings.
Overarching patterns: The vast majority of respondents believe genAI will have a significant impact across almost every sector, from social media to journalism to the military, in the next five years.

The researchers noted that when it comes to trust, young people are more inclined to trust deployers of genAI, and a sizeable portion of respondents are simply not yet sure whether they trust certain sectors to responsibly deploy the tech.
Together with Twilio Segment

What can good data do for you? - Twilio Segment Customer Data Platform

Segment helps 25,000+ companies turn customer data into tailored experiences. With customer profiles that update in real time and best-in-class privacy features, Segment's Customer Data Platform allows you to make good data available to every team.

Google researchers say AI-generated disinformation spiked in 2023

Photo by Nathana Reboucas (Unsplash).
On May 19th, just around the same time that Google rolled out AI Overviews to predictably rough results, a team of Google researchers published a preprint examining media-based disinformation over time. The paper was first reported on by the Faked Up newsletter and 404 Media.

The key finding: The two-year-long study examined 135,838 fact checks from sites including Snopes and PolitiFact, dating back to 1995 (though most were created after 2016, with the introduction of ClaimReview).

The report found that AI-generated, image-based disinformation surged in 2023, just as genAI systems were becoming popular. The problem, however, is likely worse than the study suggests, as fact-checkers are not able to examine every piece of image-based disinformation that proliferates across the web.
Figure 18 in the report: Prevalence of Content Manipulation Types
💰 AI Jobs Board:

- Machine Learning Engineer: Hacker Pulse · United States · Remote · Full-time · (Apply here)
- Machine Learning Researcher: DeepRec.ai · United States · Remote · Full-time · (Apply here)
- Senior Data Engineer: Deel · United States · Remote · Full-time · (Apply here)
🔭 Tools: *

🌎 The Broad View:

- Here's what it's like inside the operating room when someone gets a brain implant (CNBC).
- There's one reason to tread carefully around Google Docs: They can lock you out of your own content (Wired).
- Forget retirement. Older people are turning to gig work to survive (Rest of World).

*Indicates a sponsored link
Together with Ai4

Join AI leaders in Las Vegas: free passes available!

Join 4,500+ attendees, 350+ speakers and 150+ AI exhibitors at Ai4 2024, North America's largest AI industry event, coming to Las Vegas, NV on Aug. 12-14. At Ai4, you can:

- Discover the latest applications and best practices shaping the future of AI
- Engage in dedicated AI content and networking opportunities
- Attend AI sessions tailored to a variety of industries, job functions and project types
| Don't wait — ticket prices increase THIS FRIDAY, May 31st! Apply for a complimentary pass or register now to save $1200 and receive a 41% discount off final at-the-door pricing. |
What even is AGI, anyway?

Photo by Marius Masalar (Unsplash).
Even as OpenAI is hard at work on its new foundation model, the company told the FT that its mission is to build an artificial general intelligence capable of "cognitive tasks that are what a human could do today."

The mission: An OpenAI spokesperson told the FT that, whatever they've said in the past, superintelligence isn't OpenAI's mission: "Our mission is AGI that is beneficial for humanity." The spokesperson added that OpenAI studies superintelligence in order to achieve general intelligence.

The semantics of it all: The issue with much of this kind of discussion is a lack of universal definitions. Scientists do not all agree on what AGI is, or what attributes a system would need to have to carry that title.

My take: In previous editions, we've discussed a few relevant factors: LLMs are black boxes, language does not equal intelligence and benchmarks designed to test LLM reasoning are unlikely to be good measures of actual capability.

The reality here is that, as far as the science is concerned, there is no evidence to suggest that an artificial superintelligence will ever be possible. Many researchers have told me that there is no evidence to suggest that an artificial general intelligence will ever be possible, either. Part of the reason has to do with some of the disciplines of the mind that I've mentioned before: scientists do not understand much about human intelligence or the human brain. We don't know the relationship between intelligence and consciousness, and we don't know how the brain's organic structure gives rise to either intelligence or consciousness.
When computer scientists try to replicate human intelligence synthetically, those unknowns loom much larger.

Even as this side of the debate plays out philosophically and technologically, OpenAI (despite what Altman might say) is in the business of language models: models that are great at perpetuating the illusion of intelligence but that lack any real understanding of their output (again, as we've discussed). AI researchers, including Gary Marcus, Grady Booch and Yann LeCun, have all said that LLMs will not bring us to AGI, whatever AGI is.
I'll finish with two things. One: when companies throw around cool sci-fi terms like 'AGI' and 'superintelligence,' they have a very clear business incentive to do so. And the hype they've generated has worked; the sector has pulled in tons of investors. But it is hype.

And two: the AGI debate is a deeply philosophical rabbit hole. Read this conversation between Marcus and Booch if you want to explore some of it.
Which image is real?

Image 1

Image 2
- Pika Labs: A tool to generate videos from prompts.
- GigaBrain: A search engine that finds useful answers from online communities.
- iListen: A tool to summarize articles & text into miniature podcasts.
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link
SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world's fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we're fully booked, reserve an ad slot here.
One last thing 👇

Carl Hendy (@carlhendy):

Google's first major TV ad, "Parisian Love". Look at those links and no ads 😭

May 27, 2024
That's a wrap for now! We hope you enjoyed today's newsletter :)

What did you think of today's email?

We appreciate your continued support! We'll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View