Good morning. As the tech industry forges ahead to build ever more powerful AI systems, ever more quickly, American voters still wish it wouldn't.

Just another day in Silicon Valley.

Let's get into it.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today's newsletter:

- AI for Good: Diagnosing diseases through sound analysis
- Microsoft and Apple give up OpenAI board seats
- Elon Musk is building the 'most powerful training cluster in the world'
- Poll: Despite global pressure, Americans want the tech industry to slow down on AI
AI for Good: Diagnosing diseases through sound analysis

Source: Unsplash
A team of Google scientists in March unveiled a new machine learning model — called Health Acoustic Representations (HeAR) — designed to detect and diagnose a number of illnesses based on sound analysis alone.

The details: The AI system was trained on millions of clips of human audio and has the potential to help doctors diagnose everything from tuberculosis to COVID-19.

The team used self-supervised learning to train the model on more than 300 million unlabeled sound clips from YouTube videos. The clips — which included coughing, breathing and throat-clearing noises — were transformed into spectrograms, and the model was trained to predict the segments that had been masked out.
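For the technically curious, that training recipe can be sketched in a few lines of Python. This is a minimal illustration of the general masked-spectrogram idea, not Google's actual HeAR code: the function names, patch width and masking fraction are our own assumptions, and it presumes the librosa and numpy libraries are installed.

```python
# Minimal sketch of masked-spectrogram pretraining data prep.
# Illustrative only; not Google's HeAR implementation.
import numpy as np
import librosa

def audio_to_log_mel(path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    """Load an audio clip and convert it to a log-mel spectrogram."""
    audio, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)

def mask_time_patches(spec: np.ndarray, patch_width: int = 8, mask_frac: float = 0.5):
    """Zero out random time patches; the pretraining task is to reconstruct them."""
    masked = spec.copy()
    n_frames = spec.shape[1]
    n_patches = n_frames // patch_width
    hidden = np.random.rand(n_patches) < mask_frac   # choose patches to hide
    mask = np.repeat(hidden, patch_width)
    mask = np.pad(mask, (0, n_frames - mask.size))   # leftover frames stay visible
    masked[:, mask] = 0.0
    return masked, mask  # model input, plus which frames it must predict
```

A model trained to fill in the hidden patches learns useful representations of coughs and breaths without a single diagnostic label, which is what makes 300 million unlabeled clips usable.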
In tests, the model achieved a score of 0.739 for tuberculosis detection, and scores of 0.645 and 0.710 for COVID-19 detection, on a scale where 1 indicates perfect accuracy and 0.5 represents random guessing.

Why it matters: If similar bioacoustic tech achieves FDA approval and takes the shape of a free app, it could offer a low-resource, non-invasive method of, for instance, testing for COVID-19.
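As a side note, that 0.5-to-1 scale is how AUROC (area under the ROC curve) is conventionally read, so the numbers above look like AUROC values, though that mapping is our inference. A toy example with fabricated labels and scores, assuming scikit-learn is installed:

```python
# Toy AUROC computation on fabricated data; illustrative only.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                   # hypothetical TB ground truth
y_score = [0.2, 0.4, 0.8, 0.6, 0.9, 0.3, 0.7, 0.5]  # hypothetical model risk scores

# 1.0 means every positive case outranks every negative; 0.5 is a coin flip.
print(roc_auc_score(y_true, y_score))  # prints 1.0 for this toy data
```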
Code reviews, powered by AI!

Greptile is a platform that can search and understand complex codebases in natural language. Kind of like an automated staff engineer that makes your job way easier, reducing review time by up to 90%.

How it works: Greptile reviews your pull requests with full codebase context using AI, catching bugs, style issues, security threats and more.

Greptile reviews thousands of pull requests a day at 600+ companies like Anima, Highlight, Kiosk, Hatch, Replo, Warp, and popular open source projects like StorybookJS, Phaser, and FleetDM.

Get started with Greptile today
Microsoft and Apple give up OpenAI board seats

Source: Unsplash
Microsoft has given up its observer seat on OpenAI's board, according to the FT. At the same time, Apple — which initially was going to take a seat as part of its arrangement with the startup — is opting not to.

The details: OpenAI will instead host regular meetings with its partners, Apple and Microsoft included, to keep them apprised of internal goings-on.

A letter sent by Microsoft to OpenAI reportedly said the company has "witnessed significant progress from the newly formed board" and is "confident in the company's direction," making an ongoing observer role unnecessary. Microsoft — which has poured some $13 billion into OpenAI — first took the seat back in November, at the tail end of OpenAI's rendition of Game of Thrones, which saw Sam Altman resurrected (à la Jon Snow) as OpenAI's CEO.
Why it matters: This step back by OpenAI's two megacap partners comes amid mounting regulatory scrutiny.

European Union antitrust regulators are still looking into Microsoft's relationship with OpenAI, a concern shared by Britain's Competition and Markets Authority.
- OpenAI and Los Alamos National Laboratory announce bioscience research partnership (OpenAI).
- Creator Startups Have Already Raised as Much Money This Year as in All of 2023 (The Information).
- The AI artist who used Bad Bunny's voice — and shot to fame (Rest of World).
- Amazon's carbon emissions fell slightly in 2023 but are still much higher than they were when it made a big climate pledge (The Verge).
- With Chevron reversal, Supreme Court paves way for a 'legal earthquake' (CNBC).
Elon Musk is building the 'most powerful training cluster in the world'

Source: Tesla
Elon Musk's xAI has ended talks with Oracle to rent more specialized Nvidia chips — in what could have been a $10 billion deal — according to The Information. Musk is instead buying the chips himself, all to begin putting together his planned "gigafactory of compute."

The details: Musk confirmed in a post on Twitter that xAI is now working to build the "gigafactory" internally.

The reason behind the shift, Musk explained, is "that our fundamental competitiveness depends on being faster than any other AI company. This is the only way to catch up."

"xAI is building the 100k H100 system itself for fastest time to completion," he said. "Aiming to begin training later this month. It will be the most powerful training cluster in the world by a large margin."
xAI isn't the only one trying to build a supercomputer; Microsoft and OpenAI, also according to The Information, have been working on plans for a $100 billion supercomputer nicknamed "Stargate."

Why it matters: The industry is keen to pour more and more resources into the generation of abstractly more powerful AI models, and VC investment in AI companies, as we noted yesterday, is growing.

But at the same time, concerns about revenue and return on investment are mounting, with more and more analysts gaining confidence in the idea that we are in a bubble of high costs and low returns, something that could be compounded by multi-billion-dollar supercomputers.

Step-By-Step to a $100,000+ / Year Career Using AI

Zero To Mastery has taught tech skills to over 1,000,000 people. Graduates have been hired at companies like Tesla, NVIDIA, Apple, Google, NASA and many more.

And their recently launched AI courses take you step-by-step from complete beginner to an AI developer who can get hired this year.

→ Learn To Code + Master Prompt Engineering
→ Build AI Apps + Work With Large Language Models
→ Complete Career Toolkit to Land Interviews & Get Hired

But the best part? You won't be learning alone.

You'll get access to the private ZTM Discord with 1,000s of other students, alumni, mentors and even your instructors. It's the most active and highly rated Discord of any online learning platform (Google it if you don't believe us).

For the first time ever (only for Deep View readers), ZTM is offering a 7-day free trial so that you have no excuses! And if you sign up before July 14th, you'll also get 15% off an annual subscription (that's less than $1/day) once your trial is over.

Start Your Free Trial and Lock In Your 15% OFF (offer ends July 14).
Poll: Despite global pressure, Americans want the tech industry to slow down on AI

Source: Created with AI by The Deep View
A little more than a year ago, the Future of Life Institute published an open letter calling for a six-month moratorium on the development of AI systems more powerful than GPT-4. Of course, the pause never happened (and we didn't seem to stumble upon superintelligence in the interim, either), but it did elicit a narrative from the tech sector that, for a number of reasons, a pause would be dangerous.

One of these reasons was simple: sure, the European Union could potentially institute a pause on development — maybe the U.S. could do so as well — but nothing would require other countries to pause, which would let those countries (namely, China and Russia) get ahead of the U.S. in the 'global AI arms race.'
As the Pause AI organization itself put it: "We might end up in a world where the first AGI is developed by a non-cooperative actor, which is likely to be a bad outcome."

But new polling shows that American voters aren't buying it.

The details: A recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) — and first published by Time — found that Americans would rather fall behind in that global race than skimp on regulation.

75% of Republicans and 75% of Democrats said that "taking a careful controlled approach" to AI — namely by curtailing the release of tools that could be leveraged by foreign adversaries against the U.S. — is preferable to "moving forward on AI as fast as possible to be the first country to get extremely powerful AI."

A majority of voters are also in favor of more stringent security measures at the labs and companies developing this tech.
The polling additionally found that 50% of voters surveyed think the U.S. should use its position in the AI race to prevent other countries from building powerful AI systems by enforcing "safety restrictions and aggressive testing requirements."

Only 23% of Americans polled believe that the U.S. should eschew regulation in favor of being the first to build a more powerful AI.

"What I perceive from the polling is that stopping AI development is not seen as an option," Daniel Colson, the executive director of the AIPI, told Time. "But giving industry free rein is also seen as risky. And so there's the desire for some third way."

"And when we present that in the polling — that third path, mitigated AI development with guardrails — is the one that people overwhelmingly want."
This comes as federal regulatory efforts in the U.S. remain stalled, with the focus shifting to uneven state-by-state regulation.

Previous polling from the AIPI has found that a vast majority of Americans want AI to be regulated and wish the tech sector would slow down on AI; they don't trust tech companies to self-regulate.

Colson has told me in the past that the American public is hyper-focused on security, safety and risk mitigation; polling published in May found that "66% of U.S. voters believe AI policy should prioritize keeping the tech out of the hands of bad actors, rather than providing the benefits of AI to all."

Underpinning all of this is a layer of hype and an incongruity of definition. It is not clear what "extremely powerful" AI means, or how it would differ from current systems.

Unless artificial general intelligence is achieved (and agreed upon in some consensus definition by the scientific community), I'm not sure how you measure "more powerful" systems. As current systems go, "more powerful" doesn't mean much more than predicting the next word at slightly greater speeds.

Aggressive testing and safety restrictions are a great idea, as is risk mitigation. However, I think it remains important for regulators and constituents alike to be clear about which risks they want mitigated. Is the focus on mitigating the risk of a hypothetical superintelligence, or is it on mitigating the reality of algorithmic bias, hallucination, environmental damage, etc.?
Do people want development to slow down, or deployment?

To once again call back Helen Toner's comment of a few weeks ago: how is AI affecting your life, and how do you want it to affect your life?

Regulating a hypothetical is going to be next to impossible. But if we establish the proper levels of regulation to address the issues at play today, we'll be in a better position to handle that hypothetical if it ever does come to pass.

Which image is real?
A poll before you go

Thanks for reading today's edition of The Deep View!

We'll see you in the next one.

Your view on personalized AI health coaches:

Around a third of you would consider it if developers could prove it was both reliable and accurate. 20% of you would be into it now, no questions asked, while another 20% said you'd never use one. 22% of you said you have the internet.

Do you agree that a cautious approach is preferable to being the first to build more powerful systems?