Good morning. I spoke with Michael Stewart, the managing partner of Microsoft’s VC arm M12, about the investment side of the AI sector.

Like many VCs, Stewart believes we’re in the middle of an investing bubble. But he thinks we’re in the middle of a second bubble as well: a user bubble.

Read on for the full story.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

AI for Good: Tracking and mitigating air quality
Paper: An analysis of the ‘fair use’ defense
OpenAI is at level 1 (of 5) on its mission to achieve AGI
Managing partner of Microsoft VC arm says we’re living through 2 AI bubbles
AI for Good: Tracking and mitigating air quality

Source: Unsplash
Several years ago, the United Nations Environment Programme (UNEP) co-founded the GEMS Air Pollution Monitoring Platform. The platform aggregates data from around 25,000 air quality monitoring stations across 140 countries and applies AI to provide insights based on that information.

The details: The real-time insights derived from this data can help inform certain health protection measures.

The UNEP has reported that air quality management efforts require high-quality, credible air quality data, and the organization has said that artificial intelligence is a key tool in this process.
Why it matters: The UNEP said in 2022 that, globally, nine out of 10 people are exposed to air pollution. It said air pollution is one of the “most significant environmental health issues” active today, as it impacts public health in addition to agricultural efforts and biodiversity.

“These platforms allow both the private and public sector to harness data and digital technologies in order to accelerate global environmental action and fundamentally disrupt business as usual,” David Jensen, the coordinator of the UNEP’s digital transformation task force, said at the time. “Ultimately, they can contribute to systemic change at an unprecedented speed and scale.”
Join this 3-hour Power-Packed Workshop worth $199 at no cost and learn 20+ AI tools like Midjourney, Invideo, Humata, Claude AI, HeyGen and more.

Additionally, this workshop will help you:

Do quick Excel analysis & make AI-powered PPTs
Earn over 20K per month using AI
Build your own personal AI assistant to save 10+ hours
Research faster & make your life a lot simpler & more…
👉 Register here to become an AI Genius!
Paper: An analysis of the ‘fair use’ defense

Source: Unsplash
Professor Jane Ginsburg, a Columbia Law School expert on U.S. copyright law, recently published an in-depth analysis of the legal precedent surrounding the practice of scraping copyrighted content to train artificial intelligence models.

The details: Ginsburg concluded that “fair use will not accomplish the task of immunizing inputs in a way that provides sufficient security to AI entrepreneurs unless they can ensure, upstream, that outputs will not infringe.”

Ginsburg found that there are two elements to the conversation; one involves the outputs and one doesn’t. In decoupling the outputs from the inputs, she found that the determination of whether the act alone of copying content into training data is “transformative” (and protected under fair use) depends on whether there is a market for that data. “Even if the outputs might not infringe particular inputs, commercial copying (at least) to create training data would be for the same purpose, and might therefore fail a first-factor fair use inquiry without a ‘compelling justification’ for supplanting authors’ markets,” she said.
Ginsburg said that if the courts ignore this, the question of fair use as it pertains to AI training will “turn on the non-infringing character of the outputs,” something that will likely prove to be at least something of a challenge, considering the regularly evidenced instances of copyright-infringing output.

She said that developers would need to design a system that would “refrain from output-level copying altogether,” something that — considering that an LLM’s outputs are derived from its inputs — might not be possible to establish with any level of certainty.

The context: A number of copyright-related lawsuits are currently in motion, though none has reached completion; the U.S. Copyright Office, meanwhile, has yet to weigh in on the issue.
AT&T says data from 109 million US customer accounts illegally downloaded (Reuters).
Inside the $43 million Veterans Affairs simulation hospital where doctors are piloting new tech (CNBC).
UK universities want millions from new government for startups hatched in academia (Semafor).
Silicon Valley moguls’ favorite fellowships (The Information).
Regrow hair in as few as 3-6 months with Hims’ range of treatments. Restore your hairline with Hims today.*
OpenAI is at level 1 (of 5) on its mission to achieve AGI

Source: Created with AI by The Deep View
OpenAI exists for one singular reason: to invent artificial general intelligence, which OpenAI has defined as a system that is generally “smarter” than humans.

Beyond being the driving force for just about every ounce of hype that has proliferated within this industry, AGI would mark a game-changing moment for OpenAI; based on the rather strange structure of the company’s capped for-profit/nonprofit mix, “the board determines when we’ve attained AGI…Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

At an all-hands meeting last week, OpenAI shared a new set of tiers designed to track its progress toward reaching AGI, Bloomberg reported.

The first tier involves chatbots; the second involves human-level problem solving, nicknamed “reasoners”; the third involves “agents”; the fourth involves AI that can aid in new inventions; and the fifth involves AI that can “do the work of an organization.” OpenAI said it is currently at Level One, but on the brink of Level Two.
The term AGI is not present on this list of tiers.

The context: As we’ve mentioned before, there is no universal scientific definition of AGI, nor is it universally accepted that AGI will ever be possible (some researchers have compared the belief that AGI is on its way to religious fervor). What scientists do know, however, is that large language models are not AGI and likely do not represent a pathway to it, as they are incapable of extrapolating beyond their training data.

A recent paper by Google DeepMind similarly attempted to carve out five levels designed to track AGI progress, though it took a markedly different approach than OpenAI did, categorizing capabilities on a scale of ‘emerging’ to ‘superhuman.’ OpenAI’s five tiers, as Bloomberg pointed out, seem quite similar to the five levels used to describe autonomous driving capabilities.
My view: OpenAI is in the business of AI. This company has a tremendous amount of incentive to continue fueling hype around the tech. Hype drives investment, which is good for business.

These tiers make very clear that OpenAI’s focus is on enticing the excitement of the corporate community; its categories are all about business application — if OpenAI turned around tomorrow and said it achieved Level 5, plenty of corporate executives would be thrilled at all the labor costs they could cut.

But this is all divorced from reality and steeped in problematic ethics. I think these tiers make clear that what OpenAI is interested in is not curing cancer or solving climate change (which are naively simplistic goals, anyway) but instead seeing some return on its investment.

Learn AI like it’s 2024

Ready to become a master of AI in 2024?

Brilliant has made learning not just accessible, but engaging and effective. Through bite-sized, interactive lessons, you can master concepts in AI, programming, data analysis, and math in just minutes a day—whenever, wherever.

Learn 6x more effectively with interactive lessons
Compete against yourself and others with Streaks and Leagues
Explore thousands of lessons on topics from AI to going viral on Twitter
Understand the concepts powering everything from LLMs like ChatGPT to quantum computers
Unlike traditional courses, Brilliant offers hands-on problem solving that makes complex concepts stick, turning learning into a fun and competitive game.

Join 10 million other proactive learners around the world and start your 30-day free trial today.

Plus, readers of The Deep View get a special 20% off a premium annual subscription.
Managing partner of Microsoft VC arm says we’re living through 2 AI bubbles

Source: Created with AI by The Deep View
The AI hype that has run rampant over the past year is perhaps best illustrated by the investing sector.

Nvidia has about tripled in value over the past 12 months; Microsoft’s valuation spiked from around $2 trillion to more than $3 trillion; and private companies, meanwhile, have reached enormously lofty valuations, with OpenAI last valued at around $86 billion.
In 2023, AI startups raked in about $50 billion in venture capital investments, according to Crunchbase data. And 2024 has so far shown a continuation of that trend; the first quarter of the year saw $12.2 billion invested in VC-backed AI startups across nearly 1,200 deals (a number that more than doubled in the second quarter).

But the reality of AI has not been quite so rosy — AI startups in 2023 collectively spent $50 billion on Nvidia chips alone while bringing in only $3 billion in revenue, according to Sequoia. Executives, meanwhile, remain concerned about the security risks and unreliability that could come hand in hand with hasty enterprise deployment.
And these lofty valuations may not be sustainable; Nvidia is trading at a forward price-to-earnings ratio of 45.6 (Microsoft is at 35 and Tesla is at nearly 70).

I’ve spoken with many venture capitalists who have said that we are currently in the throes of an AI bubble. Michael Stewart, the managing partner of Microsoft’s corporate VC arm M12, agrees.

I sat down with Stewart to discuss the state of AI in the VC world.

Two different kinds of bubbles: Stewart said that “there is absolutely an investment bubble” whose consequences we are already beginning to see. He expects it to get worse over the next year.

Corey Brickley Illustration. Justice for Palestine (@CoreyBrickley):
“people always ask me when the bubble is going to pop and i think literally the answer is whenever GenAI startups decide to raise their prices to the amount necessary to turn a profit”

Quoting Chris Sharpes (@chrissharpesVO): “A colleague of mine works in digital training and he mentioned 'Oh we're dropping AI voices' When I asked why: 'Their prices keep going up, they want so much money for so few voices, they must be getting desperate. We can hire an actor for the same price'”

5:15 AM · Jun 29, 2024
The bubble that he’s more concerned with is something he calls the “user bubble.”

“The first applications we've focused on as an industry are definitely catering a lot to highly fact-and-information-heavy users,” Stewart said. “Yet there’s really a finite number of people who need information looked up at all times.”

He thinks it’ll be similar to the internet boom, where people at first didn’t see how internet-based applications (calls, messaging, social media, music, etc.) would fit into their lives. Now, of course, the internet has become the backbone of many people’s daily habits, a shift that did not happen overnight.

“I’d like to encourage more people to think about how it changes people’s lives. About how we as an industry need to force the model use, the application quality and the respect for personal data to create applications that do become part of your everyday life,” he said. “I feel like that phase is just starting now.”
A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Your view on whether sometimes the best AI strategy is no AI strategy:

In a first, a majority of you (55%) agree that sometimes, the best AI strategy is no AI strategy; in past polls, we had never had a majority in agreement.

Around 34% of you said the answer is ‘complicated’ or ‘something else.’

Absolutely:

“While it is a tool that can enhance a variety of functions in almost any business, the business itself must be viable, well-managed and in tune with its market. Otherwise, AI may only exacerbate dysfunction or accelerate misguided directions”
It’s complicated:

“AI is a tool and should always be seen that way. Calling it ‘the answer’ is greatly oversimplified. So, I do agree with Rashidi in that it isn't ‘the’ answer for all business, I also believe all business can benefit from it, in various degrees.”
What kind of real-world AI use cases do you want to see? Give us some specifics!

*Hims Disclaimer: Based on studies of topical and oral minoxidil. Hair Hybrids are compounded products and have not been approved by the FDA. The FDA does not verify the safety or effectiveness of compounded drugs. Individual results may vary.