Good morning. OpenAI began its journey as a nonprofit with the humble goal of safely building artificial intelligence systems to benefit humanity. For several years now, though, it has operated under a strange hybrid structure, blending a nonprofit with a capped for-profit arm.

Reuters reported this weekend that OpenAI’s pursuit of new funding, and that big, shiny new $150 billion valuation, is contingent upon OpenAI killing that structure and removing the profit cap for investors.

Sam Altman has certainly come a long way.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:
AI for Good: Waste management

Source: Zabble
The U.S. is home to more than 2,600 active landfills (and thousands of closed ones). The environmental cost of these landfills is extreme and multi-faceted: they emit methane and other greenhouse gases while consuming and damaging hundreds of thousands of acres of land.

This has led to ‘Zero Waste’ initiatives, which aim to reduce waste to a bare minimum by transforming the life cycles, and end-of-life cycles, of products.

What happened: Zabble designed an AI-driven platform intended to help corporate buildings, schools and hospitals achieve their zero-waste goals.

The platform allows for AI-enabled bin tagging (through mobile photos), which sends alerts to staff when the system detects “unacceptable items” inside a given bin. The platform also introduced real-time analytics and insights that enable teams to better identify their sources of waste. As of 2022, Zabble users had kept 100 tons of waste out of landfills.
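For the technically curious, the general pattern is simple enough to sketch. Below is a minimal, hypothetical illustration of photo-based contamination tagging; Zabble hasn’t published its implementation, and every name, category and threshold here is invented. A photo of a bin is run through a vision model, and any confidently detected item that doesn’t belong in that bin’s waste stream triggers an alert to staff.

```python
# Purely illustrative sketch of the bin-tagging pattern described above.
# Zabble's actual system is proprietary; every name, category and threshold
# here is hypothetical.
from dataclasses import dataclass

# Items each waste stream is allowed to contain (hypothetical taxonomy).
ACCEPTED = {
    "compost": {"food_scraps", "soiled_paper"},
    "recycling": {"cardboard", "glass", "metal_cans"},
    "landfill": {"film_plastic", "broken_ceramics"},
}

@dataclass
class Detection:
    label: str         # item category predicted from the photo
    confidence: float  # model confidence, 0.0 to 1.0

def classify_photo(photo: bytes) -> list[Detection]:
    """Stand-in for a vision model; a real system would run inference here."""
    return [Detection("glass", 0.91), Detection("food_scraps", 0.85)]

def audit_bin(stream: str, photo: bytes, threshold: float = 0.8) -> list[str]:
    """Return confidently detected items that don't belong in `stream`."""
    return [
        d.label
        for d in classify_photo(photo)
        if d.confidence >= threshold and d.label not in ACCEPTED[stream]
    ]

if __name__ == "__main__":
    contaminants = audit_bin("compost", b"<bin photo>")
    if contaminants:  # "glass" doesn't belong in compost, so staff get an alert
        print(f"Alert staff: unacceptable items in compost bin: {contaminants}")
```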
In 2022, the U.S. Environmental Protection Agency awarded Zabble $400,000 to further develop its system.

“The technology that this research will advance reduces waste going to landfills, which is critical to protecting communities from pollution and reducing emissions of methane, a potent greenhouse gas,” EPA Pacific Southwest Regional Administrator Martha Guzman said in a statement at the time.
Transform your hiring game with Ashby

For a hectic startup, hiring can be a messy process. Ashby makes it so much easier. (That’s why writing this particular slot was so much fun: at The Deep View, we use Ashby and love it.)

Their interface is intuitive, and applicants are easily organized, making reviews quicker, more targeted and as painless as possible.

With Ashby, our scheduling is almost completely automated, and as it turns out, their new AI-assisted application review actually enables us to deliver a much more human experience. Having all our workflows in a single tool helps us deliver a better candidate experience with better data and more accuracy.

Plus, all of Ashby’s AI-backed features are reliable, secure and trustworthy (it checks all the boxes).

If you’re hiring, but want to handle that process without your usual migraines, switch to Ashby.
Meta is ready to consume more of your posts

Source: Created with AI by The Deep View
Months ago, Meta said that it planned to use user content across its suite of social media services to train its generative AI models. The company said at the time that the wealth of content posted by its users would give Meta’s AI an edge over the competition.

After this practice was stymied by regulators in Europe and the U.K., Meta said last week that it is finally ready to begin training its generative AI models on the content its U.K.-based users post.

Meta said that, since pausing the practice in the U.K., it has engaged “positively” with the Information Commissioner’s Office (ICO); the company will use all public content across Facebook and Instagram (shared by adults) to train its AI models. According to Meta, this means that its models will now “reflect British culture, history, and idiom, and that U.K. companies and institutions will be able to utilize” its models.

Ed Newton-Rex, the CEO of Fairly Trained, said in response that a “weakly-defined desire for cultural representation cannot be an excuse to commercially exploit the world’s creative output without permission or payment.”
Meta said that it won’t train on private messages or on content from users who have objected to the practice, though such objections were largely ignored in the U.S., where Meta already trains on user-generated content.

The ICO said in a statement that Meta has changed its processes, making it easier for people to object to the practice.

At the same time, Meta’s global privacy director, Melinda Claybaugh, admitted to the Australian government that the company has already scraped every public post (from adult Australian accounts) across Facebook and Instagram dating back to 2007. She confirmed that Australians were given no option to opt out and that photos of children featured on adult accounts were scraped as well.
Google’s AI will help decide whether unemployed workers get benefits (Gizmodo).
Microsoft’s hypocrisy on AI (The Atlantic).
Exclusive: OpenAI’s huge valuation hinges on upending corporate structure (Reuters).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Voters want AI protections for artists

Source: Unsplash
The copyright debate, as it pertains to the training and operation of generative AI models, largely breaks down into two perspectives: the tech companies believe it is fair, necessary (and protected under ‘fair use’) to train their models on content scraped without the consent, compensation or permission of its creators.

Creators disagree.

As does the American public (and a number of legal scholars).

What happened: A new poll, conducted by the Human Artistry Campaign (HAC), found that the majority of U.S. voters support artist protections against AI infringement.

Nearly 80% of those polled said developers should obtain explicit consent before using someone’s work to train a genAI model. More than 70% said artists should be compensated when their work is used by a developer. The vast majority of those polled, including Republicans, Democrats and Independents, support the creation of new legislation to protect artists’ voices, likenesses and work from being used by AI developers.
Around 90% of voters said developers ought to be required to label their output as AI-generated, and 85% said they should be required to maintain a public dataset of all content used to train their models.

“Voters may not have studied AI deeply, but they are guided by common sense and a collective gut feeling that using someone’s voice, image or creative works without authorization is reckless, invasive and wrong,” Dr. Moiya McTier, a senior advisor to the HAC, said in a statement.
New Poll: Americans want AI regulation

Source: Created with AI by The Deep View
Conversations about regulation have been loudly ongoing in the AI sector since Sam Altman testified before Congress in May of 2023, asking the government to regulate the industry before things got out of hand.

Altman’s plea surprised the legislators, with one senator saying: “I can’t remember when we’ve had companies come before us and plead with us to regulate them.”

But from that moment on, the idea of regulation became contentious, with the major tech companies and venture capital firms, Altman included, largely coming out staunchly against it. The common refrain revolves around the idea that regulation will “stifle innovation.”

On that front, I’ve heard much the opposite; experts have told me that regulation drives innovation. Suresh Venkatasubramanian, an AI researcher, professor and former White House tech advisor, told me last year that regulation will trigger “market creation; we're beginning to see companies form that offer responsible AI as a service, auditing as a service. The laws and regulations will create a demand for this kind of work."
And amid this push and pull between the regulators and those they’re trying to regulate, the legislative landscape has stalled out. We still have no federal AI regulation in the U.S.; what we do have is a patchwork of state attempts, some of which have drawn the ire (and lobbying attention) of the tech industry.

And throughout all of this, people have largely been in overwhelming support of regulation. A new poll, conducted by the Artificial Intelligence Policy Institute (AIPI) of a thousand U.S. voters, found that, once again, Americans favor regulation over unbridled innovation.

The results: 71% of those polled support mandatory cybersecurity standards for AI developers, with only 9% in opposition. 61% of voters think it should be mandatory for developers to give the U.S. AI Safety Institute model access, with only 17% saying it should be voluntary. 60% of voters said robust safety testing is more important than the speed of new releases, with only 15% saying the reverse.
Nearly 60% of voters also support California’s SB 1047 and want Gov. Gavin Newsom to sign the bill into law.

"This data shows that, unlike the loud voices of tech giants, voters aren't just into boundless innovation regardless of the consequences; they are demanding safety and accountability from the companies leading AI advancements,” Daniel Colson, the executive director of the AIPI, said in a statement.

“Voters are sending a clear message: the focus should be on making AI safe before pushing for rapid deployment," he added. "Americans want AI companies to take responsibility for the potential risks their technologies pose."

Newsom has until Sept. 30 to sign SB 1047; more than 100 current and former employees of AI developers, including OpenAI and DeepMind, have come out in support of the bill.

I think much of the public concern here stems from the fact that it is the public who will have to deal with the fallout of irresponsible deployments and mass experiments gone wrong.

There is a difference between development and deployment.

Which image is real?
💭 A poll before you go

Here’s your view on OpenAI’s future:

A third of you think Microsoft will just acquire OpenAI, and that Mr. Altman will become a Microsoft VP. Another third think OpenAI will run out of money in 12 months and will fundraise at a lower valuation. 18% think profits and an IPO will come within the next year, and four of you think the company is going under.

Do you think model developers should be required to share their models with the government before deployment?

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.
|
|
|