Good morning. This week (Aug. 28), Nvidia will report earnings. For several quarters now, the company has blown past investor expectations as the AI gold rush has directed billions of dollars into the necessary semiconductor firms (makers of those trusty picks and shovels).

With concerns over investment returns mounting, this could be a pivotal moment for the industry. But if Nvidia turns in another massive quarter, it could very well spell more of the same as the gold rush continues.

Either way, we’ll be watching very closely.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:
AI for Good: NASA’s ‘Wildfire Digital Twin’

Source: NASA
NASA in May developed a tool, powered by artificial intelligence and machine learning, that forecasts wildfire burn paths in real time.

The details: The Wildfire Digital Twin project achieves a resolution two orders of magnitude finer than current global models, according to NASA, and produces forecasts in mere minutes, as opposed to the hours it takes to get current models off the ground.

The tool draws on a wide range of data sources that together allow scientists and firefighters to monitor wildfires as they unfold, right from the ground. “We want to be able to provide firefighters with useful, timely information,” Professor Milton Halem, who leads the project, said in a statement. “There is generally no internet, and no access to big supercomputers, but with our API version of the model, they could run the digital twin not just on a laptop, but even a tablet.”
Why it matters: Wildfires have been getting larger, more intense and more frequent for years, a trend that is likely to continue. That means we need a larger arsenal of better, more targeted tools for wildfire mitigation.
Register today for Ashby’s upcoming launch event

When it comes to ATSs, Ashby is the best (that’s why we use it as our all-in-one recruiting platform at The Deep View).

Ashby’s got a new feature coming, AI-assisted application review, designed to make it easier for companies to handle high volumes of inbound applications while maintaining a great candidate experience.

On September 10, Ashby will:

- Walk through and unveil the new AI-Assisted Application Review feature
- Cover existing Ashby features that help you manage high inbound volume
- Detail best practices, tactics and important considerations when introducing this technology into your workflow
Save your spot by signing up for the launch event today.
Poll: How musicians feel about GenAI

Source: Created with AI by The Deep View
APRA AMCOS, a music rights management organization, recently polled more than 4,000 musicians from New Zealand and Australia about generative artificial intelligence and its role in the music industry.

The details: More than half of those surveyed think GenAI can support the “human creative process.” But significant portions believe its application will stay narrow, confined to tasks like recording, mixing, mastering, marketing and generating promotional content.

On the actual music creation side of things, slightly less than half of those surveyed think GenAI will help unlock “new forms of creativity.” 27% are “AI refusers” and 20% would “rather not” use the tech. At the same time, the report acknowledged a real risk of GenAI threatening these musicians’ livelihoods; 82% of those surveyed are concerned that the use of AI in music could prevent them from making a living from their work.
The copyright factor: And that leads me to copyright, a question that has resulted in high-profile lawsuits across the media industry. 96% said AI companies should be required to disclose when they train models on copyrighted works; 95% said rightsholders must be asked permission before their work is consumed as training data; 93% said AI tracks should be identified as such; and 93% said rightsholders should be paid when their work is used to train a music-generation model.

“The issue lies not in the technology itself, but in the secretive corporate practices that erode trust within the global creative sector,” APRA AMCOS said in a statement. “Transparency is crucial to this process.”
AI will run 80% of project management by 2030, per Gartner

But you don’t have to wait till then: backed by Zoom, Atlassian, and Y Combinator, Spinach AI makes that a reality today.

Invite Spinach to your next team or project meeting and it will:

- Help run your meeting on Google Meet, Zoom, or Microsoft Teams
- Summarize and share notes and action items in email, Slack, Google Docs, Confluence, or Notion
- Help you update tasks and tickets in Jira, Asana, Trello, ClickUp, Monday or Linear
Which means you can spend more time building and growing the business.

Try Spinach in your next meeting and get an unlimited 14-day trial!
- The world’s first airport to require biometric boarding is set to arrive in 2025 (CNBC).
- DeepMind employees reportedly push Google to end military contracts (The Information).
- Telegram messaging app CEO Durov arrested in France (Reuters).
- The Boeing Starliner Astronauts Will Come Home on SpaceX's Dragon Next Year (Wired).
- Fans use AI deepfakes to keep a slain Indian rapper’s voice alive (Rest of World).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
We might finally have a definition of ‘open source’

Source: Unsplash
A recurring theme in the AI industry is a lack of unified definitions. Artificial intelligence itself is a term that has increasingly frustrated researchers for being vague and inaccurate. Artificial general intelligence, a hypothetical AI with human-like intelligence, is likewise mired in debate (researchers can’t seem to agree on what AGI would look like, if it ever arrived).

And then there’s “open source,” a term similarly rooted in technical disagreement. Some developers, Meta among them, claim their models are open source even as much of the detailed information about those models (such as the training data) remains obscured. Some researchers have begun to call this “open-washing.”

What happened: The Open Source Initiative last week published a new definition of open-source AI, the result of deliberations with a group of 70+ researchers in the space.

Under the new definition, an open-source system must be usable for any purpose without permission or licenses; its components must be accessible for study by researchers; the system must be modifiable; and the system, with or without modifications, must be shareable. OSI added that “sufficiently detailed” information about the training data used to build the model must be provided, enough that a skilled person could recreate a similar system.
The data aspect, OSI said, is the most “hotly debated” component of the definition. But such data is vital to understanding how a system works; under this new definition, systems that don’t share that data aren’t really open source.
OpenAI whistleblowers like SB 1047

Source: Unsplash
While it doesn’t quite constitute legislative progress, Anthropic last week wrote in a public letter that the benefits of SB 1047 “likely outweigh its costs.” This, of course, comes just a few days after the bill was watered down, partly through amendments suggested by Anthropic.

What they said: Anthropic said the bill addresses “real and serious concerns” about the potential “catastrophic risk” of increasingly powerful AI systems. After the amendments, the bill presents a “feasible compliance burden” for Anthropic and its peers.

Anthropic said SB 1047 would provide certain security measures and more transparent safeguards that would push AI science forward without hampering innovation. But Anthropic isn’t fully behind the bill; the company said certain elements of the pre-harm enforcement it lays out remain a cause for “concern.”
California Sen. Scott Wiener, the lead author of the legislation, said on Twitter that he incorporated “some, but not all” of the amendments requested by Anthropic. In response to questions about a conflict of interest in Anthropic’s role in shaping the latest form of the bill, he said: “that’s called democracy.”

Now, at the same time that Anthropic has said publicly that the bill is good enough to go, other companies (Google, Meta and, more recently, OpenAI) remain staunchly opposed to it.

Two former OpenAI engineers turned whistleblowers, William Saunders and Daniel Kokotajlo, last week published an open letter regarding OpenAI’s stance on the bill. They said they joined OpenAI to “ensure the safety of the incredibly powerful AI systems” the company is building, but resigned because “we lost trust that it would safely, honestly and responsibly develop its AI systems. In light of that, we are disappointed but not surprised by OpenAI’s decision to lobby against SB 1047.”

The two wrote that voluntary safety disclosures aren’t enough. “Sam Altman, our former boss, has repeatedly called for AI regulation. Now, when actual regulation is on the table, he opposes it … OpenAI's approach is in contrast to Anthropic's engagement, though we disagree with some changes that Anthropic requested … OpenAI has instead chosen to fearmonger and offer excuses.”
That we have a governance problem when it comes to AI is pretty clear.

Regulation makes the most sense at the federal level; state-by-state regulation results in a patchwork of varying requirements that makes compliance challenging. But things are moving exceptionally slowly at the federal level, and, as SB 1047 has shown, Big Tech seems to balk at any piece of legislation anyone can come up with.

As cognitive scientist and AI researcher Gary Marcus has said, self-regulation and voluntary commitments are not enough here. “If we don’t figure out the governance problem, internal and external, before the next big AI advance — whenever it may come — we could be in serious trouble,” Marcus wrote.
Just about every other industry, from airlines to pharmaceuticals and healthcare, to construction, finance, fishing and oil production, has to navigate a complex web of regulatory bodies and requirements.

If the tech companies are saying that the catastrophic risk is real, then they ought to be regulated for that, regardless of the hype inherent to such statements.

Which image is real?

🤔 Your thought process:

Selected Image 1 (left):

“The tie knot in Image 1 was more convincing, with a dimple. But tighten it up, sir!”

Selected Image 2 (right):

“Looks less augmented.”
💭 A poll before you go

Here’s your view on non-medically-necessary brain implants:

More than 60% of you said you wouldn’t ever get a brain implant unless you absolutely needed one, but 15% said you would. A few of you said that the capabilities Musk is talking about are too far away to matter, and the rest were undecided.

Yes:

Nope:

How do you feel about AI in music?

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.
|
|
|