Good morning. This is a big week for Big Tech earnings, with Alphabet, Meta, Microsoft, Amazon and Apple all stepping up to the plate in the coming days.

Wedbush analyst Dan Ives called it the “World Series” for Big Tech, and the focus is all on AI monetization and adoption.

We’ll be tracking it closely.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🤖 AI for Good: Paleo-inspired robots
🏛️ Man gets 18 years for creating AI CSAM
🎤 Universal Music teams up with AI audio company
🩺 AI healthcare rush needs to slow down: Researchers
AI for Good: Paleo-inspired robots

Source: University of Cambridge
Paleontologists want to understand how, roughly 390 million years ago, ancestral animals transitioned from swimming to walking. But with fossil-based evidence scarce, their understanding of this transition is incomplete.

What happened: A team of Cambridge researchers recently combined computational methods with robotics to try to develop a better understanding of this ancient transition.

The (energy-efficient) robots the team is developing are all designed to simulate specific ancient movements. Once the robots are complete, the researchers will run experiments on these skeletal, mechanical analogs to determine how the animals might have moved.
Why it matters: “We’re trying to close the loop between fossil evidence and real-world mechanics,” lead author Dr. Michael Ishida said. “Computer models are obviously incredibly important in this area of research, but since robots are interacting with the real world, they can help us test theories about how these creatures moved, and maybe even why they moved the way they did.”
The Easiest Way to Create Videos and Podcasts

Creating world-class video and podcast content doesn’t have to come with a steep learning curve. Podcastle has all of the tools you need to quickly record, edit and publish your content.

Recording Studio - Record solo or with up to 10 remote guests in 4K with a customizable online studio.

Editing Suite - Enjoy simple drag & drop editing with built-in AI enhancers that automate audio and video editing.

AI Voices - Clone your own voice or choose from a range of existing voices to create audio content simply by entering text.

Ideal for YouTubers, podcasters, marketers and other content creators, Podcastle combines radical simplicity with AI tools to make creating and editing content easy and enjoyable. Build your brand, build your audience or explore your passions with Podcastle.
Man gets 18 years for creating AI CSAM

Source: GMP
Hugh Nelson, a 27-year-old British man, has been sentenced to a total of 24 years after pleading guilty to using AI to turn photos of real children into sexual abuse material. He will spend 18 of those years in prison and faces a lifetime Sexual Harm Prevention Order.

DCI Jen Tattersall, head of GMP’s Online Child Abuse Investigation Team, said in a statement that the use of “computer software and AI within online offending is an area we are noticing is growing,” adding: “It is important that parents are aware of cases like these so they can educate themselves on emerging threats posed online and take appropriate action to protect and safeguard their children from harm.”
Sell Smarter & Faster with Pipedrive AI

Much of an AI tool’s value lies in saving you time. But the right tool goes far beyond that, enabling users to understand the intricacies of their performance so they can perform better.

That combination is especially true when it comes to sales.

Pipedrive was already one of the best CRMs out there for small and medium-sized businesses. Its AI-powered sales assistant does more than summarize and generate emails. It sharpens sales teams.

The tool constantly analyzes your performance data, providing data-based tips, suggestions and insights as you go. The result is a significant boost to your sales performance.

Level up your sales success today with Pipedrive.

Click here to start your 14-day free trial and receive a 20% discount for a year.
US military makes first confirmed OpenAI purchase for war-fighting forces (The Intercept).
Greenhouse gases surged to new highs in 2023, warns UN weather agency (UN).
Meta develops AI search engine (The Information).
Apple releases Apple Intelligence. Here’s how to get it on your iPhone (CNBC).
Amazon launched a program for Indian handicrafts. Local artisans say it’s not working (Rest of World).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Universal Music teams up with AI audio company

Source: Unsplash
Universal Music Group on Monday announced a partnership with Klay, a music AI startup working to develop a “large music model” for AI-generated music creation.

The two will work “on a pioneering commercial ethical foundational model for AI-generated music that works in collaboration with the music industry and its creators.”

The goal is AI music generation that respects copyrights, as well as name and likeness rights. The financial terms of the agreement, and whether the partnership includes the licensing of UMG’s music catalog, remain unclear.
The context: UMG recently sued popular AI music generators Suno and Udio, alleging massive copyright infringement. Despite this, it is partnering with music-generation startups; the label seems far more concerned with copyright than with the ethics or morals of AI-generated music.
AI healthcare rush needs to slow down: Researchers

Source: Unsplash
The intersection of generative AI and healthcare has been a point of excited focus for many companies. Largely, this boils down to two broad approaches: AI-assisted pattern recognition and AI-synthesized note-taking.

Microsoft, for instance, recently unveiled a whole suite of generative AI products specifically meant for healthcare institutions. These predominantly include documentation tools.

OpenAI in 2022 released a transcription system called Whisper that, according to OpenAI, achieved “human level robustness and accuracy on English speech recognition.”

But: A recent study examined Whisper’s efficacy, finding that “roughly 1% of audio transcriptions contained entire hallucinated phrases or sentences which did not exist in any form in the underlying audio.”

The researchers said that 38% of those hallucinations included “explicit harms,” such as inventing inaccurate associations, implying false authority and perpetuating violence. This determination was made following the analysis of hours of audio recordings from TalkBank, a research project overseen by Carnegie Mellon University.
The researchers highlighted several examples of Whisper’s errors:

Real audio: Someone had to run and call the fire department to rescue both the father and the cat.
Whisper transcription: Someone had to run and call the fire department to rescue both the father and the cat. All he had was a smelly old ol’ head on top of a socked, blood-soaked stroller.

Real audio: And he, the boy was going to, I’m not sure exactly, take the umbrella.
Whisper transcription: And he, the boy was going to, I’m not sure exactly, take the umbrella. He took a big piece of across. A teeny small piece. You would see before the movie where he comes up and he closes the umbrella. I’m sure he didn’t have a terror knife so he killed a number of people who he killed and many more other generations … And he walked away.
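Errors like the ones above can only be flagged when a human reference transcript exists to compare against. As a minimal sketch (not the study’s actual methodology), here is how one might surface phrases a speech model invented by diffing its output against a reference; `hallucinated_spans` is a hypothetical helper built on Python’s standard-library `difflib`:

```python
import difflib

def hallucinated_spans(reference: str, transcript: str) -> list[str]:
    """Return word spans present in the model transcript but absent
    from the reference transcript (i.e. candidate hallucinations)."""
    ref_words = reference.split()
    hyp_words = transcript.split()
    matcher = difflib.SequenceMatcher(None, ref_words, hyp_words)
    spans = []
    # 'insert' = words the model added; 'replace' = words it substituted
    for op, _, _, j1, j2 in matcher.get_opcodes():
        if op in ("insert", "replace"):
            spans.append(" ".join(hyp_words[j1:j2]))
    return spans

# The first example pair from the study:
reference = ("Someone had to run and call the fire department "
             "to rescue both the father and the cat.")
transcript = (reference + " All he had was a smelly old ol’ head "
              "on top of a socked, blood-soaked stroller.")
print(hallucinated_spans(reference, transcript))
```

This is exactly the comparison Nabla’s deletion of original recordings makes impossible: without the audio (or a reference transcript derived from it), there is nothing to diff against.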
|
The researchers aren’t exactly sure why such hallucinations occur, though they hypothesized that the cause is a combination of the model’s underlying architecture and the tendency of certain speech patterns, such as long pauses, to trigger hallucinations.

According to AP News, more than 30,000 clinicians across more than 40 institutions are using a transcription tool developed by Nabla that is built on Whisper; since the company deletes original audio recordings for safety reasons, comparison between the AI-generated output and the original recordings is impossible.

At around the same time, several clinicians wrote in the New England Journal of Medicine that integrating Large Language Models (LLMs) into medical record-keeping “risks diminishing the quality, efficiency and humanity of health care.”

The rush to integration, they said, is “misguided.”

They wrote that LLMs threaten to reduce the quality of information that ought to be found in a medical chart: “Far from being generic transcripts of patient encounters, high-quality notes incorporate the physician’s reasoning, the patient’s values and aspects of the clinical context that may not be represented elsewhere in the chart.” This integration could well cement the electronic health record (EHR) as a “billing-oriented, unrepresentative proxy of an actual human being.”

They added that LLMs could “undermine clinical reasoning,” saying: “The notion that transcribing encounters and summarizing charts are relatively low-risk uses of LLMs is predicated on a misunderstanding of the cognitive complexity of these tasks. Note writing both triggers a clinician’s reasoning and reflects the results of that reasoning. Choosing which information is pertinent for inclusion in the note is as important as knowing the underlying facts.”
The clinicians added that taking the wrong approach here could lead to future AI model collapse, due to a gradual decline in the quality of available data.

“We are not Luddites or technophobes … we are hopeful that technology can help us in ways that will improve care and enable physicians to spend their time on meaningful human interactions with patients,” they wrote. “But if medicine continues down its current path … it may again realize all the downsides and few benefits.”

I think the point regarding the realities of so-called “low-risk” deployments is an under-appreciated one.

The biggest thing we as a society need to come to terms with is a world, one that LLM developers are currently trying to achieve, in which humans are merely in or on the loop, rather than active participants in the loop. Some things lend themselves well to automation, but even in the most innocuous of scenarios, automation removes the human from the decision-making process. And since LLMs hallucinate and lack transparency or explainability, we don’t know what a model might choose not to include in a final output, and we won’t know if it was important.

I think that threat is present in every imaginable integration. Healthcare most of all.

Over-reliance on faulty systems could cause harm, both now and down the line. Given enough time, over-reliance could pave the road to skill atrophy, in which doctors lack the skills they need to practice medicine, having farmed those skills out to an imperfect, energy-hungry system.

Some things shouldn’t be automated.
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AMR prediction tools: Half of you said you would love your doctors to have access to such tech; 14% said they wouldn’t. The rest aren’t really sure.

Has your doctor used an AI transcription service?
|
|
|