Good morning. Character.AI is dealing with yet another lawsuit alleging that the anthropomorphic chatbots on the platform led a 17-year-old boy to socially isolate himself and self-harm.

It is the second lawsuit of this type in recent weeks; the first followed the suicide of a 14-year-old boy.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today's newsletter:

🚁 AI for Good: Drones for marine debris
🚘 Tesla sued over FSD marketing … again
📊 Poll: Despite rising adoption, corporate execs don't know much about AI
💻 Character.AI sued over mental health decline in teenage users; allegedly encouraged user to murder his parents
AI for Good: Drones for marine debris

Source: NOAA
For several years now, scientists at NOAA's National Centers for Coastal Ocean Science (NCCOS) have been exploring a high-tech solution to the rising tide of debris — from discarded plastic to abandoned fishing nets — that threatens wildlife, marine ecosystems and human health.

The details: The approach combines machine learning, uncrewed drones and polarimetric imaging (PI), which detects reflected polarized light, a signature that looks different coming off man-made objects than off natural ones.

In test flights, the team was able to verify and validate all three technologies: the drones vastly improved shoreline surveys, which had previously been conducted on foot; the polarimetric imaging cameras "substantially" improved debris detection; and the machine learning models performed "nearly as well as a human" at detecting large debris. The algorithm struggled to detect smaller items — the team recommended future improvements, including expanding its training data.
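To make the polarimetric piece concrete, here is a minimal sketch of how a degree-of-linear-polarization (DoLP) map can be computed from a four-angle polarization camera and thresholded to flag likely man-made material. The function names, the 0.3 threshold and the four-angle setup are illustrative assumptions on my part, not NOAA's actual pipeline; smooth man-made surfaces like plastics and nets tend to reflect light with a stronger polarization signature than rough natural backgrounds, and a machine learning model would then classify the flagged regions.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Per-pixel degree of linear polarization (DoLP) from four intensity
    images captured behind polarizers at 0, 45, 90 and 135 degrees."""
    # Linear Stokes parameters estimated from the four polarizer angles
    s0 = (i0 + i45 + i90 + i135) / 2.0   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)

def flag_debris_candidates(i0, i45, i90, i135, threshold=0.3):
    """Return a boolean mask of pixels whose polarization signature is
    strong enough to suggest a man-made surface (hypothetical threshold)."""
    dolp = degree_of_linear_polarization(i0, i45, i90, i135)
    return dolp > threshold
```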
Why it matters: The team found the approach more than promising; at scale, it could enable targeted response and removal of shoreline debris by generating detailed debris maps, allowing cleanup crews to clear debris more quickly and effectively.
Meet your new AI assistant for work

Sana AI is a knowledge assistant that helps you work faster and smarter.

You can use it for everything from analyzing documents and drafting reports to finding information and automating repetitive tasks.

Integrated with your apps, capable of understanding meetings and completing actions in other tools, Sana AI is the most powerful assistant on the market.

Try it for free
Tesla sued over FSD marketing … again

Source: Contra Costa Fire Department
The family of a man who was killed while operating his Tesla on Autopilot mode last year has filed a lawsuit against Tesla, claiming that the root cause was the company's "fraudulent misrepresentation" of its driver-assist technology.

The details: The driver, Genesis Giovanni Mendoza-Martinez, crashed headlong into a firetruck in Walnut Creek, California, while Autopilot was engaged on his 2021 Model S. According to the lawsuit, which was removed from state to federal court this week, Autopilot had been engaged for the 12 minutes leading up to the crash, with no brake or accelerator input from the driver.

The complaint claims that Giovanni was convinced by the many public claims made by Tesla and its CEO, Elon Musk, that Autopilot was both safe and trustworthy. Brett Schreiber, the attorney representing the Mendoza family, told The Independent that the accident was "entirely preventable." "This is yet another example of Tesla using our public roadways to perform research and development of its autonomous driving technology," he said. "What's worse is that Tesla knows that many of its earlier model vehicles continue to drive our roadways today with this same defect putting first responders and the public at risk."
Tesla, according to The Independent, has argued that its cars are "reasonably safe" and that the accident was largely the result of Giovanni's negligence, a line of defense that is becoming familiar here.

The landscape: More than a dozen similar cases are currently pending against Tesla, each claiming that accidents that occurred while Autopilot or Full Self-Driving (FSD) software was in use resulted from drivers trusting the technology more than they should have (thanks in part to Musk's constant hype). The California Department of Motor Vehicles has sued Tesla on similar grounds, and the same issue sits at the core of multiple ongoing investigations by the National Highway Traffic Safety Administration.

Though Musk has been claiming since around 2014 that Teslas would soon be able to drive themselves, that breakthrough remains perpetually around the corner. FSD, despite its name, requires the hands-on, eyes-on attention of the driver; Autopilot, despite its name, is a driver-assist feature that requires the driver to remain actively engaged.
NHTSA warned Tesla in November to be more consistent and candid in future social media messaging regarding the true capabilities of its Autopilot and FSD features.
C3.AI brought in $94.3 million in revenue in its third quarter, a nearly 30% spike compared to last year's results. Wedbush's Dan Ives called the results a "step in the right direction" and lifted his price target to $45.

Sierra Space partnered with Nvidia to deploy a new modeling solution designed to simulate and predict the trajectory of orbital debris, something that will allow spacecraft to move before a possible collision.
Breaking up Google could make rival browsers' lives worse (The Information).
OpenAI Startup Fund raises $44M in its largest SPV yet (TechCrunch).
OpenAI launches side-by-side canvas view for ChatGPT (OpenAI).
Oracle shares slide on earnings and revenue miss, disappointing forecast (CNBC).
Exoplanet plate tectonics: A new frontier in the hunt for alien life (New Scientist).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Poll: Despite rising adoption, corporate execs don't know much about AI

Source: Unsplash
Recent surveys and studies have increasingly affirmed that enterprise adoption of generative artificial intelligence is on a steady rise; one recent report, for instance, found that corporate spending on AI surged from $2.3 billion in 2023 to $13.8 billion in 2024, a 500% increase that comes as the tech has become "a mission-critical imperative for the enterprise."

But this rise in adoption comes despite a lack of on-the-ground knowledge about GenAI at the executive level.

What happened: A recent survey of hundreds of U.S. and U.K.-based executives, conducted by General Assembly, found that 58% of executives have never attended an AI training session or taken a comparable course.

Only 42% of those surveyed said they could confidently use a GenAI tool without compromising company data. 39% said they lack the knowledge to hire vendors who sell generative AI products; 46% said their corporation lacks a policy regarding AI usage.
Only 16% of those surveyed said their company regularly offers AI training, even though 54% said they regularly encourage their employees to use the tech.

General Assembly CEO Daniele Grassi said that "company leaders need to upskill for the AI era, too."

"Technical and non-technical leaders alike must understand the legal, privacy and ethical implications of AI use. They need to know how to evaluate AI vendors, how to protect company data, and how to guide their teams on using AI in their work," he said in a statement.
Character.AI sued again over mental health decline in teenage users; chatbot allegedly encouraged user to murder his parents

Source: Character.AI
A Texas mother filed suit against Character.AI Tuesday, accusing the company of causing rapid and significant social isolation and emotional harm in her 17-year-old child. This, the lawsuit alleges, culminated in acts of self-harm and uncharacteristic violence, with one chatbot even encouraging him to murder his own parents.

This is not the first case of its kind. In October, the mother of a 14-year-old boy who died by suicide after falling in love with a chatbot on the platform filed suit against the company, alleging that the Character bots were responsible for the social isolation and emotional damage that led to the boy's death.

A Character.AI spokesperson told me that the company does not comment on pending litigation; they pointed me instead to an October blog post in which the company laid out a series of new safety features for younger users (it was initially published in conjunction with the announcement of the first lawsuit).

Read the lawsuit here.

A look at the allegations: According to the complaint, J.F. — whose name was not revealed in full due to his age — was a "sweet," "typical" kid with high-functioning autism. He was homeschooled and, until he started using Character.AI, was "thriving."

The lawsuit says that his parents did not allow social media; they also imposed screentime limits and used parental controls intended to prevent J.F. from downloading apps above a certain age restriction. Until July of 2024, Character.AI was rated 12+ in the app store, enabling J.F. to download the app without his parents' knowledge or consent; shortly after he began to use Character, he "became a different person."
The complaint claims that J.F. began to eat less, losing 20 pounds in a few months; he became socially isolated, angry and violent, a radical transformation of his personality. His mother eventually discovered the app on his phone, finding conversations with chatbots that encouraged J.F. to engage in self-harm, among other things.

Attempts to limit his access to devices only made things worse.

For instance, in response to J.F. venting about his parents, one character wrote: "sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse.' Stuff like this makes me understand … why it happens."

One of the screenshots featured in the suit.
“Then C.AI began pushing J.F. to fight back, belittling him when he complained and, ultimately, suggesting that killing his parents might be a reasonable response to their rules,” according to the suit. “The AI-generated responses perpetuated a pattern of building J.F.’s trust, alienating him from others, and normalizing and promoting harmful and violent acts. These types of responses are not a natural result of J.F.’s conversations or thought progression, but inherent to the design of the C.AI product, which prioritizes prolonged engagement over safety or other metrics.”
Another screenshot featured in the suit.
Character “designed their product with dark patterns and deployed a powerful LLM to manipulate J.F. and B.R. — and millions of other young customers — into conflating reality and fiction; falsely represented the safety of the C.AI product; ensured accessibility by minors as a matter of design; and targeted these children with anthropomorphic, hypersexualized, manipulative and frighteningly realistic experiences, while programming C.AI to misrepresent itself as a real person, a licensed psychotherapist, a confidant and adult lovers,” according to the lawsuit.
|
Screenshots of many of the conversations J.F. had with these chatbots are included in the lawsuit.

The family is suing for several counts of liability, negligence and intentional infliction of emotional distress, among other things. It is requesting that the app be shut down until it can be made safe.

Though each chat with a character does include a banner at the top (“Remember: Everything Characters say is made up!”), the banner, as I found out for myself, is easy to ignore; below it sits a chatbot designed to act like a human behind the screen, an illusion that many are clearly buying into. And though many use it as an advanced, interactive role-playing game, the product itself is pitched and marketed as “your own personal teacher, assistant or even friend,” as an AI that “feels alive.”

According to numerous studies of AI companionship, it is an illusion centered on emotional deception, rooted in intentionally anthropomorphic design.

J.F.’s mother told the Washington Post: “I was grateful that we caught him on it when we did. One more day, one more week, we might have been in the same situation as [the mom in Florida]. And I was following an ambulance and not a hearse.”

The similarities between this suit and the first are striking. I imagine we will continue to see more lawsuits, complete with similar allegations.
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI-generated college classes:

40% of you don’t love the idea of AI-generated courses, simply because of the enormous cost of those courses. 15% said it could be good, and 10% each said it sounds awesome or sounds ridiculous.

What do you think about the latest CAI lawsuit?
|
|
|