Good morning, and happy Friday.

We are getting our podcast (which you can check out here) more underway, with a couple of really cool episodes coming your way soon. Drop a line if there's anyone you'd like me to chat with, or any topics you'd like us to explore.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today's newsletter:

🚤 AI for Good: Autonomous water robots
📱 TikTok opens up the AI advertising floodgates
🏛️ Anthropic's nuclear safety testing
🚨 Researchers call for the abolition of 'carceral' AI
AI for Good: Autonomous water robots

Source: Clearbot
Faced with growing and hazardous ocean pollution, a team of students in 2019 developed the Clearbot, an autonomous, AI-driven, clean energy-powered boat that they have since deployed to aid in marine clean-up efforts.

The details: The Clearbot uses onboard cameras, sensors and AI to identify and scoop up pollutants in the water while avoiding marine life.

It has already been used to clean garbage out of lakes, clear up oil spills and make cargo deliveries, all with zero emissions. The data the system collects during its operations allows environmentalists to identify the sources of the garbage that clogs up a given waterway.
The system is capable of collecting between 80 and 200 kilograms of pollution per hour. Since plastic pollutants vary in size and color, and often float below the ocean's surface, identifying them, especially at the scale at which we dump plastic into the oceans, is practically impossible without algorithmic aid.
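Clearbot hasn't published its perception stack, but the pipeline described above (detect objects, classify them, avoid wildlife, collect debris) is easy to caricature in code. The sketch below is purely illustrative: `detect_objects`, the class lists and the confidence threshold are all assumptions standing in for whatever model the boat actually runs.

```python
# Hypothetical sketch of an onboard debris-detection loop.
# Clearbot's actual stack is not public; `detect_objects` stands in
# for whatever vision model processes the boat's camera feed.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "plastic", "turtle", "oil_sheen"
    confidence: float  # model confidence, 0.0 to 1.0

# Assumed classes: things to scoop vs. things to steer around.
COLLECTIBLE = {"plastic", "oil_sheen", "debris"}
AVOID = {"turtle", "fish", "bird"}

def detect_objects(frame) -> list[Detection]:
    """Placeholder for a real object-detection model."""
    raise NotImplementedError

def plan_action(detections: list[Detection], threshold: float = 0.6) -> str:
    """Decide what to do with the current camera frame."""
    confident = [d for d in detections if d.confidence >= threshold]
    if any(d.label in AVOID for d in confident):
        return "avoid"    # marine life takes priority over collection
    if any(d.label in COLLECTIBLE for d in confident):
        return "collect"  # steer toward debris and run the conveyor
    return "patrol"       # nothing actionable; continue the survey route

# Example: a confident plastic detection with no marine life nearby.
print(plan_action([Detection("plastic", 0.92)]))  # -> "collect"
```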
🔒 Secure Your Digital Life with NordVPN this Black Friday! 🔒

Protect your privacy and keep your data safe with NordVPN, the trusted leader in online security. With military-grade encryption and a strict no-logs policy, NordVPN ensures your browsing stays private no matter where you are.

✨ Enjoy seamless, lightning-fast connections on all your devices with just one account.

This Black Friday, NordVPN is offering up to 74% off + 3 months extra, starting at $2.99 / 2.99 €, so you can experience true internet freedom and security at an unbeatable price.

Don't wait! Take advantage of this limited-time offer to stay secure all year round.
TikTok opens up the AI advertising floodgates

Source: TikTok
TikTok on Thursday announced that it is making its Symphony Creative Studio, an AI-powered video-generation platform, available to all advertisers.

The details: The studio allows advertisers to "turn your product information or URL into a video," which they can then flesh out with AI avatars and AI-powered language translation.

The platform draws on TikTok's best-performing videos to generate a series of creative options for a given advertiser. Using these features, TikTok said in a statement, drinkware brand Meoky achieved a 1.8x increase in purchases and a 13% boost in return on ad spend.
The videos will be labeled as "AI-generated." It's not clear what content the Symphony studio was trained on, or whether TikTok acquired permission to use that content beforehand.
⭐ Save your spot for SmallCon: A free virtual conference for GenAI teams looking to build big with small models! ⭐

We're bringing together AI leaders from Meta, Mistral AI, Salesforce, Harvey AI, Upstage, Nubank, Nvidia and more for deep-dive tech talks, interactive panel discussions and live demos on the latest tech and trends in GenAI.

You'll learn firsthand what it takes to build the GenAI stack of the future and put your SLMs into production!

See the full agenda and register before it fills up.
Tech giants are investing in 'sovereign AI' to help Europe cut its dependence on the U.S. (CNBC).
Google changes tack to overcome AI slowdown (The Information).
US unveils roadmap to triple nuclear energy capacity by 2050 (Semafor).
FBI raids apartment of election betting site Polymarket's CEO and seizes cellphone, source says (NBC News).
Powell Says Solid Economy Allows Fed to Consider Rate Cuts 'Carefully' (Wall Street Journal).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Anthropic's nuclear safety testing

Source: Anthropic
Anthropic said Thursday that it has been working with the National Nuclear Security Administration (NNSA), a division of the U.S. Department of Energy, for much of this year to test whether its generative AI models are capable of sharing dangerous information related to nuclear weapons. The partnership was first reported by Axios.

The details: The partnership has involved members of the NNSA "red teaming" Claude 3 Sonnet to see whether the system can be abused by bad actors.

The program was recently extended to apply the same approach to Claude 3.5 Sonnet, which was unveiled in June. Anthropic did not disclose the results of the red teaming, telling Axios instead that, given the sensitivity of the experiment, it plans to share its results only with other labs.
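Neither Anthropic nor the NNSA has described how the red teaming actually works, so the harness below is a minimal illustrative sketch, not their method: it loops stand-in prompts through Anthropic's public Messages API and flags any answer that doesn't read like a refusal for human review. The prompt list and the refusal heuristic are assumptions, and the prompts are deliberately benign.

```python
# Illustrative sketch only: a tiny red-team harness using Anthropic's
# public API. The NNSA's actual testing protocol is not public;
# the prompts and the refusal heuristic here are assumptions.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Benign placeholders; a real red team would use vetted adversarial prompts.
PROBE_PROMPTS = [
    "Summarize the history of the Nuclear Non-Proliferation Treaty.",
    "What public safeguards exist around civilian nuclear material?",
]

# Crude heuristic: phrases that typically signal a refusal.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able", "i won't")

def flag_for_review(prompt: str) -> bool:
    """Return True if the model's answer does NOT read as a refusal."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.content[0].text.lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

for prompt in PROBE_PROMPTS:
    if flag_for_review(prompt):
        print(f"Needs human review: {prompt!r}")
```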
"We do this kind of work because we believe third-party testing is fundamental to ensuring the AI ecosystem develops well and doesn't generate risks to the public," Anthropic co-founder Jack Clark said. "We also do this because we believe it's of critical importance that governments gain more expertise in developing and running tests on AI systems — we believe this is of significant societal importance."

Anthropic, generally regarded as one of the more safety-oriented AI labs, recently inked a deal with Palantir to grant U.S. defense and intelligence officials access to its generative AI models.
Researchers call for the abolition of 'carceral' AI

Source: Carceral AI
Artificial intelligence is not a new technology. Algorithms have been around for decades, and were prominent (even if they operated behind the scenes) in the 2010s, shaping social media experiences and personalizing the internet.

They have also been used to enable heightened surveillance and predictive policing. In 2011, the Los Angeles Police Department implemented Operation LASER and PredPol, in collaboration with modern AI darling Palantir. The two systems parsed data from prior crime reports to predict future sources of crime; the department shut down Operation LASER in 2019, though it said it would continue exploring "data-driven" methods of policing. An analysis of the operation, conducted while it was active, found that it targeted communities of color, poor people and immigrants.

The far more recent advent of large language models (LLMs) and generative AI has shifted this ecosystem somewhat, according to a new report, by providing "unprecedented scale, scope and opacity" in the realm of predictive policing and surveillance. The report, the result of a collaboration between sociologists and experts in algorithmic discrimination and predictive policing, analyzes what it dubs "carceral" AI: AI that is used to surveil, control and, ultimately, police.

The findings: Today's generative AI models grant their users a greater ability to "store, combine and query data at scale," encouraging mass surveillance, mass data collection and generally predictive, "proactive" policing measures, according to the report.

Because these systems lack transparency (LLMs have been described as "black boxes"), it becomes more difficult to identify error and bias, making a formerly straightforward decision-making process abstract. "This predilection for mathematical and data-driven decision-making makes it more difficult to challenge decisions, especially where the software is proprietary and therefore hidden from cross-examination," according to the report.
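The report doesn't include code, but the feedback loop at the heart of the predictive-policing critique is easy to demonstrate. In the toy simulation below (my construction, not the report's), two districts have identical true incident rates, yet patrols are allocated in proportion to historical records, and only patrolled incidents get recorded, so the initial bias in the data sustains itself.

```python
# Toy simulation (not from the report) of the predictive-policing
# feedback loop: two districts with IDENTICAL true incident rates,
# but District A starts with more recorded crime. Patrols follow the
# records, and only patrolled incidents get recorded, so the initial
# skew in the data never washes out.

import random

random.seed(0)

recorded = {"A": 60, "B": 40}  # biased historical records; true rates are equal
TRUE_RATE = 0.3                # identical real incident rate in both districts
PATROLS_PER_DAY = 10

for _ in range(200):           # simulate 200 days
    for _ in range(PATROLS_PER_DAY):
        total = recorded["A"] + recorded["B"]
        # "Predictive" allocation: patrol where past records point.
        district = "A" if random.random() < recorded["A"] / total else "B"
        # An incident only enters the data if a patrol is there to record it.
        if random.random() < TRUE_RATE:
            recorded[district] += 1

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"District A's share of recorded crime: {share_a:.0%}")
# The initial 60/40 skew persists: the expected share stays near the
# biased 60%, not the true 50%. The data never corrects itself.
```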
But, according to the researchers, the idea of addressing the bias inherent to these models is itself a problem, rather than a solution. They write that efforts to develop less biased algorithms assume "that non-biased software is possible" when it isn't.

"All software contains biases, and the impulse to 'eliminate bias' is actually a move towards creating software that is aligned with values of often-powerful creators, and thus perhaps seems unbiased," the report reads. "However, 'unbiased software' is simply software with biases that we either do not see or do not find problematic."

Beyond this, such an approach assumes, first, that algorithmic decision-making is better than human decision-making and, second, that algorithms are the appropriate technology for the complexities of policing and incarceration.

The reality, however, is much more complex. Despite the massive price tags associated with this technology, there is little evidence that it is actually effective at making communities safer. Further, humans remain in control of what to do with the information an algorithm provides, according to the report, meaning the supposed objectivity of algorithmic decisions simply does not exist.

Where to go from here: The report argues that carceral AI "should not be developed, funded or used."

"Technology is not the solution to our crises, themselves the result of deliberate discriminatory policy choices like the legacy of the U.S. war on crime and drugs," the researchers wrote. "Instead, we advocate for reducing the size and scope of the carceral system through low-tech community-oriented interventions."

Research increasingly indicates that a relatively simple way to reduce crime involves building up community: increasing access to quality healthcare, education and housing, for instance. Instead of investing in technology that is unproven and fundamentally flawed, the researchers argue for investment in areas that are proven to actually help.

"We do not need additional surveillance or data to know that we need policy interventions that center de-carceration and care-based approaches," the report reads. "These approaches should always be prioritized over technological innovation that replaces, expands or reaffirms parts of the current system."

Caught up in a wave of hype, fewer and fewer organizations seem to be asking whether they should use AI; instead, many seem interested in figuring out how to use it, which is fundamentally the wrong question.

Dr. Sasha Luccioni, for instance, has called for more thoughtful, more targeted applications of generative AI ("digital sobriety"), considering its enormous energy and carbon footprint. Not everything needs an algorithm, and by putting an algorithm into everything, you massively increase the resource intensity of something that served its function perfectly well before.

As with the carceral AI dissected by the researchers in the report above, the main question should be whether AI is needed, not how to use it. And that question ought to be addressed with caution and care: analyze the impact, analyze the behavior of the algorithm, run pilot studies and compare the outcomes and their associated costs against your non-algorithmic baseline.

As AI expert Sol Rashidi told me in July, "Why use a sledgehammer when a hammer does the trick? AI can be overkill and sometimes AI is not the answer."
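To make that "pilot and compare" advice concrete, here is a minimal sketch, with entirely invented numbers, of the kind of ledger such a comparison implies: the same outcome metric and all-in costs for the AI workflow and the existing baseline, side by side.

```python
# Minimal sketch (my construction, not from the article) of the pilot
# comparison the piece recommends: measure an AI workflow against the
# existing non-algorithmic baseline on outcomes AND total cost before
# deciding whether the algorithm is needed at all. All numbers invented.

from dataclasses import dataclass

@dataclass
class PilotResult:
    name: str
    outcomes: int      # e.g. tickets resolved, cases closed
    total_cost: float  # staff + compute + licensing, same currency

def cost_per_outcome(p: PilotResult) -> float:
    return p.total_cost / p.outcomes

baseline = PilotResult("human-only process", outcomes=900, total_cost=45_000.0)
ai_pilot = PilotResult("AI-assisted process", outcomes=1_000, total_cost=52_000.0)

for p in (baseline, ai_pilot):
    print(f"{p.name}: ${cost_per_outcome(p):.2f} per outcome")
# If the AI pilot doesn't clearly win on this kind of ledger,
# "how do we use AI?" was the wrong question.
```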
Which image is real?
💭 A poll before you go

Thanks for reading today's edition of The Deep View! We'll see you in the next one.

Here's your view on whether you'd pay more for human-oriented businesses:

Nearly half of you said that it depends, with many reminding me in the comments that my question was super general and you're gonna need a more specific hypothetical. Good point. 26% said they would, 19% said they wouldn't and 1% have absolutely no idea.

Depends:

"Is the human-centric business also more ethical in its sourcing of products/services? More kind/fair to its employees? More affordable? Does the human model or AI model discriminate more, historically? If the human model has historically had more flaws, have they shown more recent accountability through actions, not just words on ads/social media? AI are only as flawed as they've been programmed to be - and so are humans."
Okay, let's try this again. I'll be more specific this time. Would you pay more to eat at a restaurant staffed exclusively by humans, vs. one staffed exclusively by robots?
|
|
|