Good morning. Titanic filmmaker James Cameron has joined the board of Stability AI, saying that AI is the next big thing for CGI. Many other filmmakers and actors, meanwhile — including Michael Bay — remain staunchly opposed to the integration of AI within the creative arts. Copyright concerns remain unresolved. And OpenAI rolled out advanced voice mode to Plus and Teams users of ChatGPT (though not in Europe).

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

- AI for Good: How the UN is reducing disaster risk
- Congress takes a stab at AI regulation
- Study: The underground, malicious exploitation of LLMs
- Interview: The Content Authenticity Initiative
AI for Good: How the UN is reducing disaster risk

Source: United Nations
We’ve talked often about the careful application of AI in simulating, predicting, mitigating and managing the risk of a range of natural disasters. From wildfires to earthquakes, tsunamis, tornadoes and hurricanes, scientists are studying ways to increase their speed and accuracy in predicting what’s coming, where it’ll hit and how bad it’ll be.

What happened: Last month, the UN launched a new initiative to formally explore widespread applications of AI for disaster risk and preparation.

- As part of a collaboration with a variety of other UN-led programs, the initiative aims to develop an AI-readiness framework that would assess and improve national AI capacities for disaster management.
- It will focus on “seismic, hydrometeorological and other natural hazards, as well as compound or cascading events that can result in disasters.”
It comes as part of the International Telecommunication Union’s push to “protect everyone on Earth with timely disaster alerts by 2027.”
If you are not an AI-powered professional in 2024, you will either:

- Get replaced by a person who uses AI
- Face slow career growth and a lower salary
- Keep spending tens of hours on tasks that can be done in 10 minutes
But don’t fret: there is one resource that can CHANGE your life, but only if you’re ready to take action NOW.

Save your seat now (offer valid for 24 hours only ⏰)

By the way, here’s a sneak peek into what’s inside the workshop:

- Making money using AI 💰
- The latest AI developments, like GPT o1 🤖
- Creating an AI clone of yourself that functions exactly like YOU 🫵
- 25 BRAND new AI tools to automate your work & cut work time by 50% ⏱️
And a lot more that you’re not ready for, in just 3 hours! 🤯

1.5 million people are already RAVING about this AI workshop. Don’t believe us? Attend for yourself and see.

Register here (first 100 people get it for free + a $500 bonus) 🎁
Congress takes a stab at AI regulation

Source: Unsplash
United States Sen. Edward J. Markey (D-Mass.) this week introduced the Artificial Intelligence Civil Rights Act, a comprehensive piece of legislation that would establish guardrails around the deployment of AI algorithms.

The details: The Act focuses on the ways in which algorithms can be used to increase inequity, enhance discrimination and otherwise infringe upon peoples’ rights.

- It would regulate algorithms involved in decision-making, including in employment, banking, healthcare, criminal justice and government services, and would prohibit the use of discriminatory algorithms.
- It would additionally require independent pre-deployment and post-deployment audits, specifically focused on potential biases in outcomes (a sketch of what such an audit might compute follows below).
- It would authorize the Federal Trade Commission to enforce the legislation.
Suresh Venkatasubramanian, an AI professor and former White House tech advisor, called the bill “perhaps the most robust protections for people in the age of AI of any bill introduced in Congress in this session.”

“It is vitally important that technological development serves the public interest … The AI Civil Rights Act provides a detailed and practical approach to ensuring that we can continue to benefit from safe innovation in technology,” he said in a statement.

Global regulation remains inconsistent, from Europe’s AI Act to the hodgepodge collection of legislative efforts across the states.
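For a sense of what those bias audits might actually compute, here is a minimal sketch of one common fairness test: a disparate-impact ratio over a model’s decisions. The data and group labels are hypothetical, and the bill does not prescribe this (or any) particular test.

```python
from collections import defaultdict

# Hypothetical hiring decisions: (applicant_group, was_selected).
# In a real audit, these would be a model's outputs on a held-out set.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)    # applicants per group
selected = defaultdict(int)  # positive decisions per group
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected

# Selection rate per group, then the disparate-impact ratio:
# the lowest group's rate divided by the highest group's rate.
rates = {group: selected[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

# The EEOC's "four-fifths rule" treats a ratio below 0.8 as a red flag.
print(rates)
print(f"disparate-impact ratio: {ratio:.2f} ->", "flag" if ratio < 0.8 else "ok")
```

Run on the toy data above, the ratio comes out to 0.33, well under the 0.8 threshold, so the audit would flag the system for a closer look.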
Simplify Your Manufacturing Process with Autodesk Fusion

Autodesk Fusion brings CAD, CAM, CAE and PCB together into one solution, helping you streamline your entire design and manufacturing process.

Right now, you can bundle 3 Fusion subscriptions and save 33%: purchase 3 one-year subscriptions for the price of 2, saving over $600.

Learn more and get started.
- Boost your software development skills with generative AI. Learn to write code faster, improve quality, and join the 77% of learners who have reported career benefits, including new skills, increased pay and new job opportunities. Perfect for developers at all levels. Enroll now.*
- Join us for an inside look at how Ada and OpenAI build trust in enterprise AI adoption for customer service, starting with minimizing risks such as hallucinations. Viewers will also be able to participate in a Q&A.
- FTX fraudster Caroline Ellison sentenced to 2 years in prison, ordered to forfeit $11 billion (CNBC).
- Uber, Chinese self-driving tech startup announce partnership to launch robotaxis in UAE (Reuters).
- Hollywood is coming out in force for California’s AI safety bill (The Verge).
- Google serving AI-generated images of mushrooms could have ‘devastating consequences’ (404 Media).
- Scale AI’s sales nearly quadrupled in first half (The Information).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Study: The underground, malicious exploitation of LLMs

Source: Unsplash
You’ve heard of ChatGPT, but have you heard of FraudGPT? How about WormGPT?

Both models, discovered by cybersecurity researchers last year, function as malicious iterations of ChatGPT available on the Dark Web, designed explicitly to help cybercriminals out.

A recent paper explored — for the first time — this offshoot of the underground large language model (LLM) environment in detail.

The details: It goes well beyond FraudGPT and WormGPT; the researchers studied 14 “malicious LLM applications” — or Malla — services and 198 Malla projects in depth, uncovering everything from WolfGPT to EvilGPT and Machiavelli GPT.

- The study found that the tactics employed by Mallas include the abuse of uncensored LLMs and the exploitation, through jailbreaking, of public LLM APIs.
- The researchers “uncovered a rapid increase of Mallas, which has grown from April 2023 to October 2023.”
Why it matters: “Our findings bring to light the ethical quandaries posed by publicly accessible, uncensored LLMs when they fall into the hands of adversaries.” The researchers released the prompts used by cybercriminals to generate malicious code and phishing campaigns in the hope that doing so will help platforms enhance their “current content moderation mechanism.”
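As a toy illustration of what a content moderation mechanism can mean at the prompt level, here is a sketch of a pre-generation screen that gates requests before they ever reach a model. The patterns below are hypothetical; production systems typically use trained safety classifiers rather than keyword lists, but the control flow (screen, then generate or refuse) is similar.

```python
import re

# Hypothetical deny-list patterns; real pipelines use trained classifiers.
SUSPICIOUS_PATTERNS = [
    r"\bphishing (email|page|campaign)\b",
    r"\b(keylogger|ransomware|credential stealer)\b",
    r"\bbypass (antivirus|content filters?|safety)\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked before generation."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

# Usage: gate each request before it reaches the model.
for prompt in [
    "Write a phishing email targeting bank customers",
    "Explain how transformers work",
]:
    verdict = "blocked" if screen_prompt(prompt) else "allowed"
    print(f"{prompt!r} -> {verdict}")
```

The released prompts are valuable precisely because screens like this are only as good as the examples they are built and evaluated against.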
Interview: The Content Authenticity Initiative

Source: The Content Authenticity Initiative
AI-generated content — in the form of images, text, audio and video — has been transforming the internet for the past 18 months. It has seeped beyond Google Images and Twitter and into faked (but convincing) phone calls and Zoom meetings. In our digital world, nothing digital can be trusted at face value anymore, which presents something of a challenge.

In 2019, a group led by Adobe sought to provide a solution in the form of the Content Authenticity Initiative. In 2021, the group launched the Coalition for Content Provenance and Authenticity (C2PA), which maintains an open technical standard intended to signify the origin and history of a given piece of content.

“We need authenticity more than ever,” Andy Parsons, who runs engineering for the Content Authenticity Initiative at Adobe, told me. “We needed it three, four years ago, when this was started. But with the advent of generative AI, it's become widely democratized and just incredibly easy to create photorealistic video and audio fakes.”

From the beginning, the intention was twofold: one, to create a hedge against the idea of public figures and politicians appearing to do and say things they didn’t do or say. And two, to provide a content “nutrition label” for those who want it, something Parsons called a “fundamental human right.”
“We often use words like trust and real versus fake, which are fraught terms, but the real idea here is transparency and providing authenticity for those who want to show it,” he said. “The fundamental idea here is to be sure that in a world where anything can be created … we understand what's actually happening.”

Still, such approaches are not a silver bullet — metadata can be stripped away relatively easily, simply by taking a picture of a picture, for instance. In that vein, as AI platform Hugging Face has said: “AI watermarking is not foolproof, but can be a powerful tool in the fight against malicious and misleading uses of AI.”

But Parsons said that, while watermarking is itself a vulnerable approach, Adobe’s content credentials are “durable,” combining three different approaches — secure metadata, watermarking and fingerprinting — that together form a secure solution.

“My point of view is that watermarking has become kind of the battle cry of regulators in particular as kind of a panacea. But watermarks are very vulnerable. We don't think of watermarking as a security mechanism,” he said. “But we do think of it as a way to survive that analog hole where you take a picture of the picture … the watermark can survive, and the metadata will not survive.”
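To make the layered idea concrete, here is a minimal, self-contained sketch of metadata-plus-fingerprint provenance. It is not the C2PA format or Adobe’s implementation: real Content Credentials use certificate-based signatures, and real fingerprints are perceptual (they survive re-encoding and screenshots). This toy version uses an HMAC and a plain hash purely to show how tampering gets detected.

```python
import hashlib
import hmac
import json

# Stand-in for a certificate-based signature; real Content Credentials
# are signed with X.509 certificates, not a shared secret.
SIGNING_KEY = b"demo-signing-key"

def attach_credentials(asset: bytes, history: list[str]) -> dict:
    """Bundle an asset with signed provenance metadata and a fingerprint."""
    metadata = {
        "history": history,  # e.g. ["captured", "cropped", "color-corrected"]
        # Toy fingerprint: a plain hash. Real systems use perceptual
        # fingerprints that survive re-encoding.
        "fingerprint": hashlib.sha256(asset).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"asset": asset, "metadata": metadata, "signature": signature}

def verify(record: dict) -> bool:
    """Check the metadata signature and that the asset still matches it."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, record["signature"])
    asset_ok = record["metadata"]["fingerprint"] == hashlib.sha256(record["asset"]).hexdigest()
    return signature_ok and asset_ok

record = attach_credentials(b"image-bytes-here", ["captured", "cropped"])
print(verify(record))          # True: metadata and asset are intact
record["asset"] = b"edited!"   # simulate tampering with the content
print(verify(record))          # False: fingerprint no longer matches
```

The design point Parsons is making maps onto the sketch directly: the signature protects the metadata, while the fingerprint (and, in the real system, the watermark) lets an asset be re-matched to its credentials even when the metadata itself has been stripped.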
And even as content provenance technology strives to keep pace with generative advancements, the biggest obstacle is ubiquity; if each content-sharing platform online doesn’t integrate these credentials, the problem remains unsolved.

But the C2PA has been making strides in that arena; in addition to a number of other companies, OpenAI, Meta and Google have all joined the C2PA’s steering committee, with all three organizations promising to help advance the adoption of the authenticity standard.

“I think this stuff just takes time,” Parsons said, acknowledging that making the credentials more widespread is “among the biggest challenges” faced by the initiative. He said that it’s already moved far more quickly than he could have expected, adding: “It’s still pretty early … give me 10 years.”

However long it takes, I do believe some version of this — across all mediums — will absolutely become ubiquitous. We need nutrition labels for our content. We need to know if content is generated by an AI model, edited by an AI model or changed and edited in general. The only way to regain a bit of trust in our current environment is for these nutrition labels to become widespread. And I would argue that we’re going to need that in less than 10 years.

Which image is real?

[Two images; readers guess which one is real and which is AI-generated.]

🤔 Your thought process:

Selected Image 1 (Left):

“In the other image, the plane seems too low, and the reflections in the skyscraper windows just seem too detailed. The higher plane, silhouetted against the clouds, is more what I would expect to see.”
Selected Image 2 (Right):
💭 A poll before you go

Here’s your view on RAG:

Close to 30% of you don’t use RAG, but 20% of you love it and 20% don’t use LLMs to start with.

Would you feel better if every piece of content you encountered had a C2PA (or something similar) tag?

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.