Good morning. Yesterday, Amazon pushed beyond a $2 trillion market cap for the first time ever, making it the fifth most valuable company in the world.

To help visualize just how much a trillion is: One million seconds is about 11.6 days. One billion seconds is about 31.7 years. One trillion seconds is about 31,710 years.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

AI for Good: Advanced lightning detection
OpenAI delays rollout of controversial ‘voice mode’
Hollywood union scores agreement on AI with studios
Waymo’s winding road to a self-driving victory
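For scale, those seconds-to-time conversions are easy to check yourself; here is a quick back-of-the-envelope script (assuming 365-day years, no leap days):

```python
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400 seconds in a day
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365  # ignoring leap years

print(1_000_000 / SECONDS_PER_DAY)             # ~11.6 days
print(1_000_000_000 / SECONDS_PER_YEAR)        # ~31.7 years
print(1_000_000_000_000 / SECONDS_PER_YEAR)    # ~31,710 years
```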
AI for Good: Advanced lightning detection

Source: NASA
About a year ago, 35,000 people were denied entry to the Rock the South music festival due to an imminent threat of lightning. NASA’s Short-Term Prediction Research and Transition Center’s (SPoRT) AI tool was behind the decision.

The details: The machine learning tool was trained on large amounts of data from previous weather events to identify the patterns and trends that lead to lightning strikes (a simplified sketch of this kind of setup appears below).

It can predict where a lightning strike will occur roughly 15 minutes in advance. The tech was integrated into a free tool on SPoRT’s website; while it is currently only active in a few locations, NASA is working to expand its reach.
The tool also keeps track of lightning strikes in a given area over time, enabling people to make more informed decisions related to summertime outdoor safety.

Why it matters: “By providing more lead time to predict lightning, we can give the public more time to seek shelter, and be out of harm’s way of lightning,” Marshall Space Flight Center’s lightning research scientist Dr. Christopher Schultz said.
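As noted above, here is a purely illustrative sketch of how this kind of lightning “nowcasting” problem can be framed as supervised learning. It is not NASA SPoRT’s actual model; the features, labels and numbers are all invented for the example:

```python
# Purely illustrative: not NASA SPoRT's model, features, or data.
# Framing: for a given location, take current atmospheric measurements and
# predict whether lightning will occur there within the next 15 minutes,
# learning from labeled historical events.
from sklearn.linear_model import LogisticRegression

# Hypothetical feature rows: [cloud_top_temp_K, radar_reflectivity_dBZ, humidity_pct]
X_history = [
    [210.0, 45.0, 82.0],  # storm-like conditions
    [255.0, 10.0, 40.0],  # calm conditions
    [215.0, 50.0, 88.0],
    [260.0,  5.0, 35.0],
]
y_history = [1, 0, 1, 0]  # 1 = lightning occurred within the next 15 minutes

model = LogisticRegression().fit(X_history, y_history)

current_conditions = [[212.0, 48.0, 85.0]]
probability = model.predict_proba(current_conditions)[0][1]
print(f"Estimated chance of lightning in the next 15 minutes: {probability:.0%}")
```

In practice a system like SPoRT’s would presumably draw on far richer satellite and ground-sensor data and a more capable model, but the framing is the same idea the story describes: learn from labeled past events, then score current conditions.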
Optimize Your Team with No Code AI

Interested in learning how to bring AI into your workflow? The MindStudio team just expanded its free course - you'll learn how to:

Build out complete apps with AI-generated prompts and custom automations
Add and query your own data sources
Share AI workflows and invite your teammates to an organization
Integrate your AI application workflows with your current tools, like Slack and Mailchimp
Whether you're a seasoned AI enthusiast or a novice, this webinar will give you the skills to launch your own AI applications quickly.

Register for the No Code AI Building Webinar today.
OpenAI delays rollout of controversial ‘voice mode’

Source: Unsplash
Last month, OpenAI launched GPT-4o, a multi-modal evolution of GPT-4 that CEO Sam Altman called the “best model in the world.” A component of the demo was a new feature called “voice mode,” which would allow users to interact vocally with the model.

Voice Mode was supposed to begin its gradual rollout in June, but OpenAI said Tuesday that it is delaying the launch of the feature to next month instead.

The details: OpenAI said that it needs more time to reach its “high safety and reliability bar.”

Part of the improvements it still has to make to the system revolves around the model’s “ability to detect and refuse certain content.” “We’re also working on improving the user experience and preparing our infrastructure to scale to millions while maintaining real-time responses,” the company said.
OpenAI expects Voice Mode to roll out to Plus users sometime in the fall.

A cautious step around the backlash: The voice that was on display during the demo sounded remarkably similar to Scarlett Johansson, a connection that Altman emphasized with references to the movie Her, in which Johansson voiced an AI.

After the demo, Johansson released a statement saying that Altman had asked her to voice the model a year ago, and she refused. He then asked her again two days before the demo, and went ahead with it before receiving a response.
After a request from her lawyers to detail the process behind the voice’s creation, OpenAI pulled the voice, though it has maintained that the similarity is purely coincidental.
Use code IMPACTAITODAY for $200 off VIP Passes before the end of today*

Want to adopt GenAI for software development but don’t know where to start? This buyer's guide helps engineering leaders cut through the noise*

ID Verification Service for TikTok, Uber, X Exposed Driver Licenses (404 Media).
Reddit is fighting hard against AI data scraping (The Verge).
Voice actors sue AI company over voice cloning without permission (CBS).
Kathmandu is making taxi drivers switch to EVs. Not all drivers can afford one (Rest of World).
Hollywood union scores agreement on AI with studios

Source: Unsplash
The International Alliance of Theatrical Stage Employees (IATSE) on Wednesday announced a tentative agreement with film studios. A complete summary of the agreement will be released over the next few days.

The details: While the agreement in part includes pay increases for workers, it also includes new protections against artificial intelligence.

Zoom out: Concerns about AI were top of mind throughout the Hollywood union strikes and negotiations that have taken place over the past year.

The Writers Guild of America, after months of strikes, secured a number of AI protections last September. For instance, on covered projects, AI cannot be used to write or rewrite literary material, and companies cannot require writers to use genAI.
The actors’ union, SAG-AFTRA, additionally secured many AI protections. Several of these are centered around clear and informed consent (and compensation) when it comes to AI-generated synthetic clones of actors.
These protections come as AI companies seem keen to disrupt Hollywood, offering video-production tools that have been pitched as a new way of filmmaking. The content that was used to train those tools, meanwhile, remains unknown.

Attend the AI revolution in NYC

Discover how AI is transforming industries, upending work, and reshaping the business landscape at Imagine AI Live, an immersive one-day event in the heart of New York City.

At Imagine AI Live, you can:

Network with AI-first professionals and leaders from top AI companies, including Bindu Reddy (Abacus), Mark Heaps (Groq) and Jiquan Ngiam (Lutra AI)
Attend workshops on topics like AI Productivity, Agents, Automation, Creativity, and Leadership Strategy
Explore an AI solutions showcase with cutting-edge demos in our AI Exhibitor Gallery
Register today and use code IMPACTAITODAY for $200 off VIP Passes before the end of today.
Waymo’s winding road to a self-driving victory

Source: Waymo
The purveyors of artificial intelligence have promised a lot of innovation over the years, from advanced medical screenings to cures for a wide range of diseases, fixes for climate change and robotic butlers a la C-3PO.

One of the industry’s biggest promises, though, has been to take distracted human drivers out from behind the wheel and plant them firmly in the passenger seat, where they can do no harm. I am talking, of course, about self-driving cars.

But the past 15 years have proven that the challenge might be a little larger than people thought.

The self-driving journey: The self-driving industry lately has been a bit messy, hit with federal safety investigations (into Cruise, Tesla, Waymo and Zoox) and mired in setbacks.

This year, Waymo has twice recalled its software following accidents involving its self-driving cars.
Cruise halted its operations last year after one of its vehicles dragged a pedestrian around 20 feet. The self-driving unit only recently resumed (human-supervised) operations as part of its road to recovery.
Uber shuttered its self-driving effort in 2020, two years after one of its cars killed a pedestrian in Arizona.
Apple recently canceled its Apple Car project, which would have included self-driving features.
Tesla, meanwhile, has been unable to get its system to live up to its “Full Self-Driving” name; the system remains stuck at a Level Two designation, requiring the hands-on, eyes-on attention of the driver at all times. And those fleets of cross-country, fully autonomous robotaxis that Elon Musk promised all those years ago? Yeah, they’re nowhere in sight.

But Waymo, the autonomous driving company that was born within Google in 2009, has thus far been able to avoid any major, show-stopping collisions. The company last year reported, across 7.1 million autonomous miles, an 85% reduction in injury-causing accidents compared to human benchmarks.

Waymo said this week that its 300,000-strong San Francisco waitlist is no more; after years of gradual scaling, the company is opening up its driverless taxis to anyone who wants to ride. The move comes a few years after Waymo did the same thing in Phoenix, Arizona.

How it works: The Waymo One is powered by three different types of sensors: radar, lidar and cameras.

The cameras capture visual detail of the scene, while the lidar and radar function as extra backup layers, creating scans of the car’s surroundings to measure the size, distance and speed of surrounding objects. Waymo then employs AI and machine learning for object detection, prediction, and, of course, car operation.
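To make that layered-sensor idea concrete, here is a deliberately simplified sketch of fusing camera labels with lidar/radar range and speed measurements. Everything in it (the class names, the bearing-matching rule, the numbers) is hypothetical and far simpler than Waymo’s actual perception stack, which is not public:

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str          # e.g. "pedestrian", from a vision model
    bearing_deg: float  # direction of the object relative to the car

@dataclass
class RangeReturn:
    bearing_deg: float
    distance_m: float   # from lidar/radar scans
    speed_mps: float    # radial speed, something radar measures well

@dataclass
class FusedObject:
    label: str
    distance_m: float
    speed_mps: float

def fuse(camera: list[CameraDetection], ranges: list[RangeReturn],
         max_bearing_gap_deg: float = 5.0) -> list[FusedObject]:
    """Pair each camera detection with the nearest range return at a similar bearing."""
    fused = []
    for det in camera:
        candidates = [r for r in ranges
                      if abs(r.bearing_deg - det.bearing_deg) <= max_bearing_gap_deg]
        if candidates:
            nearest = min(candidates, key=lambda r: r.distance_m)
            fused.append(FusedObject(det.label, nearest.distance_m, nearest.speed_mps))
        else:
            # No lidar/radar return nearby: keep the object with unknown distance
            # so downstream planning can still treat it cautiously.
            fused.append(FusedObject(det.label, float("inf"), 0.0))
    return fused

if __name__ == "__main__":
    cams = [CameraDetection("pedestrian", bearing_deg=2.0)]
    rets = [RangeReturn(bearing_deg=1.5, distance_m=18.0, speed_mps=-1.2)]
    print(fuse(cams, rets))
```

The value of the layering shows up in the fallback branch: if the range sensors return nothing near a camera detection, the object is kept rather than silently dropped.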
This is different from Tesla, which only employs cameras and neural networks. Omer Keilaf, the CEO of Innoviz Technologies (which develops lidar tech), told me last year that lidar or something similar is vital as a backup to the camera system, which has been known to fail in certain conditions.

The problem with edge cases: The fuel that powers AI is data. The fuel for self-driving cars, then, is driving data. But the problem here is with the kind of random scenarios that are lacking in data because they haven’t happened (yet, or often enough). These are known as edge cases.

Just a few days ago, we reported on a new study that found that, though autonomous vehicles might be generally safer than humans, they are far more likely to get into accidents at dawn, dusk and during turns.

When we pay attention, humans are very good at reasoning. This has nothing to do with being ‘smart’ or ‘stupid’; if you have your phone locked up in your pocket and see a person start to walk into the street while you’re driving, you will stop the car.

The issue with self-driving cars is that artificial general intelligence doesn’t exist: as decently as Waymo has performed, its AI cannot reason. It instead needs to consume metric tons of data in order to respond to situations.

The problem with human drivers, meanwhile, is that many do not pay attention, a fact that likely makes self-driving cars an inevitability. It just might require a new approach.

As cognitive scientist Dr. Gary Marcus has said: To get reliable self-driving cars, “what we need is a much smarter AI, a kind that can reason, and not hallucinate, grounded in reality and not just corpus statistics — which require fundamental research of a sort that is currently getting starved out in the LLM euphoria.”
A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Your thoughts on generative AI in schools:

It’s a terrible idea:

“Working Master's student. Anything more than using e.g., BraveAI to help summarize searches, or AI for translation should not be used. Some people want AI to do almost everything for them. We need to be asking ourselves what it means to live as human beings.”
Could be good with oversight:

“Students need to be literate about AI and experiential learning is effective. There is no better way to learn about AI than to use it. Students need to learn with rigor how to prompt intelligently and how to use the tools at their disposal. As well, the more one works with the tools the more the distinct artifacts of AI done poorly become apparent.”
Would you take a ride in a robotaxi?
|
|
|