Good morning. ChatGPT — and then a few of its competitors — went down yesterday, and Twitter was flooded with people like this: “I’m worried work might notice I don’t know how to do my job.”

In today’s newsletter:

🤗 AI for good: NASA’s new weather forecasting model
📱 New York to take on social media algorithms
♟️ Current and former OpenAI employees say corporate governance isn’t enough
💻 Zoom CEO envisions a future of digital clones
AI for Good: NASA’s new weather forecasting model

Image Source: NASA
Last month, NASA — in collaboration with IBM Research — developed a new AI foundation model for weather and climate forecasting called the Prithvi-weather-climate model.

Trained on 40 years of weather and climate data, the model will enable better storm tracking, forecasting and historical analysis. The model “holds promise to advance our understanding of atmospheric dynamics,” according to NASA Earth Data Officer Katie Baynes. “We’re excited to see how the community can leverage this work to enhance resilience to climate and weather-related hazards.”
The model will be made publicly available on Hugging Face later this year.

Why it matters: For researchers, the model will reduce costs and increase the resolution of long-term climate models. For society, it will enable more precise storm and severe weather tracking, which NASA said can lead to improvements in public safety.

New York to take on social media algorithms

Photo by Markus Spiske (Unsplash).
The state of New York, according to the Wall Street Journal, is planning legislation that, instead of targeting social media content, would target the method of delivery: algorithms.

The legislation — which the Journal said will be voted on this week — would prohibit social media companies from serving automated feeds and sending overnight notifications to minors without parental consent.

But: Lobbyists for tech companies have spent recent weeks pushing back against the bill.

Psychologists have said that algorithms are one of the more dangerous sides of social media, as they trap users in “endless scrolling loops.”

Together with Tabnine

Generative AI without the risks

At this point, it’s become a well-known fact that software developers spend too much time on repetitive tasks and maintenance, which makes the industry ripe for AI disruption.

But the problem with most AI coding assistants out there is simple: they’re risky.

Tabnine, however, is the AI coding assistant that you control.

Private: You choose where and how to deploy Tabnine, and they will never store or share any of your code.
Protected: Tabnine offers models trained only on licensed code, on top of full indemnity, so you won’t be exposed to legal liability.
Personalized: Integrations with the most popular IDEs and awareness of your entire codebase mean that Tabnine’s recommendations are both higher quality and more relevant.
Join more than 1 million developers in accelerating and simplifying your development today. Try Tabnine free for 90 days — or get a full year for just $99.

Current and former OpenAI employees say financial incentives and responsible AI do not align

Photo by Solen Feyissa (Unsplash).
A group of current and former OpenAI employees (plus a handful from other labs) published an open letter Tuesday in which they said that corporate governance isn’t working.

Key points: The group believes in the potential for AI to bring “unprecedented benefits” to society, but argues that, given the risks, those benefits can only be achieved through proper regulatory governance.

The group added that AI companies “possess substantial non-public information” about the capabilities, risks, limitations and “adequacy” of safety measures that they have not shared with the public or governmental organizations.

The letter calls for whistleblower protections and an environment of open criticism.

OpenAI did not respond to a request for comment.

Why it matters: The letter comes in the wake of a safety researcher exodus and criticism from former OpenAI board members, who said that financial incentives and responsible AI do not align. It also comes as U.S. regulation has yet to take shape.

Go deeper: Big Tech spent millions on lobbying efforts focused on AI and other issues in 2023.
💰 AI Jobs Board:

Lead Support Engineer: Fotokite · United States · Boulder, CO · Full-time · (Apply here)
Machine Learning Engineer: Insight Global · United States · Remote · Full-time · (Apply here)
ML Scientist: DeepRec.ai · United States · Cambridge, MA · Full-time · (Apply here)
📊 Funding & New Arrivals:

🌎 The Broad View:

What the AI boom is getting right (and wrong) according to Hugging Face’s head of global policy (Rest of World).
AI-directed drones could help find lost hikers faster (MIT Technology Review).
Elon Musk ordered Nvidia to ship thousands of GPUs (reserved for Tesla) to X and xAI instead (CNBC).
*Indicates a sponsored link
Together with QA Wolf

😘 Kiss bugs goodbye with fully automated end-to-end test coverage

QA Wolf gets web apps to 80% automated end-to-end test coverage in just 4 months — and it requires zero effort (your team will thank you).

Plus, they provide unlimited parallel test runs on their infrastructure and 24-hour test maintenance and triage at no additional cost.

The benefit? 3-minute pass/fail results, zero false positives and human-verified bug reports.

⭐ Rated 4.8/5 on G2 — trusted by AutoTrader, Salesloft, Cohere and many others.

Learn more about their 90-day pilot
Zoom CEO envisions a future of digital clones

Created with AI by The Deep View.
Eric Yuan, the CEO of Zoom, told The Verge in a recent interview that Zoom’s semi-near-term vision for the future of AI and video calling is simple (if a little fantastical): digital clones.

“Let’s assume, fast-forward five or six years, that AI is ready,” Yuan said. “AI probably can help for maybe 90% of the work, but in terms of real-time interaction, today, you and I are talking online. So, I can send my digital version — you can send your digital version.” Yuan acknowledged that current systems aren’t there yet, but he is certain that in a few years, “we’ll get there.”
His pitch is that meetings — and work in general — are “boring” and calendars are awful. In a world where you can spin off fine-tuned AI avatars of yourself (imbued with decision-making capabilities), why work five days a week? Why work at all?

Zoom out: The idea of agentic AI and large action models isn’t new. It’s a focus for many labs, including OpenAI.

At the same time, the idea raises a lot of questions. Here are just a few:

Can security be guaranteed to ensure your avatar isn’t hacked or leveraged by some other person or group?
What happens if your decision-making AI makes a decision that you later realize it should not have made?
What happens to the global workforce if CEOs can create thousands of digital clones of themselves to run a company?
Does this future rely on the creation of a universal basic income? (The idea of a UBI raises a laundry list of its own practical questions.)
How would it be ensured that this tech is equitably distributed?
Is this leap scientifically feasible? How?
How would you feel about a world of decision-making AI clones of yourself?

My thoughts: The utopian idea of sending an AI clone of myself out into the world to generate an income while I hang out on a sunny beach somewhere, spending my days making music (to be heard by other people’s AI agents?), writing books (to be read by other people’s AI agents?) and mastering the art of painting and yoga sounds great. Maybe?

And maybe it’s achievable. But the longer I think about that idealized vision, the more practical issues crop up. Technical feasibility is only one of them.

I am not convinced that wholly removing humans from the loop is a good idea, for a number of reasons. Perhaps the main one is that my human mind cannot comprehend a world run by digital entities. Not working sounds great, but AI — AGI or otherwise — running everything at every level of society feels far more dystopian than not.

And if we actually have AI capable of achieving Yuan’s vision in five years ... I will be quite surprised.
Which image is real?

Image 1

Image 2
TwoShot: An AI-powered music sampling platform.
DryMerge: An AI-powered (YC-backed) platform for automating repetitive tasks.
MyLens: An AI platform to create clear timelines.
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link
SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.
One last thing 👇

Sasha Luccioni, PhD 🦋🌎✨🤗 @SashaMTL

Things I wish people would stop comparing AI to: 1) nuclear weapons 2) planes 3) fictitious/spiritual entities

Jun 4, 2024
|
That’s a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today’s email?

We appreciate your continued support! We’ll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View