Good morning. We finally found Ilya.
The former OpenAI chief scientist has launched a new venture, simply called: Safe Superintelligence, Inc.
His previous company — OpenAI — turned out not to be so open. I guess we’ll see if this one ends up being either safe or super.
In today’s newsletter:
🦾 AI for Good: Mind-controlled prosthetics
⚡️ OpenAI co-founder sets out on a quest to build artificial superintelligence
🛜 Study: Fair LLMs can’t ever happen
🗳️ Report: Voters favor greater regulation the more powerful AI gets
AI for Good: Mind-controlled prosthetics
Image Source: Atom Limbs
There are a host of medical applications that can come from using AI to decode and reconstruct brain activity. One of them is enhanced prosthetics.
What we’re talking about here is a shift from traditional prosthetic limbs, which don’t do much more than sit in place, to a more Star Wars-esque bionic arm. It sounds like science fiction, but it’s a lot closer than it may seem.
Atom Limbs: Though still in early development, Atom Limbs’ first product — an upper-arm prosthetic called Atom Touch — is a lightweight bionic arm that offers users a full range of motion.
Through the non-invasive application of electrodes to a person’s residual limb, Atom uses machine learning to decode the body’s electrical signals and translate them into bionic movement. “For someone who’s lost a limb, they still have all those nerves,” CEO Tyler Hayes has said. “We take some electrodes and we listen to the electrical activity that emanates from the body. We send those signals over to the robotic limb and that’s what autocompletes the rest of the movement.”
The prosthetic arm controlled by your mind (BBC News)
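Atom hasn’t published the details of its decoding pipeline, so the sketch below is purely illustrative of the general idea the company describes: surface electrodes produce multi-channel electrical signals, simple features are computed over each signal window, and a learned classifier maps those features to an intended movement. Every channel count, gesture label and model choice here is a made-up assumption on synthetic data, not Atom’s actual system.

```python
# Purely illustrative sketch: Atom Limbs' real pipeline is proprietary.
# This toy example shows the general shape of the problem -- decode
# multi-channel electrical signals into an intended movement -- using
# synthetic data and a simple classifier. All numbers are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_CHANNELS = 8   # hypothetical number of surface electrodes
N_SAMPLES = 200  # signal samples per window
GESTURES = ["rest", "open_hand", "close_hand", "rotate_wrist"]

def synthetic_window(gesture_id: int) -> np.ndarray:
    """Fake one window of multi-channel electrical activity for a gesture."""
    window = rng.normal(0, 1, size=(N_CHANNELS, N_SAMPLES))
    window[gesture_id % N_CHANNELS] += 2.0  # crude per-gesture signature
    return window

def features(window: np.ndarray) -> np.ndarray:
    """Classic signal features: mean absolute value and RMS per channel."""
    mav = np.mean(np.abs(window), axis=1)
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    return np.concatenate([mav, rms])

X, y = [], []
for label, _ in enumerate(GESTURES):
    for _ in range(100):
        X.append(features(synthetic_window(label)))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), random_state=0
)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In a real prosthetic, the predicted gesture would be streamed to the arm’s motor controllers in real time, but the signal-to-features-to-prediction shape of the problem is the same.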
Why it matters: There are millions of amputees across the country, and top-end prosthetic limbs are incredibly expensive. According to the BBC, Atom plans to sell its arm for $20,000, which is on the lower end of the bionic market.

Ilya Sutskever sets out on a quest for artificial superintelligence
Image Source: Stanford University
Just a few weeks after departing OpenAI, Ilya Sutskever has launched a new project: Safe Superintelligence, Inc.
Welcome to SSI: The startup — co-founded with Daniel Gross and Daniel Levy — aims to be the “world’s first straight-shot SSI lab.” Its explicit purpose is to build an artificial superintelligence.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
“By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” he added.
SSI did not disclose how much money it has raised or who its backers are, saying only that raising capital will not be a problem.
The landscape: This pitch is pretty similar to OpenAI’s stated mission of creating a general intelligence that “benefits all humanity.” OpenAI, of course, has gone through a couple of dramatic shifts as, faced with the high cost of compute, it has pivoted further from its nonprofit origins. (A number of safety researchers have recently left OpenAI, citing safety concerns.)
Sutskever didn’t provide details on what ‘safety’ means here, how it will be achieved or who will decide whether a product is safe, or safe enough.
The company did not respond to a request for comment asking whether there is an inherent safety risk in a single for-profit company selling a superintelligence.

Together with Vanta
Join the Live Session: Automating SOC 2 and ISO 27001 Compliance
Whether you’re starting or scaling a business, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for SOC 2, ISO 27001, ISO 42001, NIST AI RMF and more, saving you time and money — while helping you build customer trust.
Ready to learn more about why compliance is so important, which businesses need it, and how Vanta's automation can help you quickly achieve it?
Join the live session on Tuesday, July 9 to ask your questions and see the platform in action.

Study: Fair LLMs can’t ever happen
Photo by Amanda Dalbjörn (Unsplash).
One of the most significant concerns to grow alongside the popularization of LLMs centers on bias and fairness. The idea is simply that, since these models have been trained on the corpus of the internet, bias is intrinsic to them.
Recent research confirms this impression, further finding that existing attempts to apply fairness to LLMs are “inherently limited.”
Key points: LLMs are too “generally flexible” to ever be considered “fair.”
The reasons for this are manifold, but chiefly that the training data is both too vast and too sensitive for certain mitigation methods (like fairness through unawareness) to work. The case for fair LLMs is worsened by the lack of model transparency — if researchers don’t know what a model was trained on, they can’t employ methods that would meaningfully improve its fairness.
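For context on one of the mitigation methods the researchers mention: “fairness through unawareness” means simply withholding protected attributes from a model. The minimal sketch below, with hypothetical column names and toy data, shows what that looks like for a small tabular classifier, and why the argument is that it has no analogue for an LLM whose web-scale training corpus has no protected-attribute column to drop.

```python
# Minimal sketch of "fairness through unawareness" on tabular data.
# Column names and values are hypothetical, for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "income":   [35, 62, 48, 90, 27, 55],
    "tenure":   [2, 8, 5, 12, 1, 6],
    "gender":   ["f", "m", "f", "m", "f", "m"],  # protected attribute
    "approved": [0, 1, 1, 1, 0, 1],              # label
})

# "Unawareness": simply drop the protected attribute before training.
X = df.drop(columns=["gender", "approved"])
y = df["approved"]
model = LogisticRegression().fit(X, y)
print(model.predict(X))

# With an LLM there is no equivalent column to drop: protected attributes
# are entangled throughout terabytes of web text, which is why the
# researchers argue this mitigation does not transfer.
```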
There’s still hope: The researchers made three recommendations for achieving incremental, long-term progress on model fairness.
Fairness evaluations must keep social and societal context in mind.
LLM developers must be held responsible for ensuring fairness and mitigating harm.
Developers must also work closely with researchers, policymakers, end users and other stakeholders to audit algorithms and collaboratively design fairer systems.
💰 AI Jobs Board:
Machine Learning Scientist: DeepRec.ai · United States · Cambridge, MA · Full-time · (Apply here)
Machine Learning Infrastructure Engineer: Stealth Startup · United States · San Francisco · Full-time · (Apply here)
Founding Engineer: Diana HR · United States · San Francisco · Full-time · (Apply here)
📊 Funding & New Arrivals:

🌎 The Broad View:
Amazon was fined $5.9 million for over 59,000 violations of California labor laws (CNBC).
Singapore doubles down on lab-grown meat as Silicon Valley backs off (Rest of World).
A WIRED investigation shows that Perplexity AI is ignoring the Robots Exclusion Protocol (Wired).
Ready to supercharge your career? Sidebar is a leadership program where you get matched to a small peer group, have access to a tech-enabled platform, and get an expert-led curriculum. Members say it’s like having their own Personal Board of Directors.*
*Indicates a sponsored link
Together with Tabnine
Generative AI that keeps your code secure
Software developers are rapidly adopting AI code assistants to reduce time spent on mundane, repetitive tasks — but do they risk accidentally exposing IP-sensitive code? With general-purpose coding assistants, which aren’t nearly secure enough, they do.
Tabnine, however, is the AI code assistant that always keeps your code safe:
You choose where and how to deploy Tabnine (SaaS, VPC or on-premises) to maximize control over your intellectual property.
Tabnine never stores or shares your company’s code.
Tabnine ensures the privacy of both your code and your team’s activities.
Join the more than 1 million developers who use Tabnine today to accelerate and simplify software development. Try it free for 90 days — or get a full year for the discounted price of $99.
Report: The more powerful AI gets, the more the public supports regulation
Created with AI by The Deep View.
Apropos of Ilya Sutskever’s blatant quest to create and sell an artificial superintelligence, new polling from the Artificial Intelligence Policy Institute (AIPI) has found that voter support for restrictive AI legislation is strongly correlated with impressions of how powerful AI is.
Key points: Its research found that the majority of voters who believe AI represents a “uniquely powerful” technology that will “dramatically change society” also believe that AI poses a national security risk. They support:
“Voters who believe AI is powerful are more likely to support developing regulations to avoid harmful effects, rather than waiting to see how the technology develops, by a 60 point margin,” the AIPI said.
The AIPI said that its findings “show that the more powerful a model is, the more voters support restricting the model.”
It recommended that policymakers explore safety testing of frontier models and export controls, among other things.
The legislative landscape: In the U.S., federal regulation of AI remains conspicuous only in its absence. Many states, meanwhile, are jumping to fill the regulatory void.
Colorado was the first to enact a comprehensive piece of AI legislation, which, in part, requires certain disclosures related to high-risk systems and algorithmic discrimination. Utah has also enacted a piece of AI regulation.
Other states have a series of bills currently in process; one California bill would require developers spending more than $100 million on model training to complete certain safety tests. If they don’t, they’ll be held liable if their system leads to a “mass casualty event.”
When it comes to ASI … Daniel Colson, executive director of the AIPI, told me that when it comes to explicit corporate attempts to build artificial superintelligence, “the public is crystal clear that they must be involved. The more powerful AI models are, the more the public cares about regulating AI and restricting its proliferation.”
“Our polling has demonstrated time and time again that the American people do not trust tech companies to handle AI safely,” Colson said. “Even if Ilya is committed to developing AI safely and can deliver on this promise, an unchecked environment may simply mean the companies that push the fastest and most recklessly will receive the most funding. Keeping AI safe will take action from both the private and public sectors.”
Do you support enhanced regulation as AI models get more and more powerful? Let us know what kind of regulation you think is a good idea.
Which image is real?
Image 1
Image 2
Brave Search API: An ethical, human-representative web dataset to train your AI models.*
Voscribe: AI-powered transcription tool.
Grammarly: AI-powered writing and editing tool.
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).
*Indicates a sponsored link
SPONSOR THIS NEWSLETTER
The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.
If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.
One last thing 👇
Andrea Miotti @_andreamiotti
one more try I swear just one more try
SSI Inc. @ssi
Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence… x.com/i/web/status/1…
Jun 19, 2024
That's a wrap for now! We hope you enjoyed today’s newsletter :)
What did you think of today's email?
We appreciate your continued support! We'll catch you in the next edition 👋
-Ian Krietzberg, Editor-in-Chief, The Deep View