Happy Friday! Some part of you knows that an AI could have written an impressively packed to-do list for your weekend, but the AI that keeps you from worrying too much has already booked you for a relaxing time with a nice book. You’ll need an AI assistant to sort this out.
In today’s edition:
—Patrick Kulp, Maeve Allsup, Courtney Vien
Mark Surman
We sat down with Mark Surman, president of the Mozilla Foundation (parent of Mozilla Corp., of Firefox fame), to talk all about…what else? Artificial intelligence. We chatted about how to define “responsible tech,” how the foundation spots smart AI investments, and about his “slightly countercultural” view on AI regulation in the US.
This interview has been edited for length and clarity.
How does Mozilla define responsible tech? How do you define responsible AI and what are you looking for in investments?
What we decided to do for fund one is every investment memo has got to have one element in the Mozilla Manifesto that, if the company succeeds at its vision for the product, would actually advance the Mozilla Manifesto in some way, and at least does no harm to others. And so this little manifesto has got 14 principles that are things like “privacy and security are non-negotiable,” or there’s a piece around inclusivity, or there’s a piece around healthy, respectful communities.
So those then become kind of the things we look for in the company: Does the founder or just the founding team have a kind of real desire and a vision for how their product can advance one of these things? ...On the tech, the phrase we use is “trustworthy AI,” not “responsible AI.”
But it goes back to those same things I said before: agency and accountability. Is there some piece where, effectively, the use of AI is there for the purpose of empowering users? Is there kind of an approach to think about guardrails or accountability in how AI is being built in the company?
Read our full interview here.—MA
How? With Morning Brew’s engaged audience of 22m+ monthly readers, of course.
Our unique community of young, hard-to-reach readers (who are 1.7x more likely to have a household income of $150k+) can give your B2B offerings the valuable visibility you’re looking for.
B2B decision-makers know how crucial it is to get their business’s potential in front of the right audience, and the Brew’s paid advertising opportunities connect your brand to our readers by leveraging our popular B2B-centric franchise newsletters, specialized events, and skyrocketing cache of multimedia content.
Morning Brew is powered by the knowledge our readers trust us to deliver. From Retail Brew’s trending insights to Healthcare Brew’s timely updates, we’ve got a B2B Brew for you. Which one will you choose to grow with?
Advertise with us.
Stanford University
US regulators have worked to get Congress up to speed on AI-related issues including intellectual property, human rights, and potential development guardrails. But recent hearings haven’t quite had the same entertainment value as Mark Zuckerberg explaining the internet back in 2018. The Hill isn’t exactly swarming with AI experts, and congressional leaders know it.
So Senate Majority Leader Chuck Schumer is planning a series of forums this fall to help regulators beef up their AI knowledge, and Hill staffers recently headed to California to get a head start. Last week, staffers attended a “bipartisan boot camp” hosted by Stanford University’s Institute for Human-Centered Artificial Intelligence, featuring three days of sessions on everything from the basics of foundation models and deepfakes to China.
What exactly does AI summer camp involve? The 28 staffers who fled the DC heat for Stanford’s slightly cooler temps were treated to workshops and sessions by university faculty and graduate students whose diverse expertise included everything from healthcare and neuroscience to Chinese military and security.
Attendees, who included advisors for policymakers on both sides of the aisle (think Bernie Sanders, Cory Booker, Rick Scott, and Frank Lucas), sat in on sessions that started with basic AI concepts like compute power and neural networks and got as specific as the impact of AI on addiction. Plus, they heard from high-profile speakers, including Meta’s chief ethicist, Chloé Bakalar; Anthropic co-founder Jack Clark; and former Secretary of State Condoleezza Rice, who closed out the camp with a discussion of AI’s impact on governance.
And the boot camp wasn’t limited to speaker panels—staffers also took part in a National Security Council “simulation” focusing on AI deployment for national security in response to an imagined crisis in the Taiwan Strait.
Of course, no summer camp would be complete without a certificate of completion. We here at Tech Brew are guessing the Stanford University AI certificates will stay on the wall longer than the archery awards from the summer of 2009.—MA
SPONSORED BY AMAZON WEB SERVICES
An AWSome opportunity. Wanna learn more about cloud computing and how to leverage the AWS Cloud to your advantage? Don’t miss out on AWSome Day, a free virtual half-day cloud training conference that’ll give you a practical intro to all things AWS and the cloud. Save your seat.
Ipopba/Getty Images
Generative AI has been one of corporate boards’ top concerns this year, according to KPMG’s midyear observations on the 2023 board agenda. The technology is “being discussed in most boardrooms, as companies and their boards are seeking to understand its associated opportunities and risks,” the KPMG Board Leadership Center noted in its report.
That’s a striking change from just six months ago. AI played a minor role in KPMG’s On the 2023 Board Agenda report released in December 2022. That report discussed AI as just one of various technology-related issues, alongside data governance, cybersecurity, and economic risk.
But rapid advances in generative AI technology have put it atop boards’ priority list. Board members are seeking more education about AI, according to KPMG’s new report. They’re asking for experts to conduct high-level trainings on the benefits and risks of AI.
Boards are also concerned about the need to establish policies around the use of generative AI. “It’s important to develop a governance structure and policies regarding the use of this technology early on, while generative AI is still in its infancy,” the report’s authors said.
And entire boards are also taking on responsibility for AI oversight, rather than delegating it to one committee in particular, according to the report.
“Given its strategic importance, oversight is often a responsibility for the full board,” the authors wrote.
Stat: About 2,000. That’s the estimated number of people who participated in a recent hacking event hosted in Las Vegas, with the goal of creating prompts that would throw even the best AI chatbot off its game. (NBC News)
Quote: “It needs to experience a diverse set of use cases so it can learn, and driving into wet concrete is one of those use cases.”—Paul Leonardi, professor of technology management at the University of California, Santa Barbara, about the Cruise driverless car that recently drove into, and got stuck in, wet concrete in San Francisco (the New York Times)
Read: With scores of people missing after the devastating fires in Lahaina, Maui, the island’s failed warning system is among the many infrastructural matters facing the community and state and federal government officials. (the Economist)
Links we love: Everyone wins when you’re all on the same page. Learn how to lead your team in the right direction, as a unit of one, with our Building High-Performance Teams sprint. Sign up now.
Apple
Usually, we write about the business of tech. Here, we highlight the *tech* of tech.
Tick, tock: If you’re anything like us, you enjoy thinking of yourself as someone who would wear a watch daily (instead of checking your phone every half hour). Apple is reportedly planning a big relaunch of its beloved Watch in 2024 or 2025, per Bloomberg, which gives us just enough time to talk ourselves into becoming watch wearers.
Let’s try this again: Snapchat users have thoughts and feelings about the content that the company’s My AI, a chatbot built with ChatGPT technology, “accidentally” posted as a live Story, reports CNN: a short video of a wall. It sounds boring, but we want to know so much more. Whose wall was it? Was anything in the room? Does My AI need affordable wall art recommendations? It’s giving “dorm room decor ideas 2023.”
Origami lifestyle: We fold our CVS receipts, our precious clothes, and our laptops closed, so why not our phones? Foldable smartphones deserve a new look, especially those from Samsung, Motorola, and Google, writes CNET.
Share Tech Brew with your coworkers, acquire free Brew swag, and then make new friends as a result of your fresh Brew swag.
We’re saying we’ll give you free stuff and more friends if you share a link. One link.
Copyright © 2023 Morning Brew. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011