Not Boring by Packy McCormick - Narrative Tug-of-War
Welcome to the 408 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 216,703 smart, curious folks by subscribing here.

The sourcing tool for data-driven VCs

Harmonic AI is the startup discovery tool trusted by VCs and sales teams in search of breakout companies. It’s like if Crunchbase or CB Insights were built today, without a bunch of punitive paywalls. Accel, YC, Brex, and hundreds more use Harmonic to:
Whether you're an investor or GTM leader, Harmonic is just one of those high-ROI no-brainers to have in your stack.

Hi friends 👋, Happy Tuesday!

Hope you all had a great Thanksgiving (or enjoyed the peace and quiet while us Americans were in Turkey comas). Apologies that this is a little late — once again, the newsletter gods dropped a perfect example of the point I was trying to make in my lap at the last minute, and I’ve been up since 5:30 trying to incorporate it.

We live in a time of extreme narratives. It’s easy to get caught up and worked up when you take the extremes in isolation. Don’t. They’re part of a bigger game, and once you see it, the world makes a lot more sense.

Let’s get to it.

Narrative Tug-of-War

One of the biggest changes to how I see the world over the past year or so is viewing ideological debates as games of narrative tug-of-war. For every narrative, there is an equal and opposite narrative. It’s practically predetermined, cultural physics. One side pulls hard to its extreme, and the other pulls back to its own.

AI is going to kill us all ←→ AI is going to save the world.

What starts as a minor disagreement gets amplified into completely opposing worldviews. What starts as a nuanced conversation gets boiled down to catchphrases. Those who start as your opponents become your enemies.

It’s easy to get worked up if you focus on the extremes, on the teams tugging the rope on each side. It’s certainly easy to nitpick everything they say and point out all of the things they missed or left out. Don’t. Focus on the knot in the middle.

That knot, moving back and forth over the center line as each team tries to pull it further to their own side, is the important thing to watch. That’s the emergent synthesis of the ideas, and where they translate into policy and action.

There’s this concept called the Overton Window: the range of policies or ideas that are politically acceptable at any given time. Since Joseph Overton came up with the idea in the mid-1990s, the concept has expanded beyond government policy. Now, it’s used to describe how ideas enter the mainstream conversation, where they influence public opinion, societal norms, and institutional practices.

The Overton Window is the knot in the narrative tug-of-war. The teams pulling on either side don’t actually expect that everyone will agree with and adopt their ideas; they just need to pull hard enough that the Overton Window shifts in their direction.

Another way to think about it is like price anchoring, when a company offers multiple price tiers knowing that you’ll land on the one in the middle and pay more for it than you would have without seeing how little you get for the lower price or how much you’d have to pay to get all of the features. No one expects you to pay $7,000 for the Super Pro tier (although they’d be happy if you did). They just know that showing it to you will make paying $69 for the Pro tier more palatable.

The same thing happens with narratives, but instead of one company carefully setting prices to maximize the likelihood that you buy the Pro tier, independent and opposed teams, often made up of people who’ve never met, loosely coordinated through group chats and memes, somehow figure out how to pull hard enough that they move the knot back to what they view as an acceptable place. It’s a kind of cultural magic when you think about it.

There are a lot of examples I could use to illustrate the idea, many of which could get me in trouble, so I’ll stick to what I know: tech.
Specifically, degrowth vs. growth, or EA vs. e/acc.

EA vs. e/acc

One of the biggest debates in my corner of Twitter, which burst out into the world with this month’s OpenAI drama, is Effective Altruism (EA) vs. Effective Accelerationism (e/acc). It’s the latest manifestation of an age-old struggle between those who believe we should grow and those who don’t, and the perfect case study through which to explore the narrative tug-of-war.

If you look at either side in isolation, both views seem extreme. EA (which I’m using as a shorthand for the AI-risk team) believes that there is a very good chance that AI is going to kill all of us. Given that there will be trillions of humans in the coming millennia, even if there’s a 1% chance AI will kill us all, preventing that from happening will save tens or hundreds of billions of expected lives. We need to stop AI development before we get to AGI, whatever the cost. As that team’s captain, Eliezer Yudkowsky, wrote in Time:
The idea that we should bomb datacenters to prevent the development of AI, taken in a vacuum, is absurd, as many of AI’s supporters were quick to point out.

e/acc (which I’m using as a shorthand for the pro-AI team) believes that AI won’t kill us all and that we should do whatever we can to accelerate it. They believe that technology is good, capitalism is good, and that the combination of the two, the techno-capital machine, is the “engine of perpetual material creation, growth, and abundance.” We need to protect the techno-capital machine at all costs.

Marc Andreessen, who rocks “e/acc” in his Twitter bio, recently wrote The Techno-Optimist Manifesto, in which he makes the case for essentially unchecked technological progress. One section in particular drew the ire of AI’s opponents:
The idea that things most people view as good – like sustainability, ethics, and risk management – are the enemy, taken in a vacuum, seems absurd, as many journalists and bloggers were quick to point out.

What critics of both pieces missed is that neither argument should be taken in a vacuum. Nuance isn’t the point of any one specific argument. You pull the edges hard so that nuance can emerge in the middle.

While there are people on both teams who support their side’s most radical views – a complete AI shutdown on one side, unchecked techno-capital growth on the other – what’s really happening is a game of narrative tug-of-war in which the knot is regulation. EA would like to see AI regulated, and would like to be the ones who write the regulation. e/acc would like to see AI remain open and not controlled by any one group, be it a government or a company. One side tugs by warning that AI Will Kill Us All in order to scare the public and the government into hasty regulation; the other side tugs back by arguing that AI Will Save the World to stave off regulation for long enough that people can experience its benefits firsthand.

Personally, and unsurprisingly, I’m on the side of the techno-optimists. That doesn’t mean that I believe technology is a panacea, or that there aren’t real concerns that need to be addressed. It means that I believe that growth is better than stagnation, that problems have solutions, that history shows that both technological progress and capitalism have improved humans’ standard of living, and that bad regulation is a bigger risk than no regulation.

While the world shifts based on narrative tug-of-wars, there is also truth, or at least fact patterns. Doomers – from Malthus to Ehrlich – continue to be proven wrong, but fear sells, and as a result, the mainstream narrative continues to lean anti-tech. The fear is that restrictive regulation gets put in place before the truth can emerge.

Because the thing about this game of narrative tug-of-war is that it’s not a fair one. The anti-growth side needs only to pull hard and long enough to get regulation enacted. Once it’s in place, it’s hard to overturn; typically, it ratchets up. Nuclear energy is a clear example. If they can pull the knot over the regulation line, they win, game over. The pro-growth side has to keep pulling for long enough for the truth to emerge in spite of all the messiness that comes with any new technology, for entrepreneurs to build products that prove out the promise, and for creative humans to devise solutions that address concerns without neutering progress. They need to keep the tug-of-war going long enough for solutions to emerge in the middle.

Yesterday, Ethereum co-founder Vitalik Buterin wrote a piece called My techno-optimism in which he proposed one such solution: d/acc. The “d,” he wrote, “can stand for many things; particularly, defense, decentralization, democracy and differential.” It means using technology to develop AI in a way that protects against potential pitfalls and prioritizes human flourishing. On one side, the AI safety movement pulls with the message: “you should just stop.” On the other, e/acc says, “you’re already a hero just the way you are.” Vitalik proposes d/acc as a third, middle way:
It’s a synthesis, one he argues can appeal to people whatever their philosophy (as long as the philosophy isn’t “regulate the technology to smithereens”):

Without EA and e/acc pulling on both extremes, there may not have been room in the middle for Vitalik’s d/acc. The extremes, lacking nuance themselves, create the space for nuance to emerge in the middle. If EA wins, and regulation halts progress or concentrates it into the hands of a few companies, that room no longer exists. If the goal is to regulate, there’s no room for a solution that doesn’t involve regulation. But if the goal is human flourishing, there’s plenty of room for solutions. Keeping that room open is the point.

Despite the fact that Vitalik explicitly disagrees with pieces of e/acc, both Marc Andreessen and e/acc’s pseudonymous co-founder Beff Jezos shared Vitalik’s post. That’s a hint that they care less about their own solution winning than about a good solution winning. Whether d/acc is the answer or not, it captures the point of tugging on the extremes beautifully. Only once e/acc set the outer boundary could a solution that involves merging humans and AI through Neuralinks be viewed as a sensible, moderate take. Ray Kurzweil made that point a couple of decades ago and has the arrows to prove it.

In this and other narrative tug-of-wars, the extremes serve a purpose, but they are not the purpose. For every EA, there is an equal and opposite e/acc. As long as the game continues, solutions can emerge from that tension. Don’t focus on the tuggers; focus on the knot.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with the Weekly Dose on Friday!

Thanks for reading,

Packy