The Roots of Progress - What happened to the idea of progress?


The Roots of Progress

by Jason Crawford


In this update:
  • Boston meetup Wednesday, Sept. 21
  • What happened to the idea of progress?
  • Towards a philosophy of safety

Boston meetup Wednesday, Sept. 21

I’m in Boston again! Last time I got covid and we had to cancel the meetup, but now, with new and improved antibodies, we have rescheduled for Wednesday, September 21 (either tomorrow or tonight, depending on when you’re reading this!).

The meetup is co-hosted by Dan Elton of the Astral Codex Ten meetup group and Tony Wang of MIT EA. I’ll make a few brief remarks, followed by a fireside chat and Q&A. Come meet others and chat about progress.

Details and RSVP on the Progress Forum, Facebook, or Meetup.com. Attendance is free but RSVP is required.

What happened to the idea of progress?

Big Think magazine has published a special issue on progress, featuring writers including Tyler Cowen, Charles Kenny, Brad DeLong, Kevin Kelly, Jim Pethokoukis, Eli Dourado, Hannah Ritchie, Alec Stapp, Saloni Dattani, and yours truly.

My piece is a revised and expanded version of “We need a new philosophy of progress,” including material from “Why do we need a NEW philosophy of progress?” and from recent talks I’ve given. Here’s an excerpt from the opening:

The title of the 1933 Chicago World’s Fair was “A Century of Progress”; the 1939 fair in New York featured “The World of Tomorrow,” and people came back from it proudly sporting buttons that said “I Have Seen the Future.” In the same era, DuPont unironically used the slogan “better things for better living… through chemistry.”

In the 1950s and ’60s, people looked forward to a future of cheap, abundant energy provided by nuclear power; Isaac Asimov even predicted that by 2014, appliances “will have no electric cords, of course, for they will be powered by long-lived batteries running on radioisotopes.” A 1959 ad in the Los Angeles Times sponsored by a coalition of power companies referred to “tomorrow’s higher standard of living”—without explanation, as a matter of course—and illustrated the possibilities with a drawing of a flying car.

Today, the zeitgeist is far less optimistic. A 2014 editorial in The Atlantic asked “Is ‘Progress’ Good for Humanity?” Jared Diamond has called agriculture “The Worst Mistake in the History of the Human Race.” Economic growth is referred to as an “addiction,” a “fetish,” a “Ponzi scheme,” or a “fairy tale.” Some even advocate a new ideal of “degrowth.”

We no longer assume that tomorrow will bring a higher standard of living. A 2015 survey of several Western countries found that only a small minority think that “the world is getting better.” The most optimistic vision of the future that many people can muster is one in which we avoid disasters such as climate change and pandemics. Young people are not even that optimistic: in a recent survey of 16- to 25-year-olds in ten countries, more than half said that “humanity was doomed” from climate change.

What happened to the idea of progress?

Read the whole thing at Big Think.

Towards a philosophy of safety

We live in a dangerous world. Many hazards come from nature: fire, flood, storm, famine, disease. Technological and industrial progress has made us safer from these dangers. But technology also creates its own hazards: industrial accidents, car crashes, toxic chemicals, radiation. And future technologies, such as genetic engineering or AI, may present existential threats to the human race. These risks are the best argument against a naive or heedless approach to progress.

So, to fully understand progress, we have to understand risk and safety. I’ve only begun my research here, but what follows are some things I’m coming to believe about safety. Consider this a preliminary sketch for a philosophy of safety.

Safety is one dimension of progress

Safety is a value. All else being equal, safer lives are better lives, a safer technology is a better technology, and a safer world is a better world. Improvements in safety, then, constitute progress.

Sometimes safety is seen as something outside of progress, or even opposed to it. This seems to come from an overly narrow conception of progress as comprising only dimensions such as speed, cost, power, and efficiency. But safety is one of those dimensions.

Safety is part of the history of progress

The previous point is borne out by history.

Many inventions were primarily motivated by safety, such as the air brake for locomotives, or sprinkler systems in buildings. Many had “safety” in the name: the safety lamp, the safety razor, the safety match; the modern bicycle design was even originally called the “safety bicycle.” We still use “safety pins.”

Further, if we look at the history of each technology, safety is one dimension it has improved along: machine tools got safety guards, steam engines got pressure valves, surgery got antiseptics, automobiles got a whole host of safety improvements.

And looking at high-level metrics of human progress, we find that mortality rates have declined significantly over the long term, thanks to the above developments.

We have even made progress itself safer: today, new technologies are subject to much higher levels of testing and analysis before being put on the market. For instance, a century ago, little to no testing was performed on new drugs, sometimes not even animal testing for toxicity; today they go through extensive, multi-stage trials.

To return to the previous point, that safety is a dimension of progress: drug testing incurs cost and overhead, and it certainly reduces the rate at which new drugs are released to consumers, but it would be wrong to describe drug testing as opposed to pharmaceutical progress. Improved testing is itself a part of pharmaceutical progress.

Safety must be actively achieved

Safety is not automatic, in any context: it is a goal we must actively seek and engineer for. This applies both to the hazards of nature and to the hazards of technology.

One implication is that inaction is not inherently safe, and a static world is not necessarily safer than a dynamic one.

There are tradeoffs between safety and other values

This is clear as soon as we see progress as multivariate, and safety as one dimension of it. Just as there are tradeoffs among speed, cost, reliability, etc., there are also tradeoffs between safety and speed, safety and cost, etc.

As with all multivariate scenarios, these tradeoffs only have to be made if you are already on the Pareto-efficient frontier—and, crucially, new technology can push out the frontier, creating the opportunity to improve along all axes at once. Light bulbs, for instance, were brighter, more convenient, and more pleasant than oil or gas lamps, and they also reduced the risk of fire.
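
To make the Pareto point concrete, here is a toy sketch in Python. It is my own illustration, not from the essay, and the technologies and scores are entirely made up: among the older options, improving on one axis means giving up another, but a frontier-pushing technology can dominate on every axis at once.

```python
# Toy sketch of Pareto efficiency (hypothetical technologies, made-up scores).
# Each option is scored on several dimensions; higher is better on every axis.

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(options):
    """Options not dominated by any other option: the efficient frontier."""
    return [name for name, s in options.items()
            if not any(dominates(t, s) for other, t in options.items() if other != name)]

# Dimensions: (brightness, convenience, fire safety)
old = {"oil lamp": (3, 5, 2), "gas lamp": (6, 3, 3)}
print(pareto_frontier(old))    # ['oil lamp', 'gas lamp']: both are on the frontier,
                               # so improving one axis requires giving up another

new = dict(old, **{"light bulb": (9, 8, 9)})
print(pareto_frontier(new))    # ['light bulb']: the new technology dominates both,
                               # improving every axis at once (no tradeoff needed)
```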

We are neither consistently over-cautious nor consistently reckless

As with all tradeoffs, it’s possible to get them wrong in either direction, and it’s possible to simultaneously get some tradeoffs wrong in one direction while getting others wrong in the opposite direction.

For example, some safety measures seem to add far more overhead than they’re worth, such as TSA airport screening or IRB review. But at the same time, we might not be doing enough to prevent pathogens from escaping research labs.

Why might we get a tradeoff wrong?

Some potential reasons:

Some risks are more visible than others. If a plane crashes, or is attacked, the resulting deaths are highly visible, and it’s easy to attribute them to a failure of airline safety. If safety measures make air travel slower and less convenient, causing people to drive instead, the increased road deaths are much less visible, and much less obviously a result of anything to do with air travel.

Tail risks in particular are less visible. If a society is not well prepared for a pandemic, this will not be obvious until it is too late.

Sins of omission are more socially acceptable. If the FDA approves a harmful drug, they are blamed for the deaths that result. If they block a helpful drug, they are not blamed for the deaths that could have been avoided. (Alex Tabarrok calls this the “invisible graveyard.”)

Incentive structures can bias towards certain types of risks. For instance, risks that loom large in the public consciousness, such as terrorism, tend to receive a disproportionate response from agencies that are in some form accountable to the public. The end result of this is safety theater: measures that are very visible but have a negligible impact on safety. In contrast, risks that the public does not understand or does not think about are neglected by the same types of agencies. (Surprisingly, “a new pandemic from a currently unknown pathogen” seems to be one such risk, even after covid.)

Safety is a human problem, and requires human solutions

Inventions such as pressure valves, seat belts, or smoke alarms can help with safety. But ultimately, safety requires processes, standards, and protocols. It requires education and training. It requires law.

Improving safety requires feedback loops, including reporting systems. It benefits greatly from openness: for instance, the FAA encourages anonymous reports of safety incidents, and is even more lenient in penalizing safety violations that are voluntarily reported.

Safety requires aligned incentives: workers’ compensation laws, for instance, aligned the incentives of factories and workers and led to improved factory safety. Insurance helps by aligning safety procedures with profit motives.

Safety benefits from public awareness: the workers’ comp laws came after reports by journalists such as Crystal Eastman and William Hard. In the same era, a magazine series exposing the shams and fraud of the patent-medicine industry led to reforms such as stricter truth-in-advertising laws.

Safety requires leadership. It requires thinking statistically, and this does not come naturally to most people. Factory workers, for instance, did not want to use safety measures that were inconvenient or slowed them down, such as goggles, hard hats, or guards on equipment.

Safety requires defense in depth

There is no silver bullet for safety: any one mechanism can fail; an “all of the above” strategy is needed. Auto safety was improved by a combination of seat belts, anti-lock brakes, airbags, crumple zones, traffic lights, divided highways, concrete barriers, driver’s licensing, social campaigns against drunk driving, etc.

(To apply this to current events: the greater your estimate of the risk from climate change, the more you should logically support a defense-in-depth strategy—including nuclear power, carbon capture, geoengineering, heat-resistant crops, seawalls to protect coastal cities, etc.)

We need more safety

When we hope for progress and look forward to a better future, part of what we should be looking forward to is a safer future.

We need more safety from existing dangers: auto accidents, pandemics, wildfires, etc. We’ve made a lot of progress on these already, but as long as the risk is greater than zero, there is more progress to be made.

And we need to continue to raise the bar for making progress safely. That means safer ways of experimenting, exploring, researching, inventing.

We need to get more proactive about safety

Historically, a lot of progress in safety has been reactive: accidents happen, people die, and then we figure out what went wrong and how to prevent it from recurring.

The more we go forward, the more we need to anticipate risks in advance. Partly this is because, as the general background level of risk decreases, it makes sense to lower our tolerance for risks of all kinds, and that includes the risks of new technology.

Further, the more our technology develops, the more we increase our power and capabilities, and the more potential damage we can do. The danger of total war became much greater after nuclear weapons; the danger of bioengineered pandemics or rogue AI may be far greater still in the near future.

There are signs that this shift towards more proactive safety efforts has already begun. The field of bioengineering has proactively addressed risks on multiple occasions over the decades, from recombinant DNA to human germline editing. The fact that the field of AI has been seriously discussing risks from highly advanced AI well before it is created is a departure from historical norms of heedlessness. And compare the lack of safety features on the first cars to the extensive testing (much of it in simulation) being done for self-driving cars. This shift may not be enough, or fast enough—I am not advocating complacency—but it is in the right direction.

This is going to be difficult

It’s hard to anticipate risks—especially from unknown unknowns. No one guessed at first that X-rays, which could be neither seen nor felt, were a potential health hazard.

Being proactive about safety means identifying risks via theory, ahead of experience, and there are inherent epistemic limits to this. Beyond a certain point, the task is impossible, and the attempt becomes “prophecy” (in the Popper/Deutsch sense). But within those limits, we should try, to the best of our knowledge and ability.

Even when risks are predicted, people don’t always heed them. Alexander Fleming, who discovered the antibiotic properties of penicillin, predicted the potential for the evolution of antibiotic resistance early on, but that didn’t stop doctors from massively overprescribing antibiotics when they were first introduced. We need to get better at listening to the right warnings, and better at taking rational action in the face of uncertainty.

Thoughtful sequencing can mitigate risk before it is created

A famous example of this is the 1975 Asilomar conference, where genetic engineering researchers worked out safety procedures for their experiments. While the conference was being organized, for a period of about eight months, researchers voluntarily paused certain types of experiments, so that the safety procedures could be established first.

When the risk mitigation is not a procedure or protocol but a new technology, this approach is called “differential technology development” (DTD). For instance, we could create safety against pandemics by building better rapid vaccine development platforms, or wastewater monitoring systems that would give us early warning of new outbreaks. The idea of DTD is to create and deploy these defensive technologies before we create more powerful genetic engineering techniques or equipment that might increase the risk of pandemics.

This kind of sequencing seems valuable and important to me, but the devil is in the details. Judging which technologies are the most risk-creating, and which are the best opportunities for mitigation, requires deep domain expertise. And implementing the plan may in some cases require a daunting global coordination effort.

Safety depends on technologists

Much of safety is domain-specific: the types of risks, and what can guard against them, are quite different when considering air travel vs. radiation vs. new drugs vs. genetic engineering.

Therefore, much of safety depends on the scientists and engineers who are actually developing the technologies that might create or reduce risk. As the domain experts, they are closest to the risk and understand it best. They are the first ones who will be able to spot it—and they are also the ones holding the key to Pandora’s box. They are the ones who will implement DTD—or thwart it.

A positive example here comes from Kevin Esvelt. After coming up with the idea for a CRISPR-based gene drive, he says, “I spent quite some time thinking, well, what are the implications of this? And in particular, could it be misused? What if someone wanted to engineer an organism for malevolent purposes? What could we do about it? … I was a technology development fellow, not running my own lab, but I worked mostly with George Church. And before I even told George, I sat down and thought about it in as many permutations as I could.”

Technologists need to be educated in how to spot risks, how to respond constructively to them, and how to maximize safety while still moving forward with their careers. They should be instilled with a deep sense of responsibility, not in a way that induces guilt about their field, but in a way that inspires them to hold themselves to the highest standards.

Broad progress helps guard against unknown risks

General capabilities help guard against general classes of risk, even ones we can’t anticipate. Science helps us understand risk and what could mitigate it; technology gives us tools; wealth and infrastructure create a buffer against shocks. Industrial energy usage and high-strength materials guard against storms and other weather events. Agricultural abundance guards against famine. If we had a cure for cancer, it would guard against the accidental introduction of new carcinogens. If we had broad-spectrum antivirals, they would guard against the risk of new pandemics.

Safety doesn’t require sacrificing progress

The path to safety is not through banning broad areas of R&D, nor through a general, across-the-board slowdown of progress. The path to safety is largely domain-specific. It needs the best-informed threat models we can produce, and specific tools, techniques, protocols and standards to counter them.

If and when it makes sense to halt or ban R&D, the ban should be either narrow or temporary. An example of a narrow ban would be one on specific types of experiments that try to engineer more dangerous versions of pathogens: the risks are large and obvious, and the benefits are minor (it’s not as if these experiments are necessary to fundamentally advance biology). A temporary ban can make sense until a particular goal is reached in terms of working out safety procedures, as at Asilomar.

Bottom line: we can—we must—have both safety and progress.
 



Thanks to Vitalik Buterin, Eli Dourado, Mark Lutter, Matt Bateman, Adam Thierer, Rohit Krishnan, David Manheim, Maxwell Tabarrok, Geoff Anders, Étienne Fortier-Dubois, James Rosen-Birch, Niloy Gupta, Jonas Kgomo, and Sebastian G. for comments on a draft of this essay. Some of the ideas above are due to them; errors and omissions are mine alone.

Original post: https://rootsofprogress.org/towards-a-philosophy-of-safety


Copyright © 2022 The Roots of Progress. Some rights reserved: CC BY-ND 4.0

