Tedium - The Sneaky Standard 🖥

Intel shaped an industry through a canny bit of deception.

Hunting for the end of the long tail • February 09, 2024

Today in Tedium: Computing has changed a lot in the past four decades, and one of the biggest changes, perhaps the most unheralded, comes down to compatibility. These days, you generally can’t fry a computer by plugging in a joystick it doesn’t support. Simply put, standardization slowly fixed this. One of the best examples of a bedrock standard? PCI, the Peripheral Component Interconnect, which came about in the early 1990s and appeared in some of its earliest consumer machines three decades ago this year. Its lessons gradually shaped other standards, like USB, and ultimately made computers less frustrating. So how did we get it? Through a moment of canny deception. Today’s Tedium considers the latent power of PCI, one of the most prevalent standards the computing world has ever given us, and how it gave a tech giant a long-lasting foothold in our computing lives. — Ernie @ Tedium

Sure, Intel had the Pentium chip, but it had a lot of say in the computer’s components, too.

The computing industry’s biggest gift to itself: Embracing standards

When you used an Apple II, a Commodore 64, or an MS-DOS machine in the 1980s, you were essentially tied into an ecosystem that didn’t talk to anything else you used; in other words, you were locked in. Regional factors or not, lock-in sucks, and it was hard to avoid in the early days of the PC.

The disks often weren’t compatible. The peripherals didn’t work across platforms. That meant, if you wanted to sell hardware in the 1980s, you were stuck building multiple versions of the same device.

To offer one example: the KoalaPad was a common drawing tablet sold in the early 1980s for numerous platforms, including the Atari 800, the Apple II, the TRS-80, the Commodore 64, and the IBM PC. It was essentially the same device on every platform, and yet, because those platforms were incompatible with one another, Koala Technologies had to make five different versions of it, with five different manufacturing processes, five different connectors, five different software packages, and a lot of overhead. Put simply, it was wasteful, and it made being a hardware manufacturer more costly while adding to consumer confusion.

This slowly began to change with the IBM PC clone market, which underlined the importance of standardization to a wide audience. It was a happy accident—IBM’s decision to use a bunch of off-the-shelf components accidentally turned into a de facto standard—but it gradually became harder for computing platforms to become islands unto themselves. Soon enough, IBM tried and failed to sell the computing world on a bunch of proprietary standards for PS/2. The cat was already out of the bag. It was too late.

Which raised an obvious question: What would an expansion card standard look like if it was built from the ground up to be a standard? PCI wasn’t the first of its kind—you could argue, for example, that if things played out differently, we’d all be using NuBus or MCA. But it was a standard seemingly for the long haul, far beyond other competing standards of its era.

Who’s responsible for spearheading this standard? Intel, of course. While PCI was a cross-platform technology, it proved to be an important strategy for the chip-maker to consolidate its power over the PC market at a time when IBM had taken its foot off the gas and was no longer driving industry innovation.

The vision of PCI was simple: an interconnect standard that wasn’t limited to one line of processors or one bus. PCI is PCI; it wasn’t designed to be tethered to any particular kind of CPU.

But don’t mistake standardization for cooperation. It was still a chess piece—but one part of a different game than the one PC manufacturers were playing.

420TX

The name of the first PCI chipset, first sold in 1992 and supporting the 486 architecture. It was uncommon for its time: PCI didn’t go mainstream among PC makers until at least 1994, by which point the Pentium had firmly taken its place in the market. As the OS/2 Museum notes, the chipset was so early that the board sporting the technology didn’t yet support PCI’s defining feature, plug and play.

An Intel Pentium-era motherboard. Note the large PCI chipset in the middle of the board. (htomari/Flickr)

Intel had to crush a standards body on the way to giving us an essential technology

In the years before the Pentium chip came out, there seemed to be some skepticism about whether Intel could maintain its status at the forefront of the desktop computing field.

On the low end, players like AMD and Cyrix were starting to throw their weight around. At the high end, workstation-level computing from the likes of Sun Microsystems, Silicon Graphics, and Digital Equipment Corporation suggested there wasn’t room for Intel in the long run. And laterally, the company suddenly found itself competing with the triple threat of IBM, Motorola, and Apple, whose PowerPC chip was about to hit the market.

A Bloomberg piece from the period painted Intel into a corner between these various extremes:

If its rivals keep gaining, Intel could eventually lose ground all around.

This is no idle threat. Cyrix Corp. and Chips & Technologies Inc. have re-created—and improved—Intel's 386 without, they say, violating copyrights or patents. AMD has at least temporarily won the right in court to make 386 clones under a licensing deal that Intel canceled in 1985. In the past 12 months, AMD has won 40% of a market that since 1985 has given Intel $2 billion in profits and a $2.3 billion cash hoard. The 486 may suffer next. Intel has been cutting its prices faster than for any new chip in its history. And in mid-May, it chopped 50% more from one model after Cyrix announced a chip with some similar features. Although the average price of a 486 is still four times that of a 386, analysts say Intel's profits may grow less than 5% this year, to about $850 million.

Intel's chips face another challenge, too. Ebbing demand for personal computers has slowed innovation in advanced PCs. This has left a gap at the top—and most profitable—end of the desktop market that Sun, Hewlett-Packard Co., and other makers of powerful workstations are working to fill. Thanks to microprocessors based on a technology known as RISC, or reduced instruction-set computing, workstations have dazzling graphics and more oomph—handy for doing complex tasks and moving data faster over networks. And some are as cheap as high-end PCs. So the workstation makers are now making inroads among such PC buyers as stock traders, banks, and airlines.

This was a deep underestimation of Intel’s market position, it turned out. The company was actually well-positioned to shape the direction the industry went in through standardization. It had a direct say over what appeared on the motherboards of millions of computers, and that gave it impressive power to wield. If Intel didn’t want to support a given standard, there was a good chance that standard would be dead in the water.

Just ask the Video Electronics Standards Association, or VESA. The technical standards organization is perhaps best known today for its mounting system for computer monitors and its DisplayPort technology, but in the early 1990s, it was working on a video-focused successor to the Industry Standard Architecture (ISA), widely used in IBM PC clones. The standard, known as VESA Local Bus (VL-Bus), added support for the standards body’s then-emerging Super VGA initiative. It wasn’t a massive leap, more like a stopgap improvement on the way to better graphics.

And it looked like Intel was going to go for it. But there was one problem: Intel wasn’t actually feeling it, and it didn’t make that point clear to the companies backing the VESA standards body until it was too late for them to react.

Intel revealed its hand in an interesting way, according to San Francisco Examiner tech reporter Gina Smith:

Until now, virtually everyone expected VESA's so-called VL-Bus technology to be the standard for building local bus products. But just two weeks before VESA was planning to announce what it came up with, Intel floored the VESA local bus committee by saying it won't support the technology after all. In a letter sent to VESA local bus committee officials, Intel stated that supporting VESA's local bus technology "was no longer in Intel's best interest." And sources say it went on to suggest that VESA and Intel should work together to minimize the negative press impact that might arise from the decision.

Good luck, Intel. Because now that Intel plans to announce a competing group that includes hardware heavyweights like IBM, Compaq, NCR and DEC, customers and investors (and yes, the press) are going to wonder what in the world is going on.

Not surprisingly, the people who work for VESA are hurt, confused and angry. "It's a political nightmare. We're extremely surprised they're doing this," said Ron McCabe, chairman for the committee and a product manager at VESA member Tseng Labs. "We’ll still make money and Intel will still make money, but instead of one standard, there will now be two. And it's the customer who's going to get hurt in the end."

(For those familiar with video game history, you may recognize this general tactic from Nintendo’s infamous CD-ROM betrayal of Sony.)

But Intel saw an opportunity to put its imprint on the computing industry. That opportunity came in the form of PCI, a technology that the firm’s Intel Architecture Labs started developing around 1990, two years before the fateful screw-over of VESA. Essentially, Intel had been playing both sides on the standards front.

Why make such a hard shift, screwing over a trusted industry standards body out of nowhere? Well, beyond wanting to put its mark on the standard, Intel also saw an opportunity to build something more future-proof. As John R. Quinn wrote in PC Magazine in 1992:

Intel’s PCI bus specification requires more work on the part of peripheral chip-makers, but offers several theoretical advantages over the VL-Bus. In the first place, the specification allows up to ten peripherals to work on the PCI bus (including the PCI controller and an optional expansion-bus controller for ISA, EISA, or MCA). It, too, is limited to 33 MHz, but it allows the PCI controller to use a 32-bit or a 64-bit data connection to the CPU.

In addition, the PCI specification allows the CPU to run concurrently with bus-mastering peripherals—a necessary capability for future multimedia tasks. And the Intel approach allows a full burst mode for reads and writes (Intel's 486 only allows bursts on reads.)

Essentially, the PCI architecture is a CPU-to-local bus bridge with FIFO (first in, first out) buffers. Intel calls it an “intermediate” bus because it is designed to uncouple the CPU from the expansion bus while maintaining a 33MHz 32-bit path to peripheral devices. By taking this approach, the PCI controller makes it possible to queue writes and reads between the CPU and PCI peripherals. In theory, this would enable manufacturers to use a single motherboard design for several generations of CPUs. It also means more sophisticated controller logic is necessary for the PCI interface and peripheral chips.

To put that all another way: VESA came up with a slightly faster bus standard for the next generation of graphics cards, one just fast enough to meet the needs of 486 users. Intel came up with an interface designed to reshape the next decade of computing, one that it would even let its competitors use, and one that would let people upgrade their processor across generations without needing to upgrade their motherboard. Intel brought a gun to a knife fight, and it made the whole debate about VL-Bus seem insignificant in short order.
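Part of what made that design durable is that every PCI device describes itself to the host through a standardized configuration space, the mechanism behind the plug-and-play support mentioned above, and operating systems still discover PCI and PCIe hardware that way today. As a minimal illustration (assuming a Linux machine, where the kernel exposes the enumerated configuration data through sysfs), here is a short Python sketch that lists each device’s vendor, device, and class IDs:

```python
#!/usr/bin/env python3
"""List the PCI/PCIe devices the kernel has enumerated.

A minimal sketch, assuming a Linux system: at boot, the kernel walks each
device's standardized configuration space (the mechanism behind PCI's
plug and play) and exposes the results under /sys/bus/pci/devices/.
"""
from pathlib import Path

SYSFS_PCI = Path("/sys/bus/pci/devices")


def read_attr(dev: Path, attr: str) -> str:
    """Read a sysfs attribute such as 'vendor'; each holds a hex string like 0x8086."""
    return (dev / attr).read_text().strip()


for dev in sorted(SYSFS_PCI.iterdir()):
    # dev.name is the bus address, e.g. 0000:00:1f.3 (domain:bus:device.function).
    vendor = read_attr(dev, "vendor")    # 0x8086 is Intel's vendor ID
    device = read_attr(dev, "device")
    pci_class = read_attr(dev, "class")  # device type, e.g. 0x030000 for a VGA controller
    print(f"{dev.name}  vendor={vendor}  device={device}  class={pci_class}")
```

Those same vendor and device ID registers are what let an operating system match a freshly inserted card to a driver without jumpers or switches, which was exactly the frustration PCI set out to remove.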

The result was that, no matter how miffed the graphics folks were, Intel had consolidated power for itself by actually innovating and creating an open standard that would eventually win the next generation of computers. (It developed the standard, then gave away the patents. How nice of them.) Sure, Intel let other companies use the PCI standard, even companies like Apple that weren’t directly doing business with Intel on the CPU side of things at the time. But Intel, by pushing forth PCI, suddenly made itself relevant to the entire next generation of the computing industry in a way that ensured it would have a foothold in hardware, just as Microsoft dominated software. (Intel Inside was not limited to the processors, as it turned out.)

To be clear, Intel’s standards record wasn’t pristine. For example, its efforts to push forth a desktop video codec left many small companies miffed after the chip-maker randomly switched gears, per a 1993 New York Times article (which we’re linking, while noting our policy on linking the NYT). But its work with PCI definitely stuck.

Case in point: 32 years later, and three decades after PCI became a major consumer standard, we’re still using PCI derivatives in modern computing devices.

An example of a modern computer with a GPU. GPUs, which are generally more power-hungry than every other component in a desktop computer, tend to use 16-lane PCIe cards. (Rafael Pol/Unsplash)

Five offshoots of the original PCI standard that you may be familiar with

  1. Accelerated Graphics Port. Effectively a PCI-based take on what VL-Bus had tried to do, this port was essentially a way to give graphics cards a faster connection at a time when 3D graphics were starting to hit the market in a big way. Its first appearance came not long after the original PCI standard.

  2. PCI-X. Despite the name, Intel was less involved in this standard, which was intended for high-end workstations and server environments. Instead, the standard was developed by IBM, Compaq, and Hewlett-Packard, doubling the bandwidth of the existing PCI standard—and released in the wild not long before HP and Compaq merged in 2002. The slot standard was effectively a dead-end: It did not see wide use with PCs, likely because Intel chose not to give the technology its blessing, but was briefly utilized by the Power Macintosh G5 line of computers.

  3. PCIe. This is the upgrade to PCI that Intel did choose to bless, and it’s the one used by desktop computers today, in part because it was developed to allow for a huge increase in flexibility compared to PCI in exchange for somewhat more complexity. Key to PCIe’s approach is the use of “lanes” of data transfer, allowing high-speed cards like graphics adapters more bandwidth (up to 16 lanes) and slower technologies like network adapters or audio adapters less. This has given PCIe unparalleled backwards compatibility—it’s technically possible to run a modern card in a first-generation PCIe slot, just at a lower speed—while allowing the standard to continue improving. To give you an idea of how far it’s come: a one-lane fifth-generation PCIe slot is roughly as fast as a 16-lane first-generation slot (there’s a quick back-of-the-envelope calculation after this list).

  4. Thunderbolt. As we’ve written in the past, Thunderbolt can best be thought of as a way to access PCIe lanes through a cable. First used by Apple in 2011, it has become common on laptops of all stripes in recent years. Unlike PCI and PCIe, which are open to all manufacturers, Thunderbolt is closely associated with Intel, which meant its competitor AMD traditionally did not offer Thunderbolt ports until USB4, a reworked form of the Thunderbolt 3 standard, emerged.

  5. NVMe. This popular Intel-backed standard, dating to 2011, has completely rewritten the way we think about storage in computers. Where storage was once built around mechanical parts, NVMe has allowed for ever-faster solid-state speeds that take advantage of innovations in the PCIe spec. Modern NVMe drives are roughly ten times the speed of comparable SATA SSDs—and, thanks to the corresponding M.2 expansion card standard, they’re far smaller and significantly easier to install.
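The lane math in item three, and the NVMe-versus-SATA gap in item five, are easy to sanity-check from the widely published per-generation signaling rates and line encodings. Here’s a rough back-of-the-envelope sketch in Python, using approximate peak figures rather than measured throughput:

```python
"""Back-of-the-envelope PCIe bandwidth per lane, by generation.

Approximate peak numbers only (signaling rate times line-encoding efficiency);
real-world throughput is lower once protocol overhead is accounted for.
"""

# generation -> (transfer rate in GT/s, encoding efficiency)
PCIE_GENS = {
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}


def lane_gb_per_s(gen: int) -> float:
    """Peak single-direction throughput of one lane, in GB/s."""
    gt_per_s, efficiency = PCIE_GENS[gen]
    return gt_per_s * efficiency / 8  # 8 bits per byte


print(f"Gen 1 x16: {16 * lane_gb_per_s(1):.1f} GB/s")  # ~4.0 GB/s
print(f"Gen 5 x1:   {1 * lane_gb_per_s(5):.1f} GB/s")  # ~3.9 GB/s, roughly the same

# NVMe vs. SATA, very roughly: a Gen 4 x4 NVMe link tops out near 8 GB/s,
# while SATA III (6 Gbit/s with 8b/10b encoding) tops out around 0.6 GB/s.
print(f"Gen 4 x4:   {4 * lane_gb_per_s(4):.1f} GB/s vs. SATA III ~0.6 GB/s")
```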

“I believe in enabling all end-users, developers, partners and enterprises to be successful, because it drives renewed R&D excitement. And I believe a powerful, open ecosystem will always triumph. Only together can we ensure technology, which is inherently neither good nor evil, is ultimately applied for good.”

— Pat Gelsinger, the current CEO of Intel, in an open letter discussing the importance of building open technological ecosystems that help the broader industry, rather than just Intel itself. Some other standards that Intel has played a key role in developing—including USB, Wi-Fi, and Bluetooth—have helped far more than just Intel.

Looking at PCI and PCIe less as ways that we connect the peripherals we use with our computers, and more as a way for Intel to maintain its dominance over the PC industry, highlights something fascinating about standardization.

It turns out that perhaps Intel’s greatest investment in computing in the 1990s was not the Pentium chip, which made the company famous, but Intel Architecture Labs, which quietly made the entire computing industry better by working on the things that frustrated consumers and manufacturers alike.

Essentially, as IBM had begun to take its eye off the massive clone market it unwittingly built during this period, Intel used standardization to fill the power void. It worked pretty well, honestly, and made the company an essential part of more than just the CPUs we use. In fact, devices you use daily—that Intel played zero part in creating—have benefited greatly from the company’s standards work.

Craig Kinnie, the director of the Intel Architecture Labs in the 1990s, said it best in 1995, upon coming to an agreement with Microsoft on a 3D graphics architecture for the PC platform.

“What’s important to us is we move in the same direction,” he said. “We are working on convergent paths now.”

He was talking about collaborating with Microsoft, but that has really been Intel’s M.O. for decades: what’s good for the technology field is good for Intel. Innovations developed or invented by Intel—like Thunderbolt, Ultrabooks, and NUCs—have done much to shape the way we buy and use computers.

For all the talk of Moore’s Law as a driving factor behind the company’s success, the true story of Intel’s success might be its sheer cat-herding capabilities. The company that builds the standards builds the industry, and even as Intel faces increasing competition from alliterative processing players like ARM, Apple, and AMD, as long as it doesn’t lose sight of the roles standards played in its success, it might just hold on a few years longer.

This standards-driving winning streak, now more than three decades old, might have all started the day Intel decided to screw over a standards body.

--

Find this one an interesting read? Share it with a pal! And back at it again next week!

