Cover Story / June 1994

80x86 Wars

It's hard to accept the argument
that the Intel 80x86 architecture or its underlying CISC technology
is staggering on the brink of extinction. On the contrary,
the next few years will see the broadest proliferation
of the architecture since the 8086's debut in 1978.

Tom R. Halfhill

Apple is promoting its new Power Macs with charts that show CISC microprocessors sputtering out of gas while RISC chips like the PowerPC race up a steep curve toward superior performance. Actually, the leading CISC architecture — Intel's venerable 80x86 — is on the verge of the greatest burst of evolution in its 16-year history. Over the next year, radical new designs from Intel and others will push the 80x86 to new heights of performance while challenging some popular assumptions about the differences between RISC and CISC.

At the same time, though, Intel will find its position as the leader in 80x86 technology increasingly under siege. Not that anyone will mount a serious threat to Intel's overwhelming market dominance; market-research firm Dataquest (San Jose, CA) estimates that Intel commands 74 percent of the worldwide microprocessor market, compared to a meager 8 percent for its nearest competitor, Motorola. But until now, Intel alone has defined what is state of the art for an 80x86 microprocessor, and it has enjoyed a formidable technological lead of 12 to 24 months over its rivals.

That will begin to change this year, when competitors such as AMD (Sunnyvale, CA), Cyrix (Richardson, TX), and NexGen (Milpitas, CA) introduce new 80x86 processors that differ significantly from Intel's chips. These original, highly optimized designs promise to deliver better performance than Intel's Pentiums, while maintaining full compatibility with DOS and Windows software. In response, Intel is accelerating the development process and has separate teams of engineers working on the next two generations of its 80x86 simultaneously.

Then there are the wild cards. The 80x86 market is so lucrative — Intel is expected to sell more than 30 million 486 and Pentium chips this year — that it could attract even more competition from parties currently uncommitted. Texas Instruments (Dallas, TX), SGS-Thomson Microelectronics (Saint Genis, France), and IBM Microelectronics (Hopewell Junction, NY) make 486-compatible chips and have the expertise to produce fifth-generation processors but are coy about their future plans. While IBM is focused on making the PowerPC an 80x86 killer, it is also experimenting with the most exotic variant of the 80x86 yet conceived: a PowerPC with accelerated 80x86 emulation hard-wired in silicon.

Although RISC chips like the PowerPC offer an approximate 2-to-1 price/performance advantage over the 80x86, that margin is not nearly as wide by the time the chips are packaged into finished systems. And it may not be enough to alter the path of an inertial PC market that historically values software compatibility and safe choices over raw speed and other factors.

For users, these trends have many implications — some good, some bad. On the positive side, you will enjoy many more options. Right now, however, your only fifth-generation choice is an Intel Pentium. Soon you'll be able to choose from a wide variety of PCs with comparable chips from AMD, Cyrix, and NexGen — the K5, M1, and Nx586, respectively. The extra competition is sure to force prices downward, and system vendors will welcome alternative sources of supply. Users who buy a 486 will benefit, too, as those prices drop under pressure from the higher-end chips.

There will also be more variety as the chip companies scramble to differentiate their products. New 486 and 586 processors are appearing in a wider range of clock speeds than ever before, making it easier to match the power of a system to your application and the price to your pocketbook. Different levels of CPU integration will offer still more options. NexGen's Nx586, for example, is really a 586SX, because the FPU is an optional coprocessor. Users who don't need maximum floating-point performance can save about $150.

The flip side is that more options mean more decisions, and some of those decisions will be more difficult to make. Already, some users have trouble understanding why a 25-MHz 486 is faster than a 33-MHz 386. In the near future, the diverging internal designs of 80x86 processors will mean that clock speeds are no longer a convenient shorthand method of comparing performance between different chips, even within the same generation. For instance, the performance of an AMD 100-MHz K5 may differ markedly from that of a Cyrix 100-MHz M1 or an Intel 100-MHz Pentium. The increasingly independent designs may also spur doubts about software compatibility.

Chasing the Pentium

The engineers at Intel have toiled to produce a new 80x86 generation at a remarkably predictable clock rate: roughly every 44 months since the introduction of the 8086. Performance has scaled at a similar rate. Now, like a clock-doubled CPU, Intel is quickening the pace. Its next generation, code-named P6, is scheduled to debut in 1995, a little more than two years after the Pentium, and the P7 is already in the works. Also, the Pentium team is still pushing the fifth generation with faster clock speeds, smaller process technologies, and higher levels of integration. The P54C 90- and 100-MHz Pentiums began shipping only a year after the 60- and 66-MHz Pentiums, adding new support for clock division, power management, and dual processing (see the text box "A Billion-Dollar Ball Game").

Intel is in a hurry because it's hearing footsteps. On one front, Intel is fending off the first credible RISC challenge in the PC market. Even though most observers don't view the PowerPC as a serious immediate threat to Intel's dominance — Dataquest projects the Pentium-class 80x86s will outsell the PowerPC by a ratio of about 5 to 1 through 1996 — it nevertheless exposes a chink in Intel's armor. The PowerPC's current price/performance advantage may not be significant enough to woo the average PC user into switching platforms, but it makes Intel's technology look dated, and it could pose a danger if widened.

On another front, Intel faces renewed attacks from those who make 80x86-compatible processors that compete directly with Intel's chips. From a business standpoint, this competition has always been more serious because it vies for Intel's primary customers, the system vendors (see the text box "System Vendors Like Variety"). Now, however, the competition is heating up from a technological standpoint as well.

Before the Pentium, the 80x86-compatible processors that AMD, Cyrix, and others made did not stray far from Intel's basic designs. For the sake of software compatibility, of course, they all had to share the same architecture: the instruction set, registers, condition flags, interfaces, and other characteristics that distinguish one type of microprocessor from another. But for the most part, they also conformed closely to Intel's microarchitecture — the internal details that distinguish individual chips within the same microprocessor family.

For instance, AMD's 386 and 486 chips use Intel microcode thanks to a cross-licensing agreement. (Intel has been fighting a losing court battle to negate those licenses.) IBM's 486 chips also use Intel microcode under similar agreements, but IBM is hampered by a proviso that limits it to selling chips on motherboards and subsystems, not as separate parts. Cyrix lacks a license to use Intel microcode, so it created a more independent microarchitecture but still didn't push the envelope beyond Intel's technology.

The Pentium, however, marks a fork in the road. For their fifth-generation processors, AMD, Cyrix, and NexGen are attempting to surpass the Pentium's performance with new microarchitectures that are less derivative. To maintain software compatibility, they will conform to the basic 80x86 architecture, but their internal designs will be increasingly independent. The development cycles for high-end microprocessors are too long for reverse engineering; thus, if they continue to let Intel define what is state of the art for an 80x86 chip, they will never close Intel's 12- to 24-month lead.

A similar situation prevailed in the early 1980s when IBM set the pace for PC systems. Clone vendors generally followed IBM's lead until 1986, when Compaq brashly introduced a 386-based system before IBM. Soon afterward, IBM lost its position as the standard-bearer for the PC platform. AMD, Cyrix, and NexGen would like to pull the same switcheroo on Intel.

Borrowing from RISC

All three companies insist they can beat Intel at its own game. "When we started with the 386, we were five or six years behind," says Drew Dutton, strategic marketing manager of AMD's PC Products division. "With the 486, we were something like four years behind. The K5 will be about two years behind the Pentium, but closing the gap with the P54C and P6 is absolutely going to happen very shortly." More specifically, the K5 will deliver about 30 percent more performance than a Pentium at the same clock speed, says PC Products division vice president Robert McConnell.

Cyrix makes similar claims for its M1. "At introduction, M1 will be a technically superior product," says Steve Domenik, vice president of marketing. "The M1 will deliver more performance at the same clock speed. Based on simulations, it ranges from something like 30 percent to two times or even two-and-a-half times faster. Without [the software] running on silicon, it's hard to be too aggressive with those claims, but there's no question we'll have higher performance."

NexGen, the only Intel competitor that actually has silicon samples of its Pentium-class processor, claims the Nx586 will best an identically clocked Pentium, at least when running integer math. For example, NexGen says that at 60 MHz, the Nx586 measured performance gains of 8 percent with the Landmark 2.0 benchmark and 28 percent with the BYTE 2.4 benchmark. (BYTE was unable to independently verify NexGen's results in time for this story.) On the other hand, NexGen says the Nx586 trailed the Pentium by 14 percent on the PowerMeter 1.81 benchmark and by 7 percent on the Norton Speed Index 7.0.

Again, the reason why performance doesn't scale with clock speed across these competing chips is that they're using different microarchitectures. What the chips have in common, however, is that they're all adapting RISC technology to pump new life into a 16-year-old CISC architecture that 140 million PC users are loath to abandon.

The first hints of RISC technology began appearing in 80x86 processors in 1989, when Intel's 486 integrated an FPU, an 8-KB cache, more hard-wired instruction logic, and pipelining. The FPU was Intel's first response to the superior floating-point performance of RISC chips. The additional instruction logic reduced the 486's reliance on microcode. And the 8-KB cache helped to keep the 486's pipeline efficiently filled with instructions.

Pipelining is a key feature. It works like an automotive assembly line, substituting program instructions for cars. A pipeline has multiple stages, and instructions pass in lockstep from one stage to another. While some instructions are passing through the execution stages, following instructions can be moving through the fetch and decode stages. Once the pipeline starts flowing, the chip can process more instructions per clock cycle than a nonpipelined chip running at the same speed.

Pipelining and less microcode allow the 486 to process many of its instructions at an effective rate of one instruction per clock cycle, compared to a dozen or more clock cycles required by earlier 80x86 chips. (RISC chips achieve the same result partly by using simpler instructions that inherently require fewer clock cycles.)
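The cycle arithmetic behind that speedup can be sketched directly. This is a minimal model, not a description of the 486's actual pipeline (the stage count below is illustrative): a nonpipelined unit finishes every stage of one instruction before starting the next, while a pipeline pays its stage count only once to fill up, then retires roughly one instruction per cycle.

```python
def nonpipelined_cycles(n_instructions, n_stages):
    # Each instruction must pass through every stage before the next begins.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    # Fill the pipeline once (n_stages cycles); after that, one instruction
    # completes per cycle as they march through the stages in lockstep.
    return n_stages + (n_instructions - 1)
```

For 100 instructions and a hypothetical 5-stage pipeline, the nonpipelined unit needs 500 cycles while the pipelined one needs 104 — close to the one-instruction-per-cycle effective rate described above, once the pipeline is flowing.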

Cyrix adapted some of these techniques to its 386-generation chips, creating 386/486 hybrids like the 486SLC. TI, under an agreement with Cyrix, introduced some 386/486 variants based on the Cyrix core. IBM's 486SLC2 is a similar crossbreed, combining a 486 pipeline and instruction set with a larger cache, a doubled clock, and a 386SX-style bus. But while IBM, Cyrix, TI, and AMD were still trying to match Intel's 486, Intel raised the bar again in 1993 with the Pentium.

Because the 486 is a scalar processor (i.e., single-pipelined), its theoretical throughput limit is one instruction per clock cycle. To break that barrier, Intel endowed the Pentium with two pipelines that can handle two instructions simultaneously. This allows the Pentium to issue some instructions at an effective rate greater than one per clock cycle, even though each instruction still requires at least one clock cycle to process.

Strictly speaking, this superscalar design is not part of the classic definition of RISC; some RISC processors, like the MIPS R4000, don't have superscalar pipelines. But superscalar is generally lumped into the domain of RISC technology, because the complex nature of a CISC instruction set makes multiple pipelines difficult to implement.

For example, unlike RISC processors, which typically use 32-bit fixed-length instructions aligned on even-word boundaries, the 80x86 uses unaligned, variable-length instructions ranging from 8 to 120 bits. They are more troublesome to handle than are uniform RISC instructions, because the processor must decode their length before fetching the next instruction.

Self-modifying code and other stunts cause even more thorny problems when designing a superscalar 80x86. For instance, some programmers exploit the 80x86's irregular instruction format by writing code that executes in two different ways, depending on the fetch alignment. In other words, a byte that represents an operand field when fetched one way might be interpreted as an op code when fetched another way. Designing a superscalar 80x86 that won't choke on amusing little tricks like these is what makes CPU architects strangers to their families.
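A toy model makes both problems concrete. The encodings below are invented, not real 80x86 op codes, but they show why the processor must decode each instruction's length before it can locate the next one, and how the same bytes can decode into different instruction sequences depending on where fetching begins.

```python
# Hypothetical opcode map: the first byte of an instruction determines
# its total length (real 80x86 length decoding is far more involved).
LENGTH_BY_OPCODE = {0x01: 1, 0x02: 2, 0x03: 3}

def decode_stream(code, start=0):
    """Return the offset of each instruction decoded from `start` onward."""
    starts, pc = [], start
    while pc < len(code):
        starts.append(pc)
        # The length isn't known until this byte is decoded -- which is
        # exactly why variable-length instructions complicate fetching.
        pc += LENGTH_BY_OPCODE[code[pc]]
    return starts

stream = bytes([0x02, 0x03, 0x01, 0x01, 0x01, 0x01])
# Fetched from offset 0, the 0x03 byte is an operand of a 2-byte
# instruction; fetched from offset 1, it decodes as a 3-byte op code.
```

Running the decoder from offset 0 yields instructions at [0, 2, 3, 4, 5]; from offset 1, [1, 4, 5]. The byte at offset 1 plays two different roles, which is the alignment trick described above.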

The superscalar Pentium successfully solves these problems and adopts a few other RISC-like features: a pipelined FPU with dedicated math logic; separate 8-KB instruction and data caches (compared to a unified 8-KB cache on the original 486); and a branch target buffer for branch prediction. The Pentium is really a hybrid CISC/RISC design that mocks the rivalry between these two philosophies and opens the door to further cross-pollination.

And that's exactly how AMD, Cyrix, and NexGen seek to leapfrog the Pentium; they intend to pick up where the Pentium leaves off. Cyrix, for example, is attacking another hoary CISC limitation: the register file. CISC processors typically have fewer GPRs (general-purpose registers) than RISC processors; in the case of the 80x86, there are only eight GPRs, compared to 32 in the PowerPC 601. But the Cyrix M1 will have 32 GPRs, using a technique called dynamic register renaming, which makes it appear as if there are only eight registers in use at a time, thus preserving compatibility with existing software that expects to see only eight registers.
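The bookkeeping behind register renaming can be sketched in a few lines. The class below is an invented simplification, not Cyrix's actual implementation: each write to one of the eight architectural 80x86 registers is assigned a fresh physical register from a pool of 32, so independent writes to the same register name no longer collide, while software still sees only eight names.

```python
ARCH_REGS = ["EAX", "EBX", "ECX", "EDX", "ESI", "EDI", "EBP", "ESP"]

class RenameTable:
    def __init__(self, n_physical=32):
        # Initially, each architectural register maps to its own slot;
        # the remaining physical registers form a free pool.
        self.map = {r: i for i, r in enumerate(ARCH_REGS)}
        self.free = list(range(len(ARCH_REGS), n_physical))

    def rename_write(self, arch_reg):
        """Allocate a fresh physical register for a new value of arch_reg."""
        phys = self.free.pop(0)
        self.map[arch_reg] = phys
        return phys

    def lookup(self, arch_reg):
        """Reads go to whichever physical register holds the current value."""
        return self.map[arch_reg]
```

Two successive writes to EAX land in two different physical registers, so instructions depending on the older value can still find it while newer instructions proceed with the newer one.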

The M1's superscalar pipelines will also be two stages longer than the Pentium's and will handle more instructions in parallel without stalling. The M1 also takes advantage of its larger register file to implement speculative execution, which allows the chip to continue to process instructions, with up to four levels of branching, without waiting for a branch to be resolved. If the branch prediction makes the wrong bet, transparent repair can back out of the speculative execution in a single cycle. In this respect, the M1 takes even greater pains than some RISC chips to keep its pipelines primed.
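One way to picture speculative execution with transparent repair is checkpoint-and-restore. The sketch below is illustrative only (the data structure is invented, and real hardware restores register-mapping tables rather than copying values): each predicted branch snapshots the machine state, up to the M1's four levels, and a misprediction restores the snapshot in a single step.

```python
class SpeculativeMachine:
    def __init__(self, max_depth=4):
        self.state = {"regs": {}, "pc": 0}
        self.checkpoints = []        # one snapshot per unresolved branch
        self.max_depth = max_depth

    def predict_branch(self, predicted_pc):
        if len(self.checkpoints) >= self.max_depth:
            raise RuntimeError("pipeline stalls: too many unresolved branches")
        # Snapshot state, then keep executing down the predicted path.
        self.checkpoints.append({"regs": dict(self.state["regs"]),
                                 "pc": self.state["pc"]})
        self.state["pc"] = predicted_pc

    def resolve(self, was_correct):
        snapshot = self.checkpoints.pop()
        if not was_correct:
            # "Transparent repair": discard speculative work in one step.
            self.state = snapshot
```

If the prediction proves wrong, every register written after the branch reverts to its checkpointed value; if it proves right, the checkpoint is simply discarded and the speculative work becomes permanent.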

Fewer details are known about AMD's K5, but AMD promises it will borrow similar RISC-like techniques to achieve higher performance. "If you look back to around 1988, when we got into the RISC marketplace with the Am29000, RISC had a 5-to-1 advantage," says AMD's Dutton. "Now it's at best 2 to 1. That's going away. Both AMD and Intel and other 80x86 vendors are going to prove that." Significantly, the chief architect of AMD's K5 is Mike Johnson, who designed the Am29000 RISC chip and wrote Superscalar Microprocessor Design (Prentice-Hall, 1991), a seminal engineering book on superscalar RISC.

In a rare moment of Intel/AMD detente, Lew Paceley, marketing director for Intel's P6 division (Hillsboro, OR), agrees: "RISC is a technology; it's not an architecture. There's a difference. If it's a technology, everybody can exploit it. They may exploit it in slightly different ways, but the basic technology is available to everybody."

Reinventing the 80x86

NexGen's Nx586 appears to go even further toward the merger of RISC and CISC. It also hints at future possibilities for Intel's P6 and P7 chips, and may even foreshadow how the rumored PowerPC 615 will work its magic of emulating 80x86 code (see the text box "An 80x86-Compatible PowerPC?").

The Nx586 incorporates most of the RISC-like technology that suddenly is becoming standard in advanced 80x86 microarchitecture: long superscalar pipelines, wider data paths, on-board instruction/data caches (twice as large as the Pentium's), a larger register file with dynamic renaming, branch prediction, and speculative execution — with the enhanced ability to process streams of instructions nested three branches deep (see the text box "NexGen Nx586 Straddles the RISC/CISC Divide" on page 76).

What is most intriguing about the Nx586, however, is its unique decoder unit. In three stages, it fetches an 80x86 instruction, aligns it on an even boundary, and decodes it into one or more simpler instructions that belong to what NexGen calls the RISC86 instruction set. In other words, the decoder converts unaligned, variable-length CISC instructions into fully aligned, fixed-length RISC instructions that are pumped into the virtual equivalent of a RISC core. In effect, the Nx586 is a RISC chip with an 80x86 front end.

NexGen's RISC86 instruction set is optimized for 80x86 decoding. The decoder maps many of the simpler 80x86 instructions directly to their RISC86 counterparts, so a single CISC instruction is converted into a single RISC instruction. More complex 80x86 instructions must be decoded into multiple RISC instructions; NexGen says the worst case is a 3-to-1 ratio in the basic set of instructions. The decoder issues RISC86 instructions at a rate of one per clock cycle per execution unit.

In concept, the Nx586 decoder works like the code generator of a compiler, except at a lower level. Just as a C compiler converts C source code into 80x86 machine code, the Nx586 decoder converts 80x86 machine code into RISC86 code.
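As an illustration of that analogy, here is a toy translator in the same spirit. The CISC-style instruction tuples and RISC-like micro-op names are invented for this sketch, not NexGen's actual RISC86 encoding, but the expansion pattern mirrors the ratios NexGen describes: simple operations map 1-to-1, while a read-modify-write against memory expands to three micro-ops.

```python
def decode_to_risc(cisc_op):
    """Expand one CISC-style (op, dst, src) tuple into RISC-style micro-ops.
    Memory operands are written as "[address]"; everything else is a register.
    """
    op, dst, src = cisc_op
    mem_src = src.startswith("[")
    mem_dst = dst.startswith("[")
    if op == "mov" and mem_src and not mem_dst:
        return [("rload", dst, src.strip("[]"))]           # simple: 1-to-1
    if op == "mov" and mem_dst:
        return [("rstore", dst.strip("[]"), src)]          # simple: 1-to-1
    micro = []
    if mem_src:
        micro.append(("rload", "tmp0", src.strip("[]")))   # fetch operand
        src = "tmp0"
    if mem_dst:
        # Read-modify-write: load old value, operate, store back (3-to-1).
        micro.append(("rload", "tmp1", dst.strip("[]")))
        micro.append(("r" + op, "tmp1", src))
        micro.append(("rstore", dst.strip("[]"), "tmp1"))
    else:
        micro.append(("r" + op, dst, src))                 # register op: 1-to-1
    return micro
```

A register-to-register add translates to a single micro-op, while an add into memory becomes a load, an operation, and a store — the worst-case 3-to-1 expansion noted above.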

Intel downplays NexGen's RISC86 and says the Pentium does something similar when it decodes complex 80x86 instructions into microcode primitives; in a sense, this is true. Complex instructions are broken down into 88-bit, fixed-length microinstructions that could be regarded as "RISC instructions," and many simple instructions don't require microcode at all because they're hard-wired in silicon. Intel also makes a valid point that code generators in modern compilers tend to avoid complex 80x86 instructions, because they can generate faster-running code by sticking to simpler instructions.

RISC86 instructions share some characteristics with Pentium microinstructions: They're quite long (the Nx586's eight-chip predecessor used 104-bit RISC86 instructions) and carry vital processor-state information that normally wouldn't be visible to a true external RISC instruction. But there's still an important difference: Unlike microcode, NexGen's RISC86 can theoretically support its own assemblers, compilers, and application software. The Nx586 bus can bypass the 80x86 decoder and feed RISC86 instructions directly into the execution stages of the pipeline at full bus speeds. In fact, NexGen already has a RISC86 assembler, although it's for internal use only, since there's obviously no software market for RISC86 binaries.

But what if Intel introduced something like RISC86 in a future processor? NexGen, a small start-up company with zero market share, cannot hope that RISC86 will ever spawn its own software base. But Intel has considerably more influence. What if Intel defined a new 80x86 RISC instruction set and register file that coexisted with the CISC instructions and supported its own native assemblers and compilers?

Linley Gwennap, editor in chief of the Microprocessor Report (Sebastopol, CA), thinks Intel might take this approach with the P7. "In this way, Intel will offer the performance of RISC while maintaining compatibility," he says. "And then, Intel might say to software developers, 'Oh, by the way, if you recompile using the new Intel instruction set, those applications would run really fast.' And other applications not converted [that is, still compiled for the CISC instruction set] would still run pretty fast."

Intel's Paceley says it isn't necessary to define a new RISC instruction set because today's compilers already strive to target a subset of 80x86 instructions that require fewer clock cycles. Interestingly, AMD's Johnson predicted this would happen three years ago in an appendix to his book on superscalar RISC design: "It should be possible to define a core set of [CISC] instructions that have at least some RISC-like qualities," Johnson wrote. "Once the data paths have been designed to execute these instructions in a single cycle, we are close to being able to apply the [superscalar RISC] techniques we have studied in this book."

Paceley hints that Intel will take another step in this direction by streamlining the instruction decoder in the P6. "Our architects know a lot about how to decode 80x86 instructions," he says. "They can figure out how to rip what would be this instruction decoder in a RISC machine and integrate it with the 80x86 decoder. Yeah, there's going to be a little microcode ROM off to the side to handle some strange cases, but by and large, anything that they're doing inside, we can do inside."

Whether Intel codifies a subset of RISC-like instructions or proffers a whole new instruction set, this trend strikes at the very heart of the remaining differences between RISC and CISC. To the extent that designers can minimize the decoding overhead and keep it transparent to the software, either approach could open broad new avenues for future 80x86 evolution.

Dangers of Divergence

As IBM, NexGen, Cyrix, and AMD keep pushing their microarchitectures along paths that increasingly diverge from Intel's and from each other's, it becomes more and more difficult to ensure that software will run exactly the same on everybody's chips. This burden is entirely borne by the chip vendors, of course; users and developers insist on absolute compatibility. Everyone admits the potential problem and agrees on the answer: rigorous compatibility testing that begins on simulators long before a new CPU is etched into silicon.

All the CPU vendors verify their new designs with extensive test suites that include real-world applications as well as special sequences of code that probe every known boundary condition. NexGen, as a newcomer, has more to prove, but the other companies have successfully navigated these waters for years. Everyone remembers what happened to the early system vendors whose PC clones were only 90 percent compatible.

Intel has no particular advantage here. Its processors are evolving in unforeseen directions, too, and must conform to the same software base. "Compatibility is defined by the software, not by the microarchitecture that existed yesterday," notes Peggy Herubin, director of applications engineering at Cyrix.

Another software issue, more related to performance than compatibility, is optimized compilation. This is one spin-off of RISC that 80x86 vendors would rather not deal with. RISC architects realized years ago they could reap even more performance if compilers were in tune with the chip's microarchitecture, especially in superscalar designs. Superscalar pipelines typically impose restrictions on the types of instructions they can process simultaneously. Careful instruction-ordering can maximize a pipeline's throughput.

For instance, the Pentium's twin pipes can work concurrently only on simple instructions. Typically, these are nonmicrocoded instructions requiring only one clock cycle. A smart compiler that strings a number of these instructions together while minimizing dependencies will generate up to 30 percent faster-running code. Intel recognized this when designing the Pentium and spent two years creating optimized code generators for license to vendors.
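A rough model shows why instruction ordering matters. The pairing rule below is a simplification of the Pentium's actual dual-pipe constraints, and the instruction tuples are invented: two adjacent instructions issue together only if both are simple and the second doesn't read or write a register the first writes.

```python
def count_cycles(instrs, simple_ops=frozenset({"mov", "add", "sub", "inc"})):
    """Count cycles for a list of (op, dst, src) tuples on a two-pipe model."""
    cycles, i = 0, 0
    while i < len(instrs):
        cycles += 1
        if i + 1 < len(instrs):
            (op1, dst1, src1), (op2, dst2, src2) = instrs[i], instrs[i + 1]
            # Pair only simple instructions with no dependency between them.
            pairable = (op1 in simple_ops and op2 in simple_ops
                        and dst1 not in (dst2, src2))
            if pairable:
                i += 2            # both pipes busy this cycle
                continue
        i += 1                    # only one pipe used this cycle
    return cycles
```

Four independent simple instructions complete in two cycles, but the same four instructions chained into a dependency sequence take four — which is why a compiler that interleaves independent operations can extract substantially more throughput from the same chip.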

Software developers will not recompile their programs for every new 80x86 variant. Intel, as the market leader, theoretically enjoys an advantage, because the smaller companies have virtually no influence over developers.

The competitors don't see this as a problem. First, they point out that even a year after introduction, almost nobody optimizes their software for the Pentium. Indeed, even though it has been nine years since Intel introduced the 32-bit 386, almost all PC applications are still compiled to 16-bit code. Most DOS programs, such as Lotus 1-2-3 release 2.x and WordPerfect 5.1 Plus, are compiled for the first-generation 8086, because millions of XT-class machines are still in use. Some DOS programs, such as Lotus 1-2-3 release 3.x and WordPerfect 6.0, are compiled for the 286. Some Windows programs are compiled for the 386. This probably won't change until Microsoft releases a 32-bit version of Windows, Intel sells more Pentiums, and developers overhaul their software.

In any case, developers favor a blended approach to optimization that is not closely tied to the Pentium's specific microarchitecture. In fact, it yields some gains on the 486 and 386 as well. "That kind of recompile is something that any superscalar architecture can take advantage of, so it doesn't hurt us at all," says AMD's Dutton.

More specific optimizations are possible but unlikely. It wouldn't even serve Intel's interests, because the P6 and P7 will have different microarchitectures than the Pentium. "Most of the compiler writers are doing generic optimizations," says Cyrix's Herubin. "Obviously, they don't want their compilers to only perform well on the Pentium. They want to perform well on the whole line of 80x86."

Benefits of Divergence

Although diverging microarchitectures pose some dangers, the reasons for pursuing them are simply too strong to ignore. First, the engineers need all the maneuvering room they can get for new designs that will boost performance. But also, the chip vendors need some way to differentiate their products. Paradoxically, their main selling point is that their chips are so compatible that users shouldn't notice any difference.

In the past, Intel's competitors have sought to distinguish themselves in three main ways: lower prices; continued production of chips that Intel is phasing out in favor of the next generation; and exploiting market niches that Intel ignores. For instance, when Intel shifted the bulk of its production from the 386 to the 486, there was still healthy demand for 386 chips. AMD and Cyrix did not have their 486 processors ready anyway, so they cleaned up after Intel by shipping huge volumes of the 386. Intel was willing to forsake the 386 at that point because prices had dropped very low and the 486 was more profitable. It will be interesting to see if AMD and Cyrix continue this strategy now that they're trying to leapfrog Intel's technology. Like Intel, they may find it more worthwhile to devote their expensive foundry capacity to leading-edge products.

Filling market niches is another traditional strategy of Intel's rivals. For example, Intel's fastest 386 is clocked at 33 MHz; AMD makes a 40-MHz chip. The fastest 486 was Intel's clock-doubled 33- and 66-MHz DX2 until IBM introduced its clock-tripled Blue Lightning 33/100. Intel offered no direct upgrade path from the 386 to the 486; Cyrix makes a 386/486 hybrid that's pin-compatible with 386 sockets.

But Intel has caught on to this strategy. Over the past two years, it has introduced a much wider variety of 80x86 chips and is paying closer attention to holes in its product line. For instance, the recently shipped 100-MHz IntelDX4 neatly fills a performance gap between the 66-MHz 486DX2 and the 60-MHz Pentium. And Intel plans to offer a direct upgrade path from the 486 to the Pentium via the Pentium OverDrive Processor (code-named P24T), for which sockets already exist on many PC motherboards.

NexGen is boldly attempting to differentiate its Nx586 by offering the FPU as an optional coprocessor. This is an interesting experiment in reality engineering versus marketing hype, because NexGen must overcome a broad public perception that FPUs significantly improve the performance of spreadsheet programs and screen graphics. Everyone from Intel to the software developers agrees that floating-point performance is relatively unimportant for almost all PC applications, but many users simply won't buy a machine without an FPU.

NexGen's separate FPU notwithstanding, higher integration is a likely path toward future product differentiation. For instance, the Nx586 is the first 80x86 to integrate a level 2 cache controller, allowing the use of less-expensive SRAM (static RAM), while preserving high-speed access. The new 90- and 100-MHz Pentiums integrate an APIC (Advanced Programmable Interrupt Controller) for multiprocessing. DEC's Alpha 21066, although not an 80x86-compatible CPU, integrates a PCI (Peripheral Component Interconnect) controller and some video-control functions — a logical possibility for an 80x86. There's even serious speculation about integrating a DSP (digital signal processor) on an 80x86, perhaps with some hardware support for Microsoft's new DSP manager.

The inescapable conclusion is that the 80x86, ancient though it may be, is far from a dead architecture. It still attracts more engineering resources than any other CPU architecture that ever existed, and it's evolving in more lively directions than ever. In all likelihood, the 80x86 will continue to flourish until another architecture offers a wide enough price/performance margin to convince most users that it's time for a change.

Sidebars:

Illustration: Pentium vs. PowerPC. Although market-research firm Dataquest projects sales of about 5.6 million units for the PowerPC by 1996, it expects the Pentium to maintain an approximate 5-to-1 lead in sales with 30 million units sold.

Table: FIFTH-GENERATION 80X86 FEATURE COMPARISON (This table is not available electronically. Please see the June 1994 issue.)

Illustration: 80x86 Evolution: Transistor Density. Higher performance requires more logic circuitry, and this is reflected in the steadily rising transistor densities of 80x86 processors. While the number of transistors per chip has increased more than 100 times over the past 16 years, performance has increased more than 500 times. (Note: The latest P54C Pentiums have 3.3 million transistors due to additional integrated power management functions and an interrupt controller.)

Illustration: 80x86 Evolution: Performance. Since its introduction, the performance of the 80x86 architecture has scaled at a remarkably steady pace. The first 8086 was clocked at 5 MHz and executed about 330,000 instructions per second. The current top-of-the-line 80x86 is the new 100-MHz Pentium P54C, which is benchmarked at 166.3 MIPS. On the chart, the lower line measures performance of the fastest available 80x86 processor at introduction. The upper line shows the maximum available performance for each Intel generation.

Tom R. Halfhill is a BYTE senior news editor based in San Mateo, California. You can reach him on the Internet or BIX at thalfhill@bix.com.

Copyright 1994-1998 BYTE