Cover Story / September 1994

Transforming the PC: Plug and Play

An industry-wide effort, Plug and Play will make PC compatibles
easier to configure and maintain while reducing support costs
for vendors and users. Although businesses and individuals
both stand to gain substantial benefits, the transition
to full adoption of Plug and Play won't be painless,
won't come cheap, and will likely take years.

Tom R. Halfhill

At first glance, Plug and Play looks like a collection of paradoxes. To achieve its ambitious goal of making PC compatibles easier to set up and use, it requires changes to the computer's BIOS, operating system, peripherals, device drivers, and applications software. Changes are made to nearly everything, in fact, but the core component that most needs changing: the decade-old PC system architecture with its obsolete ISA bus. And although Plug and Play does a remarkable job of making PCs friendlier while maintaining compatibility with existing hardware, it also requires that you eventually replace almost all that hardware.

Another paradox is the marketing challenge presented by Plug and Play. The handiest way to describe what it does — "Plug and Play makes your PC as easy to set up as a Mac" — isn't likely to be embraced by its biggest backers, which include Microsoft, Intel, and Compaq. And the users who most desperately need Plug and Play — first-time buyers who have no experience configuring PCs — probably won't be the primary target of Plug and Play advertising because, as one Intel marketing person explains, "Naive users expect PCs to work this way already."

Hard Truths

Underlying these paradoxes are some hard truths. The PC's system architecture has remained fundamentally unchanged for 10 years. Faster microprocessors, bigger hard drives, and more memory have unquestionably led to more powerful PCs, but underneath it all lies the same foundation that IBM defined for the PC in 1981 and extended for the AT in 1984. Without a single defining leader since IBM lost control over the architecture in the mid-1980s, the world's leading computer platform has been propelled forward by sheer market momentum.

Meanwhile, the foundation has been slowly cracking under the weight of more and heavier hardware and software. The 8-MHz ISA bus is now a bottleneck when it's mated with a 100-MHz Pentium processor. Expansion slots are crowded with devices that were rare or unheard of a decade ago: LAN cards, fax modems, CD-ROM interfaces, SCSI host adapters, stereo sound boards, and video digitizers. The 640 KB of RAM that once seemed luxurious is now choked with contentious device drivers and TSR programs. IRQs (interrupt requests), DMA channels, I/O memory ports, and other system resources are now fought over like the last pebbles of ore in a played-out gold mine.

The results are ominous. Users are frustrated with the complexities of IRQs, DIP switches, jumpers, and drivers. Technical-support costs for businesses and vendors are skyrocketing. Marketers worry that the supply of customers willing to tolerate this chaos will soon reach a saturation point. An "unacceptably high" return rate (reportedly in excess of 25 percent) for multimedia upgrade kits sold for PCs prompted CompUSA (Dallas, TX), a nationwide retailer, to offer free installation in PCs purchased from its stores. Microsoft says nearly half the calls to its Windows help lines are from users struggling to install or configure hardware and software. A 1993 study by the Gartner Group (Stamford, CT) estimated that the five-year cost to a business of owning a Windows-based PC was more than $37,000, largely due to system complexity.

Past attempts to renovate the PC architecture — such as Micro Channel architecture and EISA — or to establish alternatives like ACE (Advanced Computing Environment) have met with limited success or total failure. The most successful alternative is Apple's Mac, but its proprietary nature has discouraged widespread adoption.

Experience has shown that users want a gradual transition, not a clean break with the past, and no initiative will succeed without pervasive industry support. Plug and Play is a conscious effort to heed those lessons while driving the market forward.

In the short term, Plug and Play is an impressive patch on the creaking PC architecture. Although it doesn't add any system resources, it does codify the way existing resources are rationed. In the long term, Plug and Play is a ladder to a future architecture that by the end of the decade will recast the PC platform. The primary I/O bus will most likely be PCI (Peripheral Component Interconnect). Branching off will be a cascading series of secondary I/O buses (e.g., Enhanced IDE, PCMCIA, SCSI, Access.bus, P1394, and others). The hardware will be more tightly integrated with the system software, much as it is in today's Mac. For both the industry and users, the challenge is how to get there from here.

Step by Step

Plug and Play has three immediate goals. First, it will make PCs easier to set up and configure. Second, it will ease the task of installing new hardware and software. Third, it will endow PCs with entirely new features, such as the ability to change configurations on the fly and allow both the hardware and software to respond dynamically to configuration events. Examples include adding or removing a PCMCIA fax modem, attaching a mobile computer to a network, or hooking a notebook computer to a docking station.

It's important to distinguish between Plug and Play as an officially defined framework and plug and play as an increasingly popular buzzword. Many new devices, peripheral buses, and platforms are described as "plug and play," and they may offer the advantages of easy setup, configuration, and expansion. But Plug and Play (usually abbreviated PnP) grew out of an ISA-specific standard first proposed by Microsoft and Intel at the Windows Hardware Engineering Conference in March 1993.

Over the following months, the two companies founded the Plug and Play Association, distributed preliminary specifications, and solicited input from vendors and users via the PlugPlay Forum on CompuServe. The association released the latest revision of the ISA specification in May. A revised PnP BIOS specification, authored by Phoenix Technologies, Compaq, and Intel, appeared at the same time. Meanwhile, a number of companies and industry groups have collaborated on PnP specifications for other buses, ports, and devices.

Full PnP compliance requires changes to four major elements of a PC system: the computer's ROM-based BIOS, the operating system, hardware devices, and applications software. When all those pieces are in place, PnP will bring automatic, software-driven configuration to almost every I/O bus and port on a PC, including ISA, EISA, PCI, VL-Bus, PCMCIA, SCSI, Micro Channel architecture, IDE, Access.bus, P1394, parallel ports, RS-232 serial ports, and SVGA monitors. PnP will also configure hard-wired motherboard devices in your system, such as the keyboard, mouse, joystick, and display controllers. No more jumpers, no more DIP switches, no more messing with configuration files such as AUTOEXEC.BAT, CONFIG.SYS, SYSTEM.INI, or WIN.INI.
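To appreciate what that promise means, consider the manual bookkeeping a single add-on card demands today. The lines below show the kind of CONFIG.SYS entries PnP aims to render obsolete; the driver names and switches are invented for illustration, but the ritual — hand-picking an I/O address, an IRQ, and a DMA channel and hoping they don't collide with another card's — will be familiar to anyone who has installed an expansion board:

    DEVICE=C:\SOUND\SNDDRV.SYS /ADDR=220 /IRQ=5 /DMA=1
    DEVICE=C:\CDROM\CDDRV.SYS /ADDR=340 /IRQ=11

Under PnP, those parameters are negotiated automatically at boot-up, and the driver learns its assignments from the system rather than from hand-edited switches.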

At least, that's the plan. PnP is off to a good start and is slowly gathering strength throughout the industry, but some confusing gray areas in the specifications leave room for improvement. For example, some new PnP ISA cards actually add a jumper that users must change when installing the device in a non-PnP system. "These things will get refined as we go along, in a point-one release or whatever," says Carl Stork, Microsoft's director of Windows hardware programs. "The important thing is that we're doing the best we can at providing a solution that works now."

So nirvana isn't right around the corner, but the journey is under way. Most current users will spend years building toward full PnP capability, and most will have to buy a new computer or motherboard just to upgrade the BIOS, because BIOS ROMs typically aren't upgradable.

To get full system software support for PnP, you'll need a PnP-integrated operating system like Chicago, which is scheduled for release this year. Future versions of OS/2 and Windows NT will support PnP as well. IBM says a new version of OS/2 will fully support PnP in the first half of 1995; another version of OS/2 that's due this fall (sometimes called Warp) will allow hot-plugging of PCMCIA cards but won't include other elements of PnP. NT won't fully integrate PnP until Cairo (scheduled for 1995), but it already includes some foundation features, such as a system configuration record and a browsing tool called the Registry Editor.

A stopgap solution is to retrofit MS-DOS and Windows 3.1, a task that Microsoft delegated to Intel. Intel's Plug and Play Kit for MS-DOS and Windows 3.1 is available to vendors, who will resell it to users with PnP systems and devices. The retrofit offers significant benefits, but it doesn't go as far as a fully integrated PnP operating system. For example, the only I/O buses it supports are ISA, PCI, and PCMCIA, and its ability to reconfigure on the fly is severely limited. As of now, there is no retrofit for OS/2 2.1 or other operating systems, although IBM's PC-DOS 6.3 already supports PCMCIA hot-plugging and will add hot-docking in early 1995.

All of today's hardware devices (including internal cards and external peripherals) will work in a PnP system, but because they are as susceptible as ever to configuration problems, you will eventually need to replace them if you want full PnP flexibility. Likewise, current applications software is compatible with PnP, but any applications that need to respond to configuration events (e.g., a communications program that knows when a fax modem has been added or removed) must be upgraded. The bottom line is that to derive maximum benefit from PnP, you'll eventually have to replace or upgrade almost everything you own.

Fortunately, PnP softens the transition by letting you mix and match virtually any combination of PnP and non-PnP components. The more parts of your system you upgrade, the more PnP functionality you'll get. How smoothly you'll weather the transition depends on how quickly PnP products come to market, how much you've got invested in current technology, and how soon you can afford to upgrade.

Time Slices

"PnP is more of a long-term solution than a short-term solution," says Carter J. Lusher, program director of personal computing at the Gartner Group. "We probably won't see widespread PnP products until early 1995. Most companies depreciate their machines on a three- or five-year cycle, so I don't expect to see them converting to PnP until late 1996 at the earliest."

Not surprisingly, Microsoft, Intel, and other PnP backers prefer a more optimistic view. "The release of Chicago will accelerate demand for PnP software and devices, especially on high-end systems," says Stork. The first systems with PnP-enabled BIOS ROMs began shipping a few months ago, and all the major BIOS vendors support PnP. Peripheral vendors seem to be moving a little more slowly, but the first PnP devices — including a SCSI host adapter from Future Domain (Irvine, CA) — actually appeared in 1993, based on the PnP SCSI specifications.

Even without all the pieces in place, there's enough to gain by incrementally adopting PnP that it should quickly become a checkoff item for future purchases. This is especially true for businesses, because configuration woes directly affect productivity and maintenance costs. The Gartner Group's most recent estimates put the five-year cost of owning a Mac at about 10 percent less than that of a Windows PC. The difference is partly attributable to the Mac's plug-and-play capabilities, which date from the release of the Mac II in 1987.

Mac users, incidentally, tend to regard PnP as merely the latest effort in the industry's 10-year struggle to turn PCs into Macs. It seems to prove once again that everybody wants a computer that works like a Mac, but for various reasons, only about 12 percent of them want to buy that computer from Apple.

Actually, PnP can do some tricks that even today's Macs can't do, such as hot-docking. Nevertheless, there is no denying the Mac's lead in plug-and-play technology, made possible chiefly because Apple maintains rigid control over the Mac's system architecture and system software. That's the advantage of a proprietary platform. To change something, all Apple has to do is send an internal memo to a dozen product managers.

That's an oversimplification, of course. But change is much more difficult on the PC side, where hundreds of competing vendors must coordinate their actions. The power vacuum left behind by IBM has largely been filled by Microsoft (the leading system software vendor) and Intel (the leading chip vendor), with help from Compaq (a contender for the title of leading system vendor). Like Border collies working on a sheep ranch, these companies are running hard to get everyone else moving in the same direction at the same time.

Resource Bottleneck

Users who have trouble configuring their PCs typically run afoul of conflicts between devices contending for the same system resources. PnP doesn't solve the root problem by adding more resources, but it does try to resolve conflicts by assigning currently available resources in a more systematic manner.

The scarcest resources in a PC are IRQs, DMA channels, I/O memory ports, and conventional memory. For historical reasons that in some cases date back to the 1970s, even the latest Pentium PCs are limited to the same set of resources.

IRQs are crucial to the operation of I/O devices, allowing them to send hardware interrupts to the CPU. Without them, the CPU would have to continually poll I/O devices to check for activity. Thanks to IRQs, a device can sit idle on the I/O bus without consuming processing cycles and interrupt the CPU only when it needs processor time. In PCs, IRQs are mediated by PICs (programmable interrupt controllers) on the motherboard.

Early PCs and XTs had a single 8259A PIC chip that could handle eight IRQs, numbered IRQ0-IRQ7. It quickly became apparent that eight IRQs weren't enough, so IBM added a second PIC to the AT in 1984, creating an arrangement common to all PC compatibles to the present day. Unfortunately, this yields only 15, not 16, available IRQs, because the second PIC is a slave that bridges to IRQ2 of the master PIC. This prevents IRQ2 from being assigned to another device.

The slave PIC also upsets the priority assignments of IRQs. In PCs, lower-numbered IRQs are serviced before higher-numbered ones. However, because the IRQs on the slave PIC are cascaded onto IRQ2 of the master PIC, the slave inputs (i.e., IRQ8-IRQ15) inherit the priority of IRQ2, thus enjoying a higher priority than IRQ3-IRQ7 on the master PIC. Some I/O devices are especially picky and demand high-priority IRQs, so the numbering makes a difference. Adding more PICs at this point isn't feasible because it would disrupt the PC system architecture. (See "IRQ Assignments in PCs.")
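For the technically curious, the cascade's effect on priority fits in a few lines of C. This is strictly illustrative — the real arbitration happens inside the 8259A chips, not in software — but it shows why IRQ8-IRQ15 outrank IRQ3-IRQ7:

    /* Effective service priority of an IRQ on an AT-style
       master/slave 8259A pair (0 = serviced first). The
       slave's inputs (IRQ8-IRQ15) cascade into IRQ2 on the
       master and inherit IRQ2's slot in the pecking order. */
    int irq_priority(int irq)
    {
        if (irq == 0 || irq == 1)   /* master inputs ahead of the cascade */
            return irq;
        if (irq >= 8 && irq <= 15)  /* slave inputs take IRQ2's position  */
            return 2 + (irq - 8);   /* priorities 2 through 9             */
        if (irq >= 3 && irq <= 7)   /* remaining master inputs come last  */
            return irq + 7;         /* priorities 10 through 14           */
        return -1;                  /* IRQ2 itself is eaten by the cascade */
    }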

If all this sounds complicated, it is. First-time PC users who don't know an IRQ from the IRS are often thrust into this quagmire as they struggle to install and configure their expansion boards. Some boards are software-configurable, meaning you can change their IRQ settings by running a setup program, but others require you to fiddle with DIP switches or jumpers.

IRQs are just the beginning. Some devices also want a DMA channel. DMA grants a device direct access to system memory without using the CPU as an intermediary. This boosts system throughput, but because a typical PC has only seven DMA channels, DMA is yet another source of conflict.
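The arithmetic behind that "seven" parallels the IRQ story: an AT-class machine pairs two four-channel 8237 DMA controllers, and one channel is consumed by the cascade that links them. In outline:

    /* AT-class DMA channels: two cascaded 8237 controllers.
       Channel 4 links the pair and cannot be assigned. */
    enum dma_channel {
        DMA0, DMA1, DMA2, DMA3,   /* 8-bit transfers (first controller)   */
        DMA4_CASCADE,             /* consumed by the cascade              */
        DMA5, DMA6, DMA7          /* 16-bit transfers (second controller) */
    };
    /* Eight channels minus one cascade leaves seven for
       every DMA-hungry device in the system to fight over. */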

Next come the I/O ports. The rivalry over this system resource predates the 80x86 family itself. At least as far back as the 8080 chip — a 1974 predecessor to the 80x86 line — Intel CPUs have offered special instructions for communicating with I/O devices and have allowed those devices to be mapped into a block of address space that's separate from main memory.

This was a clever conservation measure in the days when a typical microcomputer had a few kilobytes of RAM. But today, when Pentium CPUs execute at speeds of 100 or more MIPS and computers have megabytes of RAM, the scheme leads to maddening constraints on the way I/O space is allocated.

To communicate over the bus, each I/O device needs to reserve some address locations, known as I/O ports. (These ports are not to be confused with physical ports, such as parallel and serial connectors.) Because only 16 address lines are used to access I/O devices, the total address space available for those ports is 64 KB. The original 8-bit I/O bus in PCs and XTs made this even worse by decoding only 10 of those 16 lines, thus reducing the I/O address space to 1 KB. And the first 256 bytes of that 1 KB are reserved for motherboard devices. Barring a trick or two that lets you gain a few extra noncontiguous bytes, all the I/O ports on the ISA expansion bus had to be mapped into the remaining 768 bytes.

In 1984, IBM's AT extended the architecture with a 16-bit I/O bus and allowed devices to decode all 16 address lines. Theoretically, this liberated the 63 KB of I/O address space ignored by the original architecture. Unfortunately, to maintain compatibility with older devices that don't recognize 16-bit addresses, most of that memory is off limits, and what's left is scattered around in 256-byte fragments. Even I/O buses that came later and were designed for 16-bit addressing from the start (e.g., EISA, VL-Bus, and PCI) must deal with these fragments to preserve backward compatibility with ISA cards. As a result, I/O devices in today's PCs continue to squabble over tiny crumbs of bytes, even in systems that have many megabytes of main memory. (See "I/O Addressing in PCs.")
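A short C fragment makes the aliasing problem concrete. A card that decodes only 10 address lines sees every 16-bit I/O address modulo 1 KB:

    /* What a 10-bit-decoding ISA card "sees" when a full
       16-bit I/O address appears on the bus: only address
       lines A0-A9 -- the address modulo 1 KB. */
    unsigned alias_10bit(unsigned port)
    {
        return port & 0x3FF;
    }

    /* alias_10bit(0x0330) == 0x330
       alias_10bit(0x1330) == 0x330  -- the same thing, to an old card.
       Legacy cards answer in the 0x100-0x3FF range, so a 16-bit
       device is safe only at addresses whose low 10 bits fall in
       0x000-0x0FF: the bottom 256 bytes of each 1-KB block. */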

You can map device drivers into main memory instead of I/O memory, but that causes headaches, too. Operating systems like DOS that run in 80x86 real mode are normally limited to 1 MB of addressable memory, and only 640 KB is so-called conventional memory. This space can become so overpopulated with drivers and TSRs that some applications hungry for conventional memory won't run at all, no matter how much RAM is free elsewhere in the system.
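The 1-MB ceiling falls directly out of real mode's addressing arithmetic, which a one-line function can capture:

    /* Real-mode 80x86 physical address: a 16-bit segment
       shifted left 4 bits plus a 16-bit offset, for roughly
       2^20 bytes (1 MB) of reachable memory -- no matter how
       much RAM is actually installed. */
    unsigned long real_mode_addr(unsigned seg, unsigned ofs)
    {
        return ((unsigned long)seg << 4) + ofs;
    }
    /* real_mode_addr(0xA000, 0x0000) == 0xA0000: the 640-KB mark,
       where conventional memory ends and video memory begins. */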

Resource Rationing

PnP attempts to bring order out of this mess by apportioning system resources according to a complex but consistent set of rules. As is the case with welfare reform, PnP also promises to take care of the truly needy first.

In this case, the truly needy are so-called legacy devices — those that don't support PnP. Another paradox? Not really. Legacy devices often require users to select IRQs and DMA channels by changing DIP switches and jumpers. Once they're adjusted, those settings can't be changed without taking the computer apart and monkeying with controls that are smaller than your fingers. Too often, the result is what's referred to in the industry as a "negative user experience." So PnP gives legacy devices first dibs on system resources and tries to fit everything else in around them.

The degree to which resources are rationed greatly depends on which components in your system are PnP-aware. There are numerous combinations, and the most likely ones are summarized in the table on page 90. The prime components in this process are the BIOS and the operating system. (See "How Plug and Play Works.")

If the BIOS supports PnP, it tries to configure the system first. If it succeeds, you're home free. If the BIOS fails, it hands off to the operating system. If the operating system supports PnP, it finishes the job or tells you if a conflict can't be resolved with your current setup.

If the operating system doesn't support PnP, you must pick up where the BIOS left off. At a minimum, a PnP BIOS will auto-configure three devices at boot-up time: an input device (typically the keyboard controller), an output device (typically the video controller), and an initial program load device (typically the hard drive that holds the operating system). The PnP BIOS also configures motherboard devices (e.g., the PIC, the DMA controller, and the floppy drive controller) and maybe other devices as well. (See "Building a Better BIOS" on page 92.)
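In outline, the boot-time handoff looks something like the sketch below. This is only a schematic of the division of labor the specifications describe — the function names are invented, and real PnP BIOS code is considerably more involved:

    /* Hypothetical outline of the PnP boot sequence; the
       externs stand in for real BIOS and OS services. */
    enum device_class { INPUT_DEVICE, OUTPUT_DEVICE, IPL_DEVICE };

    extern void bios_configure(enum device_class dc);
    extern void bios_configure_motherboard(void);
    extern int  bios_configure_remaining(void);  /* 0 on failure */
    extern void os_configuration_manager(void);

    void pnp_boot_configure(void)
    {
        bios_configure(INPUT_DEVICE);    /* e.g., keyboard controller */
        bios_configure(OUTPUT_DEVICE);   /* e.g., video controller    */
        bios_configure(IPL_DEVICE);      /* e.g., the boot hard drive */
        bios_configure_motherboard();    /* PIC, DMA, floppy, etc.    */

        if (!bios_configure_remaining())
            os_configuration_manager();  /* the OS finishes the job */
    }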

If your system has a legacy BIOS (no support for PnP), you can still gain by upgrading to a PnP operating system — either one that's fully integrated (e.g., Chicago) or the retrofit solution for MS-DOS and Windows 3.1. In either case, a new layer called Configuration Manager does its best to configure any PnP devices in the system and minimize the chances that you'll have to manually edit any configuration files, such as CONFIG.SYS or WIN.INI.

In going about its business, Configuration Manager calls upon new system software components known as bus enumerators, as well as the resource arbitrator. Bus enumerators are drivers that check each I/O bus to see what devices are installed and which resources they need. Each bus has its own enumerator, but PnP leverages existing mechanisms wherever possible. For example, the SCSI driver itself enumerates the SCSI bus. The information is reported back to Configuration Manager, which calls the resource arbitrator. The resource arbitrator employs sophisticated algorithms to balance the needs of all the devices, gradually building a hierarchical configuration table called the hardware tree.
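A data structure along the following lines conveys the idea of the hardware tree; the field names are invented, but each node records one device, the resources the arbitrator granted it, and any child devices hanging off a subordinate bus:

    /* Illustrative hardware-tree node (field names invented). */
    struct resource_set {
        int      irq;        /* assigned IRQ, or -1 if none      */
        int      dma;        /* assigned DMA channel, or -1      */
        unsigned io_base;    /* base I/O port address            */
        unsigned io_count;   /* number of contiguous I/O ports   */
    };

    struct hw_node {
        char                device_id[8];  /* vendor/device code   */
        struct resource_set granted;       /* arbitrator's award   */
        struct hw_node      *child;        /* devices on a sub-bus */
        struct hw_node      *sibling;      /* next device, same bus */
    };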

If the resource arbitrator can't configure everything, the last resort is new utilities that help you identify and solve configuration conflicts. Intel's upgrade kit for MS-DOS and Windows 3.1 includes a program called the ICU (ISA Configuration Utility). Some PnP BIOS vendors offer similar utilities, such as PnPView from SystemSoft (Natick, MA) and Phoenix System Essentials from Phoenix Technologies (Norwood, MA). Likewise, Chicago will have a built-in tool that's called Device Manager.

Although you still might have to take the computer apart and play with DIP switches on legacy devices, these utilities will offer some guidance by informing you how resources are allocated and which resources are available. That alone is a big improvement over the current method, which relies heavily on trial and error.

When the configuration process is complete, the information in the RAM-based hardware tree is stored in some type of nonvolatile memory. Some low-cost clones will cram an abbreviated bit-mapped table of the configuration data (maybe 64 or fewer bytes) into the system's extended CMOS. Other systems will build a more verbose registry (perhaps 2 to 4 KB) and store it on the hard disk or in the same flash ROM as the BIOS. The next time you boot up, the PnP BIOS or Configuration Manager can survey the computer's status and compare it with this registry to see if anything has changed since the last session. If there's no change, the system continues booting up normally. Otherwise, the configuration process begins anew.
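The comparison itself can be as simple as a checksum test. A minimal sketch, with invented names:

    /* Skip reconfiguration when the hardware hasn't changed
       since the last session (illustrative only). */
    extern unsigned long checksum_detected_hardware(void);
    extern unsigned long read_stored_checksum(void);
    extern void continue_boot(void);
    extern void reconfigure_system(void);

    void pnp_boot_check(void)
    {
        if (checksum_detected_hardware() == read_stored_checksum())
            continue_boot();        /* same machine as last session   */
        else
            reconfigure_system();   /* something was added or removed */
    }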

Now it's apparent why you'll eventually want to replace your legacy devices. The more of these devices in your system, the less flexibility is enjoyed by the PnP BIOS and Configuration Manager. Because boot-up and legacy devices get the first crack at system resources, any devices that boot up later must make do with whatever resources are left over. And some devices are very particular about the resources they need.

Although the PnP configuration process is complicated, remember that almost everything happens in the background — especially in an up-to-date system with a PnP BIOS, a PnP operating system, and PnP devices. Ideally, the machine will boot up in a minute without any intervention on your part, even when you've altered the configuration. In a worst-case scenario, you may have a partial PnP system with several legacy devices whose conflicts cannot be resolved, forcing you to take manual action.

Upgrading to a full-fledged PnP system will reduce such headaches. Although Intel's upgrade kit for MS-DOS and Windows 3.1 is a good solution for the interim, it will never offer the same PnP functionality as a fully integrated BIOS and operating system. A true PnP system offers PnP support for every I/O bus and port, with provisions for easily incorporating new buses in the future.

Change on the Fly

Dynamic configuration is perhaps the most exciting benefit of full PnP integration. Until recently, this wasn't a factor, because few I/O buses allowed hot-plugging. But newer buses like PCMCIA and P1394 actually encourage you to add or remove devices while the computer is running. To cope with this, the operating system must juggle system resources and device drivers without unduly pestering you. Moreover, the operating system should be able to pass messages about dynamic events to applications, which in turn should be capable of responding appropriately.

Retrofitted system software just isn't that versatile. It doesn't pass messages about dynamic events, and it lacks the dexterity to juggle device drivers in memory. Chicago, on the other hand, will broadcast messages about dynamic events to all running applications, and it can also load and unload drivers as their associated devices come and go, thus maintaining only the minimum working set of drivers in RAM. Without resorting to tricky hacks, MS-DOS (and, by extension, Windows 3.1) must load its drivers during boot-up to avoid conflicts later on. In fact, as anyone who has wrestled with a CONFIG.SYS file knows, DOS can be very stubborn about the order in which device drivers load.
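What does "broadcasting messages" look like to a programmer? The sketch below uses placeholder names for the message and event codes — Chicago's actual identifiers may differ — but it captures the shape of the mechanism: a notification that each application fields in its window procedure.

    #include <windows.h>

    /* Placeholder identifiers -- not Chicago's actual names. */
    #define WM_CONFIG_CHANGED     (WM_USER + 1)
    #define EVENT_DEVICE_ARRIVED  1
    #define EVENT_DEVICE_REMOVED  2

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg,
                             WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_CONFIG_CHANGED:    /* broadcast by the system */
            if (wParam == EVENT_DEVICE_ARRIVED)
                MessageBox(hwnd, "Fax modem available", "PnP", MB_OK);
            else if (wParam == EVENT_DEVICE_REMOVED)
                MessageBox(hwnd, "Fax modem removed", "PnP", MB_OK);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }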

Flexible System

The ability of a PnP system to morph itself in response to a wide range of dynamic events is in tune with today's increasingly flexible work force. Outbound salespeople and telecommuters who visit an office once a week could dock their notebooks or plug into a network without a second thought. A worker with a PDA (personal digital assistant) could stroll into a room and instantly print a document via an infrared link to a PnP-aware desktop system. Users could share a single fax modem or LAN adapter between their notebook and desktop merely by swapping a PCMCIA card back and forth.

Indeed, PnP and hot-pluggable I/O buses are considered key technologies in the never-ending quest to turn PCs into consumer appliances. "Our culture is such that we're always warned against opening the back of any electronic appliance, including our $300 TV and VCR," says Timothy Saponas, Intel's manager for desktop ease of use. "If people aren't comfortable opening their $300 TV to change parts, are they going to be comfortable opening their $2000 computer? I think not."

PnP is a significant step forward for PCs. Painless configuration when installing new hardware and software is long overdue and really does little more than drag PCs, kicking and screaming, into the 1980s. But the ability to dynamically reconfigure a system to keep pace with active users — although still in its infancy — hints at what lies ahead in the late 1990s and beyond.

Sidebars:

Illustration: Chicago's Device Manager. Chicago's Device Manager (upper left) is a utility for browsing the hardware tree, a hierarchical view of all the devices installed in the system. The Device Manager lets you examine the properties of any installed expansion board, peripheral, or system device. The screen at the bottom left shows general properties, such as the device ID and device driver. By drilling down deeper with the Device Manager, you can examine the resources (e.g., IRQs, DMA channels, and I/O base addresses) used by each device (above).

Illustration: IRQ Assignments in PCs. Early PCs and XTs had only eight IRQ inputs, numbered 0-7. Starting with the IBM AT in 1984, IRQs 8-15 were added (with IRQ2 consumed by the cascade), and this arrangement is found in all PC compatibles today. Many IRQs are preassigned to various system devices and cannot be reassigned to add-on peripherals. To make matters worse, some devices insist on having high-priority IRQs. Note that lower-numbered IRQs get higher priority, except that IRQs 8-15 get higher priority than IRQs 3-7.

Illustration: PnP and You. A device "installation wizard" in Chicago (top screen) guides you through the process of installing a new device and its associated software. When Chicago detects a new device in the system (either dynamically or after boot-up), it prompts you to install or select the required driver, if necessary (bottom screen).

Illustration: I/O Addressing in PCs. Most PC compatibles made since 1984 (based on the IBM AT standard) can theoretically use 64 KB of address space for device I/O ports. However, only 768 bytes of that space is available to older devices that decode only 10 bits of the 16-bit address. To maintain compatibility, even today's 16-bit devices can use only 256 bytes out of each 1-KB memory block because of aliasing, which renders the other 768 bytes of each block unavailable.

Illustration: How Plug and Play Works. Because legacy devices — those that are not PnP-aware — get the first crack at system resources, they reduce the options available to PnP devices. You get maximum flexibility by using only PnP devices.

Illustration: Retrofitting Windows 3.1. Intel's Plug and Play Kit retrofits MS-DOS and Windows 3.1 with Configuration Manager and utilities that provide some PnP functionality — although not as much PnP integration as found in Chicago. The ICU, much like Chicago's Device Manager, lets you browse through the devices installed in your system (first screen). When you install a new device, the ICU provides a database of known devices and the resources they require. The database isn't exhaustive, but Intel is working with vendors to compile information on most products currently available (second screen). If the Configuration Manager can't automatically configure a device that's not listed in the ICU's database, you can allocate resources manually (third screen). Advanced users can solve conflicts by using the ICU to examine the resources allocated to every device in the system and then reassign certain resources, if necessary (fourth screen).

Table: THE ROAD TO PLUG AND PLAY (This table is not available electronically. Please see September, 1994, issue.)

Tom R. Halfhill is a BYTE senior news editor based in San Mateo, California. You can reach him on the Internet or BIX at thalfhill@bix.com.


Letters / November 1994

Plug and Play Not Exactly New

Dennis Allen's September editorial quotes Tom R. Halfhill's description of Plug and Play for PCs as "won't be painless, won't come cheap, and will likely take years." However, I can tell you one major platform where Plug and Play has been the accepted norm for a decade: Apple's Macintosh. Even when the Mac II came out in 1987, with its NuBus expansion slots, users had only to insert a board and (at most) drop an extension into their system folder. I guess I'm just disappointed that the Mac wasn't even mentioned in the editorial.
Chris Hanson

chanson@mcs.com
Pittsburgh, PA

Plug and Play in Hindsight

Choice article on Plug and Play (September). However, you omitted one important part: how a trivial hardware botch in the original PC has made this all much worse than it had to be.

Many machines have fewer interrupt levels than the PC but aren't hurt nearly as badly by it. Why? Because on the other machines, I/O boards can share an interrupt line. For instance, there is no reason COM ports need distinct IRQs (interrupt requests), except that I/O cards cannot share the same one and have it work.

There are boards with multiple UARTs (universal asynchronous receiver/transmitters) on a card, all of which internally share one IRQ for the whole card, but the problem is that the original PC bus didn't arrange for IRQ sharing between cards. It would have cost only a pull-up resistor per line (and possibly an inverter): the IRQ lines could have been driven low by open-collector TTL drivers instead of the single totem-pole outputs specified. With this trivial fix, we would have been able to share IRQs just fine. It would all have been a lot easier.

Michael O'Dell
mo@uunet.uu.net

If I had listed everything wrong with the PC system architecture, my story would have been twice as long! You're right about the PC's limited ability to share interrupts, of course, but keep in mind that those kinds of compromises were not nearly as obvious when IBM was designing the PC in 1980 and 1981. Many of the devices we plug into our computers today didn't even exist then. Also, component costs were much higher in the early 1980s, so what seems like a trivial expense now would have been more significant in those days. In fact, IBM chose the slower 8088 processor with its 8-bit bus instead of the full 16-bit 8086 just to reduce the cost of peripheral parts. — Tom Halfhill

I work on the technical-support staff for the New England Journal of Medicine, supporting PCs, Macs, and Novell servers. I actually understand IRQs (interrupt requests), cascading, memory ports, and even segmented addressing, but that did me little good when I bought a CD-ROM drive. It took me 14 hours with three different brand units before I could get one installed and working properly in my 386. During the same week, my boss bought a CD-ROM for my Mac at the office, and it honestly took me less than five minutes from opening the box to viewing a sample CD.

The PC as we know it will never be PnP (Plug and Play). The ISA bus is not intelligent and cannot be made so; the DOS/Windows combination is even more pathetic. The only hope for PnP on non-Macs is a system that uses Windows NT or OS/2 with a PCMCIA or SCSI primary bus. PnP is just another case of Bill Gates "inventing a vision" of something that the Macintosh has been doing for seven years. Your article seemed to lead to the conclusion that PnP is a kludge that doesn't have a snowball's chance in Hades of success, but you never actually said so. I'd be curious to know why.

Don Leamy
New England Journal of Medicine
Waltham, MA

Yes, PnP is a kludge, and yes, it is merely a stepping stone to the real solution — and I made both points in my story. But I disagree that it doesn't have a chance for success or won't make life easier for millions of people. What's the alternative? Are PC users suddenly going to junk all their machines and buy systems based on an entirely new architecture? Or will they migrate in droves to the Macintosh? Not likely. For the vast majority of people, PnP will seem like wondrous technology. Why burst their bubble? — Tom Halfhill

Letters / January 1995

Sugar-Coated Reporting?

In the November BYTE Letters, Don Leamy points out that PCs can never be PnP (Plug and Play) due to built-in deficiencies. He criticizes BYTE for not being up-front and frank in its assessment and for not "telling it like it is." Tom Halfhill excuses BYTE's lack of candor, indicating that the decision not to provide the whole truth came down to the reasoning, "Why burst their bubble [of mistaken judgment and ignorance]?" Might I suggest that BYTE tell the real truth as it is — not the half-truth or sugar-coated truth?

Joel Amkraut
Los Angeles, CA

I did not mean to imply that PnP won't work. PnP will work and will make life easier for millions of PC users. PnP is a kludge, but it does work. My rather flippant comment — "Why burst their bubble?" — doesn't mean that BYTE should avoid telling the truth. It doesn't matter if the Macintosh has better plug-and-play capabilities than the PC because most PC owners aren't going to sell their systems and buy a Macintosh. It matters only to people who haven't yet decided between a PC and a Mac. But the tens of millions of current users deserve a solution, too. So far PnP is the best solution to make PCs easier to use, while preserving as much of PC owners' current investment as possible. — Tom Halfhill

Copyright 1994-1998 BYTE
