Deschutes was Intel’s code name for Pentium II processors manufactured with the new 0.25-micron process (the code name for 0.35-micron PII processors was Klamath). The smaller process yields a smaller die, lower power consumption and less heat, allowing higher clock speeds. All desktop Intel Pentium II processors running at 333MHz or higher are Deschutes models. There are also mobile Deschutes processors for notebooks that run at 233MHz and 266MHz.

Intel’s new BX chipset supports 100MHz bus operation and 100MHz SDRAM (synchronous dynamic RAM). This should enable Intel to easily produce Deschutes processors with clock speeds of up to 500MHz; the practical maximum processor clock speed is typically five times that of the system board bus speed. Also, Intel has produced a single BX chipset for both desktop and notebook systems. This common chipset should narrow the long-standing performance gap between desktop and notebook PCs.
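The relationship between bus speed and clock speed is simple multiplication, and it can be checked with a few lines of Python. The five-times multiplier is the article’s rule of thumb, not a hard specification:

```python
# Rough sketch: practical CPU clock as bus speed times a multiplier.
def max_practical_clock(bus_mhz, max_multiplier=5):
    """Return the highest practical CPU clock (MHz) for a given bus speed."""
    return bus_mhz * max_multiplier

print(max_practical_clock(66))   # 330 -- about the 333MHz PII ceiling
print(max_practical_clock(100))  # 500 -- the headroom the BX chipset adds
```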

Slot One is the connector used by Intel’s Pentium II processor. Slot Two is the soon-to-appear cartridge connector for Pentium II servers. Socket 7 is the connector used for the past two years by Pentium, Pentium MMX, AMD-K6, Cyrix 6x86 and 6x86MX, and IDT WinChip processors. Super7 is a new motherboard/chipset design promoted by Advanced Micro Devices (AMD) that brings 100MHz bus speed, AGP and 100MHz SDRAM support to Socket 7-compatible processors. Slot A is a new connector AMD is considering, identical in size and form factor to Intel’s proprietary Slot One connector. Its electrical properties are borrowed from Digital Equipment Corp.’s Alpha RISC processor, used in servers and workstations.

Intel’s Celeron is a new Pentium II processor that lacks level 2 cache. It fits into a Slot One motherboard, but will compete against low-cost Socket 7 processors produced by Intel rivals AMD, National Semiconductor Corp./Cyrix Corp. and Integrated Device Technology (IDT) for the sub-$1,000 PC market. Initial versions of Celeron run at 266MHz. Our first tests with a 266MHz Celeron system indicated good performance on component-level benchmarks, but lackluster performance (slightly slower than a 233MHz MMX-enabled Pentium) on ordinary business applications. The app tests appear to underscore the importance of level 2 cache.

MMX is a set of 57 instructions, all dealing with multimedia tasks, that were added to the x86 instruction set in 1997. Think of them as a kind of shorthand, allowing one new instruction to take the place of many previous instructions. Intel promulgated the standard, but licensed the technology to its competitors.

Virtually all computers now sold in the United States are MMX-enabled. Intel’s MMX2, expected to make its debut with the Katmai processor in early 1999, adds more than 70 new instructions.

This 3D technology is being jointly developed by AMD, National/Cyrix and IDT/Centaur Technology. Similar in concept to MMX, it adds 21 new instructions to these companies’ x86-compatible processors. The new instructions will speed 3D graphics functions in the microprocessor itself. Because these three companies together hold only about 15 percent of the market for x86 processors, it was unclear at first whether the new 3D instructions would find much support among software vendors. However, Microsoft recently announced that its DirectX 6.0 technology will fully support the new 3D instruction set. DirectX 6.0 will first appear in NT 5.0.

Expect new chips from AMD (K6 3D+) that will include the new 3D instruction set and onboard level 2 cache. Cyrix will weigh in with its new Cayenne chip, which is expected to be a Slot One rather than Socket 7 design (Cyrix’s new parent company, National Semiconductor, has a cross-licensing agreement with Intel for Slot One technology). IDT will introduce new chips with enhanced floating-point units and improved MMX performance, and possibly integrated level 2 cache as well. Intel will introduce faster versions of Deschutes this year, followed by Katmai processors in 1999 that add MMX2 capability.

Then comes Merced, Intel’s first 64-bit, non-x86 CPU, jointly developed with Hewlett-Packard Co. It will run x86 instructions in emulation. Intel officially calls the Merced design “Intel Architecture 64-bit,” or IA-64.

The proposed Microsoft/Intel PC99 standard calls for minimum performance equivalent to a 300MHz Pentium II, a 2X DVD-ROM drive, and replacement of the ISA bus with PCI 2.2, USB and IEEE 1394. Also coming is a new open-standard device connector from Compaq, Intel and Microsoft called Device Bay. Similar in concept to PC Cards, Device Bay will consist of a connector slot in three standard form factors (including two small enough for notebook computers). The back of each slot will contain connectors for USB and IEEE 1394.

A Device Bay peripheral can use either bus to provide hot-swappable operation. Typical Device Bay uses would be for additional hard drives, DVD-ROM drives, backup and removable media devices, and so forth. These changes should bring expandability to the outside of the case. Future PCs will let you add almost any new capability without opening the case. And “hot swappability” means you’ll be able to simply plug these devices into their slots, and they’ll work without rebooting. It’s about time.

Video Card

Video card, graphics accelerator and graphics card are interchangeable terms: They all refer to the principal link between your system and your monitor. The card’s job is to make graphics and text appear on the screen quickly and accurately, and to process complex objects such as gradient fills. On some systems, the “card” is not a card at all, but a chipset mounted directly on the motherboard.

Memory is the most important element for top-notch performance. Just like RAM on your system board, more video memory is generally better. The other most important features are the type of memory and the video processor chip.

DRAM (dynamic RAM) is the basic type of video memory, but you won’t find it on high-quality cards because it’s relatively slow. That’s partly because it can handle either a write (accepting image data from your computer) or a read (sending that data on to your monitor), but it can’t do both simultaneously. The competing read/write demands can tax the card’s ability to refresh the monitor at high resolutions, hurting performance.

    EDO (extended data output) DRAM is faster and performs better with more than 256 colors. Other types of video memory include VRAM (video RAM), WRAM (Windows RAM) and SGRAM (synchronous graphic RAM). VRAM and WRAM are “dual ported” so they can read and write data concurrently. Increasingly, we’re finding mid-priced graphics cards using SGRAM, which is cheaper than VRAM and WRAM and faster than DRAM.

The amount of memory determines how much detail you’ll see on your screen. Typically, you can buy 4MB and 8MB boards. With 4MB boards, such as the Canopus Total3D, you usually top out at 1024×768 resolution when using true color (16.7 million colors). With 8MB boards, such as the ATI All-in-Wonder Pro, you can often go as high as 1600×1200 in true color. Frequently, you also have a greater selection of refresh rates available with 8MB boards. The price difference between a 4MB and 8MB card is generally less than $50.
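The resolution limits above follow from simple arithmetic: a screen’s frame buffer must hold every pixel at once. Here’s a rough Python sketch, assuming true color is stored at 24 bits per pixel (many cards actually pad each pixel to 32 bits, which raises the totals):

```python
def framebuffer_mb(width, height, bits_per_pixel):
    """Memory needed to hold one full screen of pixels, in megabytes."""
    return width * height * (bits_per_pixel // 8) / (1024 * 1024)

print(round(framebuffer_mb(1024, 768, 24), 2))   # 2.25 -- fits a 4MB card
print(round(framebuffer_mb(1600, 1200, 24), 2))  # 5.49 -- needs an 8MB card
```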

As a rule, 4MB, or even 2MB, is sufficient for business applications. Game players, CAD/CAM users and graphic artists will want 8MB boards. And if you run extremely demanding graphics applications, such as some advanced games, you’ll find that a card like the new Diamond Monster 3D II, with 12MB of memory, can handle just about any graphics task. Of course, greater resolutions are only practical if you have a monitor that’s 17 inches or larger.

Vertical refresh rate refers to the speed at which an image is completely repainted on the screen. A refresh rate of 75Hz (the display is refreshed 75 times a second) is considered the bare-bones speed to avoid flicker. The higher the refresh rate at a given resolution, the better. Flicker isn’t just annoying; it can cause headaches and eye strain.
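To see why refresh rate taxes a card, multiply it out: the card must repaint every pixel on every refresh. A quick Python sketch (ignoring blanking intervals, so real pixel clocks run somewhat higher):

```python
def pixels_per_second(width, height, refresh_hz):
    """Approximate pixel rate the card must sustain for a given mode."""
    return width * height * refresh_hz

# At 1024x768 and a flicker-free 75Hz, the card repaints
# roughly 59 million pixels every second.
print(pixels_per_second(1024, 768, 75))  # 58982400
```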

Every card manufacturer claims to sell the latest and greatest chip. The key differences among chips are usually related to the quality of their 3D-rendering engines. Many vendors make their own chips.

    ATI uses its own RAGE chips and Number Nine Visual Technology makes the Ticket to Ride chip. Some card manufacturers are chip neutral. Even Intel is getting into the picture with its i740 chip, which will work with Intel’s Accelerated Graphics Port (AGP).

Not unless you’re a game player and need its superior texture and image rendition. You probably won’t have to make that decision anyway: All the video cards we’ve seen lately have 3D capability.

AGP is a data superhighway that moves graphics traffic off the shared PCI bus and onto its own, faster bus. AGP is designed for Pentium II-based systems, and you’ll typically find a separate AGP slot in such systems. PCs that implement AGP directly on the motherboard are harder to upgrade.

Besides using a separate bus and thus not taxing the PCI bus as heavily, AGP has the unique advantage of being able to share system memory when needed. Applications, particularly games, have been limited to using texture maps of less than 4MB in size because that’s all the memory available for creating such textures with an 8MB non-AGP card. Texture maps include such things as the patterns that make a surface look like a stone wall. When an AGP graphics card needs to render large textures, it “borrows” the memory it requires from system memory, returning it to the system when it’s no longer needed. More complex textures tend to strangle the PCI bus, so having a separate bus is a plus.
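The 4MB ceiling is easy to verify with a little arithmetic. A quick Python sketch, assuming 16-bit texels (a common format of the day; 24- or 32-bit textures would be larger still):

```python
def texture_mb(side, bytes_per_texel=2):
    """Memory consumed by a square texture map, assuming 16-bit texels."""
    return side * side * bytes_per_texel / (1024 * 1024)

# A single 1024x1024 16-bit texture eats 2MB of card memory; a few such
# textures quickly exceed what an 8MB non-AGP card can spare beyond its
# frame buffer, which is where AGP's borrowed system memory comes in.
print(texture_mb(1024))  # 2.0
```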

    Intel is promoting AGP as a solution for arcade-quality games, 3D modeling, Virtual Reality Modeling Language (VRML) and other graphics-intensive applications.

In the current crop of business applications, AGP offers little speed improvement. But more troubling is the current implementation of AGP. In our tests, we’ve found AGP systems where an improper configuration causes the AGP card to behave like a PCI card and not make use of AGP functionality.

    Vendors will eventually iron out these problems when they become more familiar with the technology. But for now, AGP isn’t quite realized.

Before you install the new card, change your current video card settings to 16-color and choose the Standard VGA adapter instead of your current video card. Exit Windows, power down and install the new graphics card. When you reboot, Windows will need to install the VGA drivers. After another reboot, you can run your card’s installation program.


DRAM stands for dynamic random access memory. As this type of memory requires a constant current to retain information, it needs to be refreshed hundreds of times per second. The memory uses the same circuit to store and retrieve data, so access times can be an issue. Memory is organized in pages, and when one page is accessed, it takes additional CPU cycles to switch to another page to access more memory.

EDO RAM stands for extended data out RAM. It’s similar to DRAM, but EDO RAM operates between 10 and 15 percent faster because it starts accessing the next block of data while sending the previous block to the CPU. That makes it easier and quicker to synchronize data transfer than with regular RAM. EDO RAM is used in both SIMMs and DIMMs (see the next question), while regular DRAM is typically found only on PCs with SIMMs.

SDRAM stands for synchronous DRAM. It differs significantly from regular DRAM because it uses clock-cycle timing for data access and refresh. It operates at the same frequency as the system bus and synchronizes automatically with requests from the CPU, which makes it faster than DRAM and EDO RAM. SDRAM is typically found only in DIMMs.

SIMM stands for single in-line memory module; DIMM stands for dual in-line memory module.

RAM chips are typically packaged in 8MB, 16MB, 32MB or 64MB modules that plug into a PC’s motherboard. These modules are small, standard-size circuit boards that hold the actual RAM chips.

Memory used to come in 30-pin SIMMs, but now you’ll find those only on older PCs. Pentium-based PCs use the newer 72-pin SIMMs, which hold more memory and allow faster access, or the newest DIMMs. DIMMs can hold even more memory and typically have 84 pins active on both sides for 168 connections. While unbuffered DIMMs are limited to 64MB, newly designed registered DIMMs can hold 128MB or 256MB. These registered DIMMs are found in servers and high-end workstations.

RIMMs, or Rambus memory modules, will be used with Intel’s next-generation Rambus memory interface, which will support high-speed buses and provide much greater bandwidth than current memory (more on Rambus below).

The newer the system, the more RAM speed matters. On older systems with SIMMs, speed matters less. A 60-nanosecond DRAM should work fine for all PCs, and some older systems can run on slower speeds of 70ns or 80ns. SDRAM speed is measured in MHz because it is clocked, just like the system bus. Newer systems based on Intel’s Deschutes Pentium II processors use a 100MHz system bus and require memory clocked at that speed. If your system uses EDO or SDRAM, make sure your RAM conforms exactly to the manufacturer’s specifications. If you upgrade or replace RAM on a PC with DIMMs, you need to follow exact instructions in your system manual.
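The two ratings are related by simple arithmetic: a clock period in nanoseconds is the reciprocal of the frequency in MHz. A quick Python sketch of the conversion:

```python
def cycle_time_ns(mhz):
    """Clock period in nanoseconds for a given frequency in megahertz."""
    return 1000.0 / mhz

# A 100MHz bus leaves only 10ns per cycle -- far quicker than
# the 60ns access time of ordinary DRAM.
print(cycle_time_ns(100))  # 10.0
print(cycle_time_ns(66))   # ~15.2
```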

If your new PC has a system bus clocked at 66MHz or slower, and the PC uses a compatible memory module (SIMMs or DIMMs), then it is possible. Some systems are designed to take a mixture of SDRAM, EDO RAM and even DRAM, but many require a particular type of memory.

You should check the precise specifications of your new machine. If your new PC has a 100MHz system bus, you can’t use the old RAM (unless your old system had a 100MHz system bus). Make sure the RAM in the new machine is designed to run at 100MHz, or else you’ll see slower performance and even memory page faults that could crash the system.

Notebook memory chips are typically the same types of RAM as used in desktop PCs, but with different packaging. Many notebooks use smaller SODIMMs (small-outline DIMMs). These come in 72-pin and 144-pin modules. But many notebook manufacturers use proprietary memory modules, so if you want to expand RAM, you have to get memory designed specifically for that machine.

Graphics cards have special requirements because they must move data into graphics memory and out to the display simultaneously. Therefore, most graphics memory is dual-ported, meaning it can send and receive data at the same time. Graphics memory types include VRAM (video RAM), TPRAM (triple-port RAM) and SGRAM (synchronous graphics RAM). Most current cards use SGRAM.

Cache memory holds temporary data that’s immediately ready to use, speeding up your system. The Intel Pentium and many other CPUs have this memory built right into the processor; that’s level 1 cache, and you can’t change it. Most CPUs now also have level 2 cache, which sits between the processor and main system RAM. Cache memory is much faster than regular RAM.

Static RAM, the type of memory typically used for cache, requires no refreshing and returns information to the CPU virtually instantly. You can upgrade cache memory only if your motherboard has an accessible cache socket and supports a larger secondary cache. If your system has a Pentium II, you have to replace the entire processor to upgrade the cache, because the level 2 cache lives inside the processor’s cartridge.

As CPU speeds increase, memory must become faster to avoid bottlenecks. Two types of faster RAM are currently proposed. Intel is backing Rambus DRAM (RDRAM), a much more complex memory interface that uses a special 800MHz bus and a protocol- and packet-based system for transferring data. Because Intel plans to eventually double the bus speed to 1.6GHz, Rambus is also likely to be the faster of the proposals.

A cheaper alternative is high-performance double-data-rate SDRAM (DDR SDRAM). DDR SDRAM reads data at an effective 200MHz, twice the 100MHz speed of current high-end PC buses. An advanced version, SLDRAM (SyncLink DRAM), will quadruple the data rate to 400MHz. These alternatives are cheaper and easier to implement than Rambus, so we may see Rambus in high-end PCs and SLDRAM in less-expensive systems. None of the existing RAM types will carry over to these new systems; the RAM you’re using today isn’t likely to work in the PCs of tomorrow.
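The quoted rates follow from multiplying the base clock by the number of data transfers per cycle. A rough Python sketch of that arithmetic:

```python
def effective_rate_mhz(bus_mhz, transfers_per_cycle):
    """Effective data rate for memory that moves data more than
    once per clock cycle (e.g., on both clock edges)."""
    return bus_mhz * transfers_per_cycle

print(effective_rate_mhz(100, 2))  # 200 -- DDR SDRAM on a 100MHz bus
print(effective_rate_mhz(100, 4))  # 400 -- SLDRAM's quadrupled rate
```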

Voice Recognition

To use a voice recognition program, you dictate words and phrases into a microphone connected to your PC. The sound is stored in the PC as a digital sound file, often in WAV format, and immediately fed to the voice recognition program. The software breaks the sound down into discrete parts and tries to recognize individual words. Then the program reassembles the individual words into phrases and uses its built-in dictionaries and knowledge of English grammar and speech patterns to recognize the speech.

Voice recognition software primarily performs three distinct functions. The first is “command and control.” This is where your voice activates specific operations such as opening a file or selecting a menu item.

The second voice recognition application is dictation. You speak in a more or less continuous stream, and the program recognizes the speech and inserts it into a document.

The third function is editing, where you use your voice to correct errors and edit a document. Most voice recognition software lets you edit either as you’re dictating or after you have finished dictating the whole document. To correct as you go, you immediately say something such as “undo that” when you see an error in recognition. The software removes whatever phrase was last spoken.

Until recently, many voice recognition programs used a technique called discrete voice recognition where each word had to be spoken discretely or separately, so you had to pause between every word. Now almost all voice recognition programs use continuous recognition so you can speak naturally, as you would talk to another person.

As the name implies, speech-to-text refers to converting speech into text in a document. Text-to-speech is the reverse, where the software can read a document aloud. Remote e-mail readers use text-to-speech technology. It also comes in handy for proofing text, and it’s a great aid for people who have difficulty reading a PC screen.

You need a microphone and either speakers or headphones, plus a sound card to plug them into. Most voice recognition programs ship with a headset microphone that plugs into the sound card of your PC. Headset microphones work well, while the standard microphones that often come with sound cards do not work well at all. If possible, get a noise-canceling headset or microphone that can reduce extraneous background noise. You also need a fairly powerful PC. At minimum, you should have a 166MHz Pentium MMX, 32MB of RAM and about 100MB of free hard disk space for the program. For optimum performance, you need at least a 200MHz Pentium II and 64MB of RAM.

Not really, but it helps if you speak clearly and calmly and avoid rushing and slurring words. You actually save time by slowing down and getting a higher rate of recognition instead of dictating quickly and then having to go back and fix a lot of recognition errors. All voice recognition programs can comfortably recognize speech at 100 or more words per minute. The software adapts to the way you speak, so you don’t have to worry about slight accents or personal speech idiosyncrasies.

You’ll get a lot more done if you take the time to train the system to recognize your particular speech patterns. In dictation mode, claimed voice recognition rates typically peak at around 140 words per minute with better than 95% accuracy. You can more realistically count on around 100 words per minute with at least 95% accuracy. Editing and command-and-control modes are usually slower than dictation, but often quicker than using a mouse. Several programs let you open applications by voice. For example, say “run Word,” and the recognition program will launch Microsoft Word for dictation.

The software walks you through the training process. First comes a formal training session where the program asks you to read a set of stock phrases and then analyzes the results. This formal training usually lasts at least 45 minutes, and you can extend it to several hours if you really want to perfect it. Training continues during ongoing use. Whenever a correction is made to an incorrectly recognized word, the program learns from the experience. Some programs automatically make the adjustments; others require you to formally tell the program to correct the way it recognized a particular word.

Many of the top programs will. The more advanced programs from Dragon Systems, IBM, and Lernout & Hauspie will let you set up speech profiles for as many people as you want.

This varies from program to program, running from 30,000 words to nearly 200,000 words with a basic package. All programs let you add your own specialized vocabulary, and several let you buy advanced vocabulary packs for particular professions, such as law or medicine.

One program, DragonDictate, lets you control the mouse with its MouseGrid feature. It divides the screen into nine numbered sections, which are in turn subdivided as you say numbers, until the pointer is exactly where you want it. This method takes a while to perfect, however. Other voice programs simply let you move the mouse by saying icon, button or menu names, then clicking on them.
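For the curious, MouseGrid’s recursive narrowing can be sketched in a few lines of Python. The left-to-right, top-to-bottom cell numbering is an assumption based on the description above, not DragonDictate’s documented behavior:

```python
def mousegrid_pick(width, height, choices):
    """Follow a series of 1-9 grid picks, each shrinking the active
    region to one of nine cells; return the final region's center."""
    x, y, w, h = 0.0, 0.0, float(width), float(height)
    for n in choices:
        col, row = (n - 1) % 3, (n - 1) // 3  # cell position in 3x3 grid
        w, h = w / 3, h / 3                   # region shrinks 3x each step
        x, y = x + col * w, y + row * h
    return (x + w / 2, y + h / 2)

# Three "5" (center) picks on a 1024x768 screen home in on dead center,
# a cell only about 38x28 pixels -- close to (512.0, 384.0).
print(mousegrid_pick(1024, 768, [5, 5, 5]))
```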

Voice recognition is a rapidly developing technology. It’s already showing up in office suites. Microsoft has announced plans to include it in future versions of Windows and it will be a big part of AutoPC. Within the next two years voice recognition could become a common feature on desktop PCs.

Widespread voice recognition for notebooks is farther out because notebooks are typically used in noisy environments where recognition is more difficult. Simple voice recognition apps are available now for handheld computers. Advanced Recognition Technologies’ smARTcommand for Windows CE 2.0 can launch apps and manipulate menus, and smARTcontact can find contact information based on a spoken name. Also, several companies, including Dragon, recently introduced handheld recorders with voice recognition that plug into a PC’s serial port to transfer dictated information.

File Systems

Your operating system uses a file system to organize data on a disk. This file system determines how large a hard disk your system can use, the method for tracking the location of files, the minimum file size, what happens when a file is deleted and so forth.

Windows-based PCs use FAT16, FAT32 and the NT File System (NTFS). FAT16 works with DOS 4.0 and later, as well as all versions of Windows. FAT32 was introduced with Windows 95 Service Release 2, and it’s the default file system for Win98. FAT32 does not work with NT, but it will be available with Windows 2000 (formerly NT 5.0). NTFS works only with NT and Windows 2000 (which will include a new version, NTFS 5).

The File Allocation Table, or FAT, keeps track of where files and pieces of files are stored. FAT16 uses 16-bit cluster addresses to locate files. The operating system stores a new file by looking for the first free cluster on the disk, then takes as many clusters as are required to hold it. The OS logs a file’s clusters in the FAT.

Ideally, the OS will find enough adjacent empty clusters to place an entire file in one contiguous area of the disk. That’s how it works on a new or a freshly defragmented drive. But a drive isn’t so well organized after it has been used for a while. The OS starts with the first free cluster; if there is not enough room for the entire file, it skips to the next free cluster until it finishes writing the file. A file broken up among nonadjacent clusters is called a fragmented file. Because the disk heads must jump around to read a fragmented file, system performance slows.
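The first-fit behavior described above can be sketched in Python. The list of booleans standing in for the FAT here is a simplification of the real on-disk structure, which stores cluster chains rather than a simple free bitmap:

```python
def allocate(fat, clusters_needed):
    """First-fit allocation: grab free clusters in order until the file
    fits. Returns the (possibly non-contiguous) cluster chain."""
    chain = []
    for i, free in enumerate(fat):
        if free:
            chain.append(i)
            fat[i] = False  # mark cluster as used
            if len(chain) == clusters_needed:
                return chain
    raise IOError("disk full")

# On a used disk with free space scattered among old files,
# a three-cluster file ends up fragmented around cluster 1:
fat = [True, False, True, True, False, True]
print(allocate(fat, 3))  # [0, 2, 3]
```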

A defragmentation utility such as Win9x’s Defrag rearranges all the files on a disk into contiguous clusters to speed up disk reads. FAT16 has significant limitations. A FAT16-formatted disk has a maximum partition size of 2GB. Also, FAT16 wastes a lot of space, especially on drives approaching the 2GB limit. FAT16 varies cluster size depending on the size of a partition, from a minimum of 512 bytes to a maximum of 32KB. The larger the partition, the more space goes to waste. For example, a 500MB partition uses 8KB clusters, but a 2GB FAT16 partition uses 32KB clusters. If the OS writes a small 1KB file to a 2GB partition, 31KB of disk space is wasted. Save a lot of small files, and the wasted space quickly adds up.
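The slack-space arithmetic behind the 31KB figure is easy to check in Python:

```python
def wasted_bytes(file_size, cluster_size):
    """Slack space: the unused tail of the last cluster a file occupies."""
    clusters = -(-file_size // cluster_size)  # ceiling division
    return clusters * cluster_size - file_size

KB = 1024
print(wasted_bytes(1 * KB, 32 * KB))  # 31744 -- 31KB lost on a 2GB partition
print(wasted_bytes(1 * KB, 8 * KB))   # 7168  -- only 7KB on a 500MB partition
```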

Win98 comes with a utility (Start/Programs/Accessories/System Tools/Drive Converter) to allow you to convert a FAT16 partition to FAT32. It’s a one-way process, though. If you switch to FAT32, you can only revert to FAT16 by using either a boot disk to repartition and reformat the hard disk, or a third-party program such as PowerQuest’s PartitionMagic. After repartitioning and reformatting the disk, however, you must reinstall all your software, including the operating system.

Partitioning your hard disk allows you to better organize your data. It lets you have discrete partitions for different operating systems and uses hard disk space more efficiently by enabling your system to use smaller cluster sizes. Third-party utilities such as PartitionMagic are easier to use and more flexible than Microsoft’s built-in utilities.

NTFS is the advanced file system for NT and Win2000. It provides greater security, better file recovery, integrated compression and security for removable disks. NTFS lets network administrators grant or restrict access to given partitions. You can recover lost files more often because NTFS stores information about a file’s clusters within each cluster instead of just in a FAT. Win3.x or 9x systems need special drivers to read (but not write to) NTFS partitions.

NTFS 5, to be introduced with Windows 2000, will add automatic encryption to its security features. Encrypted files will include the temp files used by applications. NTFS 5 will also support disk quotas and automatically update shortcuts when files move.

In addition to better security and file recovery, NTFS provides significant performance gains. Because it uses a more sophisticated search algorithm than FAT, NTFS can access files on multigigabyte partitions and drives much faster. This is useful for large databases. But FAT16 performs better than NTFS on drives or partitions of 500MB or less, and you need to stick with FAT16 if you dual-boot NT and another operating system.


Networking

First, decide what type of network you need. If you have fewer than 10 computers, you can set up a peer-to-peer network; it doesn’t require a dedicated server or full-time administrator. Computers connected in a peer-to-peer network can share local resources such as printers, modems, CD-ROM drives and hard drive space. All versions of Windows allow these devices to be shared. For example, all PCs connected in a peer-to-peer arrangement could use a printer connected to your computer if you grant access to everyone else on the network. Similarly, one PC could operate as a Web server for the whole group.

Peer-to-peer isn’t advisable for a larger group, as machines become tied up handling requests from other PCs. If you’re connecting 10 or more computers, a client/server model is preferable. A client/server network moves shared resources from individual computers on the network to a central location on a larger, more powerful server. All connected PCs, or clients, go to the server to access file storage, printers, modems and other resources. Servers also let a large number of users access more complex applications such as e-mail, databases and Web hosting. With a client/server network, you’ll need a network administrator or at least one person in the group who can run the network.

When planning a network, you’ll also need to consider the resources required by users, applications and other peripherals. Although each server can theoretically handle an unlimited number of clients, the actual number depends on how heavily the clients access the server. As demands increase, you can add servers to the network. And when traffic causes it to bog down, you may need to add switches or hubs and separate users into more manageable workgroups.

Computers classified as servers generally have high-end processors, high-capacity disks (or arrays of disks) and a lot of RAM. Most servers use Intel Pentium II processors, ranging from 300MHz to 450MHz. A basic file server should have at least 64MB of RAM, and more if you’ll use it to run other applications.    

The exact configuration you need depends on the type of jobs the server performs (file storage, e-mail, database, Web hosting or other applications) and the number of users accessing it. Some services require a lot of memory; others place a greater premium on processor speed. You also have to consider the nature of the information stored on the server. Servers that run apps and hold critical data must have features such as Error Checking and Correction (ECC) memory, hot-swappable drives and Automatic Data Recovery to ensure they stay up and running even when individual components fail. You’ll also want a tape drive to archive and protect vital data. Don’t skimp when it comes to choosing a server. Get the fastest processor you can afford, and as much memory and hard disk space as you can. You can buy a 333MHz Pentium II server with 32MB of ECC memory and 4.2GB hard disk for under $2,000.

Building a LAN today is relatively simple. Basic hardware for a peer-to-peer network includes network interface cards, a hub or switch, and network cabling. You’ll also need a server for a client/server setup.

Ethernet is the most common network protocol. It describes how devices communicate and the hardware they use to do so. A standard Ethernet network operates at 10Mb per second; Fast Ethernet does 100Mbps. Using 10/100Mbps Ethernet and several types of cable, you can connect computers up to 1,500 meters apart. The most popular cable is unshielded twisted pair (UTP), commonly used for 10BaseT wiring and often sold as Category 5 cable. Similar to telephone wire, UTP consists of four pairs of wires twisted together to reduce electrical interference.

Several new alternatives to Ethernet are gaining popularity. In offices where only a few PCs and components need to be connected over short distances, you can build networks with existing copper phone line, electrical wiring or even wireless devices. These networks will be much slower than Ethernet (generally between 1Mbps and 2Mbps), but eliminate the need to run cables through ceilings and walls. Larger offices, particularly those that require speed and reliability, should stick with at least 10Mbps Ethernet.

A network interface card, or NIC, plugs into a PC slot and connects the PC to the rest of the network. Every device that needs to communicate over Ethernet cable must have a NIC. These cards, which start at about $30, come in 10Mbps and 10/100Mbps versions. Most 10/100Mbps NICs can automatically detect whether the network uses standard or fast Ethernet and adjust to the speed. Some PCs ship with the basic NIC chipset built into the motherboard.

A hub is the central wiring point of a network; it allows computers to communicate over standard Ethernet cabling. Client PCs link to the hub in branches, and the hub connects back to the server. Hubs have multiple Ethernet ports that split and regenerate a transmission signal among all connected computers. The hub copies the information it receives to the PCs connected to its other ports, a process called repeating.

When one computer talks, all the other computers connected to the hub (or other connected hubs) hear it. However, if all computers respond at once, the data collides and the network must retransmit it. Repeated collisions can slow a network.

That’s where switches come in. Like hubs, switches repeat the information transmitted by computers connected to the network. Unlike hubs, however, switches listen to each port individually and transmit information only from the sender’s port to the destination port, based on a special hardware address assigned to each computer’s NIC. This selective forwarding reduces collisions on the network. Switches cost more than hubs, but tend to be faster and provide more flexibility.

For instance, network devices with 10Mbps NICs can’t talk to devices with 100Mbps NICs via a hub, but they can by using a switch. Hubs and switches commonly come in models with four, eight, 12, 16 or 24 ports. You can connect two or more hubs or switches if you want to expand your network. Hub prices start at around $25 per port; switches begin at around $30 per port.
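The difference between a hub’s repeating and a switch’s selective forwarding can be made concrete with a rough sketch in code. This is an illustrative model only, not real networking software; the port numbers and hardware addresses are invented.

```python
# Toy model of hub vs. switch forwarding. A hub repeats every frame to
# every other port; a switch learns which hardware (MAC) address lives
# on which port and forwards only to the destination's port.

def hub_forward(in_port, frame, ports):
    """A hub repeats the frame out of every port except the incoming one."""
    return [p for p in ports if p != in_port]

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}            # hardware address -> port number

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port       # learn the sender's port
        if dst_mac in self.mac_table:           # known destination: one port
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]  # else flood like a hub

ports = [1, 2, 3, 4]
sw = Switch(ports)
sw.forward(1, "aa:aa", "bb:bb")        # unknown destination: flooded
out = sw.forward(2, "bb:bb", "aa:aa")  # the switch learned aa:aa is on port 1
print(out)                             # [1]
```

Once the switch has seen traffic from a computer, frames addressed to it go out of one port instead of all of them, which is why collisions drop.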

Cache memory temporarily holds data so that it’s immediately ready to use, speeding up your system. The Intel Pentium and many other CPUs have this memory built right into the processor. That’s level 1 cache, and you can’t change it. Most CPUs now also have level 2 cache, which sits between the processor and the main system RAM. Cache memory is much faster than regular RAM.

Static RAM, the type of memory used for cache, usually requires no refreshing or synchronizing and returns information to the CPU virtually instantly. You can upgrade cache memory only if your system’s cache memory socket is accessible and the motherboard supports a larger secondary cache. If your system has a Pentium II, you have to replace the entire processor to upgrade the cache, because the level 2 cache is inside the processor’s housing.
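The payoff of a cache can be sketched in a few lines. The cycle counts below are made-up round numbers for illustration, not real hardware timings; the point is only that a hit in the small, fast store avoids a trip to slow main memory.

```python
# Illustrative model of a level 2 cache in front of main memory.
# The per-access "cycle" costs are invented round numbers.

CACHE_COST, RAM_COST = 5, 50          # cycles per access (illustrative)

cache = {}                             # address -> value
ram = {addr: addr * 2 for addr in range(1024)}

def read(addr):
    """Return (value, cycles). Hits are served from the fast cache."""
    if addr in cache:
        return cache[addr], CACHE_COST
    value = ram[addr]
    cache[addr] = value                # fill the cache on a miss
    return value, RAM_COST

_, first = read(100)                   # miss: goes all the way to RAM
_, second = read(100)                  # hit: served from the cache
print(first, second)                   # 50 5
```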

Sound Card

Creative Labs’ original Sound Blaster was the first PC-based add-on sound card to achieve widespread popularity. Software makers started writing directly to the Sound Blaster’s register set, and other audio chip makers started supporting the Sound Blaster register interface, making it the de facto standard. As a result, virtually any program, whether business, education or game software, supports Sound Blaster-compatible sound as its minimum sound level. By mimicking the functionality of the original Sound Blaster, manufacturers ensure that their sound cards work well with this basic sound-reproduction standard.

Originally, sound cards were ISA (industry standard architecture)-based and used a 16-bit interface on the ISA bus. ISA has been largely superseded by the PCI (peripheral component interconnect) bus, which lets peripheral cards transfer data at the rate of 32 bits per clock cycle. The ability to transfer much more information in a single cycle results in smoother, richer sounds.

That’s one of the most common misconceptions about sound cards. The Sound Blaster 16 was a 16-bit card, but the “16” in its name referred to the 16 “voices” or independent sound channels that the card was capable of producing. Similarly, the Sound Blaster 32 had 32 voices, and the current Sound Blaster AWE64 has 64 voices. Cards are also available from Creative Labs and other manufacturers that have as many as 96 or even 128 simultaneous voices.

FM synthesis is the original method used by sound cards to produce controlled sounds. FM stands for frequency modulation; modulating the frequency of a sound lets you change its pitch. By modulating the volume and frequency simultaneously and by combining sound channels, you can mimic the sounds of different instruments or naturally occurring sounds. The limited number of simultaneous sounds, slow response and the approximate nature of FM synthesis made for relatively poor sound quality.
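The core of FM synthesis is one formula: a carrier sine wave whose phase is pushed around by a second, modulating sine wave. Here’s a minimal sketch; the carrier and modulator frequencies and the modulation depth are arbitrary illustration values, not anything a particular card uses.

```python
import math

# One second of a frequency-modulated tone:
#   y(t) = sin(2*pi*carrier*t + index * sin(2*pi*modulator*t))
# Changing the modulator frequency and index changes the timbre.

SAMPLE_RATE = 8000        # samples per second (illustrative)
CARRIER = 440.0           # carrier frequency in Hz
MODULATOR = 110.0         # modulating frequency in Hz
INDEX = 2.0               # modulation depth

def fm_sample(n):
    t = n / SAMPLE_RATE
    return math.sin(2 * math.pi * CARRIER * t
                    + INDEX * math.sin(2 * math.pi * MODULATOR * t))

tone = [fm_sample(n) for n in range(SAMPLE_RATE)]  # one second of audio
print(min(tone) >= -1.0 and max(tone) <= 1.0)      # True
```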

Wavetable sound uses recordings of actual instruments to produce sound. Engineers record the natural sounds of instruments and other sound-producing objects and events, then store a set of the sounds at different frequencies as sound-wave samples in a table in the card’s memory. The card’s software uses the samples of actual sounds stored in the table to extrapolate the desired sounds by filling in the frequencies not sampled.

The wavetable cards offer better quality because they reproduce the actual sounds of instruments rather than mimic them. For example, if you want to have a piano play the F above middle C, the FM synthesis card will have software create a blend of FM sounds at the frequency needed for that F to simulate a piano. A wavetable card will take the exact sound of a piano playing a middle C stored in its wavetable and then increase the frequency of that sound until it matches that of the required F.
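One common way to raise a stored sample’s pitch, as in the piano example above, is simply to step through the wavetable faster than normal, interpolating between stored samples. A rough sketch, with an invented sample rate and a pure sine standing in for a real piano recording:

```python
import math

# Pitch-shift a stored sample by resampling: reading through the table
# faster raises the pitch. Middle C up to the F above it is five
# semitones, a speed ratio of 2 ** (5 / 12), about 1.335.

SAMPLE_RATE = 8000
middle_c = [math.sin(2 * math.pi * 261.63 * n / SAMPLE_RATE)
            for n in range(SAMPLE_RATE)]     # stand-in for a piano sample

def pitch_shift(samples, ratio):
    """Read through the table at `ratio` times normal speed."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighboring stored samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

f_above = pitch_shift(middle_c, 2 ** (5 / 12))   # roughly 349Hz
print(len(f_above) < len(middle_c))               # True: played back faster
```

Note the trade-off the sketch exposes: the shifted note also plays for a shorter time, which is one reason real wavetable cards store samples at several pitches.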

WAV files are essentially digital sound recordings. The sound, whether from a single tap of a pencil on a table or a full choir, is sampled, digitized and stored as a single file. The only way to improve the quality is to increase the sampling rate, the frequency at which samples of the original are taken and digitized. The resulting WAV files can be enormous, taking up many megabytes for relatively short sounds.
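The size of an uncompressed recording is straightforward arithmetic: sampling rate times bytes per sample times channels times duration. For example, one minute of CD-quality stereo (44,100 samples per second, 16 bits, 2 channels):

```python
# Uncompressed audio size = rate x (bits / 8) x channels x seconds.

def wav_bytes(rate_hz, bits, channels, seconds):
    return rate_hz * (bits // 8) * channels * seconds

size = wav_bytes(44_100, 16, 2, 60)      # one minute, CD-quality stereo
print(size, round(size / 2**20, 1))      # 10584000 bytes, about 10.1MB
```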

In contrast, MIDI (musical instrument digital interface) represents music specifically. Each note’s length, pitch and volume are recorded to the file, and the notes can then be played back on any MIDI-compatible device. Any MIDI device will play back the music, but the quality of the device determines how closely the instrument sounds match the actual instruments. A high-quality MIDI setup can even take wavetable samples of the instruments and use those to accurately regenerate the original sound. MIDI files are typically far smaller than WAV files, so they’re more effective for playing longer musical passages.
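This is why MIDI files stay small: they store note events, not audio samples, and the playback device turns a note number into a pitch. The conventional mapping pins MIDI note 69 to A440; the three-note melody below is an invented example.

```python
# A MIDI file stores events like (note number, velocity, duration in
# seconds) rather than waveforms. The standard note-to-pitch mapping:
#   frequency = 440 * 2 ** ((note - 69) / 12)

def midi_to_freq(note):
    return 440.0 * 2 ** ((note - 69) / 12)

melody = [(60, 100, 0.5), (64, 100, 0.5), (67, 100, 1.0)]  # C, E, G
freqs = [round(midi_to_freq(n), 2) for n, _, _ in melody]
print(freqs)   # [261.63, 329.63, 392.0]
```

Three tuples describe what one second and a half of CD-quality WAV audio would need millions of bytes to store; the trade-off is that the sound depends entirely on the playback device.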

The short answer is “surround sound.” Standards for 3D positional sound vary, but there are essentially two methods used to create this effect. The first relies on creating the illusion of depth by exaggerating differences between the left and right channels on simple stereo outputs; the second method uses multiple speakers situated around the listener so that the original rich sound emanating from multiple positions can be reproduced more accurately. 3D positional sound is found mostly on cards developed for gamers.

SoundFont is a technology pioneered by Creative Labs that lets you download wavetable values for new instruments that you can install onto your sound card. This lets you extend the card’s capabilities.

A cheaper alternative is high-performance SDRAM-DDR, or double-data rate SDRAM. SDRAM-DDR moves data on both edges of the clock for an effective 200MHz, twice the 100MHz speed of current high-end PC buses. An advanced version, SLDRAM (SyncLink DRAM), will quadruple the data rate to 400MHz. The latter two alternatives are cheaper and easier to implement than Rambus. We may see Rambus in high-end PCs and SLDRAM in less-expensive systems. None of the existing RAM types will carry over to these new systems; the RAM you’re using today isn’t likely to work in the PCs of tomorrow.

The built-in sound support that comes with most PCs should be adequate for presentations. Typically, a PC will have at least basic Sound Blaster compatibility, which is all you’ll need to add sound to your presentation. The quality of the sounds your PC produces may depend more on the speakers than on the sound card, so if you use sound frequently, you may want to upgrade your speakers. And you may want to carry those speakers with you if you take your presentation on the road with a notebook PC.


A computer virus may be loosely defined as program code that replicates itself on execution and creates undesirable effects. Some antivirus software vendors say that a computer virus is any program that replicates itself. Others contend that a virus can be any ill-intentioned program. In the absence of a precise definition, there is consensus in one area: The likelihood that your PC will be hit by a virus increases dramatically if you access the Internet, swap software with friends, exchange files via e-mail or are hooked up to a network.

Virtually every virus tries to do one thing first: Spread to other programs and data files on your hard disk. When you boot up from an infected disk, open an infected file or run an infected program, the virus’s code is copied into your PC’s memory. From there, the code usually attempts to attach itself to other files. The rogue code may also alter data-file contents, cause program crashes, display annoying screen messages, degrade system performance or even destroy all of your disk files. There are even viruses that can detect your e-mail program, and then compose and send messages with infected attachments.

The simple answer is no, not directly. Theoretically, your hardware could be affected by a virus that exerts unusual stress on your system by doing something like accessing the hard disk continuously or switching your video card to unsupported settings. Realistically, however, the risk of hardware damage from a virus is minimal.

In the past, virus experts classified pernicious code as viruses, virus carriers, Trojans, bombs, hoaxes and urban legends. But distinguishing viruses from other types of destructive programs is less useful than understanding how a virus can gain entry to your PC, and the rules of that game have changed radically.

Not too long ago, a message on the Web declared that it’s a myth that viruses can hide inside a data file or in electronic mail, or in the text of a Web page. Perhaps at that time it was, but today, it’s possible to conceal destructive code in all three places. And you can’t trust the intentions of programmers involved in such untrustworthy activity: So-called benign “virus hoax” messages have been known to go so far astray that they’ve brought Internet servers to their knees.

According to statistics compiled by the National Computer Security Association, 80 percent of the viruses currently reported “in the wild” are Microsoft Word macro viruses, with the number of known macro viruses growing from about 50 to more than 1,000 in the past year. A macro virus lodges itself within the document or macro templates used by certain applications, primarily Microsoft Word. Other members of the rogues’ gallery include:

Boot Sector Viruses
These infect a diskette’s or hard disk’s boot sector, which is normally read by the operating system at bootup or when the disk is accessed. Typically, a boot sector virus spreads when an infected diskette is left in the A: drive and the PC is rebooted. Boot sector viruses may interfere with the startup process or destroy the disk’s directory table.

File Viruses
A file virus’s code attaches itself to operating system executables such as COMMAND.COM or WIN.COM. From there, the code may infect other applications.

Multipartite Viruses
Multipartites are distributed in one format and then transform to another. They may, for example, begin by infecting the master boot record and then move on to attack EXE or COM files.

Stealth Viruses
A stealth virus disguises its presence in memory or on disk. A stealth virus that has corrupted a drive’s boot sector may intercept a request from diagnostic utilities examining the boot sector and transmit a false image of the original, uninfected boot record.

Polymorphic Viruses
These viruses dynamically change their code as they spread from file to file, making detection difficult.

As of this writing, WM.Concept, a Word macro virus, is believed to be the most prevalent virus overall, followed by Form.A, a boot sector virus, and One Half.3544, a multipartite boot sector virus that also infects COM and EXE files.

A macro virus hides in an application’s document template or special macro file. The WordBasic language built into Microsoft Word allows sophisticated formatting instructions to be executed automatically within any Word document. WordBasic also permits direct access to operating system controls, making it possible to create macros that can delete files, reboot the system or even reformat an entire disk.

Since only Word template documents (usually files with a DOT extension) can contain macros, virus programmers put their destructive code in a document template and rename it with a DOC extension. When you open the infected file, it loads into Word as its own style template. If the file contains an AutoOpen macro, all of the macro instructions execute immediately. Destructive instructions can then be copied to the global macro pool, stored in a template called NORMAL.DOT. From there, the code can spread to other document templates and ruin the format of other documents as you open them.

For the most part, you’re safe. But a few relatively rare macro viruses that infect Microsoft Excel and Lotus Ami Pro have been discovered.

The popularity of file hunting on the Web, coupled with utilities that automatically download and unpack Zip archives or open e-mail attachments, can greatly increase the risk of virus infection.

If you download files from the Internet or receive e-mail messages with file attachments, you run the same risk of infection as you would copying those files from a diskette. The spread of Word macro viruses can be attributed largely to the increased flow of e-mail messages containing Word documents as file attachments. Simply double-clicking the message’s document icon to open the file can infect all the Word templates on the hard disk.

If you use your browser to cruise the Web and just read text and look at pictures, the chances of activating a process that will infect your hard disk are very small. Still, Java apps and Microsoft’s ActiveX controls potentially offer malicious programmers new points of entry to your computer via the Internet. These “hostile applets” can be embedded in a Web page so that once your browser connects, they can inflict damage similar to that of a true virus.

If you’re not using an antivirus utility, virus code may be lurking undetected on your hard disk. Some time-delayed viruses show no signs of their presence until they manifest themselves at a particular time or date. The most famous example, the Michelangelo virus, hides in the boot sector and only pops up on March 6, the real Michelangelo’s birthday.

The best way to avoid virus infection is to install an antivirus utility and run frequent scans of your hard disk. But if your system is unprotected, there are some symptoms of virus activity to be on the lookout for.

Unusual system performance may indicate a virus is at work. Your system may run more slowly than usual; programs may crash unexpectedly or start exhibiting strange behavior (menus won’t open, files can’t be saved and so forth). In the worst cases, directory listings may be garbled or the system may refuse to start. Other symptoms to watch for include changes in the file size or time and date signatures of common system programs. If you notice any of those symptoms, stop using the PC and install and run an antivirus program as soon as possible.

Virtually all antivirus packages from established companies are now capable of detecting and expunging 90 to 100 percent of known viruses that currently exist in the wild. Norton AntiVirus 2.0, our WinList selection, is a good bet for protecting your system.

Most antivirus programs now include terminate-and-stay-resident utilities that intercept and block the copying or reception of infected files in real time. There are, however, some distinguishing factors. An antivirus program should be certified by the National Computer Security Association, which regularly tests and evaluates antivirus products. The antivirus vendor should have an established history in the field. The most reliable products come from companies that have research facilities throughout the world. When you buy their products, you should get the benefits of up-to-the-minute virus lists and detection capabilities.

Often, these companies post virus detection pattern updates on the Web so quickly that the virus is thwarted before it has a chance to spread. Many antivirus utility vendors offer 24-hour disinfection turnaround on submitted virus samples that can’t be removed by the current versions of their programs. Specific features of an antivirus package may be important considerations.

Some users want disinfection routines that immediately purge detected viruses, no questions asked. Others will want software with more sophisticated disposition options, such as the ability to quarantine samples for later examination, create file exception lists for programmers and beta testers, perform heuristic tests that find unknown viruses or bypass heuristic analysis to scan faster.

If your PC is frequently connected to the Internet, you should use an antivirus product that offers supplementary protection for Web browsers and e-mail clients. An antivirus utility should be able to clean most infected files, leaving your data intact. Some antivirus utilities can only erase infected files from your hard disk. Easy updates are important. Some antivirus programs can upgrade themselves automatically over the Internet.

Generally, protecting the network clients should be the priority. Server-based and Mail Gateway antivirus products provide additional security, reducing the risk of spreading a virus throughout an enterprise. Some server-based products also give the network administrator a helping hand by automating the installation and distribution of client antivirus software updates across the network.

Try these for starters:
  • Disable program features that automatically open e-mail attachments or launch downloaded program files.
  • Create an emergency boot disk for your PC and write-protect it.
  • If your PC has options for setting the system startup drive, set it to bypass the A: drive and boot directly from C:.

  • Take advantage of Word 97’s ability to disable all macros when opening a template.
  • Back up all of your Word template (DOT) files to an unused directory and change the file extensions.
  • If you don’t frequently create new macros for your documents, turn on the read-only file attribute for each of your template files.
  • Keep your antivirus program up to date; a dozen or more new macro viruses are reported to antivirus research facilities every day.

False Alarm!
Not long ago, an electronic book publisher circulated a warning for a dread virus called “Irina” to create publicity for an interactive book of the same name. The virus warning was, alas, a hoax, and the publicity was ill-gotten and quite unfavorable.

You may have received similar e-mail announcements that post dire warnings. They’re fairly common, and they generally say that as you read the message, a hidden virus is fiddling with your favorite programs or rendering your data files unreadable. These hoax virus alerts may be amusing to sophisticated users, often carrying ridiculous threats, such as claims that your serial port pinouts are being changed or that the rotation of your hard disk has been reversed.

But virus hoaxes, even those that are only intended to be humorous, can be as frightening and disruptive as the real thing. Sometimes, even experts may have difficulty separating fact from fantasy. It’s always better to be safe than sorry, so if you receive a virus warning, here are several ways to determine its authenticity:

  • If a message urges you to pass it along to your friends, don’t. It could contain a virus.
  • If you receive a virus alert claiming to be from an official government or research organization, examine the PGP signature on the message.
  • If there is no PGP signature, it’s probably a hoax.
  • Contact the person alleged to have sent the message to see if the signature is genuine.
  • Check the Web before investing time and energy responding to “new, deadly virus” announcements.

All major antivirus vendors maintain “hype alert” sections at their Web sites. Be sure to check out the “Computer Virus Myths” site at http://kumite.com/myths/, an extensive history of computer-virus urban legends.


It is if you have an ISP account. Then you can use free e-mail services, including Hotmail (www.hotmail.com), Net@ddress (Usa.net), Yahoo! Mail (mail.yahoo.com), MailCity (www.mailcity.com) and Freemail (www.freemail.com). These ad-supported services are convenient if you spend a lot of time on the Internet, especially when traveling. They store your mailbox on the server, so you can access it from anywhere on any computer, including the for-pay PC-to-Web kiosks located in airports and convention centers. Most of these services can automatically forward your e-mail to another e-mail address. E-mail also comes as part of the package with online services like America Online, MSN and CompuServe. However, these services provide Internet access first and e-mail second. If e-mail is your primary consideration, you’re better off with a commercial e-mail program and an ISP, or a free e-mail service. Of course, free Web-based e-mail services can’t match commercial e-mail programs like Outlook, Eudora Pro or Outlook Express. Those apps offer advanced formatting, excellent performance and superior message-management features.

Omron Advanced Systems’ SpamEx, Solid Oak Software’s CyberSitter AntiSpam, Unisyn Software’s MailJail and other antispam products combat spam by working in conjunction with your e-mail program to detect spam as you download it; they then either delete the spam or move it into a spam folder. No antispam software or service can completely rid you of this nuisance. Most use rules, or filters, to detect specific incoming messages and then route them to a mailbox or play an alert sound.

The best products come with preconfigured rules that detect common types of spam messages, but also allow you to add your own rules. Look for one that automatically builds rules based on messages you identify as being spam. You also want to be able to set rules that identify messages as not being spam. This lets legitimate e-mail through in case the software is overzealous.
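The rule mechanism described above boils down to a list of tests, each paired with a destination folder, checked in order. Here’s a minimal sketch; the patterns and addresses are invented examples, not any product’s actual rule set. Note that the not-spam rule comes first, so legitimate senders win even if a later rule would match.

```python
# Minimal rule-based mail routing: each rule is (test, folder), and the
# first matching rule decides where the message goes.

RULES = [
    # not-spam rule first: trusted senders always reach the inbox
    (lambda m: m["from"] in {"boss@example.com"}, "inbox"),
    # spam rules: subject keywords and known bulk-mail domains
    (lambda m: "MAKE MONEY FAST" in m["subject"].upper(), "spam"),
    (lambda m: m["from"].endswith("@bulk.example"), "spam"),
]

def route(message):
    for test, folder in RULES:
        if test(message):
            return folder
    return "inbox"          # no rule matched: deliver normally

msg = {"from": "x@bulk.example", "subject": "Hi there"}
print(route(msg))           # spam
```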

If you’ve already changed, find out if your old ISP offers a forwarding service. Some do, either free for a limited amount of time (usually one to three months) or for a monthly fee of around $10 or less. The best way to forward messages, however, is to act before you change ISPs. Sign up with a free e-mail and forwarding service like Bigfoot.com. You get a Bigfoot address (yourname@bigfoot.com), and you can set up your Bigfoot account to forward your mail to your current ISP’s mailbox. When you change ISPs, you don’t change your public e-mail address, just the mailbox to which your Bigfoot account forwards. Most Web-based e-mail services also offer forwarding features.

Simply blanking out the Return Address in the Options dialog in your e-mail program doesn’t keep an experienced Internet e-mailer from discovering who you are, but it’s a good place to start. The classic strategy for making e-mail anonymous is called “remailing.” Remailing chains e-mail addresses together, strips away the sender’s real name and address, and replaces it with a dummy address. Web sites like Anonymizer and Replay Associates provide this service for free.

These encoding schemes allow you to send attachments in e-mail. MIME (Multipurpose Internet Mail Extensions) and Uuencode (Unix-to-Unix Encode) work with all types of e-mail programs on many types of computer platforms. BinHex (Binary Hexadecimal) is primarily a Macintosh standard. MIME supports a broad range of attachment types and appears on its way to becoming the primary standard, although almost all e-mail packages support all three schemes.
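All three schemes solve the same problem: turning arbitrary binary bytes into printable text that mail systems can carry safely. MIME typically uses Base64, where every 3 bytes of the file become 4 printable characters. Python’s standard library shows the round trip (the bytes here are arbitrary sample data):

```python
import base64

# MIME carries binary attachments as Base64 text: 3 bytes in, 4
# printable characters out, so any system that passes plain text
# can pass the attachment intact.

original = bytes([0x00, 0xFF, 0x10, 0x80])   # arbitrary binary data
encoded = base64.b64encode(original)
decoded = base64.b64decode(encoded)

print(encoded)               # b'AP8QgA=='
print(decoded == original)   # True
```

The cost is size: Base64 output is about a third larger than the original file, which is why large attachments swell in transit.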

Lightweight Directory Access Protocol (LDAP) is an Internet standard protocol that reaches out to public name-and-address books, such as Bigfoot, Four11, Switchboard and WhoWhere, to let you search for people’s e-mail addresses. Unfortunately, your e-mail address, home phone number and mailing address might be listed with these services without your knowledge. If you’re concerned about privacy, visit these Web sites and search for your name. Most of them let you update your record to remove your phone number or street address. If not, you can request that they delete your record.

S/MIME (Secure MIME) is a recent extension to the MIME nontext file protocol that supports e-mail message encryption. S/MIME is based on RSA Data Security’s public-key encryption technology, which is similar to other public-key encryption methods, including PGP (Pretty Good Privacy). Public-key encryption uses two software keys, a public key and a private key. The person sending the message uses a public key (known to everyone) to encrypt it. The recipient then uses his private key to decode and read the message. Eudora Pro, Netscape’s Messenger, Outlook 98 and Outlook Express support S/MIME, PGP or both.
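The public/private key idea can be shown with textbook RSA and deliberately tiny numbers. This is only to illustrate the principle behind S/MIME and PGP; real keys are hundreds of digits long and use padding schemes, so numbers this small offer no security whatsoever.

```python
# Textbook RSA with toy numbers, purely to illustrate the idea:
# encrypt with the recipient's public key, decrypt with the private one.
# NEVER use numbers this small for actual security.

p, q = 61, 53
n = p * q            # 3233, shared by both keys
e = 17               # public exponent (public key is (n, e))
d = 2753             # private exponent: (e * d) % ((p-1)*(q-1)) == 1

message = 65                        # a message encoded as a number < n
ciphertext = pow(message, e, n)     # sender uses the PUBLIC key
plaintext = pow(ciphertext, d, n)   # recipient uses the PRIVATE key

print(ciphertext, plaintext)        # 2790 65
```

The asymmetry is the whole point: anyone can encrypt with the widely known public key, but only the holder of the private key can reverse the operation.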

HTML mail can display an e-mail message as if it were a Web page. Eudora Pro and other packages use a Microsoft-supplied component version of Internet Explorer to display HTML messages in e-mail. If the recipient’s e-mail client doesn’t support HTML mail, the message shows up as plain text.

You can configure your e-mail program to leave messages on the server after retrieval, which allows you to mirror received messages to two installations. When you use your notebook PC to access messages, it leaves the messages on the server. You can then access all your messages from your desktop PC when you return to the office. However, this method doesn’t let you access messages you sent from your notebook computer when you return to your office PC.

To do that, you need to use your e-mail software’s rules or filters to capture your outgoing messages to a folder that you later copy from your notebook to your office PC. Or you can e-mail a copy of outgoing messages to yourself, then access the messages from the server when you return to the office.

Instant messaging is probably best described as private chat. It uses a registering server and local client software, such as AOL’s Instant Messenger or Mirabilis’s ICQ, to let you identify other users of the software and server connected to the Internet at the same time as you. You create a list of approved people in your instant messaging circle, and when one or more of them are online while you are, you can open a chat window and type text messages back and forth. AOL, CompuServe and other online services have had this capability for years, but it’s relatively new on the Internet. Although different from e-mail, instant messaging provides another way to pass text messages back and forth with friends, family and business associates.

A mailing list is a group of e-mail addresses identified by a single name, or alias. When you send a message to that single address, it goes to everybody in the group. Mailing lists, sometimes known as mail groups, are often used by corporate e-mail servers to help provide a fast, consistent way of distributing messages to a department, committee or other group inside or outside the organization. When you port the same concept to the Internet, you get a somewhat different beast.

There are two types of mailing lists on the Internet. One is primarily a one-way broadcast, such as a newsletter subscription or announcement service. The other is a group discussion list, in which replies go back to the whole list, creating a group discussion in the mailbox of every subscriber. The group discussion list is a dying breed. Mailing lists can be incredibly useful, or incredibly frustrating, depending on your level of interest in the topic at hand.


ISDN is an acronym for Integrated Services Digital Network, a high-speed digital telephone service. ISDN is faster and more reliable than analog POTS (plain old telephone service), and lets you make a voice telephone call while simultaneously transmitting or receiving data with your modem. ISDN lets computers communicate at up to 128Kb per second, more than double the speed of the fastest analog modems.

BRI (Basic Rate Interface) ISDN, the most common type of ISDN in North America, consists of two B (bearer) channels that carry data at up to 64Kbps and a D (delta) channel that works at 16Kbps and controls the circuit. Most ISDN terminal adapters can bond, or combine, the 64Kbps B channels into a single 128Kbps channel. (More on creating a 128Kbps connection in the next question.)
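The bandwidth arithmetic is worth making concrete. Here’s an idealized comparison of how long a 1MB download takes on one B channel versus two bonded channels; real transfers carry protocol overhead, so actual times run somewhat longer.

```python
# Idealized transfer times over BRI ISDN, ignoring protocol overhead.
# "Kbps" here means 1,000 bits per second.

def seconds_for(file_bytes, kbps):
    return file_bytes * 8 / (kbps * 1000)

FILE = 1_000_000                    # a 1MB download
one_b = seconds_for(FILE, 64)       # single B channel
bonded = seconds_for(FILE, 128)     # two B channels bonded

print(one_b, bonded)                # 125.0 62.5
```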

An ISDN connection is almost instantaneous. You don’t have to wait 30 seconds or more for your call to connect. While analog modems don’t always connect at your modem’s maximum speed, you know your ISDN B channel connection will be at 64Kbps, or 128Kbps if you’re combining channels.

To take advantage of the full bonding capabilities, make sure your ISDN terminal adapter or router supports Multilink Point-to-Point Protocol (Multilink PPP). Multilink PPP lets equipment from different vendors communicate, and allows you to add and delete channels as needed. Without Multilink PPP, you’re stuck with connection speeds of 64Kbps. To connect to the Internet at 128Kbps, your ISP must also support dual-channel transmissions. The Bandwidth Allocation Control Protocol (BACP) works with Multilink PPP to control bandwidth more efficiently, but isn’t widely implemented yet.

The major catch with an ISDN line is that it only connects to other ISDN lines. You can’t connect from an ISDN terminal adapter to a regular analog modem. That means you can’t send data to a co-worker or friend who has an analog modem. Therefore, you must make sure your ISP and other planned connection points support ISDN and have compatible equipment.

Availability isn’t much of a problem anymore because most ISPs now support ISDN, as do telephone companies in most parts of the country. But until Always On/Dynamic ISDN becomes widely available next year, ISDN remains inconvenient for push technologies and background Web updates because its B channels are closed until activated. ISDN also costs more than regular phone service, although prices are dropping fast.

That depends on where you live. There are three fees involved: a one-time installation charge (usually between $100 and $200), a monthly rate and a per-call charge. Some phone companies will waive the installation charge if you promise to keep the service for three years. Local calls usually cost pennies per minute on each B channel, with long-distance calls billed at higher rates.

You usually get a certain number of free minutes each month. Contact your local telco for ISDN availability and price structure. We found one West Coast phone company charging $195.75 to install an ISDN business line, with a monthly fee of $28.82, plus $29.95 for 30 hours of B channel usage per month. Another major company on the East Coast offers a package of $124.02 for installation, $38.98 monthly fees, plus $9.60 for 20 hours. Your local provider may offer several pricing options, depending on how much you use ISDN.
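To see how those numbers add up, here’s the first-year cost of the West Coast package quoted above, assuming usage stays within the 30 included hours (the article doesn’t give an overage rate, so none is modeled):

```python
# First-year cost of the quoted West Coast ISDN package: $195.75 to
# install, $28.82/month line fee, $29.95/month for 30 hours of usage.
# Assumes usage never exceeds the included 30 hours.

install = 195.75
monthly = 28.82 + 29.95             # line fee + usage package

first_year = install + 12 * monthly
print(round(first_year, 2))         # 900.99
```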

The main thing you need is a terminal adapter (TA), which is the ISDN equivalent of an analog modem. People often refer to them as ISDN modems, but that term isn’t technically correct.

Modems are so named because they modulate and demodulate signals. There’s no modulation and demodulation necessary with the digital transmission used by ISDN. TAs look like regular modems. You install them and set them up the same way, and they’re available in internal and external models.

Terminal adapters let your computer talk to the ISDN circuit, control the calls and manipulate the B channels for effective communication. You might need bridges and routers if you’ll be connecting several PCs to a network or the Internet. Fortunately, most current telephone wiring works with ISDN, and you’ll need only a plain phone jack, known as a U-interface.

In addition, many ISDN devices come with built-in POTS interfaces that let you plug in regular telephones for voice calls over ISDN. You also need NT1 equipment (see next question).

NT1 units are Network Termination devices that allow terminal adapters, bridges, routers or ISDN-capable telephones to take advantage of ISDN’s multiple channels. NT1 units let you connect up to seven devices to a single BRI ISDN circuit. You can daisy-chain computers, telephones, fax machines, printers and other equipment to the ISDN circuit through NT1. These devices contend for a channel on an as-needed basis. In North America, NT1 is already built into most terminal adapters, but if you need to buy an NT1 device separately, it should cost under $100. In Europe, Japan and other parts of the world, the telephone company owns and provides the NT1 unit.

Always On/Dynamic ISDN (AO/DI) should be widely available in 1999. It is expected to help keep rates down by using the D channel to transmit data. AO/DI combines packet switching and circuit switching to cut down wasted bandwidth. It uses the D channel for continuous data flow through a packet-based protocol called X.25.

The D channel is only 16Kbps, but it is always up and normally isn’t charged a usage rate. So you can use the D channel to check e-mail, transfer small files, make database queries and perform other transfers of data small enough to be handled in the background over a slower link. The D channel will also be available to handle information from push systems.

Costs for using AO/DI haven’t been set yet. But you probably won’t pay usage rates on your B channels; you’ll likely be charged a minimal price based on the number of X.25 packets sent over the D channel. When data streams get heavy during large file transfers, the B channels will take over at their regular rates to quickly finish these large transmissions.