David Darling

COMPUTERS OF THE FUTURE: Intelligent Machines and Virtual Reality - 3. Chips, Changes, and Challenges

Figure 1. A Cray X-MP/48 supercomputer at CERN, the European center for particle physics research near Geneva, Switzerland, in 1994.


Figure 2. A computer simulation carried out on a supercomputer shows how air flows around the space shuttle.


Figure 3. A technician at the University of Colorado works on the first optical computer capable of storing and manipulating data and instructions as pulses of light. The computer was first demonstrated in January 1993.


Figure 4. This tiny memory chip from a modern computer fits easily on the tip of a child's finger.


According to one estimate, if cars had progressed as much as computers over the past 40 years, then a car today would cost less than 15 cents, go over 1 million miles on a gallon of gas, and travel at five times the speed of sound. The astonishing rate of development of computer machinery and programming seems likely to continue in the years to come. As a rough guide, we can expect the processing speed and storage capacity of computers to double about every 18 months.
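
To see what that rule of thumb implies, the short Python sketch below (a rough illustration, assuming an exact 18-month doubling period) works out how many times over a computer's speed or storage would multiply after a given number of years.

```python
# A rough illustration of the "doubling every 18 months" rule of thumb.
# The exact 18-month period is an assumption; real progress is less regular.

def growth_factor(years, doubling_period_years=1.5):
    """How many times capacity multiplies after the given number of years."""
    return 2 ** (years / doubling_period_years)

for years in (3, 9, 15):
    print(f"After {years:2d} years: about {growth_factor(years):,.0f} times as much")
```

Under that assumption, capacity roughly quadruples in three years and grows about a thousandfold in fifteen.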

 


Making Light Work

Just one thin CD-ROM can hold all the information in a 20-volume encyclopedia, or about 600 million bytes. But personal computers have hard disk drives that dwarf this capacity, and future machines will be able to hold even more. Researchers in the United States and other countries are investigating new forms of computer storage that improve upon the flat, or two-dimensional, disks used at present. They are developing ways to write and read data in three dimensions by building up and decoding layer upon layer of information in a solid block of material.

 

One promising approach to 3-D storage uses a cube of a special light-sensitive chemical called spirobenzopyran (SP). The crystal structure of this substance changes when it is hit simultaneously by beams of green and infrared (IR) light. The green and IR light encode data in the cube, just as information in Morse code is encoded by a series of dots and dashes. In the case of the 3-D technology, however, the dots and dashes are points where the crystal structure of the SP has been changed and points where it has been left unchanged.

 

Here's how the process works. Points within a block of SP that have been altered by a combination of green and infrared rays will glow afterward when they are exposed to green light. The glowing points mark locations where information has previously been written in the cube. A special scanner then detects the glowing points and reads out the information. A cube of SP about the size of five audio-cassette boxes stacked together could hold as much information as 250 CD-ROMs and could be read 1,000 times faster.
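
As a rough way to picture the write-and-read cycle described above, the Python sketch below models the SP block as a three-dimensional grid of points: "writing" marks a point as altered, the way the crossed green and infrared beams would, and "reading" reports whether a point glows. The grid size and the class and method names are illustrative assumptions, not part of the real technology.

```python
# Toy model of 3-D optical storage: a block of points, each one either
# altered (it will glow under green light) or left unchanged.

class StorageCube:
    def __init__(self, size):
        # A size x size x size block, every point initially unchanged (0).
        self.size = size
        self.points = [[[0] * size for _ in range(size)] for _ in range(size)]

    def write(self, x, y, z):
        """Simulate crossing the green and infrared beams at one point."""
        self.points[x][y][z] = 1

    def read(self, x, y, z):
        """Simulate shining green light: an altered point 'glows' (returns 1)."""
        return self.points[x][y][z]

cube = StorageCube(size=8)   # 8 x 8 x 8 = 512 storage points
cube.write(2, 5, 7)          # store one bit deep inside the block
print(cube.read(2, 5, 7))    # -> 1, the point glows
print(cube.read(0, 0, 0))    # -> 0, an unchanged point stays dark
```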

 

Scientists are also exploring the possibility of substituting beams of light for pulses of electricity to operate computer processors. These so-called OPTICAL COMPUTERS (see box below) would operate many times faster than machines built with ordinary chips. Optical computers would also eliminate difficult technical problems that arise when tiny electrical devices are crammed very close together, as is the case with conventional computers.

 


Supercomputers

The most powerful computers in the world are called supercomputers (see Figure 1). Among other things, these computers are used to design new cars and planes, to forecast the weather days in advance, to help geologists find out where to drill for new deposits of oil and natural gas, and to rapidly produce high-quality pictures that include shading and texture (see Figure 2). Supercomputers can do the trillions of calculations needed to show what happens when a giant star explodes or when tiny particles of matter crash into one another at high speed.

 

Supercomputers are also used as mathematical laboratories where, instead of real experiments, scientists carry out SIMULATIONS. For instance, engineers at an automobile company might use a supercomputer to calculate what would happen if a new type of car smashed into a wall at 30 miles per hour. Would the car be able to protect the passengers from serious injury? Smashing up real cars is expensive, whereas performing a series of crash simulations on a computer can be done quickly and at relatively low cost.
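
To give a flavor of what such a simulation involves, on a vastly simpler scale than a real supercomputer run, the Python sketch below treats the car's front end as a simple spring and steps the crash forward in time to estimate how hard the passengers would be decelerated. The mass, stiffness, and spring model are illustrative assumptions, not real engineering data.

```python
# A drastically simplified crash "simulation": a car hitting a wall at
# 30 miles per hour, with its crumple zone modeled as a spring.
# All numbers are illustrative assumptions.

mass = 1500.0           # car mass in kilograms
stiffness = 500000.0    # crumple-zone stiffness in newtons per meter
speed = 13.4            # 30 miles per hour expressed in meters per second
dt = 0.0001             # time step in seconds

x, v = 0.0, speed       # how far the front end has crushed, and the car's speed
peak_decel = 0.0

while v > 0:                            # step forward until the car has stopped
    accel = -stiffness * x / mass       # force from the crumpling front end
    v += accel * dt
    x += v * dt
    peak_decel = max(peak_decel, -accel)

print(f"Crumple distance: {x:.2f} meters")
print(f"Peak deceleration: {peak_decel / 9.8:.1f} g")
```

Changing the stiffness figure and rerunning the program is the computer-age equivalent of building and crashing another prototype.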

 

In 2012 the fastest computer in the world, called Sequoia, could do up to 20 quadrillion calculations a second. Incredibly quick as this is, researchers in many different fields of science and technology are faced with tasks that require even speedier machines. Such tasks may involve carrying out extremely complex sequences of calculations, creating highly involved simulations, or making accurate predictions about natural phenomena.

 

One way to make computers that work faster is to use faster components. Speedier chips and speedier ways of moving information from one part of a computer to another are constantly being developed.

 

Another approach to making faster computers is to build them so that they can carry out many calculations at once. This involves using not just one central processor but a group of processors that work together on a task. Such an approach to building computers is called PARALLEL PROCESSING. In theory, a parallel processor made from hundreds or thousands of separate processing elements ought to be very fast indeed. But how quick it proves to be in practice depends on whether it is given software that can keep its many processors busy all the time. Developing better programs for parallel processing is an important challenge for those who make and use supercomputers.
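
As a tiny illustration of the idea, and not of how a real supercomputer is programmed, the Python sketch below splits one job, adding up a long list of numbers, across several worker processes and then combines their partial answers.

```python
# A minimal parallel-processing sketch: several processes each work on a
# piece of the same task at once, and their results are combined at the end.

from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker process adds up its own slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = 4
    step = len(data) // workers
    chunks = [data[i * step:(i + 1) * step] for i in range(workers)]

    with Pool(workers) as pool:
        results = pool.map(partial_sum, chunks)   # the pieces run in parallel

    print(sum(results))   # combine the partial answers: 499999500000
```

Even in this toy example, the speed-up depends on dividing the work evenly enough to keep every processor busy, which is a miniature version of the software challenge described above.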

 


Inside Optical Computers


Today, bright flashes of laser light can be sent hundreds of miles along fine strands of specially made glass or plastic called OPTICAL FIBERS. In the future, these fibers will be used to replace ordinary wires in a revolutionary type of computer – the optical computer (see Figure 3). Instead of transistors, such a computer will have TRANSPHASORS. These are switches that are activated by beams of light rather than by pulses of electricity. Experimental transphasors have already been made to flip on and off 1,000 times faster than any present-day switch. And unlike transistors, transphasors can be built to handle several incoming signals at once. Beams of light can crisscross and overlap without becoming mixed up, whereas crossed electric currents would get hopelessly confused. Optical computers will have other advantages, too. Many instructions or pieces of data could be sent through such a computer along one optical fiber. Also, the arrangement of connections and switches would not have to be flat, as in an electronic computer. It could be placed in any direction in space, allowing totally new designs in information processing.

 


When Computers Go Wrong

Machines can be very useful – until they break down. And in the case of computers, this can lead to serious consequences. For as we come to depend increasingly on computers, we could be left helpless when something goes very wrong with them. Even a small defect in a widely used program or piece of hardware could cause many computers around the world to start making mistakes. For instance, if one such mistake were to occur in a computer that helped fly airplanes, the result could be disastrous.

 

Faulty chips pose a particularly serious threat. Many new computers, which may look different from the outside, contain the same type of processing chips inside. If a tiny error is made in the design of such a chip and the error goes undetected by the manufacturer, it can cause problems in a very large number of computers.

 

In 1994, a powerful new chip, 5 million copies of which had been put inside computers, was found to contain a design flaw. People discovered that it made mistakes when carrying out certain kinds of long division. In most everyday tasks performed by the computers, the problem did not show up. But researchers who had used computers containing the flawed chip to do long, complex calculations realized that their work might have been badly affected. Among those most concerned were scientists and engineers at NASA's Lyndon B. Johnson Space Center in Houston, Texas. They had relied on ten computers with the defective chip to carry out important stress calculations and flight simulations on the space shuttle. After finding out about the chip problem, NASA scientists could no longer trust results that had taken months to obtain. They had to repeat many of the calculations on different computers, wasting a great deal of time and money.
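
One simple way to catch this kind of flaw is to have the machine divide test numbers and compare its answers with results computed by slower but trustworthy software arithmetic. The Python sketch below illustrates the idea; the first pair of numbers is the division that was widely used to expose the 1994 flaw, and the others are arbitrary extra checks.

```python
# A minimal self-test sketch: compare the processor's floating-point division
# against a software reference computed with exact decimal arithmetic.

from decimal import Decimal, getcontext

getcontext().prec = 30   # plenty of digits for the reference answers

# The first pair is the division widely used to expose the 1994 flaw;
# the rest are arbitrary additional checks.
test_pairs = [(4195835, 3145727), (22, 7), (355, 113)]

for a, b in test_pairs:
    hardware = Decimal(a / b)              # result of the chip's divide circuitry
    reference = Decimal(a) / Decimal(b)    # software reference, avoids that circuitry
    error = abs(hardware - reference) / reference
    status = "suspect" if error > Decimal("1e-12") else "ok"
    print(f"{a} / {b}: {status} (relative error {error:.1E})")
```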

 

In the future, as everyone from surgeons to astronauts comes to depend more and more on computers, the issue of faulty hardware and software will grow in importance. More reliable ways of testing new chips and programs will become essential if expensive projects and even people's lives are not to be put at risk.

 
