computer terminology

access time

The time interval between the instant at which a call for data is initiated and the instant at which the delivery of the data is completed.



address

– A number, character, or group of characters which identifies a given device or a storage location which may contain a piece of data or a program step.

– To refer to a device or storage location by an identifying number, character, or group of characters.


analog computer

A type of computer that processes continuously variable information. Information is usually first converted into proportional electrical quantities. These are manipulated by amplifiers and other circuits that perform various mathematical operations. In other words, the analog computer solves problems by dealing with quantities (voltages) that are analogous (similar) to the quantities in the problem. Analog computers are time-consuming to set up and operate. Most work once done on analog computers is now carried out on digital computers, which are simpler and quicker to use.


analog-to-digital converter

An input-related device that translates an input device's (sensor's) analog signals into the corresponding digital signals needed by the computer.


arithmetic logic unit

The (high speed) circuits within the central processing unit (CPU) which are responsible for performing the arithmetic and logical operations of a computer.


ASCII (American Standard Code for Information Interchange)

A seven-bit code adopted as a standard to represent specific data characters in computer systems, and to facilitate interchange of data between various machines and systems. ASCII provides 128 possible characters, the first 32 of which are non-printing characters used for print and transmission control. Since common storage is an 8-bit byte (256 possible characters) and ASCII uses only 128, the extra bit is used to hold a parity bit or to create special symbols.


Extended ASCII is the second half of the ASCII character set, characters 128 through 255. The symbols are defined by IBM for the PC and by other vendors for proprietary use. It is non-standard ASCII.
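The byte-level arithmetic described above can be sketched in a few lines (illustrative Python; the function name and the even-parity convention are assumptions for the example, not part of any standard):

```python
# A small sketch showing 7-bit ASCII codes and how the spare eighth bit
# can carry an even-parity check, as described in the definition above.

def ascii_with_even_parity(ch: str) -> int:
    """Return an 8-bit value: the 7-bit ASCII code plus an even-parity MSB."""
    code = ord(ch)
    if code > 127:
        raise ValueError("not a 7-bit ASCII character")
    parity = bin(code).count("1") % 2      # 1 if the count of 1-bits is odd
    return (parity << 7) | code            # set bit 7 so the total 1-bits is even

print(ord("A"))                      # 65 -> binary 1000001
print(ascii_with_even_parity("A"))   # 65: 'A' already has an even number of 1-bits
print(ascii_with_even_parity("C"))   # 'C' = 67 = 1000011 (three 1-bits) -> 195
```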


assembly language

A low-level programming language that corresponds closely to the instruction set of a given computer, allows symbolic naming of operations and addresses, and usually results in a one-to-one translation of program instructions [mnemonics] into machine instructions. Programs written using these codes are translated by an assembler into a form the computer can understand.
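The one-to-one translation of mnemonics into machine instructions can be illustrated with a toy sketch (the three-instruction machine, its opcodes, and its symbol table below are entirely hypothetical):

```python
# A toy illustration of what an assembler does: each mnemonic maps to one
# machine instruction, and symbolic names stand in for numeric addresses.
# This hypothetical machine packs a 4-bit opcode and an 8-bit address.

OPCODES = {"LDA": 0x1, "ADD": 0x2, "STA": 0x3}   # mnemonic -> opcode
SYMBOLS = {"X": 0x10, "Y": 0x11, "SUM": 0x12}    # symbolic address table

def assemble(line: str) -> int:
    """Translate 'MNEMONIC SYMBOL' into one machine word: opcode | address."""
    mnemonic, operand = line.split()
    return (OPCODES[mnemonic] << 8) | SYMBOLS[operand]

program = ["LDA X", "ADD Y", "STA SUM"]
machine_code = [assemble(line) for line in program]
print([hex(word) for word in machine_code])   # ['0x110', '0x211', '0x312']
```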


A cross-assembler is an assembler that executes on one computer but generates object code for a different computer.



BASIC

An acronym for Beginner's All-purpose Symbolic Instruction Code, a high-level programming language intended to make it easy to learn how to program in an interactive environment; it uses many everyday words. To run, BASIC computer programs usually require a separate program called an interpreter, which converts the BASIC code into the machine code required by the computer's processor.



block

– A string of records, words, or characters that for technical or logical purposes are treated as a unit.


– A collection of contiguous records that are recorded as a unit, and the units are separated by interblock gaps.


– A group of bits or digits that are transmitted as a unit and that may be encoded for error-control purposes.


– In programming languages, a subdivision of a program that serves to group related statements, delimit routines, specify storage allocation, delineate the applicability of labels, or segment parts of the program for other purposes. In FORTRAN, a block may be a sequence of statements; in COBOL, it may be a physical record.


Block check is the part of the error control procedure that is used for determining that a block of data is structured according to given rules.


Block length is a measure of the size of a block, usually specified in units such as records, words, computer words, or characters.


Block transfer is the process, initiated by a single action, of transferring one or more blocks of data.


Blocking factor, also called grouping factor, is the number of records in a block. The number is computed by dividing the size of the block by the size of each record contained therein.
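The blocking-factor computation is simple division; a sketch with illustrative numbers (the 4096-byte block and 128-byte record sizes are assumptions, not values from the glossary):

```python
# Blocking factor = block size divided by record size, as defined above.

block_size_bytes = 4096
record_size_bytes = 128

blocking_factor = block_size_bytes // record_size_bytes
print(blocking_factor)   # 32 records per block
```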



boot

– To initialize a computer system by clearing memory and reloading the operating system.


– To cause a computer system to reach a known beginning state. A boot program, in firmware, typically performs this function which includes loading basic instructions which tell the computer how to load programs into memory and how to begin executing those programs.


A distinction can be made between a warm boot and a cold boot. A cold boot means starting the system from a powered-down state. A warm boot means restarting the computer while it is powered up. Important differences between the two procedures are: (i) a power-up self-test, in which various portions of the hardware (such as memory) are tested for proper operation, is performed during a cold boot, while a warm boot does not normally perform such self-tests; and (ii) a warm boot does not clear all memory.



buffer

A device or storage area used to store data temporarily to compensate for differences in rates of data flow, time of occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data.



bus

– A common pathway along which data and control signals travel between different hardware devices within a computer system. (A) When bus architecture is used in a computer, the CPU, memory, and peripheral equipment are interconnected through the bus. The bus is often divided into two channels, a control channel to select where data is located (address bus), and another to transfer the data (data bus or I/O bus). Common buses are: ISA (Industry Standard Architecture), the 16-bit bus of the IBM PC AT, extended from the original 8-bit PC bus; EISA (Extended Industry Standard Architecture), a 32-bit extension of the ISA bus, which provides for bus mastering; MCA (Micro Channel Architecture), an IBM 32-bit bus; Multibus I & II (advanced 16- and 32-bit bus architectures, respectively, by Intel, used in industrial, military, and aerospace applications); NuBus, a 32-bit bus architecture originally developed at MIT (a version is used in the Apple Macintosh computer); STD bus, a bus architecture used in medical and industrial equipment due to its small size and rugged design (originally 8 bits, with extensions to 16 and 32 bits); TURBOchannel, a DEC 32-bit data bus with peak transfer rates of 100 MB/second; and VMEbus (VersaModule Eurocard bus), a 32-bit bus from Motorola et al., used in industrial, commercial, and military applications worldwide (VME64 is an expanded version that provides 64-bit data transfer and addressing). (B) When bus architecture is used in a network, all terminals and computers are connected to a common channel made of twisted wire pairs, coaxial cable, or optical fibers. Ethernet is a common LAN architecture using a bus topology.


– Part of a spacecraft's payload which provides a platform for experiments or contains one or more atmospheric entry probes.



C

A general-purpose high-level programming language. Created for use in the development of computer operating system software, it strives to combine the power of assembly language with the ease of a high-level language.



C++

An object-oriented high-level programming language.



CD-ROM

Compact disk read-only memory: a compact disk used for the permanent storage of text, graphic, or sound information. Digital data is represented very compactly by tiny pits that are read by a laser attached to a high-resolution sensor. A CD-ROM is capable of storing up to about 680 MB of data, equivalent to 250,000 pages of text or 20,000 medium-resolution images. This storage medium is often used for archival purposes.


central processing unit (CPU)

The unit of a computer that includes the circuits controlling the interpretation of program instructions and their execution. The CPU controls the entire computer. It receives and sends data through input-output channels, retrieves data and programs from memory, and conducts mathematical and logical functions of a program.


check summation

A technique for error detection to ensure that data or program files have been accurately copied or transferred. Basically, a redundant check in which groups of digits (e.g., a file) are summed, usually without regard to overflow, and that sum is checked against a previously computed sum to verify operational accuracy.



checksum

A sum obtained by adding the digits in a numeral, or group of numerals (a file), usually without regard to meaning, position, or significance.
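A minimal sketch of the idea (illustrative Python; summing bytes modulo 256 is one common convention, not the only one):

```python
# The bytes of a "file" are summed without regard to overflow (here,
# modulo 256), and the stored sum is compared after a copy or transfer
# to verify accuracy, as described in the definitions above.

def checksum(data: bytes) -> int:
    return sum(data) % 256   # overflow beyond one byte is discarded

original = b"HELLO"
stored_sum = checksum(original)

copied = b"HELLO"            # pretend this came back from a transfer
print(checksum(copied) == stored_sum)    # an accurate copy matches

corrupted = b"HELLP"         # a single-character error
print(checksum(corrupted) == stored_sum) # the mismatch is detected
```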



client-server

A term used in a broad sense to describe the relationship between the receiver and the provider of a service. In the world of microcomputers, the term client-server describes a networked system where front-end applications, as the client, make service requests upon another networked system. Client-server relationships are defined primarily by software. In a local area network (LAN), the workstation is the client and the file server is the server. However, client-server systems are inherently more complex than file server systems. Two disparate programs must work in tandem, and there are many more decisions to make about separating data and processing between the client workstations and the database server. The database server encapsulates database files and indexes, restricts access, enforces security, and provides applications with a consistent interface to data via a data dictionary.



COBOL

A high-level language designed primarily for business use; it was first developed in 1959. COBOL has the advantages that it can be easily learned and understood by people without technical backgrounds, and that a program designed for one computer may be run on another with minimal alteration.



coding

– In software engineering, the process of expressing a computer program in a programming language.


– The transforming of logic and data from design specifications (design descriptions) into a programming language.


Coding standards are written procedures describing coding (programming) style conventions: rules governing the use of individual constructs provided by the programming language, and naming, formatting, and documentation requirements. They prevent programming errors, control complexity, and promote understandability of the source code.



compiler

A computer program that translates programs expressed in a high-level language into their machine language equivalents. The compiler takes the finished source code listing as input and outputs the machine code instructions that the computer must have to execute the program. A cross-compiler is a compiler that executes on one computer but generates assembly code (see assembly language) or object code for a different computer.


computer instruction set

Also called the machine instruction set, a complete set of the operators of the instructions of a computer together with a description of the types of meanings that can be attributed to their operands.


computer system

A functional unit, consisting of one or more computers and associated peripheral input and output devices, and associated software, that uses common storage for all or part of a program and also for all or part of the data necessary for the execution of the program; executes user-written or user-designated programs; performs user-designated data manipulation, including arithmetic operations and logic operations; and that can execute programs that modify themselves during their execution. A computer system may be a stand-alone unit or may consist of several interconnected units.


computer-aided design (CAD)

The use of computers to design products. CAD systems are high speed workstations or personal computers using CAD software and input devices such as graphic tablets and scanners to model and simulate the use of proposed products. CAD output is a printed design or electronic output to computer-aided manufacture (CAM) systems. CAD software is available for generic design or specialized uses such as architectural, electrical, and mechanical design. CAD software may also be highly specialized for creating products such as printed circuits and integrated circuits.


computer-aided manufacturing (CAM)

The automation of manufacturing systems and techniques, including the use of computers to communicate work instructions to automated machinery for the handling of the processing (numerical control, process control, robotics, material requirements planning) needed to produce a workpiece.



cyberspace

Popular term for the perceived "virtual" space within computer memory, especially if displayed graphically. The term comes from science fiction, where it usually refers to situations involving a direct interface between brain and computer. During the 1990s the term became widespread in reference to the Internet and the World Wide Web.



data

Representations of facts, concepts, or instructions in a manner suitable for communication, interpretation, or processing by humans or by automated means.


Data structure is a physical or logical relationship among data elements, designed to support specific data manipulation functions.


data compression

A technique in computing to reduce the amount of storage space occupied by data. Methods include representing common characters by fewer bits than normal and storing frequently used words as shorter words (tokenization). Long sequences of repeated characters can be replaced by a single character and a count of how many there are, a technique called run-length encoding. By such means, text files can be reduced by up to 50% and digitized images by about 90%.


Compression techniques can be divided into two main types: lossy and non-lossy (lossless). With non-lossy compression, there is no loss in the quality of the data. With lossy compression, the compression is greater but the quality of the data is reduced; when the data is uncompressed it will be slightly different from before it was compressed. Lossy compression is used chiefly for images, video, and music.


data processing

A systematic sequence of operations performed on data, especially by a computer, in order to calculate new information or revise or update existing information stored on magnetic or optical disk, magnetic tape, and so on. The data may be in the form of numerical values representing measurements, scientific or technical facts, or lists of names, places, or book titles.


The main processing operations performed by a computer are arithmetical addition, subtraction, multiplication, and division, and logical operations that involve decision-making based on comparison of data. For the latter, an instruction might read: "If condition a holds, then follow programmed instruction P; if a does not hold, then follow instruction Q".


data protection

Measures taken to guard data against unauthorized access. Computer technology now makes it easy to store large amounts of data containing, for example, a person's medical or financial details. Many governments have passed data-protection legislation ensuring that such databases are registered and the information that they contain is used only for the purpose for which it was originally given.



driver

Also called a device driver, a program that links a peripheral device or internal function to the operating system and provides for activation of all device functions.


electronic mail (e-mail)

Correspondence sent via a computer network. In a simple system, messages produced using word processing programs are transmitted over a network (which could be a small company network or the worldwide Internet) and stored in a computer called a mail server. Anyone connected to the network can contact the mail server to check whether it is holding mail for them. If it is, they can transfer the messages to their own computer, and print them if a permanent record is needed. Electronic mail is transmitted rapidly, and costs are low, even for international communications.


embedded computer

A device which has its own computing power dedicated to specific functions, usually consisting of a microprocessor and firmware. The computer becomes an integral part of the device as opposed to devices which are controlled by an independent, stand-alone computer. It implies software that integrates operating system and application functions.



emulation

– A model that accepts the same inputs and produces the same outputs as a given system.


– To imitate one system with another.


EPROM (erasable programmable read only memory)

Chips which may be programmed by using a PROM programming device. Before programming, each bit is set to the same logical state, either 1 or 0. Each bit location may be thought of as a small capacitor capable of storing an electrical charge. The logical state is established by charging, via an electrical current, all bits whose states are to be changed from the default state. EPROMs may be erased and reprogrammed because the electrical charge at the bit locations can be bled off (i.e., reset to the default state) by exposure to ultraviolet light through a small quartz window on top of the IC. After programming, the IC's window must be covered to prevent exposure to UV light until it is desired to reprogram the chip. An EPROM eraser is a device for exposing the IC's circuits to UV light of a specific wavelength for a certain amount of time.



file

– Also called a data set, a set of related records treated as a unit; e.g., in stock control, a file could consist of a set of invoices.


– The largest unit of storage structure that consists of a named collection of all occurrences in a database of records of a particular record type.


File maintenance is the activity of keeping a file up to date by adding, changing, or deleting data.


file transfer protocol (FTP)

– Communications protocol that can transmit binary and ASCII data files without loss of data.


– TCP/IP protocol that is used to log onto the network, list directories, and copy files. It can also translate between ASCII and EBCDIC.



flag

Also called an indicator, a variable that is set to a prescribed state, often "true" or "false", based on the results of a process or the occurrence of a specified condition.



flowchart

– Also called a flow diagram, a graphical representation in which symbols are used to represent such things as operations, data, flow direction, and equipment, for the definition, analysis, or solution of a problem.


– A control flow diagram in which suitably annotated geometrical figures are used to represent operations, data, or equipment, and arrows are used to indicate the sequential flow from one to another.



FORTRAN

FORmula TRANslation. An early high-level computer language designed specifically for writing programs involving mathematical and scientific computations; it provided the basis for BASIC and other languages. Newer versions (Fortran 90 and later, such as Fortran 2003) are ISO-standard languages supporting advanced applications such as parallel computing. FORTRAN was developed in the mid-1950s by a team led by John Backus at IBM.


genetic algorithm

A type of evolving computer program, first developed by the computer scientist John Holland, whose strategy for arriving at solutions is based on principles taken from genetics. Basically, a genetic algorithm utilizes the mixing of genetic information in sexual reproduction, random mutations, and natural selection to arrive at solutions. Genetic algorithms, like biological systems, show adaptation.
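A minimal sketch of these ingredients on a toy problem (maximizing the number of 1-bits in a short bit-string; all parameters are illustrative, and this is not Holland's original formulation):

```python
# Toy genetic algorithm: selection, crossover (mixing of genetic
# information), and random mutation evolve bit-strings toward all 1s.
import random

random.seed(0)
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(ind):                      # count of 1-bits: higher is fitter
    return sum(ind)

def crossover(a, b):                   # single-point mixing of two parents
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(ind, rate=0.02):            # occasional random bit flips
    return [g ^ 1 if random.random() < rate else g for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # natural selection: the fitter half survives and reproduces
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(fitness(ind) for ind in population))   # approaches GENES (all 1s)
```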



half-adder

A circuit having two output points, S and C, representing the sum and carry, and two input points, A and B, representing addend and augend, such that the output is related to the input according to the following table.

A   B   S   C
0   0   0   0
0   1   1   0
1   0   1   0
1   1   0   1


A and B are arbitrary input pulses, and S and C are sum without carry and carry, respectively. Two half-adders, properly connected, may be used for performing binary addition and form a full serial adder.
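The table corresponds to the usual gate-level realization, S = A xor B and C = A and B (assumed here as the standard textbook construction), and two half-adders plus an OR gate form a full adder:

```python
# Half-adder: sum S is XOR, carry C is AND, matching the truth table above.

def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b          # (S, C)

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    # two half-adders plus an OR gate combine into a full adder
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2           # (S, C_out)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # reproduces the table row by row
```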



hardware

Physical equipment, as opposed to programs, procedures, rules, and associated documentation.


high-level language

A programming language which requires little knowledge of the target computer, can be translated into several different machine languages, allows symbolic naming of operations and addresses, provides features designed to facilitate expression of data structures and program logic, and usually results in several machine instructions for each program statement. Examples are PL/1, COBOL, BASIC, FORTRAN, Ada, Pascal, and C. Compare with assembly language.


Hollerith code

A computer code consisting of 12 levels, or bits per character, which defines the relation between an alphanumeric character and the punched holes in an 80-column computer data card. It is named after Herman Hollerith.



hypertext

A method by which one piece of data is linked to another piece of data. Hypertext is most commonly seen on the World Wide Web and on such things as interactive CD-ROMs. It usually manifests itself in the form of highlighted words (usually in a different color or underlined) which, when selected, take the viewer to associated material. This material may be many things, such as more text, a visual image, or a sound clip. Hypertext can be seen as performing a similar function in a multimedia environment as cross-references do in printed encyclopedias and dictionaries.


information storage and retrieval

A branch of computer science concerned with the organization, storage, searching, and retrieval of data.


Database-system retrieval involves the searching of large computer files for specific data, which may be organized into a variety of fields. Once a match is found, the information is made available. This type of system allows specific sets of data to be retrieved independently or with other sets.


Document-retrieval systems store and retrieve entire documents, which can be searched for and retrieved using the document name or key words within the document.


Reference-retrieval systems do not store documents themselves but references to documents. A search will make available the location of the relevant documents. This system is most commonly used in libraries, where printed matter such as books and periodicals can be quickly located. See also database.


information technology

Computer and telecommunications technologies used in processing information of any kind. For example, word processing, the use of a database, and the sending of messages over a computer network, all involve the use of information technology.


instruction set

The complete set of instructions recognized by a given computer or provided by a given programming language.



interpreter

An executive routine that translates a stored program expressed in some machine-like pseudo code into machine code and performs the indicated operations, by means of subroutines, as they are translated. An interpreter is essentially a closed subroutine that operates successively on an indefinitely long sequence of program parameters, the pseudo instructions, and operands.
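A toy sketch of the idea (the pseudo instructions below are hypothetical, and Python subroutines stand in for machine-code routines):

```python
# A toy interpreter: each stored pseudo instruction is decoded and
# carried out by a subroutine as it is encountered, one at a time.

def interpret(program):
    accumulator = 0
    for pseudo_op, operand in program:        # fetch each pseudo instruction
        if pseudo_op == "LOAD":
            accumulator = operand
        elif pseudo_op == "ADD":
            accumulator += operand
        elif pseudo_op == "MUL":
            accumulator *= operand
        else:
            raise ValueError(f"unknown pseudo instruction: {pseudo_op}")
    return accumulator

print(interpret([("LOAD", 2), ("ADD", 3), ("MUL", 4)]))   # (2 + 3) * 4 = 20
```

Unlike a compiler, no translated program is saved; the operations are performed immediately as each pseudo instruction is decoded.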



loader

A program which copies other (object) programs from auxiliary (external) memory to main (internal) memory prior to their execution.


local area network (LAN)

A communications network that serves users within a confined geographical area. It is made up of servers, workstations, a network operating system, and a communications link.


machine code

Instructions that the central processing unit (CPU) of a computer can execute directly, without the need for translation. Machine-code statements are written in a binary-coded (low-level) computer language. Programmers usually write computer programs in a high-level language (such as FORTRAN or C), which a compiler program then translates into machine code for execution.




macro

In software engineering, a predefined sequence of computer instructions that is inserted into a program, usually during assembly or compilation, at each place that its corresponding macroinstruction appears in the program.


main memory

A non-moving storage device utilizing one of a number of types of electronic circuitry to store information.



mainframe

Term used to describe a large and powerful computer and its associated storage devices. Users may be provided with small terminal units resembling personal computers; these have software that enables the user to access the data on the mainframe. The terminal units are linked to each other and to the mainframe by a computer network. Today's personal computers and laptops are much more powerful than early mainframe machines, so many tasks once carried out by mainframes are now done on PCs or mobile devices.



memory

The part of a computer used to hold data and instructions while they are being worked on. The term memory identifies data storage that comes in the form of chips, and the word storage is used for memory that exists on tapes or disks. Moreover, the term memory is usually used as a shorthand for physical memory, which refers to the actual chips capable of holding data. Some computers also use virtual memory, which expands physical memory onto a hard disk. Every computer comes with a certain amount of physical memory, usually referred to as main memory or RAM. You can think of main memory as an array of boxes, each of which can hold a single byte of information. A computer that has 1 megabyte of memory, therefore, can hold about 1 million bytes (or characters) of information.


There are several different types of memory:


– RAM (random-access memory): This is the same as main memory. When used by itself, the term RAM refers to read and write memory; that is, you can both write data into RAM and read data from RAM. This is in contrast to ROM, which permits you only to read data. Most RAM is volatile, which means that it requires a steady flow of electricity to maintain its contents. As soon as the power is turned off, whatever data was in RAM is lost.


– ROM (read-only memory): Computers almost always contain a small amount of read-only memory that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to.


– PROM (programmable read-only memory): A PROM is a memory chip on which you can store a program. But once the PROM has been used, you cannot wipe it clean and use it to store something else. Like ROMs, PROMs are non-volatile.


– EPROM (erasable programmable read-only memory): A special type of PROM that can be erased by exposing it to ultraviolet light.


– EEPROM (electrically erasable programmable read-only memory): A special type of PROM that can be erased by exposing it to an electrical charge.



microcode

Permanent memory that holds the elementary circuit operations a computer must perform for each instruction in its instruction set.



microprocessor

A silicon chip which contains the arithmetic and logic functions of a central processing unit.


model of computation

An idealized version of a computing device that usually has some simplifications such as infinite memory. A Turing machine and the lambda calculus are models of computation.



modem

An electronic device that modulates and demodulates signals. One of the functions of a modem is to enable digital data to be transmitted over analog transmission facilities. The term is a contraction of modulator-demodulator.



multiplexer

A device which takes information from any of several sources and places it on a single line or sends it to a single destination.
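A behavioral sketch of a 4-to-1 multiplexer (illustrative Python, not a hardware description):

```python
# A 4-to-1 multiplexer routes one of several input sources onto a
# single output line according to a select value.

def multiplexer(select: int, inputs: list[int]) -> int:
    return inputs[select]          # the select lines choose one source

sources = [7, 42, 13, 99]
print(multiplexer(1, sources))     # 42: source 1 is placed on the output
```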



multiprocessing

A mode of operation in which two or more processes (programs) are executed concurrently by separate CPUs that have access to a common main memory.



multiprogramming

A mode of operation in which two or more programs are executed in an interleaved manner by a single CPU.



multitasking

A mode of operation in which two or more tasks are executed in an interleaved manner.



network

– In general terms, an arrangement of nodes and interconnecting branches.


– Specifically, in the field of information processing and telecommunications, a system (transmission channels and supporting hardware and software) that connects several remotely located computers via telecommunications.


object oriented language

A programming language that allows the user to express a program in terms of objects and messages between those objects. Examples include C++, Smalltalk, and Java.


operating system

Software that controls the execution of programs, and that provides services such as resource allocation, scheduling, input/output control, and data management. Usually, operating systems are predominantly software, but partial or complete hardware implementations are possible.



Pascal

A programming language, named after Blaise Pascal, with a structured design which allows coding errors to be corrected rapidly. Developed from ALGOL, it is easy to learn. Its advantages over many other high-level languages are that it uses less memory and compiles quickly.


peripheral device

Equipment that is directly connected to a computer. A peripheral device can be used to input data; e.g., keypad, bar code reader, transducer, laboratory test equipment; or to output data; e.g., printer, disk drive, video system, tape drive, valve controller, motor controller.



pixel

– In image processing and pattern recognition, the smallest element of a digital image that can be assigned a gray level.


– In computer graphics, the smallest element of a display surface that can be assigned independent characteristics. This term is derived from the term "picture element."



platform

The hardware and software which must be present and functioning for an application program to run as intended. A platform includes, but is not limited to, the operating system or executive software, communication software, microprocessor, network, input/output hardware, any generic software libraries, database management, user interface software, and the like.



program

– A sequence of instructions suitable for processing. Processing may include the use of an assembler, a compiler, an interpreter, or another translator to prepare the program for execution. The instructions may include statements and necessary declarations.


– To design, write, and test programs.


– In programming languages, a set of one or more interrelated modules capable of being executed.

– Loosely, a routine.


– Loosely, to write a routine.


programming language

A set of rules for giving instructions to a computer. High-level languages use words similar to those in English and are therefore easier to learn; one statement may represent several machine instructions and must be converted into machine code by a compiler. Low-level languages are closer to machine code, and each statement represents one machine instruction.


random access memory (RAM)

Chips which can be called read/write memory, since the data stored in them may be read, or new data may be written into any memory address on these chips. The term random access means that each memory location (usually 8 bits, or 1 byte) may be directly accessed (read from or written to) at random. This contrasts with devices like magnetic tape, where each section of the tape must be searched sequentially by the read/write head from its current location until it finds the desired location. ROM is also random-access memory, but it is read-only, not read/write, memory. Another difference between RAM and ROM is that RAM is volatile, i.e., it must have a constant supply of power or the stored data will be lost.


read-only memory (ROM)

A memory chip from which data can only be read by the CPU. The CPU may not store data to this memory. The advantage of ROM over RAM is that ROM does not require power to retain its program. This advantage applies to all types of ROM chips: ROM, PROM, EPROM, and EEPROM.


real time

Pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results can be used to control, monitor, or respond in a timely manner to the external process. By contrast, the term "batch" pertains to a system or mode of operation in which inputs are collected and processed all at one time, rather than being processed as they arrive, and a job, once started, proceeds to completion without additional input or user interaction.
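The distinction can be sketched in Python (doubling each input stands in, illustratively, for any computation; both function names are invented):

```python
def batch_process(inputs):
    """Batch mode: collect all inputs first, then process them in one run."""
    collected = list(inputs)            # gather everything up front
    return [x * 2 for x in collected]   # no results until the whole job runs

def stream_process(inputs):
    """Real-time style: each result is produced as its input arrives."""
    for x in inputs:
        yield x * 2                     # available immediately, not at job end
```

Both produce the same answers; the difference is when each answer becomes available to the external process.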



register

A small, high-speed memory circuit within a microprocessor that holds addresses and values of internal operations; e.g., registers keep track of the address of the instruction being executed and the data being processed. Each microprocessor has a specific number of registers, depending upon its design.



server

A high-speed computer which is designed to be accessed by many other computers. Servers can be attached to local area networks and/or be hooked up to the Internet. With the proper software and connections, servers can control the distribution of email, store World Wide Web documents, and provide access to files that are shared by many users.



simulation

– Use of an executable model to represent the behavior of an object. During testing the computational hardware, the external environment, and even code segments may be simulated.


– A model that behaves or operates like a given system when provided a set of controlled inputs.

– Experimentation in the space of theories, or a combination of experimentation and theorization. Some numerical simulations are programs that represent a model for how nature works. Usually, the outcome of a simulation is as much a surprise as the outcome of a natural event, due to the richness and uncertainty of computation.
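As a minimal example of an executable model, the sketch below simulates an object dropped from rest using simple Euler time steps (the function name and step size are chosen for illustration):

```python
def simulate_fall(height, dt=0.01, g=9.81):
    """Step a falling-object model forward in time until it hits the ground."""
    t, v, y = 0.0, 0.0, height
    while y > 0:
        v += g * dt      # gravity accelerates the object each time step
        y -= v * dt      # the object descends by its current speed
        t += dt
    return t             # simulated time to fall, in seconds
```

For a 20 m drop the model returns roughly 2.0 s, close to the analytic value sqrt(2h/g); a smaller `dt` trades computing time for accuracy, which is the usual bargain in numerical simulation.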



software

Programs, procedures, rules, and any associated documentation pertaining to the operation of a system.


Application software is software designed to fill specific needs of a user; for example, software for navigation, payroll, or process control. System software is software designed to facilitate the operation and maintenance of a computer system and its associated programs; e.g., operating systems, assemblers, utilities.



subroutine

– The set of instructions necessary to direct a computer to carry out a well-defined mathematical or logical operation.


– A subunit of a routine. A subroutine is often written in relative or symbolic coding even when the routine to which it belongs is not.


– A portion of a routine that causes a computer to carry out a well-defined mathematical or logical operation.


– A routine which is arranged so that control may be transferred to it from a master routine and so that, at the conclusion of the subroutine, control reverts to the master routine. Such a subroutine is usually a closed subroutine (see below).


– A single routine may simultaneously be both a subroutine with respect to another routine and a master routine with respect to a third. Usually control is transferred to a single subroutine from more than one place in the master routine and the reason for using the subroutine is to avoid having to repeat the same sequence of instructions at different places in the master routine.


A closed subroutine is a subroutine not stored in the main path of the routine. Such a subroutine is entered by a jump operation and provision is made to return control to the main routine at the end of the operation. The instructions related to the entry and re-entry function constitute a linkage.
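The call-and-return pattern described above can be sketched in Python, where the language's call/return mechanism supplies the linkage (the names are invented for illustration):

```python
def hypotenuse(a, b):
    """A closed subroutine: stored once, outside the main path."""
    return (a * a + b * b) ** 0.5   # a well-defined mathematical operation

def master_routine():
    # Control transfers to the subroutine from more than one place,
    # and reverts here each time it finishes, so the same instruction
    # sequence never has to be repeated in the master routine.
    d1 = hypotenuse(3, 4)
    d2 = hypotenuse(5, 12)
    return d1 + d2
```

Each call is a jump into the subroutine followed by a return of control to the point just after the call.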


time sharing

A mode of operation that permits two or more users to execute computer programs concurrently on the same computer system by interleaving the execution of their programs. Time sharing may be implemented by time slicing, priority-based interrupts, or other scheduling methods.
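Time slicing can be sketched in Python with generators standing in for user programs and a round-robin queue standing in for the scheduler (all names invented for illustration):

```python
from collections import deque

def user_program(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"        # one time slice of work

def time_slice(programs):
    """Round-robin scheduler: each program runs one slice per turn."""
    queue, log = deque(programs), []
    while queue:
        prog = queue.popleft()
        try:
            log.append(next(prog))
            queue.append(prog)          # unfinished: back of the queue
        except StopIteration:
            pass                        # program has completed
    return log
```

Two programs of two steps each interleave as A, B, A, B rather than running to completion one after the other, which is what lets each user feel the machine is theirs alone.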


virtual reality

The origin of the term "virtual reality" can be traced back to French playwright, poet, and director Antonin Artaud (1896–1948), who in his seminal book The Theater and Its Double (1938) described the theatre as "la réalité virtuelle": a virtual reality in which characters, objects, and images take on the phantasmagoric force of alchemy's visionary internal world, where the symbols of alchemy are evolved.


In 1965, US computer scientist Ivan Sutherland (1938–) envisioned what he called the "ultimate display." Using this display, a person would look into a virtual world that appeared as real as the physical world. That world would be seen through a head-mounted display (HMD) and be augmented through three-dimensional sound and tactile stimuli. A computer would maintain the world model in real time, with users manipulating virtual objects in a realistic, intuitive way.


In 1966, Sutherland built the first computer-driven HMD; the computer system provided all the graphics for the display (previously, all HMDs had been linked to cameras). The HMD could display images in stereo, giving the illusion of depth, and it could also track the user's head movements, allowing the field of view to change appropriately as the user looked around. Mychilo Cline, in his book Power, Madness, and Immortality: The Future of Virtual Reality (2009), predicts that, as we spend more and more time in virtual reality, there will be a gradual "migration to virtual space," resulting in unimagined changes to economics, worldview, and culture.



zombie

A hypothetical being that behaves like us and may share our functional organization and even, perhaps, our neurophysiological makeup, but lacks consciousness or any form of subjective awareness. The concept is used in discussions of artificial intelligence.