
2.1 Introduction

2.1.1 Computer Architecture

Computer architecture means the structure and organization of a computer’s hardware or system software. The term “architecture” in computer literature can be traced back to the work of Lyle R. Johnson, Mohammad Usman Khan, and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory. Describing the level of detail appropriate for discussing this elaborately engineered computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of “system architecture”, a term that seemed more useful than “machine organization”.

The art of computer architecture has three main subcategories: [1]

(1) Instruction set architecture, or ISA. The ISA defines the codes that a central processor reads and acts upon. It is the machine language (or assembly language), including the instruction set, word size, memory addressing modes, processor registers, and address and data formats; a toy interpreter for an invented ISA is sketched just after this list.

(2) Microarchitecture, also known as computer organization, describes the data paths, data processing elements, and data storage elements, and describes how they should implement the ISA. The size of a computer’s CPU cache, for instance, is an organizational issue that generally has nothing to do with the ISA.

(3) System design includes all of the other hardware components within a computing system: data paths, such as computer buses and switches; memory controllers and hierarchies; data processing other than the CPU, such as direct memory access (DMA); and miscellaneous issues such as virtualization, multiprocessing, and software features.
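To make the distinction between the ISA and the microarchitecture concrete, the following is a minimal Python sketch of the fetch-decode-execute cycle, using a three-instruction ISA invented here purely for illustration. The ISA level fixes what each opcode means; how a real processor pipelines, caches, or parallelizes these steps is a microarchitectural choice that is invisible at this level.

# A toy ISA: each instruction is an (opcode, operand) pair. The opcodes,
# the single accumulator register, and the encoding are all invented
# for illustration; no real instruction set is implied.
def run(program):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while pc < len(program):
        opcode, operand = program[pc]   # fetch and decode
        if opcode == "LOAD":            # acc <- immediate value
            acc = operand
        elif opcode == "ADD":           # acc <- acc + immediate value
            acc += operand
        elif opcode == "HALT":          # stop execution
            break
        pc += 1                         # advance to the next instruction
    return acc

# Computes 2 + 3 under the toy ISA.
print(run([("LOAD", 2), ("ADD", 3), ("HALT", 0)]))   # -> 5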

2.1.2 Design Goals

Computer architectures usually trade off standards, cost, memory capacity, latency (the amount of time it takes for information to travel from one node to another), and throughput. Sometimes other considerations, such as features, size, weight, reliability, expandability, and power consumption, are factors.

The most common scheme carefully chooses the bottleneck that most reduces the computer’s speed. Ideally, the cost is allocated proportionally to ensure that the data rate is nearly the same for all parts of the computer, with the most costly part being the slowest. This is how skillful commercial integrators optimize personal computers and smartphones.

Modern computer performance is often described in MIPS per MHz (millions of instructions per second, per megahertz of clock speed). This measures the efficiency of the architecture at any clock speed. Since a faster clock can make a faster computer, this is a useful, widely applicable measurement. Historic computers had MIPS/MHz values as low as 0.1, while simple modern processors easily reach close to 1. Superscalar processors may reach three to five by executing several instructions per clock cycle. Multicore and vector-processing CPUs can multiply this further: vector units act on many data elements per instruction, and multiple CPU cores execute in parallel.
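As a quick worked example (the figures below are made up for illustration; no particular processor is implied), MIPS per MHz reduces to the average number of instructions retired per clock cycle:

# Hypothetical figures: a 2 GHz core retiring 6,000 million instructions/second.
clock_mhz = 2000.0   # 2 GHz expressed in MHz
mips = 6000.0        # millions of instructions per second

mips_per_mhz = mips / clock_mhz   # average instructions per clock cycle
print(f"{mips_per_mhz:.1f} MIPS/MHz")   # -> 3.0, a superscalar-class result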

Historically, many people measured a computer’s speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have higher performance. As a result, manufacturers have moved away from clock speed as a measure of performance.

Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.

In a typical home computer, the simplest, most reliable way to improve performance is usually to add random access memory (RAM). More RAM increases the likelihood that needed data or programs are already in RAM, so the system is less likely to need to move data from the disk. The disk is often ten thousand times slower than RAM because it has mechanical parts that must move to access its data.

There are two main types of speed: latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event.
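A minimal sketch of how the two metrics are measured in practice, using Python’s standard time module (the workload function below is a placeholder for whatever operation is being studied):

import time

def operation():
    # Placeholder workload; substitute the operation being measured.
    sum(range(100_000))

n = 1000
start = time.perf_counter()
for _ in range(n):
    operation()
elapsed = time.perf_counter() - start

latency = elapsed / n     # average time from start to completion of one operation
throughput = n / elapsed  # operations completed per unit time
print(f"latency = {latency * 1e6:.1f} us, throughput = {throughput:.0f} ops/s")

In this serial loop, throughput is simply the reciprocal of latency; the two diverge exactly when work is overlapped, which is what the pipelining example below illustrates.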

Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse (slower) but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable, short time after the brake pedal is sensed.
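The pipelining trade-off can be shown with simple arithmetic. The stage count and cycle times below are assumed for illustration, not taken from any real processor:

# Unpipelined: one instruction completes every 5 ns.
unpipelined_latency_ns = 5.0
unpipelined_throughput = 1.0 / unpipelined_latency_ns   # 0.20 instructions/ns

# Pipelined into 5 stages: latch overhead stretches each stage to 1.2 ns,
# so a single instruction now takes 6 ns end to end (worse latency), but
# once the pipeline is full an instruction finishes every 1.2 ns.
stages, stage_ns = 5, 1.2
pipelined_latency_ns = stages * stage_ns                # 6.0 ns
pipelined_throughput = 1.0 / stage_ns                   # ~0.83 instructions/ns

print(pipelined_latency_ns > unpipelined_latency_ns)    # True: latency is worse
print(pipelined_throughput > unpipelined_throughput)    # True: throughput is better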

The performance of a computer can be measured using other metrics, depending upon its application domain. A system may be CPU bound (as in numerical calculation), I/O bound (as in a webserving application) or memory bound (as in video editing). Power consumption has become important in servers and portable devices like laptops.

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it may not help one to choose a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers may add special features to their products, in hardware or software, that permit a specific benchmark to execute quickly but don’t offer similar advantages to general tasks.
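A bare-bones version of the idea, timing a small suite of test programs and reporting each result separately rather than collapsing them into a single score (the three workloads here are placeholders, not a standard benchmark):

import time

# Placeholder workloads standing in for a real benchmark suite.
def compute_bound(): sum(i * i for i in range(200_000))
def memory_bound(): list(range(500_000)).sort()
def string_bound(): "".join(str(i) for i in range(50_000))

for test in (compute_bound, memory_bound, string_bound):
    start = time.perf_counter()
    test()
    elapsed = time.perf_counter() - start
    print(f"{test.__name__}: {elapsed * 1e3:.1f} ms")

Two machines will often rank differently across the three lines of output, which is exactly why a single benchmark number rarely settles a choice between them.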

Power consumption is another measurement that is important in modern computers. Power efficiency can often be traded for speed or lower cost. The typical measurement in this case is MIPS/W (millions of instructions per second per watt).
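Continuing the made-up figures from the MIPS/MHz example above, power efficiency divides the same instruction rate by average power draw:

# Hypothetical figures for illustration only.
mips = 6000.0      # millions of instructions per second
power_w = 15.0     # average power draw in watts

print(f"{mips / power_w:.0f} MIPS/W")   # -> 400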

As the number of transistors per chip grows, total power consumption grows with it, even though each individual transistor requires less power. Therefore, power efficiency has increased in importance. Recent processor designs such as the Intel Core 2 put more emphasis on increasing power efficiency. Also, in the world of embedded computing, power efficiency has long been, and remains, an important goal next to throughput and latency.
