Advanced Architectures
In the previous sections of this course, we have concentrated on single-processor architectures and techniques to improve upon their performance, such as:
– Efficient algebraic hardware implementations
– Enhanced processor operation through pipelined instruction execution and multiplicity of functional units
– Memory hierarchy
– Control unit design
– I/O operations
Through these techniques and implementation improvements, the processing power of a computer system has increased by an order of magnitude every 5 years. We are (still) approaching performance bounds due to physical limitations of the hardware.
- Several approaches to parallel computing are possible
– Improve the basic performance of a single processor machine
Architecture / organization improvements
Implementation improvements
SSI --> VLSI --> ULSI
Clock speed
Packaging
– Multiple processor system architectures
Tightly coupled system
Loosely coupled system
Distributed computing system
- Parallel computers: SIMD computers and MIMD computers
Systems with multiple CPUs can be divided into multiprocessors and multicomputers. In this section we will first study multiprocessors and then multicomputers.
Shared-Memory Multiprocessor
A parallel computer in which all the CPUs share a common memory is called a tightly coupled system.
Tightly coupled system (shared-memory multiprocessor). The features of the system are as follows:
– Multiple processors
– Shared, common memory system
– Processors under the integrated control of a common operating system
– Data is exchanged between processors by accessing common shared variable locations in memory
– Common shared memory ultimately presents an overall system bottleneck that effectively limits the sizes of these systems to a fairly small number of processors (dozens); a minimal sketch of shared-memory data exchange follows this list
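The sketch below illustrates shared-variable data exchange, assuming C with POSIX threads (compile with -pthread); the variable and function names are illustrative, not from the text. Two workers communicate solely by updating a common memory location, serialized by a mutex, which is also where the bottleneck noted above shows up as lock contention.

    /* Shared-memory data exchange: both workers update shared_sum. */
    #include <pthread.h>
    #include <stdio.h>

    static long shared_sum = 0;                    /* common shared location */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        long contribution = (long)arg;
        pthread_mutex_lock(&lock);                 /* contend for the location */
        shared_sum += contribution;                /* exchange data via memory */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)(long)10);
        pthread_create(&t2, NULL, worker, (void *)(long)32);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_sum = %ld\n", shared_sum);  /* prints 42 */
        return 0;
    }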
Message-passing multiprocessor
A parallel computer in which each CPU has its own local, independent memory is called a loosely coupled system.
Loosely coupled system (message-passing multiprocessor). The features of the system are as follows:
– Multiple processors
– Each processor has its own independent memory system
– Processors under the integrated control of a common operating system
– Data is exchanged between processors via interprocessor messages (sketched below, after this list)
– This definition does not agree with the one given in the text
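A minimal message-passing sketch, assuming C with an MPI library (compile with mpicc, run with mpirun -np 2); the value passed is illustrative. No memory is shared: rank 0 explicitly sends data that rank 1 explicitly receives.

    /* Message-passing data exchange between two processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to rank 1 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                          /* from rank 0 */
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }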
Distributed computing systems
Going a step beyond the message-passing multicomputer, the machines in a distributed system are held together by a network.
– Collections of relatively autonomous computers, each capable of independent operation
– Example systems are local area networks of computer workstations
+ Each machine is running its own “copy” of the operating system
+ Some tasks are done on different machines (e.g., mail handler is on one machine)
+ Supports multiple independent users
+ Load balancing between machines can cause a user’s job on one machine to be shifted to another
Performance bounds of Multiple Processor Systems
- For a system with n processors, we would like a net processing speedup (meaning lower overall execution time) of nearly n times when compared to the performance of a similar uniprocessor system
- A number of pessimistic performance “upper bounds” have been proposed over the years
Maximum speedup of O(log n)
Maximum speedup of O(n / ln n)
- These “bounds” were based on runtime performance of applications and were not necessarily valid in all cases (compare them numerically in the sketch below)
- They reinforced the computer industry’s hesitancy to “get into” parallel processing
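The sketch below compares ideal n-fold speedup against the two proposed bounds, assuming base-2 logarithms for O(log n); it is illustrative arithmetic in C, not a benchmark (compile with -lm).

    /* Ideal speedup vs. the pessimistic bounds, for a few values of n. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        int counts[] = {2, 8, 64, 1024};
        printf("%8s %10s %12s %12s\n", "n", "ideal", "log2(n)", "n/ln(n)");
        for (int i = 0; i < 4; i++) {
            double n = counts[i];
            printf("%8.0f %10.0f %12.1f %12.1f\n", n, n, log2(n), n / log(n));
        }
        return 0;
    }

For n = 1024 the bounds predict speedups of only about 10 and 148, respectively, versus the ideal 1024, which is why they were so discouraging.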
These machines are true parallel processors (also called concurrent processors).
These parallel machines fall into Flynn’s taxonomy classes of SIMD and MIMD systems
– SIMD: Single Instruction stream and Multiple Data streams
– MIMD: Multiple Instruction streams and Multiple Data streams
SIMD Overview
- Single “control unit” computer and an array of “computational” computers
- Control unit executes control-flow instructions and scalar operations and passes vector instructions to the processor array
- Processor instruction types:
– Extensions of scalar instructions
Adds, stores, multiplies, etc. become vector operations executed in all processors concurrently
– Must add the ability to transfer vector and scalar data between processors to the instruction set -- attributes of a “parallel language”
- SIMD Examples
Vector addition: C(I) = A(I) + B(I)
Complexity O(n) in SISD systems:
    for I = 1 to n do
        C(I) = A(I) + B(I)
Complexity O(1) in SIMD systems (all n element additions execute concurrently; see the sketch below)
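As a concrete sketch, the loop below is the SISD form in C, and the OpenMP-annotated version (compile with -fopenmp) approximates the SIMD form by mapping iterations onto parallel processing elements; array sizes and values are illustrative.

    /* Vector addition: sequential (SISD) vs. parallel (SIMD-style). */
    #include <stdio.h>
    #define N 8

    int main(void) {
        double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

        /* SISD: one processor walks the loop, O(n) steps */
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        /* SIMD-style: each iteration handled by its own processing
           element, conceptually one parallel step */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[%d] = %g\n", N - 1, c[N - 1]);
        return 0;
    }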
Matrix multiply
A, B, and C are n-by-n matrices
Compute C = A × B
Complexity O(n³) in SISD systems
    n² dot products, each of which is O(n)
Complexity O(n²) in SIMD systems
    Perform n dot products in parallel across the array (see the sketch below)
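A sketch of the matrix multiply in C with OpenMP, assuming illustrative matrix contents; the collapse clause runs the dot products in parallel, while each individual dot product remains an O(n) inner loop.

    /* n-by-n matrix multiply: parallel dot products. */
    #include <stdio.h>
    #define N 4

    int main(void) {
        double a[N][N], b[N][N], c[N][N] = {{0}};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) { a[i][j] = 1.0; b[i][j] = 2.0; }

        #pragma omp parallel for collapse(2)   /* dot products in parallel */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double dot = 0.0;              /* each dot product is O(n) */
                for (int k = 0; k < N; k++)
                    dot += a[i][k] * b[k][j];
                c[i][j] = dot;
            }

        printf("c[0][0] = %g\n", c[0][0]);     /* 1.0 * 2.0 summed N times = 8 */
        return 0;
    }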
Image smoothing
– Smooth an n-by-n pixel image to reduce “noise”
– Each pixel is replaced by the average of itself and its 8 nearest neighbors
– Complexity O(n²) in SISD systems
– Complexity O(n) in SIMD systems (sketched below)
[Figure: a pixel and its 8 nearest neighbors]
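A sketch of the smoothing step in C with OpenMP, assuming a toy image and skipping the border pixels for brevity; each interior pixel becomes the average of itself and its 8 nearest neighbors.

    /* 3x3 image smoothing over the interior of an n-by-n image. */
    #include <stdio.h>
    #define N 6

    int main(void) {
        double img[N][N], out[N][N] = {{0}};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                img[i][j] = (i + j) % 2;       /* toy "noisy" image */

        #pragma omp parallel for collapse(2)   /* smooth pixels in parallel */
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++) {
                double sum = 0.0;
                for (int di = -1; di <= 1; di++)      /* 3x3 window */
                    for (int dj = -1; dj <= 1; dj++)
                        sum += img[i + di][j + dj];
                out[i][j] = sum / 9.0;
            }

        printf("out[1][1] = %g\n", out[1][1]);
        return 0;
    }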
MIMD Systems Overview
- MIMD systems differ from SIMD ones in that the “lock-step” operation requirement is removed
- Each processor has its own control unit and can execute an independent stream of instructions
– Rather than forcing all processors to perform the same task at the same time, processors can be assigned different tasks that, when taken as a whole, complete the assigned application
- SIMD applications can be executed on an MIMD structure
– Each processor executes its own copy of the SIMD algorithm
- Application code can be decomposed into communicating processes
– Distributed simulation is a good example of a very hard MIMD application
- Keys to high MIMD performance are
– Process synchronization
– Process scheduling
- Process synchronization targets keeping all processors busy and not suspended awaiting data from another processor
- Process scheduling can be performed
– By the programmer through the use of parallel language constructs
Specify a priori what processes will be instantiated and where they will be executed
– During program execution by spawning processes off for execution on available processors.
Fork-join construct in some languages (a minimal sketch follows)
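A minimal fork-join sketch in C with POSIX threads (compile with -pthread); the worker count and task body are illustrative. The main thread forks one worker per available processor and joins them all before continuing.

    /* Fork-join: spawn workers, wait for all of them to finish. */
    #include <pthread.h>
    #include <stdio.h>
    #define WORKERS 4

    static void *task(void *arg) {
        long id = (long)arg;
        printf("worker %ld running\n", id);    /* independent instruction stream */
        return NULL;
    }

    int main(void) {
        pthread_t t[WORKERS];
        for (long i = 0; i < WORKERS; i++)     /* fork */
            pthread_create(&t[i], NULL, task, (void *)i);
        for (int i = 0; i < WORKERS; i++)      /* join */
            pthread_join(t[i], NULL);
        puts("all workers joined");
        return 0;
    }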
- System examples
SIMD
– Illiac IV
One of the first massively parallel systems, with 64 processors
– Goodyear Staran: 256 bit-serial processors
– Current system from Cray Computer Corp. uses a supercomputer (Cray 3) front end coupled to an SIMD array of 1000s of processors
MIMD
– Intel hypercube series:
Supported up to several hundred CISC processors
Next-gen Paragon
– Cray Research T3D
Cray Y-MP coupled to a massive array of DEC Alphas
Target: sustained teraflop performance
- Problems
– Hardware is relatively easy to build
– Massively parallel systems just take massive amounts of money to build
– How should/can the large numbers of processors be interconnected?
– The real trick is building software that will exploit the capabilities of the system
- Reality check:
– Outside of a limited number of high-profile applications, parallel processing is still a “young” discipline
– Parallel software is still fairly sparse
– It is risky for companies to adopt parallel strategies; many simply wait for the next, faster SISD system.