
PARALLEL ARCHITECTURAL LANDSCAPE

Parallel computing, in its fundamental sense, is the simultaneous execution of multiple operations: a problem is divided into sub-problems that can be solved at the same time. Throughout the history of computing, successful attempts have been made to increase the degree of parallelism as far as possible. Along the way, many constraints were encountered, and solutions were proposed by some of the brightest minds in the field.

Parallelism can be broken down along several axes:

1. Granularity.
   - Fine-grained parallelism: processors need to communicate with each other many times per second.
   - Coarse-grained parallelism: processors communicate with each other only once every few seconds.
2. Bit-level parallelism: the number of operations required is reduced by increasing the processor word size. The first processor launched by Intel in the 1970s was 4-bit, while the systems we work on today are mostly 64-bit. Increasing word size was the main source of acceleration until the mid-1980s.
3. Instruction-level parallelism: independent instructions are grouped together and executed in parallel. Modern processors have a pipeline in which each stage executes a different instruction.

FLYNN TAXONOMY

Flynn's taxonomy classifies architectures by their instruction and data streams:

                  Single instruction   Multiple instruction
  Single data     SISD                 MISD
  Multiple data   SIMD                 MIMD

1. SISD (Single Instruction, Single Data): the simplest type of architecture. It is equivalent to an entirely sequential program and exploits hardly any parallelism.
2. MISD (Multiple Instruction, Single Data): no significant applications other than systolic arrays have been designed for this type of architecture, so this classification is rarely used.
3. SIMD (Single Instruction, Multiple Data): ...... half of the card ...... model cannot be extended beyond 32 processors.

PARALLEL COMPUTING IN THE FUTURE

In my opinion, the greatest potential in parallel computing lies in software. Hardware architectures have been evolving constantly for the last 40 years, and sooner or later they may begin to saturate: the number of transistors cannot keep increasing indefinitely. Software, meanwhile, has evolved but still isn't keeping pace, and there is a shortage of programmers trained to design and program parallel systems. Intel recently launched the Parallel Computing Center program with the main purpose of "keeping parallel software in sync with parallel hardware." The international community needs to develop parallel programming skills to keep up with the new processors being created. As this awareness spreads, the parallel architectural landscape will reach even greater heights than expected.
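The opening definition, dividing a problem into sub-problems that are solved simultaneously, can be sketched with Python's standard concurrent.futures module. The function and parameter names here are illustrative, not from the text, and note that CPython's global interpreter lock keeps threads from speeding up CPU-bound work, so this sketch shows the decomposition pattern rather than a real speedup (a process pool would be the usual choice for that).

```python
# Illustrative sketch (not from the original text): divide a sum
# over a large list into chunks, compute each chunk concurrently,
# then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Sum `data` by splitting it into at most `workers` sub-problems."""
    if not data:
        return 0
    # Split the problem into roughly equal chunks (the sub-problems).
    chunk = -(-len(data) // workers)  # ceiling division
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Solve the sub-problems concurrently and combine the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))
```

The communication pattern here is coarse-grained in the sense used above: each worker runs independently on its chunk and synchronizes only once, when the partial sums are combined.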
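The bit-level parallelism described earlier, doing more work per operation by widening the word, can be illustrated with a small SWAR ("SIMD Within A Register") sketch. This is a hypothetical example, not from the original text: it packs four independent 16-bit values into one 64-bit word and adds all four lanes with a couple of full-word integer operations, masking the top bit of each lane so that carries cannot spill across lane boundaries.

```python
# Hypothetical illustration of bit-level parallelism (SWAR):
# four 16-bit lanes packed into a 64-bit word, added lane-by-lane.
LANE_BITS = 16
NUM_LANES = 4
HIGH = 0x8000_8000_8000_8000      # top bit of each 16-bit lane
LOW_MASK = 0x7FFF_7FFF_7FFF_7FFF  # every bit except the lane top bits

def pack(lanes):
    """Pack four 16-bit values into a single 64-bit word."""
    word = 0
    for i, v in enumerate(lanes):
        word |= (v & 0xFFFF) << (i * LANE_BITS)
    return word

def unpack(word):
    """Split a 64-bit word back into its four 16-bit lanes."""
    return [(word >> (i * LANE_BITS)) & 0xFFFF for i in range(NUM_LANES)]

def swar_add(a, b):
    """Add corresponding 16-bit lanes; each lane wraps modulo 2**16
    and a lane overflow never carries into its neighbour."""
    # Add everything except the top bit of each lane, then restore
    # the top bits with XOR so an overflow cannot cross lanes.
    partial = (a & LOW_MASK) + (b & LOW_MASK)
    return partial ^ ((a ^ b) & HIGH)
```

The point of the sketch is the same as the historical trend in the text: a wider word lets one operation do the work of several narrow ones, which is why moving from 4-bit to 64-bit processors was a major source of acceleration.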