Abstract
In the past, the speed of computers was increased mainly by making their logic elements faster. Improvements in technology over the last 20 years have reduced the memory cycle time by two orders of magnitude and increased the speed of processors by as much as three orders of magnitude. Today, now that the physical barrier set by the propagation speed of an electric signal has been reached, additional speed can be gained only by improving the organization of the computer or by using it more effectively. Current technology makes it possible to combine processors into large parallel structures, and with a suitable organization of n processors an n-fold increase in the rate of computation can be achieved. Parallelism in computation has brought with it new problems, both in the creation of new algorithms and programs and in the design of computer architectures. Parallel algorithms and programs are closely tied to the architecture of parallel computers; their design and analysis therefore cannot be considered independently of their implementation and of the computer on which they are to run. Several examples are known from the history of parallel data processing in which a valuable concept in the design of algorithms, programs or computers has had a large impact on the efficiency of computation.
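The abstract's claim that a suitable organization of n processors can yield an n-fold increase in the rate of computation can be illustrated with a minimal sketch (not from the chapter; the workload model and function name are assumptions for illustration): a perfectly divisible workload of `work` units finishes in ceil(work / n) parallel steps, so the speedup over one processor approaches n.

```python
# Ideal speedup model (illustrative assumption, not the chapter's method):
# a perfectly parallel workload of `work` units split across n processors
# finishes in ceil(work / n) steps, so speedup = T1 / Tn approaches n.
import math

def ideal_speedup(work: int, n: int) -> float:
    """Speedup of n processors over 1 on a perfectly divisible workload."""
    t1 = work                 # sequential time: one work unit per step
    tn = math.ceil(work / n)  # parallel time: up to n units per step
    return t1 / tn

# When the workload divides evenly, the speedup is exactly n-fold:
assert ideal_speedup(1000, 10) == 10.0
```

Real workloads rarely divide this evenly, which is one reason the chapter stresses that algorithms, software and hardware must be considered together.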
“It is easy to design computers, but it is hard to know what kind of computer to design...”,
D. J. Kuck [16]
References
Baer, J. L.: A survey of some theoretical aspects of multiprocessing. Comp. Surveys, 5, 1973, 1, 31–80.
Barnes, G. et al.: The ILLIAC IV computer. IEEE Trans. on Computers, C-17, 1968, 746–757.
Batcher, K. E.: Sorting networks and their applications. Spring Joint Comp. Conf., 1968, AFIPS Proc., 32. Thompson, Washington, 1968, pp. 307–314.
Chen, S. C. and Kuck, D. J.: Time and parallel processor bounds for linear recurrence systems. IEEE Trans. on Computers, C-24, 1975, 701–717.
Control Data Corporation, STAR-100 Computer Hardware Reference Manual, 1974.
Conway, M. E.: A multiprocessor system design. AFIPS Conf. Proc. 1963, FJCC 24. Spartan Books, Baltimore, 1963, pp. 139–146.
Dijkstra, E. W.: Cooperating sequential processes. In: Programming Languages. F. Genuys (Editor). Academic Press, New York, 1968, pp. 43–112.
Duff, M. J. and Watson, D.: A parallel computer for array processing. Proc. IFIP Congress, North-Holland Publ. Co., Amsterdam, 1975, pp. 94–99.
Enslow, P., Jr. (Editor): Multiprocessors and Parallel Processing. Wiley–Interscience, New York, 1974.
Flanders, P. M. et al.: Efficient high-speed computing with the distributed array processor. In: High-Speed Computers and Algorithms Organization. D. J. Kuck, D. H. Lawrie and A. H. Sameh (Editors). Academic Press, New York, 1977, pp. 113–128.
Flynn, M. J.: Toward more efficient computer organizations. Proc. Spring Joint Comp. Conf., AFIPS Press, 1972, pp. 1211–1217.
Gentleman, W. M.: Some complexity results for matrix computations on parallel processors. J. ACM, 25, 1978, 1, 112–115.
Graham, R. L.: Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math., 17, 1969, 2, 416–429.
Ihnat, J. P. et al.: The use of two levels of parallelism to implement an efficient programmable signal processing computer. Sagamore Comp. Conf. on Parallel Processing, Sagamore, 1973, pp. 113–119.
Kuck, D.: ILLIAC IV software and application programming. IEEE Trans. on Computers, C-17, 1968, 8, 758–770.
Kuck, D.: Multioperation machine computational complexity. In: Complexity of Sequential and Parallel Numerical Algorithms. J. F. Traub (Editor). Academic Press, New York, 1973, pp. 17–47.
Kung, H. T.: Synchronized and asynchronous parallel algorithms for multiprocessors. In: Algorithms and Complexity. J. F. Traub (Editor). Academic Press, New York, 1976, pp. 153–200.
Lambiotte, J. J. and Voigt, R. G.: The solution of tridiagonal systems of equations on the CDC STAR-100 computer. ACM Trans. on Math. Software, 1, 1975, 4, 308–329.
Lawrie, D. H. et al.: GLYPNIR — a programming language for ILLIAC IV. Comm. ACM, 18, 1975, 3, 157–164.
Madsen, N. K. et al.: Matrix multiplication by diagonals on a vector parallel processor. Inform. Proc. Lett., 5, 1976, 2, 41–45.
Mirenkov, N. N.: Strukturnoe parallelnoe programmirovanie [Structured parallel programming]. Programmirovanie, 1975, 3, 3–14.
Owens, J. L.: The influence of machine organization on algorithms. In: Complexity of Sequential and Parallel Numerical Algorithms. J. F. Traub (Editor). Academic Press, New York, 1973, pp. 111–130.
Raj Reddy, D.: Some numerical problems in artificial intelligence: Implications for complexity and machine architecture. In: Complexity of Sequential and Parallel Numerical Algorithms. J. F. Traub (Editor). Academic Press, New York, 1973, pp. 131–147.
STARAN: System description. A new class of computer. Goodyear Aerospace Corp., Akron, Ohio, 1974.
Stone, H. S.: Parallel processing with the perfect shuffle. IEEE Trans. on Computers, C-20, 1971, 2, 153–161.
Stone, H. S.: Parallel tridiagonal equation solver. ACM Trans. on Math. Software, 1, 1975, 289–307.
Stone, H. S. (Editor): Introduction to Computer Architecture. Sci. Res. Assoc., Inc., Chicago, 1975.
Stone, H. S.: An efficient parallel algorithm for the solution of a tridiagonal system of equations. J. ACM, 20, 1973, 27–38.
Swan, R. J. et al.: The structure and architecture of Cm*: A modular multiprocessor. Tech. Report, Dep. Comp. Sci., Carnegie-Mellon Univ., Pittsburgh, 1977.
Shakhbazyan, K. V. and Tushkina, T. A.: Obzor metodov sostavleniya raspisanii dlya mnogoprotsessornykh sistem [A survey of scheduling methods for multiprocessor systems]. Zap. nauch. semin. LOMI, AN SSSR, Leningrad, 54, 1975, pp. 229–258.
Thompson, C. D.: Generalized connection networks for parallel processor intercommunication. Tech. Report, Dep. Comp. Sci., Carnegie-Mellon Univ., Pittsburg, 1977.
Thurber, K. J.: Large Scale Computer Architecture: Parallel and Associative Processors. Hayden Book Co., Rochelle Park, N. J., 1976.
Tuttle, P. G.: Implementation of selected eigenvalue algorithms on a vector computer. Tech. Report NPGD-TM-330, Babcock and Wilcox, 1975.
Vairavan, K. and DeMillo, R. A.: On the computational complexity of a generalized scheduling problem. IEEE Trans. on Computers, C-25, 1976, 11, 1067–1073.
Wulf, W. A. and Bell, C. G.: C.mmp — a multi-miniprocessor. AFIPS Conf. Proc. 1972, FJCC 41. AFIPS Press, Montvale, N. J., pp. 765–777.
Fino, B. J. and Algazi, V. R.: A unified treatment of discrete fast unitary transforms. SIAM J. Computing, 6, 1977, 4, 700–717.
Brigham, E. O.: The Fast Fourier Transform. Prentice-Hall, Englewood Cliffs, N. J., 1974.
Clos, C.: A study of non-blocking switching networks. Bell Syst. Tech. J., 32, 1953, 406–424.
Copyright information
© 1984 Springer-Verlag Berlin Heidelberg
Cite this chapter
Mikloško, J., Kotov, V.E. (1984). Correlation of Algorithms, Software and Hardware of Parallel Computers. In: Mikloško, J., Kotov, V.E. (eds) Algorithms, Software and Hardware of Parallel Computers. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-11106-2_12
Print ISBN: 978-3-662-11108-6
Online ISBN: 978-3-662-11106-2