A Parallel Algorithm Synthesis Procedure for High-Performance Computer Architecture

Series in Computer Science

By: Ian N. Dunn, Gerard G. L. Meyer

Hardcover | 30 April 2003


Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize, even though the processing needs of many signal processing applications continue to outstrip the capabilities of sequential machines. The culprit is largely the software development environment: fundamental shortcomings in the development environments of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing lacks a unifying model for accurately predicting the execution time of algorithms on parallel architectures. Cost and scarce programming resources make it prohibitive to deploy multiple algorithms and partitioning strategies in search of the fastest solution. As a consequence, algorithm design remains largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment.

To navigate this environment, algorithm designers need a road map: a detailed procedure they can use to efficiently develop high-performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high-performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters that control the partitioning and scheduling of computation and communication. These parameters make it possible to tailor software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring varying degrees of optimization for parallel execution.
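The general idea of exposing partitioning and scheduling as tunable parameters can be illustrated with a toy sketch. This is only an illustration of that idea, not the book's actual synthesis procedure; the function `block_sums` and its parameters `block_size` (partitioning) and `num_workers` (scheduling) are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor


def block_sums(data, block_size, num_workers):
    """Sum a sequence in parallel.

    The partitioning of the work (block_size) and its scheduling
    (num_workers) are explicit parameters, so the same module can be
    retuned for different processor and memory configurations without
    rewriting the algorithm itself.
    """
    # Partition: split the input into contiguous blocks.
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    # Schedule: map the blocks onto a pool of workers.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partial_sums = list(pool.map(sum, blocks))
    # Combine the per-block results.
    return sum(partial_sums)
```

Retuning for a new machine then amounts to searching over `(block_size, num_workers)` pairs rather than redesigning the code, e.g. `block_sums(list(range(1000)), block_size=64, num_workers=4)`.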
