To order through AMS contact the AMS Customer Services Department, P.O. Box 6248, Providence, Rhode Island 02940-6248 USA. For Visa, Mastercard, Discover, and American Express orders call 1-800-321-4AMS.
You may also visit the AMS Bookstore and order directly from there. DIMACS does not distribute or sell these books.
Formally, a DOP can be stated as follows: given a finite discrete set X and a cost function f defined on the elements of X, find an optimal element x* ∈ X such that f(x*) = min{f(x) : x ∈ X}. In certain problems, the aim is to find any member of the set X_opt of all optimal solutions. These problems can also be easily stated in the above format by setting f(x) = 0 for all x ∈ X_opt and f(x) = 1 for all x ∉ X_opt. In most problems of practical interest, the set X is quite large. Consequently, exhaustive enumeration of the elements of X to determine x* is not feasible. Often, elements of X can be viewed as paths in graphs or trees, the cost function can be defined in terms of the costs of the arcs, and the DOP can be formulated as finding a least-cost solution path in the graph from an initial node to a goal node. Branch-and-bound and dynamic programming methods use the structure of these graphs to solve DOPs without searching exhaustively.
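As a minimal illustration of this formulation (not drawn from the volume), the following sketch applies best-first branch and bound to a least-cost path problem on a hypothetical toy graph: partial paths are branched on by extending their last node, and a partial path is pruned as soon as its cost cannot improve on the best complete solution found so far.

```python
import heapq

def branch_and_bound_path(graph, start, goal):
    """Best-first branch and bound for a least-cost path.

    graph: dict mapping node -> list of (neighbor, arc_cost) pairs.
    The cost of a partial path serves as its lower bound (arc costs
    are assumed nonnegative); a subproblem is pruned when its bound
    meets or exceeds the incumbent (best complete solution so far).
    """
    best_cost, best_path = float("inf"), None
    # Frontier of live subproblems, ordered by path cost (the lower bound).
    frontier = [(0, [start])]
    while frontier:
        cost, path = heapq.heappop(frontier)
        if cost >= best_cost:        # bound: cannot improve the incumbent
            continue
        node = path[-1]
        if node == goal:             # complete solution: update the incumbent
            best_cost, best_path = cost, path
            continue
        for nbr, w in graph.get(node, []):   # branch on each outgoing arc
            if nbr not in path:              # avoid revisiting nodes
                heapq.heappush(frontier, (cost + w, path + [nbr]))
    return best_cost, best_path

# Hypothetical instance: nodes are letters, arcs carry integer costs.
g = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("g", 6)], "b": [("g", 2)]}
print(branch_and_bound_path(g, "s", "g"))  # -> (4, ['s', 'a', 'b', 'g'])
```

With a trivial lower bound (the path cost itself) this behaves like uniform-cost search; stronger problem-specific bounds prune more of the tree, which is precisely the structure the parallel algorithms in this volume exploit.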
Given that many classes of DOPs are NP-hard, one may argue that there is no point in applying parallel processing to these problems, as the worst-case run time can never be reduced to a polynomial unless we have an exponential number of processors. However, the average time complexity of heuristic search algorithms for many problems is polynomial. Also, there are some heuristic search algorithms which find suboptimal solutions in polynomial time (e.g., for certain problems, approximate branch-and-bound algorithms are known to run in polynomial time). In these cases, parallel processing can significantly increase the size of solvable problems. Some applications using search algorithms (e.g., robot motion planning, task scheduling) require real-time solutions. For these applications, parallel processing is perhaps the only way to obtain acceptable performance. For some problems, optimal solutions are highly desirable, and can be obtained for moderate-sized instances in a reasonable amount of time using parallel search techniques (e.g., VLSI floor-plan optimization).
Parallel computers having thousands of processing elements are now commercially available. The cost of these machines is similar to that of large mainframes, but they offer significantly more raw computing power. Due to advances in VLSI technology and economies of scale, the cost of these machines is expected to go down drastically over the next decade. It may soon be possible to construct computers comprising thousands to millions of processing elements at costs ranging from those of high-end workstations to large mainframes. This technology has created substantial interest in exploring the use of parallel processing for search-based applications.
In the context of the 1993-1994 DIMACS special year on parallel computing, a two-day workshop entitled "Parallel Processing of Discrete Optimization Problems" was held on April 28-29, 1994. The workshop featured about twenty invited speakers from Europe and North America. This volume contains a collection of refereed papers from the workshop. The papers cover a wide spectrum of algorithms and applications in parallel processing of discrete optimization and related problems.
One of the key successes of the workshop was the collaboration it started between the European group of researchers working on the SCOOP project (Solving Combinatorial Optimization Problems in Parallel) and researchers from the U.S. In addition, many participants stayed at DIMACS after the workshop to work on joint projects.
The workshop was sponsored by DIMACS with funds from the National Science Foundation and the New Jersey Commission on Science and Technology. We would like to take this opportunity to thank the sponsors, the DIMACS staff, the participants, the authors, the referees, and the American Mathematical Society for making the workshop successful and the publication of this volume possible.
Panos M. Pardalos, Mauricio G.C. Resende, and K.G. Ramakrishnan
Foreword xi
Preface xiii
Proving Correctness for Balancing Networks, Costas Busch and Marios Mavronicolas 1
A Note on Parallel Randomized Algorithms for Searching Problems, Andrea Clementi, Jose Rolim and Ludek Kucera 33
Modeling Parallel Branch-and-Bound for Asynchronous Implementations, Ricardo Correa and Afonso Ferreira 45
A Data Parallel Space Dilation Algorithm for the Concentrator Location Problem, Olof Damberg and Athanasios Migdalas 57
A Multistage Approach for Scheduling Task Graphs on Parallel Machines, Apostolos Gerasoulis, Jia Jiao and Tao Yang 81
Parallel Algorithms for Satisfiability (SAT) Problem, Jun Gu 105
Experiences with a Parallel Formulation of an Interior Point Algorithm, George Karypis, Anshul Gupta and Vipin Kumar 163
Experiences with a Synchronous Parallel Branch and Bound Algorithm, Per S. Laursen 181
New Anticipatory Load Balancing Strategies for Parallel A* Algorithms, Nihar R. Mahapatra and Shantanu Dutt 197
A Parallel Algorithm for Computing all Homomorphisms of Deterministic Finite Automata, Boleslaw Mikolajczak 233
Query Optimization and Processing in Parallel Databases, T.M. Niccum, J. Srivastava, B. Himatsingka and J. Li 259
Scheduling Acyclic Task Graphs on Distributed Memory Parallel Architectures, Santos Pande and Kleanthis Psarris 289
Scalability of Massively Parallel Depth-First Search, Alexander Reinefeld 305
On Irregular Data Structures and Asynchronous Parallel Branch and Bound Algorithms, Catherine Roucairol 323
Parallel Algorithms for the Assignment Problem - An Experimental Evaluation of Three Distributed Algorithms, Christian Schutt and Jens Clausen 337
Serial & Parallel Algorithms for QSP Problems, J. MacGregor Smith and Jui Xu 353