2019-12-09

Foundations of Multithreaded, Parallel, and Distributed Programming by Gregory R. Andrews: Table of Contents

Table of Contents

1 TOC

  1. Foundations of Multithreaded, Parallel, and Distributed Programming / Edition 1 by Gregory R. Andrews

2 Part 0. The Concurrent Computing Landscape.

  1. The Essence of Concurrent Programming.
  2. Hardware Architectures.
  3. Single Processor Machines.
  4. Shared-Memory Multiprocessors.
  5. Multicomputers; Networks.
  6. Applications and Programming Styles.
  7. Iterative Parallelism: Matrix Multiplication.
  8. Recursive Parallelism: Adaptive Quadrature.
  9. Producers and Consumers: Unix Pipes.
  10. Clients and Servers: Remote Files.
  11. Peers: Distributed Matrix Multiplication.
  12. Summary of Programming Notation.
  13. Declarations; Sequential Statements.
  14. Concurrent Statements and Process Declarations.
  15. Comments.

3 Part I. SHARED VARIABLE PROGRAMMING.

3.1 2. Processes and Synchronization.

  1. States, Actions, Histories, and Properties.
  2. Parallelization: Finding Patterns in Files.
  3. Synchronization: The Maximum of an Array.
  4. Atomic Actions and Await Statements.
  5. Fine-Grained Atomicity.
  6. Specifying Synchronization: The Await Statement.
  7. Finding Patterns in a File Revisited.
  8. A Synopsis of Axiomatic Semantics.
  9. Formal Logical Systems.
  10. A Programming Logic.
  11. Semantics of Concurrent Execution.
  12. Techniques for Avoiding Interference.
  13. Disjoint Variables; Weakened Assertions.
  14. Global Invariants.
  15. Synchronization.
  16. An Example: The Array Copy Problem Revisited.
  17. Safety and Liveness Properties.
  18. Proving Safety Properties.
  19. Scheduling Policies and Fairness.

3.2 3. Locks and Barriers.

  1. The Critical Section Problem.
  2. Critical Sections: Spin Locks.
  3. Test and Set.
  4. Test and Test and Set.
  5. Implementing Await Statements.
  6. Critical Sections: Fair Solutions.
  7. The Tie-Breaker Algorithm.
  8. The Ticket Algorithm.
  9. The Bakery Algorithm.
  10. Barrier Synchronization.
  11. Shared Counter.
  12. Flags and Coordinators.
  13. Symmetric Barriers.
  14. Data Parallel Algorithms.
  15. Parallel Prefix Computations.
  16. Operations on Linked Lists.
  17. Grid Computations: Laplace's Equation.
  18. Synchronous Multiprocessors.
  19. Parallel Computing with a Bag of Tasks.
  20. Matrix Multiplication.
  21. Adaptive Quadrature.

3.3 4. Semaphores.

  1. Syntax and Semantics.
  2. Basic Problems and Techniques.
  3. Critical Sections: Mutual Exclusion.
  4. Barriers: Signaling Events.
  5. Producers and Consumers: Split Binary Semaphores.
  6. Bounded Buffers: Resource Counting.
  7. The Dining Philosophers.
  8. Readers and Writers.
  9. Readers/Writers as an Exclusion Problem.
  10. Readers/Writers Using Condition Synchronization.
  11. The Technique of Passing the Baton.
  12. Alternative Scheduling Policies.
  13. Resource Allocation and Scheduling.
  14. Problem Definition and General Solution Pattern.
  15. Shortest-Job-Next Allocation.
  16. Case Study: Pthreads.
  17. Thread Creation.
  18. Semaphores.
  19. Example: A Simple Producer and Consumer.

3.4 5. Monitors.

  1. Syntax and Semantics.
  2. Mutual Exclusion.
  3. Condition Variables.
  4. Signaling Disciplines.
  5. Additional Operations on Condition Variables.
  6. Synchronization Techniques.
  7. Bounded Buffers: Basic Condition Synchronization.
  8. Readers and Writers: Broadcast Signal.
  9. Shortest-Job-Next Allocation: Priority Wait.
  10. Interval Timer: Covering Conditions.
  11. The Sleeping Barber: Rendezvous.
  12. Disk Scheduling: Program Structures.
  13. Scheduler as a Separate Monitor.
  14. Scheduler as an Intermediary.
  15. Scheduler as a Nested Monitor.
  16. Case Study: Java.
  17. The Threads Class.
  18. Synchronized Methods.
  19. Parallel Readers/Writers.
  20. Exclusive Readers/Writers.
  21. True Readers/Writers.
  22. Case Study: Pthreads.
  23. Locks and Condition Variables.
  24. Example: Summing the Elements of a Matrix.

3.5 6. Implementations.

  1. A Single-Processor Kernel.
  2. A Multiprocessor Kernel.
  3. Implementing Semaphores in a Kernel.
  4. Implementing Monitors in a Kernel.
  5. Implementing Monitors Using Semaphores.

4 Part II. DISTRIBUTED PROGRAMMING.

4.1 7. Message Passing.

  1. Asynchronous Message Passing.
  2. Filters: A Sorting Network.
  3. Clients and Servers.
  4. Active Monitors.
  5. A Self-Scheduling Disk Driver.
  6. File Servers: Conversational Continuity.
  7. Interacting Peers: Exchanging Values.
  8. Synchronous Message Passing.
  9. Case Study: CSP.
  10. Communication Statements.
  11. Guarded Communication.
  12. Example: The Sieve of Eratosthenes.
  13. Case Study: Linda.
  14. Tuple Space and Process Interaction.
  15. Example: Prime Numbers with a Bag of Tasks.
  16. Case Study: MPI.
  17. Basic Functions.
  18. Global Communication and Synchronization.
  19. Case Study: Java.
  20. Networks and Sockets.
  21. Example: A Remote File Reader.

4.2 8. RPC and Rendezvous.

  1. Remote Procedure Call.
  2. Synchronization in Modules.
  3. A Time Server; Caches in a Distributed File System.
  4. A Sorting Network of Merge Filters.
  5. Interacting Peers: Exchanging Values.
  6. Rendezvous.
  7. Input Statements.
  8. Client/Server Examples.
  9. A Sorting Network of Merge Filters.
  10. Interacting Peers: Exchanging Values.
  11. A Multiple Primitives Notation.
  12. Invoking and Servicing Operations.
  13. Examples.
  14. Readers/Writers Revisited.
  15. Encapsulated Access.
  16. Replicated Files.
  17. Case Study: Java.
  18. Remote Method Invocation.
  19. Example: A Remote Database.
  20. Case Study: Ada.
  21. Tasks.
  22. Rendezvous.
  23. Protected Types.
  24. Example: The Dining Philosophers.
  25. Case Study: SR.
  26. Resources and Globals.
  27. Communication and Synchronization.
  28. Example: Critical Section Simulation.

4.3 9. Paradigms for Process Interaction.

  1. Managers/Workers (Distributed Bag of Tasks).
  2. Sparse Matrix Multiplication.
  3. Adaptive Quadrature Revisited.
  4. Heartbeat Algorithms.
  5. Image Processing: Region Labeling.
  6. Cellular Automata: The Game of Life.
  7. Pipeline Algorithms.
  8. A Distributed Matrix Multiplication Pipeline.
  9. Matrix Multiplication by Blocks.
  10. Probe/Echo Algorithms.
  11. Broadcast in a Network.
  12. Computing the Topology of a Network.
  13. Broadcast Algorithms.
  14. Logical Clocks and Event Ordering.
  15. Distributed Semaphores.
  16. Token-Passing Algorithms.
  17. Distributed Mutual Exclusion.
  18. Termination Detection in a Ring.
  19. Termination Detection in a Graph.
  20. Replicated Servers.
  21. Distributed Dining Philosophers.
  22. Decentralized Dining Philosophers.

4.4 10. Implementations.

  1. Asynchronous Message Passing.
  2. Shared-Memory Kernel.
  3. Distributed Kernel.
  4. Synchronous Message Passing.
  5. Direct Communication Using Asynchronous Messages.
  6. Guarded Communication Using a Clearing House.
  7. RPC and Rendezvous.
  8. RPC in a Kernel.
  9. Rendezvous Using Asynchronous Message Passing.
  10. Multiple Primitives in a Kernel.
  11. Distributed Shared Memory.
  12. Implementation Overview.
  13. Page Consistency Protocols.
5 Part III. PARALLEL PROGRAMMING.

  1. Speedup and Efficiency.
  2. Overheads and Challenges.

5.1 11. Scientific Computing.

  1. Grid Computations.
  2. Laplace's Equation.
  3. Sequential Jacobi Iteration.
  4. Shared Variable Program.
  5. Message Passing Program.
  6. Red/Black Successive Over-Relaxation.
  7. Multigrid Methods.
  8. Particle Computations.
  9. The Gravitational N-Body Problem.
  10. Shared Variable Program.
  11. Message Passing Programs.
  12. Approximate Methods.
  13. Matrix Computations.
  14. Gaussian Elimination.
  15. LU Decomposition.
  16. Shared Variable Program.
  17. Message Passing Program.

5.2 12. Languages, Compilers, Libraries, and Tools.

  1. Parallel Programming Libraries.
  2. Case Study: Pthreads.
  3. Case Study: MPI.
  4. Case Study: OpenMP.
  5. Parallelizing Compilers.
  6. Dependence Analysis.
  7. Program Transformations.
  8. Other Programming Models.
  9. Imperative Languages.
  10. Coordination Languages; Data Parallel Languages.
  11. Functional Languages.
  12. Abstract Models.
  13. Case Study: High Performance Fortran (HPF).
  14. Parallel Programming Tools.
  15. Performance Measurement and Visualization.
  16. Metacomputers and Metacomputing.
  17. Case Study: The Globus Toolkit.

6 End


Copyright © 2016 • www.wright.edu/~pmateti • 2019-12-09