4 Parallel Decomposition | Concurrent Programming in Java: Creating Threads

Parallel programs are specifically designed to take advantage of multiple CPUs for solving computation-intensive problems. The main performance goals are normally throughput and scalability — the number of computations that can be performed per unit time, and the potential for improvement when additional computational resources are available. However, these are often intertwined with other performance goals. For example, parallelism may also improve response latencies for a service that hands off work to a parallel execution facility.

Among the main challenges of parallelism in the Java programming language is to construct portable programs that can exploit multiple CPUs when they are present, while at the same time working well on single processors, as well as on time-shared multiprocessors that are often processing unrelated programs.

Some classic approaches to parallelism don't mesh well with these goals. Approaches that assume particular architectures, topologies, processor capabilities, or other fixed environmental constraints are ill suited to commonly available JVM implementations. While it is not a crime to build run-time systems with extensions specifically geared to particular parallel computers, and to write parallel programs specifically targeted to them, the associated programming techniques necessarily fall outside the scope of this book. Also, RMI and other distributed frameworks can be used to obtain parallelism across remote machines. In fact, most of the designs discussed here can be adapted to use serialization and remote invocation to achieve parallelism over local networks. This is becoming a common and efficient means of coarse-grained parallel processing. However, these mechanics also lie outside the scope of this book.

We instead focus on three families of task-based designs: fork/join parallelism, computation trees, and barriers. These techniques can yield very efficient programs that exploit multiple CPUs when present, yet still maintain portability and sequential efficiency. Empirically, they are known to scale well, at least up through dozens of CPUs. Moreover, even when these kinds of task-based parallel programs are tuned to maximally exploit a given hardware platform, they require only minor retunings to maximally exploit other platforms.
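To make the fork/join idea concrete, here is a minimal sketch using the java.util.concurrent fork/join framework, the standard-library descendant of the util.concurrent FJTask library discussed in this book. The SumTask class and its THRESHOLD value are illustrative assumptions, not taken from the text: the task recursively splits an array-summing problem until the pieces are small enough to solve sequentially, then combines the partial results.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Illustrative fork/join task: sum a long[] by recursive decomposition.
    class SumTask extends RecursiveTask<Long> {
        static final int THRESHOLD = 10_000; // granularity cutoff; tune per platform
        final long[] array;
        final int lo, hi;

        SumTask(long[] array, int lo, int hi) {
            this.array = array; this.lo = lo; this.hi = hi;
        }

        protected Long compute() {
            if (hi - lo <= THRESHOLD) {      // base case: small enough to solve sequentially
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += array[i];
                return sum;
            }
            int mid = (lo + hi) >>> 1;       // split the range in half
            SumTask left = new SumTask(array, lo, mid);
            SumTask right = new SumTask(array, mid, hi);
            left.fork();                     // run the left half asynchronously
            long rightSum = right.compute(); // reuse this thread for the right half
            return left.join() + rightSum;   // combine the partial results
        }
    }

Invoking it is a one-liner: new ForkJoinPool().invoke(new SumTask(data, 0, data.length)). The THRESHOLD constant is exactly the kind of tuning parameter mentioned above; moving between platforms typically means adjusting it rather than restructuring the program.

Barrier-based designs take a complementary shape: a fixed set of workers each update their own section of a shared structure, then wait at a barrier so that no worker begins step k+1 until every worker has finished step k. Here is a brief sketch using java.util.concurrent.CyclicBarrier, with the per-step work left as a placeholder and the worker and step counts chosen arbitrarily:

    import java.util.concurrent.CyclicBarrier;

    // Illustrative barrier loop: NWORKERS threads advance in lockstep.
    class BarrierDemo {
        static final int NWORKERS = 4, STEPS = 100;

        public static void main(String[] args) {
            CyclicBarrier barrier = new CyclicBarrier(NWORKERS);
            for (int w = 0; w < NWORKERS; w++) {
                new Thread(() -> {
                    try {
                        for (int step = 0; step < STEPS; step++) {
                            // ... update this worker's section of the shared data ...
                            barrier.await(); // block until every worker finishes this step
                        }
                    } catch (Exception e) { // InterruptedException, BrokenBarrierException
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }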

As of this writing, probably the most common targets for these techniques are application servers and compute servers that are often, but by no means always, multiprocessors. In either case, we assume that CPU cycles are usually available, so the main goal is to exploit them to speed up the solution of computational problems. In other words, these techniques are unlikely to be very helpful when programs are run on computers that are already nearly saturated.

