Introduction to Parallel Computing
2025-08-02
Parallel computing has become essential because the hardware for it is now everywhere, from smartphones and tablets to massive supercomputers and cloud data centers. The fundamental reason for building parallel systems is to boost computational performance, that is, to make applications run much faster. At its core, parallelism is the technique of using concurrency to execute many instructions simultaneously. While this offers significant speed improvements, it comes at the cost of more complex and expensive hardware.
Parallel computing is the practice of programming for this concurrent environment, which requires a different mindset from traditional sequential programming. Programmers must understand parallel architectures well enough to redesign applications effectively, and they must reason about concurrency and synchronization rather than a single linear flow of execution.
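As a minimal sketch of this shift in mindset (the 4-way split, chunk sizes, and helper names are illustrative assumptions, not from any specific source), the example below divides a single summation into chunks that execute concurrently on separate processes:

```python
# A minimal sketch: splitting one summation task into concurrent parts.
# The chunk count (4) and the helper names are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Partition the range [0, n) into one chunk per worker.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    # Each chunk is summed concurrently; the partial results are combined.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(10_000_000))  # same value as sum(range(10_000_000))
```

The result matches the sequential `sum(range(n))`, but the programmer now has to decide how the work is partitioned and how the partial results are combined.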
Parallel vs. Distributed Computing
Though often confused, parallel and distributed computing have distinct focuses.
- Parallel Computing: Primarily concerns dividing a single computing task into smaller parts that execute concurrently on multiple processing cores. These cores could be on one chip or spread across multiple computers.
- Distributed Computing: Primarily concerns executing different or identical tasks concurrently on multiple computers connected by a network or fabric.
- Overlap: Many modern systems, such as clouds and clusters, are both parallel and distributed, making it necessary to understand concepts from both fields.
Memory Architectures
The way a parallel system accesses memory is critical, as it directly impacts performance, data consistency, and security. A small sketch contrasting the first two models follows the list below.
- Shared Memory: In this architecture, all processors can access a common memory through a global address space. A change to a memory location by one processor is immediately visible to all other processors.
- Distributed Memory: Here, each processor has its own private local memory, and there is no global address space. To share data, the programmer must explicitly define communication between processors over a network.
- Hybrid Memory: This architecture combines both shared and distributed models and is the dominant structure in high-end computing today. It typically involves multiple shared-memory machines connected via a network, forming a cluster.
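A rough sketch of the contrast between the first two models, using Python threads and processes as stand-ins (the counter, lock, and queue names are illustrative assumptions): in the shared-memory style, all threads read and write one variable in a common address space and a lock provides the synchronization; in the distributed-memory style, the processes share no variables and must exchange data through explicit messages.

```python
import threading
import multiprocessing as mp

# Shared memory: threads see one global address space, so a write by one
# thread is immediately visible to the others; a lock provides synchronization.
counter = 0
lock = threading.Lock()

def shared_increment(times):
    global counter
    for _ in range(times):
        with lock:
            counter += 1

# Distributed memory (emulated with processes): no shared variables, so data
# must be communicated explicitly, here through a message queue.
def worker(rank, queue):
    queue.put((rank, rank * rank))  # send a result back as a message

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_increment, args=(1000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("shared counter:", counter)  # 4000

    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(r, queue)) for r in range(4)]
    for p in procs: p.start()
    results = [queue.get() for _ in procs]
    for p in procs: p.join()
    print("messages received:", sorted(results))
```

On a real hybrid cluster the same split appears at a larger scale: threads (for example, OpenMP) inside each shared-memory node, and explicit messages (typically MPI) between nodes.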
Parallel Programming Models
Parallel programming models provide an abstraction layer that sits above the hardware architecture; an SPMD-style sketch follows the list below.
- SPMD (Single Program, Multiple Data): A single program is written, and each processor executes its own copy of it. The program can be written so that processors perform different actions based on their unique ID.
- MPMD (Multiple Program, Multiple Data): Each processor can execute a completely different program. A common structure is a master-slave model, where a master processor runs one program to distribute tasks, and slave processors run another program to execute them.
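A minimal SPMD-style sketch (the rank numbering, process count, and work function are assumptions for illustration): every process executes the same `program` function, and its behavior branches on its unique rank, much as MPI ranks do.

```python
import multiprocessing as mp

def program(rank, size, queue):
    # Single program: every process executes this same function,
    # but its behavior depends on its unique rank (ID).
    if rank == 0:
        # Rank 0 gathers results from all other ranks.
        results = [queue.get() for _ in range(size - 1)]
        print("rank 0 gathered:", sorted(results))
    else:
        # Every other rank computes its own piece and reports it.
        queue.put((rank, sum(range(rank * 100, (rank + 1) * 100))))

if __name__ == "__main__":
    size = 4
    queue = mp.Queue()
    procs = [mp.Process(target=program, args=(rank, size, queue)) for rank in range(size)]
    for p in procs: p.start()
    for p in procs: p.join()
```

An MPMD version of the same workflow would instead launch two different programs, one for the master that distributes and gathers work, and one for the workers that execute it.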
Copyright
Copyright Ownership: WARREN Y.F. LONG
Licensed under: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)