Interprocessor Communication and Synchronization in Computer Architecture

YASH PAL, March 7, 2026

Interprocess communication is the mechanism that allows processes to communicate with each other. This communication may involve one process letting another know that some event has occurred, or the transfer of data from one process to another.

Synchronization is often necessary when processes communicate. Processes execute at unpredictable speeds, yet to communicate, one process must perform some action, such as setting the value of a variable or sending a message, that the other detects. This works only if the events that perform an action and detect it are constrained to happen in that order. Thus, synchronization can be viewed as a set of constraints on the ordering of events. The programmer employs a synchronization mechanism to delay the execution of a process in order to satisfy such constraints.

[Figure: Interprocess communication]

Synchronization in Interprocess Communication

Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess control mechanism or handled by the communicating processes. Some of the methods that provide synchronization are:

Semaphore: A semaphore is a variable that controls access to a common resource by multiple processes. The two types of semaphores are binary semaphores and counting semaphores.

Mutual Exclusion: Mutual exclusion requires that only one process or thread enter the critical section at a time. This is useful for synchronization and also prevents race conditions.

Barrier: A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel languages and collective routines impose barriers.

Spinlock: This is a type of lock.
The processes trying to acquire this lock wait in a loop while repeatedly checking whether the lock is available. This is known as busy waiting, because the process does no useful work even though it remains active.

Communication between processes can be seen as a method of cooperation between them. Processes can communicate with each other in two ways:

Shared memory
Message passing

Shared Memory Method

We first discuss the shared memory method of communication, and then message passing. Communication between processes using shared memory requires the processes to share some variables, and it depends entirely on how the programmer implements it. The figure below shows the basic structure of communication between processes via shared memory and via message passing.

[Figure: Shared Memory and Message Passing]

Let us take the Producer-Consumer problem as an example to understand the shared memory method. There are two processes: the Producer and the Consumer. The Producer produces some item, and the Consumer consumes that item. The two processes share a common memory region known as a buffer, where the item produced by the Producer is stored and from which the Consumer takes the item when needed. There are two versions of this problem:

Unbounded buffer problem: the producer can keep producing items, and there is no limit on the size of the buffer.

Bounded buffer problem: the producer can produce up to a certain number of items, after which it waits for the consumer to consume them.

First, the Producer and the Consumer share some common memory; then the Producer starts producing items. If the number of produced items equals the size of the buffer, the Producer waits for the Consumer to consume some. Similarly, the Consumer first checks for the availability of an item; if no item is available, the Consumer waits for the Producer to produce one. If items are available, the Consumer consumes one.
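The bounded-buffer protocol described above can be sketched with Python threads standing in for processes. This is a minimal sketch, assuming a buffer of size 3; the names buffer, empty, full, and mutex are illustrative, not taken from the text.

```python
import threading

BUFFER_SIZE = 3          # bounded buffer: the producer blocks when it is full
buffer = []              # the shared memory region
empty = threading.Semaphore(BUFFER_SIZE)  # counting semaphore: free slots
full = threading.Semaphore(0)             # counting semaphore: filled slots
mutex = threading.Lock()                  # mutual exclusion on the buffer

produced = [f"item-{i}" for i in range(5)]
consumed = []

def producer():
    for item in produced:
        empty.acquire()          # WAIT for a free slot
        with mutex:
            buffer.append(item)  # critical section
        full.release()           # SIGNAL that a slot is filled

def consumer():
    for _ in range(len(produced)):
        full.acquire()           # WAIT for an available item
        with mutex:
            consumed.append(buffer.pop(0))
        empty.release()          # SIGNAL that a slot is free

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # items arrive in production order
```

Note how the counting semaphores encode the bounded-buffer constraint directly: the producer can run at most BUFFER_SIZE items ahead of the consumer before its empty.acquire() blocks.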
Message Passing Method

In this method, processes communicate with each other without using any kind of shared memory. If two processes, p1 and p2, want to communicate with each other, they proceed as follows:

Establish a communication link (if a link already exists, there is no need to establish it again).

Start exchanging messages using basic primitives. We need at least two primitives: send(message, destination) or send(message), and receive(message, host) or receive(message).

[Figure: Message Passing Method]

Mechanism for a Structured Form of Interprocess Communication and Synchronization

In the sections above, we saw that the semaphore is a powerful tool for interprocess synchronization and mutual exclusion. Its simplicity and ease of implementation have made the semaphore a very popular tool, found in most operating system packages. This mechanism also has some drawbacks:

Semaphores are unstructured. They force programmers to follow the synchronization protocol (WAIT and SIGNAL) by convention. Any change in the WAIT and SIGNAL operation sequence, forgetting either of them, or simply jumping around them may easily corrupt or block the entire system.

Semaphores do not support data abstraction. Data abstraction is a software model that specifies a set of data and the operations that can be performed on that data. Even when used properly, semaphores can only protect access to critical sections; they cannot restrict the type of operations performed on shared resources by processes that have been granted permission.

Semaphores encourage interprocess communication via global variables, while only protecting against the dangers of concurrency. Such global variables remain vulnerable to illegal or meaningless manipulation by processes that are legally allowed to modify them and that execute the semaphore operations themselves correctly.
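By contrast with the semaphore protocol, the send/receive primitives of message passing can be sketched with Python's queue module, with threads p1 and p2 standing in for the two processes. The Queue object plays the role of the communication link, and the STOP sentinel is an illustrative convention, not part of the original text.

```python
import queue
import threading

# The "communication link": a FIFO message queue. The two processes
# exchange messages through it rather than through shared variables.
link = queue.Queue()

def send(message):
    link.put(message)       # the send(message) primitive

def receive():
    return link.get()       # the receive(message) primitive (blocks if empty)

received = []

def p1():
    for msg in ("hello", "world", "STOP"):
        send(msg)

def p2():
    while True:
        msg = receive()
        if msg == "STOP":   # illustrative end-of-conversation sentinel
            break
        received.append(msg)

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['hello', 'world']
```

Because receive() blocks until a message is available, the synchronization is built into the primitives themselves: there is no WAIT/SIGNAL protocol for the programmer to get wrong, which is exactly the structural weakness of semaphores noted above.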