Introduction:
Distributed processes often need to coordinate their activities. If a collection of processes share a resource or collection of resources, then often mutual exclusion is required to prevent interference and ensure consistency when accessing the resources.
Mutual exclusion ensures that, at any given time, only one of the concurrent processes is allowed to access the common or shared resources. In distributed systems, where multiple sites are involved, this is called distributed mutual exclusion (DME).
Mutual exclusion is a fundamental problem in distributed computing systems. It ensures that concurrent access by processes to a shared resource or data is serialized, that is, executed in a mutually exclusive manner. Mutual exclusion in a distributed system requires that only one process is allowed to execute the critical section (CS) at any given time.
In a distributed system, shared variables (semaphores) or a local kernel cannot be used to implement mutual exclusion. Distributed mutual exclusion is therefore based on message passing.
Essential requirements for mutual exclusion are as follows:
ME1 (safety): At most one process may execute in the critical section (CS) at a time; no two sites are ever allowed to enter the critical section simultaneously.
ME2 (liveness): Requests to enter and exit the critical section eventually succeed. This implies freedom from both deadlock and starvation: a site that enters the critical section must release it in finite time, so that every other site gets a fair chance to enter.
ME3 (ordering): Access is granted in happened-before order. If one request to enter the CS happened-before another, then entry to the CS is granted in that order: if request A happens-before request B, then A is granted before B.
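The ordering requirement is usually enforced with Lamport timestamps: each request carries a pair (T, pid), and ties on the timestamp are broken by process id so the order is total. A minimal sketch (the function name and tuple layout are our own illustration, not from these notes):

```python
# Sketch: requests are tagged (T, pid); lexicographic comparison on the
# pair yields the total order used to grant CS entry (ME3).

def happened_before(req_a, req_b):
    """Return True if request req_a is ordered before request req_b."""
    return req_a < req_b   # tuple comparison: timestamp first, then pid

# p2's request at time 34 is ordered before p1's request at time 41.
assert happened_before((34, 2), (41, 1))
# Equal timestamps are resolved by process id, so the order is still total.
assert happened_before((10, 1), (10, 2))
```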
Performance Evaluation Criteria:
We evaluate the performance of algorithms for mutual exclusion according to the following criteria:
- the bandwidth consumed, which is proportional to the number of messages sent in each entry and exit operation;
- the client delay incurred by a process at each entry and exit operation;
- the algorithm’s effect upon the throughput of the system. This is the rate at which the collection of processes as a whole can access the critical section, given that some communication is necessary between successive processes. We measure the effect using the synchronization delay between one process exiting the critical section and the next process entering it; the throughput is greater when the synchronization delay is shorter.
The Central Server Algorithm:
The central server algorithm can be seen as a token-based algorithm: the system maintains a single token and ensures that only one process at a time holds it. A process may not enter the CS unless it holds the token.
The simplest way to achieve mutual exclusion is to employ a server that grants permission to enter the critical section. To enter a critical section, a process sends a request message to the server and awaits a reply from it. Conceptually, the reply constitutes a token signifying permission to enter the critical section. If no other process has the token at the time of the request, then the server replies immediately, granting the token. If the token is currently held by another process, then the server does not reply, but queues the request.
When a process exits the critical section, it sends a message to the server, giving it back the token. If the queue of waiting processes is not empty, then the server chooses the oldest entry in the queue, removes it and replies to the corresponding process. The chosen process then holds the token.
In short, the Central Server Algorithm works as follows:
· A server serves as the coordinator for the CS
· Any process that needs to access the CS sends a request to the coordinator
· The coordinator puts requests in a queue in the order it receives them and grants permission to the process that is at the head of the queue
· When a process exits the CS, it sends a release message to the coordinator
In the above figure, we show a situation in which p2's request has been appended to the queue, which already contained p4's request. p3 exits the critical section, and the server removes p4's entry and grants permission to enter to p4 by replying to it. Process p1 does not currently require entry to the critical section.
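The server's behaviour can be sketched in a few lines (a minimal illustration with names of our own choosing, ignoring real message passing and failures):

```python
from collections import deque

# Sketch of the central server: one token, one FIFO queue of waiters.
class CentralServer:
    def __init__(self):
        self.holder = None          # process currently holding the token
        self.queue = deque()        # requests queued while the token is out

    def request(self, pid):
        """Grant the token immediately if free, otherwise queue the request.
        Returns True if pid may enter the CS now."""
        if self.holder is None:
            self.holder = pid       # no other process has the token
            return True
        self.queue.append(pid)      # token held elsewhere: queue, no reply
        return False

    def release(self, pid):
        """pid returns the token; pass it to the oldest waiter, if any."""
        assert self.holder == pid
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder          # the process now granted entry, or None

# The scenario from the figure: p3 holds the token, p4 then p2 queue up.
server = CentralServer()
assert server.request("p3") is True     # p3 enters the CS
server.request("p4")                    # queued
server.request("p2")                    # queued behind p4
assert server.release("p3") == "p4"     # oldest entry p4 gets the token
```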
A Ring-Based Algorithm:
In Ring based algorithm, the processes are arranged in a logical ring. A token passes through the processes in a single direction. A process can access the CS when it receives the token. It forwards the token to its neighbor when it exits the CS.
If a process receives the token and does not need to access the CS, it immediately forwards the token to its neighbour. A process that requires entry waits until it receives the token, then retains it while it is in the critical section. To exit the critical section, the process sends the token on to its neighbour.
This algorithm continuously consumes network bandwidth (except when a process is inside the critical section): the processes send messages around the ring even when no process requires entry to the critical section.
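One circuit of the token can be simulated as follows (a sketch under our own simplifying assumptions: processes are numbered 0..n-1, and `wants_cs` is the set that currently wants entry):

```python
# Sketch: forward the token around an n-process logical ring, starting at
# token_at, until it reaches a process that wants the critical section.
def ring_round(n, token_at, wants_cs):
    pid = token_at
    for _ in range(n):
        if pid in wants_cs:
            return pid              # retain the token and enter the CS
        pid = (pid + 1) % n         # not needed here: forward to neighbour
    return token_at                 # nobody wants it; it keeps circulating

# Token at p0; p3 and p5 want the CS, so p3 (reached first) gets it.
assert ring_round(6, 0, {3, 5}) == 3
```

Note that the loop runs even when `wants_cs` is empty, which mirrors the bandwidth observation above: the token circulates regardless of demand.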
Ricart–Agrawala Algorithm:
Ricart and Agrawala [1981] developed an algorithm to implement mutual exclusion between N peer processes that is based upon multicast. The basic idea is that processes that require entry to a critical section multicast a request message, and can enter it only when all the other processes have replied to this message.
Each process records its state of being outside the critical section (RELEASED), wanting entry (WANTED) or being in the critical section (HELD) in a variable state.
On initialization
    state := RELEASED;
To enter the critical section
    state := WANTED;
    Multicast request to all processes;
    T := request's timestamp;
    Wait until (number of replies received = (N – 1));
    state := HELD;
On receipt of a request <Ti, pi> at pj (i ≠ j)
    if (state = HELD or (state = WANTED and (T, pj) < (Ti, pi)))
    then
        queue request from pi without replying;
    else
        reply immediately to pi;
    end if
To exit the critical section
    state := RELEASED;
    reply to any queued requests;
To illustrate the algorithm, consider a situation involving three processes, p1, p2 and p3, shown in the figure above. Let us assume that p3 is not interested in entering the critical section, and that p1 and p2 request entry concurrently. The timestamp of p1's request is 41, and that of p2 is 34. When p3 receives their requests, it replies immediately. When p2 receives p1's request, it finds that its own request has the lower timestamp and so does not reply, holding p1 off. However, p1 finds that p2's request has a lower timestamp than that of its own request and so replies immediately. On receiving this second reply, p2 can enter the critical section. When p2 exits the critical section, it will reply to p1's request and so grant it entry.
To Enter Critical Section:
· When a process pi wants to enter the critical section, it sends a time-stamped REQUEST message to all other processes (multicast).
· When a process pj receives a REQUEST message from process pi, it sends a REPLY message to pi if and only if:
o pj is neither requesting nor currently executing the critical section, or
o pj is requesting, but the timestamp of pi's request is smaller than that of its own request. Otherwise, the request is deferred by pj.
To Execute Critical Section:
· Process pi enters the critical section once it has received REPLY messages from all other processes.
To Release the Critical Section:
· Upon exiting the critical section, pi sends a REPLY message to all deferred requests.
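The receive-side rule above can be sketched as a small decision function (an illustration with names of our own choosing; requests are (timestamp, pid) pairs as in the pseudocode):

```python
# Sketch of pj's rule on receiving pi's request in Ricart–Agrawala.
RELEASED, WANTED, HELD = "RELEASED", "WANTED", "HELD"

def on_request(state, own_req, incoming_req):
    """Decide whether pj replies at once or defers pi's request.
    own_req and incoming_req are (timestamp, pid) pairs; own_req may be
    None when pj is not requesting."""
    if state == HELD:
        return "defer"                      # pj is inside the CS
    if state == WANTED and own_req < incoming_req:
        return "defer"                      # pj's own request is earlier
    return "reply"                          # grant immediately

# The scenario from the notes: p1 requests at T=41, p2 at T=34.
assert on_request(RELEASED, None, (41, 1)) == "reply"   # p3 replies at once
assert on_request(WANTED, (34, 2), (41, 1)) == "defer"  # p2 holds p1 off
assert on_request(WANTED, (41, 1), (34, 2)) == "reply"  # p1 replies to p2
```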
Maekawa’s Voting Algorithm:
Maekawa’s Algorithm is an algorithm for achieving mutual exclusion in a distributed system. It is a deadlock free algorithm that ensures that only one process can enter the critical section at any given time. The algorithm allows multiple processes to access a shared resource without interfering with each other. It is based on the concept of quorum.
A quorum (voting set) is defined as a set of processes that satisfies two conditions:
· Any two quorums have at least one process in common.
· Every process belongs to its own quorum.
The algorithm works as follows:
· Each process is assigned a quorum (voting set) of processes.
· When a process wants to access the shared resource, it sends a request message to all the processes in its quorum.
· When a process receives a request message, it checks whether it has already voted for another outstanding request. If it has not, it sends a reply message (its vote); otherwise, it queues the request.
· If a process receives a reply message from all the processes in its own quorum, it is granted access to the shared resource.
· Once the process has finished accessing the shared resource, it sends a release message to all the processes in its quorum; each of them can then vote for the next queued request, if any.
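One standard way to build quorums that satisfy the intersection property is a √N × √N grid: process i's quorum is its whole row plus its whole column, so any two quorums share at least one process while each quorum has only about 2√N members. A sketch (the grid construction is a well-known choice we assume here, not spelled out in these notes):

```python
import math

# Sketch: grid quorums for Maekawa's algorithm, assuming n is a
# perfect square and processes are numbered 0..n-1.
def grid_quorum(i, n):
    k = math.isqrt(n)                 # side length of the k x k grid
    row, col = divmod(i, k)
    row_members = {row * k + c for c in range(k)}
    col_members = {r * k + col for r in range(k)}
    return row_members | col_members  # process i's row plus its column

n = 9
quorums = [grid_quorum(i, n) for i in range(n)]
# Any two quorums intersect -- the property mutual exclusion relies on.
assert all(q1 & q2 for q1 in quorums for q2 in quorums)
# Each process belongs to its own quorum, and quorums have 2*sqrt(n)-1 members.
assert all(i in quorums[i] for i in range(n))
assert all(len(q) == 2 * 3 - 1 for q in quorums)
```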