CEG 7370: Distributed Computing Principles

Distributed Semaphores

Prabhaker Mateti

This web page is organized in a way that is useful during my lectures, instead of as ppt slides.


Semaphore Invariant, Distributed

The number nP of completed P operations is at most the number nV of completed V operations plus iV, the semaphore's initial value:

nP ≤ nV + iV.

Distributed: The state of the semaphore s must be distributed. These counters must not be maintained by just one process. Replicated?

We need a way to count P and V operations and a way to delay P operations. Moreover, the processes that share a semaphore need to cooperate so they maintain the semaphore invariant even though the program state is distributed.
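Before distributing the state, it may help to see a single-process baseline that maintains the two counters directly and delays P exactly when completing it would violate the invariant. This is a sketch under assumed names (`CountingSemaphore`, `nP`, `nV`, `iV` follow the notation above), not part of the distributed algorithm:

```python
import threading

class CountingSemaphore:
    """Centralized baseline: the semaphore's value is iV + nV - nP."""
    def __init__(self, iV):
        self.iV, self.nP, self.nV = iV, 0, 0
        self.cond = threading.Condition()

    def P(self):
        with self.cond:
            # delay until completing this P keeps nP <= nV + iV
            while self.nP + 1 > self.nV + self.iV:
                self.cond.wait()
            self.nP += 1

    def V(self):
        with self.cond:
            self.nV += 1
            self.cond.notify()

s = CountingSemaphore(1)
s.P()          # completes at once: 1 <= 0 + 1
s.V(); s.P()   # the second P is enabled by the V
```

The distributed problem is precisely that no single process may own `nP` and `nV` like this.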

Andrews Algorithm

The following algorithm implements one semaphore; let us call it s.

  1. Semaphore s is to be used by n (user) processes. n does not grow or shrink.
    User[i: 1..n]::
     var lc : int := 0  # user's logical clock
     var ts : int       # received in go messages
    
  2. Uses n additional helper processes. Helpers do not do P(s) or V(s). Initial value of s is magically given to all Helpers.
  3. Fully distributed. No process has the numerical value of s. "Way too many" messages. Strictly peer-to-peer.
  4. Uses logical clocks. As required, every send/recv includes LC.
  5. type kind = enum(VEE, PEE, ACK) used below.
  6. channel semop[1..n](sender: int, k: kind, timestamp: int)

    User processes broadcast on semop[] with k = PEE or VEE only.

  7. channel go[1..n](timestamp: int)

    This is a collection of n private channels -- in the sense that only User[i] receives on go[i] what is sent by Helper[i].

  8. Each Helper maintains var mq : queue of (int, kind, int) in which the elements are ordered by the third item, namely timestamp.
  9. Helper processes broadcast on semop[] with k = ACK only.
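Items 6–8 above can be sketched as a message record plus a timestamp-ordered queue. All names here are assumptions; ties on equal timestamps are broken by sender id, giving the usual Lamport total order:

```python
import bisect
from typing import NamedTuple

class SemOp(NamedTuple):
    timestamp: int
    sender: int
    kind: str          # "PEE", "VEE", or "ACK"

mq = []                # each Helper's queue, kept sorted

def insert_ordered(mq, msg):
    # NamedTuples compare fieldwise, so this sorts by
    # (timestamp, sender), i.e. timestamp with id tie-break
    bisect.insort(mq, msg)

insert_ordered(mq, SemOp(3, 2, "PEE"))
insert_ordered(mq, SemOp(1, 1, "VEE"))
insert_ordered(mq, SemOp(3, 1, "PEE"))
# mq is now ordered: (1,1,VEE), (3,1,PEE), (3,2,PEE)
```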

V operation

When the i-th User process wishes to do a V(s), it does the following.

 broadcast semop(i, VEE, lc);
 lc := lc + 1

P operation

When the i-th User process wishes to do a P(s), it does the following.

 broadcast semop(i, PEE, lc); lc := lc+1;
 receive go[i](ts);
 lc := max(lc, ts+1); lc := lc+1
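The two clock updates above are Lamport's logical-clock rules: the clock ticks after every send, and on a receive it first jumps past the received timestamp. A minimal sketch (function names are mine):

```python
def tick_send(lc):
    return lc + 1                 # lc := lc + 1 after a send

def tick_receive(lc, ts):
    return max(lc, ts + 1) + 1    # lc := max(lc, ts+1); lc := lc+1

lc = 0
lc = tick_send(lc)        # broadcast semop(i, PEE, 0): lc becomes 1
lc = tick_receive(lc, 7)  # receive go[i](7): lc becomes max(1, 8) + 1 = 9
```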

i-th Helper Process

The following is a slightly modified version of Figure 7.20, p. 387, of the Andrews book.

Helper[i: 1..n]::
 var mq : queue of (int, kind, int) # ordered by timestamps
 var lc : int := 0                  # logical clock
 var sem : int := initial value     # int value of semaphore
 var sender : int, k : kind, ts : int

do true -->   { loop invariant DSEM }
   receive semop[i](sender, k, ts); lc := max(lc, ts+1); lc := lc+1;
   if k = PEE  or  k = VEE  -->
      insert (sender, k, ts) at appropriate place in mq;
      broadcast semop(i, ACK, lc); lc := lc+1;
   [] k = ACK  -->
      record that another ACK has been seen;
      do for all msg in fully acknowledged V messages -->
          remove msg from mq; sem := sem + 1
      od;
      do for all msg in fully acknowledged P messages such that sem > 0 -->
          remove msg from mq; sem := sem - 1;
          if sender = i --> send go[i](lc); lc := lc+1 fi
      od
   fi
od
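The Helper loop can be exercised as a runnable single-machine simulation. This is a sketch, not the book's code: all names are assumptions, the network is one global FIFO (so every Helper sees the broadcasts in the same order), and ties in timestamps are broken by sender id, which slightly refines "larger timestamp" into the Lamport total order:

```python
from collections import deque
from dataclasses import dataclass, field

N = 2                      # number of User/Helper pairs (assumption)
INIT = 1                   # initial semaphore value (assumption)
PEE, VEE, ACK = "PEE", "VEE", "ACK"

@dataclass
class Helper:
    i: int
    sem: int = INIT
    lc: int = 0
    mq: list = field(default_factory=list)    # (ts, sender, kind), kept sorted
    last: dict = field(default_factory=dict)  # sender -> largest ts received

net = deque()                                 # global FIFO: (dest, message)
go = {i: [] for i in range(1, N + 1)}         # deliveries on go[i]
user_lc = {i: 0 for i in range(1, N + 1)}     # each User's logical clock
helpers = {i: Helper(i) for i in range(1, N + 1)}

def broadcast(sender, kind, ts):
    for d in range(1, N + 1):
        net.append((d, (sender, kind, ts)))

def user_op(i, kind):                         # User i issues P(s) or V(s)
    broadcast(i, kind, user_lc[i])
    user_lc[i] += 1

def fully_acked(h, ts, s):
    # heard a Lamport-later message from every other process
    return all((h.last.get(j, -1), j) > (ts, s)
               for j in range(1, N + 1) if j != s)

def step(h, sender, kind, ts):                # one iteration of Helper[h.i]
    h.lc = max(h.lc, ts + 1); h.lc += 1
    h.last[sender] = max(h.last.get(sender, -1), ts)
    if kind in (PEE, VEE):
        h.mq.append((ts, sender, kind)); h.mq.sort()
        broadcast(h.i, ACK, h.lc); h.lc += 1
    else:                                     # an ACK: scan the stable prefix
        for m in [m for m in h.mq if m[2] == VEE and fully_acked(h, m[0], m[1])]:
            h.mq.remove(m); h.sem += 1
        for m in [m for m in h.mq if m[2] == PEE and fully_acked(h, m[0], m[1])]:
            if h.sem > 0:
                h.mq.remove(m); h.sem -= 1
                if m[1] == h.i:               # my User's P: let it proceed
                    go[h.i].append(h.lc); h.lc += 1

def run():
    while net:
        dest, (sender, kind, ts) = net.popleft()
        step(helpers[dest], sender, kind, ts)

user_op(1, PEE); user_op(2, PEE); run()       # User 1 passes, User 2 waits
user_op(1, VEE); run()                        # the V releases User 2
```

After the run, both Helpers agree that sem is 0, both queues are empty, and each User has received exactly one go message.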

"Fully Acknowledged"

Helper processes send ACKs. They are used to determine when a message in mq has become fully acknowledged.

Consider a message m = (s, k, t) in the queue. Once the process has received a message with a larger timestamp from every other process, it is assured that it will never see a message with a smaller timestamp. At this point, message m is said to be fully acknowledged.

Moreover, once m is fully acknowledged, then every other message in front of it in mq will also be fully acknowledged since they all have smaller timestamps. Thus, the part of mq containing fully acknowledged messages is a stable prefix: No new messages will ever be inserted into it.
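The stable-prefix test can be sketched as a small function. The names here are assumptions, and `last` stands for the largest timestamp this process has received from each other process:

```python
def fully_acked(msg, last, procs):
    # acked once every other process has been heard from
    # with a larger timestamp
    ts, sender, kind = msg
    return all(last.get(j, -1) > ts for j in procs if j != sender)

def stable_prefix(mq, last, procs):
    """Return the fully acknowledged prefix of the sorted queue mq."""
    prefix = []
    for msg in mq:
        if not fully_acked(msg, last, procs):
            break                     # everything after is not yet stable
        prefix.append(msg)
    return prefix

mq = [(2, 1, "VEE"), (5, 2, "PEE"), (9, 3, "PEE")]
last = {1: 8, 2: 7, 3: 6}             # largest timestamp seen from each
stable = stable_prefix(mq, last, {1, 2, 3})
# only the first two messages are stable: 9 is not yet
# exceeded by messages from processes 1 and 2
```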

Exercises

  1. How many messages are used for one P(s)?
  2. How many messages are used for one V(s)?
  3. Will the values of sem in all the Helpers ever be equal? (After initialization, that is.)
  4. Is sender = i correct?
  5. What must we do if we need, say, three semaphores?

References

  1. Gregory R. Andrews, Concurrent Programming: Principles and Practice, Benjamin/Cummings, 1991. Chapter 7 on AMP. Required Reading.
  2. M. Ben-Ari, Principles of Concurrent and Distributed Programming, Second Edition, Addison-Wesley, ISBN 0-321-31283-X, 2006. Available on safaribooksonline.com. Chapter 10, Distributed Algorithms. Recommended Reading.

Copyright © 2012 Prabhaker Mateti