Sunday, January 3, 2010

Interprocess Communication, Shared Memory, Process State


Interprocess Communication (IPC)


  1. Mechanism for processes to communicate and to synchronize their actions.
  2. Message system – processes communicate with each other without resorting to shared variables.
  3. IPC facility provides two operations:
    1. send(message) – message size fixed or variable
    2. receive(message)
  4. If P and Q wish to communicate, they need to:
    1. establish a communication link between them
    2. exchange messages via send/receive
  5. Implementation of communication link
    1. physical (e.g., shared memory, hardware bus)
    2. logical (e.g., an abstraction such as a message queue, defined by its logical properties rather than by hardware)

Direct Communication

I. Processes must name each other explicitly:


a. send(P, message) – send a message to process P

b. receive(Q, message) – receive a message from process Q


II. Properties of communication link

a. Links are established automatically

b. A link is associated with exactly one pair of communicating processes

c. Between each pair there exists exactly one link

d. The link may be unidirectional, but is usually bi-directional

Indirect Communication

  1. Messages are sent to and received from mailboxes (also referred to as ports)
    1. Each mailbox has a unique id
    2. Processes can communicate only if they share a mailbox.
  2. Properties of communication link
    1. Link established only if processes share a common mailbox
    2. A link may be associated with many processes
    3. Each pair of processes may share several communication links
    4. Link may be unidirectional or bi-directional
  3. Operations
    1. create a new mailbox
    2. send and receive messages through mailbox
    3. destroy a mailbox
  4. Primitives are defined as:

send(A, message) – send a message to mailbox A

receive(A, message) – receive a message from mailbox A
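The mailbox primitives above can be sketched as thin wrappers around named queues. The helper names create_mailbox, send, and receive below mirror the primitives in the text; they are illustrative, not a standard API.

```python
# Sketch of indirect communication: a mailbox is a queue with a unique id.
from multiprocessing import Queue

mailboxes = {}                    # mailbox id -> queue

def create_mailbox(name):
    mailboxes[name] = Queue()

def send(name, message):          # send(A, message)
    mailboxes[name].put(message)

def receive(name):                # receive(A, message)
    return mailboxes[name].get()

create_mailbox("A")
send("A", "via mailbox A")
msg = receive("A")
```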

  1. Mailbox sharing
    1. P1, P2, and P3 share mailbox A
    2. P1 sends; P2 and P3 receive
    3. Who gets the message?
  2. Solutions
    1. Allow a link to be associated with at most two processes
    2. Allow only one process at a time to execute a receive operation
    3. Allow the system to arbitrarily select the receiver; the sender is then notified who the receiver was.

Synchronization

  1. Message passing may be either blocking or non-blocking
  2. Blocking is considered synchronous
    1. Blocking send has the sender block until the message is received
    2. Blocking receive has the receiver block until a message is available
  3. Non-blocking is considered asynchronous
    1. Non-blocking send has the sender send the message and continue
    2. Non-blocking receive has the receiver receive a valid message or null
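The blocking/non-blocking distinction can be sketched with Python's thread-safe queue module (used within a single process here for brevity; multiprocessing.Queue offers the same blocking and non-blocking calls):

```python
# Non-blocking receive returns immediately; with no message available it
# yields the "null" case (here, a queue.Empty exception). Blocking
# receive waits until a message exists.
import queue

q = queue.Queue()

try:
    q.get_nowait()           # non-blocking receive on an empty link
    got_message = True
except queue.Empty:
    got_message = False      # no valid message -> "null"

q.put("m")                   # non-blocking send: enqueue and continue
msg = q.get()                # blocking receive: waits for a message
```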

Buffering

Queue of messages attached to the link; implemented in one of three ways

1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)

2. Bounded capacity – finite length of n messages
Sender must wait if link full

3. Unbounded capacity – infinite length
Sender never waits
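The three buffering schemes differ only in the queue's capacity. A bounded-capacity link can be sketched as follows (the capacity of 2 is arbitrary):

```python
# Bounded capacity: with maxsize=2, a third send finds the link full.
# put_nowait() exposes the "sender must wait" condition as an exception.
import queue

link = queue.Queue(maxsize=2)    # bounded capacity, n = 2
link.put("m1")
link.put("m2")                   # link now full
try:
    link.put_nowait("m3")        # sender would have to wait here
    sender_blocked = False
except queue.Full:
    sender_blocked = True
```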


Shared Memory



In computing, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Depending on context, programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, for example among its multiple threads, is generally not referred to as shared memory.


In computer hardware, shared memory refers to a (typically) large block of random access memory that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system.

A shared memory system is relatively easy to program since all processors share a single view of data, and communication between processors can be as fast as memory accesses to the same location.

The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which introduces two complications:



* CPU-to-memory connection becomes a bottleneck, so shared memory computers cannot scale very well; historically, most had ten or fewer processors.


* Cache coherence: Whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors; otherwise the different processors will be working with incoherent data (see cache coherence and memory coherence). Such coherence protocols can, when they work well, provide extremely high-performance access to shared information between multiple processors. On the other hand, they can sometimes be overloaded and become a performance bottleneck.


The alternatives to shared memory are distributed memory and distributed shared memory, each having a similar set of issues.

Process State

1. As a process executes, it changes state

a. new: The process is being created

b. running: Instructions are being executed

c. waiting: The process is waiting for some event to occur

d. ready: The process is waiting to be assigned to a processor

e. terminated: The process has finished execution
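The five states above and the usual transitions between them can be sketched as a small table (illustrative only; real schedulers track more detail than this):

```python
# The five process states and the legal transitions between them.
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

TRANSITIONS = {
    State.NEW: {State.READY},               # admitted by the OS
    State.READY: {State.RUNNING},           # scheduler dispatch
    State.RUNNING: {State.READY,            # interrupt / time slice ends
                    State.WAITING,          # waits for I/O or an event
                    State.TERMINATED},      # exits
    State.WAITING: {State.READY},           # event occurs / I/O completes
    State.TERMINATED: set(),                # final state
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]
```

Note that a waiting process never goes straight back to running: it must become ready and be dispatched again by the scheduler.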



I hope this is useful for your assignment.


