Distributed Processing

Two directions in which traditional operating systems have extended their processing model:

1. support for threads
2. distributed processing

Threads

Have talked about threads in CS3013, but review a little.

Also called lightweight processes. Contain an execution state within a shared address space.

Threads are natural to use for a server handling requests. Each request can be handled by a thread.
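The thread-per-request structure can be sketched as a small echo server (the port choice, echo protocol, and function names here are my own illustration, not from the notes):

```python
import socket
import threading

def handle(conn):
    # Each request is handled entirely by its own thread.
    with conn:
        data = conn.recv(1024)   # read the request
        conn.sendall(data)       # send the reply

def start_server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # ephemeral port
    srv.listen()

    def accept_loop():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:      # listening socket closed: shut down
                return
            # Hand the new request to a fresh thread; all threads
            # share the server's address space.
            threading.Thread(target=handle, args=(conn,)).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv, srv.getsockname()[1]
```

Because all handler threads share one address space, shared server state (caches, statistics) is directly accessible, which is the main attraction over one process per request.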

Threads vs. Processes

Terminology among different systems:

Distributed OS kernel   Thread name   Exec. Env. Name
Amoeba                  Thread        Process
Chorus                  Thread        Actor
Mach                    Thread        Task
V System                Process       Team
Unix                    -             Process

Thread Implementation Issues

User vs. Kernel threads. Fig 12-8 (from perspective of user threads):

+ can implement on a system not supporting kernel threads
+ fast creation of threads (kernel not involved)
+ fast switching between threads (kernel not involved)
+ customized scheduling algorithm
- must have jackets around system calls that may block
- no clock interrupts for time slicing
- when a program makes many system calls, the kernel is entered anyway, so switching threads in the kernel adds little extra work
- do not gain parallelism on a multiprocessor (the kernel schedules only one execution context)
- for all thread packages, must worry about non-reentrant code (e.g., the shared errno variable)

System Models for Distributed Computing

Processor Pool Model

Amoeba. Best argument for using this approach comes from queueing theory: ``Replacing n small resources by one big one that is n times more powerful reduces the average response time n-fold.''
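The claim follows from the M/M/1 mean response time formula T = 1/(mu - lambda). A quick check with illustrative rates (the numbers are my own example):

```python
def mm1_response_time(lam, mu):
    # Mean response time of an M/M/1 queue with arrival rate lam
    # and service rate mu (requires lam < mu for stability).
    assert lam < mu
    return 1.0 / (mu - lam)

n = 10
lam, mu = 0.5, 1.0

# One of n separate small servers, each serving its own stream:
t_small = mm1_response_time(lam, mu)          # 1/(1.0 - 0.5) = 2.0

# One pooled server n times faster, serving all n streams:
t_big = mm1_response_time(n * lam, n * mu)    # 1/(10 - 5) = 0.2

# t_big == t_small / n: the pool is n times more responsive.
```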

Dedicated processors. Do not support user interaction.

Workstation Model

Every user has their own computer. Can share computing resources.

Models for workstations and their use of files. Look at Fig 12-18 as a summary. Talk about load balancing later.

Can show both theoretically and practically that many idle nodes exist at any one time. Look at Fig 11.1 and 11.2 from Singhal.

Would like to use these idle nodes. What is idle?

What is performance we are trying to optimize?

Scheduling Issues

Components

Examples

Co-Scheduling

Use co-scheduling (also called gang scheduling) to get processes that are cooperating to run at the same time.

Requirements for Load Sharing

Task Migration

State transfer and then unfreeze

Load Sharing

Distriblets

Receiver-initiated approach for Internet-wide scale.

http://distriblets.wpi.edu/

ELZ Paper

``Adaptive Load Sharing in Homogeneous Distributed Systems'' by Eager, Lazowska and Zahorjan

Sender-initiated policies.

It is adaptive in that policies react to system state. Basic idea is to understand how relatively simple load sharing policies work. Problems with complex policies:

Identified a transfer policy (when to transfer) and a location policy (where to transfer).

Assume a system model of 20 homogeneous nodes with some processor cost assigned to each task transfer.

Exponentially distributed arrival rates and service times. Used an analytical study backed up by a simulation.

Policies: all try to transfer a new task if the queue length is greater than or equal to a threshold T.

Look at primary results. Basically show that simple is good.
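The sender-initiated threshold scheme can be sketched as follows (the function names, probe limit, and threshold values are my own illustration of the transfer/location split, not the paper's code):

```python
import random

def place_task(local_queue_len, queue_lens, T=2, probes=3):
    """Decide where a newly arrived task should run.

    Transfer policy: attempt a transfer only if the local queue
    length is at or above the threshold T.
    Location policy (threshold probing): probe up to `probes` random
    nodes and pick the first whose queue length is below T.
    Returns a node index, or None to run the task locally.
    """
    if local_queue_len < T:
        return None          # transfer policy: keep the task local
    candidates = random.sample(range(len(queue_lens)),
                               min(probes, len(queue_lens)))
    for node in candidates:
        if queue_lens[node] < T:
            return node      # location policy found an underloaded node
    return None              # all probes failed: run locally anyway
```

Capping the number of probes is what keeps the policy simple and cheap, which is the paper's point: this simple scheme captures most of the benefit of far more elaborate ones.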

Wills and Finkel Work

``Load Sharing Using Multicasting''

The buddy policy by Shin and Chang broadcasts state changes to a buddy set of nodes.

Node states: underloaded (eligible to receive tasks), medium, fully loaded (tries to transfer tasks)
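The three-state classification can be sketched with two queue-length thresholds (the threshold values here are my own illustrative choices):

```python
LOW, HIGH = 1, 3   # assumed thresholds, not from the paper

def node_state(queue_len):
    # Classify a node by its current queue length.
    if queue_len <= LOW:
        return "underloaded"   # eligible to receive tasks
    if queue_len >= HIGH:
        return "fully loaded"  # tries to transfer tasks
    return "medium"            # neither sends nor receives
```

A node multicasts to its buddy set only when it crosses a threshold and changes state, so steady load generates no messages.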

Policies:

Look at results. Can also look at scaling.

Implemented as an MQP on WPI's Beowulf cluster (Spring 2000). The system is called PANTS (PANTS Application Node Transparency System).