Distributed Maple is an environment for writing parallel programs in the computer algebra system Maple. It allows the user to create concurrent tasks and to have them executed by Maple kernels running on various machines of a network. In the following, we sketch the use of the system, its programming interface, and its software architecture. For more details, see .
The user interacts with Distributed Maple via a conventional Maple frontend (text or graphical), i.e., she operates within the familiar Maple environment for writing and executing parallel programs. Maple commands establish a distributed session in which tasks are created for execution on any connected machine (see Figure *). The session trace below demonstrates the use of the environment by a simple example:
milkyway!> maple
    |\^/|      Maple V Release 5 (Universitae...
._|\|   |/|_.  Copyright (c) 1981-1997 by Wat...
 \  MAPLE  /   reserved. Maple and Maple V ar...
 <____ ____>   Waterloo Maple Inc.
      |        Type ? for help.
> read `dist.maple`;
Distributed Maple V1.0.7 (c) 1998 Wolfgang S...
See http://www.risc.uni-linz.ac.at/software/...
> dist[initialize]
>   ([[gintonic,linux], [andromeda,octane]]);
connecting gintonic...
connecting andromeda...
okay
> t1 := dist[start](int, x^n, x);
                                   t1 := 0
> t2 := dist[start](int, x^n, n);
                                   t2 := 1
> dist[wait](t1) + dist[wait](t2);
                              (n + 1)      n
                             x            x
                             --------- + -----
                               n + 1     ln(x)
> dist[terminate]();
okay
> quit;
We first load the file dist.maple, which implements the interface to the distributed backend by a Maple package dist. By issuing the command dist[initialize], we ask the system to start the distributed backend and to create two additional Maple kernels: one on machine gintonic of type linux and one on machine andromeda of type octane. The machine types are used to look up the system-specific startup information located in the file dist.systems in the current directory.
After the distributed session has been successfully established, the calls of dist[start] create two tasks evaluating the Maple expressions int(x^n, x) and int(x^n, n), respectively. The two dist[wait] calls block the current execution until the corresponding tasks have terminated and then return their results. Finally, the distributed session is closed by a call of dist[terminate]. If the user issues the command dist[visualize] during the session, a window pops up that displays online the state of each machine and the total utilization, as shown in Figure *. By issuing the command dist[trace], a trace file is produced that may be used after program execution to generate corresponding diagrams for printing (such as those in Section *).
Apart from the commands dist[initialize] and dist[terminate] that establish and close a distributed session, we need a possibility to initialize the Maple kernels on all connected machines by loading user code and program libraries. The core task creation command is dist[start](f, a, ...), which creates a task evaluating the expression f(a, ...).
The core of Distributed Maple is a scheduler program which is implemented in Java (package dist with main class Scheduler) and is completely independent of, and even unaware of, Maple; it can in fact embed and schedule tasks from any kind of computation kernel that implements a specific communication protocol. Correspondingly, each node connected to a Distributed Maple session comprises two components (see Figure *):
The scheduler coordinates the interaction between nodes and schedules tasks
among nodes. The initial scheduler process (invoked from the Maple kernel
attached to the user frontend) reads all application-specific information from
the configuration file
dist.systems; it then starts instances of the
scheduler on other machines and communicates with them via Internet sockets.
The Maple file
dist.maple read by every Maple kernel implements the
interface between Maple kernel and scheduler. Communication between both
components is based on Unix pipes to which and from which messages are
written; these messages may embed Maple expressions (in the compact linear
format that Maple uses for library files).
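The exact message format on these pipes is not specified here, so the following Java sketch only illustrates one plausible framing under an explicit assumption: a decimal length prefix followed by the payload text, which could embed a Maple expression.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical framing for messages on a byte stream (a stand-in for the
// Unix pipe): a decimal length, a newline, then the payload bytes.
public class Framing {
    static void write(OutputStream out, String msg) throws IOException {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        out.write((body.length + "\n").getBytes(StandardCharsets.UTF_8));
        out.write(body);
        out.flush();
    }

    static String read(InputStream in) throws IOException {
        StringBuilder len = new StringBuilder();
        int c;
        while ((c = in.read()) != '\n') len.append((char) c);
        byte[] body = new byte[Integer.parseInt(len.toString())];
        new DataInputStream(in).readFully(body);   // read exactly the payload
        return new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream pipe = new ByteArrayOutputStream();
        write(pipe, "task 3: int(x^n, x)");        // embeds a Maple expression
        String back = read(new ByteArrayInputStream(pipe.toByteArray()));
        System.out.println(back);
    }
}
```

Length-prefixed framing keeps message boundaries unambiguous on a stream that carries arbitrary expression text.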
The scheduler is implemented by a number of concurrent threads as shown in Figure *. Threads listening on all input channels put the received messages into a central buffer from where a server thread takes and processes them and creates new messages that are placed in some of the output buffers. Threads listening on these buffers take these messages and send them to the corresponding output channels. The multithreaded implementation of the communication interface is intended to maximize the overall throughput of the system.
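The thread structure described above can be sketched in Java roughly as follows; the class and the toy routing policy (a channel number prefixed to each message) are illustrative assumptions, not the actual Scheduler code.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the message-pumping structure: listener threads feed one
// central inbox; a server thread routes each message to a per-channel
// output queue that a sender thread drains.
public class MessagePump {
    // central buffer shared by all listener threads
    final BlockingQueue<String> inbox = new ArrayBlockingQueue<>(1024);
    // one output buffer per channel
    final List<BlockingQueue<String>> outboxes = new CopyOnWriteArrayList<>();

    public BlockingQueue<String> addChannel() {
        BlockingQueue<String> out = new ArrayBlockingQueue<>(1024);
        outboxes.add(out);
        return out;
    }

    // one step of the server thread: take a message, route it, enqueue it
    public void serveOne() throws InterruptedException {
        String msg = inbox.take();
        int target = route(msg);            // routing policy is an assumption
        outboxes.get(target).put(msg);
    }

    // toy routing policy for illustration: "channel:payload"
    int route(String msg) {
        return Integer.parseInt(msg.substring(0, msg.indexOf(':')));
    }

    public static void main(String[] args) throws Exception {
        MessagePump pump = new MessagePump();
        pump.addChannel();                         // channel 0
        BlockingQueue<String> out1 = pump.addChannel();
        pump.inbox.put("1:task 17");               // as if from a listener thread
        pump.serveOne();
        System.out.println(out1.take());
    }
}
```

Blocking queues give exactly the decoupling the text describes: slow output channels back-pressure their sender threads without stalling the listeners or the server.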
All remote schedulers send new tasks to the central scheduler, which distributes them among all machines. Currently a simple load balancing scheme is used in which the central scheduler assigns new tasks to a remote scheduler until the number of uncompleted tasks reaches an upper bound; a remote scheduler asks for new tasks whenever the number of tasks that have been received but not yet started falls below a lower bound. With this "watermark" scheme, the communication latency for the transfer of a new task (after the termination of a task) can be masked by the execution of an already received task.
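A minimal sketch of the watermark bookkeeping follows; the concrete bounds and names are illustrative assumptions, since the actual values are not given here.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the two-watermark load balancing scheme:
// the central side stops sending at UPPER outstanding tasks, the remote
// side requests more when its local queue falls below LOWER.
public class Watermark {
    static final int UPPER = 4;   // illustrative upper bound
    static final int LOWER = 2;   // illustrative lower bound

    int outstanding = 0;                               // sent, not completed (central view)
    final Deque<String> pending = new ArrayDeque<>();  // received, not started (remote view)

    // central side: may we assign another task to this remote scheduler?
    boolean maySend() { return outstanding < UPPER; }

    void send(String task) { outstanding++; pending.add(task); }

    // remote side: start one task; report whether to ask for new tasks
    boolean startOne() {
        pending.poll();
        return pending.size() < LOWER;   // true = "request more from central"
    }

    public static void main(String[] args) {
        Watermark w = new Watermark();
        while (w.maySend()) w.send("t" + w.outstanding);  // fill up to UPPER
        System.out.println(w.maySend());   // bound reached
        System.out.println(w.startOne());  // 3 still pending, no request
        System.out.println(w.startOne());  // 2 pending, still at LOWER
        System.out.println(w.startOne());  // below LOWER, request more
    }
}
```

Because the remote queue is refilled before it empties, a kernel that finishes a task usually finds the next one already local, which is exactly the latency masking the text describes.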
When a new task is created, the scheduler allocates a corresponding "empty" slot in the result table, which is filled with the result value when the task has completed execution. Thus the result of a task is always stored on the machine where the task was created (not necessarily where it is eventually executed); from the identifier of a task, the scheduler can determine the node holding the task result and route requests correspondingly. Additionally, each scheduler holds a cache of all results that it has ever seen, so that some requests for non-local task results can be satisfied immediately.
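The identifier-based routing can be sketched as follows; the bit-level encoding of the owning node inside the task identifier is a hypothetical choice for illustration, not the scheme actually used by Distributed Maple.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of the result table: a task identifier encodes the node that
// created the task (and thus owns its result slot); each scheduler also
// caches every result it has seen in transit.
public class ResultTable {
    final int nodeId;
    final Map<Long, String> results = new HashMap<>(); // slots owned here
    final Map<Long, String> cache = new HashMap<>();   // results seen in transit

    ResultTable(int nodeId) { this.nodeId = nodeId; }

    // illustrative encoding: low 16 bits of the identifier name the owner
    static int ownerOf(long taskId) { return (int) (taskId & 0xFFFF); }

    static long makeTaskId(int owner, int serial) {
        return ((long) serial << 16) | owner;
    }

    // a locally created task gets an empty slot, filled on completion
    void allocate(long taskId) { results.put(taskId, null); }
    void fill(long taskId, String value) { results.put(taskId, value); }

    // resolve a result: local slot if owned here, else the cache; a miss
    // means the request must be routed to the owner node
    Optional<String> lookup(long taskId) {
        if (ownerOf(taskId) == nodeId) return Optional.ofNullable(results.get(taskId));
        return Optional.ofNullable(cache.get(taskId));
    }

    public static void main(String[] args) {
        ResultTable node0 = new ResultTable(0);
        long t = makeTaskId(0, 42);   // created on node 0, maybe run elsewhere
        node0.allocate(t);
        node0.fill(t, "x^(n+1)/(n+1)");
        System.out.println(node0.lookup(t).get());  // served from the local slot
        System.out.println(ownerOf(t));             // routing target for others
    }
}
```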
If the scheduler cannot immediately deliver a task result requested by the attached computation kernel, it instead sends the kernel a new task for execution. The Maple kernel continues with the execution of this new task (by a recursive invocation of the server loop) until it has been completed. If the originally requested result is then available, the kernel resumes the execution of the previous task; otherwise it receives yet another task for execution. In this way every kernel (including the kernel connected to the user frontend) computes permanently as long as sufficiently many tasks are available.
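This "compute while waiting" behaviour can be sketched sequentially; the task bodies, identifiers and the deadlock check below are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of a kernel that, while waiting for one result, keeps executing
// other pending tasks instead of idling (the recursive server loop).
public class BusyWait {
    final Map<Integer, String> results = new HashMap<>();
    final Deque<Runnable> pending = new ArrayDeque<>();
    int nextId = 0;
    int executed = 0;

    int spawn(Supplier<String> body) {
        final int id = nextId++;
        pending.add(() -> results.put(id, body.get()));
        return id;
    }

    // wait for a task result; run other pending tasks in the meantime
    String waitFor(int id) {
        while (!results.containsKey(id)) {
            Runnable other = pending.poll();
            if (other == null) throw new IllegalStateException("no runnable task");
            other.run();       // one recursive server-loop step
            executed++;
        }
        return results.get(id);
    }

    public static void main(String[] args) {
        BusyWait kernel = new BusyWait();
        int a = kernel.spawn(() -> "x^(n+1)/(n+1)");
        int b = kernel.spawn(() -> "x^n/ln(x)");
        // waiting for b forces this kernel to work through a first
        System.out.println(kernel.waitFor(b));
        System.out.println(kernel.executed);  // both tasks ran on this kernel
        System.out.println(kernel.waitFor(a)); // now answered immediately
    }
}
```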
A watchdog thread in every remote scheduler checks at regular intervals whether messages have been received from the central scheduler. If no message has been received during the last control period, the watchdog sends a "ping" message to the central scheduler. If no reply arrives during the subsequent control period, the watchdog assumes that the connection is broken and aborts the external application process and the scheduler process. This ensures that broken sessions do not leave stalled remote processes behind (which would otherwise be a problem in distributed environments).
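The watchdog logic can be modelled as a small state machine driven once per control period; the names, return values and two-period structure below are assumptions made for illustration.

```java
// Hypothetical sketch of the watchdog protocol: one transition per
// control period, escalating from silence to a ping to an abort.
public class Watchdog {
    enum Action { NONE, PING, ABORT }

    boolean seenMessage = false;  // any message from the central scheduler?
    boolean pinged = false;       // did we already probe with a ping?

    void onMessage() { seenMessage = true; }

    // called at the end of every control period
    Action tick() {
        if (seenMessage) {            // connection alive: reset state
            seenMessage = false;
            pinged = false;
            return Action.NONE;
        }
        if (!pinged) {                // first silent period: probe
            pinged = true;
            return Action.PING;
        }
        return Action.ABORT;          // ping unanswered: assume broken
    }

    public static void main(String[] args) {
        Watchdog w = new Watchdog();
        w.onMessage();
        System.out.println(w.tick()); // traffic seen this period
        System.out.println(w.tick()); // one silent period: send ping
        System.out.println(w.tick()); // still silent: abort processes
    }
}
```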