

Chapter 1. Introduction


This chapter gives an overview of the LSF system architecture and the load sharing services provided by the LSF API, introducing their components. It also demonstrates how to write, compile, and link a simple load sharing application using LSF.

LSF Product Suite and Architecture

The Load Sharing Facility (LSF) is a layer of software services on top of UNIX and Windows NT operating systems. LSF creates a single system image on a network of heterogeneous computers such that the whole network of computing resources can be utilized effectively and managed easily. Throughout this guide, LSF refers to the LSF suite of products, which contains the following components:

LSF Base

LSF Batch

LSF JobScheduler

LSF MultiCluster

LSF consists of a number of servers running as root on each participating host in an LSF cluster and a comprehensive set of utilities built on top of the LSF Application Programming Interface (API). The LSF API consists of two libraries:

LSLIB, the Load Sharing Library, which provides access to the LSF Base services

LSBLIB, the LSF Batch Library, which provides access to the LSF Batch services

LSF Base

Figure 1 shows the components of the LSF Base and their relationship.

Figure 1. LSF Base Architecture


The LSF Base consists of two servers, the Load Information Manager (LIM) and the Remote Execution Server (RES), and the Load Sharing Library (LSLIB). The LSF Base provides the basic load sharing services across a heterogeneous network of computers.

An LSF server host is a host that runs load-shared jobs. The LIM and RES run on every LSF server host. They interface directly with the underlying operating systems and provide users with a uniform, host independent environment.

One of the LIMs acts as the master; it is chosen from among all the LIMs running in the cluster. If the master LIM becomes unavailable, a LIM on another host automatically takes over.

The LIM on each host monitors its host's load and reports load information to the master LIM. The master LIM collects information from all hosts and provides that information to the applications.

The RES on each server host accepts remote execution requests and provides fast, transparent and secure remote execution of tasks.

Application and LSF Base Interactions

Figure 2 shows the typical interactions between an LSF application and the LSF Base.

To obtain information about the LSF cluster, an application calls the information service functions in LSLIB, which then contact the LIM to get the information. If the requested information is available only from the master LIM, LSLIB automatically sends the request to the master host.

To run a task remotely, or to perform a file operation remotely, an application calls the remote execution or remote file operation service functions in LSLIB, which then contact the RES to obtain the services.

The LIMs on individual machines communicate periodically to update the information they provide to the applications.

Figure 2. LIM, RES, LSLIB, and Applications


LSF Batch System

LSF Batch is a layered distributed load sharing batch system built on top of the LSF Base. The services provided by LSF Batch are extensions to the LSF Base services. Application programmers can access the batch services through the LSF Batch Library, LSBLIB. The structure of the LSF Batch system is shown in Figure 3.

Figure 3. Structure of LSF Batch System


LSF Batch accepts user jobs and holds them in queues until suitable hosts are available. LSF Batch runs user jobs on LSF Batch server hosts, those hosts that a site deems suitable for running batch jobs.

LSF Batch services are provided by one mbatchd (master batch daemon) running in each LSF cluster, and one sbatchd (slave batch daemon) running on each batch server host.

LSF Batch operation relies on the services provided by the LSF Base. LSF Batch contacts the master LIM to get load and resource information about every batch server host. The operation of LSF Batch is shown in Figure 4.

Figure 4. The Operation of LSF Batch System


mbatchd always runs on the host where the master LIM runs. The sbatchd on the master host automatically starts the mbatchd. If the master LIM moves to a different host, the current mbatchd automatically resigns and a new mbatchd is automatically started on the new master host.

User jobs are held in batch queues by mbatchd, which checks the load information on all candidate hosts periodically. When a host with the necessary resources becomes available, mbatchd sends a job to the sbatchd on that host for execution. When more than one host is available, the best host is chosen.

Once a job is sent to an sbatchd, that sbatchd controls the execution of the job and reports job status to mbatchd.

The log files store important system and job information so that a newly started mbatchd can restore the status of the previous mbatchd easily. The log files also provide historic information about jobs, queues, hosts, and LSF Batch servers.

LSF JobScheduler System

LSF JobScheduler shares the same architecture and job processing mechanism as LSF Batch. In addition to the services provided by LSF Batch, LSF JobScheduler also provides calendar and event processing services. Both LSF Batch and LSF JobScheduler provide their API to applications via LSBLIB.

Note
In the remainder of this guide, all descriptions of LSF Batch also apply to LSF JobScheduler unless explicitly stated otherwise.

LSF API Services

LSF services are natural extensions to operating system services. LSF services glue heterogeneous operating systems into a single, integrated computing system.

LSF APIs provide easy access to the services of LSF servers. The API routines hide the details of interactions between the application and LSF servers in a way that is platform independent.

LSF APIs have been used to build numerous load sharing applications and utilities. Some examples of applications built on top of the LSF APIs are lsmake, lstcsh, lsrun, LSF Batch user interface, and xlsmon.

LSF Base API Services

The LSF Base API (LSLIB) allows application programmers to get services provided by LIM and RES. The services include:

Configuration Information Service

This set of function calls provides information about the LSF cluster configuration, such as the hosts belonging to the cluster, the total amount of installed resources on each host (for example, number of CPUs, amount of physical memory, and swap space), special resources associated with individual hosts, and the types and models of individual hosts.

Such information is static and is collected by the LIMs on individual hosts. By calling these routines, an application gets a global view of the distributed system. This information can be used for various purposes. For example, the LSF command lshosts displays such information on the screen. LSF Batch also uses such information to determine how many CPUs are on each host.

Flexible options are available for an application to select the information that is of interest to it.
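
The short sketch below (not taken from this guide) illustrates one possible use of this service: it calls ls_gethostinfo() to list each host together with a few of its static attributes. The structure field names used here (hostName, hostType, hostModel, maxCpus) are assumed to match the hostInfo definition in <lsf/lsf.h>.

#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsf.h>

int main(void)
{
        struct hostInfo *hosts;
        int numhosts = 0;
        int i;

        /* NULL resource requirement and host list: get all hosts */
        hosts = ls_gethostinfo(NULL, &numhosts, NULL, 0, 0);
        if (hosts == NULL) {
                ls_perror("ls_gethostinfo");
                exit(-1);
        }

        for (i = 0; i < numhosts; i++)
                printf("%s: type %s, model %s, %d CPU(s)\n",
                       hosts[i].hostName, hosts[i].hostType,
                       hosts[i].hostModel, hosts[i].maxCpus);
        exit(0);
}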

Dynamic Load Information Service

This set of function calls provides comprehensive dynamic load information collected from individual hosts periodically. The load information is provided in the form of load indices detailing the load on various resources of each host, such as CPU, memory, I/O, disk space, and interactive activity. Since a site-installed External LIM (ELIM) can optionally be plugged into the LIM to collect additional information that is not already collected by the LIM, this set of services can be used to collect virtually any type of dynamic information about individual hosts.

Example applications that use such information include lsload, lsmon, and xlsmon. This information is also valuable to an application in making intelligent job scheduling decisions. For example, LSF Batch uses such information to decide whether or not a job should be sent to a host for execution.

These service routines provide a powerful mechanism for selecting the information that is of interest to the application.
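
As a hedged illustration, the following sketch prints the 1-minute CPU run queue length and available memory for every host using ls_load(). The load index array li[] and the index macros R1M and MEM are assumed to be as defined in <lsf/lsf.h>.

#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsf.h>

int main(void)
{
        struct hostLoad *loads;
        int numhosts = 0;
        int i;

        /* NULL resource requirement: load information for all hosts */
        loads = ls_load(NULL, &numhosts, 0, NULL);
        if (loads == NULL) {
                ls_perror("ls_load");
                exit(-1);
        }

        for (i = 0; i < numhosts; i++)
                printf("%-15s r1m=%5.1f mem=%6.0f MB\n",
                       loads[i].hostName, loads[i].li[R1M], loads[i].li[MEM]);
        exit(0);
}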

Placement Advice Service

The LSF Base API provides functions to select the best host from among all the hosts. The selected host can then be used to run a job or to log in to. LSF provides a flexible syntax for an application to specify the resource requirements or criteria for host selection and sorting.

Many LSF utilities use these functions for placement decisions, such as lsrun, lsmake, and lslogin. It is also possible for an application to get the detailed load information about the candidate hosts together with a preference order of the hosts.

A parallel application can ask for multiple hosts in one LSLIB call for the placement of a multi-component job.

The performance differences between different machine models, as well as the number of CPUs on each host, are taken into consideration when placement advice is given, with the goal of selecting the qualified host(s) that will provide the best performance.
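
The following minimal sketch asks the LIM, via ls_placereq(), for the single best host satisfying an example resource requirement and prints its name. The resource requirement string used here is purely illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsf.h>

int main(void)
{
        char **best;
        int num = 1;    /* ask for one host */

        /* example resource requirement: enough swap and memory */
        best = ls_placereq("swp > 20 && mem > 10", &num, 0, NULL);
        if (best == NULL) {
                ls_perror("ls_placereq");
                exit(-1);
        }
        printf("Best host: %s\n", best[0]);
        exit(0);
}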

Task List Manipulation Service

Task lists are used to store default resource requirements for users. LSF provides functions to manipulate the task lists and retrieve the resource requirements for a task. This is important for applications that need to automatically pick up the resource requirements from the user's task list. The LSF command lsrtasks uses these functions to manipulate the user's task list. LSF utilities such as lstcsh, lsrun, and bsub automatically pick up the resource requirements of the submitted command line by calling these LSLIB functions.

Master Election Service

If your application needs some kind of fault tolerance, you can make use of the master election service provided by the LIM. For example, you can run one copy of your application on every host and only allow the copy on the master host to be the primary copy and others to be backup copies. LSLIB provides a function that tells you the name of the current master host.

LSF Batch uses this service to achieve improved availability. As long as one host in the LSF cluster is up, LSF Batch service will continue.
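
A minimal sketch of this idea is shown below: each copy of the application compares its own host name with the master host name reported by the LIM, and only the copy on the master host acts as the primary. It assumes the ls_getmastername() and ls_getmyhostname() calls as declared in <lsf/lsf.h>.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lsf/lsf.h>

int main(void)
{
        char *master, *myhost;

        master = ls_getmastername();
        myhost = ls_getmyhostname();
        if (master == NULL || myhost == NULL) {
                ls_perror("master election");
                exit(-1);
        }

        if (strcmp(master, myhost) == 0)
                printf("Running on the master host: acting as primary copy\n");
        else
                printf("Master is <%s>: acting as backup copy\n", master);
        exit(0);
}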

Remote Execution Service

The remote execution service provides a transparent and efficient mechanism for running sequential as well as parallel jobs on remote hosts. The services are provided by the RES on the remote host in cooperation with the Network I/O Server (NIOS) on the local host. The NIOS is a per-application stub process that handles the details of terminal I/O and signals on the local side. The NIOS is always started automatically by LSLIB as needed.

RES runs as root and runs tasks on behalf of all users in the LSF cluster. Proper authentication is handled by RES before running a user task.

LSF utilities such as lsrun, lsgrun, ch, lsmake, and lstcsh use the remote execution service.
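
As a hedged sketch of how these services fit together, the program below asks the LIM for a host and then runs the command hostname there. It assumes ls_initrex(), which initializes the remote execution machinery (and, with the privileged port protocol, requires the program to be setuid root), and ls_rexecv(), which replaces the current process image much like execv() and therefore returns only on failure.

#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsf.h>

int main(void)
{
        char **best;
        int num = 1;
        static char *cmd[] = { "hostname", NULL };

        if (ls_initrex(1, 0) < 0) {
                ls_perror("ls_initrex");
                exit(-1);
        }

        best = ls_placereq(NULL, &num, 0, NULL);
        if (best == NULL) {
                ls_perror("ls_placereq");
                exit(-1);
        }

        ls_rexecv(best[0], cmd, 0);
        ls_perror("ls_rexecv");   /* reached only if the remote execution failed */
        exit(-1);
}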

Remote File Operation Service

The remote file operation service allows load sharing applications to operate on files stored on remote machines. Such services extend the UNIX file operation services so that files that are not shared among hosts can also be accessed by distributed applications transparently.

LSLIB provides routines that are extensions to the UNIX file operations such as open(2), close(2), read(2), write(2), fseek(3), and stat(2).

The LSF utility lsrcp is implemented with the remote file operation service functions.
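
The sketch below (with hostA and /etc/motd as placeholder host and file names) reads the beginning of a remote file through the RES using ls_ropen(), ls_rread(), and ls_rclose(). It assumes that, like remote execution, remote file operations require ls_initrex() to have been called first.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <lsf/lsf.h>

int main(void)
{
        char buf[128];
        int rfd, n;

        if (ls_initrex(1, 0) < 0) {
                ls_perror("ls_initrex");
                exit(-1);
        }

        rfd = ls_ropen("hostA", "/etc/motd", O_RDONLY, 0);   /* placeholder names */
        if (rfd < 0) {
                ls_perror("ls_ropen");
                exit(-1);
        }

        n = ls_rread(rfd, buf, sizeof(buf) - 1);
        if (n < 0) {
                ls_perror("ls_rread");
                exit(-1);
        }
        buf[n] = '\0';
        printf("%s", buf);

        ls_rclose(rfd);
        exit(0);
}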

Administration Service

This set of function calls allows application programmers to write tools for administering the LSF servers. The operations include reconfiguring the LSF clusters, shutting down a particular LSF server on some host, restarting an LSF server on some host, turning logging on or off, locking/unlocking a LIM on a host, etc.

The lsadmin and xlsadmin utilities use the administration services.

LSF Batch API Services

The LSF Batch API, LSBLIB, gives application programmers access to the job queuing and processing services provided by the LSF Batch servers. All LSF Batch user interface utilities are built on top of LSBLIB. The services that are available through LSBLIB include:

LSF Batch System Information Service

This set of function calls allows applications to get information about LSF Batch system configuration and status. This includes host, queue, and user configurations and status.

The batch configuration information determines the resource sharing policies that dictate the behavior of the LSF Batch scheduling.

The system status information reflects the current status of hosts, queues, and users of the LSF Batch system.

Example utilities that use the LSF Batch configuration information services are bhosts, bqueues, busers, bparams, and xlsbatch.
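
A minimal sketch of this service is shown below: it lists every batch queue with its priority and current job count using lsb_queueinfo(). The field names queue, priority, and numJobs are assumed to match the queueInfoEnt structure in <lsf/lsbatch.h>.

#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsbatch.h>

int main(void)
{
        struct queueInfoEnt *queues;
        int numQueues = 0;
        int i;

        if (lsb_init(NULL) < 0) {
                lsb_perror("lsb_init");
                exit(-1);
        }

        /* NULL queue list: information about all queues */
        queues = lsb_queueinfo(NULL, &numQueues, NULL, NULL, 0);
        if (queues == NULL) {
                lsb_perror("lsb_queueinfo");
                exit(-1);
        }

        for (i = 0; i < numQueues; i++)
                printf("%-15s priority %3d, %d job(s)\n",
                       queues[i].queue, queues[i].priority, queues[i].numJobs);
        exit(0);
}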

Job Manipulation Service

The job manipulation service allows LSF Batch application programmers to write utilities that operate on user jobs. The operations include job submission, signaling, status checking, checkpointing, migration, queue switching, and parameter modification.
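
As a hedged sketch of job submission, the program below submits the command sleep 60 to the default queue with lsb_submit(). The submit structure has many fields; here all fields are zeroed, the resource limits are set to DEFAULT_RLIMIT, and only the command and processor counts are filled in. The exact field names and constants are assumptions to be checked against <lsf/lsbatch.h>, and the program must satisfy the authentication requirements described under 'Authentication' below.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lsf/lsbatch.h>

int main(void)
{
        struct submit req;
        struct submitReply reply;
        int jobId, i;

        if (lsb_init(NULL) < 0) {
                lsb_perror("lsb_init");
                exit(-1);
        }

        memset(&req, 0, sizeof(req));
        for (i = 0; i < LSF_RLIM_NLIMITS; i++)
                req.rLimits[i] = DEFAULT_RLIMIT;
        req.numProcessors = 1;
        req.maxNumProcessors = 1;
        req.command = "sleep 60";

        jobId = lsb_submit(&req, &reply);
        if (jobId < 0) {
                lsb_perror("lsb_submit");
                exit(-1);
        }
        printf("Job <%d> is submitted to the default queue\n", jobId);
        exit(0);
}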

Log File Processing Service

LSBLIB provides convenient routines for handling log files used by LSF Batch. These routines return the records logged in the lsb.events and lsb.acct files. The records are stored in well-defined data structures.

The LSF Batch commands bhist and bacct are implemented with these routines.
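
The sketch below scans a copy of the lsb.events file and prints the type and time of each record returned by lsb_geteventrec(). The file path is only a placeholder (the real file lives in the LSF Batch working directory), and the eventRec fields used here (type, eventTime) are assumed from <lsf/lsbatch.h>.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <lsf/lsbatch.h>

int main(void)
{
        FILE *fp;
        struct eventRec *rec;
        int lineNum = 0;

        if (lsb_init(NULL) < 0) {
                lsb_perror("lsb_init");
                exit(-1);
        }

        fp = fopen("lsb.events", "r");   /* placeholder path */
        if (fp == NULL) {
                perror("fopen");
                exit(-1);
        }

        while ((rec = lsb_geteventrec(fp, &lineNum)) != NULL)
                printf("line %d: event type %d at %s",
                       lineNum, rec->type, ctime(&rec->eventTime));

        fclose(fp);
        exit(0);
}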

LSF Batch Administration Service

This set of function calls is useful for writing LSF Batch administration tools. The LSF Batch command badmin is implemented with these library calls.

Calendar Manipulation Service

These library calls are used only if you are using the LSF JobScheduler. These function calls allow programmers to write utilities that create, check, or change LSF Batch calendars. All the calendar-related user interface commands of the LSF JobScheduler make use of the calendar manipulation functions of the LSF Batch API.

Getting Started with LSF Programming

LSF programming is like any other system programming. You are assumed to have knowledge of the UNIX operating system and C programming to understand the concepts involved.

lsf.conf File

This guide frequently refers to the file lsf.conf for definitions of some parameters. lsf.conf is a generic reference file containing definitions of directories and parameters. lsf.conf is by default installed in /etc. If it is not installed in /etc, it is expected that all users of LSF set the environment variable LSF_ENVDIR to point to the directory in which lsf.conf is installed. Refer to 'LSF Base Configuration Reference' in the LSF Administrator's Guide for more details about the lsf.conf file.

LSF Header Files

All LSF header files are installed in the directory LSF_INCLUDEDIR/lsf, where LSF_INCLUDEDIR is defined in the file lsf.conf. The programmer should include LSF_INCLUDEDIR in the include file search path, such as that specified by the '-Idir' option of some compilers or pre-processors.

There is one header file for LSLIB, the LSF Base API, and one header file for LSBLIB, the LSF Batch API.

lsf.h

An LSF application must include <lsf/lsf.h> before any of the LSF Base API services are called. lsf.h contains definitions of constants, data structures, error codes, LSLIB function prototypes, macros, etc., that are used by all LSF applications.

lsbatch.h

An LSF Batch application must include <lsf/lsbatch.h> before any of the LSF Batch API services are called. lsbatch.h contains definitions of constants, data structures, error codes, LSBLIB function prototypes, macros, etc., that are used by all LSF Batch applications.

There is no need to explicitly include <lsf/lsf.h> in an LSF Batch application because lsbatch.h already includes <lsf/lsf.h>.

Linking Applications with LSF APIs

LSF API functions are contained in two libraries, liblsf.a (LSLIB) and libbat.a (LSBLIB) for all UNIX platforms. For Windows NT, the file names of these libraries are liblsf.lib (LSLIB) and libbat.lib (LSBLIB), respectively. These files are installed in LSF_LIBDIR, where LSF_LIBDIR is defined in the file lsf.conf.

Note
LSBLIB is not independent by itself. It must always be linked together with LSLIB. This is because LSBLIB services are built on top of LSLIB services.

LSF uses BSD sockets for communication across the network. On systems that have both System V and BSD programming interfaces, LSLIB and LSBLIB typically use the BSD programming interface. On System V-based versions of UNIX, for example Solaris, it is normally necessary to link applications using LSLIB or LSBLIB with the BSD compatibility library. On Windows NT, a number of libraries need to be linked together with the LSF API. Details of these additional linkage specifications are shown in Table 1.

Table 1. Additional Linkage Specifications by Platform

Platform Additional Linkage Specifications
ULTRIX 4 (none)
Digital UNIX -lmach -lmld
HP-UX -lBSD
AIX -lbsd
IRIX 5 -lsun -lc_s
IRIX 6 (none)
SunOS 4 (none)
Solaris 2 -lnsl -lelf -lsocket -lrpcsvc -lgen
NEC -lnsl -lelf -lsocket -lrpcsvc -lgen
Sony NEWS -lc -lnsl -lelf -lsocket -lrpcsvc -lgen -lucb
ConvexOS (none)
Cray Unicos (none)
Linux (none)
Windows NT -MT -DWIN32 libcmt.lib oldnames.lib kernel32.lib advapi32.lib user32.lib wsock32.lib mpr.lib netapi32.lib

Note
On Windows NT, you need to add paths specified by LSF_LIBDIR and LSF_INCLUDEDIR in lsf.conf to the environment variables LIB and INCLUDE, respectively.

The LSF_MISC/examples directory contains a makefile for making all the example programs in that directory. You can modify this file for use with your own programs.

All LSLIB function call names start with "ls_", whereas all LSBLIB function call names start with "lsb_".

Error Handling

The LSF API uses error numbers to indicate errors. There are two global variables that are accessible from the application. These variables are used in exactly the same way as the UNIX system call error number variable errno. The error number should be tested only when an LSLIB or LSBLIB call fails.

lserrno

An LSF program should test whether an LSLIB call is successful or not by checking the return value of the call instead of lserrno.

When any LSLIB function call fails, it sets the global variable lserrno to indicate the cause of the error. The programmer can either call ls_perror() to print the error message explicitly to stderr, or call ls_sysmsg() to get the error message string corresponding to the current value of lserrno.

Possible values of lserrno are defined in lsf.h.
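
A small sketch of this convention, using only the ls_getmastername() and ls_sysmsg() calls described elsewhere in this chapter:

#include <stdio.h>
#include <lsf/lsf.h>

void report_master(void)
{
        char *master;

        master = ls_getmastername();
        if (master == NULL) {
                /* the call failed, so lserrno is now meaningful */
                fprintf(stderr, "ls_getmastername: %s\n", ls_sysmsg());
                return;
        }
        printf("Master host: %s\n", master);
}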

lsberrno

This variable is very similar to lserrno except that it is set by LSBLIB whenever an LSBLIB call fails. Programmers can either call lsb_perror() to find out why an LSBLIB call failed or use lsb_sysmsg() to get the error message corresponding to the current value of lsberrno.

Possible values of lsberrno are defined in lsbatch.h.

Note
lserrno and lsberrno should be checked only if an LSLIB or LSBLIB call fails respectively.

Example Applications

Example Application using LSLIB

#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsf.h>

int main(void)
{
        char *clustername;

        /* ask LIM for the name of the local cluster */
        clustername = ls_getclustername();
        if (clustername == NULL) {
                ls_perror("ls_getclustername");
                exit(-1);
        }

        printf("My cluster name is: <%s>\n", clustername);
        exit(0);
}

This simple example gets the name of the LSF cluster and prints it on the screen. The LSLIB function call ls_getclustername() returns the name of the local cluster. If the call fails, it returns a NULL pointer. ls_perror() prints the error message corresponding to the most recently failed LSLIB function call.

The above program would produce output similar to the following:

% a.out
My cluster name is: <fruit>

Example Application using LSBLIB

#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsbatch.h>

int main(void)
{
        struct parameterInfo *parameters;

        /* lsb_init() must be called before any other LSBLIB function */
        if (lsb_init(NULL) < 0) {
                lsb_perror("lsb_init");
                exit(-1);
        }

        parameters = lsb_parameterinfo(NULL, NULL, 0);
        if (parameters == NULL) {
                lsb_perror("lsb_parameterinfo");
                exit(-1);
        }

        /* Got parameters from mbatchd successfully. Now print out the fields */
        printf("Job acceptance interval: every %d dispatch turns\n",
               parameters->jobAcceptInterval);
        /* Code that prints other parameters goes here */
        /* ... */
        exit(0);
}

This example gets LSF Batch parameters and prints them on the screen. The function lsb_init() must be called before any other LSBLIB function is called.

The data structure parameterInfo is defined in lsbatch.h.

Authentication

LSF programming is distributed programming. Since LSF services are provided network-wide, it is important for LSF to deliver these services without compromising system security.

LSF supports several user authentication protocols. Support for these protocols is described in the section 'Remote Execution Control' of the LSF Administrator's Guide. Your LSF administrator can configure the LSF cluster to use any of the supported protocols.

Note that only those LSF API function calls that operate on user jobs, user data, or LSF servers require authentication. Function calls that return information about the system do not need to be authenticated. LSF API calls that must be authenticated are identified in 'List of LSF API Functions'.

The most commonly used authentication protocol, the privileged port protocol, requires that load sharing applications be installed as setuid programs. This means that your application has to be owned by root with the suid bit set.

If you need to change and relink your applications with the LSF API frequently, consider using the ident protocol, which does not require applications to be setuid programs.



Copyright © 1994-1997 Platform Computing Corporation.
All rights reserved.