This section describes how LSF can be configured to support specific systems: IRIX 6 processor sets, the IBM SP-2, the HP Exemplar Technical Server, Cray Research NQS, and Atria ClearCase.
IRIX 6 allows the processors in a multiprocessor system to be divided into groups called processor sets. IRIX 6 provides facilities that allow a user to define processor sets and to run processes on specific processor sets.
The pset(1M) command allows administrators to set up and manipulate processor sets and associate processes with sets. Once a process is associated with a processor set, the process and all its children will be scheduled only on the processors in that set. The definition of the processor set can be dynamically changed to increase or reduce the number of processors a process can be scheduled on.
LSF can interface with processor sets by using pset(1M) in the queue-level pre- and post-execution commands (see 'Queue-Level Pre-/Post-Execution Commands'). This allows batch jobs to be assigned to specific processors.
Note
Since the pset command must be run as root, but queue-level pre- and post-execution commands are by default run as the user who submitted the job, you must define LSB_PRE_POST_EXEC_USER=root in /etc/lsf.sudoers. See 'The lsf.sudoers File' for details.
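For example, a minimal /etc/lsf.sudoers entry would contain the following line (your file may contain other settings as well):

LSB_PRE_POST_EXEC_USER=root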
The following examples show how to handle different processor allocation situations on an 8-processor machine.
During the day, you want to ensure that batch jobs use only four processors, with the remaining four dedicated to interactive users. During the night, batch jobs may use all eight processors.
Define two processor sets, batch and interactive, in the processor set name file:

# <symbolic name>     <pset id>     <processor list>
#
batch                 100           0,1,2,3
interactive           101           4,5,6,7
Then create the processor sets from this file:

% pset -i -v
In the batch queue, associate each job with the batch processor set using a queue-level pre-execution command:

PRE_EXEC = pset -p $LS_JOBPID batch
To switch between the day and night configurations, run the following commands at the start of the day and the start of the night, respectively:

# Move processors 4,5,6,7 out of the batch processor set
pset -s batch \!4,5,6,7

# Move processors 4,5,6,7 into the batch processor set
pset -s batch +4,5,6,7
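A minimal crontab sketch for automating the switch is shown below; the times are illustrative, and you may need to give the full path to pset(1M) as installed on your system:

# Illustrative root crontab entries: day configuration at 8:00, night at 18:00
0 8  * * * pset -s batch \!4,5,6,7
0 18 * * * pset -s batch +4,5,6,7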
You want to give a particular user (Jane) exclusive access to one processor if she has jobs to run. Otherwise, users should be able to use all eight processors for batch jobs.
Begin Queue
QUEUE_NAME  = exclusive
PRIORITY    = 43
USERS       = jane
# Move processor 0 from batch processor set to excl processor set
# Associate the job with the excl set.
PRE_EXEC    = pset -s excl +0; pset -s batch -0; pset -p $LS_JOBPID excl
# Move processor 0 back to the batch processor set
POST_EXEC   = pset -s excl -0; pset -s batch +0
DESCRIPTION = Provides exclusive access to a processor for Jane's jobs
End Queue
Other queues can then associate their jobs with the batch processor set:

PRE_EXEC = pset -p $LS_JOBPID batch
More complicated situations can be handled by using scripts in the pre- and post-execution commands that check for other conditions. For example, in the above 'User-Based Processor Allocation' case, if you wanted to give Jane up to four processors (but no more), the pre-execution script could determine how many processors are already in the 'excl' set and move an additional processor from the 'batch' set into the 'excl' set until the 'excl' set contains four processors, as in the sketch below.
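The following is a minimal sketch of such a pre-execution script. It tracks the size of the 'excl' set in a counter file rather than parsing pset output; the counter file path and the choice of processor numbers are illustrative assumptions, and the queue's post-execution command would need to reverse these steps.

#!/bin/sh
# Pre-execution sketch for Jane's 'exclusive' queue: grow the 'excl'
# processor set one CPU at a time, up to four CPUs, then associate the
# job with it.
MAXCPUS=4
STATE=/usr/local/lsf/misc/excl.count    # illustrative counter file

if [ -f $STATE ]; then
    ncpus=`cat $STATE`
else
    ncpus=0
fi

if [ $ncpus -lt $MAXCPUS ]; then
    # Borrow one more processor from the 'batch' set. Using processor
    # number $ncpus assumes processors 0-3 are the ones to be lent out.
    pset -s excl +$ncpus
    pset -s batch -$ncpus
    expr $ncpus + 1 > $STATE
fi

# Associate this job with the 'excl' processor set.
exec pset -p $LS_JOBPID excl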
An IBM SP-2 system consists of multiple nodes running AIX. The system can be configured with a high-performance switch to allow high bandwidth, low latency communication between the nodes. The allocation of the switch to jobs as well as the division of nodes into pools is controlled by the SP-2 Resource Manager.
IBM's Parallel Operating Environment (POE) interfaces with the Resource Manager to allow users to run parallel jobs requiring dedicated access to the high performance switch.
The following are provided in LSF to support POE jobs: the 'poejob' script, which is installed as part of the standard installation procedure, and an SP-2-specific ELIM, which can be found in the examples directory of the distribution.
The following steps should be followed to allow POE jobs to run under LSF:
Add the lock and pool external load indices to the LSF_CONFDIR/lsf.shared file:

Begin NewIndex
NAME   INTERVAL   INCREASING   DESCRIPTION
lock   60         Y            (IBM SP2 Node lock status)
pool   60         N            (IBM SP2 POWERparallel system pool)
End NewIndex
For all queues that accept POE jobs, define a requeue exit value and a scheduling threshold for the lock index in the lsb.queues file. For example:
Begin Queue
QUEUE_NAME          = poejobs
lock                = 0
REQUEUE_EXIT_VALUES = 133
.
.
End Queue
The poejob script exits with 133 if it is necessary to requeue the job. Note that other types of jobs should not be submitted to the same queue; otherwise, they will be requeued if they happen to exit with 133. The scheduling threshold on the lock index prevents jobs from being dispatched to nodes that are being used in exclusive mode by other jobs.
Note that it is only necessary to enable requeuing of POE jobs if some users submit jobs requiring exclusive access to the node.
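For example, a user might submit a POE program through the poejob script as follows; the exact arguments accepted by poejob and the POE options shown (such as -procs) depend on your installation, so treat this as a sketch:

% bsub -q poejobs poejob myprog -procs 4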
The HP Exemplar Technical Server is a high-performance cache-coherent Non-Uniform Memory Access (ccNUMA) computer system. The Exemplar system supports partitioning of the computing resources into subcomplexes, which are collections of processors and memory from one or more hypernodes in the system. The Subcomplex Manager (SCM) enables administrators to configure processor and memory resources into subcomplexes.
The following are provided in LSF to support the Exemplar:
LSF does not currently support dynamic load balancing between subcomplexes.
The following steps should be taken to set up an Exemplar system to run LSF.
Edit the LSF_CONFDIR/lsf.shared configuration file and add definitions for the load indices for each subcomplex. For example, if you have two subcomplexes, you need to configure 12 indices as follows:
Begin NewIndex
NAME     INTERVAL   INCREASING   DESCRIPTION
sc1Pme   60         N            (Subcomplex one private memory)
sc1Gme   60         N            (Subcomplex one global memory)
sc1cpu   60         N            (Subcomplex one number cpu)
sc1r5s   60         Y            (Subcomplex one five sec runQ)
sc1r30   60         Y            (Subcomplex one thirty sec runQ)
sc1r1m   60         Y            (Subcomplex one one minute runQ)
sc2Pme   60         N            (Subcomplex two private memory)
sc2Gme   60         N            (Subcomplex two global memory)
sc2cpu   60         N            (Subcomplex two number cpu)
sc2r5s   60         Y            (Subcomplex two five sec runQ)
sc2r30   60         Y            (Subcomplex two thirty sec runQ)
sc2r1m   60         Y            (Subcomplex two one minute runQ)
End NewIndex
The index names should be of the form scNxxx, where N is the subcomplex number. The names of the subcomplexes defined on the system can be obtained by running the following command:
% sysinfo -ls
System load average:        4.30 4.28 4.09
largeGlbMem load average:   3.20 2.18 2.07
The subcomplex number corresponds to its position in the list; in the example above, System is subcomplex 1 and largeGlbMem is subcomplex 2. The index names can be changed, for example to include the subcomplex name instead of a number, if corresponding changes are made to the supplied Exemplar ELIM.
The built-in load indices reported by the LIM on the Exemplar are implemented as follows:
Edit the queue definitions in LSB_CONFDIR/cluster/configdir/lsb.queues to add queue definitions for each subcomplex. A Job Starter should be specified for each queue to control which subcomplex jobs from the queue will run on. For example:
Begin Queue
QUEUE_NAME  = long
JOB_STARTER = mpa -sc largeGlbMem
.
.
DESCRIPTION = Long jobs on the largeGlbMem subcomplex
End Queue

Begin Queue
QUEUE_NAME  = short
JOB_STARTER = mpa -sc System
.
.
DESCRIPTION = Short jobs on the System subcomplex
End Queue
The JOB_STARTER parameter uses the mpa(1) command to start the job script file on the subcomplex specified with the -sc option. LSF sets the LSB_JOBFILENAME environment variable, which specifies a shell script containing the user's commands.
You can use the load indices for each subcomplex to control the scheduling or suspension of jobs on that subcomplex. For example:
Begin Queue
QUEUE_NAME  = idle
JOB_STARTER = mpa -sc System
sc1r1m      = 2.0/6.0
.
.
End Queue
This queue starts jobs on the System subcomplex only if its 1-minute run queue length is below 2.0, and suspends them if it rises above 6.0. Note that the load index specified in the scheduling constraints should correspond to the subcomplex specified in the JOB_STARTER parameter.
You can use the queue-level pre-/post-execution commands to move CPUs between subcomplexes on a per-job basis. For example, if an exclusive subcomplex has been set up, CPUs can be moved out of the System subcomplex by the pre-execution command before the job runs and moved back by the post-execution command after the job completes:
Begin Queue
QUEUE_NAME  = exclusive
JOB_STARTER = mpa -sc Exclusive
.
.
PRE_EXEC    = /usr/spp/moveCpuToEx
POST_EXEC   = /usr/spp/moveCpuToSys
End Queue
Users are not required to take any special action to submit jobs on an Exemplar system. If an Exemplar system is integrated into a larger cluster of machines, it is possible to set up queues that can dispatch to all machines. You need to specify a Job Starter script that runs the job file through mpa(1) on the Exemplar and simply executes the job file on non-Exemplar hosts; a sketch of such a script follows the example below. Scheduling constraints should also be specified using the queue-level RES_REQ parameter to distinguish between Exemplar and non-Exemplar hosts. For example:
RES_REQ= (type==Exemplar && sc1r1m < 2.0) || (type != Exemplar && r1m < 2.0)
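A minimal sketch of such a Job Starter script is shown below. The test used to detect an Exemplar host (the presence of the mpa binary under /usr/spp) and the choice of the System subcomplex are assumptions; adapt them to your cluster.

#!/bin/sh
# Job Starter for a queue that spans Exemplar and non-Exemplar hosts.
# On an Exemplar host, run the job through mpa(1) on the System subcomplex;
# on any other host, just execute the job directly.
if [ -x /usr/spp/mpa ]; then
    exec /usr/spp/mpa -sc System "$@"
else
    exec "$@"
fi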
NQS (Network Queuing System) is a UNIX batch queuing facility that allows users to queue batch jobs to individual UNIX hosts from remote systems. This section describes how to configure and use LSF to submit and control batch jobs in NQS queues.
If you are not going to configure LSF to interoperate with NQS, you can skip this section.
While it is desirable to run LSF on all hosts for transparent resource sharing, this is not always possible. Some of the computing resources may be under separate administrative control, or LSF may not currently be available for some of the hosts.
An example of this is sites that use Cray supercomputers. The supercomputer is often not under the control of the workstation system administrators. Users on the workstation cluster still want to run jobs on the Cray supercomputer. LSF allows users to submit and control jobs on the Cray system using the same interface as they use for jobs on the local cluster.
LSF queues can be configured to forward jobs to remote NQS queues. Users can submit jobs, send signals to jobs, check the status of jobs, and delete jobs that are forwarded to the remote NQS. Although running on an NQS server outside the LSF cluster, jobs are still managed by LSF Batch in almost the same way as jobs running inside the LSF cluster.
This section describes how to configure LSF and NQS so that jobs submitted to LSF can be run on NQS servers. You should already be familiar with the administration of the NQS system.
NQS uses a machine identification number (MID) to identify each NQS host in the network. The MID must be unique and must be the same in the NQS database of each host in the network. LSF uses the NQS protocol to talk with NQS daemons for routing, monitoring, signalling and deleting LSF Batch jobs that run on NQS hosts. Therefore, you must assign a MID to each of the LSF hosts that might become the master host.
To do this, perform the following steps:
NQS uses a mechanism similar to ruserok(3) to determine whether access is permitted. When a remote request from LSF is received, NQS looks in the /etc/hosts.equiv file. If the submitting host is found, requests are allowed as long as the user name is the same on both hosts. If the submitting host is not listed in the /etc/hosts.equiv file, NQS looks for a .rhosts file in the destination user's home directory. This file must contain the names of both the submitting host and the submitting user. Finally, if access still is not granted, NQS checks for a file called /etc/hosts.nqs. This file is similar to the .rhosts file, but it can provide mapping of remote usernames to local usernames. Cray NQS also looks for a .nqshosts file in the destination user's home directory. The .nqshosts file has the same format as the .rhosts file.
NQS treats the LSF cluster just as if it were a remote NQS server, except that jobs never flow to the LSF cluster from NQS hosts.
For LSF users to get permission to run jobs on NQS servers, you must make sure the above setup is done properly. Refer to your local NQS documentation for details on setting up the NQS side.
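For example, if hostA and hostB are the LSF hosts that can become the master host and user1 is the LSF login name (all names here are illustrative), the following lines in ~user1/.rhosts (and, for Cray NQS, ~user1/.nqshosts) on the NQS server grant the required access:

hostA user1
hostB user1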
The lsb.nqsmaps file in the LSB_CONFDIR/cluster/configdir directory is used to configure interoperation between LSF and NQS.
LSF must use the MIDs of NQS hosts when talking with NQS servers. The Hosts section of the LSB_CONFDIR/cluster/configdir/lsb.nqsmaps file contains the MIDs and operating system types of your NQS hosts.
Begin Hosts
HOST_NAME   MID   OS_TYPE
cray001     1     UNICOS    #NQS host, must specify OS_TYPE
sun0101     2     SOLARIS   #NQS host
sgi006      3     IRIX      #NQS host
hostA       4     -         #LSF host; OS_TYPE is ignored
hostD       5     -         #LSF host
hostB       6     -         #LSF host
End Hosts
Note that the OS_TYPE column is required for NQS hosts only. For hosts in the LSF cluster, OS_TYPE is ignored; the type is specified by the TYPE field in the lsf.cluster.cluster file. The '-' entry is a placeholder.
LSF assumes that users have the same account names and user IDs on all LSF hosts. If the user accounts on the NQS hosts are not the same as on the LSF hosts, the LSF administrator must specify the NQS user names that correspond to LSF users.
The Users section of the lsb.nqsmaps file contains entries for LSF users and the corresponding account names on NQS hosts. The following example shows two users who have different accounts on the NQS server hosts.
Begin Users
FROM_NAME   TO_NAME
user7       (user7l@cray001 luser7@sgi006)
user4       (suser4@cray001)
End Users
FROM_NAME is the user's login name in the LSF cluster, and TO_NAME is a list of the user's login names on the remote NQS hosts. If a user is not specified in the lsb.nqsmaps file, jobs are sent to the NQS hosts with the same user name.
You must configure one or more LSF Batch queues to forward jobs to remote NQS hosts. A forward queue is an LSF Batch queue with the parameter NQS_QUEUES defined. See 'Adding a Batch Queue' for how to add a queue to an LSF cluster. The following queue forwards jobs to the NQS queue named pipe on host cray001:
Begin Queue
QUEUE_NAME  = nqsUse
PRIORITY    = 30
NICE        = 15
QJOB_LIMIT  = 5
UJOB_LIMIT  = ()
CPULIMIT    = 15
NQS_QUEUES  = pipe@cray001
DESCRIPTION = Jobs submitted to this queue are forwarded to NQS_QUEUES
USERS       = all
End Queue
You can specify more than one NQS queue for the NQS_QUEUES parameter. LSF Batch tries to send the job to each queue in the order they are listed, until one of the queues accepts the job.
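For example (the second queue and host name here are illustrative):

NQS_QUEUES = pipe@cray001 standard@cray002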
Since many features of LSF are not supported by NQS, the following queue configuration parameters are ignored for NQS forward queues: PJOB_LIMIT, POLICIES, RUN_WINDOW, DISPATCH_WINDOW, RUNLIMIT, HOSTS, and MIG. In addition, scheduling load threshold parameters are ignored because NQS does not provide load information about hosts.
Cray NQS is incompatible with some of the public domain versions of NQS. Different versions of NQS on Cray may be incompatible with each other. If your NQS server host is a Cray, some additional steps may be needed in order for LSF to understand the NQS protocol correctly.
If the NQS version on a Cray is NQS 80.42 or NQS 71.3, then no extra setup is needed. For other versions of NQS on a Cray, you need to define NQS_REQUESTS_FLAGS and NQS_QUEUES_FLAGS in the lsb.params file.
If the version is NQS 1.1 on a Cray, the value of NQS_REQUESTS_FLAGS is 251918848.
For other versions of NQS on a Cray, do the following to get the value for NQS_REQUESTS_FLAGS. Run the NQS command:
% qstat -h CrayHost -a
on a workstation, where CrayHost is the host name of the Cray machine. Watch the messages logged by Cray NQS (you need access to the NQS log file on the Cray host):
03/02 12:31:59 I pre_server(): Packet type=<NPK_QSTAT(203)>.
03/02 12:31:59 I pre_server(): Packet contents are as follows:
03/02 12:31:59 I pre_server(): Npk_str[1] = <>.
03/02 12:31:59 I pre_server(): Npk_str[2] = <platform>.
03/02 12:31:59 I pre_server(): Npk_int[1] = <1392767360>.
03/02 12:31:59 I pre_server(): Npk_int[2] = <2147483647>.
03/02 12:31:59 I show_qstat_flags(): Flags=SHO_R_ALLUID SHO_R_SHORT SHO_RS_RUN
    SHO_RS_STAGE SHO_RS_QUEUED SHO_RS_WAIT SHO_RS_HOLD SHO_RS_ARRIVE SHO_Q_BATCH
    SHO_Q_PIPE SHO_R_FULL SHO_R_HDR
The value of Npk_int[1] in the above output is the value you need for the parameter NQS_REQUESTS_FLAGS.
To get the value for NQS_QUEUES_FLAGS, run the NQS command:
% qstat -h CrayHost -p -b -l
on a workstation, where CrayHost is the host name of the Cray machine. Watch the messages logged by Cray NQS (you need to have access to the Cray NQS log file):
03/02 12:32:57 I pre_server(): Packet type=<NPK_QSTAT(203)>.
03/02 12:32:57 I pre_server(): Packet contents are as follows:
03/02 12:32:57 I pre_server(): Npk_str[1] = <>.
03/02 12:32:57 I pre_server(): Npk_str[2] = <platform>.
03/02 12:32:57 I pre_server(): Npk_int[1] = <593494199>.
03/02 12:32:57 I pre_server(): Npk_int[2] = <2147483647>.
03/02 12:32:57 I show_qstat_flags(): Flags=SHO_H_ACCESS SHO_H_DEST SHO_H_LIM
    SHO_H_RUNL SHO_H_SERV SHO_R_ALLUID SHO_Q_HDR SHO_Q_LIMITS SHO_Q_BATCH
    SHO_Q_PIPE SHO_Q_FULL
The value of Npk_int[1] in the above output is the value you need for the parameter NQS_QUEUES_FLAGS.
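Both flags are then defined in the Parameters section of the lsb.params file. For example, using the Npk_int[1] values from the two log excerpts above:

Begin Parameters
# other site-specific parameters omitted
NQS_REQUESTS_FLAGS = 1392767360
NQS_QUEUES_FLAGS   = 593494199
End Parameters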
If you are unable to get the required information after running the above NQS commands, make sure that your Cray NQS is configured properly to log these parameters. To do this, run:
% qmgr
and enter 'show all' to display the current configuration. The parameters related to logging the information you need are:
Debug level = 3
MESSAGE_Header = Short
MESSAGE_Types:
    Accounting         OFF
    CHeckpoint         OFF
    COMmand_flow       OFF
    CONfig             OFF
    DB_Misc            OFF
    DB_Reads           OFF
    DB_Writes          OFF
    Flow               OFF
    NETWORK_Misc       ON
    NETWORK_Reads      ON
    NETWORK_Writes     ON
    OPer               OFF
    OUtput             OFF
    PACKET_Contents    ON
    PACKET_Flow        ON
    PROTOCOL_Contents  ON
    PROTOCOL_Flow      ON
    RECovery           OFF
    REQuest            OFF
    ROuting            OFF
    Scheduling         OFF
    USER1              OFF
    USER2              OFF
    USER3              OFF
    USER4              OFF
    USER5              OFF
Many sites use Atria's ClearCase environment for source revision control and development. A user runs the cleartool command to start up a ClearCase view. After the view is created, the user is presented with a file system containing the user's sources and binaries. This file system is not accessible outside the view. cleartool has an option to start up a view and run a command under that view.
An LSF Job Starter can be used to set up the view and then run the command (see 'Controlling LSF Batch Jobs' for further details). For example, set the Job Starter to a script named 'clearstarter' similar to the following:
#!/bin/sh
if [ x_$CLEARCASE_ROOT = x_ ]; then
    cd $LS_SUBCWD
    $*
else
    /usr/atria/bin/cleartool setview \
        -exec "cd $LS_SUBCWD;$*" \
        `basename $CLEARCASE_ROOT`
fi
LSF then runs the user's job using the command line:
clearstarter myjob
which sets up the same view as the user had on the submission host, changes to the same directory as on the submission host, and then runs the job. The remote job therefore runs in the same view as it would on the local host.
For interactive jobs, the user sets the environment variable LSF_JOB_STARTER to the ClearCase Job Starter. The RES on the remote host will then run the user's job with the Job Starter. Once the Job Starter is set, lsmake can run transparently in a ClearCase view.
For example, to run lsmake through the RES in a ClearCase view, set the Job Starter and then run the command:
% setenv LSF_JOB_STARTER clearstarter
% lsmake -j 4 -V -f foo.mak
To run a batch job in a ClearCase view, use the csub command instead of bsub. With csub, no Job Starter is needed.
csub checks whether the environment variable CLEARCASE_ROOT is set. If it is set, meaning the job is being submitted from within a view, csub wraps the user's job as follows:
cleartool setview -exec "cd $LS_SUBCWD;job" `basename $CLEARCASE_ROOT`
and passes all options through to bsub, except -i, -o, and -e. These three options are translated into shell I/O redirection. For example, suppose CLEARCASE_ROOT=/view/myview and the user enters:
% csub -q myqueue -o myout -i myin myjob
csub will translate this into:
bsub -q myqueue cleartool setview -exec "cd $LS_SUBCWD; myjob < myin > myout 2>&1" \
    myview
An alternative is to configure a queue-level Job Starter (define JOB_STARTER in the lsb.queues file; see 'Controlling LSF Batch Jobs' for details) and then use bsub to submit the job.
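For example, a queue definition along these lines (the queue name and script location are illustrative) lets users submit ClearCase jobs with plain bsub:

Begin Queue
QUEUE_NAME  = clearview
JOB_STARTER = /usr/local/lsf/bin/clearstarter
.
.
DESCRIPTION = Batch jobs run in the submitting user's ClearCase view
End Queue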
Copyright © 1994-1997 Platform Computing Corporation.
All rights reserved.