site-info.def configuration variables

The following tables contain a list of variables used to configure most of the yaim modules. General variables can be found in:

/opt/glite/yaim/examples/siteinfo/site-info.def : general variables that need a value specific to the site and that must be configured by the site admin.

/opt/glite/yaim/defaults/site-info.pre : general variables that have a meaningful default value and do not need to be changed unless the site admin is interested in a more advanced configuration.

/opt/glite/yaim/defaults/site-info.post : the same as site-info.pre but sourced after the two previous files. This allows defining variables whose default values depend on other variables, like INSTALL_ROOT.

In order to know whether a variable is compulsory in the configuration of a node type or not, please check the relevant node type section in this page, where you can find a description of which set of variables is actually needed for each node type.

NOTE: The distribution of variables among site-info.def, site-info.pre and site-info.post described here applies to yaim core >= 4.0.5-1. In lower versions the distribution may be different (most of the variables lived in site-info.def), but the meaning of the variables is the same, so you can still search this document for variables whatever your yaim core version.
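To make the order concrete, the following sketch shows the sequence in which yaim core reads the three files; the defaults paths are those listed above, while the site-info.def location is site-specific and therefore an assumption here.

```shell
# Sketch of the configuration sourcing order (yaim core >= 4.0.5-1):
# defaults first, then the site admin's values, then defaults that
# depend on previously defined variables such as INSTALL_ROOT.
source /opt/glite/yaim/defaults/site-info.pre   # advanced defaults
source /root/siteinfo/site-info.def             # site-specific values (path is an assumption)
source /opt/glite/yaim/defaults/site-info.post  # defaults depending on the above
```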

BDII_DELETE_DELAY

The cache period for LDAP records that have disappeared from the BDII's input. By default it should be zero, but due to a bug affecting some versions of EMI-1 node types, the admin may need to define it explicitly.

BDII_DELETE_DELAY=0

BDII_HOST

BDII hostname

BDII_HOST=yaim-bdii.cern.ch

3.0.1-0

BDII_LIST

Optional variable to define a list of top level BDIIs to support the automatic failover in the GFAL clients and information system tools. The syntax is my-bdii1.$MY_DOMAIN:port1[,my-bdii2.$MY_DOMAIN:port2[...]]. A list of BDIIs is supported by GFAL, lcg_util, lcg-info, lcg-infosites, lcg-ManageVOTag, lcg-tags and glite-sd-query.

BDII_LIST="yaim-bdii.cern.ch:2170,other-bdii.cern.ch:2170"

4.0.5-1

CE_BATCH_SYS

Batch system used by the CE. Possible values are 'torque', 'lsf', 'pbs', 'condor' and 'sge'.

CE_BATCH_SYS=torque

3.0.1-0

CE_CAPABILITY

This YAIM variable is a blank-separated list used to set the GlueCECapability attribute, where:

1) CPUScalingReferenceSI00=<referenceCPU-SI00>: the reference CPU SI00, calculated in one of two ways: a) if the batch system scales the published CPU time limit (GlueCEPolicyMaxCPUTime) to a reference CPU power, then CPUScalingReferenceSI00 should be the SI00 rating for that reference; b) if the batch system does not scale the time limit, then CPUScalingReferenceSI00 should be the SI00 rating of the least powerful core in the cluster. Sites which have moved to the HEP-SPEC benchmark should use it, converted to SI00 units using the scaling factor of 250, i.e. SI00 = 250*HEP-SPEC.

2) Share=<vo-name>:<vo-share>: this value is used to express VO fairshare targets. If there is no special share, this value MUST NOT be published. <vo-share> can assume values between 1 and 100 (it represents a percentage). Please note that the sum of the shares over all WLCG VOs MUST be less than or equal to 100.

If the worker nodes behind the CE provide the glexec facility (for WLCG VOs), an extra capability 'glexec' used to be requested, but this is not needed any more.
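No example line survives for this variable in this section; a minimal sketch, assuming a hypothetical reference CPU rated at 10 HEP-SPEC06 (hence 2500 SI00) and fairshare targets for two illustrative VOs:

```shell
# Hypothetical values: 10 HEP-SPEC06 * 250 = 2500 SI00; shares sum to <= 100.
CE_CAPABILITY="CPUScalingReferenceSI00=2500 Share=atlas:60 Share=cms:30"
```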

CE_CPU_MODEL

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostProcessorModel attribute. System administrators MUST set this variable to the name of the processor model as defined by the vendor for the Worker Nodes in a SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical processor model for the nodes of a SubCluster.

CE_CPU_MODEL=Xeon

3.0.1-0

CE_CPU_SPEED

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostProcessorClockSpeed attribute. System administrators MUST set this variable to the processor clock speed expressed in MHz for the Worker Nodes in a SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical processor for the nodes of a SubCluster. To publish this value correctly, split your CEs/SubClusters so that each one is homogeneous.

CE_CPU_SPEED=2334

3.0.1-0

CE_CPU_VENDOR

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostProcessorVendor attribute. System administrators MUST set this variable to the name of the processor vendor for the Worker Nodes in a SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical processor for the nodes of a SubCluster.

CE_CPU_VENDOR=intel

3.0.1-0

CE_HOST

Computing Element Hostname

CE_HOST=yaim-ce.cern.ch

3.0.1-0

CE_DATADIR

This YAIM variable is used to set the GlueCEInfoDataDir attribute. This is an optional variable that can be left undefined. Otherwise, system administrators should set it to the path of a shared directory available for application data: typically a POSIX-accessible transient disk space shared between the Worker Nodes. It may be used by MPI applications, to store intermediate files that need further processing by local jobs, or as a staging area, especially if the Worker Nodes have no Internet connectivity.

CE_DATADIR=/mypath

3.0.1-0

CE_INBOUNDIP

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostNetworkAdapterInboundIP attribute. System administrators MUST set this variable to either FALSE or TRUE (in uppercase !) to express the permission for inbound connectivity for the WNs in the SubCluster, even if limited.

CE_INBOUNDIP=FALSE

3.0.1-0

CE_LOGCPU

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueSubClusterLogicalCPUs. System administrators MUST set this variable to the "Total number of cores/hyperthreaded CPUs in the SubCluster, including the nodes part of the SubCluster that are temporarily down or offline". In order to overcome the current YAIM limitation when a new CE head node giving access to the same batch resources is added to a site, site admins MUST set the CE_LOGCPU YAIM variable to 0 if the resources used by the new subclusters are already published via another CE.

CE_LOGCPU=1472

4.0.3-9

CE_MINPHYSMEM

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostMainMemoryRAMSize attribute. System administrators MUST set this variable to the total physical memory of a WN in the SubCluster expressed in MegaBytes. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. It is advisable to publish here the minimum total physical memory of the WNs in the SubCluster expressed in MegaBytes. To publish this value correctly, split your CEs/SubClusters so that each one is homogeneous.

CE_MINPHYSMEM=16000

3.0.1-0

CE_MINVIRTMEM

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostMainMemoryVirtualSize attribute. System administrators MUST set this variable to the total virtual memory of a WN in the SubCluster expressed in MegaBytes. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. It is advisable to publish here the minimum total virtual memory of the WNs in the SubCluster expressed in MegaBytes. To publish this value correctly, split your CEs/SubClusters so that each one is homogeneous.

CE_MINVIRTMEM=32000

3.0.1-0

CE_OS

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostOperatingSystemName attribute. System administrators MUST set this variable to the name of the operating system used on the Worker Nodes part of the SubCluster. - see https://wiki.egi.eu/wiki/Operations/HOWTO05

CE_OS="ScientificSL"

3.0.1-0

CE_OS_RELEASE

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostOperatingSystemRelease attribute. System administrators MUST set this variable to the release of the operating system used on the Worker Nodes part of the SubCluster - see https://wiki.egi.eu/wiki/Operations/HOWTO05

CE_OS_RELEASE=6.3

3.0.1-0

CE_OS_VERSION

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostOperatingSystemVersion attribute. System administrators MUST set this variable to the version of the operating system used on the Worker Nodes part of the SubCluster - see https://wiki.egi.eu/wiki/Operations/HOWTO05

CE_OS_VERSION=Carbon

3.0.1-0

CE_OS_ARCH

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostArchitecturePlatformType attribute. System administrators MUST set this variable to the Platform Type of the WN in the SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. More information can be found here: https://wiki.egi.eu/wiki/Operations/HOWTO06

CE_OS_ARCH=i686

3.0.1-0

CE_OTHERDESCR

This YAIM variable is used to set the GlueHostProcessorOtherDescription attribute. The value of this variable MUST be set to: Cores=<typical-number-of-cores-per-CPU>[,Benchmark=<your-value>-HEP-SPEC06] where <typical-number-of-cores-per-CPU> is equal to the number of cores per CPU of a typical Worker Node in a SubCluster. The second value of this attribute MUST be published only in the case the CPU power of the SubCluster is computed using the Benchmark HEP-SPEC06. The syntax is Cores=value[,Benchmark=value-HEP-SPEC06].

CE_OTHERDESCR="Cores=4,Benchmark=100-HEP-SPEC06"

4.0.7-1

CE_OUTBOUNDIP

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostNetworkAdapterOutboundIP attribute. System administrators MUST set this variable to either FALSE or TRUE (in uppercase !) to express the permission for direct outbound connectivity for the WNs in the SubCluster, even if limited.

CE_OUTBOUNDIP=FALSE

3.0.1-0

CE_PHYSCPU

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueSubClusterPhysicalCPUs. System administrators MUST set this variable to the value of the “Total number of real CPUs/physical chips in the SubCluster, including the nodes part of the SubCluster that are temporarily down or offline”. In order to overcome the current YAIM limitation when a new CE head node giving access to the same batch resources is added to a site, site admins MUST set the CE_PHYSCPU YAIM variable to 0 if the resources used by the new subclusters are already published via another CE.

CE_PHYSCPU=736

4.0.3-9

CE_RUNTIMEENV

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostApplicationSoftwareRunTimeEnvironment. It should define a space separated list of software tags supported by the site. The list can include VO-specific software tags. In order to ensure backwards compatibility it should include the entry 'LCG-2', the current middleware version and the list of previous middleware tags.
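The example line for this variable is missing here; a hedged sketch with hypothetical tags (the middleware tags valid for a given release must be taken from the release notes):

```shell
# Hypothetical tag list: the backwards-compatibility entry plus
# middleware and VO-specific software tags.
CE_RUNTIMEENV="LCG-2 GLITE-3_2_0 VO-dteam-mytest-1.0"
```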

CE_SMPSIZE

Needed for cream CE and lcg-CE in non-cluster mode: This YAIM variable is used to set the GlueHostArchitectureSMPSize attribute. System administrators MUST set this variable to the number of Logical CPUs (cores) of the WN in the SubCluster. Given the fact that SubClusters can be heterogeneous, this refers to the typical worker node in a SubCluster. To publish this value correctly, split your CEs/SubClusters so that each one is homogeneous.

CE_SMPSIZE=2

3.0.1-0

CLASSIC_STORAGE_DIR

The root storage directory on CLASSIC_HOST. This variable is no longer used since the introduction of SE_MOUNT_INFO_LIST. See bug 33210 for the lcg CE and bug 46681 for the cream CE to check in which yaim module version SE_MOUNT_INFO_LIST was introduced, so you know from when this variable can be dropped.

deprecated

3.0.1-0

CREAM_PEPC_RESOURCEID

If specified and configuration of ARGUS PEP client is enabled then yaim will configure ARGUS on the cream CE, otherwise ARGUS setup is skipped on that node. The variable specifies the ARGUS resource ID to be used.

CREAM_PEPC_RESOURCEID=urn:mysitename.org:resource:ce

4.0.12-1

DPM_HOST

Host name of the DPM host

DPM_HOST=yaim.dpm.cern.ch

3.0.1-0

FTS_HOST

FTS server hostname. It's deprecated. See the FTS section of this twiki to know which variables are needed to configure an FTS.

GENERAL_PEPC_RESOURCEID

If specified and ARGUS configuration is enabled, then yaim will configure ARGUS PEP clients on nodes (where supported); otherwise ARGUS PEP client setup is skipped. The variable specifies the ARGUS resource ID to be used. The cream CE and WMS have their own node-specific version of this variable, and GLEXEC on the WN is controlled by other variables, so this generic variable is not used in the configuration of those node types.

GENERAL_PEPC_RESOURCEID=urn:mysitename.org:resource:other

4.0.12-1

GLITE_EXTERNAL_ROOT

The directory where the TAR UI and TAR WN install the external dependencies. Please, check the TAR UI and TAR WN installation instructions for more details. Note that GLITE_EXTERNAL_ROOT=${INSTALL_ROOT}/external is the only configuration that has been tested.

GLITE_EXTERNAL_ROOT=${INSTALL_ROOT}/external

3.0.1-0

GLITE_USER_HOME

This variable will be deprecated in the future. From yaim-core >= 4.0.5-3 it defaults to GLITE_HOME_DIR. Please see yaim core 4.0.5-7 Known Issues if you are using yaim core <= 4.0.7-7 or yaim wms <= 4.0.5-2

GLITE_USER_HOME=/home/glite

3.0.1-0

GRIDICE_SERVER_HOST

GridIce server hostname. Only used in 3.0 configurations.

GRIDICE_SERVER_HOST=my-gridice.cern.ch

3.0.1-0

GROUPS_CONF

Path to the file containing information on the mapping between VOMS groups and roles to local groups. An example of this configuration file is given in /opt/glite/yaim/examples/groups.conf. More details can be found in the Group configuration section in the YAIM guide.

GROUPS_CONF=/opt/glite/etc/groups.conf

3.0.1-0

JOB_MANAGER

The name of the job manager used by the gatekeeper. Must be one of: lcgpbs, lcglsf, lcgsge, lcgcondor, lsf, pbs or condor. For a CREAM CE and glite-Cluster instead specify one of: pbs, lsf, sge or condor (no "lcg" version)

JOB_MANAGER=lcgpbs

3.0.1-0

LB_HOST

LB hostname. It is no longer mandatory for the UI and VOBOX, only for the WMS configuration. See more information in the variable list of each node type.

LB_HOST=yaim-lb.cern.ch

3.0.1-0

LOCAL_GROUPS_CONF

Optional variable to specify a local groups.conf. It is similar to GROUPS_CONF but used to specify a separate file where local accounts specific to the site are defined. More details can be found in the Group configuration section in the YAIM guide.

LOCAL_GROUPS_CONF=/opt/glite/yaim/etc/local.conf

4.0.5-1

MON_HOST

RGMA hostname.

MON_HOST=yaim-mon.cern.ch

3.0.1-0

MYSQL_PASSWORD

The mysql root password. Define it only if you are installing a mysql server.

MYSQL_PASSWORD=password

3.0.1-0

PX_HOST

Myproxy hostname.

PX_HOST=yaim-px.cern.ch

3.0.1-0

QUEUES

The name of the queues defined in the CE

QUEUES="dteam atlas"

3.0.1-0

<queue-name>_GROUP_ENABLE

Space separated list of VO names and VOMS FQANs which are allowed to access the queue.

DTEAM_GROUP_ENABLE="dteam /dteam/Higgs /dteam/ROLE=production"

3.0.1-0

RB_HOST

Resource Broker hostname.

RB_HOST=yaim-rb.cern.ch

3.0.1-0

RFIO_PORT_RANGE

Optional variable for the rfio port range

RFIO_PORT_RANGE="20000,25000"

3.0.1-0

SE_GRIDFTP_LOGFILE

Variable necessary to configure the gridview client on the SEs. It sets the location and filename of the gridftp server logfile on the different types of SEs.

SE_GRIDFTP_LOGFILE=/var/log/dpm-gsiftp/dpm-gsiftp.log

4.0.3-9

SE_LIST

A space separated list of SE hostnames available at your site

SE_LIST="dpm.cern.ch castor.cern.ch"

3.0.1-0

SE_MOUNT_INFO_LIST

This YAIM variable is used to set the GlueCESEBindMountInfo attribute for each defined SE. The variable is a space separated list of SE hosts from SE_LIST with the export directory from the Storage Element and the mount directory common to the worker nodes part of the Computing Element, like SE1:export_dir1,mount_dir1. If any SE from SE_LIST doesn't support the mount concept, don't define anything for that SE in this variable. If this is the case for all the SEs in SE_LIST, put the value none. The GlueCESEBindMountInfo will be "n.a" in both cases. Please note that, in the way the glue schema is specified, an SE can only have one mount point. See also Bug 54530 affecting this variable.
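The example line is missing in this section; a sketch with hypothetical paths, following the SE1:export_dir1,mount_dir1 syntax described above:

```shell
# Hypothetical: the SE exports /export/data, mounted on the WNs as /data.
# An SE without the mount concept is simply left out of the list.
SE_MOUNT_INFO_LIST="dpm.cern.ch:/export/data,/data"
```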

SITE_EMAIL

This YAIM variable is used to set the GlueSiteEmailContact attribute. It's the main email contact for the site. The syntax is a comma separated list of email addresses.

SITE_EMAIL=yaim-contact@cern.ch,admin-yaim@cern.ch

3.0.1-0

SITE_HTTP_PROXY

Optional variable to specify whether your site has an http proxy (syntax is as that of the http_proxy environment variable). It will be used in config_crl and used by the cron jobs (http_proxy) in order to reduce to load on the CA host.

SITE_HTTP_PROXY="http-proxy.my.domain"

3.0.1-0

SITE_INFO_VERSION

Optional variable to specify the version of the set of configuration files (site-info.def, vo.d/, group.d/, nodes/ and local functions) that the sys admin can package under one rpm. It's in fact the rpm version. This variable is used when executing the option -p of the yaim command. Note that this variable has to be defined in site-info.def and not in any other configuration file in the siteinfo directory.

SITE_INFO_VERSION=1.1

4.0.3-5

SITE_LAT

This YAIM variable is used to set the GlueSiteLatitude attribute. It's the position of the site north or south of the equator measured from -90º to 90º with positive values going north and negative values going south.

SITE_LAT=46.20

3.0.1-0

SITE_LONG

This YAIM variable is used to set the GlueSiteLongitude attribute. It's the position of the site east or west of Greenwich, England measured from -180º to 180º with positive values going east and negative values going west.

SITE_LONG=6.1

3.0.1-0

SITE_NAME

This YAIM variable is used to set the GlueSiteName attribute. It's the human-readable name of your site.

SITE_NAME=yaim-testbed

3.0.1-0

SPECIAL_POOL_ACCOUNTS

Optional variable. It determines the use of pool accounts for special users when generating the grid-mapfile. If not defined, YAIM will decide automatically whether to use special pool accounts or not. The value is 'yes' or 'no'.

SPECIAL_POOL_ACCOUNTS=yes

4.0.5-1

USE_ARGUS

Optional variable. When set to yes indicates that setup of the ARGUS authorisation framework is to be done. A number of other variables are required to fully specify the ARGUS parameters and allow the configuration to be made. See the "ARGUS authorisation framework control" section in the definition file, where the variables are grouped together. Currently the enabling of ARGUS on the WN is independent of this option.

USE_ARGUS=no

4.0.12-1

USER_HOME_PREFIX

Optional variable used to specify a home directory for the pool accounts different from /home. The directory must exist in the system; YAIM does not create it. If it doesn't exist, the yaim command will fail when trying to add the users, so sys admins must ensure the directory specified by this variable already exists. See the usage of this variable per VO below in the VO related variables. If the variable is defined for a certain VO, that value will have priority over this one.

USER_HOME_PREFIX=/special/dir/

4.0.4-1

USERS_CONF

Path to the file containing the list of Linux users (pool accounts) to be created. This file should be created by the site administrator. It contains a plain list of the users and their IDs. An example of this configuration file is given in /opt/glite/yaim/examples/users.conf. More details can be found in the User configuration section in the YAIM guide.

USERS_CONF=/opt/glite/yaim/etc/users.conf

3.0.1-0

VOS

List of supported VOs

VOS="dteam atlas"

3.0.1-0

VO_SW_DIR

Base directory for the installation of the experiment software. It's normally used in combination with a VO related variable.

VO_SW_DIR=/opt/exp_soft

3.0.1-0

WMS_PEPC_RESOURCEID

If specified and configuration of ARGUS PEP client is enabled then yaim will configure ARGUS on the WMS, otherwise ARGUS setup is skipped on that node. The variable specifies the ARGUS resource ID to be used.

WMS_PEPC_RESOURCEID=urn:mysitename.org:resource:wms

4.0.12-1

WN_LIST

Path to the list of Worker Nodes. The list of Worker Nodes is a file to be created by the site administrator. An example of this configuration file is given in /opt/glite/yaim/examples/wn-list.conf. For more information please check the WN list section in the YAIM guide.
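The example line is missing here; a typical value, assuming the file sits with the other yaim configuration files (the path is an assumption):

```shell
# Assumed location; the file itself must be created by the site administrator.
WN_LIST=/opt/glite/yaim/etc/wn-list.conf
```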

Note that <vo-name> should be in capital letters and in case '.' and '-' are part of the vo name, they should be transformed into '_'. For example, a vo called org.yaim.vo should define its variables as VO_ORG_YAIM_VO_NAME_*. For more information on VO variables please check the vo.d directory section in the YAIM guide.
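The transformation described above can be sketched as a small shell helper; vo_prefix is a hypothetical name, not part of yaim:

```shell
# Hypothetical helper: derive the YAIM variable prefix from a VO name.
# '.' and '-' become '_' and the result is upper-cased.
vo_prefix () {
  echo "VO_$(echo "$1" | tr '.-' '__' | tr '[:lower:]' '[:upper:]')"
}

vo_prefix org.yaim.vo   # prints VO_ORG_YAIM_VO
```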

VO_<vo-name>_LB_HOSTS

Optional variable to specify a space separated list of LB hostname:port pairs supported by the VO.

glite-yaim-clients 4.0.3-3

VO_<vo-name>_MAP_WILDCARDS

Optional variable to automatically add wildcards per FQAN in the LCMAPS gridmap file and groupmap file. Set it to 'yes' if you want to add the wildcards in your VO. Leave it undefined or set it to 'no' if you don't want to configure wildcards in your VO.

4.0.5-5

VO_<vo-name>_PX

Myproxy server supported by the VO.

glite-yaim-clients 3.1.0-0

VO_<vo-name>_RBS

A space separated list of RBs hostname supported by the VO.

3.0.1-0

VO_<vo-name>_STORAGE_DIR

Path to the storage area for the VO on an SE classic. SE classic is no longer part of gLite 3.1.

deprecated in 3.1

VO_<vo-name>_SW_DIR

Area on the WN for the installation of the experiment software. If a predefined shared area where VO managers can pre-install software has been mounted on the WNs, then this variable should point to that area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). Note that the mounting of shared areas, as well as the local installation of VO software, is not managed by yaim and should be handled locally by site administrators.
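The example line is missing here; a sketch for a hypothetical dteam VO, reusing the site-wide VO_SW_DIR:

```shell
# Shared software area for the VO; use "." if no shared area is mounted.
VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam
```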

3.0.1-0

VO_<vo-name>_UNPRIVILEGED_MKGRIDMAP

Optional variable to control the creation of a grid-map file which only contains mappings to ordinary users for the VO: 'no' will create a grid-map file with special users as well, if defined in groups.conf; 'yes' will create a grid-mapfile containing only mappings to ordinary pool accounts.

4.0.10-1

VO_<vo-name>_USER_HOME_PREFIX

Optional variable used to specify a home directory for the pool accounts different from /home. The directory must exist in the system. YAIM is not creating it. If it doesn't exist, when trying to add the users, the yaim command will fail. So sys admins must ensure the directory specified by this variable already exists.

4.0.11-1

VO_<vo-name>_VOMSES

This variable contains the vomses file parameters needed to contact a VOMS server. Multiple VOMS servers can be given if the parameters are enclosed in single quotes. The syntax should be 'vo_nickname voms_server_hostname port voms_server_host_cert_dn vo_name gt_version' where gt_version is optional and refers to the version of the Globus Toolkit the VOMS server is running. This argument is needed to know how to contact the VOMS server, which is done in a different way depending on the GT version it's running. YAIM supports a nickname as first argument (rather than requiring it to be the same as vo_name) since version 4.0.12-1.
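The example line is missing here; a sketch for a hypothetical dteam VO with two VOMS servers — hostnames and certificate DNs are illustrative, not authoritative:

```shell
# Two vomses entries, each enclosed in single quotes.
VO_DTEAM_VOMSES="'dteam voms.example.org 15004 /DC=org/DC=example/OU=computers/CN=voms.example.org dteam' 'dteam voms2.example.org 15004 /DC=org/DC=example/OU=computers/CN=voms2.example.org dteam'"
```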

3.0.1-0

VO_<vo-name>_VOMS_EXTRA_MAPS

Optional variable used to define any further arbitrary maps you need in edg-mkgridmap.conf.

Deprecated glite-yaim-core >= 4.0.4-2

VO_<vo-name>_VOMS_CA_DN

DN of the CA that signs the VOMS server certificate. Multiple values can be given if enclosed in single quotes. Note that there must be as many entries as in the VO_<vo-name>_VOMSES variable. There's a one to one relationship in the elements of both lists, so the order must be respected.
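The example line is missing here; a sketch for a hypothetical dteam VO with two VOMS servers, one CA DN per server, in the same order as the VO_<vo-name>_VOMSES entries (DNs are illustrative):

```shell
# Illustrative CA DNs; order must match the VO_DTEAM_VOMSES entries.
VO_DTEAM_VOMS_CA_DN="'/DC=org/DC=example/CN=Example Certification Authority' '/DC=org/DC=example/CN=Example Certification Authority'"
```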

4.0.3-6

VO_<vo-name>_VOMS_SERVERS

A list of the VOMS servers used to create the DN grid-map file. The format is 'vomss://<host-name>:8443/voms/<vo-name>?/<vo-name>'.
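The example line is missing here; a sketch for a hypothetical dteam VO with an illustrative hostname:

```shell
# Illustrative VOMS server URL following the format above.
VO_DTEAM_VOMS_SERVERS="'vomss://voms.example.org:8443/voms/dteam?/dteam'"
```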

3.0.1-0

VO_<vo-name>_WMS_HOSTS

Optional variable to specify a space separated list of WMSs hostname supported by the VO.

CONFIG_USERS

The creation of groups and users needed by the middleware is done by YAIM. The default value is yes. If you want to disable this functionality, set it to no; you must then ensure the users and groups defined in $INSTALL_ROOT/glite/yaim/examples/edgusers.conf are created in your system. For the VO pool accounts, YAIM also provides an example file in $INSTALL_ROOT/glite/yaim/examples/users.conf. Even if you create your own users, you must provide a similar file that will be used to create the gridmap file.

yes

4.0.5-1

CRON_DIR

Directory where YAIM writes all the cron jobs

/etc/cron.d

3.0.1-0

DPMMGR_USER

DPM user

dpmmgr

4.0.5-1

DPMMGR_GROUP

DPM user group

dpmmgr

4.0.5-1

EDG_WL_SCRATCH

Optional scratch directory for jobs

""

3.0.1-0

EDG_USER

edg user

edguser

4.0.5-1

EDG_GROUP

edg user group

edguser

4.0.5-1

EDG_HOME_DIR

edg user home directory. Note: it is recommended to use /var/lib/user_name as the HOME directory for system users.

/home/edguser

4.0.10-1

EDGINFO_USER

edginfo user

edginfo

4.0.5-1

EDGINFO_GROUP

edginfo user group

edginfo

4.0.5-1

EDGINFO_HOME_DIR

edginfo user home directory. Note: it is recommended to use /var/lib/user_name as the HOME directory for system users.

/home/edginfo

4.0.10-1

FQANVOVIEWS

If set to yes, yaim will configure the information system to publish the CE VOViews also for groups mapped/identified by VOMS FQANs. If set to no, only the VO VOViews will be published.

no

4.0.4-1

FUNCTIONS_DIR

The directory where YAIM functions are stored

/opt/glite/yaim/functions

3.0.1-0

GIP_CACHE_TTL

How long information in the cache is valid.

300

3.0.1-0

GIP_FRESHNESS

If the information from the plug-ins is within this time limit, the dynamic plug-ins will not be executed.

60

3.0.1-0

GIP_RESPONSE

How long the GIP will wait for dynamic plug-ins to run before reading the information from the cache.

$BDII_SITE_TIMEOUT - 5

3.0.1-0

GIP_TIMEOUT

The timeout value to be used with dynamic plug-ins.

150

3.0.1-0

GLITE_USER

glite user

glite

4.0.5-1

GLITE_GROUP

glite user group

glite

4.0.5-1

GLITE_HOME_DIR

glite user home directory

/home/glite

4.0.5-1

GLOBUS_TCP_PORT_RANGE

Port range for Globus IO. It should be specified as "num1,num2". YAIM automatically handles the syntax of this variable depending on the version of VDT: for VDT 1.6 it leaves "num1,num2"; for versions older than VDT 1.6 it changes it to "num1 num2".

"20000,25000"

3.0.1-0

GRIDFTP_CONNECTIONS_MAX

Maximum number of simultaneous connections to the gridftp server. The default is increased to 150 in yaim core >= 4.0.10-1. For yaim core <= 4.0.6-1 it is recommended to increase this variable to two or three times its default of 50.

150

4.0.6-1

INSTALL_ROOT

Installation root - change if using the re-locatable distribution.

/opt

3.0.1-0

INFOSYS_GROUP

Information system user group

infosys

4.0.5-1

JAVA_LOCATION

Path to the Java VM installation. It can be used in order to run a different version of Java installed locally. WARNING! This variable will disappear soon.

/usr/java/j2sdk1.4.2_12

Deprecated in glite-yaim-core >= 4.0.8-1

LCMAPS_DEBUG_LEVEL

LCMAPS debugging level

0

4.0.1-4

LCMAPS_LOG_LEVEL

LCMAPS logging level

1

4.0.1-4

LCAS_DEBUG_LEVEL

LCAS debugging level

0

4.0.1-4

LCAS_LOG_LEVEL

LCAS logging level

1

4.0.1-4

LCG_REPOSITORY

APT repository for the EGEE middleware. This is only for gLite 3.0, which uses apt. For gLite 3.1, please check the corresponding repository documentation.

This variable is used in the trustmanager configuration and it defines how often the X509_CERT_DIR is polled for changes in the files. It's a number followed by h,m or s time units.

2h

4.0.8-1

UNPRIVILEGED_MKGRIDMAP

Note that this variable should be specified per VO in yaim core >= 4.0.10-1! Use it in case you want to create a grid-map file which only contains mappings to ordinary users: 'no' will create a grid-map file with special users as well, if defined in groups.conf; 'yes' will create a grid-mapfile containing only mappings to ordinary pool accounts.

site-info.post

edg users configuration file. If you disable YAIM user configuration, make sure you add these users and groups in your system. The format of this file is: user:id:group:gip:description:home. More details can be found in /opt/glite/yaim/defaults/edgusers.conf.README

Service configuration variables

NOTE: Some yaim modules have started to distribute node specific variables that in some cases used to be part of site-info.def. This documentation already describes the situation where all node specific variables are distributed by the corresponding yaim module. Remember that this is not yet the current situation for all yaim modules, so maybe some of the files described here do not exist yet in the yaim module.

In order to configure a service you need to define some variables distributed in different files. You can define these variables in your site-info.def or leave them under the siteinfo/services directory, where siteinfo is the directory where your site-info.def is located. Default variables can be redefined as well in either of the two locations.

The files where the variables are located are:

Mandatory general variables: sys admins must define these variables. They can be found in /opt/glite/yaim/examples/siteinfo/site-info.def and are described in the previous section on site-info.def variables.

Mandatory service specific variables: sys admins must define these variables. They can be found in /opt/glite/yaim/examples/siteinfo/services/node-type and are described in the following sections.

Default general variables: sys admins don't need to define these variables unless they want a specific value for their site which is different from the default one. They can be found in /opt/glite/yaim/defaults/site-info.pre or .post and are described in the previous sections on site-info.pre and site-info.post variables.

Default service specific variables: sys admins don't need to define these variables unless they want a specific value for their site which is different from the default one. They can be found in /opt/glite/yaim/defaults/node-type.pre or .post and are described in the following sections.

All the services need to have INSTALL_ROOT defined. This variable is always defined in site-info.pre and defaults to /opt.
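As an illustration of how these files come together, a typical yaim invocation reads site-info.def plus any services/<node-type> files next to it; the siteinfo path and node type below are assumptions:

```shell
# Configure a node type using the siteinfo directory described above.
/opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n BDII_site
```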

AMGA

AMGA oracle

Mandatory general variables

VOS

Mandatory service specific variables: they can be found in /opt/glite/yaim/examples/siteinfo/services/glite-amga_oracle.

The connection string to use for the Oracle server with the sqlplus command. Example: oracleuser/secret_password@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle.grid.ucy.ac.cy)(PORT=1521))(CONNECT_DATA=(SID=orcl)))

connection string

4.0.3-3

AMGA postgres

Mandatory general variables

VOS

Mandatory service specific variables: they can be found in /opt/glite/yaim/examples/siteinfo/services/glite-amga_postgres.

Number of records that APEL will select in one go. The value should be adjusted according to the memory assigned to the Java VM: in general, for 512MB the number of records should be 150000 and for 1024MB around 300000. The default value included in the APEL code is 300000, as the default memory is 1024MB.

CONFIG_PAP

Set this variable to 'no' if you don't want yaim to create the pap_configuration.ini file

string

yes

1.0.0-1

CONFIG_PDP

Set this variable to 'no' if you don't want yaim to create the pdp.ini file

string

yes

1.0.0-1

CONFIG_PEP

Set this variable to 'no' if you don't want yaim to create the pepd.ini file

string

yes

1.0.0-1

PAP_HOME

Home directory of the pap service

path

${PAP_HOME:-${INSTALL_ROOT}/argus/pap}

1.0.0-1

PAP_ENTITY_ID

This is a unique identifier for the PAP. It must be a URI (URL or URN) and the same entity ID should be used for all PAP instances that make up a single logical PAP. If a URL is used it need not resolve to any specific webpage.

URI

${PAP_ENTITY_ID:-"http://${ARGUS_HOST}/pap"}

1.1.0-1

PAP_HOST

Set this variable to another value if the PAP is not installed on the same host as the PDP and PEP.

IP address

127.0.0.1

1.0.0-1

PAP_CONF_INI

Configuration file for the pap service

path

${PAP_CONF_INI:-${PAP_HOME}/conf/pap_configuration.ini}

1.0.0-1

PAP_AUTHZ_INI

Configuration file for the pap service authorization policies

path

${PAP_AUTHZ_INI:-${PAP_HOME}/conf/pap_authorization.ini}

1.0.0-1

PAP_REPO_LOCATION

Path to the repository directory

path

${PAP_REPO_LOCATION:-${PAP_HOME}/repository}

1.0.0-1

PAP_POLL_INTERVAL

The polling interval (in seconds) for retrieving remote policies

number

14400

1.0.0-1

PAP_ORDERING

Comma separated list of pap aliases. Example: alias-1, alias-2, ..., alias-n. Defines the order of evaluation of the policies of the paps, meaning that the policies of pap "alias-1" are evaluated first, then the policies of pap "alias-2", and so on.

string

default

1.0.0-1

PAP_CONSISTENCY_CHECK

Forces a consistency check of the repository at startup.

boolean

false

1.0.0-1

PAP_CONSISTENCY_CHECK_REPAIR

If set to true, automatically fixes problems detected by the consistency check (this usually means deleting the corrupted policies).

boolean

false

1.0.0-1

PAP_PORT

PAP standalone service port

port

8150

1.0.0-1

PAP_SHUTDOWN_PORT

PAP standalone shutdown service port

port

8151

1.0.0-1

PAP_SHUTDOWN_COMMAND

PAP standalone shutdown command (password)

string (password)

generated pseudo random

1.1.0-1

PDP_HOME

Home directory of the pdp service

path

${PDP_HOME:-${INSTALL_ROOT}/argus/pdp}

1.0.0-1

PDP_CONF_INI

Configuration file for the PDP service

path

${PDP_CONF_INI:-${PDP_HOME}/conf/pdp.ini}

1.0.0-1

PDP_ENTITY_ID

This is a unique identifier for the PDP. It must be a URI (URL or URN) and the same entity ID should be used for all PDP instances that make up a single logical PDP. If a URL is used it need not resolve to any specific webpage.

URI

${PDP_ENTITY_ID:-"http://${ARGUS_HOST}/pdp"}

1.1.0-1

PDP_HOST

Set this variable to another value if the PDP is not installed on the same host as the PAP and PEP.

IP address

127.0.0.1

1.0.0-1

PDP_PORT

PDP standalone service port

port

8152

1.0.0-1

PDP_ADMIN_PORT

PDP admin service port

port

8153

1.1.0-1

PDP_ADMIN_PASSWORD

PDP admin service password for commands such as shutdown, reload policy, etc.

string (password)

PSEUDO_RANDOM

1.1.0-1

PDP_RETENTION_INTERVAL

The number of minutes the PDP will retain (cache) a policy retrieved from the PAP. After this time has passed, the PDP will again call out to the PAP and retrieve the policy

number

240

1.0.0-1

PDP_PAP_ENDPOINTS

Space separated list of PAP endpoint URLs for the PDP to use. Endpoints will be tried in turn until one returns a successful response. This provides limited failover support. If more intelligent failover is necessary or load balancing is required, a dedicated load-balancer/failover appliance should be used.

PEP_ENTITY_ID

This is a unique identifier for the PEP. It must be a URI (URL or URN) and the same entity ID should be used for all PEP instances that make up a single logical PEP. If a URL is used it need not resolve to any specific webpage.

URI

${PEP_ENTITY_ID:-"http://${ARGUS_HOST}/pepd"}

1.1.0-1

PEP_HOST

Set this variable to another value if the PEP is not installed on the same host as the PAP and PDP. But remember to use the hostname, not 127.0.0.1!

hostname

${ARGUS_HOST}

1.1.0-1

PEP_PORT

PEP service port

port

8154

1.0.0-1

PEP_ADMIN_PORT

PEP admin service port

port

8155

1.1.0-1

PEP_ADMIN_PASSWORD

PEP admin service password for commands such as shutdown, clear cache, etc.

string (password)

generated pseudo random

1.1.0-1

PEP_MAX_CACHEDRESP

The maximum number of responses from any PDP that will be cached. Setting this value to 0 (zero) will disable caching.

number

500

1.0.0-1

PEP_PDP_ENDPOINTS

Space separated list of PDP endpoint URLs for the PEP to use. Endpoints will be tried in turn until one returns a successful response. This provides limited failover support. If more intelligent failover is necessary or load balancing is required, a dedicated load-balancer/failover appliance should be used.

URLs

${PEP_PDP_ENDPOINTS:-"https://${PDP_HOST}:8152/authz"}

1.1.0-1
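Putting the Argus variables together, a site running PAP, PDP and PEP on one host typically only overrides a few of the defaults above. A hedged sketch (hostname illustrative):

```shell
ARGUS_HOST=argus.example.org

# Components default to the local host; override only for a split deployment
# PDP_HOST=pdp.example.org

# Space separated PDP endpoint list used by the PEP for failover
PEP_PDP_ENDPOINTS="https://${ARGUS_HOST}:8152/authz"
```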

BDII

site BDII

Mandatory general variables

CE_HOST

SITE_BDII_HOST

SITE_EMAIL

SITE_LAT

SITE_LONG

SITE_NAME

Mandatory service specific variables: they can be found in /opt/glite/yaim/examples/siteinfo/services/glite-bdii_site:

BDII_REGIONS

List of host identifiers publishing information to the BDII. For each item listed in the BDII_REGIONS variable you need to create a BDII_<host-id>_URL variable

node-type name

3.0.1-0

BDII_<host-id>_URL

URL of the information producer, e.g. BDII_host1_URL="ldap://host1_hostname:2170/mds-vo-name=resource,o=grid", where host1 is a host on which several node types may be installed, for example an lcg CE and a site BDII. It is therefore necessary to create one variable per host, not per node type.
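For example, a site whose CE runs on one host and whose SE runs on another could define (host identifiers and hostnames illustrative):

```shell
BDII_REGIONS="CE SE"

# One BDII_<host-id>_URL per item in BDII_REGIONS
BDII_CE_URL="ldap://ce.example.org:2170/mds-vo-name=resource,o=grid"
BDII_SE_URL="ldap://se.example.org:2170/mds-vo-name=resource,o=grid"
```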

Most of these variables correspond to variables that are defined for a CE when it is deployed in non-cluster mode. Please refer to the description of the old variable where applicable as shown in the table below.

Variable Name

Description

Value type

Version

CE_HOST_<host-name>_CE_TYPE

CE type: 'jobmanager' for lcg CE and 'cream' for cream CE

string

glite-yaim-cluster 1.0.0-2

CE_HOST_<host-name>_CE_InfoJobManager

The name of the job manager used by the CE. This variable has been renamed in the new infosys configuration. The old variable name was: JOB_MANAGER. Please define as: pbs, lcgpbs, lsf or lcglsf, etc.

string

glite-yaim-cluster 1.0.0-2

CE_HOST_<host-name>_QUEUES

Space separated list of the queue names configured in the CE.

string

glite-yaim-cluster 1.0.0-2

CLUSTER_HOST

hostname where the cluster is configured

hostname

glite-yaim-cluster 1.0.0-2

CLUSTERS

Space separated list of your cluster identifiers, e.g. CLUSTERS="cluster1 [cluster2 [...]]". The identifiers are only used within yaim configuration files.

string list

glite-yaim-cluster 1.0.0-1

CLUSTER_<cluster-identifier>_CLUSTER_UniqueID

Cluster UniqueID. It may contain lowercase alphanumeric characters, dot, dash and underscore only. It must be globally unique, for instance base it on the DNS domain.

string

glite-yaim-cluster 1.0.0-1

CLUSTER_<cluster-identifier>_CLUSTER_Name

Cluster human readable name

string

glite-yaim-cluster 1.0.0-1

CLUSTER_<cluster-identifier>_SITE_UniqueID

Name of the site the cluster belongs to. It should be consistent with your SITE_NAME variable. NOTE: This may be changed to SITE_UniqueID when the GlueSite is configured with the new infosys variables

string

glite-yaim-cluster 1.0.0-1

CLUSTER_<cluster-identifier>_CE_HOSTS

Space separated list of CE hostnames configured in the cluster

hostname list

glite-yaim-cluster 1.0.0-1

CLUSTER_<cluster-identifier>_SUBCLUSTERS

Space separated list of your subcluster identifiers, e.g. "subcluster1 [subcluster2 [...]]". The identifiers are only used within yaim configuration files.

string list

glite-yaim-cluster 1.0.0-1

COMPUTING_SERVICE_ID

The Glue2 computing service id

String

glite-yaim-cluster 2.1.0-3

SUBCLUSTER_<subcluster-identifier>_SUBCLUSTER_UniqueID

Subcluster UniqueID. It may contain lowercase alphanumeric characters, dot, dash and underscore only. It must be globally unique within all Subcluster UniqueIDs, for instance base it on the DNS domain to ensure it will not collide with an ID at another site. Typically if a cluster will only have one subcluster the Subcluster UniqueID may be set to be the same as the Cluster UniqueID.
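A minimal single-cluster, single-subcluster configuration built from the variables above might look like this (identifiers and hostnames are illustrative):

```shell
CLUSTERS="cluster1"

CLUSTER_cluster1_CLUSTER_UniqueID="cluster1.example.org"
CLUSTER_cluster1_CLUSTER_Name="Main production cluster"
CLUSTER_cluster1_SITE_UniqueID="MY-SITE"
CLUSTER_cluster1_CE_HOSTS="ce.example.org"
CLUSTER_cluster1_SUBCLUSTERS="subcluster1"

# With only one subcluster, its UniqueID may equal the cluster UniqueID
SUBCLUSTER_subcluster1_SUBCLUSTER_UniqueID="cluster1.example.org"
```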

CONDOR client

The Condor Client is a Worker Node configured as an Executer for condor. You need the same variables used for the Condor Server configuration.

CONDOR Utils

The Condor Utils node is an lcg-CE or creamCE configured as job submitter for condor. It also provides the information to be published via the site BDII and parses the accounting data into the APEL database located at the MON_HOST. The variables needed to configure it are:

NOTE: There are no queues in the "conventional" sense in condor. Set the variable QUEUES to the short hostname of the condor server (e.g. QUEUES=condor). Then set the variable ${QUEUES}_GROUP_ENABLE according to your access policy for the condor pool, e.g. as you would do in PBS.
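The note above translates into something like the following, assuming a condor server whose short hostname is condor and a site that supports dteam (both illustrative):

```shell
# QUEUES set to the condor server's short hostname, not a real queue
QUEUES="condor"

# ${QUEUES}_GROUP_ENABLE, as for a PBS queue
CONDOR_GROUP_ENABLE="dteam"
```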

By default the cream DB is on localhost and is accessible only from localhost. Setting this variable to true will allow all computers in your domain to access the cream DB

String

No

4.0.7-0

BLAH_CHILD_POLL_TIMEOUT

BLAH timeout

Number

200

4.0.7-2

BLAH_JOBID_PREFIX

BLAH jobId prefix. It MUST be 6 chars long, begin with 'cr' and terminate with '_'. The other 3 characters must be alpha-numeric. It is important in case there's more than one CE using the same farm. In this case, it is suggested that each CREAM_CE has its own prefix

String

cream_

4.0.4-13

BLAH_JOBID_PREFIX_ES

BLAH jobId prefix. It MUST be 6 chars long, begin with 'cr' and terminate with '_'. The other 3 characters must be alpha-numeric. It is important in case there's more than one CE using the same farm. In this case, it is suggested that each CREAM_CE has its own prefix

String

4.3.0-3

BLPARSER_WITH_UPDATER_NOTIFIER

Specifies whether the new blparser (set it to 'true') or the old one (set it to 'false') should be used

String

false

4.0.8-0

BLP_PORT

Port on which the BLAH blparser listens

Number

33333

4.0.6-0

BUPDATER_LOOP_INTERVAL

Used to set the value bupdater_loop_interval in blah.config. It specifies how often the batch system should be queried

Relevant only when the batch system is a PBS implementation. If the value of the variable is 'yes', staging will be done using: "-W stagein=file1@host:source1,stagein=file2@host:source2". If the value is 'no', staging will be done using: -W stagein="file1@host:source1,file2@host:source2"

String

yes

4.1.2-0

QUEUE_xxx_CLUSTER_UniqueID

The cluster uniqueid mapped to the specified queue

String

4.3.0-3

RESET_CREAM_DB_GRANTS

If yes, yaim will remove any unneeded (for CREAM) and potentially dangerous grants on the CREAM DB

String

yes

4.0.9-2

SANDBOX_TRANSFER_METHOD_BETWEEN_CE_WN

If the value for this variable is GSIFTP, the transfer of sandbox files between the CE node and the WN is done using gridftp. If instead the value for this variable is LRMS, such file transfer is done using the batch system staging capabilities

The base directory after /dpm. Change it if you have several DPM head nodes in the same domain, to ensure a uniform name space. E.g.: 1st head node /dpm/cern.ch/home, 2nd head node /dpm/cern.ch/home2

directory name

home

4.0.1-7

DPMFSIZE

The default disk space allocated per file on a DPM node.

Number followed by storage unit

200M

3.0.1-0

DPM_HTTPS

Enable DPM's HTTPS access

yes or no

no

4.0.1-7

DPM_XROOTD

Enable DPM's xROOTD access (obsolete)

yes or no

no

4.0.1-7 to 4.2.7

DPM_XROOTD_NOGSI

Enable DPM's xROOTD access without GSI authentication (obsolete)

yes or no

no

4.0.1-7 to 4.2.7

RFIO_PORT_RANGE

The port range used by RFIO operations

Two space-separated numbers

20000 25000

3.0.1-0
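A site that wants a wider RFIO port range and a larger per-file allocation than the defaults would simply override these variables in its services file, e.g. (values illustrative):

```shell
DPMFSIZE=500M
RFIO_PORT_RANGE="20000 30000"
```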

DPM disk

Mandatory general variables

BDII_HOST

DPM_HOST

GROUPS_CONF

SE_GRIDFTP_LOGFILE

SE_LIST

USERS_CONF

VOS

VO_<vo-name>_SW_DIR

VO_<vo-name>_VOMS_SERVERS

VO_<vo-name>_VOMS_CA_DN

VO_<vo-name>_VOMSES

Mandatory service specific variables: they can be found in /opt/glite/yaim/examples/siteinfo/services/glite-se_dpm_disk :

The base directory after /dpm. Change it if you have several DPM head nodes in the same domain, to ensure a uniform name space. E.g.: 1st head node /dpm/cern.ch/home, 2nd head node /dpm/cern.ch/home2

directory name

home

4.0.1-7

DPMFSIZE

The default disk space allocated per file on a DPM node.

Number followed by storage unit

200M

3.0.1-0

DPM_HTTPS

Enable DPM's HTTPS access

yes or no

no

4.0.1-7

DPM_XROOTD

Enable DPM's xROOTD access (obsolete)

yes or no

no

4.0.1-7 to 4.2.7

DPM_XROOTD_NOGSI

Enable DPM's xROOTD access without GSI authentication (obsolete)

yes or no

no

4.0.1-7 to 4.2.7

RFIO_PORT_RANGE

The port range used by RFIO operations

Two space-separated numbers

20000 25000

3.0.1-0

E2EMONIT

Mandatory general variables

MON_HOST

Mandatory service specific variables: they can be found in /opt/glite/yaim/examples/siteinfo/services/glite-e2emonit:

Define this variable to configure the operation mode of glexec on your WN. The possible values are: 1) setuid: it will actually enable glexec to do the identity change. 2) log-only: it won't do any identity change but the log files will show whether the mapping was successful or not.

String

4.0.5-1

GLEXEC_WN_SCAS_ENABLED

Define this variable to configure glexec to work against a SCAS server. 'yes' means you want to use a SCAS server and therefore need to define the SCAS variables below; 'no' means you don't want to use any SCAS server. See also the notes below.

String

4.0.5-1

GLEXEC_WN_ARGUS_ENABLED

Define this variable to configure glexec as a PEP client (see the EGEE/AuthorizationFramework); 'yes' means use ARGUS, 'no' means do not use ARGUS. See also the notes below.

String

N/A

SCAS_HOST

SCAS server hostname. Define this variable if you want to configure glexec to work against a SCAS server.

hostname

4.0.5-1

SCAS_PORT

SCAS port where the SCAS server is listening. Define this variable if you want to configure glexec to work against a SCAS server.

port

4.0.5-1

SCAS_ENDPOINTS

complete URL of SCAS endpoint, e.g. https://scas1.example.com:8443. Alternative to using SCAS_HOST and SCAS_PORT. Multiple values are allowed, separated by whitespace

It disables the creation of the gridmap directory, only when GLEXEC_WN_SCAS_ENABLED = yes.

string

no

N/A

GLEXEC_LOCATION

installation root for the glexec software; set this if you have an alternate build and install location.

path

${GLITE_LOCATION}

N/A

GLEXEC_WN_CONFIG

full path of the glexec.conf file; this file is written by YAIM. Make this the hardcoded value in your version of glexec

path

/opt/glite/etc/glexec.conf

N/A

GLEXEC_WN_LCASLCMAPS_LOG

lcas/lcmaps log file

path

${GLEXEC_WN_LOG_DIR}/lcas_lcmaps.log

4.0.5-1

GLEXEC_WN_LCAS_DEBUG_LEVEL

lcas debug level

number

0

4.0.5-1

GLEXEC_WN_LCAS_DIR

lcas configuration directory

path

${INSTALL_ROOT}/glite/etc/lcas

4.0.5-1

GLEXEC_WN_LCAS_CONFIG

lcas configuration file

path

${GLEXEC_WN_LCAS_DIR}/lcas-glexec.db

4.0.5-1

GLEXEC_WN_LCAS_LOG_LEVEL

lcas log level

number

1

4.0.5-1

GLEXEC_WN_LCMAPS_DEBUG_LEVEL

lcmaps debug level

number

0

4.0.5-1

GLEXEC_WN_LCMAPS_DIR

lcmaps configuration directory path

path

${INSTALL_ROOT}/glite/etc/lcmaps

4.0.5-1

GLEXEC_WN_LCMAPS_CONFIG

lcmaps configuration file

path

${GLEXEC_WN_LCMAPS_DIR}/lcmaps-glexec.db

4.0.5-1

GLEXEC_WN_LCMAPS_LOG_LEVEL

lcmaps log level

number

1

4.0.5-1

GLEXEC_WN_LOG_DIR

Directory of the lcas and lcmaps log file.

path

/var/log/glexec

4.0.5-1

GLEXEC_WN_LOG_FILE

glexec log file. Define this variable if you have defined GLEXEC_WN_LOG_DESTINATION=file

path

${GLEXEC_WN_LOG_DIR}/glexec.log

4.0.5-1

GLEXEC_WN_LOG_LEVEL

glexec log level

number

0

N/A

GLEXEC_WN_LOG_DESTINATION

Optional variable to tell glexec where to send the glexec logging information. There are two values: 'syslog' and 'file'. The default is 'syslog'. The value 'syslog' puts all messages in the syslog and 'file' puts the messages in a file. Define this variable if you want to specify a file. For value 'file' define GLEXEC_WN_LOG_FILE as well.

string

syslog

4.0.5-1

GLEXEC_WN_PEPC_RESOURCEID

The resource id passed by the PEP client module to ARGUS. DO NOT CHANGE THIS PARAMETER.

string

http://authz-interop.org/xacml/resource/resource-type/wn

N/A

GLEXEC_WN_PEPC_ACTIONID

The action id passed by the PEP client module to ARGUS. DO NOT CHANGE THIS PARAMETER.

string

http://glite.org/xacml/action/execute

N/A

PILOT_JOB_FLAG

Flag used in users.conf and groups.conf to define the special pilot job accounts.

Notes on using SCAS and ARGUS

Although atypical, it is possible to configure both the SCAS and ARGUS modules as back-ends for LCMAPS. The resulting configuration will first do the callout to ARGUS, then SCAS. This may be useful, e.g. if you want ARGUS to perform global banning and SCAS to do the account mapping.
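Expressed with the variables from this section, the combined setup sketched above would be (the SCAS endpoint is illustrative):

```shell
GLEXEC_WN_ARGUS_ENABLED=yes   # ARGUS called first, e.g. for global banning
GLEXEC_WN_SCAS_ENABLED=yes    # SCAS called next, e.g. for account mapping
SCAS_ENDPOINTS="https://scas1.example.org:8443"
```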

In the following examples we show entries for users.conf and groups.conf needed by the GLEXEC_wn configuration. We use PILOT_JOB_FLAG=pilot, but you can choose a different identifier. We've chosen the dteam VO, but you should change it to the VOs you support.

Example of users.conf file where user accounts for pilot jobs are defined:

Bear in mind that you need to contact your VO to know which FQAN is supported for pilot jobs. If you define role pilot in your configuration but this is not defined in the corresponding VO, it will be useless. This information should be part of the VO ID Card, otherwise please contact the VO.
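A hedged sketch of such entries (UIDs, GIDs and account names are illustrative; the users.conf field layout is UID:LOGIN:GID(s):GROUP(s):VO:FLAG:):

```shell
# users.conf: pool accounts flagged as pilot
61001:dteampilot001:61000:dteampilot:dteam:pilot:
61002:dteampilot002:61000:dteampilot:dteam:pilot:

# groups.conf: map the pilot role FQAN to the pilot flag
"/dteam/ROLE=pilot":::pilot:
"/dteam"::::
```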

Note: Use GLITE_LB_SUPER_USERS instead of GLITE_LB_WMS_DN in older versions.

lcg CE

Mandatory general variables

BATCH_SERVER

BDII_HOST

CE_BATCH_SYS

CE_CAPABILITY (mandatory for yaim-lcg-ce >= 4.0.5-4)

CE_OTHERDESCR (mandatory for yaim-lcg-ce >= 4.0.5-4)

SE_MOUNT_INFO_LIST (mandatory for yaim-lcg-ce >= 4.0.5-4)

GROUPS_CONF

<queue-name>_GROUP_ENABLE

JOB_MANAGER

MON_HOST

QUEUES

SE_LIST

USERS_CONF

VOS

VO_<vo-name>_VOMS_SERVERS

VO_<vo-name>_SW_DIR

VO_<vo-name>_VOMS_CA_DN

VO_<vo-name>_VOMSES

Also required for lcg-CE in non cluster mode (i.e. all lcg-CE <= 3.1.40)

CE_RUNTIMEENV

CE_SMPSIZE

CE_OS_ARCH

CE_SF00

CE_SI00

CE_MINPHYSMEM

CE_MINVIRTMEM

CE_INBOUNDIP

CE_OUTBOUNDIP

CE_OS

CE_OS_RELEASE

CE_OS_VERSION

CE_CPU_SPEED

CE_CPU_MODEL

CE_CPU_VENDOR

CE_PHYSCPU

CE_LOGCPU

cluster mode with lcg-CE>=3.1.46 is selected by defining

LCGCE_CLUSTER_MODE=yes

New mandatory variables exist for the lcg-CE >= 3.1.46 when in cluster mode, although many of the CE_ yaim variables above are no longer needed (they are instead set via new variables when configuring the glite-CLUSTER node). The variables required in cluster mode are described in the following paragraphs; lists of the available variables can also be found in the example /opt/glite/yaim/examples/siteinfo/services/lcg-ce

When the lcg-CE is configured in cluster mode it will stop publishing information about clusters and subclusters. That information should be published by the glite-CLUSTER node type instead. The glite-CLUSTER may be installed on the same machine as the lcg-CE or on a different host. A new set of yaim variables has been defined for configuring the information which is still required by the lcg-CE in cluster mode. Follow the instructions below:

The new variable names follow this syntax:

In general, in variables based on hostnames, queues or VOViews, the characters '.' and '_' should be transformed into '-'

Prefix of the experiment software directory in a site. This variable has been renamed in the new infosys configuration. The old variable name was: VO_SW_DIR. This parameter can be defined per CE, queue, site or voview. See /opt/glite/yaim/examples/siteinfo/services/lcg-ce for examples.

string

glite-yaim-lcg-ce 4.0.5-1

CE_CAPABILITY

A space separated list; each item will be published as a GlueCECapability attribute. It must include a CPUScalingReferenceSI00 value and may also need to include Share values. It can be defined per CE, queue or site by adding the appropriate prefix to the variable name. See /opt/glite/yaim/examples/siteinfo/services/lcg-ce for an example of a queue specific setting. An example site wide value is also set in site-info.def; this should be edited, or commented out and alternate value(s) set in services/lcg-ce

string

glite-yaim-lcg-ce-5.0.3-1

The following variables will be distributed in the future in site-info.def since they affect other yaim modules. At this moment we are in a transition phase to migrate to the new variable names.

Variable Name

Description

Value type

Version

CE_HOST_<host-name>_CE_TYPE

CE type: 'jobmanager' for lcg CE and 'cream' for cream CE

string

glite-yaim-lcg-ce 4.0.5-1

CE_HOST_<host-name>_QUEUES

Space separated list of the queue names configured in the CE. This variable has been renamed in the new infosys configuration. The old variable name was: QUEUES

string

glite-yaim-lcg-ce 4.0.5-1

CE_HOST_<host-name>_QUEUE_<queue-name>_CE_AccessControlBaseRule

Space separated list of FQANS and/or VO names which are allowed to access the queues configured in the CE. This variable has been renamed in the new infosys configuration. The old variable name was: _GROUP_ENABLE

string

glite-yaim-lcg-ce 4.0.5-1

CE_HOST_<host-name>_CE_InfoJobManager

The name of the job manager used by the gatekeeper. This variable has been renamed in the new infosys configuration. The old variable name was: JOB_MANAGER. Please, define: lcgpbs, lcglsf, lcgsge or lcgcondor

string

glite-yaim-lcg-ce 4.0.5-1

JOB_MANAGER

The old variable is still needed since config_jobmanager in yaim core hasn't been modified to use the new variable. To be done.

string

OLD variable

When using yaim-core >= 4.0.13 the OLD variables JOB_MANAGER, _GROUP_ENABLE and QUEUES will be set (or reset) to the values of the new replacement variables listed above. With prior versions both the new and the old style variables need to be set consistently.

Number of records that APEL will select in one go. The value should be adjusted according to the memory assigned to the Java VM. In general, for 512Mb the number of records should be 150000 and for 1024Mb around 300000. The default value included in the APEL code is 300000, as the default memory is 1024Mb.

number

300000

4.0.2-7

GIN_BDII

If this is set to yes it will configure GIN to use the site BDII to populate the Glue tables in R-GMA. If set to no it will use the fmon to populate the tables.

GRID_AUTHORIZED_KEY_RETRIEVERS

Space separated list of the DNs of the host certificates which are authorised key retrievers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1

GRID_AUTHORIZED_RENEWERS

Space separated list of the DNs of the host certificates which are authorised renewers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1

GRID_AUTHORIZED_RETRIEVERS

Space separated list of the DNs of the host certificates which are authorised retrievers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1

GRID_DEFAULT_RENEWERS

Space separated list of the DNs of the host certificates which are default renewers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1

GRID_DEFAULT_RETRIEVERS

Space separated list of the DNs of the host certificates which are default retrievers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1

GRID_DEFAULT_KEY_RETRIEVERS

Space separated list of the DNs of the host certificates which are default key retrievers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1

GRID_DEFAULT_TRUSTED_RETRIEVERS

Space separated list of the DNs of the host certificates which are default trusted retrievers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1

GRID_TRUSTED_BROKERS

Space separated list of the DNs of the host certificates which are trusted by the Proxy node: Resource Brokers, WMS and FTS servers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

deprecated > 4.0.3-1

GRID_TRUSTED_RETRIEVERS

Space separated list of the DNs of the host certificates which are trusted retrievers (ex: '/O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch'). Note that each DN should be enclosed in single quotes (').

Hostname DN list

4.0.3-1
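The quoting convention that the entries above keep repeating looks like this in practice: each DN is wrapped in single quotes inside the double-quoted, space-separated list (DNs illustrative):

```shell
GRID_AUTHORIZED_RENEWERS="'/O=Grid/O=Acme/OU=acme.org/CN=host/wms1.acme.org' '/O=Grid/O=Acme/OU=acme.org/CN=host/wms2.acme.org'"
```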

RB

Mandatory general variables

BATCH_LOG_DIR

BDII_HOST

GRIDICE_SERVER_HOST

GROUPS_CONF

MYSQL_PASSWORD

RB_HOST

SITE_NAME

SITE_EMAIL

USERS_CONF

VOS

VO_<vo-name>_VOMSES

SCAS

Mandatory general variables

GROUPS_CONF

USERS_CONF

VO_<vo-name>_VOMSES

VO_<vo-name>_VOMS_CA_DN (Mandatory for glite-yaim-core > 4.0.5-7)

VO_<vo-name>_VOMS_SERVERS

VOS

Mandatory service specific variables: None.

Default general variables

X509_HOST_CERT

X509_HOST_KEY

X509_CERT_DIR

Default service specific variables: they can be found in /opt/glite/yaim/defaults/glite-scas.pre or post:

TAR UI

TAR WN

TORQUE

A note on the use of munge

Torque for EPEL/Fedora is built to use munge as of version 2.5.7. See the release notes. This means that in order to use these versions of torque, munged must be started on the server and submit hosts (i.e., CEs) with a shared secret key in /etc/munge/munge.key. It is up to the administrator to take care of the distribution of this key, but the YAIM variable MUNGE_KEY_FILE can be used to install the key from a location that can be read by YAIM at configuration time. Leaving this variable empty means that the administrator is responsible for the installation of this key before YAIM is run, or the system will be left in a non-working state. Munge is required on all node types: CEs (submit hosts), the torque head node and the worker nodes.
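In site-info.def terms, this means either leaving the variable empty and pre-installing /etc/munge/munge.key yourself, or pointing YAIM at a staged copy of the key (the path below is illustrative):

```shell
# Copied to /etc/munge/munge.key by YAIM at configuration time
MUNGE_KEY_FILE=/root/secure/munge.key
```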

TORQUE server

Mandatory general variables

BATCH_SERVER

CE_HOST

CE_SMPSIZE

USERS_CONF

QUEUES

VOS

WN_LIST

Default service specific variables: can be found in /opt/glite/yaim/defaults/glite-torque-server.pre:

APEL_MYSQL_HOST

Hostname of the server where the MySQL DB for APEL is installed. Bear in mind that if you use the default value, MON_HOST, but MON_HOST is not defined in site-info.def, YAIM will complain that APEL_MYSQL_HOST is not defined.

hostname

MON_HOST

glite-yaim-torque-utils-4.0.4-1

CONFIG_MAUI

Set it to 'no' if you want to disable the maui configuration in YAIM

yes or no

yes

glite-yaim-torque-utils-4.0.4-1

MUNGE_KEY_FILE

Path of a file containing the munge key of the Torque server. Munge is required since Torque version 2.5.7. This file will be copied to /etc/munge/munge.key.

path

(empty)

glite-torque-utils-4.1.0-1

TORQUE_VAR_DIR

Path to relocated Torque var hierarchy

path

/var/torque

UI

Mandatory general variables

BDII_HOST

LB_HOST (mandatory for glite-yaim-clients < 4.0.4-4)

MON_HOST (not needed anymore in gLite 3.2 UI)

PX_HOST

WMS_HOST

VOS

VO_<vo-name>_VOMSES

VO_<vo-name>_VOMS_CA_DN (Mandatory for glite-yaim-core > 4.0.5-7)

Default general variables

OUTPUT_STORAGE

Default service specific variables: they can be found in /opt/glite/yaim/defaults/glite-ui.pre and post:

GLITE_SD_PLUGIN

Service discovery settings to determine the FTS endpoint. Possible values are: 1) file: look for the FTS endpoint in a static file specified in GLITE_SD_SERVICES_XML. 2) bdii: look for the FTS endpoint dynamically in the BDII. Both options can be specified; the first one is tried first.

string

file,bdii

4.0.8-1

GLITE_SD_SERVICES_XML

Location of the FTS services.xml cache file. This has to be used in combination with GLITE_SD_PLUGIN="file,bdii"

path

"${INSTALL_ROOT}/glite/etc/services.xml"

4.0.8-1
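For instance, to skip the static cache file entirely and always query the BDII, a UI admin could override the default (this override is illustrative):

```shell
# Query the BDII directly, ignoring the services.xml cache file
GLITE_SD_PLUGIN="bdii"
```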

VOBOX

Mandatory general variables

BDII_HOST

GROUPS_CONF

LB_HOST (mandatory for glite-yaim-clients < 4.0.4-4)

MON_HOST (not needed anymore in gLite 3.2 VOBOX)

PX_HOST

RB_HOST (only for releases < 3.2.9)

SITE_NAME

SE_LIST

USERS_CONF

VOS

VO_<vo-name>_VOMSES

VO_<vo-name>_VOMS_CA_DN (Mandatory for glite-yaim-core > 4.0.5-7)

VO_<vo-name>_SW_DIR

WMS_HOST

Mandatory service specific variables: they can be found in /opt/glite/yaim/examples/siteinfo/services/glite-vobox:

Database-backend independent YAIM variables

VOMS_DB_HOST

Hostname of the database server. Put 'localhost' if you run the database on the same machine. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_DB_HOST

hostname

1.0.0-3

VO_<vo-name>_VOMS_PORT

The port on the VOMS server listening for request for each VO. This is used in the vomses configuration file. By convention, port numbers are allocated starting with 15000

port number

1.0.0-3

VOMS_ADMIN_SMTP_HOST

Host to which voms-admin-service-generated emails should be submitted. Use 'localhost' if you have a fully configured SMTP server running on this host. Otherwise specify the hostname of a working SMTP submission service. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_SMTP_HOST

hostname

1.0.0-3

VOMS_ADMIN_MAIL

E-mail address that is used to send notification mails from the VOMS-admin. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_MAIL

mail

1.0.0-3
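The per-VO override mechanism mentioned in these entries works by prefixing the variable with VO_<vo-name>_, e.g. (VO name and values illustrative):

```shell
# Site-wide defaults
VOMS_ADMIN_SMTP_HOST=localhost
VOMS_ADMIN_MAIL=grid-admin@example.org

# dteam-specific overrides
VO_dteam_VOMS_PORT=15004
VO_dteam_VOMS_ADMIN_MAIL=dteam-admin@example.org
```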

The following variables are optional. Uncomment them if you want to define them; otherwise voms will apply a default value internally:

VOMS_ADMIN_CERT

The path of the certificate file (in pem format) of an initial VO administrator. The VO will be set up so that this user has full VO administration privileges. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_CERT

path

1.0.0-3

VOMS_ADMIN_TOMCAT_GROUP

The UNIX group that Tomcat is run under

group name

1.0.0-3

VOMS_ADMIN_VOMS_GROUP

The UNIX group that the VOMS core service is run under

group name

1.0.0-3

Default service specific variables: they can be found in /opt/glite/yaim/defaults/glite-voms.[pre|post]:

If set to 'true' it will attempt the creation and deployment of the database schema and initial contents (unless an existing database is found).

true/false

true

1.0.0-3

VOMS_ADMIN_INSTALL

Set this variable to 'false' if you don't want to configure voms-admin.

true/false

true

1.0.0-3

VOMS_ADMIN_VERBOSE

VOMSAdmin verbosity

true/false

true

1.0.0-3

VOMS_ADMIN_WEB_REGISTRATION_DISABLE

Set this variable to true if you want to disable the user registration via the voms-admin web interface. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_ADMIN_WEB_REGISTRATION_DISABLE

true/false

false

1.0.0-3

VOMS_CORE_LOGROTATE_LOGNUMBER

This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_CORE_LOGROTATE_LOGNUMBER

number of rotated log files

90

1.0.0-3

VOMS_CORE_LOGROTATE_PERIOD

This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_CORE_LOGROTATE_PERIOD

daily, weekly, monthly

daily

1.0.0-3

VOMS_CORE_TIMEOUT

The maximum length of validity of the ACs that VOMS will grant (in seconds). The default value is 24 hours. This parameter can be specified per VO in the following way: VO_<vo-name>_VOMS_CORE_TIMEOUT

seconds

86400

1.0.0-3

VOMS_SHORT_FQANS

The FQAN syntax that will appear in the VO extension information of a voms proxy.

true/false

false

1.0.0-4

VOMS_PYTHONPATH

ZSI module path

path

/opt/ZSI/lib/python2.3/site-packages (not used in EMI deployments)

1.0.0-4

CATALINA_HOME

Tomcat Catalina home directory

path

/var/lib/tomcat5

1.0.0-3

TOMCAT_USER

Tomcat user name

user name

tomcat

1.0.0-3

GLITE_LOCATION_VAR

-

path

/var/glite

1.0.0-4

GLITE_LOCATION_LOG

-

path

/var/log/glite

1.0.0-4

GLITE_LOCATION_TMP

-

path

/tmp/glite

1.0.0-4

VOMS mysql specific variables

Mandatory service specific variables: can be found in /opt/glite/yaim/examples/services/glite-voms_mysql: