configuration information, the nodes to be managed, information about
how those nodes are grouped into partitions, and various scheduling
parameters associated with those partitions. This file should be
consistent across all nodes in the cluster.
.LP
The file location can be modified at system build time using the
DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF
environment variable. The Slurm daemons also allow you to override
both the built\-in and environment\-provided location using the "\-f"
option on the command line.
.LP
The contents of the file are case insensitive except for the names of nodes
and partitions. Any text following a "#" in the configuration file is treated
as a comment through the end of that line.
Changes to the configuration file take effect upon restart of
Slurm daemons, daemon receipt of the SIGHUP signal, or execution
of the command "scontrol reconfigure" unless otherwise noted.
.LP
If a line begins with the word "Include" followed by whitespace
and then a file name, that file will be included inline with the current
configuration file. For large or complex systems, multiple configuration files
may prove easier to manage and enable reuse of some files (See INCLUDE
MODIFIERS for more details).
.LP
Note on file permissions:
.LP
The \fIslurm.conf\fR file must be readable by all users of Slurm, since it
is used by many of the Slurm commands.  Other files that are defined
in the \fIslurm.conf\fR file, such as log files and job accounting files,
may need to be created/owned by the user "SlurmUser" to be successfully
accessed.  Use the "chown" and "chmod" commands to set the ownership
and permissions appropriately.
See the section \fBFILE AND DIRECTORY PERMISSIONS\fR for information
about the various files and directories used by Slurm.

.SH "PARAMETERS"
.LP
The overall configuration parameters available include:

.TP
\fBAccountingStorageBackupHost\fR
The name of the backup machine hosting the accounting storage database.
If used with the accounting_storage/slurmdbd plugin, this is where the backup
slurmdbd would be running.
Only used with systems using SlurmDBD, ignored otherwise.

.TP
\fBAccountingStorageEnforce\fR
This controls what level of association\-based enforcement to impose
on job submissions.  Valid options are any combination of
\fIassociations\fR, \fIlimits\fR, \fInojobs\fR, \fInosteps\fR, \fIqos\fR,
\fIsafe\fR, and \fIwckeys\fR, or \fIall\fR for all things (except
\fInojobs\fR and \fInosteps\fR, which must be requested as well).
If \fIlimits\fR are enforced, users can be limited by association to whatever
job size or run time limits are defined.

If \fInojobs\fR is set, Slurm will not account for any jobs or steps on the
system. Likewise, if \fInosteps\fR is set, Slurm will not account for any
steps that have run.

If \fIsafe\fR is enforced, a job will only be launched against an association
or qos that has a \fBGrpTRESMins\fR limit set, if the job will be able to
run to completion. Without this option set, jobs will be launched as long as
their usage hasn't reached the cpu-minutes limit. This can lead to jobs being
launched but then killed when the limit is reached.

With \fIqos\fR and/or \fIwckeys\fR enforced jobs will not be scheduled unless
a valid qos and/or workload characterization key is specified.

When \fBAccountingStorageEnforce\fR is changed, a restart of the slurmctld
daemon is required (not just a "scontrol reconfig").
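For example, an illustrative setting such as
.br
AccountingStorageEnforce=associations,limits,qos
.br
would reject jobs without a valid association, enforce the limits defined for
that association, and require a valid QOS at submission time.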

.TP
\fBAccountingStorageExternalHost\fR
A comma separated list of external slurmdbds (<host/ip>[:port][,...]) to
register with. If no port is given, the \fBAccountingStoragePort\fR will be
used.

This allows clusters registered with the external slurmdbd to communicate with
each other using the \fI--cluster/-M\fR client command options.

The cluster will add itself to the external slurmdbd if it doesn't exist. If a
non-external cluster already exists on the external slurmdbd, the slurmctld
will ignore registering to the external slurmdbd.
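For example (host names hypothetical), a cluster could register with two
external slurmdbds using
.br
AccountingStorageExternalHost=extdbd1.example.com,extdbd2.example.com:7031
.br
where the first entry uses \fBAccountingStoragePort\fR and the second an
explicit port.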

.TP
\fBAccountingStorageHost\fR
The name of the machine hosting the accounting storage database.
Only used with systems using SlurmDBD, ignored otherwise.
Also see \fBDefaultStorageHost\fR.

.TP
\fBAccountingStorageParameters\fR
Comma separated list of key-value pair parameters. Currently
supported values include options to establish a secure connection to the
database:
.RS
.TP 2
\fBSSL_CERT\fR
The path name of the client public key certificate file.
.TP
\fBSSL_CA\fR
The path name of the Certificate Authority (CA) certificate file.
.TP
\fBSSL_CAPATH\fR
The path name of the directory that contains trusted SSL CA certificate files.
.RE

.TP
\fBAccountingStoragePass\fR
The password used to gain access to the database to store the accounting data.
Only used for database type storage plugins, ignored otherwise.
In the case of SlurmDBD (Database Daemon) with MUNGE
authentication this can be configured to use a MUNGE daemon
specifically configured to provide authentication between clusters
while the default MUNGE daemon provides authentication within a
cluster.  In that case, \fBAccountingStoragePass\fR should specify the
named port to be used for communications with the alternate MUNGE
daemon (e.g.  "/var/run/munge/global.socket.2"). The default value is
NULL.  Also see \fBDefaultStoragePass\fR.

.TP
\fBAccountingStoragePort\fR
The listening port of the accounting storage database server.
Only used for database type storage plugins, ignored otherwise.
The default value is SLURMDBD_PORT as established at system
build time. If no value is explicitly specified, it will be set to 6819.
This value must be equal to the \fBDbdPort\fR parameter in the
slurmdbd.conf file.
Also see \fBDefaultStoragePort\fR.
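For example, with the default build the following pair of settings (shown only
to illustrate that they must match) is consistent: AccountingStoragePort=6819
in slurm.conf and DbdPort=6819 in slurmdbd.conf.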

.TP
\fBAccountingStorageTRES\fR
Comma separated list of resources you wish to track on the cluster.
These are the resources requested by the sbatch/srun job when it
is submitted. Currently this consists of any GRES, BB (burst buffer) or
license along with CPU, Memory, Node, Energy, FS/[Disk|Lustre], IC/OFED, Pages,
and VMem. By default Billing, CPU, Energy, Memory, Node, FS/Disk, Pages and VMem
are tracked. These default TRES cannot be disabled, but only appended to.
AccountingStorageTRES=gres/craynetwork,license/iop1
will track billing, cpu, energy, memory, nodes, fs/disk, pages and vmem along
with a gres called craynetwork as well as a license called iop1. Whenever these
resources are used on the cluster they are recorded. The TRES are automatically
set up in the database on the start of the slurmctld.

If multiple GRES of different types are tracked (e.g. GPUs of different types),
then job requests with matching type specifications will be recorded.
Given a configuration of
"AccountingStorageTRES=gres/gpu,gres/gpu:tesla,gres/gpu:volta"
Then "gres/gpu:tesla" and "gres/gpu:volta" will track only jobs that explicitly
request those two GPU types, while "gres/gpu" will track allocated GPUs of any
type ("tesla", "volta" or any other GPU type).

Given a configuration of
"AccountingStorageTRES=gres/gpu:tesla,gres/gpu:volta"
Then "gres/gpu:tesla" and "gres/gpu:volta" will track jobs that explicitly
request those GPU types.
If a job requests GPUs, but does not explicitly specify the GPU type, then
its resource allocation will be accounted for as either "gres/gpu:tesla" or
"gres/gpu:volta", although the accounting may not match the actual GPU type
allocated to the job and the GPUs allocated to the job could be heterogeneous.
In an environment containing various GPU types, use of a job_submit plugin
may be desired in order to force jobs to explicitly specify some GPU type.

.TP
\fBAccountingStorageUser\fR
The user account for accessing the accounting storage database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageUser\fR.

.TP
\fBAccountingStoreJobComment\fR
If set to "YES" then include the job's comment field in the job
complete message sent to the Accounting Storage database.  The default
is "YES".
Note the AdminComment and SystemComment are always recorded in the database.

.TP
\fBAcctGatherNodeFreq\fR
The AcctGather plugins sampling interval for node accounting.
For AcctGather plugin values of none, this parameter is ignored.
For all other values this parameter is the number
of seconds between node accounting samples. For the
acct_gather_energy/rapl plugin, set a value less
than 300 because the counters may overflow beyond this rate.
The default value is zero. This value disables accounting sampling
for nodes. Note: The accounting sampling interval for jobs is
determined by the value of \fBJobAcctGatherFrequency\fR.
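For example, AcctGatherNodeFreq=30 samples node accounting data every
30 seconds, which also satisfies the limit suggested above for the
acct_gather_energy/rapl plugin.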

.TP
\fBAcctGatherEnergyType\fR
Identifies the plugin to be used for energy consumption accounting.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
energy consumption data for jobs and nodes. The collection of energy
consumption data takes place at the node level, hence the energy consumption
measurements will reflect the job's real consumption only in the case of an
exclusive job allocation. In the case of node sharing between jobs, the
reported consumed energy per job (through sstat or sacct) will not reflect
the real energy consumed by the jobs.

Configurable values at present are:
.RS
.TP 20
\fBacct_gather_energy/none\fR
No energy consumption data is collected.
.TP
\fBacct_gather_energy/ipmi\fR
Energy consumption data is collected from the Baseboard Management Controller
(BMC) using the Intelligent Platform Management Interface (IPMI).
.TP
\fBacct_gather_energy/pm_counters\fR
Energy consumption data is collected from the Baseboard Management
Controller (BMC) for HPE Cray systems.
.TP
\fBacct_gather_energy/rapl\fR
Energy consumption data is collected from hardware sensors using the Running
Average Power Limit (RAPL) mechanism. Note that enabling RAPL may require the
execution of the command "sudo modprobe msr".
.RE

.TP
\fBAcctGatherInterconnectType\fR
Identifies the plugin to be used for interconnect network traffic accounting.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
network traffic data for jobs and nodes.
The collection of network traffic data takes place at the node level,
hence the collected values will reflect the job's real traffic only in the
case of an exclusive job allocation. In the case of node sharing between jobs,
the reported network traffic per job (through sstat or sacct) will not reflect
the real network traffic by the jobs.

Configurable values at present are:
.RS
.TP 20
\fBacct_gather_interconnect/none\fR
No infiniband network data are collected.
.TP
\fBacct_gather_interconnect/ofed\fR
Infiniband network traffic data are collected from the hardware monitoring
counters of Infiniband devices through the OFED library.
In order to account for per job network traffic, add the "ic/ofed" TRES to
\fIAccountingStorageTRES\fR.
.RE
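For example (values illustrative), per\-job InfiniBand traffic accounting
could be enabled with
.br
AcctGatherInterconnectType=acct_gather_interconnect/ofed
.br
AccountingStorageTRES=ic/ofed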

.TP
\fBAcctGatherFilesystemType\fR
Identifies the plugin to be used for filesystem traffic accounting.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
filesystem traffic data for jobs and nodes.
The collection of filesystem traffic data takes place at the node level,
hence the collected values will reflect the job's real traffic only in the
case of an exclusive job allocation. In the case of node sharing between jobs,
the reported filesystem traffic per job (through sstat or sacct) will not
reflect the real filesystem traffic by the jobs.


Configurable values at present are:
.RS
.TP 20
\fBacct_gather_filesystem/none\fR
No filesystem data are collected.
.TP
\fBacct_gather_filesystem/lustre\fR
Lustre filesystem traffic data are collected from the counters found in
/proc/fs/lustre/.
In order to account for per job lustre traffic, add the "fs/lustre" TRES to
\fIAccountingStorageTRES\fR.
.RE

.TP
\fBAcctGatherProfileType\fR
Identifies the plugin to be used for detailed job profiling.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
detailed data such as I/O counts, memory usage, or energy consumption for jobs
and nodes. There are interfaces in this plugin to collect data at step start
and completion, task start and completion, and at the account gather
frequency. The data collected at the node level is related to jobs only in
case of exclusive job allocation.

Configurable values at present are:
.RS
.TP 20
\fBacct_gather_profile/none\fR
No profile data is collected.
.TP
\fBacct_gather_profile/hdf5\fR
This enables the HDF5 plugin. The directory where the profile files are stored
and which values are collected are configured in the acct_gather.conf file.
.TP
\fBacct_gather_profile/influxdb\fR
This enables the influxdb plugin. The influxdb instance host, port, database,
retention policy and which values are collected are configured in the
acct_gather.conf file.
.RE

.TP
\fBAllowSpecResourcesUsage\fR
If set to "YES", Slurm allows individual jobs to override node's configured
CoreSpecCount value. For a job to take advantage of this feature,
a command line option of \-\-core\-spec must be specified.  The default
value for this option is "YES" for Cray systems and "NO" for other system types.
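For example, with AllowSpecResourcesUsage=YES a user could submit
"sbatch \-\-core\-spec=2 job.sh" (script name illustrative) to reserve two
cores per node for system use instead of the node's configured
\fBCoreSpecCount\fR.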

.TP
\fBAuthAltTypes\fR
Comma separated list of alternative authentication plugins that the slurmctld
will permit for communication. Acceptable values at present include
\fIauth/jwt\fR.

\fBNOTE\fR: \fIauth/jwt\fR requires a jwt_hs256.key to be populated in the
\fBStateSaveLocation\fR directory for \fBslurmctld\fR only. The jwt_hs256.key
should only be visible to the SlurmUser and root. It is not suggested to place
the jwt_hs256.key on any nodes but the controller running \fBslurmctld\fR.
\fIauth/jwt\fR can be activated by the presence of the \fISLURM_JWT\fR
environment variable.  When activated, it will override the default
\fBAuthType\fR.

.TP
\fBAuthAltParameters\fR
Used to define alternative authentication plugins options. Multiple options may
be comma separated.
.RS
.TP 15
\fBdisable_token_creation\fR
Disable "scontrol token" use by non-SlurmUser accounts.
.TP
\fBjwt_key=\fR
Absolute path to JWT key file. Key must be HS256, and should only be accessible
by SlurmUser. If not set, the default key file is jwt_hs256.key in
\fIStateSaveLocation\fR.
.RE
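A minimal illustrative JWT configuration (key path shown is an example) is:
.br
AuthAltTypes=auth/jwt
.br
AuthAltParameters=jwt_key=/var/spool/slurmctld/jwt_hs256.key,disable_token_creation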

.TP
\fBAuthInfo\fR
Additional information to be used for authentication of communications
between the Slurm daemons (slurmctld and slurmd) and the Slurm
clients.  The interpretation of this option is specific to the
configured \fBAuthType\fR.
Multiple options may be specified in a comma delimited list.
If not specified, the default authentication information will be used.
.RS
.TP 14
\fBcred_expire\fR
Default job step credential lifetime, in seconds (e.g. "cred_expire=1200").
It must be long enough to load the user environment, run the prolog,
deal with the slurmd getting paged out of memory, etc.
This also controls how long a requeued job must wait before starting again.
.RE

.TP
\fBAuthType\fR
The authentication method for communications between Slurm
components.
Acceptable values at present include "auth/munge" and "auth/none".
The default value is "auth/munge".
"auth/none" includes the UID in each communication, but it is not verified.
This may be fine for testing purposes, but
\fBdo not use "auth/none" if you desire any security\fR.
"auth/munge" indicates that MUNGE is to be used.
(See "https://dun.github.io/munge/" for more information).
All Slurm daemons and commands must be terminated prior to changing
the value of \fBAuthType\fR and later restarted.

.TP
\fBBackupAddr\fR
Deprecated option, see \fBSlurmctldHost\fR.

.TP
\fBBackupController\fR
Deprecated option, see \fBSlurmctldHost\fR.

The backup controller recovers state information from the
\fBStateSaveLocation\fR directory, which must be readable and writable from both
the primary and backup controllers.
While not essential, it is recommended that you specify a backup controller.
See  the \fBRELOCATING CONTROLLERS\fR section if you change this.

.TP
\fBBatchStartTimeout\fR
The maximum time (in seconds) that a batch job is permitted for
launching before being considered missing and releasing the
allocation. The default value is 10 (seconds). Larger values may be
required if more time is required to execute the \fBProlog\fR, load
user environment variables, or if the slurmd daemon gets paged from memory.
.br
.br
\fBNote\fR: The test for a job being successfully launched is only performed when
the Slurm daemon on the compute node registers state with the slurmctld daemon
on the head node, which happens fairly rarely.
Therefore a job will not necessarily be terminated if its start time exceeds
\fBBatchStartTimeout\fR.
This configuration parameter is also applied to launch tasks and avoid aborting
\fBsrun\fR commands due to long running \fBProlog\fR scripts.
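For example, a site whose \fBProlog\fR stages data for several minutes might
set BatchStartTimeout=300 to avoid releasing allocations while the prolog is
still running.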

.TP
\fBBurstBufferType\fR
The plugin used to manage burst buffers. Acceptable values at present are:
.RS
.TP
\fBburst_buffer/datawarp\fR
Use Cray DataWarp API to provide burst buffer functionality.
.RE

.TP
\fBClusterName\fR
The name by which this Slurm managed cluster is known in the
accounting database.  This is needed to distinguish accounting records
when multiple clusters report to the same database. Because of limitations
in some databases, any upper case letters in the name will be silently mapped
to lower case. In order to avoid confusion, it is recommended that the name
be lower case.

.TP
\fBCommunicationParameters\fR
Comma separated options identifying communication options.
.RS
.TP 15
\fBCheckGhalQuiesce\fR
Used specifically on a Cray using an Aries Ghal interconnect.  This will check
to see if the system is quiescing when sending a message, and if so, we wait
until it is done before sending.
.TP
\fBDisableIPv4\fR
Disable IPv4 only operation for all slurm daemons (except slurmdbd). This
should also be set in your \fBslurmdbd.conf\fR file.
.TP
\fBEnableIPv6\fR
Enable using IPv6 addresses for all slurm daemons (except slurmdbd). When
using both IPv4 and IPv6, address family preferences will be based on your
/etc/gai.conf file. This should also be set in your \fBslurmdbd.conf\fR file.
.TP
\fBNoAddrCache\fR
By default, Slurm will cache a node's network address after successfully
establishing the node's network address. This option disables the cache and
Slurm will look up the node's network address each time a connection is made.
This is useful, for example, in a cloud environment where the node addresses
come and go out of DNS.
.TP
\fBNoCtldInAddrAny\fR
Used to directly bind to the address of what the node resolves to running
the slurmctld instead of binding messages to any address on the node,
which is the default.
.TP
\fBNoInAddrAny\fR
Used to directly bind to the address of what the node resolves to instead
of binding messages to any address on the node which is the default.
This option is for all daemons/clients except for the slurmctld.
.RE
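For example (options illustrative), a cloud\-oriented site might set
.br
CommunicationParameters=EnableIPv6,NoAddrCache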


.TP
\fBCompleteWait\fR
The time to wait, in seconds, when any job is in the COMPLETING state
before any additional jobs are scheduled. This is to attempt to keep jobs on
nodes that were recently in use, with the goal of preventing fragmentation.
If set to zero, pending jobs will be started as soon as possible.
Since a COMPLETING job's resources are released for use by other jobs as soon
as the \fBEpilog\fR completes on each individual node, this can result in very
fragmented resource allocations.

.TP
\fBControlAddr\fR
Deprecated option, see \fBSlurmctldHost\fR.

.TP
\fBControlMachine\fR
Deprecated option, see \fBSlurmctldHost\fR.

.TP
\fBCoreSpecPlugin\fR
Identifies the plugins to be used for enforcement of core specialization.
The slurmd daemon must be restarted for a change in CoreSpecPlugin
to take effect.
Acceptable values at present include:
.RS
.TP 20
\fBcore_spec/cray_aries\fR
used only for Cray systems
.TP
\fBcore_spec/none\fR
used for all other system types
.RE

.TP
\fBCpuFreqDef\fR
Default CPU frequency value or frequency governor to use when running a
job step if it has not been explicitly set with the \-\-cpu\-freq option.
Acceptable values at present include a numeric value (frequency in kilohertz)
or one of the following governors:
.RS
.TP 14
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.RE
There is no default value. If unset, no attempt to set the governor is
made if the \-\-cpu\-freq option has not been set.
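For example, CpuFreqDef=Performance requests the Performance governor for job
steps that do not specify \-\-cpu\-freq themselves.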

.TP
\fBCpuFreqGovernors\fR
List of CPU frequency governors allowed to be set with the salloc, sbatch, or
srun option  \-\-cpu\-freq.
Acceptable values at present include:
.RS
.TP 14
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor (a default value)
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor (a default value)
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.TP
\fBUserSpace\fR
attempts to use the UserSpace CPU governor (a default value)
.RE
The default is OnDemand, Performance and UserSpace.
.TP
\fBCredType\fR
The cryptographic signature tool to be used in the creation of
job step credentials.
The slurmctld daemon must be restarted for a change in \fBCredType\fR
to take effect.
Acceptable values at present include "cred/munge" and "cred/none".
The default value is "cred/munge" and is the recommended option.

.TP
\fBDebugFlags\fR
Defines specific subsystems which should provide more detailed event logging.
Multiple subsystems can be specified with comma separators.
Most DebugFlags will result in verbose-level logging for the identified
subsystems, and could impact performance.
Valid subsystems available include:
.RS
.TP 17
\fBAccrue\fR
Accrue counters accounting details
.TP
\fBAgent\fR
RPC agents (outgoing RPCs from Slurm daemons)
.TP
\fBBackfill\fR
Backfill scheduler details
.TP
\fBBackfillMap\fR
Backfill scheduler to log a very verbose map of reserved resources through
time. Combine with \fBBackfill\fR for a verbose and complete view of the
backfill scheduler's work.
.TP
\fBBurstBuffer\fR
Burst Buffer plugin
.TP
\fBCPU_Bind\fR
CPU binding details for jobs and steps
.TP
\fBCpuFrequency\fR
Cpu frequency details for jobs and steps using the \-\-cpu\-freq option.
.TP
\fBData\fR
Generic data structure details.
.TP
\fBDependency\fR
Job dependency debug info
.TP
\fBElasticsearch\fR
Elasticsearch debug info
.TP
\fBHetjob\fR
Heterogeneous job details
.TP
\fBGang\fR
Gang scheduling details
.TP
\fBJobContainer\fR
Job container plugin details
.TP
\fBLicense\fR
License management details
.TP
\fBNetwork\fR
Network details
.TP
\fBNetworkRaw\fR
Dump raw hex values of key Network communications. Warning: very verbose.
.TP
\fBNodeFeatures\fR
Node Features plugin debug info
.TP
\fBNO_CONF_HASH\fR
Do not log when the slurm.conf files differ between Slurm daemons
.TP
\fBPower\fR
Power management plugin
.TP
\fBPowerSave\fR
Power save (suspend/resume programs) details
.TP
\fBPriority\fR
Job prioritization
.TP
\fBProfile\fR
AcctGatherProfile plugins details
.TP
\fBProtocol\fR
Communication protocol details
.TP
\fBReservation\fR
Advanced reservations
.TP
\fBRoute\fR
Message forwarding debug info
.TP
\fBSelectType\fR
Resource selection plugin
.TP
\fBSteps\fR
Slurmctld resource allocation for job steps
.TP
\fBTriggers\fR
Slurmctld triggers
.TP
\fBWorkQueue\fR
Work Queue details
.RE
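For example, backfill scheduling can be examined with
.br
DebugFlags=Backfill,BackfillMap
.br
and the flags can also be changed at run time with
"scontrol setdebugflags +backfill".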

.TP
\fBDefCpuPerGPU\fR
Default count of CPUs allocated per allocated GPU.

.TP
\fBDefMemPerCPU\fR
Default real memory size available per allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_res\fR or
\fBSelectType=select/cons_tres\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerGPU\fR, \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBDefMemPerNode\fR are
mutually exclusive.
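For example, DefMemPerCPU=2048 gives jobs that do not request memory a default
of 2048 MB per allocated CPU.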

.TP
\fBDefMemPerGPU\fR
Default real memory size available per allocated GPU in megabytes.
The default value is 0 (unlimited).
Also see \fBDefMemPerCPU\fR and \fBDefMemPerNode\fR.
\fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBDefMemPerNode\fR are
mutually exclusive.

.TP
\fBDefMemPerNode\fR
Default real memory size available per allocated node in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBMaxMemPerCPU\fR.
\fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBDefMemPerNode\fR are
mutually exclusive.

.TP
\fBDefaultStorageHost\fR
The default name of the machine hosting the accounting storage and
job completion databases.
Only used for database type storage plugins and when the
\fBAccountingStorageHost\fR and \fBJobCompHost\fR have not been
defined.

.TP
\fBDefaultStoragePort\fR
The listening port of the accounting storage and/or job completion
database server.
Only used for database type storage plugins, ignored otherwise.
Also see \fBAccountingStoragePort\fR and \fBJobCompPort\fR.

.TP
\fBDefaultStorageType\fR
The accounting and job completion storage mechanism type.  Acceptable
values at present include "filetxt", "mysql" and "none".
The value "filetxt" indicates that records will be written to a file.
The value "mysql" indicates that accounting records will be written to a MySQL
or MariaDB database.
The default value is "none", which means that records are not maintained.
Also see \fBAccountingStorageType\fR and \fBJobCompType\fR.

.TP
\fBDefaultStorageUser\fR
The user account for accessing the accounting storage and/or job
completion database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBAccountingStorageUser\fR and \fBJobCompUser\fR.

.TP
\fBDependencyParameters\fR
Multiple options may be comma-separated.

.RS
.TP
\fBdisable_remote_singleton\fR
By default, when a federated job has a singleton dependency, each cluster in the
federation must clear the singleton dependency before the job's singleton
dependency is considered satisfied. Enabling this option means that only the
origin cluster must clear the singleton dependency. This option must be set
in every cluster in the federation.
.TP
\fBkill_invalid_depend\fR
If a job has an invalid dependency and it can never run, terminate it
and set its state to JOB_CANCELLED. By default the job stays pending
with reason DependencyNeverSatisfied.
.TP
\fBmax_depend_depth=#\fR
Maximum number of jobs to test for a circular job dependency. Stop testing
after this number of job dependencies have been tested. The default value is
10 jobs.
.RE
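For example,
.br
DependencyParameters=kill_invalid_depend,max_depend_depth=20
.br
cancels jobs whose dependencies can never be satisfied and limits circular
dependency testing to 20 jobs.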

.TP
\fBDisableRootJobs\fR
If set to "YES" then user root will be prevented from running any jobs.
The default value is "NO", meaning user root will be able to execute jobs.
\fBDisableRootJobs\fR may also be set by partition.

.TP
\fBEnforcePartLimits\fR
If set to "ALL" then jobs which exceed a partition's size and/or time limits
will be rejected at submission time. If the job is submitted to multiple
partitions, the job must satisfy the limits on all partitions. If set to "NO"
then the job will be accepted and will remain queued until the partition
limits are altered. If set to "ANY" a job must satisfy any of the requested
partitions to be submitted. The default value is "NO".
NOTE: If set, then a job's QOS can not be used to exceed partition limits.
NOTE: The partition limits being considered are its configured MaxMemPerCPU,
MaxMemPerNode, MinNodes, MaxNodes, MaxTime, AllocNodes, AllowAccounts,
AllowGroups, AllowQOS, and QOS usage threshold.

.TP
\fBEpilog\fR
Fully qualified pathname of a script to execute as user root on every
node when a user's job completes (e.g. "/usr/local/slurm/epilog"). A
glob pattern (See \fBglob\fR (7)) may also be used to run more than
one epilog script (e.g. "/etc/slurm/epilog.d/*"). The Epilog script
or scripts may be used to purge files, disable user login, etc.
By default there is no epilog.
See \fBProlog and Epilog Scripts\fR for more information.

.TP
\fBEpilogMsgTime\fR
The number of microseconds that the slurmctld daemon requires to process
an epilog completion message from the slurmd daemons. This parameter can
be used to prevent a burst of epilog completion messages from being sent
at the same time which should help prevent lost messages and improve
throughput for large jobs.
The default value is 2000 microseconds.
For a 1000 node job, this spreads the epilog completion messages out over
two seconds.

.TP
\fBEpilogSlurmctld\fR
Fully qualified pathname of a program for the slurmctld to execute
upon termination of a job allocation (e.g.
"/usr/local/slurm/epilog_controller").
The program executes as SlurmUser, which gives it permission to drain
nodes and requeue the job if a failure occurs (See scontrol(1)).
Exactly what the program does and how it accomplishes this is completely at
the discretion of the system administrator.
Information about the job being initiated, its allocated nodes, etc. are
passed to the program using environment variables.
See \fBProlog and Epilog Scripts\fR for more information.

.TP
\fBExtSensorsFreq\fR
The external sensors plugin sampling interval.
If \fBExtSensorsType=ext_sensors/none\fR, this parameter is ignored.
For all other values of \fBExtSensorsType\fR, this parameter is the number
of seconds between external sensors samples for hardware components (nodes,
switches, etc.) The default value is zero. This value disables external
sensors sampling. Note: This parameter does not affect external sensors
data collection for jobs/steps.

.TP
\fBExtSensorsType\fR
Identifies the plugin to be used for external sensors data collection.
Slurmctld calls this plugin to collect external sensors data for jobs, steps,
and hardware components.

Configurable values at present are:
.RS
.TP 20
\fBext_sensors/none\fR
No external sensors data is collected.
.TP
\fBext_sensors/rrd\fR
External sensors data is collected from the RRD database.
.RE

.TP
\fBFairShareDampeningFactor\fR
Dampen the effect of exceeding a user or group's fair share of allocated
resources. Higher values will provide greater ability to differentiate
between exceeding the fair share at high levels (e.g. a value of 1 results
in almost no difference between overconsumption by a factor of 10 and 100,
while a value of 5 will result in a significant difference in priority).
The default value is 1.

.TP
\fBFederationParameters\fR
Used to define federation options. Multiple options may be comma separated.

.RS
.TP
\fBfed_display\fR
If set, then the client status commands (e.g. squeue, sinfo, sprio, etc.) will
display information in a federated view by default. This option is functionally
equivalent to using the \-\-federation options on each command. Use the client's
\-\-local option to override the federated view and get a local view of the
given cluster.
.RE

.TP
\fBFirstJobId\fR
The job id to be used for the first job submitted to Slurm without a
specific requested value. Job id values generated will be incremented by 1
for each subsequent job. This may be used to provide a meta\-scheduler
with a job id space which is disjoint from the interactive jobs.
The default value is 1.
Also see \fBMaxJobId\fR.

.TP
\fBGetEnvTimeout\fR
Controls how long the job should wait (in seconds) to load the user's
environment before attempting to load it from a cache file.
Applies when the salloc or sbatch \fI\-\-get\-user\-env\fR option is used.
If set to 0 then always load the user's environment from the cache file.
The default value is 2 seconds.

.TP
\fBGresTypes\fR
A comma delimited list of generic resources to be managed (e.g.
\fIGresTypes=gpu,mps\fR).
These resources may have an associated GRES plugin of the same name providing
additional functionality.
No generic resources are managed by default.
Ensure this parameter is consistent across all nodes in the cluster for
proper operation.
The slurmctld daemon must be restarted for changes to this parameter to become
effective.

.TP
\fBGroupUpdateTime\fR
Controls how frequently information about which users are members of
groups allowed to use a partition will be updated, and how long user
group membership lists will be cached.
The time interval is given in seconds with a default value of 600 seconds.
A value of zero will prevent periodic updating of group membership information.
Also see the \fBGroupUpdateForce\fR parameter.

.TP
\fBGpuFreqDef\fR=[<\fItype\fR]=\fIvalue\fR>[,<\fItype\fR=\fIvalue\fR>]
Default GPU frequency to use when running a job step if it
has not been explicitly set using the \-\-gpu\-freq option.
This option can be used to independently configure the GPU and its memory
frequencies. Defaults to "high,memory=high".
After the job is completed, the frequencies of all affected GPUs will be reset
to the highest possible values.
In some cases, system power caps may override the requested values.
The field \fItype\fR can be "memory".
If \fItype\fR is not specified, the GPU frequency is implied.
The \fIvalue\fR field can either be "low", "medium", "high", "highm1" or
a numeric value in megahertz (MHz).
If the specified numeric value is not possible, a value as close as
possible will be used.
See below for definition of the values.
Examples of use include "GpuFreqDef=medium,memory=high" and "GpuFreqDef=450".

Supported \fIvalue\fR definitions:
.RS
.TP 10
\fBlow\fR
the lowest available frequency.
.TP
\fBmedium\fR
attempts to set a frequency in the middle of the available range.
.TP
\fBhigh\fR
the highest available frequency.
.TP
\fBhighm1\fR
(high minus one) will select the next highest available frequency.
.RE

.TP
\fBHealthCheckInterval\fR
The interval in seconds between executions of \fBHealthCheckProgram\fR.
The default value is zero, which disables execution.

.TP
\fBHealthCheckNodeState\fR
Identify what node states should execute the \fBHealthCheckProgram\fR.
Multiple state values may be specified with a comma separator.
The default value is ANY to execute on nodes in any state.
.RS
.TP 12
\fBIDLE\fR
Run on nodes in the IDLE state.
.TP
\fBMIXED\fR
Run on nodes in the MIXED state (some CPUs idle and other CPUs allocated).
.RE

.TP
\fBHealthCheckProgram\fR
Fully qualified pathname of a script to execute as user root periodically
on all compute nodes that are \fBnot\fR in the NOT_RESPONDING state. This
program may be used to verify the node is fully operational and DRAIN the node
or send email if a problem is detected.
Any action to be taken must be explicitly performed by the program
(e.g. execute
"scontrol update NodeName=foo State=drain Reason=tmp_file_system_full"
to drain a node).
The execution interval is controlled using the \fBHealthCheckInterval\fR
parameter.
Note that the \fBHealthCheckProgram\fR will be executed at the same time
on all nodes to minimize its impact upon parallel programs.
This program will be killed if it does not terminate normally within
60 seconds.
This program will also be executed when the slurmd daemon is first started and
before it registers with the slurmctld daemon.
By default, no program will be executed.
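An illustrative combination (script path hypothetical) is:
.br
HealthCheckProgram=/usr/local/sbin/node_health.sh
.br
HealthCheckInterval=300
.br
HealthCheckNodeState=IDLE,MIXED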

.TP
\fBInactiveLimit\fR
The interval, in seconds, after which a non\-responsive job allocation
command (e.g. \fBsrun\fR or \fBsalloc\fR) will result in the job being
terminated. If the node on which the command is executed fails or the
command abnormally terminates, this will terminate its job allocation.
This option has no effect upon batch jobs.
When setting a value, take into consideration that a debugger using \fBsrun\fR
to launch an application may leave the \fBsrun\fR command in a stopped state
for extended periods of time.
This limit is ignored for jobs running in partitions with the
\fBRootOnly\fR flag set (the scheduler running as root will be
responsible for the job).
The default value is unlimited (zero) and may not exceed 65533 seconds.

.TP
\fBInteractiveStepOptions\fR
When LaunchParameters=use_interactive_step is enabled, launching salloc will
automatically start an srun process with InteractiveStepOptions to launch
a terminal on a node in the job allocation.
The default value is "--interactive --preserve-env --pty $SHELL".

.TP
\fBJobAcctGatherType\fR
The job accounting mechanism type.
Acceptable values at present include "jobacct_gather/linux" (for Linux
systems, and is the recommended one), "jobacct_gather/cgroup" and
"jobacct_gather/none" (no accounting data collected).
The default value is "jobacct_gather/none".
Each job step is managed by a slurmstepd daemon that will persist through the
lifetime of that job step and not change its communication protocol. Only
change this configuration parameter when there are no running job steps.

.TP
\fBJobAcctGatherFrequency\fR
The job accounting and profiling sampling intervals.
The supported format is as follows:
.RS
.TP 12
\fBJobAcctGatherFrequency=\fR\fI<datatype>\fR\fB=\fR\fI<interval>\fR
where \fI<datatype>\fR=\fI<interval>\fR specifies the task sampling
interval for the jobacct_gather plugin or a
sampling interval for a profiling type by the
acct_gather_profile plugin. Multiple,
comma-separated \fI<datatype>\fR=\fI<interval>\fR intervals
may be specified. Supported datatypes are as follows:
.RS
.TP
\fBtask=\fI<interval>\fR
where \fI<interval>\fR is the task sampling interval in seconds
for the jobacct_gather plugins and for task
profiling by the acct_gather_profile plugin.
.TP
\fBenergy=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for energy profiling using the acct_gather_energy plugin
.TP
\fBnetwork=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for infiniband profiling using the acct_gather_interconnect
plugin.
.TP
\fBfilesystem=\fI<interval>\fR
where \fI<interval>\fR is the sampling interval in seconds
for filesystem profiling using the acct_gather_filesystem
plugin.
.RE
.RE
The default value for task sampling interval
is 30 seconds. The default value for all other intervals is 0.
An interval of 0 disables sampling of the specified type.
If the task sampling interval is 0, accounting
information is collected only at job termination (reducing Slurm
interference with the job).
.br
.br
Smaller (non\-zero) values have a greater impact upon job performance,
but a value of 30 seconds is not likely to be noticeable for
applications having less than 10,000 tasks.
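.br
For example (intervals illustrative),
.br
JobAcctGatherFrequency=task=30,energy=60,network=120,filesystem=120
.br
samples task data every 30 seconds and the profiling data types at the
indicated intervals.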

.TP
\fBJobAcctGatherParams\fR
Arbitrary parameters for the job account gather plugin. Multiple
comma\-separated options may be specified. Acceptable values at present
include:
.RS
.TP 20
\fBUsePss\fR
Use PSS value instead of RSS to calculate real usage of memory.
The PSS value will be saved as RSS.
.TP
\fBOverMemoryKill\fR
Kill processes that are detected to be using more memory than requested by the
step, each time accounting information is gathered by the JobAcctGather plugin.
This parameter should be used with caution because a job exceeding its memory
allocation may affect other processes and/or machine health.

\fBNOTE\fR: If available, it is recommended to limit memory by enabling
task/cgroup as a TaskPlugin and making use of ConstrainRAMSpace=yes in the
cgroup.conf instead of using this JobAcctGather mechanism for memory
enforcement. With OverMemoryKill, memory limit is applied against each process
individually and is not applied to the step as a whole as it is with
ConstrainRAMSpace=yes. Using JobAcctGather is polling based and there is a
delay before a job is killed, which could lead to system Out of Memory events.
.RE

.TP
\fBJobCompHost\fR
The name of the machine hosting the job completion database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageHost\fR.

.TP
\fBJobCompLoc\fR
The fully qualified file name where job completion records are written
when the \fBJobCompType\fR is "jobcomp/filetxt" or the database where
job completion records are stored when the \fBJobCompType\fR is a
database, or a complete URL endpoint with format <host>:<port>/<target>/_doc
when \fBJobCompType\fR is "jobcomp/elasticsearch", e.g.
"localhost:9200/slurm/_doc".
NOTE: More information is available at the Slurm web site.

.TP
\fBPowerParameters\fR
System power management parameters.
The supported parameters are specific to the \fBPowerPlugin\fR.
Changes to this value take effect when the Slurm daemons are reconfigured.
More information about system power management is available here