  • Available GPUs
  • Specifying GPU memory (optional)
  • Requesting GPU resources in your SLURM script
  • Compiling
  • Submitting Jobs
  • Submitting Pre-emptable Jobs

    Available GPUs

    Swan has two types of GPUs available in the gpu partition. The type of GPU is configured as a SLURM feature, so you can specify a type of GPU in your job resource requirements if necessary.

    Specifying GPU memory (optional)

    You may optionally specify a GPU memory amount via an additional feature statement. The available memory specifications are:

    Requesting GPU resources in your SLURM script

    To run your job on the next available GPU regardless of type, add the following options to your srun or sbatch command:

    --partition=gpu --gres=gpu
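
    For example, to confirm that a GPU was allocated, you could run nvidia-smi through srun with these options (a quick sketch; the output depends on the card assigned):

    $ srun --partition=gpu --gres=gpu nvidia-smi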

    To run on a specific type of GPU, you can constrain your job to require a feature. To run on P100 GPUs for example:

    --partition=gpu --gres=gpu --constraint=gpu_p100

    You may request multiple GPUs by changing the --gres value to --gres=gpu:2. Note that this value is per node. For example, --nodes=2 --gres=gpu:2 will request 2 nodes with 2 GPUs each, for a total of 4 GPUs.
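
    As a sketch, the equivalent sbatch directives for that 4-GPU request would be:

    #SBATCH --partition=gpu
    #SBATCH --nodes=2
    #SBATCH --gres=gpu:2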

    The GPU memory feature may be used to specify a GPU RAM amount either independent of architecture, or in combination with it.

    For example, using

    --partition=gpu --gres=gpu --constraint=gpu_16gb

    will request a GPU with 16GB of RAM, independent of the type of card (P100, T4, etc.). You may also request both a GPU type and a memory amount by combining features with the & operator (single quotes are needed because & is a special shell character).

    For example,

    --partition=gpu --gres=gpu --constraint='gpu_32gb&gpu_v100'

    will request a V100 GPU with 32GB RAM.

    You must verify the GPU type and memory combination is valid based on the available GPU types. Requesting a nonexistent combination will cause your job to be rejected with a "Requested node configuration is not available" error.
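
    One way to check which features each GPU node advertises before submitting is to query SLURM with sinfo (a sketch; the exact output format depends on the site configuration):

    $ sinfo --partition=gpu -o "%N %f"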

    Compiling

    Compilation of CUDA or OpenACC jobs must be performed on the GPU nodes. Therefore, you must run an interactive job to compile. An example command to start an interactive shell in the gpu partition could be:

    $ srun --partition=gpu --gres=gpu --mem=4gb --ntasks-per-node=2 --nodes=1 --pty $SHELL

    This command starts a shell on a GPU node with 2 cores and 4GB of RAM, suitable for compiling a GPU job. The same command is also useful for running a short test GPU job interactively.
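
    Inside that interactive shell, a CUDA compile might look like the following (a sketch; cuda-app.cu is a placeholder source file):

    $ module load cuda
    $ nvcc -o cuda-app.exe cuda-app.cu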

    Submitting Jobs

    CUDA and OpenACC submissions require running on GPU nodes.

    cuda.submit
    #!/bin/bash
    #SBATCH --time=03:15:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=cuda
    #SBATCH --partition=gpu
    #SBATCH --gres=gpu
    #SBATCH --error=/work/[groupname]/[username]/job.%J.err
    #SBATCH --output=/work/[groupname]/[username]/job.%J.out
    module load cuda
    ./cuda-app.exe
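
    The script above can then be submitted with sbatch:

    $ sbatch cuda.submit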

    OpenACC submissions require loading the PGI compiler module (which is currently also required for compiling OpenACC code).

    openacc.submit
    #!/bin/bash
    #SBATCH --time=03:15:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=cuda-acc
    #SBATCH --partition=gpu
    #SBATCH --gres=gpu
    #SBATCH --error=/work/[groupname]/[username]/job.%J.err
    #SBATCH --output=/work/[groupname]/[username]/job.%J.out
    module load cuda/8.0 compiler/pgi/16
    ./acc-app.exe
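
    For reference, an OpenACC compile inside an interactive GPU session might look like this (a sketch; acc-app.c is a placeholder, and compiler flags can vary between PGI versions):

    $ module load cuda/8.0 compiler/pgi/16
    $ pgcc -acc -o acc-app.exe acc-app.c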

    Submitting Pre-emptable Jobs

    Some GPU hardware is reserved by various groups for priority access. While the group that purchased the priority access will always have immediate access, HCC makes these nodes available opportunistically: when they are not otherwise utilized, jobs can run on them with the limitation that they may be pre-empted (i.e., killed) at any time.

    To submit jobs to these resources, add the following to your srun or sbatch command:

    --partition=guest_gpu --gres=gpu

    In order to properly utilize pre-emptable resources, your job must be able to support some type of checkpoint/resume functionality.
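
    What checkpoint/resume looks like depends on the application, but a common pattern is to ask SLURM for a warning signal before pre-emption and use it to write a final checkpoint. The following is a minimal sketch only; checkpoint-app.exe and its --resume flag are hypothetical, not HCC-provided tools:

    preempt.submit
    #!/bin/bash
    #SBATCH --time=03:15:00
    #SBATCH --job-name=preemptable
    #SBATCH --partition=guest_gpu
    #SBATCH --gres=gpu
    #SBATCH --signal=B:USR1@60
    module load cuda
    # Hypothetical application that writes periodic checkpoints and restarts
    # from the most recent one when invoked with --resume.
    ./checkpoint-app.exe --resume &
    APP_PID=$!
    # --signal=B:USR1@60 asks SLURM to signal this batch shell 60 seconds before
    # the job is killed; forward it so the app can write a final checkpoint.
    trap 'kill -USR1 "$APP_PID"; wait "$APP_PID"' USR1
    wait "$APP_PID"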
