Job Script Templates
Example: Submitting a CPU Job
Note
If a job requests 1 TB or more of memory via the PBS directive (e.g., #PBS -l ...:mem=1tb), it will be automatically routed to the large-memory nodes.
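For example, a select line like the following would trigger that routing (the core count here is purely illustrative):

```bash
# Any resource request with mem=1tb or more is routed to the large-memory nodes
#PBS -l select=1:ncpus=64:mem=2tb
```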
single-node-multi-core-job.pbs

```bash
#!/bin/bash
#PBS -q normal
#PBS -N parallel_job
#PBS -l select=1:ncpus=4:mpiprocs=4:ompthreads=1:mem=16gb
#PBS -l walltime=00:10:00
#PBS -j oe
#PBS -P <project-id>

cd $PBS_O_WORKDIR
cc mpihello.c -o mpihello
mpirun -np 4 ./mpihello > mpihello.txt
```
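To run this template, save it under the file name shown above and submit it with qsub. A minimal sketch of the submit-and-check workflow (the printed job ID format varies by site):

```bash
qsub single-node-multi-core-job.pbs   # prints a job ID such as 12345.<server>
qstat -u $USER                        # Q = queued, R = running, E = exiting
# With -j oe, stdout and stderr are merged into parallel_job.o<job-id>
cat mpihello.txt                      # program output redirected by the script
```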
Example: Submitting a GPU Job
single-node-multi-gpu-job.pbs

```bash
#!/bin/bash
#PBS -q normal
#PBS -N gpu_job
#PBS -l select=1:ngpus=4
#PBS -l walltime=06:00:00
#PBS -j oe
#PBS -P <project-id>

cd $PBS_O_WORKDIR
module load miniforge3/23.10
conda activate myenv
python run_script.py
```
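The script assumes a pre-built conda environment named myenv (a placeholder), so it can help to add a couple of sanity checks after the activate line. A hedged sketch, assuming NVIDIA GPUs and that PyTorch is installed in the environment:

```bash
nvidia-smi    # confirm the 4 allocated GPUs are visible to the job
python -c "import torch; print(torch.cuda.device_count())"   # should print 4 if PyTorch sees them
```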
Example: Submitting an AI GPU Job
Note
Specifying ai as the queue name in the PBS directives routes the job to the AI GPU nodes. The request will only be fulfilled if the project <project-id> is approved to use these nodes.
single-node-multi-gpu-job.pbs

```bash
#!/bin/bash
#PBS -q ai
#PBS -N gpu_job
#PBS -l select=1:ngpus=4
#PBS -l walltime=06:00:00
#PBS -j oe
#PBS -P <project-id>

cd $PBS_O_WORKDIR
module load miniforge3/23.10
conda activate myenv
python run_script.py
```
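Before submitting, you can inspect the ai queue's configuration from a login node; a minimal sketch (the attribute names in the output vary by PBS site configuration):

```bash
qstat -Qf ai                          # full queue attributes, including any access controls
qsub single-node-multi-gpu-job.pbs    # submit once the project is approved for the queue
```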