
Monarch (Monash) Useful links, commands, and workflows

Alberto F. Martin edited this page Jun 14, 2022 · 11 revisions

Useful links

Know-how acquired so far

Project's budget

To check the project's credit quota and usage limits, follow the commands in https://docs.monarch.erc.monash.edu.au/MonARCH/slurm/project-credit-management.html.

Disk quota

To check whether the number of files and the total disk space used by the project members are close to the quota, use the commands below.

The command to check the quota on the /scratch filesystem is:

lfs quota -h -g "project_id" /mnt/lustre/scratch

For the /home filesystem, the following command should work:

quota -f /home
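Both checks can be wrapped in a small helper. This is a sketch only: the function prints the command to run rather than running it, and p0001 is a hypothetical project ID used for illustration.

```shell
#!/bin/bash
# quota_cmd FILESYSTEM PROJECT_ID
# Print the appropriate quota command for the given filesystem.
# Sketch only: "p0001" below is a placeholder project ID; the mount
# point follows the /scratch path used above.

quota_cmd() {
  case "$1" in
    scratch) echo "lfs quota -h -g $2 /mnt/lustre/scratch" ;;
    home)    echo "quota -f /home" ;;
    *)       echo "unknown filesystem: $1" >&2; return 1 ;;
  esac
}

# Print the commands (pipe to sh to actually run them on the cluster):
quota_cmd scratch p0001
quota_cmd home p0001
```
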

Using Julia installation on the cluster

Simply run the following commands:

module unload intel-mkl
module unload julia
module load julia/1.6.1

After executing these commands, you should be able to run, e.g., the julia --version command successfully:

-bash-4.2$ julia --version
julia version 1.6.1

Downloading Julia on the cluster

To install Julia in your directory on the cluster, first login to MonARCH. Open the terminal and run:

ssh "username"@monarch.erc.monash.edu (Replace "username" with your usename).

Change to the directory where you would like to install Julia (installing it under your /home directory is recommended) and execute the command:

wget https://julialang-s3.julialang.org/bin/linux/x64/1.6/julia-1.6.2-linux-x86_64.tar.gz

This link is obtained from https://julialang.org/downloads/ (look there if a different Julia version is required). Finally, untar the archive:

tar zxvf julia-1.6.2-linux-x86_64.tar.gz

Then, in the ~/.profile file, add Julia to the PATH by adding the line:

export PATH="$PATH:$HOME/julia-1.6.2/bin"

(Note that ~ is not expanded inside double quotes, hence $HOME. We also recommend adding the line module load git to the same file.)

After executing these steps, you should be able to run, e.g., the julia --version command successfully:

-bash-4.2$ julia --version
julia version 1.6.2
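The download URL above follows a predictable layout, so for a different version it can be constructed from the version string. This is a sketch under the assumption that the julialang-s3 layout stays stable; verify against https://julialang.org/downloads/ for new versions.

```shell
#!/bin/bash
# julia_url VERSION
# Build the Linux x86_64 tarball URL for a given Julia version,
# following the julialang-s3 layout used above. Sketch only: check
# https://julialang.org/downloads/ if the layout changes.

julia_url() {
  local version="$1"            # e.g. 1.6.2
  local minor="${version%.*}"   # strip the patch number, e.g. 1.6
  echo "https://julialang-s3.julialang.org/bin/linux/x64/${minor}/julia-${version}-linux-x86_64.tar.gz"
}

# Then, on the cluster:
#   wget "$(julia_url 1.6.2)" && tar zxvf julia-1.6.2-linux-x86_64.tar.gz
julia_url 1.6.2
```
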

Workflow for Gridap.jl (serial computations)

In this section, the steps for running a serial Julia process non-interactively on a single MonARCH node are described.

Job script

To submit any job to the cluster, whether serial or parallel, we must write a shell script (which we will refer to as the job script) detailing the important specifications of our job. This is mainly so that the cluster's management system can appropriately allocate the required resources. A template for this job_script.sh is shown below:

#!/bin/bash
#SBATCH --job-name=MyJob
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --output=MyJob.out
#SBATCH --error=MyJob.err
dir=/scratch/p0001/username/<PATH_WHERE_YOU_WANT_TO_KEEP_DATA>
cd $dir
<PATH_TO_YOUR_INSTALLATION_OF_JULIA> <PATH_TO_YOUR_JULIA_SCRIPT>

The script is divided into two parts: the header and the body. The header lines start with #SBATCH and contain the specifications for configuring the job request:

  • #SBATCH --job-name=MyJob With -J or --job-name we specify the name of the job, "MyJob" in this example.
  • #SBATCH --time=00:05:00 With this option we specify the maximum wall time for the job. If the job exceeds this limit, it will be terminated.
  • #SBATCH --ntasks=1 This specifies the number of tasks that should be created for this job.
  • #SBATCH --cpus-per-task=1 With this option we specify the number of CPUs required per task. For a serial job, we only require 1 CPU.
  • #SBATCH --output=MyJob.out, #SBATCH --error=MyJob.err The results of the program (output messages to screen) are written to files. With --output (-o) and --error (-e) we set the locations for the output of our code and for any error messages.

On the other hand, the body of the job script is a regular Unix shell script. In this particular example:
  • dir=/scratch/p0001/username/<PATH_WHERE_YOU_WANT_TO_KEEP_DATA> cd $dir Here we use standard Unix shell commands to change the working directory if desired. username and p0001 should be replaced with your user ID and project ID, respectively.
  • <PATH_TO_YOUR_INSTALLATION_OF_JULIA> <PATH_TO_YOUR_JULIA_SCRIPT> Finally, we execute the Julia script. This line should look something like: /home/username/julia-1.6.2/bin/julia /scratch/p0001/username/myjob.jl

Job script submission

After writing the job script, we submit it on MonARCH. To log in, open the terminal and run ssh "username"@monarch.erc.monash.edu. Once logged into the cluster, submit the freshly written job script using the command:

sbatch job_script.sh

To check on the progress of the job, use the command:

show_job "JOB_ID"

Replace "JOB_ID" with your job ID.
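The job-script template can also be generated from a few parameters, which avoids copy-paste errors across jobs. This is a minimal sketch: the function name make_job_script and the Julia/script paths in the example call are hypothetical placeholders, not MonARCH conventions.

```shell
#!/bin/bash
# make_job_script NAME TIME JULIA_BIN JULIA_SCRIPT
# Emit a serial job script matching the template above.
# Sketch only: adjust the paths in the example call to your own
# installation and project directory.

make_job_script() {
  local name="$1" time="$2" julia_bin="$3" julia_script="$4"
  cat <<EOF
#!/bin/bash
#SBATCH --job-name=${name}
#SBATCH --time=${time}
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --output=${name}.out
#SBATCH --error=${name}.err

${julia_bin} ${julia_script}
EOF
}

# Write the script, then submit it with: sbatch job_script.sh
make_job_script MyJob 00:05:00 \
  /home/username/julia-1.6.2/bin/julia /scratch/p0001/username/myjob.jl > job_script.sh
```
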

Workflow for GridapEmbedded.jl

Follow the same steps as for Gridap.jl. To use the package MiniQhull, we need to update the C compiler to a more recent version. We run the following command:

module load gcc/10.2.0

To check that the command has been executed successfully, we can run:

gcc --version

which should yield gcc (GCC) 10.2.0. To make this the default compiler, add the line module load gcc/10.2.0 to ~/.bashrc.
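The version check can be scripted, e.g. to fail fast in a job script before a long run. This is a sketch only: the helper names gcc_major and check_gcc are hypothetical, and the >= 10 threshold simply reflects the gcc/10.2.0 module mentioned above.

```shell
#!/bin/bash
# Verify that the active gcc is new enough (>= 10, matching the
# gcc/10.2.0 module above). Sketch only: helper names are made up
# for illustration; run after `module load gcc/10.2.0`.

gcc_major() {
  # Extract the major version from a "gcc --version" first line,
  # e.g. "gcc (GCC) 10.2.0" -> 10
  echo "$1" | sed -E 's/.* ([0-9]+)\.[0-9]+\.[0-9]+.*/\1/'
}

check_gcc() {
  local major
  major="$(gcc_major "$(gcc --version | head -n1)")"
  if [ "$major" -ge 10 ]; then
    echo "gcc $major is new enough"
  else
    echo "gcc $major is too old; run: module load gcc/10.2.0" >&2
    return 1
  fi
}
```
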

Workflow for GridapDistributed.jl (parallel computations)

​ Work in progress ...