# Running the pipeline on clusters and other systems

The pipeline can be run on a local machine, provided that all required software is available.
However, because the imputation process becomes very demanding as the dataset grows, it is recommended to use a high-performance computing (HPC) system.
It is entirely possible to run this pipeline on other clusters, though you will need to set up your own config file so that the pipeline knows how to work with your cluster.
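
As an orientation, the custom config is just a small text file of Nextflow settings that you pass to the pipeline at launch time. The sketch below is minimal and every value in it is a placeholder (the file name, the executor, and the `<pipeline>` argument); the individual settings are explained in the sections that follow.

```
// my_cluster.config -- hypothetical file name
// Supply it when launching: nextflow run <pipeline> -c my_cluster.config
process {
  executor = 'slurm'   // assumed scheduler; see "Cluster Environment" below
}
```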

## Cluster Environment

By default, the pipeline uses the local Nextflow executor; in other words, all jobs are run in the login session. If you're using a simple server, this may be fine. If you're using a compute cluster, this is bad, as all jobs will run on the head node.

To specify your cluster environment, add the following line to your config file:

```
process.executor = 'YOUR_SYSTEM_TYPE'
```

Many different cluster types are supported by Nextflow. For more information, please see the Nextflow documentation.
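
For example (a sketch only; `'sge'` here simply stands in for whatever scheduler your cluster runs):

```
// Replace YOUR_SYSTEM_TYPE with your scheduler's executor name, e.g.:
process.executor = 'sge'   // Sun/Univa Grid Engine; other values include 'slurm', 'lsf', 'pbspro'
```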

Note that you may need to specify cluster options, such as a project or queue. To do so, use the `clusterOptions` config option:

```
process {
  executor       = 'slurm'
  clusterOptions = '-A myproject'
}
```
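
The same `process` scope can also carry default resource requests, and options can be narrowed to specific processes using label selectors. The sketch below is illustrative only: the label name and all resource values are assumptions, not settings required by the pipeline.

```
process {
  executor       = 'slurm'
  clusterOptions = '-A myproject'

  // Assumed default resources, forwarded to the scheduler for every job:
  cpus   = 4
  memory = '16 GB'
  time   = '12h'

  // Options can also be scoped to processes tagged with a label
  // ('big_mem' is a hypothetical label, not one defined by this pipeline):
  withLabel: big_mem {
    memory = '64 GB'
  }
}
```

Label selectors like `withLabel` only take effect if the pipeline's processes actually declare those labels, so check the pipeline source before relying on them.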