diff --git a/_episodes/13-cluster.md b/_episodes/13-cluster.md
index 59a95ac..965b21a 100644
--- a/_episodes/13-cluster.md
+++ b/_episodes/13-cluster.md
@@ -197,7 +197,7 @@
 cluster: "sbatch --time={resources.time_min} --mem={resources.mem_mb} -e slurm/logs/{rule}_{wildcards}"
 jobs: 25
 default-resources: [cpus=1, mem_mb=1000, time_min=5]
-resources: [cpus=100, mem_mb=1000000]
+resources: [cpus=100, mem_mb=100000]
 ```
 {: .source}
 
@@ -206,14 +206,18 @@ This file has several components.
 the cluster. Here we've used SLURM's `sbatch` command and arguments for
 setting time limits and resources with snakemake wildcards defining the
 requested values.
-We've also specified where to save SLURM logs and what to call them; note that
-this folder must already exist.
+
+We've also specified where to save SLURM logs and what to call them. **Note** that
+this folder must already exist. If the folders don't exist, snakemake will hang.
+
 Values for any command line argument to snakemake can be defined in our
 profile, although a value is required (e.g. the `--use-conda` argument could be
 included in our profile with `use-conda: true`).
+
 `jobs` specifies the maximum number of jobs that will be submitted at one time.
 We also specified the `default-resources` that will be requested for each job,
 while `resources` defines the resource limits.
+
 With these parameters, snakemake will use no more than 100 cpus and 100000 MB
 (100 GB) at a time between all currently submitted jobs. While it does not come
 into play here, a generally sensible default is slightly
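
For context, a minimal sketch of a rule whose `resources` keywords would feed the `{resources.time_min}` and `{resources.mem_mb}` wildcards in the profile's `cluster:` command. The rule name, file paths, and script are purely illustrative; any resource a rule does not set falls back to `default-resources`:

```
# Illustrative Snakefile rule (hypothetical names); the per-rule `resources`
# values are what the profile substitutes into the sbatch command as
# {resources.time_min} and {resources.mem_mb}.
rule count_words:
    input:
        "books/isles.txt"
    output:
        "results/isles.dat"
    resources:
        cpus=1,
        mem_mb=2000,
        time_min=10
    shell:
        "python wordcount.py {input} {output}"
```

Assuming the profile YAML shown above is saved as `config.yaml` inside a directory such as `cluster_profile/` (directory name assumed here), snakemake would pick it up with `snakemake --profile cluster_profile`.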