From `_episodes/13-cluster.md`:
```
cluster: "sbatch --time={resources.time_min} --mem={resources.mem_mb}
          -e slurm/logs/{rule}_{wildcards}"
jobs: 25
default-resources: [cpus=1, mem_mb=1000, time_min=5]
resources: [cpus=100, mem_mb=100000]
```
{: .source}

This file has several components. The `cluster` entry defines the command used to submit jobs to
the cluster.
Here we've used SLURM's `sbatch` command and arguments for setting time limits
and resources, with snakemake wildcards supplying the requested values.
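
To make the substitution concrete: for a hypothetical rule named `heavy_step`
with a wildcard `sample=A` that requests `time_min=30` and `mem_mb=8000`, the
template above would expand to something like:

```
sbatch --time=30 --mem=8000 -e slurm/logs/heavy_step_sample=A
```
{: .source}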

We've also specified where to save SLURM logs and what to call them. **Note** that
this folder must already exist; if it doesn't, snakemake will hang.
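
Since `mkdir -p` creates nested directories and does nothing if they already
exist, a safe way to set this up before the first run is:

```
mkdir -p slurm/logs
```
{: .source}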

Values for any command-line argument to snakemake can be defined in our
profile, although a value is always required, even for flags that take none on
the command line (e.g. the `--use-conda` argument could be included in our
profile as `use-conda: true`).
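
As a sketch, such flag-style options would simply be added as extra keys in the
profile shown above (both entries below are hypothetical additions, not part of
the lesson's profile):

```
use-conda: true   # equivalent to passing --use-conda
keep-going: true  # equivalent to passing --keep-going
```
{: .source}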

`jobs` specifies the maximum number of jobs that will be submitted at one time.
We also specified the `default-resources` that will be requested for each job,
while `resources` defines the limits on total resource usage across all running jobs.
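
Individual rules can override these defaults by declaring their own
`resources`; anything a rule does not set falls back to `default-resources`.
A minimal sketch, with a hypothetical rule name, files, and command:

```
rule heavy_step:
    input:
        "data/{sample}.txt"
    output:
        "results/{sample}.out"
    resources:
        cpus=4,        # overrides the default of 1
        mem_mb=8000,   # overrides the default of 1000
        time_min=30    # overrides the default of 5
    shell:
        "process_data {input} > {output}"   # process_data is a placeholder
```
{: .source}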

With these parameters, snakemake will use no more than 100 CPUs and 100000 MB
(100 GB) of memory at a time across all currently submitted jobs.
While it does not come into play here, a generally sensible default is slightly