10 changes: 5 additions & 5 deletions README.md
@@ -21,7 +21,7 @@ Implemented in this package (the `ClusterManagers.jl` package):
| PBS (Portable Batch System) | `addprocs_pbs(np::Integer; qsub_flags=``)` or `addprocs(PBSManager(np, qsub_flags))` |
| Scyld | `addprocs_scyld(np::Integer)` or `addprocs(ScyldManager(np))` |
| HTCondor[^1] | `addprocs_htc(np::Integer)` or `addprocs(HTCManager(np))` |
- | Slurm | `addprocs_slurm(np::Integer; kwargs...)` or `addprocs(SlurmManager(np); kwargs...)` |
+ | Slurm (deprecated - consider using [SlurmClusterManager.jl](https://github.com/kleinhenz/SlurmClusterManager.jl) instead) | `addprocs_slurm(np::Integer; kwargs...)` or `addprocs(SlurmManager(np); kwargs...)` |
| Local manager with CPU affinity setting | `addprocs(LocalAffinityManager(;np=CPU_CORES, mode::AffinityMode=BALANCED, affinities=[]); kwargs...)` |

[^1]: HTCondor was previously named Condor.
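Of the managers in the table above, only the local affinity manager runs without a batch system. A minimal sketch (assuming a machine with at least two cores; `np`, `mode`, and `affinities` follow the constructor signature shown in the table):

```julia
using Distributed, ClusterManagers

# Spawn two local workers with balanced CPU-affinity pinning.
addprocs(LocalAffinityManager(; np = 2, mode = BALANCED, affinities = Int[]))
```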
@@ -93,7 +93,7 @@ julia> From worker 2: compute-6
From worker 3: compute-6
```

-Some clusters require the user to specify a list of required resources. 
+Some clusters require the user to specify a list of required resources.
For example, it may be necessary to specify how much memory will be needed by the job - see this [issue](https://github.com/JuliaLang/julia/issues/10390).
The keyword `qsub_flags` can be used to specify these and other options.
Additionally, the keyword `wd` can be used to specify the working directory (which defaults to `ENV["HOME"]`).
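For instance, a PBS launch that requests memory and a queue through `qsub_flags` and overrides the working directory might look like this (the flag values and paths are illustrative, not prescriptive; adjust them to your site's configuration):

```julia
using Distributed, ClusterManagers

# `-l mem=4gb` and `-q workq` are hypothetical site-specific qsub flags;
# `wd` overrides the default working directory of ENV["HOME"].
addprocs_pbs(8; qsub_flags = `-l mem=4gb -q workq`, wd = "/scratch/myjob")
```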
@@ -176,10 +176,10 @@ ElasticManager:
Active workers : []
Number of workers to be added : 0
Terminated workers : []
-  Worker connect command : 
+  Worker connect command :
/home/user/bin/julia --project=/home/user/myproject/Project.toml -e 'using ClusterManagers; ClusterManagers.elastic_worker("4cOSyaYpgSl6BC0C","127.0.1.1",36275)'
```

-By default, the printed command uses the absolute path to the current Julia executable and activates the same project as the current session. You can change either of these defaults by passing `printing_kwargs=(absolute_exename=false, same_project=false))` to the first form of the `ElasticManager` constructor. 
+By default, the printed command uses the absolute path to the current Julia executable and activates the same project as the current session. You can change either of these defaults by passing `printing_kwargs=(absolute_exename=false, same_project=false)` to the first form of the `ElasticManager` constructor.
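A sketch of constructing the manager with both printing defaults disabled (assuming the keyword form of the `ElasticManager` constructor; the address and port are placeholders):

```julia
using ClusterManagers

# With these settings, the printed connect command uses a bare `julia`
# (no absolute path) and omits the --project flag.
em = ElasticManager(; addr = :auto, port = 9009,
                    printing_kwargs = (absolute_exename = false, same_project = false))
```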

-Once workers are connected, you can print the `em` object again to see them added to the list of active workers. 
+Once workers are connected, you can print the `em` object again to see them added to the list of active workers.
12 changes: 12 additions & 0 deletions src/slurm.jl
@@ -15,6 +15,18 @@ end

function launch(manager::SlurmManager, params::Dict, instances_arr::Array,
c::Condition)
+    let
+        msg = "`ClusterManagers.addprocs_slurm` and `ClusterManagers.SlurmManager` " *
+              "(from the `ClusterManagers.jl` package) are deprecated. " *
+              "Consider using the `SlurmClusterManager.jl` package instead."
+        funcsym = :addprocs_slurm
+        @static if Base.VERSION >= v"1.3"
+            Base.depwarn(msg, funcsym; force = true)
+        else
+            # Julia 1.2 and earlier do not support the `force` kwarg.
+            Base.depwarn(msg, funcsym)
+        end
+    end
try
exehome = params[:dir]
exename = params[:exename]