Describe a recommended way to execute models
We should evaluate the great variety of possibilities for executing a model and make one or two recommendations. There will be no one-size-fits-all solution, but there should be recommendations for:
- Executing in instant/deferred mode using a script. What about clashing on / sharing the same working directory during execution? Should we honor the "batch category" of some statements in these modes to create isolated environments for their execution? If yes -> create a feature-request issue.
- Executing in workflow mode:
- default synchronous executor vs. WFEngine (async mode)
- interactive session and Jupyter vs. non-interactive script
- file paths for I/O - relative vs. absolute
- Slurm availability: available (on HPC clusters, JupyterHub) vs. not available (local shell or local JupyterLab)
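The working-directory clash raised for instant/deferred script execution could be avoided by giving each run its own scratch directory. A minimal sketch, assuming models are plain scripts; `run_isolated` is a hypothetical helper, not an existing API:

```python
import subprocess
import sys
import tempfile
from pathlib import Path


def run_isolated(script: Path) -> subprocess.CompletedProcess:
    """Run a model script in a fresh temporary working directory so that
    concurrent executions cannot clash on shared output files.

    Note: run_isolated is a hypothetical helper for illustration only.
    """
    with tempfile.TemporaryDirectory(prefix="model-run-") as workdir:
        # Resolve the script path first, since cwd changes for the child.
        return subprocess.run(
            [sys.executable, str(script.resolve())],
            cwd=workdir,
            capture_output=True,
            text=True,
        )
```

Note that relative I/O paths inside the script would then resolve against the temporary directory, which also bears on the relative-vs-absolute path question above.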
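The Slurm-available vs. not-available branch could be decided at runtime by probing for the scheduler. A minimal sketch, assuming detection via the standard `sbatch` command on `PATH`; the `pick_executor` name and the returned labels are illustrative assumptions:

```python
import shutil


def pick_executor() -> str:
    """Choose an execution backend based on the environment.

    Returns "slurm" when the sbatch command is on PATH (e.g. HPC clusters
    or a JupyterHub with Slurm), otherwise "local" (local shell or local
    JupyterLab). pick_executor is a hypothetical helper for illustration.
    """
    if shutil.which("sbatch") is not None:
        return "slurm"
    return "local"
```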