Selection of batch queue is hard-wired in resources.py module
The resources utility module (/src/virtmat/language/utilities/resources.py) is set up to use the dev_cpuonly partition (on HoreKa) as the default.
Even if a qadapter.yaml file is provided with the queue flag set (for example, queue: other_queue), the resources module overrides (or ignores?) the flag and submits the job to dev_cpuonly.
One workaround is to adjust the resources.py module manually. For example, to enable the use of a queue called cpuonly:
res_config = {
    # 'queue': 'dev_cpuonly',  # <<<--- COMMENTED OUT!
    'queue': 'cpuonly',
    'max_tasks_per_node': 76,
    'min_ncores': 1,
    'max_ncores': 912,
    'max_walltime': ureg.Quantity(4, 'hours'),
    'max_memory': ureg.Quantity(3200, 'MB'),
    'min_ntasks': 1,
    'min_nodes': 1,
    'default_nodes': 1,
    'default_ntasks': 1,
    'default_ncores': 1,
    'default_cpus_per_task': 1,
    'default_memory': None,
    'default_mem_per_cpu': ureg.Quantity(1600, 'MB'),
    'default_walltime': ureg.Quantity(10, 'minutes')
}
This is obviously not a long-term solution, and it uncovers some underlying issues. While some resource keywords, such as wall time, tasks, and memory, can be updated from input within a model (see resource annotations), others are hard-wired and require more involved manipulation of the program's modules. Ideally, the user should be able to configure resources for each individual session without much complication. If this is not possible, then at least there should be some flexible way to configure resources according to the user's installation (or HPC facility), avoiding unnecessary tinkering with modules.
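To illustrate the kind of flexibility meant here, the following is a minimal sketch of how the hard-wired dictionary could instead serve as a baseline that is selectively overridden per session, e.g. from an environment variable. This is not the actual implementation; all names introduced here (DEFAULT_RES_CONFIG, load_res_config, VM_BATCH_QUEUE) are hypothetical, and the pint quantities are omitted to keep the sketch self-contained.

```python
import os

# Hypothetical baseline mirroring the hard-wired defaults in resources.py
# (pint Quantity entries omitted for brevity).
DEFAULT_RES_CONFIG = {
    'queue': 'dev_cpuonly',
    'max_tasks_per_node': 76,
    'min_ncores': 1,
    'max_ncores': 912,
}

def load_res_config(defaults=DEFAULT_RES_CONFIG):
    """Return a resource config with per-session overrides applied.

    The environment variable VM_BATCH_QUEUE (a made-up name) wins over
    the hard-wired default, so no module editing is needed:
        export VM_BATCH_QUEUE=cpuonly
    """
    config = dict(defaults)  # copy, so the baseline stays untouched
    queue = os.environ.get('VM_BATCH_QUEUE')
    if queue:
        config['queue'] = queue
    return config
```

The same pattern could be extended to read a per-user YAML/JSON file shipped with the installation, which would cover the "configure per HPC facility" case without touching the package sources.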