- Improvement
- Resolution: Unresolved
- Minor
- None
- Future Dev
- None
If we want to run tasks on AWS spot instances, this is safe for all sorts of small tasks, such as sending a forum email, which might take a few seconds. But a large, lumpy task like a big course backup could take many minutes and would be unsafe.
In MDL-85173 we propose to collect more aggregate metadata on the average timings of tasks. Similar to MDL-85176, we could leverage this to add CLI filters to the various cron CLI scripts, so that some cron runners live on on-demand containers while others run on cheaper spot instances. The spot runners could be told to only run tasks which are known with a high probability to complete within 2 minutes, which is the spot termination warning period.
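As a rough illustration of how the timing metadata could drive the "high probability" decision, here is a minimal sketch. The function names, the conservative mean-plus-two-standard-deviations forecast, and the shape of the historical data are all assumptions for illustration, not the real Moodle task API or the metadata format proposed in MDL-85173.

```python
import statistics

# Spot instances give a 2 minute termination warning.
SPOT_WARNING_SECONDS = 120

def forecast_seconds(durations):
    """Conservative forecast from historical run times:
    mean plus two standard deviations."""
    if len(durations) < 2:
        # Not enough history to forecast: assume unsafe for spot.
        return float("inf")
    return statistics.mean(durations) + 2 * statistics.stdev(durations)

def safe_for_spot(durations, limit=SPOT_WARNING_SECONDS):
    """True if the task is forecast to finish within the warning period."""
    return forecast_seconds(durations) <= limit

print(safe_for_spot([3, 5, 4, 6]))     # a quick forum email task: True
print(safe_for_spot([600, 900, 750]))  # a big course backup: False
```

The exact forecast statistic (p95, mean + 2σ, max observed, etc.) would be a tuning decision; the key point is that a task with no timing history should default to the on-demand runners.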
So proposing something like:
# Run tasks which are forecast to be faster than 2 minutes:
php admin/cli/cron.php --prefer-faster-than=120

# Run tasks which are forecast to be slower than 2 minutes:
php admin/cli/cron.php --prefer-slower-than=120
In the second case there would probably be some probabilistic element to this: if the queue only has fast tasks in it, then the runner may as well run one quickly and then check again for slower tasks, rather than doing nothing.
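The probabilistic fallback described above could look something like the following sketch. The queue representation, the `forecast_seconds` field, and the fallback probability are all hypothetical placeholders, not the actual cron runner internals.

```python
import random

THRESHOLD = 120      # --prefer-slower-than=120
FALLBACK_PROB = 0.5  # chance of running a fast task when no slow ones are queued

def pick_next_task(queue):
    """Return the next task to run, preferring forecast-slow tasks.

    If only fast tasks are queued, sometimes run one anyway rather
    than idling, then re-check the queue for slower arrivals."""
    slow = [t for t in queue if t["forecast_seconds"] >= THRESHOLD]
    if slow:
        return slow[0]
    fast = [t for t in queue if t["forecast_seconds"] < THRESHOLD]
    if fast and random.random() < FALLBACK_PROB:
        return fast[0]
    return None  # idle this cycle and poll again

queue = [
    {"name": "send_forum_email", "forecast_seconds": 5},
    {"name": "course_backup", "forecast_seconds": 900},
]
print(pick_next_task(queue)["name"])  # the slow course_backup is preferred
```

The fallback probability keeps a "slow" runner from starving when the queue happens to contain only fast work, while still leaving it mostly free for the long tasks it is meant to absorb.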