I’m currently using System Agent Scheduling to generate automated reports on a daily/weekly/monthly basis, and it’s working exactly as I hoped.
I’m now looking to use it to schedule a function, which again is working just fine. It’s currently set to run every 15 minutes whilst I’m testing, but once deployed into Live, I might be looking at running it every 60 seconds.
We have a SaaS solution; is scheduling a function to run at this frequency likely to create performance issues?
Essentially it’s returning shipment data, so if it doesn’t find anything, it doesn’t do anything.
Just want to make sure I don’t inadvertently bring the systems to a grinding halt.
If the function is a simple query that stops when it returns null, I don’t see this being an issue. The task scheduler would handle any concurrency issues that might bog down the system - I also have some functions (albeit not report generation) running every 2 mins on an MT SaaS environment.
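As a rough illustration of that “query, then bail out if there’s nothing to do” shape, here’s a minimal Python sketch - the shipment source and function names are hypothetical stand-ins, not your actual integration:

```python
# Sketch of a scheduled "check and exit early" function.
# The in-memory list stands in for whatever the real integration queries;
# every name below is a hypothetical placeholder.

pending_shipments: list[dict] = []  # stand-in for the real data source


def fetch_unprocessed_shipments() -> list[dict]:
    """In the real function this would be a single cheap query."""
    return list(pending_shipments)


def process_shipment(shipment: dict) -> None:
    print(f"processing shipment {shipment.get('id')}")


def scheduled_shipment_check() -> None:
    shipments = fetch_unprocessed_shipments()
    if not shipments:        # nothing found -> the run ends here, almost zero cost
        return
    for shipment in shipments:
        process_shipment(shipment)


if __name__ == "__main__":
    scheduled_shipment_check()            # no data: returns immediately
    pending_shipments.append({"id": 1})
    scheduled_shipment_check()            # data present: does the work
```

The point is just that a run which finds no data should cost next to nothing, so the 60-second interval on its own shouldn’t hurt.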
I’m curious, is there a reason you’re not using a data directive to send data?
Sixty seconds is short, but it shouldn’t be a problem if it’s a simple function. I was using a function scheduled every 5 minutes to automate the Order Job Wizard and print travelers for orders created via API from our website. Sometimes as many as 10 orders in a batch, and it was causing no issues. The only problem I can see would be if the function ran longer than the scheduled interval.
So the questions you need to ask are: could the function you’re running take longer than a minute to execute? And if so, what could happen if a second instance started before the first was complete?
If the answers are acceptable, I think you’ll be ok.
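If the answer to that second question is “that would be a problem”, one common pattern is to have each run skip itself when the previous run hasn’t finished. Here’s a minimal Python sketch of that guard within a single process - the job body and names are hypothetical, and in a real multi-tenant environment you’d more likely use a shared flag or database record than an in-process lock:

```python
import threading
import time

# Guard so a run that overruns its interval can't stack up concurrent instances.
# long_running_job() is a hypothetical stand-in for the real function body.

_run_lock = threading.Lock()


def long_running_job() -> None:
    time.sleep(3)  # pretend the work sometimes takes longer than the interval


def scheduled_entry_point() -> None:
    # acquire(blocking=False) returns False if another run still holds the lock
    if not _run_lock.acquire(blocking=False):
        print("previous run still in progress; skipping this tick")
        return
    try:
        long_running_job()
    finally:
        _run_lock.release()


if __name__ == "__main__":
    # Fire two "ticks" one second apart; the second skips because the first
    # is still inside long_running_job().
    t1 = threading.Thread(target=scheduled_entry_point)
    t2 = threading.Thread(target=scheduled_entry_point)
    t1.start()
    time.sleep(1)
    t2.start()
    t1.join()
    t2.join()
```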
Pass - it’s a (pretty awesome) third-party integration that’s being developed for us, and I’m not technical enough to pretend to understand the ins and outs of it!
Good to know that it shouldn’t be a massive issue.