Nope, server side isn’t an option.
DMT's desktop application adds enough latency with its procedural, row-by-row workload that you'd need to abuse multiple workstations to generate a request rate anyone should worry about. Parallel updates from a single DMT instance just slow each other down.
If it's a straightforward set to convert to REST API calls, running it as a stack of serial curls usually outperforms serial DMT by roughly 5-15x. That's just a theoretical baseline. Running HTTP requests async is relatively easy; just try not to cause a tragedy of the commons, I guess?
I found that 25 parallel requests to Part.UpdateExt put us at 95% CPU usage. I think @aosemwengie1 mentioned recently that Epicor came after them over a couple of long-running BAQs. Makes me wonder how far you can push the cloud before they notice or throttle you.
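As a rough sketch of what "running HTTP requests async" with a sane cap might look like, here's a bounded thread pool in Python. The `update_part` function is a hypothetical stand-in for one REST call (it is not Epicor's API); the point is capping concurrency well below the ~25 parallel requests that pegged the server:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one REST call (e.g. a PATCH to a part
# endpoint); a real version would use urllib.request or similar.
def update_part(part_num: str) -> str:
    return f"updated {part_num}"

def run_updates(part_nums, max_workers=8):
    # Cap concurrency so a batch job doesn't monopolize server CPU;
    # pool.map preserves the input order in its results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(update_part, part_nums))

results = run_updates([f"P-{i:03}" for i in range(20)])
```

Tuning `max_workers` down is the "don't cause a tragedy of the commons" knob: you get most of the async speedup without saturating the shared tenant.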
Everyone in the cloud is throttled.
Isn't the whole point of the cloud to allocate extra resources for a period of time, so you basically run extremely fast with effectively unlimited RAM during a spike, then scale back down?
Sounds like a legacy hardware dedicated server to me.
Even Hyper-V and VMware have settings to scale you during spikes.
That’s the marketing hype but not the reality.
It’s a nice option.
During a typically high-volume IO period (late afternoon of last December 31), I received two Azure Enterprise 429s (Too Many Requests) in curl responses.
If service scaling or rate limiting exist, they’re configured outside the scope of available resources.
Sounds like on-prem architecture used in the cloud to me. Honestly, even on-prem people can feel this pain when the corporate server is in Chicago and there are plants in Texas, Canada, Utah, and Florida. Distributed computing has a different set of problems whether one uses cloud resources or not.
DMT's model was to read a record and then make the service calls. That's not very efficient over the network; running it at the server was always so much faster. Cloud-native companies like HubSpot actually have batch versions of some calls to reduce the number of calls on the wire. On some Microsoft services, one makes a call but sends only the location of the file; the file is brought closer to the compute, and the user either polls the service waiting for completion or waits for a webhook callback. Other companies work from queues and do the work as time allows. If the queue length gets too high, then yes, resources can be added.
But adding resources to a process architected for a LAN isn’t very efficient.
But AI is much more important than reengineering the product to actually perform in the cloud they want to push everyone to.
Obviously.
You all had answers far more intelligent than the one I am about to say but…
I found with jobs that DMT was pulling the entire job into memory just to update one comment. Our jobs are massive, so that's about 30 seconds per update. Even with 10 updates to the same job, it starts over each time.
Whereas a well-configured EFx does it in milliseconds.
There was a time - and I feel it is still true - when the order of your spreadsheet rows mattered. If you can sort alphabetically by primary key(s), you may see better results for a lot of data. It was very true for BOM updates (engineering).
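If your rows are already in memory as dicts (say, read from the spreadsheet with `csv.DictReader`), the pre-sort is a one-liner on a key tuple. Column names here are illustrative, not DMT's actual template headers:

```python
# Sketch: sort import rows by primary key(s) before feeding DMT,
# so consecutive rows for the same parent land together.
rows = [
    {"PartNum": "B-200", "RevisionNum": "A"},
    {"PartNum": "A-100", "RevisionNum": "B"},
    {"PartNum": "A-100", "RevisionNum": "A"},
]
rows.sort(key=lambda r: (r["PartNum"], r["RevisionNum"]))
```

Grouping updates to the same parent record back-to-back is plausibly why this helps: the server can reuse whatever it just loaded instead of re-fetching it per row.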
Personally - and I think this has been brought up before somewhere - I believe that DMT should ingest the data, upload it as a single transaction, and then do all the processing directly on the app server, taking the local client right out of it instead of processing one row at a time. Whether on-prem or in the cloud. I feel this same pain when I have to run DMT from home: I remote to a machine at the office and run it there for anything more than a few hundred rows.

That bar goes a LOT lower! Not nearly long enough ago, I worked with a "SaaS" platform that was literally the desktop application installed on Windows Server. The "desktop client" was a reskinned, outdated copy of literal Windows RDP mapped to the multi-user desktop.
We use DMT for large multi-company JE imports. Saves lots of time - even though it IS slow - and allows the user to troubleshoot if there are problems.
We did find a bug with really large imports of multiple JEs in different companies (not inter-company JEs): the data would scramble into the wrong companies and journals. It was odd but reproducible, though the scrambled records weren't always the same. I suspect that DMT imports the data asynchronously and so "misses" when the company and/or journal changes. That said, we split our imports into smaller groups, in some cases made them single-company, and haven't had problems since. We use both manual imports and automated runs with command-line controls. A problem ticket was created in March 2025, but there has been no traction. Like many things, this would be a great tool if Epicor recognized how much it can help big data-movers and holding companies. (PRB0295631-DMT - GL Journal Combined- Scrambling data between companies)
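For anyone adopting the same workaround, splitting one mixed JE file into per-company batches is straightforward in Python with `itertools.groupby`. Column names below are illustrative stand-ins, not the actual DMT template headers:

```python
from itertools import groupby

# Workaround sketch for the scrambling bug: split one mixed JE
# import into per-company batches before feeding them to DMT.
rows = [
    {"Company": "US01", "JournalNum": 1},
    {"Company": "CA01", "JournalNum": 7},
    {"Company": "US01", "JournalNum": 2},
]
rows.sort(key=lambda r: r["Company"])  # groupby needs sorted input
batches = {co: list(grp)
           for co, grp in groupby(rows, key=lambda r: r["Company"])}
```

Each value in `batches` can then be written out as its own single-company CSV and run as a separate DMT pass.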
