Why is the DMT so slow?

So bear with me here: I never use the DMT personally, but I recently had to load around 40K rows into a UD table in a cloud environment. Do people actually use this thing productively? Why is it so slow?! I split my CSV into 4 files and started all 4 tasks, and it’s running at a whopping 240 rows per minute total!!

This should take seconds! What gives?
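For scale, the arithmetic on the numbers above (40K rows at 240 rows/minute across all four tasks) works out like this:

```python
# Back-of-the-envelope timing for the load described above.
rows = 40_000
rows_per_minute = 240          # observed total across all 4 parallel DMT tasks

minutes = rows / rows_per_minute
print(f"{minutes:.0f} minutes (~{minutes / 60:.1f} hours)")  # → 167 minutes (~2.8 hours)
```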

1 Like

DMT works by taking one row at a time and pushing it to Epicor through the business objects (possibly REST now), just like the Epicor client does. It does this even with UD tables. There are many methods it touches and a lot of overhead involved; it’s not a direct SQL import. Not arguing whether it’s fast or slow, just explaining why it’s not seconds.

2 Likes

Sadly DMT just sucks. It’s always been super slow. It runs each record through the business objects. If you split the file at least you can run multiple at a time. It still is awful though.

Epicor could do much better. Salesforce has its Data Loader, which can process 40K records in a few minutes, and its bulk API does it in batches of 250K records, all while hitting all of the built-in SF logic plus any custom logic customers add to SF.

2 Likes

It’s dumb is what it is… BOs have the UpdateExt methods to load up entire datasets…

It would probably take me less time to build an updatable dashboard and load it all at once through copy/paste by the time the DMT is halfway done…

Yeah, I’m not arguing with you there, though UpdateExt still has to do all those checks individually per row anyway, so I’m not sure how much we’d save. You could try the REST UpdateExt endpoint and see if it’s any better.
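For anyone curious what calling the REST UpdateExt endpoint looks like, here's a minimal sketch. The service name (`Ice.BO.UD01Svc`), base URL, auth header, and payload keys are assumptions based on the usual shape of Epicor's v1 REST endpoints; check your own server's Swagger page before relying on any of them:

```python
import json
import urllib.request

BASE = "https://your-server/api/v1"  # assumption: adjust host/instance for your environment
SVC = "Ice.BO.UD01Svc"               # assumption: the UD01 business object service

def build_updateext_payload(rows):
    """Wrap plain dict rows in the dataset shape UpdateExt expects.

    RowMod='A' marks each row as an add; the top-level keys mirror the
    parameters UpdateExt typically takes (names are an assumption here).
    """
    return {
        "ds": {"UD01": [dict(row, RowMod="A") for row in rows]},
        "continueProcessingOnError": True,
        "rollbackParentOnChildError": False,
    }

def post_updateext(rows, api_key):
    """POST one batch of rows to the (assumed) UpdateExt endpoint."""
    req = urllib.request.Request(
        f"{BASE}/{SVC}/UpdateExt",
        data=json.dumps(build_updateext_payload(rows)).encode(),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The point is that one call carries a whole batch of rows instead of one BO round trip per row, which is where the time goes.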

As far as I know, DMT was initially made by someone else and Epicor purchased it at some point. I’m not sure they’ve made many changes or improvements since.

2 Likes

Yes, DMT was made by a 3rd party that Epicor acquired around 10 years ago now, IIRC. If it’s available, ticking the “Server Processing” checkbox makes it load much faster.

1 Like

It’s grayed out for UD tables…

Looks like it kicked the server out of power saving mode, doing 800 rows per minute now…

1 Like

☹️ I wish they’d add it to more tables, as it really does make things faster. But what they really need to do is follow SF’s Data Loader example, as @chaddb mentioned.

1 Like

I see you just mentioned updatable dashboards. I believe those use UpdateExt, so you can probably check the speed there.

I do find that Epicor doesn’t deal with multiple lines very well. The Engineering Workbench, at least as of a few versions back when I was dealing with this problem, takes a long time to process a paste insert of something like 50 materials. The first materials go in quickly, and then it starts slowing down so each line takes longer and longer.

I’ve also experienced this during physical count, where I was using paste update to update values in a list copied from Excel: the same thing happened, where the first lines go fast and it slows to a crawl once you’re adding hundreds of lines.

These two experiences have nothing to do with DMT, just Epicor client.

As is tradition!

Where a REST equivalent is available, feeding a heap of curl through a terminal usually runs much, much faster. Hopefully our hosts will acquire rate limiting before a tragedy of the commons compels them to invent something proprietary. Until then, we get super-fast uploads.

Why even do this client side is my question… There should just be a server process for this: upload the file, and it validates and processes everything server side, instead of the server receiving one BO call per row…

3 Likes

You can do this by uploading a .csv and creating a function to load it into a dataset and feed the dataset to BO.UpdateExt; it’s quite performant 🙂 I may or may not have done this a few times.

1 Like

Doing that would consume a lot of memory on the app server for a really large file.

For me, when using larger batches of records, I found it started out OK but slowed over time, so now I use smaller batches and run several at the same time.

Yup. You can make your function split the CSV with a little added logic if you’re worried about le crashies 🙂 (this is what DMT does with the Server Processing checkbox, in default chunks of 200, so you can safely mirror DMT’s behavior). I don’t know why they don’t unlock the other BOs that have UpdateExt in DMT.
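The chunking itself is trivial. A sketch of the 200-row default mentioned above (the size and the idea of one UpdateExt call per chunk come from the thread; the function name is just illustrative):

```python
def chunked(rows, size=200):
    """Yield successive fixed-size chunks of a row list.

    200 mirrors DMT's default Server Processing chunk size; each chunk
    would then be fed to one UpdateExt call instead of one call per row.
    """
    for start in range(0, len(rows), size):
        yield rows[start:start + size]
```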

1 Like

My theory is they’re too busy chasing squirrels

1 Like

If you chunk your dataset into 50-100 rows at a time, and pass that to UpdateExt, you can get pretty fast speeds. I run about 25 parallel UpdateExt requests at a time and it takes a couple hours each week to synchronize our 100k part library from the parent company to 3 other companies.
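The fan-out described above (chunks of 50-100 rows, ~25 requests in flight) can be sketched with a thread pool. The `upload` callable stands in for whatever actually POSTs a chunk to UpdateExt; everything here besides the chunk size and worker count from the post is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_load(rows, upload, chunk_size=100, workers=25):
    """Split rows into chunks and run `upload` on each across a thread pool.

    `upload` is a placeholder for a function that sends one chunk to
    UpdateExt (one REST call per chunk). Results come back in chunk order.
    """
    chunks = [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload, chunks))
```

Threads are a reasonable fit here since each worker spends nearly all of its time waiting on the server, not on the CPU.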

Epicor is going to hate me if I do this in the cloud.

2 Likes