Recalculate Customer Credit taking days to run

What is a reasonable speed for Recalculate Customer Credit? The log was growing by 800 rows per minute yesterday morning, and has slowed to 380 rpm, 39 hours later.

It would bomb out on E10 when we ran it wide open for all 1.2M customers, so we ran it in 10 batches, using a QuickSearch to filter on Customer Groups. In our 10.2.200.8 test environment, we kicked it off for all 1.2M customers. It's still crunching along 39 hours later at 896k rows, with another 13+ hours to go…

Is it wise to do a QuickSearch to filter for customers active in the last x months (I'm assuming credit can't get messed up if there have been no orders/shipments), or should we periodically run it for all?
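Something like this is what I have in mind for building the filter list (a rough sketch only; I'm assuming Erp.OrderHed is the right activity table and picking a 6-month window, both of which would need adapting):

	-- Rough cut at "active" customers: anyone with an order in the last 6 months.
	-- Assumes accounts with no orders can't have credit drift to fix.
	select distinct cust.CustID
	from Erp.Customer cust
		inner join Erp.OrderHed oh on oh.CustNum = cust.CustNum
	where oh.OrderDate >= dateadd(month, -6, getdate())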

Can multiple instances be run in parallel like DMT? If so, how should we do it?

Thanks guys!

1.2M(illion)?
I run it every two weeks and it takes about half an hour.
But… the system only has approx. 2,000 customers with a moderate sales order/billing volume.
Your volume sounds like A LOT to try running wide open.

QuickSearch to filter(s)
In your case I think you definitely want to run the process in smaller "chunks".
I might try some experimentation to help define the BAQ(s).

e.g. are there conditions/groups so you can pick just a few “average” customers

  • then filter the process to run for one “average” customer of each type,
  • specifying a log and recording the run times,
  • and checking whether it makes any difference in Customer Tracker, on their financial sheets.

That way you might be able to estimate some overall runtimes for the different customer groups? (a quick sizing query is sketched below)
(but guessing that may not be too accurate - as once a system load grows, things tend to slow down)
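For the batch sizing, a quick count per group might help. This assumes you bucket customers by Erp.Customer.GroupCode; swap in whatever field your QuickSearch actually filters on:

	-- How big is each batch? Customer counts per group.
	select GroupCode, count(*) as CustomerCount
	from Erp.Customer
	group by GroupCode
	order by CustomerCount desc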

Will be interested to hear what you end up doing/finding…


Bruce, thanks for chiming in.

The other way we've managed is waiting for someone to get an index -1 error in Customer Tracker, then running Recalculate Customer Credit for that specific CustID.

We can’t find a pattern to those, and I’d love to get ahead of it instead of getting a few help tickets a week to run it. Maybe 10.2.200 has gotten more robust so we don’t need to run this as often (or at all)?

Maybe, but… I'm guessing you will always need to run it, at least occasionally.
I first needed to run the process back in V8, and the need has persisted in every version since.
Lucky for me, I never had to wait for it to process 1.2M customers.

I run the process on a daily basis now, but only have approx. 650-700 customer accounts to deal with.

I found that when I got weird numbers in Customer Tracker (can't remember whether we got an error on the tracker or not), the following SQL query let me spot the customer accounts I needed to run the process on:

-- Spot customer accounts whose cached credit totals have gone bad:
-- the same corruption that skews the credit values also drives TotalOrders negative.
select cust.CustID, cust.Name, custcred.TotalOrders
from Erp.GlbCustCred custcred
	inner join Erp.Customer cust on custcred.CustNum = cust.CustNum
where custcred.TotalOrders < 0

For some reason, the same thing that screws up the credit values was also driving this number negative. You could run this as a check from SSIS every 5 minutes: if no records are found, just close out; if records are found, alert somebody via email.
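If SSIS isn't handy, the same check runs fine as a scheduled SQL Server Agent job using Database Mail. A minimal sketch (the profile name and recipient are placeholders for your own setup):

	-- Run on a schedule (e.g. every 5 minutes) via SQL Server Agent.
	-- Only sends mail when at least one corrupted credit record exists.
	if exists (select 1 from Erp.GlbCustCred where TotalOrders < 0)
	begin
		exec msdb.dbo.sp_send_dbmail
			@profile_name = 'ERPAlerts',            -- placeholder Database Mail profile
			@recipients   = 'helpdesk@example.com', -- placeholder recipient
			@subject      = 'Customer credit records need a recalc',
			@query        = 'select cust.CustID, cust.Name, custcred.TotalOrders
			                 from Erp.GlbCustCred custcred
			                 inner join Erp.Customer cust on custcred.CustNum = cust.CustNum
			                 where custcred.TotalOrders < 0';
	end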


Mark - Great! We’ll have to try that. Thanks for sharing.

Next step is for Epicor to be proactive about fixing what causes these issues, or at least to run this behind the scenes and do an auto recalc of customer credit as accounts are flagged…

Andris

I am very curious as to what the fundamental problem is, because this issue occurs in every BO with large datasets. For example, we have a support ticket where a Cost Adjustment takes 3 minutes for a lot-tracked part with 170k lots. If I run the exact same SQL query directly, it takes less than a second.

My assumption is that this is an issue with Epicor's implementation of Entity Framework that shows up with large datasets. I am not advocating just using stored procedures, but I think there is a fundamental issue that needs to be addressed.