10.2.300 Change Log Tanking Performance

Hello!

After some in-depth trace-log performance analysis, both server and client side, using the PDT tool in both customized and vanilla environments, I have identified that enabling the change log on the OrderHed table absolutely TANKS performance of the sales order MasterUpdate method when there are 30+ lines.

When the change log is enabled on the order header in a vanilla environment, paste-inserting 52 lines takes 30 minutes.
When it is disabled, paste-inserting all the lines takes 6 minutes 30 seconds.

Is anyone else experiencing this in 10.2.300?

We are EMS-hosted, and the DB and app server pass all PDT configuration checks. Are we not supposed to enable change monitoring on all fields of an order?

1 Like

I’m pretty sure it’s recommended to be judicious about how many fields you monitor. I’ve been warned many times to keep logging to a bare minimum or performance will tank. Logging all fields is definitely overkill.

1 Like

The best part of that statement is that Epicor will consider that “working as designed”.

Vote!

If we’re talking about design, following the single responsibility principle, logging should be done in a separate class/thread and should not affect the main program.

However, if you’re going to log all fields, then just do it at the database level. It’s prudent to restrict logging to fields that are sensitive. And don’t log the tables that are themselves logs (no deletes allowed), like ChgLog, PartTran, GL transactions…
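To illustrate the separate-class/thread idea, here is a minimal, generic C# sketch (my own illustration, not how Epicor’s change log is actually implemented): entries are queued and flushed by a background task, so the caller never waits on I/O. The class and field names are made up for the example.

    // Generic sketch: queue log entries and write them on a background task
    // so the calling code never blocks on disk I/O. Not Epicor code.
    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;

    public sealed class AsyncChangeLogger : IDisposable
    {
        private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
        private readonly Task _writer;

        public AsyncChangeLogger(string path)
        {
            // Single background consumer drains the queue and appends to disk.
            _writer = Task.Run(() =>
            {
                foreach (var line in _queue.GetConsumingEnumerable())
                    File.AppendAllText(path, line + Environment.NewLine);
            });
        }

        // Callers return immediately; the actual write happens later.
        public void Log(string table, string field, string oldValue, string newValue) =>
            _queue.Add($"{DateTime.UtcNow:o}\t{table}\t{field}\t{oldValue}\t{newValue}");

        public void Dispose()
        {
            _queue.CompleteAdding();   // let the consumer finish the remaining entries
            _writer.Wait();
            _queue.Dispose();
        }
    }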

Mark W.

Thanks everyone. I’ll be more selective in what we monitor.

Also be wary: change logs on super-active modules of the system can bring the entire thing to its knees.

We had everything on the sales order detail being logged (including the SysRev IDs). This kept causing a long-running SQL transaction. We were able to identify the query (Ice.ChgLog was in play). Because the transaction for the change log table had been running for two days, our transaction log backups were not occurring.

Thus, our transaction log drive kept filling up and crashing the system.

Once we limited the fields on the change log the problem went away.

The Order UI’s calls into the Adapters / BOs are a big joke. It’s plagued with traps: when you customize, you literally have to update one row at a time and do a Refresh() over and over again.

1 Like

Considering how Epicor is now paying Microsoft for all of the performance, you would think it would be in their best interest to refactor some code…

Jeff, maybe now is a good time to purge the (maybe) massive change logs too! Ours are over 30 GB in 10.0.700.4 LIVE, and we’re looking forward to purging them once we upgrade to 10.2.300 in a few weeks.

1 Like

I probably know why it’s tanking performance. This may be related to:

I’m assuming the ChgLog does exactly the same thing. When I moved this to a Custom Code Condition block and handled them all in a single .Any(), I sped it up by 30 seconds!

Traced it via SQL Profiler back in the day. @Bart_Elia had me put in a ticket and he sent it off to the devs; it looks like the fix is planned for 10.3 or 3.X?

Problem:
When you use the visual BPM designer interface and create OR / AND statements, the SQL database is hit once PER statement. The assumption is that it’s all one query, but instead each statement creates a new .Any() SQL lookup.
https://epicorcs.service-now.com/u_task_communication.do?sys_id=327bb363db1103c0949df438bf961967

NOTE: I reported this back in 10.1.500.x - if this worked for you in .500 but is slower in 10.2 - you may have a different problem.
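For what it’s worth, here is a rough sketch of the kind of custom code condition I mean, assuming SalesOrder.MasterUpdate with ttOrderHed and the Db context in scope; the field checks inside the predicate are placeholders, not my actual condition:

    // Hedged sketch: fold all of the OR / AND logic into ONE .Any() so SQL
    // is hit once, instead of once per designer condition statement.
    var row = ttOrderHed.FirstOrDefault(r => r.Added() || r.Updated());
    if (row == null)
        return false;   // nothing changed, no need to touch the database

    return Db.OrderHed.Any(oh =>
        oh.Company  == row.Company &&
        oh.OrderNum == row.OrderNum &&
        (oh.OrderHeld || oh.CreditOverride));   // placeholder field checks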

Lastly, here is a chart from stopwatching the BPMs:

Doing my own .Any() vs. using Epicor’s Visual Designer:

C# StopWatch:

Epicor: 00:01:15.7179120  
Custom .Any(): 00:00:00.2989190
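If you want to reproduce this kind of timing yourself, the pattern is just System.Diagnostics.Stopwatch around the code in question. A minimal sketch, with a placeholder query and log path:

    // Minimal timing sketch: wrap the code you want to measure in a Stopwatch
    // and append the elapsed time somewhere you can read it back later.
    var sw = System.Diagnostics.Stopwatch.StartNew();

    bool anyOpen = Db.OrderHed.Any(oh => oh.OpenOrder);   // placeholder query

    sw.Stop();
    System.IO.File.AppendAllText(@"C:\Temp\BpmTimings.log",
        $"Custom .Any(): {sw.Elapsed}{System.Environment.NewLine}");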
5 Likes

What a great tech analysis! I love getting into the nitty gritty on these types of things.

I’m still pretty new in my environment and I am trying to overhaul the way Epicor has been managed; performance testing the shit out of the system and trying to make it rock solid!

@jgehling
If you create a Custom BPM or even a ChangeLog, and you go to:

\\SERVER\inetpub\EpicorERP\Server\BPM\Sources

You will see the source code generated by the BPM; analyzing that, together with running SQL Profiler, is what helped.

You can also learn from it: if you, say, add an AutoPrint node and save, you can see the code it generates, and you could then re-create the AutoPrint logic in a code block yourself :slight_smile:


Another great tool if you want to understand Epicor is:
SQL Data Compare: Compare And Synchronize SQL Server Database Contents. You could take a backup, run MRP, and then compare your backup to your database to see all the tables an MRP run affected.

Another tool I like is SentryOne SQL

Basically it profiles your database like SQL Profiler does, but it also populates all the variables, which SQL Profiler doesn’t do so well. When someone says MRP crashed at 3 AM, I can go back in time to that second and review deadlocks, which queries ran, etc…

But Most Importantly I can study How MRP Runs… All of its logic:

  • SELECTs
  • UPDATEs
  • DELETEs

Every time I upgrade to a major Epicor version, I run Data Compare and Structure Compare, and within seconds I know if any of my data has been moved, changed, or modified, plus all the columns Epicor added, removed, or renamed…

When someone says “this used to work in Production,” I compare the database to a .bak and can tell them all the settings modified since the claimed used-to-work date: “Did you know this ShipTo’s demand flags were on then…”

Get some power tools and you become fast and accurate.

3 Likes

Awesome thank you! Too cool.

I’ll definitely add all of these methods to my madness haha. :slight_smile:

1 Like

I want to get some Nagios/Centreon open-source monitoring set up for us on-prem so I can monitor all the critical Epicor services, watch the Task Agent schedules to see if they get backed up, and automate, monitor, and execute DMT jobs. So much cool stuff.

We use PRTG – Discover the 3 Paessler PRTG monitoring solutions. You can create custom .vbs or other scripts that execute every so often and, based on the return value, raise custom warnings or errors. It monitors a lot out of the box, even every port on your switch.
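As a rough illustration, one of those custom sensors could be a small console app like the sketch below; the service name is a placeholder, and you should double-check the current PRTG EXE/Script sensor docs for the exact output and exit-code contract (this follows the classic “value:message” style from memory):

    // Hedged sketch of a PRTG-style EXE sensor as a C# console app:
    // it checks whether a Windows service is running and prints value:message.
    using System;
    using System.ServiceProcess;   // reference System.ServiceProcess.dll

    class TaskAgentSensor
    {
        static int Main()
        {
            const string serviceName = "EpicorTaskAgent";   // placeholder name
            try
            {
                using (var sc = new ServiceController(serviceName))
                {
                    bool running = sc.Status == ServiceControllerStatus.Running;
                    Console.WriteLine($"{(running ? 1 : 0)}:{serviceName} is {sc.Status}");
                    return running ? 0 : 2;   // 0 = OK; non-zero signals an error state
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"0:Could not query {serviceName}: {ex.Message}");
                return 2;
            }
        }
    }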

What I like about SentryOne is that when I get a deadlock, I can narrow it down to a time range, even a single second. Then I can see an overview and go, “yeah, disk IO spiked up to 100% and VMware initiated a snapshot backup,” which helps find the root cause, along with every SQL query executed within that frame.

You can grab a free trial of PRTG, SentryOne, Data Compare by Redgate, etc., and install it within seconds. It does all the configuration for you, and you can play around with it in TEST/DEV.

1 Like

Two rules:

  1. Change logs DO affect performance. Only create a log for “important” fields (see the important-field list below).
  2. See rule 1.

Important fields:

  • Customer table: Credit Limit, Credit Hold, Sales Rep, maybe Address(?), BillTo
  • Order table: only fields that affect commission, price, qty… typically don’t worry about dates
  • Part table: Description (maybe), Part Class, Product Group

The following fields/records should NEVER be logged:

  • Last Change Date: think about it… the date changes every time you change the record, so you are logging the date that the last change date changed.
  • Unfirm jobs: MRP creates unfirm jobs, then changes them, then reschedules them. If you want a really large change log, simply track every change to JobHead, JobOpr, JobAssy, and JobMtl… (Yes, I have seen this… and yes, the customer asked, “Why is my database growing by X GB every time we run MRP?”)
  • Someone else already said this, but do NOT log changes to tables that cannot be changed, such as PartTran and GLTran… They ARE change logs already; logging a log table is redundant.

2 Likes