SalesOrder.GetByID slowness (again/still)

NOTE: This is a continuation of this thread; I have since done more research into the issue.

Yes, I know the response to this is going to be “Well, duh!”, but just in case I am wrong, I am going to put this question out there:

The number of releases in a Sales Order is directly related to how long it takes the Sales Order to open, or to do anything with it.
[insert ‘Well, duh!’ here]

Anyways, we have Sales Orders with more than 5,000 releases, some as high as 12,008. Sometimes those releases are scattered across 100+ lines; other times the Sales Order has only 1 line (with 12,008 releases…)

Now I understand that the more data a query returns (such as SalesOrder.GetByID), the longer it is going to take to load. But taking well over a minute seems a bit excessive to me.

I sent my DB to EpicCare and they say that they can open those Sales Orders in 3 seconds flat and are unable to reproduce what I am seeing in my environment.

With that knowledge, I decided to poke around a bit more and ran PDT. Upon looking at the ‘Server Diagnostics > Summary Analysis’ tab, I saw this:

That is mirrored in a Client Trace Analysis as well:

As you can see, the Average Execution Time for SalesOrder.GetByID is 45 seconds, which is right in line with what I am experiencing.

The next thing I did was disable all Method and Data Directives that relate to SalesOrder / OrderHed / Customer / CstCnt and try opening the Sales Order again…47 seconds.

I then checked to make sure I was only loading the “base” version of the menu…42 seconds.

Going back to PDT, the only thing in the Config Check that comes back as a Fail/Error is:

I don’t know why that is throwing an error though. We are on SQL 2017 running on Server 2019. From what I can tell, that meets the requirements for 10.2.400.X. Regardless, I don’t think that would be the reason for the extreme load times with this Sales Order…or would it?

The SQL Server and the App Server are on separate pieces of hardware. The SQL Server is averaging between 3% and 7% CPU utilization with 300GB of memory (384GB in the system).

PDT does return 1 Warning as well, which is this:

That prompted me to go and set that to False, but I am still at 46 seconds to load one of the larger Sales Orders.

My questions are:

  • How did EpicCare load this Sales Order in 3 seconds in their test environment? Is it simply because they have extremely beefy hardware?
  • Is there something with the SQL Server configuration that I may be overlooking that would potentially be causing this?
    • I can’t say I am proficient with using the SQL Profiler, but I am willing to learn if someone is able to point me in the right direction.
  • Am I chasing my tail here, and is this simply how Epicor handles large Sales Orders? What, if anything, can I do to speed up loading the larger Sales Orders?

Do you rebuild your indexes on a schedule?

To be honest, I don’t know. I am still trying to get a grasp on the practices of this company. I know enough about SQL to be dangerous, but rebuilding indexes on 4,000+ tables is not something I have looked into.

From what I do know about indexing in SQL, it is done on a per-table basis, and I can’t see myself clicking through 4,000+ tables in the Epicor DB and rebuilding the indexes manually.
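From poking around, it looks like you don’t have to click through them one at a time, though; SQL Server exposes fragmentation through a DMV, so one query can list the worst offenders. A minimal sketch (run against the Epicor database; the 5%/30% thresholds are just the common rules of thumb, nothing Epicor-specific):

-- List the most fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5 -- reorganize 5-30%, rebuild above 30%
  AND ips.page_count > 1000                -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;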

This is a good thread:


Digging through our system, it looks like IndexOptimize from https://ola.hallengren.com is running on a weekly schedule against all User Databases (which includes all of the Epicor databases).

This is what Step 1 in that SQL Job contains:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@LogToTable = 'Y'

Looking at the history, it has been running successfully for as far back as the logs go.
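With no other parameters, that runs Ola’s defaults (reorganize indexes between 5% and 30% fragmentation, rebuild above 30%), and since @LogToTable = 'Y', every command it executes is recorded in dbo.CommandLog. That makes it easy to double-check that the Epicor order tables are actually being maintained; a quick sanity-check sketch:

-- Most recent index operations performed by IndexOptimize
SELECT TOP (50) DatabaseName,
       ObjectName AS TableName,
       IndexName,
       DATEDIFF(SECOND, StartTime, EndTime) AS DurationSec,
       StartTime
FROM dbo.CommandLog
WHERE CommandType = 'ALTER_INDEX'
ORDER BY StartTime DESC;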

So, with that being said, I don’t know if it is an indexing issue or not 🫤

Jason, if you get into this and you want to split the cost of consulting, we’d be cool doing that with you, ’cause we do make direct jobs to these 5,000 releases… and I know I’m going to need the same help you need to make the system work.

Idk where to go honestly.

What is your database compatibility level?


@Olga - I am at compatibility level “SQL Server 2016 (130)”. That holds true for all of the Epicor databases, while the Report Server databases, system databases, and other user databases are all at “SQL Server 2017 (140)”
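For anyone else checking theirs, a sketch of the queries involved ([EpicorERP] is a placeholder for your actual database name, and the ALTER needs appropriate permissions):

-- Compatibility level of each user database
SELECT name, compatibility_level
FROM sys.databases
WHERE database_id > 4; -- skip the system databases

-- Engine version and CU level
SELECT SERVERPROPERTY('ProductVersion') AS EngineVersion,
       SERVERPROPERTY('ProductUpdateLevel') AS CULevel;

-- Bump a database to the SQL Server 2017 level
ALTER DATABASE [EpicorERP] SET COMPATIBILITY_LEVEL = 140;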


The install guide for 10.2.400 states that the SQL Server requirements are:

  • Windows Server 2019 with SQL Server 2016 or 2017

My SQL Server is 14.0.2027.2, which is SQL 2017, albeit a few CUs behind. My Windows Server is 2019.

There are a handful of posts here on EpiUsers regarding the Compatibility Level and SQL Server versions, however I have not seen anything that states “Upgrading to at least ‘this’ version of SQL Server will be better for performance” or anything of that nature.

This can explain the PDT error you get.

@Olga - that’s a good point. I never thought of the compatibility level being the cause of that. I’ll look into that this weekend.

EDIT: So I just updated the Compatibility Level, and I still get that error. I will be updating to a later CU on SQL Server this evening and we will see if that does anything.

So I just finished spinning up another SQL Server and migrated a copy of my Demo database to it. I then reconfigured my Demo environment to point to the new SQL Server and redeployed it.

I opened up the client and opened up Order Entry. It was my hope that a different server with nearly zero load on it would somehow make some sort of difference with this issue of doing anything with orders that have thousands upon thousands of releases.

Unfortunately, that was not the case, and it still took nearly 53 seconds to load up the entire sales order. 🫤


I know. And then imagine having to update quantities on any of the lines, or link and unlink jobs that have co-parts fulfilling the lines, or handle shipping address changes, or whatever it is. Multiply every one of those actions by 53 seconds. It’s not okay.

What really hurts is when we try to make a change…saving the record only works half the time…when it doesn’t work, we have to wait a full 3 minutes for it to finally toss a time-out error.


And the results are in from EpicCare:

This issue can be replicated beyond the version I am on, all the way up to the latest version, 2023.2.11. The official response is “Your sales order is too big. Break it down into multiple smaller orders.”

Well, it was worth a shot, eh?

Hilarious, honestly. How do other people dealing with retail, where a single customer has 1,000+ locations, do any of this? Do P21 users have to do this?

@utaylor - Little bit, yes.

BUT, if you think about it, it does make some sense. That order has 89 rows and 11,007 releases, which is a LOT of data to send over the wire to the client, and the client has to parse each of those lines/releases and redraw the screen each time.

Prior to opening Sales Order Entry on 10.2.400.22, Epicor is using 4,508 KB of memory. When I open Order Entry, it jumps to around 68,796 KB. Once I load up the large SO, the memory usage spikes to 426,840 KB… The trace log indicates about 150 MB of data is being sent from the AppServer to the client, which works out to roughly 13 KB per release. So, it is a lot of data.

Could they (Epicor) be handling the data transfer differently? Perhaps compressing or serializing it in a way that would make the total size smaller? No doubt. Hell, they could even do it in a way that grabs the releases only when a line is made active in the UI. That would speed things up dramatically. But do I think we would be able to convince Epicor to rewrite things? Probably not. I would love to be proven wrong, however.

I get it too, one software made for tons of people, not us specifically, but is that really the end of the road?

I’m with you. I’m not sure it’s a performance improvement that would benefit enough customers, but it does seem like a great dashboard to me. 🤔 Just pull in the line and associated releases, make changes, update.
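The read side of that dashboard looks simple enough. A sketch of the backbone query, assuming the standard Erp.OrderDtl / Erp.OrderRel tables and column names, and leaving out the Company key for brevity:

-- One order line plus only its releases, instead of GetByID pulling the whole order
DECLARE @OrderNum int = 12345, @OrderLine int = 1; -- placeholder values

SELECT d.OrderNum, d.OrderLine, d.PartNum, d.OrderQty,
       r.OrderRelNum, r.ShipToNum, r.ReqDate, r.OurReqQty
FROM Erp.OrderDtl AS d
JOIN Erp.OrderRel AS r
  ON r.OrderNum = d.OrderNum
 AND r.OrderLine = d.OrderLine
WHERE d.OrderNum = @OrderNum
  AND d.OrderLine = @OrderLine;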

and a UBAQ will use UpdateExt, which in turn will use GetByID again 🙄

Olga, it’s a huge struggle for @jhecker and me. Our businesses thrive off of customers with thousands of ship-tos on their orders.

Is there any way you could connect us with someone there so that we could start a focus group on this?

We would both be willing to pay for whatever solution comes out of it. We really need some help with this.