2023.1.X Update Post Mortem and PSA Regarding Performance (and how to fix it)

So Monday of last week we went live with a jump from 2021 to 2023.1.9.

The upgrade process itself was fine. There were some minor issues, particularly around PCID, where a bunch of new features were added that caused some headaches, but overall it was a clean and easy upgrade.

However, immediately we noticed everything was running rather sluggishly, and we blamed SQL per usual, did some index maintenance, statistics, etc., and moved on with our lives. We continued to get non-stop griping and complaints from users all week about performance, and while we weren’t necessarily able to replicate the described issues, we did notice some things that were taking forever. For example, opening the BPM Code window was taking 45-60 seconds every time. We did some tracing and saw that the BPM window now downloads a bunch of files from the server, including the Epicor Data Model, which is 40+ MB in size.

This was a little odd, but after asking around, it turns out this has to do with the client compiling back-end code in BPMs; it requires the data model and the framework assemblies, which it doesn’t have off the bat.

Anyway, we thought it was only this behavior and this screen and moved on with our lives. But the more the week went on, the more complaints we gathered. Eventually we were able to find a smoking gun we could replicate: opening Customer Tracker and loading a customer was taking upwards of 2 minutes.

We did some tracing and found that the performance issue came from doing a GetByID on the Customer Tracker customization, which is 6 MB in size and was taking 12 seconds to load by itself.

Furthermore, running 2 BAQs to get customer invoices was taking 60+ seconds apiece. As we tried to explain this, we replicated these calls in Postman and quickly saw that the REST calls in Postman to the same business object with the same parameters were coming back incredibly faster: that same call from above that took 12 seconds in the client was taking 1 second in Postman.

How to explain this? After a lot of investigation, network pattern checking, and using Fiddler to monitor the actual network calls, we found the smoking gun.

The call in Postman was returning 454 KB of data.

While the SAME call from the client was returning 6 MB.

The culprit? COMPRESSION! JSON calls are compressed in transit, but the Smart Client (Classic) uses RPC to make the same calls, which uses a custom binary encoding mechanism that is NOT compressed in transit. Thankfully, compression happens at a low level in .NET, so after sniffing around a bit longer, we found a setting in the web.config that allows you to turn compression on/off based on content type. It was already set for JSON.

As a test, I added the custom Epicor content type, turned on compression, and crossed my fingers. My hope was that this was low-level enough in the HTTPS transport that the client wouldn’t break. We got lucky, and it worked. Once I turned on compression for the Epicor content type used in RPC, that call that was downloading 6 MB went down to 17 KB :exploding_head:
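A drop from 6 MB to 17 KB sounds unbelievable, but dataset-style payloads are extremely repetitive, so ratios like that are normal. Here’s a minimal Python sketch (the payload is a made-up stand-in, not a real Epicor RPC body, and gzip stands in for whatever algorithm IIS actually negotiates) showing the kind of ratio you get on repetitive structured data:

```python
import gzip
import json

# Hypothetical stand-in for a large dataset response (think a Customer
# GetByID result). Real Epicor RPC bodies use a custom binary encoding,
# but they are similarly repetitive and compress just as dramatically.
rows = [
    {"CustNum": i, "Name": f"Customer {i}", "City": "Springfield", "TermsCode": "NET30"}
    for i in range(5000)
]
payload = json.dumps(rows).encode("utf-8")

compressed = gzip.compress(payload)
print(f"uncompressed: {len(payload):,} bytes")
print(f"compressed:   {len(compressed):,} bytes")
print(f"ratio:        {len(payload) / len(compressed):.0f}x")
```

Field names repeat in every row, so the compressor’s dictionary eats them for free; that’s the same reason our 6 MB customization shrank so hard.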

Here is a side by side demo of a query run with compression vs not

The fix is easy: open your web.config and add the following line to the compression section, in both static and dynamic.

 <add mimeType="application/x-epicor-erp-rpc" enabled="true" />
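For context, that line goes inside the httpCompression node of the app server’s web.config. The surrounding structure below is a typical IIS layout shown as an assumption about your file (your existing entries will differ); only the two `add` lines are new:

```xml
<system.webServer>
  <httpCompression>
    <dynamicTypes>
      <!-- existing entries (e.g. application/json) are already present -->
      <add mimeType="application/x-epicor-erp-rpc" enabled="true" />
    </dynamicTypes>
    <staticTypes>
      <!-- I added it here too, though it may not strictly be required -->
      <add mimeType="application/x-epicor-erp-rpc" enabled="true" />
    </staticTypes>
  </httpCompression>
</system.webServer>
```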

Then recycle and restart your appserver.

I don’t know yet if this is an issue with their cloud deployment; I am opening a ticket with support. But I know everyone that has 2023 on-prem has looked, and this compression is indeed off.

Obviously, test everything in a separate environment before you go making random web.config changes. But the difference is huge, particularly if you run Epicor from 47 different parts of the world.

CC: @Rich @Epic_Santiago, @timshuwy
Case: CS0003846921


Thanks for the detailed report. I have forwarded it on to the cloud team as well as Rich and others in Dev.


I have seen an improvement on our remote sites on version 2022.2.X as well.

An improvement? I’m not sure I follow, Devin.

Are you saying that adding compression helped your remote sites?

Really nice catch! I’m definitely hoping that’s an obvious and easy enough whoopsie to get fixed for us cloud users.


Can confirm 2022.2.8 is also missing compression. Calls are 60% of their original size after enabling.


Nice work @josecgomez! As always, great write up.


Thanks Jeff!

When you guys have done your internal assessment, it would be useful to know what additional changes (if any) are needed: whether that line I added is all-encompassing for the Smart Client RPC calls and all we need, or if we should do something else.

Also, I put it in both Dynamic and Static routes, but I was shooting in the dark.

I can neither confirm nor deny I have looked at this. But when I did…

There shouldn’t be any need to add the RPC (only used by WinForms) mime type to
<httpCompression><staticTypes>. None of the RPC calls should ever be to static types. It shouldn’t hurt anything, but I wouldn’t add it.

This change only affects the server response. In RPC, we already have code that compresses the data in the server request. There is a minimum size on what will be compressed as you would just waste CPU time under a certain number of bytes. So, with your change, we will have both sides compressing.
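The minimum-size threshold Jeff mentions makes sense: below a certain byte count, compression overhead (headers, checksums) can leave the payload the same size or larger while still burning CPU. A quick illustration in Python (gzip here stands in for whatever algorithm the server actually uses; the payloads are made up):

```python
import gzip

tiny = b'{"ok":true}'          # a tiny response body
big = b'{"ok":true}' * 10_000  # a large, repetitive one

# Compressing the tiny payload adds more overhead than it saves:
# the gzip header and trailer alone outweigh any savings.
print(len(tiny), len(gzip.compress(tiny)))

# The big payload, on the other hand, shrinks dramatically.
print(len(big), len(gzip.compress(big)))
```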


Did you mean to say there’s no need to add it to static?

We are seeing a pretty significant reduction in response size, as well as processing time, on our calls by adding this. Are you saying it shouldn’t be necessary? Or are you saying that this change only affects the server response but it should still be there?

No need to add the RPC mime type to <staticTypes>. At least I can’t think of any reason. The current <staticTypes> section is needed for REST and the browser client.

Nothing to add for the server request side. We 100% need your fix for the server response side.


I had a Public Cloud friend do some testing with Fiddler, and in our limited testing it does look like the cloud is afflicted the same way.

Public Cloud Pilot RPC Call

Same Call using JSON (not RPC)

That’s a 7X jump (shrink) with compression. :tada::tada:

Say @JeffLeBert, can I get 1/4 of the savings on the Azure data transfer credited towards my maintenance when this is fixed? #AskingForAFriend



Yes, but we will charge you for the extra CPU usage. Send a bill to Epicor saying I approved this. They should get a good laugh out of that. :grin:


Just print this out and send it in @josecgomez !!

Mail GIF

Edit: Print out this thread, not the Blue’s Clues guy.

I was gonna say, I like Steve just like anyone else (I’m proud of him for going to college), and screw that other guy that took over, but I doubt Epicor will pay me for sending a picture of him. :joy::joy:

Curious, is this a general performance issue, or is it very specific to certain functionality?
Could you take a shot at explaining how widespread this might be?

thx, Scott

I would say anything in classic that pulls any significant data down would be noticeable.


General performance issue for the Smart Client. Wait for direction from Epicor, as it sounds like they are on this like stink on a hog. If you don’t want to wait, then add this line to the dynamicTypes section of the web.config httpCompression node, then stop and start your appserver. It should auto-restart, but for safety’s sake force a full stop and start.

Do not add it to the staticTypes section, per Jeff’s direction.

<add mimeType="application/x-epicor-erp-rpc" enabled="true" />

He’s SaaS, so he’ll have to wait.
