SQL Commits?

Is there a delay from when I save data in E10 to when it gets committed to the SQL Database?

We notice that when running a SQL query from SQL Server Management Studio, sometimes the data isn't there yet but appears a few minutes later.

No, there is no delay. I'm not a fan of you doing a bunch of direct SQL stuff, by the way. There should be absolutely no delay between an Epicor commit and what you see in SQL.
If you are seeing this, there's something else going on. Are you replicating SQL and querying against the replicated DB?
As a general rule you should avoid going direct to SQL; it can cause record locking and other issues if not done correctly.
What's the end goal?


Are you perhaps referring to inter-company transactions? There is a process that runs to sync records from CompanyA to CompanyB.

There is also the asynchronous saving of 'Deferred Updates' during some processes. It is used in some heavily lock-contended scenarios. I know how it works from the plumbing side, but which services are currently using it is beyond me.

@josecgomez A former employee wrote a bunch of Excel macros (30 or more) that query the DB (just SELECT statements; no updates, inserts, or deletes), but for some reason they don't seem to pull the most current data. It seems to take upwards of 10 minutes after the data is saved in Epicor for the Excel query to return the updated data.

The E10 app server is on the same server as SQL Server 2016, and there is no mirroring or replication of the DB.

The queries are run on demand, via a macro, so I don't think Excel would be interfering. The queries are quite complex for my limited SQL knowledge, so I can't say for sure what is going on. I just find it odd that the new data is there, but the queries don't return it right away.

@Bart_Elia Can you elaborate on Deferred Updates?

The tables being queried are all part and job related. The queries pull info for all open jobs, and all the parts that are needed for them based on the date the job is due, giving the users info on stock status and such so they can pull the proper components from stock for each assembly and operation.

@aidacra And this is a single company and site. There are no sync processes that run for our company.

I don't know the app service specifics, as mentioned before, just the pattern. I'll give an example from the framework side to describe it. As stated before, I'm not sure if this is what you are seeing. Ten minutes makes me suspicious that it isn't, unless your server is overwhelmed by load. A quick check with PDT to ensure your system is running well is also a good sanity check.

The pattern…
In the industry there are a bunch of names for this behavior; message queuing is the one I choose. The effect is to free up the server call as quickly as possible to return control to the client. The server continues processing the data into its final form and storing it in its final locations.
The case I use is all of the logging and tracing done on the server. When an activity occurs on the server and a log entry is written, it does not go directly to the 'serverlog.txt' file. Each entry gets added to a queue. Every so many milliseconds (250? 500?), an event triggers and a background thread starts flushing that log data to the file. This happens on a separate thread from the server call, so logging takes essentially no time and has no client latency impact. Client speed should be the same whether logging is on or off. In a 10 (30?)k user test we did, we saw no client impact at all with a bunch of tracing on.
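The logging flow described above can be sketched in a few lines. This is an illustrative example only, not Epicor's actual implementation; the `QueuedLogger` class, its `sink` callback, and the flush interval are all assumptions made up for the sketch. Callers enqueue an entry and return immediately, while a background thread periodically drains the queue to the sink:

```python
import queue
import threading

class QueuedLogger:
    """Illustrative message-queue logger: log() is cheap for the caller;
    a background thread flushes queued entries to the sink on a timer."""

    def __init__(self, sink, flush_interval=0.25):
        self._queue = queue.Queue()
        self._sink = sink                  # e.g. a file write or list.append
        self._stop = threading.Event()
        self._thread = threading.Thread(
            target=self._flush_loop, args=(flush_interval,), daemon=True)
        self._thread.start()

    def log(self, entry):
        # Non-blocking for the caller: just enqueue and return.
        self._queue.put(entry)

    def _flush_loop(self, interval):
        # Wake every `interval` seconds and drain whatever has accumulated.
        while not self._stop.wait(interval):
            self._drain()
        self._drain()                      # final flush on shutdown

    def _drain(self):
        while True:
            try:
                self._sink(self._queue.get_nowait())
            except queue.Empty:
                break

    def close(self):
        self._stop.set()
        self._thread.join()
```

Because the flush happens off the request thread, the caller's latency is unchanged whether or not entries are being written, which is the whole point of the pattern.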

Take that pattern to the app side. There are times when a piece of 'non-critical' data needs to be calculated to roll up some summary. The data is pushed into a temp table instead. This is done because these summary pieces of data are highly contended on DB locks, so for the client's performance experience, and because the data is non-critical, deferring is reasonable. A background process comes through a few milliseconds later, crunches the numbers, and updates the summary data.
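A minimal sketch of that deferred-rollup idea, with the temp table and background crunch reduced to in-memory structures (all names here are invented for illustration, not Epicor internals): writers stage cheap delta rows rather than locking the hot summary row, and a later crunch step folds the staged rows into the totals.

```python
import threading
from collections import defaultdict

class DeferredRollup:
    """Illustrative deferred rollup: record() stages a change cheaply;
    crunch() later folds staged changes into the contended summary."""

    def __init__(self):
        self._staged = []                  # stands in for the temp table
        self._summary = defaultdict(int)   # the lock-contended summary data
        self._lock = threading.Lock()

    def record(self, key, delta):
        # Cheap for the writer: append to the staging area and return.
        with self._lock:
            self._staged.append((key, delta))

    def crunch(self):
        # Background step: swap out the staged rows, then apply them in bulk.
        with self._lock:
            staged, self._staged = self._staged, []
        for key, delta in staged:
            self._summary[key] += delta

    def total(self, key):
        return self._summary[key]
```

The trade-off is exactly the one described above: between `record()` and `crunch()` the summary is briefly stale, which is acceptable only because the data is non-critical.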

So that's the pattern. Does it apply here? I am not sure. Minutes seems way too long unless your server is having a lot of issues, the kind of issues where you would be complaining about bad performance in general, not this. So I'm shying away from my original knee-jerk diagnosis. We are missing some piece of data as to when and where that data is changing.

From your statements you are not a SQL expert, so I'm struggling to offer further advice without that ability. If you were, or could get someone who is, I'd turn on SQL Profiler to monitor the fields that are slow to change, and turn on some E10 profiling like profile://system/db/epiprovider/sqltext to check when that field is updated.

We are missing some piece of data…


@Bart_Elia - Oh the simplicity of things. So, I came in to work this morning to dig into this issue. I had the SQL profiler open, I had query windows open, I had the system monitor open on the server, etc. Each and every piece of the puzzle was in the right place. And everything worked exactly as it should. Nothing was delayed, nothing was erroring out, nothing negative at all.

So, after the employees from Inventory started to come in, I had them report to my office and walk me through what they were doing. They did, and each one of them had issues. I had to bite my tongue when asking them why they were using reports that were 5 or 6 revisions old rather than the most current one. Why were they not using the file named "USE_THIS_FILE_FOR_INVENTORY.xlsm"? Seriously, that is its name. They were using one of the older versions, which was named with the date it was taken out of use.

I asked them why they were using it, even after numerous e-mails had been sent out over time instructing them which one to use, and each one of them said that one of the other employees in Inventory told them to… each of them blamed the others.

It turns out that one of the less-computer-savvy staff members was somehow copy/pasting the old version of the files from her desktop back into the network location, rather than copying the new one from the network to her desktop.

So, it is experiences like this that remind me of two things:

  1. Get the whole story before asking for help online and looking like a fool. My apologies for taking up your time. However, I do appreciate your last response, as it did give me some good things to look into when/if we ever do have issues with our SQL server.
  2. If it were not for employees, I wouldn’t have a job. As much as I despise them at times, I have to take the time to remember that if it were not for them I wouldn’t have a roof over my head, food in my mouth, and a wife that spends WAY too much money shopping…

Thank you again for your help.


Welcome to IT :slight_smile:

Well at least everyone got a tour of some innards of ERP10…


FYI (and because Bart opened up the topic), there are 5 tables (listed below) that can become extreme bottlenecks if locks on records are held for too long. So, we’ve added a companion table for each of them where changes are queued when running the usual business logic. Once the business logic ends, a new and separate transaction is created and the queued records are processed as quickly as possible to avoid leaving locks in place for too long. For whatever reason, this pattern is called “deferred update”.

Deferred update records are processed at the end of a call – not on a background task – so when a call returns to the client, the deferred updates produced during the call should be fully processed. There is also a Task Agent job that periodically sweeps the queuing tables to delete abandoned records in the rare case where a call does not complete and queued records are not rolled back.

Here are the tables that are updated using the deferred update pattern (h/t to Bruce Rund)…

PartBin (PartBinDeferred)

PartQty (PartQtyDeferred)

PartLot (PartLotDeferred)

GlbCustCred (GlbCustCredDeferred)

BankBal (BankBalDeferred)
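The queue-then-apply flow described above can be sketched with SQLite and a drastically simplified schema. This is a hedged illustration only: the real tables have many more columns, and the actual Epicor processing logic is not shown here. The business transaction writes only to the companion queue table, and a separate transaction at the end of the call folds the queued deltas into the hot table and clears the queue.

```python
import sqlite3

# Simplified stand-ins for PartQty and its PartQtyDeferred companion table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PartQty (PartNum TEXT PRIMARY KEY, OnHandQty REAL);
    CREATE TABLE PartQtyDeferred (PartNum TEXT, QtyDelta REAL);
    INSERT INTO PartQty VALUES ('ABC-100', 10.0);
""")

def business_logic(conn):
    # During the business transaction: touch only the queue table, so the
    # hot PartQty row is never locked for the duration of the call.
    conn.execute("INSERT INTO PartQtyDeferred VALUES ('ABC-100', 5.0)")
    conn.execute("INSERT INTO PartQtyDeferred VALUES ('ABC-100', -2.0)")
    conn.commit()

def process_deferred(conn):
    # End of call, new transaction: apply the queued deltas in one pass,
    # then delete them, keeping the lock on PartQty as short as possible.
    conn.execute("""
        UPDATE PartQty SET OnHandQty = OnHandQty +
            (SELECT COALESCE(SUM(QtyDelta), 0) FROM PartQtyDeferred
             WHERE PartQtyDeferred.PartNum = PartQty.PartNum)
    """)
    conn.execute("DELETE FROM PartQtyDeferred")
    conn.commit()

business_logic(conn)
process_deferred(conn)
```

After `process_deferred` runs, `OnHandQty` reflects both queued deltas and the companion table is empty, mirroring how the deferred rows are consumed before the call returns to the client.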


Thanks for these details, Erik. Some of this functionality was actually in 9.05, although hidden from the user and perhaps missing the cleanup Task Agent job. That job was an excellent addition to Epicor and cuts way down on the FixBinsAllNew and conversion 6430 runs necessary in a busy system with MRP, Enhanced Quality, and AMM.