When a phone number is added or changed on the Customer Contact in Customer Maintenance, Epicor takes 16+ seconds to respond. I ran a trace and see that the CheckDupPerCon method has an execution time of 16,196 ms! We currently have 834,933 records in the CustCnt table. Any suggestions on how we can speed this up?
We’re on 10.2.300.12, and testing our upgrade to 10.2.500 next month.
Stripping the non-numeric characters probably won't help, because the check still has to strip them out of every existing record before each comparison.
An interesting test (in a test environment) would be to export the phone numbers, strip all non-numerics, then DMT them back in. The test would show whether the search is quicker when the stored phone numbers contain no non-numeric characters.
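If anyone wants to try that, the stripping step is easy to script against the DMT export before loading it back in. A minimal sketch in Python; the column name `PhoneNum` is an assumption, so check it against your actual export headers:

```python
import csv
import re

def normalize_phone(raw: str) -> str:
    """Strip everything except digits, e.g. '(555) 123-4567' -> '5551234567'."""
    return re.sub(r"\D", "", raw or "")

def strip_phone_column(in_path: str, out_path: str, column: str = "PhoneNum") -> None:
    """Rewrite a DMT CSV export with the phone column reduced to digits only."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row[column] = normalize_phone(row[column])
            writer.writerow(row)
```

Run it on the export, spot-check the output file, then DMT it back into the test environment and retime the duplicate check.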
Just for SnGs, have you had a look at your indexes to see if there's a more optimal arrangement? Perhaps even manually run an update statistics. I'd want to understand a bit more about how the CheckDupPerCon method works from a SQL query perspective, profiling it as well; it may not be the CustCnt table that's slowing things down, it may be some other complementary one. It does raise the question of why this method exists at all: if you had a unique index on the CustCnt table, you would never need to check for duplicates in the first place.
My guess is that it isn't there to absolutely prevent dupes (you can hit OK on the "Dupe found" dialog and create the dupe anyway), but rather to prevent accidental duplicate entries, so that only one contact record exists per real person.
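That would also explain why the check is slow at this record count: if every call has to normalize every stored phone number before comparing, the cost is O(n) per edit, whereas a value normalized once up front can be checked with a single indexed lookup. A toy Python sketch of the difference (the "fast" version is an assumption about how you might fix it, e.g. with a persisted computed column plus an index on the SQL side, not a description of what Epicor does today):

```python
import re

def normalize(raw: str) -> str:
    """Reduce a phone number to digits only."""
    return re.sub(r"\D", "", raw or "")

def is_dupe_scan(new_phone: str, stored_phones: list[str]) -> bool:
    """Slow approach: normalize every existing record on every check (O(n) per call)."""
    target = normalize(new_phone)
    return any(normalize(p) == target for p in stored_phones)

def build_index(stored_phones: list[str]) -> set[str]:
    """Normalize once up front; analogous to indexing a precomputed column."""
    return {normalize(p) for p in stored_phones}

def is_dupe_indexed(new_phone: str, index: set[str]) -> bool:
    """Fast approach: one normalization plus a hash lookup per check."""
    return normalize(new_phone) in index
```

Both approaches return the same answer; the second just pays the normalization cost once instead of on every keystroke, which is the same trade a normalized, indexed column would buy you in SQL.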
Hally, I checked with our DBA, but he’s not that familiar with Epicor. We run update statistics every night, and do a weekly re-index on the weekend.
I asked him about profiling, and he asked back what information we’d need to profile (profile settings? sounds like an Epicor trace…). Is the goal of the profile to identify a field that isn’t indexed, but should be?
There are a couple of standard profiles you can use. If you were looking at indexing issues, I'd start with the index tuning wizard and generate a trace for that. There are a lot of posts around that topic; as always I'd start with one of the authorities, Brent Ozar and his blog https://www.brentozar.com/, along with other indexing tools that are available (Redgate), and of course Microsoft's own docs on the Database Engine Tuning Advisor.
As far as what to return in a standard trace: I'm a bit lazy and should have a custom template with all the options set up, but I don't, so I normally just start with the TSQL template and trim it down, showing all events and filtering by the database name. Not the most scientific approach, but at the end of the day Profiler can be used for all sorts of investigations, from performance to errors to security auditing, so how you set up the template really depends on what you're looking for; the base templates are a good start. Most importantly, remember that running Profiler can itself affect performance, and that you can always search the captured events if you're looking for something specific (Ctrl-F).
The other thing pointed out in another post recently is that the data dictionary (there are a few) Improving BAQ Performance - #9 by josecgomez contains a list of all the indexes, so it's easy to see them from within Epicor instead of having to hassle the DBA (if you're not it in your business) and run sp_help or use SSMS to look at the database details.
To answer your question: the end game of the exercise is to work out exactly which query in the process is causing the issue (CPU time, reads), then take that query from Profiler, load it with the relevant parameters, clear your procedure cache, and look at the execution plan of that query. That will help you determine where things are going wrong, if it is a SQL issue. It could also be poor code; everyone can be guilty of that from time to time.
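For the "clear the cache and look at the plan" step, a hedged T-SQL sketch to paste into SSMS, with the obvious caveat that `DBCC FREEPROCCACHE` flushes every cached plan on the instance, so test systems only:

```sql
-- Test environment only: flush all cached plans so the query compiles fresh.
DBCC FREEPROCCACHE;

-- Report I/O and CPU cost per statement in the Messages tab.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Paste the query captured in Profiler here, with the parameter values it ran with,
-- and enable "Include Actual Execution Plan" (Ctrl+M) before executing.
```

From there the actual plan will show you the expensive operator (look for scans on CustCnt or whatever complementary table turns up).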
Did you ever get anywhere with this issue? We're seeing the same thing, with about 400k CustCnt records in our DB. Interestingly, the Name duplicate check is lightning fast, while the phone dup check takes at least 10 seconds.
We had the same issue with Lots when there are a large number of lot records. From my understanding the issue is related to lazy loading in Entity Framework, but somebody who knows EF could probably give a better answer.