16-second delay after PhoneNumber change - Customer Entry - CheckDupPerCon method

When a phone number is added to or changed on a Customer Contact in Customer Maintenance, Epicor takes 16+ seconds to respond. I did a trace and see that the CheckDupPerCon method has an execution time of 16196ms! We currently have 834,933 records in the CustCnt table. Any suggestions on how we can speed this up?

We’re on 10.2.300.12, and testing our upgrade to 10.2.500 next month.

Thanks.

      <businessObject>Erp.Proxy.BO.PerConImpl</businessObject>
      <methodName>CheckDupPerCon</methodName>
      <appServerUri>net.tcp://masked/</appServerUri>
      <returnType>Erp.Tablesets.PerConListTableset</returnType>
      <localTime>1/14/2020 09:43:39:1325599 AM</localTime>
      <threadID>1</threadID>
      <executionTime total="16196" roundTrip="16184" channel="0" bpm="3" other="9" />
      <retries>0</retries>
      <parameters>
        <parameter name="vName" type="System.String"><![CDATA[]]></parameter>
        <parameter name="vEmail" type="System.String"><![CDATA[]]></parameter>
        <parameter name="vPhone" type="System.String"><![CDATA[redacted]]></parameter>
        <parameter name="vSysRowID" type="System.Guid"><![CDATA[00000000-0000-0000-0000-000000000000]]></parameter>
        <parameter name="CallContext" type="Ice.Bpm.Context.ContextDataSet">
          <ContextDataSet xmlns="http://www.epicor.com/Ice/300/Bpm/Context">
            <BpmData>
              <SysRowID>00000000-0000-0000-0000-000000000000</SysRowID>
            </BpmData>
          </ContextDataSet>
        </parameter>
      </parameters>
      <returnValues>

What does it do if it does find a duplicate phone number? Ask you about using the existing contact?

If you always ignore any message (and allow two contacts to have the same number), make a BPM on PerConImpl.CheckDupPerCon that sets the vPhone parameter to null.

I’m assuming that would speed things up, as it shouldn’t be searching for records where vName, vEmail and vPhone are all blank/null.
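
For reference, here's a minimal sketch of what that directive could look like. It assumes a Pre-Processing method directive on the PerCon business object's CheckDupPerCon method using a custom code block, where the method arguments are exposed by name (a Set Argument/Variable widget would presumably do the same without code):

    // Hypothetical Pre-Processing directive on PerCon.CheckDupPerCon (custom code block).
    // Blanking vPhone before the base method runs means the duplicate search never
    // fires on the phone number. Only do this if duplicate numbers are acceptable.
    vPhone = string.Empty;   // or null; either way there is nothing left to match on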

Okay, I see that just tabbing out of the email or address field causes that method to execute. If a matching email or phone is found, it prompts you with the Partial Matching Contact dialog.

I assume you still want this functionality.

Edit:

I do notice that it matches phone #s regardless of the non-numeric symbols.
215-555-1212 matches 2155551212 and (215)555-1212

And a trace shows that the symbols are passed to the method:

<parameters>
    <parameter name="vName" type="System.String"><![CDATA[]]></parameter>
    <parameter name="vEmail" type="System.String"><![CDATA[]]></parameter>
    <parameter name="vPhone" type="System.String"><![CDATA[(215)555-1211]]></parameter>
    <parameter name="vSysRowID" type="System.Guid"><![CDATA[00000000-0000-0000-0000-000000000000]]></parameter>
  </parameters>

Does it take the same time to respond to checking for 2155551212 as it does 215-555-1212 ?

Thought about it some more…

Stripping off the non-numeric characters probably won't help, because it would have to strip them out of every existing record before each check.

An interesting test (in a test environment) would be to export the phone numbers, strip all non-numerics, then DMT them back in. The test is whether the search is quicker when the stored phone numbers have no non-numeric characters.
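
If you do run that export/strip/DMT test, the cleanup itself is trivial. Here's a small self-contained sketch (the console harness is just for illustration; your DMT column mapping will differ) that strips everything except the digits so the stored value matches what a caller types:

    using System;
    using System.Linq;

    class PhoneCleanup
    {
        // Keep only the digits, e.g. "(215)555-1212" -> "2155551212".
        static string StripNonDigits(string phone) =>
            new string((phone ?? string.Empty).Where(char.IsDigit).ToArray());

        static void Main()
        {
            Console.WriteLine(StripNonDigits("(215)555-1212")); // 2155551212
            Console.WriteLine(StripNonDigits("215-555-1212"));  // 2155551212
        }
    }

If the stored numbers were all normalized like that, an exact match could at least be resolved server-side (and potentially use an index on the phone column) instead of normalizing every row on each check.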

Just for SnGs, have you had a look at your indexes and whether there is a more optimal arrangement? Perhaps even try manually running an update statistics. I'd want to understand a bit more about how the CheckDupPerCon method works from a SQL query perspective, profiling it as well; it may not be the CustCnt table that's slowing things down, it may be some other complementary one. It does make you ask why this method exists: if you had a unique index on the CustCnt table, you would never need to check for duplicates in the first place.

Now I’m interested why :thinking:

Anyway that’s my 2c

My guess is that it isn't there to absolutely prevent dupes (you can hit OK on the dupe-found dialog and make the dupe anyway), but rather to prevent accidental duplicate entries, so that only one contact record exists per real person.


You're probably right… I was going to follow up with a comment on the bad old days of E9, PerCon and CustCnt, but I'd rather leave that nightmare in the dim distant past. :angry:

I have to say I’m so glad that the Primary Context checkbox in Person Contact Maintenance now functions.

Reviving an old(er) thread:

Hally, I checked with our DBA, but he’s not that familiar with Epicor. We run update statistics every night, and do a weekly re-index on the weekend.

I asked him about profiling, and he asked back what information we’d need to profile (profile settings? sounds like an Epicor trace…). Is the goal of the profile to identify a field that isn’t indexed, but should be?

Thanks again for the help, guys!

There are a couple of standard profiles that you can use. If you were looking at indexing issues, I'd start with the index tuning wizard and generate a trace for that. There are a lot of posts around that topic; as always, I'd start with one of the authorities, Brent Ozar and his blog https://www.brentozar.com/, plus the other indexing tools that are available (Redgate), and of course Microsoft's own docs on the Database Engine Tuning Advisor.

As far as what to capture in a standard trace: I'm a bit lazy and should have a custom template with all the options set up, but I don't, so I normally just start with the TSQL template and trim it down, showing all events and filtering by the database name. Not a very scientific approach, but at the end of the day Profiler can be used for all sorts of investigations, from performance to errors to security auditing, so how you set up the template really depends on what you're looking for; the base templates are a good start. Most importantly, remember that running Profiler can itself affect performance, and you can always search the captured events (Ctrl-F) if you're looking for something specific.

While digging around putting this response together (and this shows how distracted I've been), I noticed that this post mentions Profiler is deprecated: https://docs.microsoft.com/en-us/sql/tools/sql-server-profiler/sql-server-profiler?view=sql-server-ver15. Basically, it says you should be using Extended Events instead; here's a link: Quickstart: Extended Events in SQL Server - SQL Server | Microsoft Learn. When I get some more time I'll be reading up on it myself.

The other thing that was pointed out in another post recently (Improving BAQ Performance - #9 by josecgomez) is that the data dictionary (there are a few) contains a list of all the indexes, so it's easy to see them from within Epicor instead of having to hassle the DBA (if you're not it in your business) and run an sp_help or use SSMS to look at the database details.

To answer your question about the end game of the exercise: it's to work out exactly which query in the process is causing the issue (CPU time, reads), then take that query from Profiler, load it with the relevant parameters, clear your procedure cache, and take a look at the execution plan of that query. That will help you determine where things are going wrong if it is a SQL issue; it could also be poor code, and everyone can be guilty of that from time to time.

Simon,
Thank you! This is a big help. I’ll give it a shot on our Pilot db and see what happens. Always learning something new :+1:


Did you ever get anywhere with this issue? We're having the same thing, with about 400k CustCnt records in our db. Interestingly, the Name duplicate check is lightning fast, while the phone dup check takes at least 10 seconds.

@TomAlexander - I got pulled off this onto other projects, but it’s on my todo list! Let me know if you get to it first and find a smoking gun.

@askulte Report it as a bug.

We had the same issue with Lots when there are a large number of lot records. From my understanding, the issue is related to lazy loading in Entity Framework, but somebody who knows EF could probably give a better answer.

@John_Mitchell - Yup! I did it last year on Case CS0001437410, which Epicor accepted as Problem PRB0212691. Status says it's 'Closed Complete' and has been fixed in 10.2.600.

Haven’t tested it yet - we’re on 10.2.500.9, and holding off our upgrade until 10.2.700. Hope to get that in the next few weeks (on prem) and start testing.