E10.1 BPMs are Slow

E10.1.400.18/SQL 2014

I would like to see if anyone can confirm what I believe is the reason why my E10 BPMs run terribly slowly after migrating them from E9.05.

Using LINQ, if I join a ttTable, e.g. ttLaborDtl, to a database table, e.g. JobHead, my BPM code is slow. When I separate the LINQ code into two individual queries, the code is much faster. I believe the reason is that the ttTable is in memory while the database table is not. In order for LINQ to perform the join, it's reading the entire JobHead table into memory and then performing the join. Can anyone confirm my thinking or offer a better description of what may be happening?

Thanks,
Bob Beaghan

Can you post two examples, the “slow one” and the “fast one”, and let us take a look? Maybe there’s something you are missing. Though generally I try to avoid joining tt tables to the Db context.

Here’s an example of slow code:

foreach (var ttLaborDtl_iterator in (from ttLaborDtl_Row in ttLaborDtl
                                     join JobHead_Row in Db.JobHead on
                                         new { Key0 = ttLaborDtl_Row.Company, Key1 = ttLaborDtl_Row.JobNum }
                                     equals
                                         new { Key0 = JobHead_Row.Company, Key1 = JobHead_Row.JobNum }
                                     where string.Equals(ttLaborDtl_Row.RowMod, IceRow.ROWSTATE_UPDATED, StringComparison.OrdinalIgnoreCase)
                                         && (ttLaborDtl_Row.LaborQty > 0.00m
                                          || ttLaborDtl_Row.ScrapQty > 0.00m
                                          || ttLaborDtl_Row.DiscrepQty > 0.00m)
                                         && string.Compare(JobHead_Row.JobType, "MFG", true) == 0
                                     select ttLaborDtl_Row))
{
    /* do stuff */
}

When I split up the join, as below, the performance improved:

foreach (var ttLaborDtl_iterator in (from ttLaborDtl_Row in ttLaborDtl
                                     where string.Equals(ttLaborDtl_Row.RowMod, IceRow.ROWSTATE_UPDATED, StringComparison.OrdinalIgnoreCase)
                                         && (ttLaborDtl_Row.LaborQty > 0.00m
                                          || ttLaborDtl_Row.ScrapQty > 0.00m
                                          || ttLaborDtl_Row.DiscrepQty > 0.00m)
                                     select ttLaborDtl_Row))
{
    var ttLaborDtlRow = ttLaborDtl_iterator;

    var JobHead = (from JobHead_Row in Db.JobHead
                   where JobHead_Row.Company == ttLaborDtlRow.Company
                       && JobHead_Row.JobNum == ttLaborDtlRow.JobNum
                       && string.Compare(JobHead_Row.JobType, "MFG", true) == 0
                   select JobHead_Row).FirstOrDefault();

    if (JobHead != null)
    {
        /* do stuff */
    }
}

I have similar issues with similar code - it takes 15-20 seconds minimum to process, which doesn't seem like a lot, but it adds up. Jose, if you don't join the tt tables, how would you proceed? Use the temp table as a where clause to perform a query on the actual table, and continue with a join?

Correct
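
In code, that pattern looks roughly like this - a sketch rather than anyone's exact code, reusing the ttLaborDtl/Db.JobHead shapes from the examples above (a Company filter is omitted and would be needed in a multi-company setup):

// Filter the tt rows first, then use their keys as the where clause for
// one batched query against the real table, instead of joining tt rows
// to the Db context. Contains translates to a TSQL IN.
var jobNums = ttLaborDtl
    .Where(tt => string.Equals(tt.RowMod, IceRow.ROWSTATE_UPDATED, StringComparison.OrdinalIgnoreCase))
    .Select(tt => tt.JobNum)
    .Distinct()
    .ToList(); // small, concrete list of keys in memory

var jobs = (from jh in Db.JobHead
            where jobNums.Contains(jh.JobNum)
               && string.Compare(jh.JobType, "MFG", true) == 0
            select jh).ToList();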

I haven’t determined exactly what is going on, but since E10, I’ve noticed that any BPMs on the Labor BO surrounding MES transactions are abysmal in performance compared to other BO/method transactions. I have vastly more complex BPMs in other areas that don’t have the performance impact that some of the simplest ones I’ve written for that area do. I’ve since scrapped any BPMs that are attached to the Labor BO and solved it through other means.

@rbucek Interesting, my particular problem also occurs within the MES. Mine is on the PartTran BO when reporting quantity.

Yeah, I don’t get it. I’m not the most experienced programmer, but in E9 I used to have 9-10 BPMs surrounding labor activities and they flew (admittedly in ABL code). Epicor’s implementation of the BPM layer is much different in E10 than it was in E9, understandably. That being said, I can give somewhat apples-to-apples comparisons of similar scope where you really scratch your head and wonder what is going on performance-wise. I found I was able to cope with this through UI customizations with almost zero impact on performance. It’s not my first choice - global tools like BPMs are usually preferred - but to retain the same capability it was necessary.


I have reached out to Bart Elia, a Software Architect working in the core framework tools group at Epicor.

He signed up to this board yesterday and will explain it in more detail. From what I understood in his Twitter reply, it seems to be more of a vanilla LINQ issue than an Epicor or BPM engine issue.

It’s an issue with vanilla LINQ and the order in which the query is processed. Looking for an example…

https://twitter.com/bartelia

Welcome to LINQ confusion 101 or, better yet, dotNET enumerations’ magical side effects.

If I can get a more complete snippet and some background on your BPM I can give an exact example. A couple of questions…

- Where is this BPM? I am trying to understand what object type ttLaborDtl is.
- Can you guess how many records will be the norm for ttLaborDtl (in memory) and for the return from the Db.JobHead table?
- Bonus points if you can determine which columns are needed in the ‘do stuff’ section. One of the worst patterns I always see is pulling back every column (and sometimes every row) from the db only to throw away all but one row and one column. That's a very quick way to make your DBA, SQL Server, network and RAM pop a gasket.

With those I’ll give a more exact example.

The issue you are likely running into is the scanning of the ttLaborDtl collection of records. This collection is in memory and probably a simple collection type - no indexes to speed matching. This means each potential match against the db has to scan the entire ttLaborDtl collection to do the join. Horribly slow. LINQ is easy to get into, but the perf potholes are many. Luckily there are several helpers to give LINQ a nudge toward performing the correct logic. No different than bad TSQL or ABL needing proper hints or patterns to make them perform better.

Knowing nothing about the assumed needs and sizes…
One guess would be to retrieve all ttLaborDtl records that match your where clause and create a new object that contains just the fields of interest in the ‘do stuff’ and any columns used to join to Db.JobHead. That will give you a subset collection of records whose shape is just the columns you care about. Hopefully this collection is small - if it is, I might put a ‘ToList()’ on the ttLaborDtl subset and place it into a new object, which will reduce the size the join has to scan.
Next, I would do the Db.JobHead query, but instead of the join approach I’d use a TSQL ‘in’ approach - the LINQ Contains method.
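
A rough sketch of that approach, building on the snippets earlier in the thread (the projected columns are placeholders - keep only whatever ‘do stuff’ really needs):

// 1. Pare the tt rows down to just the columns of interest and
//    materialize them once with ToList(), so the collection is small.
var ttRows = (from tt in ttLaborDtl
              where string.Equals(tt.RowMod, IceRow.ROWSTATE_UPDATED, StringComparison.OrdinalIgnoreCase)
                 && (tt.LaborQty > 0m || tt.ScrapQty > 0m || tt.DiscrepQty > 0m)
              select new { tt.Company, tt.JobNum }).ToList();

// 2. One round trip to the db: Contains becomes a TSQL IN, and the
//    projection pulls back only the JobHead columns actually used.
var jobNums = ttRows.Select(r => r.JobNum).Distinct().ToList();
var jobs = (from jh in Db.JobHead
            where jobNums.Contains(jh.JobNum)
               && string.Compare(jh.JobType, "MFG", true) == 0
            select new { jh.Company, jh.JobNum }).ToList();

// 3. Match the two small lists in memory.
foreach (var tt in ttRows)
{
    if (jobs.Any(j => j.Company == tt.Company && j.JobNum == tt.JobNum))
    {
        /* do stuff */
    }
}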

If you browse Stack Overflow for ‘LINQ SQL IN’ you’ll find some interesting reading. dotNET collections are extremely powerful and more often than not misused. IEnumerable, IList and IQueryable are good bedtime reading topics.
Give as many details as you can and I’ll try to break away from some honey-dos to answer more precisely.


Bart,

Awesome response! Is it possible to give an example of how you would pare down a ttDataSet, create that new object with a reduced column set, and plug it into a list?

Your participation and insight is greatly appreciated here.

Thanks!

Rob

I always start with this example when teaching our devs.

Note the same query done two ways. The first is ‘vanilla’ and the second wrapped with a ‘ToList()’.
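
Something along these lines - a minimal sketch of the two forms described, with an illustrative Db.JobHead query rather than the original example:

// Form 1: 'vanilla' - stays a deferred IQueryable/IEnumerable; no rows
// are fetched from the db until something enumerates it.
var openJobs = from jh in Db.JobHead
               where jh.JobComplete == false
               select jh;

// Form 2: wrapped with ToList() - the query runs immediately and every
// matching row is materialized into a concrete List<T> in app-server memory.
var openJobsList = (from jh in Db.JobHead
                    where jh.JobComplete == false
                    select jh).ToList();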

The first has an advantage in that it only loads rows into memory as they are needed. It stays an IEnumerable.

The second changes the type to a List by spinning up a new object and materializing all the data from the db query into a concrete List. Great for perf, but if a million records are in the query, you just took out all available RAM on your app server.

One other issue is that the List object is bouncing around in memory until the dotNET garbage collector comes along and cleans it up. If you have a ton of objects lying around, then your server spends all day collecting memory instead of serving up data to the clients.

Anecdote - we messed this up in the first internal proof of concept of ICE3 years ago at MS. We spent over 30% of the app server's time garbage collecting. Three-line fix; scale and perf skyrocketed. I love simple oops.

So make your usage of collections an intentional choice, balancing the trade-off between memory usage, perf, and the number of objects in memory.


Awesome sauce. Thanks for all of the details.

I was having performance issues in my BPM as well, and @josecgomez.trigemco had suggested that I move to a different method of linking tables outside of LINQ within my BPM. Once I did, performance increased several-fold.
