E10 BPM Slow due to Join?

10.1.400.15/SQL2014

Hello all,

I have recently been tasked with going through the code in our BPMs to figure out why certain ones are taking so long. I originally ran diagnostics on some server logs and found two BPMs that take anywhere from 20 to 80 seconds to run. The code was originally written for us by a company we no longer have a relationship with, so reaching out to them to optimize it is not on the table.

The code is doing multiple joins from a tt table, and I believe this is what is causing the slowdown. I am not sure at this point whether it would be easier to completely redo the code or to salvage and rewrite what we have. The only problem is that I haven't been working with Epicor for long and am not very proficient in writing C# yet.

I will paste the code below to see if there are any ideas as to which way I should be going with this.

//string messageText = string.Empty;
foreach(var tt in (from ttrow in ttShipTo select ttrow))
{
	double TotalBillToDate = 0, TotalOrderToDate = 0;
	foreach(var st in (from strow in Db.ShipTo where strow.Company == tt.Company && strow.CustNum == tt.CustNum && strow.ShipToNum == tt.ShipToNum select strow))
	{
		int CurrentOrder = 0;
		// Sum each invoice's header amount once: rows are ordered by InvoiceNum,
		// so repeated detail lines for the same invoice are skipped via CurrentOrder.
		foreach(var i in (from irow in Db.InvcDtl
				join hrow in Db.InvcHead on new {irow.InvoiceNum, irow.Company} equals new {hrow.InvoiceNum, hrow.Company}
				where
				irow.Company == tt.Company &&
				irow.CustNum == tt.CustNum &&
				irow.ShipToNum == tt.ShipToNum
				orderby irow.InvoiceNum
				select new {irow.Company, irow.InvoiceNum, hrow.DocInvoiceAmt, hrow.CreditMemo}))
		{
			if(CurrentOrder == i.InvoiceNum) continue;
			TotalBillToDate += Convert.ToDouble(i.DocInvoiceAmt);
			CurrentOrder = i.InvoiceNum;
		}

		// Same pattern for orders: sum each order's charges once per OrderNum.
		// (Note that CurrentOrder is not reset between the two loops.)
		foreach(var i in (
				from irow in Db.OrderHed
				join drow in Db.OrderDtl on new {irow.Company, irow.OrderNum} equals new {drow.Company, drow.OrderNum}
				join rrow in Db.OrderRel on new {drow.Company, drow.OrderNum, drow.OrderLine, tt.ShipToNum} equals new {rrow.Company, rrow.OrderNum, rrow.OrderLine, rrow.ShipToNum}
				where irow.Company == tt.Company && irow.ShipToCustNum == tt.CustNum
				orderby irow.OrderNum
				select new { rrow.ShipToNum, rrow.OrderNum, DocTotalCharges = irow.DocTotalCharges + irow.DocTotalTax, irow.DocTotalMisc }))
		{
			if(CurrentOrder == i.OrderNum) continue;
			TotalOrderToDate += Convert.ToDouble(i.DocTotalCharges + i.DocTotalMisc);
			CurrentOrder = i.OrderNum;
		}
		// Write the computed totals to the UD fields on both the Db row and the tt row.
		st["TotalBilledToDate_c"] = Convert.ToDecimal(TotalBillToDate);
		st["TotalOrderedToDate_c"] = Convert.ToDecimal(TotalOrderToDate - TotalBillToDate);
		tt["TotalBilledToDate_c"] = Convert.ToDecimal(TotalBillToDate);
		tt["TotalOrderedToDate_c"] = Convert.ToDecimal(TotalOrderToDate - TotalBillToDate);
	}
}

At a glance, the first thing that screams at me is the continue statements.

They are a code smell. If you are doing an if-and-continue, that means you just pulled all that data back from the db into the app server only to throw it away when you don't need it. I'd probably look at adding the condition to the where clause:

from irow in Db.InvcDtl
join hrow in Db.InvcHead on new { irow.InvoiceNum, irow.Company } equals new { hrow.InvoiceNum, hrow.Company }
where
irow.Company == tt.Company &&
irow.CustNum == tt.CustNum &&
irow.ShipToNum == tt.ShipToNum
//Something like this → && irow.InvoiceNum != CurrentOrder
orderby irow.InvoiceNum
select new { irow.Company, irow.InvoiceNum, hrow.DocInvoiceAmt, hrow.CreditMemo }))
{
//Then this can be deleted → if (CurrentOrder == i.InvoiceNum) continue;

The next issue would be pulling extra columns that are not needed: Company and CreditMemo are not needed in the first query, and ShipToNum is not needed in the second. It's nitpicking, but I like to trim my queries against SQL as much as possible.
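For example, trimmed down to just the columns the loops actually read, the two projections could shrink to:

// first query
select new { irow.InvoiceNum, hrow.DocInvoiceAmt }

// second query
select new { rrow.OrderNum, DocTotalCharges = irow.DocTotalCharges + irow.DocTotalTax, irow.DocTotalMisc }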

The other oddity I see is that you are looping over a collection of ttShipTo rows. How many duplicates do you expect versus unique data in those inner queries? Whenever I look at looping, I always worry I am retrieving the same data on every pass. I don't think that applies to the data you have here, but it always factors into my thinking. It may be more appropriate to retrieve all the lookup data in one shot and then iterate over it, as in the sketch below.
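A rough sketch of that one-shot idea for the invoice half. It assumes all the tt rows share a single Company and CustNum (the per-row filters in the original suggest so, but verify), and Distinct() stands in for the original's skip-repeated-InvoiceNum trick:

var firstTt = ttShipTo.First();
var billedByShipTo =
    (from irow in Db.InvcDtl
     join hrow in Db.InvcHead
       on new { irow.InvoiceNum, irow.Company } equals new { hrow.InvoiceNum, hrow.Company }
     where irow.Company == firstTt.Company && irow.CustNum == firstTt.CustNum
     select new { irow.ShipToNum, irow.InvoiceNum, hrow.DocInvoiceAmt })
    .Distinct()                                  // one row per invoice per ship-to
    .ToList()                                    // single round trip to SQL
    .GroupBy(x => x.ShipToNum)
    .ToDictionary(g => g.Key, g => g.Sum(x => x.DocInvoiceAmt));

// Each tt row then becomes a dictionary lookup instead of its own query
// (TryGetValue covers ship-tos with no invoices).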

I have not even considered the query itself, or the joins and how they align with indexes - that's often an issue.

wasn’t there some hubbub about JOINING to a TT table and how it’s about the same as global thermonuclear war??? does this apply here?


That was in a BPM Query. In Custom CODE it doesn’t have the same issue.

At least that was the answer I got when I asked the same question.

@knash that same issue does apply to custom code. You should NEVER EVER NEVER join to a ttTable. Weren’t you at our talk? Did ya fall asleep? LoL
In this case, though, the joins are not to the ttTable; they are only using primitives from the ttRecord in the where clause, and that is OK.


That joining bug on tt only applies if you do the joins on the tt itself.

If you do var x = ttTable;

and use x at the bottom to join, you will be fine; just don’t put your joins at the beginning. He is not doing that, so he should be okay:

foreach(var tt in (from ttrow in ttShipTo select ttrow))

But perhaps try to use .ToList() on a few of them.
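e.g., a minimal sketch of that pattern, using the names from the code above:

var shipTos = ttShipTo.ToList();   // materialize the tt rows into a plain list first
foreach (var tt in shipTos)
{
    // Inside the loop, only primitives from the tt row (tt.Company, tt.CustNum,
    // tt.ShipToNum) appear in the Db queries' where clauses - never a join to
    // ttShipTo itself - so each query still translates cleanly to SQL.
}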


ok ok …


I was not able to attend. Is there a recording or presentation summary? Thank you.

not to be Robin…

What about a TT-to-TT join? Still bad?

I understand that you are creating an on-demand view, which adds overhead.

TT to TT is fine; they are small in-memory POCOs. Have a blast!

Here are the slides, @asmar. There was a LOT more covered in the Q&A conversation; unfortunately I don’t have that… Someone recorded it, but they haven’t sent it to me yet.
E10Help Panel.pdf (1.3 MB)


Also a few months ago - my exchange with Bart/Jose and Jose’s screenshots of testing =)

[Screenshot 1: execution time was 90 ms]

[Screenshot 2: execution time was 0 ms]

“So it appears the assumption I made was correct, joining to the class object causes the “join” to occur in memory post SQL query so it would yield a much slower result… Good stuff!” - Jose

You need to always be aware that joining to an in memory structure is problematic. Sorry I did not review your question to dig it out.

I run into this in internal reviews as well. I still have not looked at your complete scenario but one of the things folks do a lot is push the ttrows (or some subset of columns) into the db as a ‘temp table’ and join against it. Very quick then even with millions of db records in partTran, etc type tables. The push of the data into the temp table and cleanup of records afterwards is many times faster than the ‘join to tt’
– Bart
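To make Jose's point concrete, here is a sketch of the two query shapes - my reconstruction, not the actual code from the screenshots:

// Slow shape: the tt table is the outer source, so the whole query runs as
// LINQ-to-Objects and Db.ShipTo gets enumerated into app-server memory
// before the join ever happens.
var slow = (from ttrow in ttShipTo
            join strow in Db.ShipTo
              on new { ttrow.Company, ttrow.CustNum, ttrow.ShipToNum }
              equals new { strow.Company, strow.CustNum, strow.ShipToNum }
            select strow).ToList();

// Fast shape: only primitive values from the tt row appear in the query, so
// the filter translates to SQL and runs on the database server.
foreach (var ttrow in ttShipTo)
{
    var fast = (from strow in Db.ShipTo
                where strow.Company == ttrow.Company
                   && strow.CustNum == ttrow.CustNum
                   && strow.ShipToNum == ttrow.ShipToNum
                select strow).ToList();
}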


Thanks for the insight Bart.

I went ahead and made some of the changes that you outlined, and the code was still running quite slowly. I think I am going to have to dive into this code and really break it down to figure out why it was created this way. In my opinion it is the code itself, and we may have to think about using other methods to get this information, as there has to be an easier/more efficient way that might not even require custom code.

The code was originally dictated by our credit department to our consultant at the time. It is meant to go through all the ship-tos for the current customer, grab their total ordered to date, and give us the total outstanding balance for the currently selected ship-to. We are going to talk with the credit department to hopefully clean up the naming convention, as it does not accurately reflect what it is supposed to represent.

I don’t know if the code is somehow populating the custom fields in the Erp.ShipTo_UD table for every ship-to of every customer at once. We have the same data fields used in dashboards that populate the information in seconds, yet when pulling up a customer in Customer Maintenance it hangs, sometimes up to 3 minutes in extreme cases.

It has definitely been fun going into code written by other people and trying to figure out everything they’ve created or touched. We have had a total of three different consulting companies doing things for us; my team has now been brought into the fold with Epicor and tasked with cleaning it all up as much as possible. I am going to have some further discussions with my team and see what we can come up with.

Also, thanks for the PDF from the E10 Help Panel, Jose! I attended it at Insights this year and wanted those slides as a reference in addition to the notes I took.


You should try this :)

I usually do a prototype first in LINQPad and I can Debug and see results instantly… then with minor tweaks I turn it into a BPM.
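For example, with a LINQPad connection to the Epicor database, the first query pastes in almost verbatim and can be inspected with .Dump(). Treat the table names and the CustNum value as placeholders; the exact names depend on how your connection surfaces them:

// LINQPad C# statements; 123 is just a hypothetical test customer
(from irow in InvcDtl
 join hrow in InvcHead on new { irow.InvoiceNum, irow.Company }
     equals new { hrow.InvoiceNum, hrow.Company }
 where irow.CustNum == 123
 select new { irow.InvoiceNum, hrow.DocInvoiceAmt })
.Take(50)
.Dump();   // LINQPad renders the result grid instantly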


Agree with @hkeric.wci on that prototyping approach. I would be curious how much data is in each query you are looking at. There is always the case where someone has an outlier number of records that was not expected, and it throws all of this off. Doing some breakdowns of how many records are in ttShipTo, then the first query, then the second, etc. would be interesting to see.
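For instance, something quick like this (sampling the first tt row; field names taken from the original code) would show where the volume is:

var tt = ttShipTo.First();
var ttCount = ttShipTo.Count();                      // rows in the tt table
var invcCount = Db.InvcDtl.Count(i =>                // rows feeding the invoice loop
    i.Company == tt.Company && i.CustNum == tt.CustNum && i.ShipToNum == tt.ShipToNum);
var orderCount = Db.OrderHed.Count(o =>              // rows feeding the order loop
    o.Company == tt.Company && o.ShipToCustNum == tt.CustNum);
// Write these counts out (log, message box, Dump in LINQPad) and compare.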

You can also turn on the SQL and DB Perf traces in appserver.config to get some numbers on how long it takes to execute different items.

Thanks for that, @hkeric.wci. I will look into LINQPad and use your example code posted above as a reference.

I will try prototyping it and see how it goes.

@Bart_Elia I’ll also turn on the SQL and DB traces and run through the process to get some numbers to analyze.

@IanCastellanos If you just download LINQPad (I would recommend the $99 paid edition) and then open the files in the above attachment, you should be ready to go with minimal configuration. You are welcome, no worries; let us know if we can help.

@hkeric.wci I just bought myself a copy. Was on sale this month for $89.

I will play around with it and reply if I have any questions/issues.

Appreciate all the help from everyone in this thread!

Got LINQPad 4 all set up and have run into what I think is a big problem: there seems to be no ttShipTo in our database, which makes me wonder what exactly the code is doing, unless I am missing something. That could well be the case, since Epicor is still new to me.