I submitted a case to Epicor and of course they told me that it was my BPM, even though the BPM runs correctly. Why should Epicor be allowed to say that it is my problem when, in my opinion, a memory leak should not be allowed to happen in any ERP system?
Below are the Epicor support case details. Any help would be appreciated.
Thanks,
Richard
Description
I am triggering a method directive in our 10.2.300.5 test database. The BPM completes successfully with no errors, and the result is what it was designed to do. However, memory is totally used up by the IIS Worker Process on the Epicor server, the production database no longer works, and the Epicor server has to be rebooted. This, of course, is a huge problem because the production environment is no longer usable for the users.
Please help!!!
Thanks,
Richard
The Epicor response was:
Thank you. As this is a custom BPM that needs to be either recreated or updated to work properly, please work with our Pro Services team if you do not have access to the creator of the BPM.
Knowledge Article KB0071926 - Submitting a Professional Services Request on the EpicCare Portal (Video)
I cannot afford to take a chance on bringing the production database down while I try to figure out how a working BPM is causing a memory leak on the Epicor server.
If it’s not their base code, they aren’t responsible for any issues created; I think that’s fair. Now, if it’s something in the BPM framework causing the memory leak, that’s a different story, but then it’s just a game of proving to them that it’s an issue with the framework. What is the BPM and what does it do? Maybe we can help you fix it.
You are awesome for helping. I back up data from the PartRev table into the UD14 table. The workflow is: check out the PartRevs, change data in the PartOpr, check the PartRevs back in. When I check in the PartRevs, Epicor changes the original approved-by, approved date, and approved status; the code that is causing the memory leak restores this data from the UD14 table back into the PartRev table. The code is below.
Thank you !
Richard
```csharp
// restore partrev
using (var txScope = IceContext.CreateDefaultTransactionScope())
{
    Ice.Tables.UD14 ud14 = null;
    Erp.Tables.PartRev partrev = null;
    Erp.Tablesets.PartTableset partdb = null; // declaration was missing from the original paste

    foreach (var ud14_iterator in (from ud14_Row in Db.UD14.With(LockHint.NoLock)
                                   where ud14_Row.Key3 == "burev"
                                   select ud14_Row))
    {
        if (ud14_iterator != null)
        {
            ud14 = ud14_iterator;
            partrev = (from partrev_Row in Db.PartRev.With(LockHint.NoLock)
                       where partrev_Row.PartNum == ud14.Key1
                             && partrev_Row.RevisionNum == ud14.Key2
                             && partrev_Row.AltMethod == ud14.Key4
                       select partrev_Row).FirstOrDefault();

            using (Erp.Contracts.PartSvcContract partSvc = Ice.Assemblies.ServiceRenderer.GetService<Erp.Contracts.PartSvcContract>(Db))
            {
                partdb = partSvc.GetByID(partrev.PartNum);
            }

            for (int i = 0; i < 100000; i++)
            {
                if (partdb.PartRev[i].PartNum == partrev.PartNum
                    && partdb.PartRev[i].RevisionNum == partrev.RevisionNum
                    && partdb.PartRev[i].AltMethod == partrev.AltMethod)
                {
                    // restore the approval fields saved in UD14
                    partdb.PartRev[i].ApprovedBy = ud14.Key5;
                    partdb.PartRev[i].ApprovedDate = ud14.Date01;
                    partdb.PartRev[i].Approved = ud14.CheckBox01;
                    partdb.PartRev[i].RowMod = "U";

                    using (Erp.Contracts.PartSvcContract partSvc = Ice.Assemblies.ServiceRenderer.GetService<Erp.Contracts.PartSvcContract>(Db))
                    {
                        partSvc.Update(ref partdb);
                    }

                    i = 100000; // force the loop to exit after the first match

                    // stamp the UD14 row with the restore time
                    DateTime now = DateTime.Now;
                    string asString = now.ToString("yyyy MM dd HH:mm:ss:ffff");
                    ud14.Character08 = asString;
                    Db.Validate();
                }
            }
        }
    }
    txScope.Complete();
}
```
You are looping through every PartRev and creating two instances of the Part service for each loop iteration, relying on garbage collection to clean up…
You should create a single instance of the service outside the loop and reuse it. That alone will fix your memory leak.
Furthermore, you are running through an open cursor on PartRev with NOLOCK, meaning you are reading dirty rows. You should “NEVER” (or rarely) loop through a Db context like that; you are much better suited grabbing the fields you need, returning them to memory (ToList()), and then executing your loop outside of SQL.
Also, you are seemingly arbitrarily counting to 100000; if the PartRev is never found, I’m not sure I follow what your logic is there.
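To illustrate the ToList() point, here’s a minimal sketch (field names borrowed from the snippet above; thrown together and untested, so verify in a test environment):

```csharp
// Materialize only the fields you need into memory in one SQL round trip,
// then loop over the in-memory list instead of holding an open cursor.
var backups = (from r in Db.UD14
               where r.Key3 == "burev"
               select new { r.Key1, r.Key2, r.Key4, r.Key5, r.Date01, r.CheckBox01 })
              .ToList(); // the query executes here and the reader is closed

foreach (var b in backups)
{
    // work with b.Key1, b.Key2, etc. without touching the Db context mid-read
}
```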
Last, you are using a transaction scope while at the same time reading UD14 with no lock and then updating UD14. I’m not even sure that data is getting committed the way you expect.
You should do something like this:
1. Start a transaction scope (if you’d like rollback capabilities)
2. Create one instance of the Part service
3. Find all relevant records in UD14 and retrieve them from the Db into a List (do not use NOLOCK)
4. For each UD14 row, run GetByID on the Part record
5. Use a LINQ query on your Part dataset to find the appropriate matching rev (no need to loop 100000 times)
Here are some changes to the code to make it more efficient and to prevent potential memory leaks. Thrown together, mind you, so test lots. I would agree with Epicor on this one: it’s a your-company problem, not a them problem. Though the BPM might function, it would for sure be the source of your issues.
Remove the for loop, which iterates 100000 times unnecessarily. Use a LINQ query instead to find the PartRev that matches the specified conditions.
Only one PartSvcContract instance is necessary; the original code needlessly creates a new instance on every iteration of the loop, which can cause a memory leak if the instances are not properly disposed. You can dispose it manually as below, or use a using statement, which will auto-dispose. Either works.
The txScope.Complete() call should be the last statement inside the using block, so the transaction commits only when all the work succeeds and still rolls back if an exception is thrown first.
Do not use LockHint. 99% of the time there is no use case for it, and you open yourself up to dirty reads.
In this modified version, partSvc is instantiated once before the foreach loop and then disposed in a finally block, so the object is correctly disposed even if an exception is thrown. The for loop has been replaced with a LINQ query that finds the specific PartRev matching the required conditions. The txScope.Complete() call is the last statement inside the using block, so the transaction commits only when the loop finishes without an exception.
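Since the modified code itself didn’t make it into the post, here is a sketch of what that version might look like (same Epicor types, table keys, and service names as the original snippet; an untested reconstruction, so test thoroughly outside production):

```csharp
// Restore PartRev approval fields from UD14 backups (sketch; run in a test BPM first).
using (var txScope = IceContext.CreateDefaultTransactionScope())
{
    Erp.Contracts.PartSvcContract partSvc = null;
    try
    {
        // One service instance, reused for every row.
        partSvc = Ice.Assemblies.ServiceRenderer.GetService<Erp.Contracts.PartSvcContract>(Db);

        // Materialize the UD14 rows first so we are not iterating an open cursor.
        var ud14Rows = (from r in Db.UD14
                        where r.Key3 == "burev"
                        select r).ToList();

        foreach (var ud14 in ud14Rows)
        {
            var partdb = partSvc.GetByID(ud14.Key1);

            // Find the matching rev with LINQ instead of an indexed loop.
            var rev = partdb.PartRev.FirstOrDefault(pr =>
                pr.PartNum == ud14.Key1
                && pr.RevisionNum == ud14.Key2
                && pr.AltMethod == ud14.Key4);

            if (rev == null) continue;

            // Restore the approval fields saved in UD14.
            rev.ApprovedBy = ud14.Key5;
            rev.ApprovedDate = ud14.Date01;
            rev.Approved = ud14.CheckBox01;
            rev.RowMod = "U";
            partSvc.Update(ref partdb);

            // Stamp the UD14 row so we know it has been restored.
            ud14.Character08 = DateTime.Now.ToString("yyyy MM dd HH:mm:ss:ffff");
            Db.Validate();
        }

        // Commit only if everything above succeeded.
        txScope.Complete();
    }
    finally
    {
        // Dispose the single service instance even if an exception was thrown.
        if (partSvc != null) partSvc.Dispose();
    }
}
```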