Troubleshooting intermittent database errors

It recently came to my attention that we have a bunch of post-processing BPMs updating the database incorrectly. The BPMs do this:

Db.Validate();
Db.SaveChanges();   // note: no transaction scope around either call

This intermittently results in an EpicorServerException with the message

Error accessing the database: Transaction (Process ID 308) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

I don’t know if REST has anything to do with it, but I’ve never seen these errors in the UI, and I haven’t found a log entry that I’m sure didn’t come from a REST call.
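(The message itself says to rerun the transaction, and I could mask the symptom with a retry wrapper along the lines of the sketch below. RunWithDeadlockRetry and IsDeadlockVictim are my own names, not Epicor API, and I’d rather fix the root cause anyway.)

// Hypothetical helper: rerun work when SQL Server picks us as the
// deadlock victim (error 1205). Sketch only.
private static void RunWithDeadlockRetry(Action work, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            work();
            return;
        }
        catch (Exception ex)
        {
            if (attempt >= maxAttempts || !IsDeadlockVictim(ex)) throw;
            System.Threading.Thread.Sleep(100 * attempt);   // brief backoff before rerunning
        }
    }
}

private static bool IsDeadlockVictim(Exception ex)
{
    for (var e = ex; e != null; e = e.InnerException)
    {
        var sql = e as System.Data.SqlClient.SqlException;
        if (sql != null && sql.Number == 1205)
            return true;
    }
    return false;
}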

I started fixing this by changing one BPM. First I changed it to only call Db.Validate(). This didn’t change the frequency of errors from that BPM, but changed the type of error to a System.Transactions.TransactionInDoubtException with the message

Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
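(For reference, the entire custom-code block at that point was just the single call below; still no explicit transaction scope.)

// Intermediate attempt: SaveChanges() removed, nothing else changed
Db.Validate();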

Then I changed it to what I think is the “right way”, using a transaction scope and LockHint and calling Db.Validate() and tx.Complete() as shown in the Converted Code Programming Guide and Luke McAvoy’s blog post. I’m still getting intermittent errors from that BPM, but now it’s a System.Transactions.TransactionAbortedException with the message

The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.

The weird thing is, neither the TransactionInDoubtException nor the TransactionAbortedException seems to result in any data loss. So I’m not even sure it’s my update that’s generating these errors. I imagine there’s some kind of enclosing transaction scope surrounding the execution of the BPM, and that enclosing transaction may be what’s failing.
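That theory at least has the right shape in plain System.Transactions, with no Epicor involved: an inner scope’s Complete() is only a vote, so if the enclosing root transaction aborts, the inner work rolls back too and the exception surfaces at the root scope. A minimal standalone sketch:

using System;
using System.Transactions;

class EnclosingScopeDemo
{
    static void Main()
    {
        try
        {
            using (var outer = new TransactionScope())       // stands in for the framework's enclosing scope
            {
                using (var inner = new TransactionScope())   // stands in for the BPM's own scope
                {
                    inner.Complete();                        // our part "succeeded" -- only a vote, nothing commits yet
                }
                Transaction.Current.Rollback();              // something else dooms the enclosing transaction
                outer.Complete();
            }                                                // root Dispose() throws TransactionAbortedException
        }
        catch (TransactionAbortedException ex)
        {
            Console.WriteLine(ex.Message);                   // and no data was written, matching what I'm seeing
        }
    }
}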

So I wonder if I’m still getting errors because of interactions with BPMs that are updating the database the wrong way. Before I fix them, can someone confirm that this is the right way to do it?

using(var tx = IceDataContext.CreateDefaultTransactionScope())
{
  // UpdLock takes the update lock on the read, so we don't deadlock upgrading a shared lock later
  var dbJob = Db.JobHead.With(LockHint.UpdLock).First(o => o.JobNum == ttJob.JobNum);
  dbJob.ProdQty = someValue;
  Db.Validate(dbJob);
  tx.Complete();
}

This is the syntax, but please don’t do this. Use the JobEntry business object instead.

using(var tx = IceDataContext.CreateDefaultTransactionScope())
{
  var dbJob = Db.JobHead.With(LockHint.UpdLock).First(o => o.JobNum == ttJob.JobNum);
  dbJob.ProdQty = someValue;
  Db.Validate();
  tx.Complete();
}

@Jason_Woods I’m not sure what you mean. The purpose of this particular BPM is to override the built-in calculation of ProdQty. Calling Update after adding a JobProd row, for example, recalculates the undesired value, and sometimes triggers an apparent bug where ProdQty exactly doubles for no reason I can discern. A post-processing BPM was the only way I found to set ProdQty reliably. The reasons we need to do that are a whole other can of worms. Maybe I should revisit that, but not yet.

You are using code to do a direct table update. Instead, you will want to use the JobEntry business object to make the change. For example:

JobEntry.GetByID(myJob);
JobEntry.JobEntryData.JobProd[0]["ProdQty"] = myNewValue;
JobEntry.Update();

Note this isn’t the exact code, but it is the right process.
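Something closer to compilable BPM custom code would look like the sketch below. It assumes the standard Epicor 10 service-contract pattern; verify the contract and tableset member names against your version before relying on it.

// Sketch: drive the change through the JobEntry business object from
// BPM custom code, so the BO handles its own validation and locking.
var jobEntry = Ice.Assemblies.ServiceRenderer.GetService<Erp.Contracts.JobEntrySvcContract>(Db);
var ds = jobEntry.GetByID(ttJob.JobNum);   // load the full JobEntry tableset
var prodRow = ds.JobProd[0];               // pick the row you actually need
prodRow.ProdQty = myNewValue;
prodRow.RowMod = "U";                      // mark the row modified so Update picks it up
jobEntry.Update(ref ds);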


@Jason_Woods Thanks, I’ll try that. The problem we ran into before was that ProdQty would just get recalculated whenever we called Update. The reason we’re trying to override it is that our JobHead.PartNum is a part-master part, but the job produces non-part-master parts. So JobHead.ProdQty may be something like 1,000 sheets of paper, and there’s a co-part reflecting that. But the job actually produces 100,000 printed parts. Epicor keeps trying to set JobHead.ProdQty to 1,000 + 100,000 = 101,000, which is wrong for our use case. This is tantalizingly similar to the example in the docs about stamping washers from sheets of metal, but in that example the washers are also part-master parts. I forget why the solution in the docs didn’t work for us.