Method Directive Custom Code

I have a Post-Processing Method Directive on Erp.MasterPack.GetShipDetails, created by someone else, that is supposed to update the Number01 field in MasterPackDtl with ShipHead.Weight. The code below produces a result of 0.00 as written. If I hard-code a number in place of ShipHeadRow.Weight, that number is written to the Number01 field. I am assuming the code is not making the required connection to the ShipHead table; can someone point me in the right direction?

Erp.Tables.ShipHead ShipHead;
Erp.Tables.MasterPackDtl MasterpackDtl;

foreach (var masterPackDtlRow in ttMasterPackDtl)
{
    var ShipHeadRow = Db.ShipHead.FirstOrDefault(x => x.PackNum == masterPackDtlRow.DtlPackNum && x.Company == masterPackDtlRow.Company);

    if (ShipHeadRow != null)
        masterPackDtlRow.SetUDField("Number01", ShipHeadRow.Weight);
}

Is this Epicor 9.05 or 10?

Erp table refs, so 10.

If you replace ShipHeadRow.Weight with a number and it shows up, then you are getting a row back, since your ShipHeadRow != null condition must be true for that to work. Try writing some of the values to the event viewer and see what you are actually getting back in ShipHeadRow:

Ice.Diagnostics.Log.WriteEntry($"{Newtonsoft.Json.JsonConvert.SerializeObject(ShipHeadRow)}");

Copy and paste the result from the event viewer into this site to make it more readable: https://jsonformatter.curiousconcept.com/

You might also try optimizing your code to return only the one Weight field that you want instead of the entire record:

foreach (var masterPackDtlRow in ttMasterPackDtl)
{
    // Casting to decimal? makes FirstOrDefault return null (instead of 0) when no row matches
    decimal? weight = Db.ShipHead
        .Where(x => x.PackNum == masterPackDtlRow.DtlPackNum && x.Company == masterPackDtlRow.Company)
        .Select(x => (decimal?)x.Weight)
        .FirstOrDefault();
    if (weight != null)
        masterPackDtlRow.SetUDField("Number01", weight.Value);
}

Another option sometimes is skipping the SetUDField command and referencing the value directly:

masterPackDtlRow["Number01"] = weight;

or maybe even:

masterPackDtlRow.Number01 = weight;

Be careful modifying the DB directly.

Add this at the end:
Db.Validate();

Actually, that is dangerous too. Testing is the critical aspect.
Why dangerous?
Db.Validate flushes the changes to the DB, along with everything else in its cache. That can have timing side effects.

DBContext 201…
The DB context is a whole novel of its own, but to simplify: it has all the db connection goo, a collection of caches for each table in the database (i.e., the data model), methods to query and update the db, transaction handling, the data triggers, etc.

The important part in all that for this context (pun intended) is that collection of table caches. Look at one of the queries above. When it executes, Db.ShipHead holds a cached copy of that row in memory - along with, across all the tables in memory, any other data that was updated before the BPM intercepted the server logic.

If you do a Db.Validate(), ALL tables are flushed to disk, not just the row you updated in your BPM. This can cause interesting side effects if there is further processing down the server call that assumed a record was in memory.

The safer thing to do is to touch only your own data: do a Db.Validate(the_Row_I_Changed).

The trade-off is that when you are doing something in a loop like the one above, you can pummel the db with many writes. In some cases it is safer to do nothing with the Db context: at the end of the server call, pending saves are flushed to the db for you, which may be faster than doing an explicit flush to disk row by row.
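
To make that concrete, here is a minimal sketch of the targeted form inside a BPM custom code block (somePackNum and newWeight are placeholders I made up):

int somePackNum = 12345;   // placeholder value
decimal newWeight = 42m;   // placeholder value

var theRowIChanged = Db.ShipHead.FirstOrDefault(x =>
    x.Company == Session.CompanyID && x.PackNum == somePackNum);
if (theRowIChanged != null)
{
    theRowIChanged.Weight = newWeight;
    // Db.Validate() with no arguments would flush EVERY cached table;
    // passing the row flushes only this one.
    Db.Validate(theRowIChanged);
}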

Which way to go? Test. Measure. Know the data you updated.

8 Likes

In this case, I don't believe a Db.Validate is needed, because the row being modified is a tt record… This is a post-processing Method BPM… I believe (but could be wrong) that this is running after a "get details" but before the "update".

You are absolutely right; I misread it and thought he was updating the Db, when instead he was updating the tt from the Db.

However, that is the issue: he is modifying a tt in a POST, which will have no effect. Either this will need to be a Pre-Proc, or he will need to write a value to the Db.

Btw, I had no idea you could target specific records with Db.Validate. Neat!
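
If the write-to-the-Db route is the way to go, a rough sketch of the original loop reworked to update the database rows might look like this (the key match on the MasterPackDtl row is an assumption - verify it against your schema):

foreach (var ttDtl in ttMasterPackDtl)
{
    // Key match here (Company/PackNum/DtlPackNum) is an assumption
    var dbDtl = Db.MasterPackDtl.FirstOrDefault(d =>
        d.Company == ttDtl.Company &&
        d.PackNum == ttDtl.PackNum &&
        d.DtlPackNum == ttDtl.DtlPackNum);
    var shipHead = Db.ShipHead.FirstOrDefault(s =>
        s.Company == ttDtl.Company &&
        s.PackNum == ttDtl.DtlPackNum);
    if (dbDtl != null && shipHead != null)
    {
        // Depending on release, the UD field may be a direct property (e.g. Number01_c)
        // on the Db row rather than reachable through SetUDField
        dbDtl.SetUDField("Number01", shipHead.Weight);
        Db.Validate(dbDtl); // targeted flush, as discussed above
    }
}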

3 Likes

Or invoke the Update method and pass the required parameters for the BO, either with code or with a widget.
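
In code, that might look roughly like this (a sketch only - the contract name and GetByID key assume the MasterPack BO; adjust for your case):

// Sketch: let the BO do the write instead of touching Db directly
using (var svc = Ice.Assemblies.ServiceRenderer.GetService<Erp.Contracts.MasterPackSvcContract>(Db))
{
    var ttDtl = ttMasterPackDtl.FirstOrDefault();
    if (ttDtl != null)
    {
        var ds = svc.GetByID(ttDtl.PackNum); // assumes PackNum is the BO key
        // ... modify the rows in ds here ...
        svc.Update(ref ds);
    }
}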

1 Like

Updating TT in a POST can SOMETIMES be correct… it depends on the method being discussed. Typically any "Update" method does not work to update in the POST process, and should be updated in the PRE…
But another example is "Customer.GetNew" (or any GetNew)… if you try to do anything in PRE, there is no record yet to update, so you must do it as a post-process (sketched below). You are basically saying: go get the new customer record, THEN I will update some defaults (in the post-process).
I am not certain what the Erp.MasterPack.GetShipDetails process does, but "PRE" probably doesn't have the records in the TT yet; post would be the only time it would have something.
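
For illustration, a post-processing sketch on Customer.GetNewCustomer might look like this (the default values chosen here are made up - the pattern is the point):

// The new row only exists in the tt AFTER the method runs, so defaults go in POST
var newCust = ttCustomer.FirstOrDefault(c => c.Added());
if (newCust != null)
{
    newCust.CurrencyCode = "USD";  // example default
    newCust.CreditHold = false;    // example default
}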

1 Like

You make an excellent point. Some methods update info (which requires the things I mentioned in my last post), and some methods retrieve data. Those that retrieve data can have the TT modified for effect.

@bcrile did you ever get your issue sorted?

How do transactions come into play with this method? Let's say you are modifying the job materials on a JobHead update: do you wrap the iteration of material rows in a using transaction, call Db.Validate(the_Row_I_Changed) on each row, and then call tx.Complete() afterward? Or is the transaction unnecessary?
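
Roughly, the pattern I mean is this (a sketch - someJobNum is a placeholder, and CreateDefaultTransactionScope is the helper the Converted Code Programming Guide shows):

string someJobNum = "12345"; // placeholder
using (var txScope = IceContext.CreateDefaultTransactionScope())
{
    foreach (var mtl in Db.JobMtl.Where(m =>
        m.Company == Session.CompanyID && m.JobNum == someJobNum))
    {
        mtl.RequiredQty *= 2m; // placeholder modification
        Db.Validate(mtl);      // targeted flush per row
    }
    txScope.Complete(); // commit; disposing without Complete rolls back
}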

I've been experimenting with this because I just learned that we've been doing database updates all wrong, which I think explains the deadlock errors we see in the logs. This is my new experimental post-processing BPM on SalesOrder.Update. Our real use case is to override the built-in calculation of JobHead.ProdQty. That's why I'm experimenting with changing values on post, which, as someone noted upthread, may be a little unusual.

// temp table should just contain the updated values
// if this were on pre we'd take ttOrderHed.First(o => o.Updated() || o.Added())
// because there would be another row containing the old values
var ttHed = ttOrderHed.First();
// get the corresponding db row
var dbHed = Db.OrderHed.First(h => h.OrderNum == ttHed.OrderNum);
// change it
dbHed.OrderComment = "Updated " + DateTime.Now.ToString();
// save it - no args validate considered harmful
Db.Validate(dbHed);
// update temp row so change is reflected in ui
ttHed.OrderComment = dbHed.OrderComment;
// avoid the dreaded row changed by other user
ttHed.SysRevID = BitConverter.ToInt64(dbHed.SysRevID.Reverse().ToArray(), 0);

@caleb.grundmeier My tests have not shown a need to enclose this in a transaction. Perhaps someone more knowledgeable can shed some light on when you would want to use an explicit transaction scope as shown in the Converted Code Programming Guide.

Update: I learned that Update is also called on delete, so watch out for that. I’m not sure if the temp table is empty in that case, or contains a row with RowMod “D”.

Kevin,
I gotta warn other users and let them know this is definitely NOT a great approach.

You should never hack around the SysRevID manually; that is a good way to get yourself in trouble. SysRevID is there for a reason, and if the code you are writing is causing errors such as "Record has been modified", then changing the code is a better alternative to hacking SysRevID.

You should avoid doing direct DB updates (EVER) if at all possible; use the business objects to do all your writes. If you MUST write directly using the Db object, then make sure you refresh your dataset by calling the business object's GetByID again and using the this.dsHolder.Attach method to "refresh" the returned dataset. This is referenced in the Programmers Guide available on EpicWeb.
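
Roughly (a sketch - the contract and key here assume a SalesOrder post-processing directive; adjust for your BO):

// Re-fetch via the BO and attach the fresh dataset so the client sees current data
using (var svc = Ice.Assemblies.ServiceRenderer.GetService<Erp.Contracts.SalesOrderSvcContract>(Db))
{
    var refreshed = svc.GetByID(ttOrderHed.First().OrderNum);
    this.dsHolder.Attach(refreshed);
}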

4 Likes

@josecgomez Yes, I can see how this might cause problems. If the update triggered a data directive that modified some other field, then the data in the temp table would not match the database row with the same SysRevID.

Unfortunately we cannot avoid direct updates because the business object calculation of JobHead.ProdQty does not accommodate our use case. Epicor requires that custom parts have the same UOM as the job (annoying but workable) and then assumes that the UOMs on the job and part are measuring the same thing (totally wrong in our case).

Your situation sounds like it might warrant tracking multiple UOMs. I think if you track multiple UOMs, you can produce in a different UOM than you receive to. Can anyone else confirm that? I'm curious what your situation is such that your part UOM and finished assembly UOM are not the same, or would not be translatable through a conversion. What do you make?

@jgiese.wci Printed materials. Our job head parts are sheets of paper. Each sheet of paper produces some number of printed parts. The final parts are mostly custom one offs, so not in the part master, and they’re different sizes so every job produces a different number of them per sheet. Epicor keeps trying to set JobHead.ProdQty to the number of final parts instead of the number of sheets.

Why wouldn’t you set the quantity per on the parts?

Your job head part should be the final part number. The sheets should be a material on the job.

@Banderson The final parts are custom one offs so not in the part master. And each job produces hundreds of different final parts.