BPM Condition Using "DOES NOT begin with"

Related to this post:

… has anyone figured out a way to construct a BPM that simulates a “does NOT begin with” condition?

Why wouldn’t a “does not begin with” condition be available if consideration has been given for a “begins with” condition?

Could you kick your logic out on the false side of the condition instead of the true side when it passes?
Alternatively, you could write your own custom logic for this, or use the condition with the BAQ in it to evaluate the logic against a constant.

3 Likes

You’re on to something there… let me give it a try.

The code that the “Begins with” condition generates is

ds.ABCCode.Any(r => !r.Unchanged() && r.Company.StartsWith(("BLABLA"), StringComparison.OrdinalIgnoreCase));

It basically looks for the row and checks the column with .StartsWith().

Following up on @hkeric.wci’s post …

Use a "The custom Code <expression> condition is valid" condition with the expression:

return (ttABCCode.Any(r => !r.Unchanged() && r.ABCCode.StartsWith("T")) ? false : true);

Or don’t use the ( ? : ) to reverse the result, and just change the second part of the condition line to invalid.
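
For reference, here is that simpler variant: the same expression with the ternary dropped, relying on the condition line being changed to read “… condition is invalid”:

// The condition evaluates as "invalid" (i.e., the BPM takes that branch) when no changed row starts with "T".
return ttABCCode.Any(r => !r.Unchanged() && r.ABCCode.StartsWith("T"));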

Tweak it as you need.

3 Likes

Yes :slight_smile:

ttABCCode.Any(r => !r.Unchanged() && !r.ABCCode.StartsWith("T"))
2 Likes

@Aaron_Moreng / @ckrusen / @hkeric.wci:

Thanks guys. I appreciate the feedback.

Before I move forward, I wanted to address an indirectly related issue: which type of directive (Data or Method) I would need to use to apply the solution. I always seem to get confused on this and spend a lot of time in trial and error before I hit success by luck of the draw.

The section of the trace log that I believe is relevant looks like this:

  <businessObject>Erp.Proxy.BO.LaborImpl</businessObject>
  <methodName>Update</methodName>
  <returnType>System.Void</returnType>
  <localTime>9/9/2020 06:49:46:2106030 AM</localTime>
  <threadID>1</threadID>
  <executionTime total="1945" roundTrip="1886" channel="0" bpm="0" other="59" />
  <retries>0</retries>
  <parameters>
  <parameter name="ds" type="Erp.BO.LaborDataSet">
  <LaborDataSet xmlns="http://www.epicor.com/Ice/300/BO/Labor/Labor">

Looking at the above trace log excerpt (and assuming it’s relevant to assessing the directive, which it might not be), what would specifically clue me in on the type of directive to use?

Here, I was thinking to use a Data Directive, because I want to check if a specific condition exists as the record is saved/updated.

Any guidance here would certainly be appreciated.

Data Directives should be used with very narrow scope. Typically you only know info about the table the DD is for.

Method Directives work on bigger scopes. You have access to multiple tables worth of info - the whole dataset related to the BO’s method. In your trace, the BO is Labor, and the Method is Update.

1 Like

@ckrusen:

So, it doesn’t exactly depend on the trace log, but rather, it depends on what you’re trying to accomplish, correct?

In my case, I’m checking to make sure that, under a Time & Expense entry, the job number does not begin with the phrase “HFT”. If it does not, then I want to make sure the labor hours are equal to the burden hours for those “non-HFT” jobs.

So, I was thinking – “data directive on the LaborDtl table”, because that is specific to checking against updates to that table where “LaborHrs” and “BurdenHrs” are concerned. Does that seem correct?

Yes.

One good thing about Data Directives (and sometimes it is a bad thing) is that it doesn’t matter what is causing the table to change. It’s just looking at the record being added/modified/deleted.

There may be several ways that the LaborDtl table gets updated. A DD would fire on any of them. But it is also indiscriminate. There might be different Method Directives that hit that table. If you only wanted certain functions to be checked, then use an MD.

If all the info you need is in the one table, a DD is the simplest way to go.

edit

Someone will probably chime in and say that MDs are always the way to go, and that DDs are very brute-force-ish. And I’d have no real argument against that.

One last thing to remember… A DD will have to process every time the table is updated. So never put one on a table like PartTran.

@Bart_Elia taught me that an exception in a Data Directive is expensive. Per Epicor’s standards, I put all my validation into pre-processing Method Directives on Update.

Well, Bart has been with Epicor for 20 years, and DDs are brute-force-ish. The problem is that most non-developers don’t see the expense. Use a debugger and you will see the stack trace of rollbacks.

I use DDs for setting defaults before write, pulling in values, or transforming data, let’s say lbs to kgs, etc.… anything that flows smoothly.
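
As a loose illustration of that kind of “smooth” In-Tran work, here is a minimal sketch, assuming an In-Tran Data Directive on LaborDtl (so the standard ttLaborDtl temp table is in scope) and a defaulting rule made up purely for the example:

// Hedged sketch: default burden hours to labor hours when none were entered.
foreach (var row in ttLaborDtl.Where(r => (r.Added() || r.Updated()) && r.BurdenHrs == 0))
{
    row.BurdenHrs = row.LaborHrs;
}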



Method Directives:

  • Pre, Post and Base processing logic
  • Access to BOs
  • Ideal place for Validations / Exceptions / Data Changes

In-Tran Data Directives:

  • Executes after standard Entity Framework data triggers
  • Executes within a transaction, as a part of the trigger pipeline
  • Immediately processes affected row
  • Processes one row at a time (two rows for an update operation; the RowMod = “” row is the old row)
  • Can change data on save
  • An In-Tran directive should never throw an exception if it can avoid it; the rollback is very, very expensive
  • You shouldn’t access BOs in here!

Standard Data Directives:

  • Executes when service method call has completed
  • Executes only if service method completes without exception
  • Processes batch of affected rows at once
  • Does not affect data save
  • Ideal place for integration operations (Audits, Email, Logging, Notifications, API)

It all comes down to cost and stability. For example, an email is a nice-to-have: if the SMTP server is down, I don’t want to kill a mission-critical process; who cares (guess there won’t be an email that day). The Standard directive, for example, is the lightest of them all, with a small footprint; a small sketch follows the reference below.

  • Reference Insights Presentation by Rich and others.
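
To make the Standard notes above concrete, here is a minimal, hedged sketch, assuming a Standard Data Directive on LaborDtl and a simple server log entry via Ice.Diagnostics (the message, and the choice of logging over email, are illustrative only):

// Runs after the save has completed, so it does not affect the committed data.
foreach (var row in ttLaborDtl.Where(r => !r.Unchanged()))
{
    Ice.Diagnostics.Log.WriteEntry(
        string.Format("LaborDtl saved for job {0}: {1} labor hrs / {2} burden hrs",
            row.JobNum, row.LaborHrs, row.BurdenHrs));
}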
4 Likes

Ahhh… “Insights”

… “From the Before Time”

Yes :slight_smile: Maybe Bart will come back if we keep tagging him :smiley:

Hey @Bart_Elia remember

“Data triggers are the last line of defense, not the first. They can be massively abused and have caused many performance problems in the past when misused.
The problem with them is the same as exists with triggers in SQL. The context of what is happening is a single row. When you look at a service Update method, it’s a hierarchical graph of data - a Tableset / DataSet. You know the context of what is going on. You have the header and the line items. In a data trigger (or SQL) you don’t have that context. You are forced to do extra lookups, some calculations, etc. to figure out the context of ‘why am I here’. That costs CPU cycles. Potentially lots of them.”

Nonetheless, I’d say it depends; not all BOs and DDs are the same. I’ve had a few scenarios where I had 10 Method Directives vs. 1 Data Directive. #ExceptionToTheRule

3 Likes

Thanks @hkeric.wci. Awesome information here.

If my aim is to simply perform a data validation (a comparison of values) between two fields before the record is committed, then it sounds like I need to implement a pre-processing Method Directive.

@Aaron_Moreng / @ckrusen / @hkeric.wci:

With everyone’s help, I was able to get the job done by validating the custom code condition, as suggested by @ckrusen, but using the false side of the conditional logic, as suggested by @Aaron_Moreng… all under a tidy pre-processing Method Directive as suggested by @hkeric.wci (with a hat-tip to @Bart_Elia).

Here’s the set-up:
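
The set-up screenshots aren’t reproduced here, but as a rough sketch, the custom code condition on the Labor.Update pre-processing directive ends up along these lines, assuming the standard ttLaborDtl temp table and field names:

// True when an added/changed labor row is on a job that does NOT begin with "HFT";
// the True/False branches then carry the LaborHrs vs. BurdenHrs check.
return ttLaborDtl.Any(r => !r.Unchanged() && !r.JobNum.StartsWith("HFT"));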

… and here are the validation results (looking for jobs that begin with, or do not begin with, the phrase “HFT”):

This works really well and allows me to handle the validation further, based on the True/False result.

Precisely what I was hoping to accomplish, guys, so I appreciate the terrific guidance from everyone on this.

Enjoy the rest of your week.

1 Like