So you want to write a REST Interface to Kinetic

Since most of us grew up with procedural programming, we see a lot of attempts to call Kinetic Business Objects directly from our interfaces. Many use Functions to abstract the details away from the client, but we are still tightly coupling our interface program to Kinetic, which makes error handling very messy. Here’s a fairly short video explaining how to move past tightly coupled integrations toward something more resilient.

I really like Derek Comartin’s YouTube channel, and you may like it as well.


@Mark_Wonsil I watched the video, and I’ll take the bait - and this comes at a good time for me as well.

How would you apply this to Kinetic? I’ll give an example.

(DISCLAIMER: I’m sure everyone has a better way to do this and what is the real problem I am trying to solve, etc. The point here is not my issue, but to understand Mark’s idea of programming in workflows.)

Problem: I want to override/supplement the backflush logic for parts on a job.

Flow would be like this:

Thus (I think that) I need to:

#   Task                                                  How to do it (maybe)
1   Find the part’s primary bin (in the MFG warehouse)    SomeBO.GetByID
2   Determine if that has a value                         Condition block
3   If no, log to table                                   EFx
4   If yes, do an inventory transfer                      EFx
5   Did transfer succeed                                  Not entirely sure…
6   If no, log to table                                   EFx

And so I would string that together in another Function, triggered by a BPM on… something.

So how would you implement the workflow logic? Especially #5, the error handling; I think that is the part with the most relevance to the video.

In this particular case, we’re not interfacing with another system, and I’d probably try to get ahead of the Primary Bin problem earlier, maybe with a pre-process directive when the job is released. You still need to check that it exists, of course. I might even put a shape in Job Entry/Tracker to indicate that parts don’t have a primary bin set up. But like you said, we’re not tackling that issue. So let’s pretend there’s an external system, like a handheld device, that reports completions by scanning the job and operation and entering the quantity.

So, one point Derek made in the video was “don’t write your own workflow engine.” For the sake of this exercise, we’ll use an open-source workflow engine. But remember, these systems execute the business processes outside of Kinetic. Why? What if Kinetic is down for patching or database maintenance? The workflow engine can hold onto the transaction and then execute it when Kinetic is back up.

Here’s an example of what this might look like using Workflow Core.

using System;
using WorkflowCore.Interface;
using WorkflowCore.Models;

public class QuantityWorkflow : IWorkflow<MyData>
{
    // Workflow Core wants an Id and Version to register the definition
    public string Id => "QuantityWorkflow";
    public int Version => 1;

    public void Build(IWorkflowBuilder<MyData> builder)
    {
        builder
            .StartWith<StartCompletion>()
            .Then<CheckForPrimaryBins>()
                .OnError(WorkflowErrorHandling.Retry, TimeSpan.FromMinutes(60))
            .Then<CheckForAvailableInventory>()
                .OnError(WorkflowErrorHandling.Retry, TimeSpan.FromMinutes(60))
            .Then<DoCompletion>()
                // Workflow Core has no "NotifyUser" option; Suspend parks the
                // workflow so a person can be alerted and resume or terminate it
                .OnError(WorkflowErrorHandling.Suspend);
    }
}
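
Each of those steps is just a small class. Here’s a rough sketch of what CheckForPrimaryBins could look like as a Workflow Core step that calls a Kinetic REST endpoint; the URL, credentials, and property names are placeholders I made up, not the real Kinetic API:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using WorkflowCore.Interface;
using WorkflowCore.Models;

public class CheckForPrimaryBins : StepBody
{
    // Placeholder endpoint - point this at your own BAQ/Function URL
    private const string KineticUrl =
        "https://kinetic.example.com/api/v2/efx/MYCOMPANY/JobCompletions/GetPrimaryBin";

    public string PartNum { get; set; }      // mapped in from MyData
    public string PrimaryBin { get; set; }   // mapped back out to MyData

    public override ExecutionResult Run(IStepExecutionContext context)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", "base64-credentials-here");

        var body = new StringContent($"{{ \"partNum\": \"{PartNum}\" }}",
            Encoding.UTF8, "application/json");
        var response = client.PostAsync(KineticUrl, body).Result;

        // Throwing is what hands control to the OnError policy (Retry, Suspend, etc.)
        if (!response.IsSuccessStatusCode)
            throw new Exception($"Kinetic call failed: {response.StatusCode}");

        PrimaryBin = response.Content.ReadAsStringAsync().Result;
        return ExecutionResult.Next();
    }
}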

Workflow engines will execute each activity and wait until it completes, times out, or throws an error. Many workflow engines also provide a way to run items sequentially or in parallel, so checking the primary bin and checking inventory could both run at the same time, and the flow continues once both checks succeed.
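
For instance, keeping the same placeholder steps from above, the fork/join version of Build could look roughly like this using Workflow Core’s Parallel/Do/Join builder:

public void Build(IWorkflowBuilder<MyData> builder)
{
    builder
        .StartWith<StartCompletion>()
        .Parallel()
            // both checks run as independent branches
            .Do(branch => branch.StartWith<CheckForPrimaryBins>())
            .Do(branch => branch.StartWith<CheckForAvailableInventory>())
        .Join()   // continues only after every branch has finished
        .Then<DoCompletion>();
}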

Again, this makes more sense with longer flows like receiving orders from a web site or EDI. Workflow code is a lot cleaner than our typical procedural flow.


OK, I know that nobody is going to make the jump right to some workflow framework. So, one thing that helped me understand this pattern was learning about state machines. Scott Hanselman has a nice blog post explaining this in .NET Core. The workflow frameworks handle this work for you, but it’s always good to know how it all comes together.
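
As a tiny illustration of the idea (a hand-rolled sketch, not the code from Scott’s post), a completion transaction only ever moves through legal transitions, and anything else is an error you handle in one place:

using System;

public enum CompletionState { Received, BinChecked, Transferred, Failed }

public class CompletionStateMachine
{
    public CompletionState State { get; private set; } = CompletionState.Received;

    public void Fire(string trigger)
    {
        // Only the transitions listed here are legal; everything else throws
        State = (State, trigger) switch
        {
            (CompletionState.Received,   "BinFound")       => CompletionState.BinChecked,
            (CompletionState.Received,   "BinMissing")     => CompletionState.Failed,
            (CompletionState.BinChecked, "TransferOk")     => CompletionState.Transferred,
            (CompletionState.BinChecked, "TransferFailed") => CompletionState.Failed,
            _ => throw new InvalidOperationException(
                     $"Trigger '{trigger}' is not valid from state {State}")
        };
    }
}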

A way to ease into workflow in a low-code way is to use Azure Logic Apps. Logic Apps is the engine behind Power Automate. Unlike Power Automate, one can run apps under a service account rather than a specific user. Also, you can run Logic Apps in consumption mode, which can be very cost-effective since you only pay for what you use. A Logic App can receive and make REST calls, which is all one needs to communicate with Kinetic. One would need to install the Azure Logic Apps Data Gateway so calls from Azure can hit your URLs on-prem. Alternatively, for more advanced shops that already run Kubernetes on-prem, you can also run Logic Apps locally.

Could we write some state-handling routines using a UD table? Sure. Would it be better than the tools out there? Not without a lot of time and effort. The point of this post is that we tightly couple our integrations, which tend not to handle the non-happy path very well - as @JasonMcD mentioned above. Before Epicor Ideas, there was a suggestion to move notifications to a service. Right now, we put the handling of notifications right in directives, Epicor Functions, or Advanced Print Routing. What a pain it is to have to change code when email addresses change. There is no retry mechanism if the initial SMTP server is down. There’s no proof of delivery and no proper way to handle Do Not Contact requests.

Does it make sense to put all this kind of logic into every directive or EFx? Or would it be better to handle this common software-architecture problem with tools that already exist?


OK, I think I get it. This makes a lot of sense.

I have not used Azure Logic Apps yet (I think you mentioned it to me before), but I have done a fair bit in Power Automate and PowerApps. There is a “bin transfer” app I made that gets used A LOT here, and it is rather nice for the app to handle the logic and then in the rare cases of failure, it can email me when “8 of your Flows have failed in the last week.”

But even then, I have debug logic spread out all over the place:

  • In the app’s “submit” button (to disable if not all requirements are met)
  • In the warning message that explains why you can’t click the button
  • In the EFx
  • In the app again after submitting the EFx request, to translate the error message into English.

So workflow is taking that quite a bit further, to consolidate the logic and errors and, as you said, handle system-down events as well (if you want).

I was not expecting it to be outside of Kinetic, but like I said, I already see the huge advantages when it is outside.

Actually, we do. The first step of the flowchart I made is from an MES-type system called Tulip. Tulip pings an EFx via REST. And there are times we have issues with connectivity, etc. So I am definitely listening.