Demo for the session "Beyond the Basics: Advanced Techniques with Epicor Functions"

I showed 3 examples during my presentation to illustrate function usage both from outside the system and entirely within it:

  1. Call an Epicor function from an LLM chat application using its function-calling capability.
    The function receives an SQL query text from the LLM as an input string parameter, executes it as a BAQ, and returns the results to the LLM, which processes the response and answers the question.

  2. From an Epicor function, make an external call to an SLM (small language model) hosted locally with Ollama to perform some simple task. In my case it was mail message template translation.

  3. Completely internal use of a function: in the User Copy functionality, execute a function that creates copies of the employee records assigned to the user in each company (simplified).

Below I add all 3 demos with comments; they are provided AS IS.


For the first example I created a Python application, because all new AI stuff happens in Python first.

For the LLM it uses Azure OpenAI, not OpenAI directly (see What is Azure OpenAI Service? - Azure AI services | Microsoft Learn), with the gpt-4o model.

For the UI it uses the Chainlit package - Overview - Chainlit

I added authentication with an Epicor user name and password: the app gets a token from Epicor and then uses it for Kinetic function calls (or any other Kinetic calls).

API keys and all other setup information, like endpoint URLs or names, need to be specified in a .env file located in the same folder.

The Epicor function library is called llmfunc and was exported from version 2025.1. So, to use it in a previous version, you have to open the file in a text editor and change the version from 5.0.100 to something earlier, like 4.3.200.

The library currently contains 3 functions:

  • define-tables: creates a JSON string describing the tables and columns I expose to the LLM. For the demo I took a subset of columns from the Customer and OrderHed tables. I put the list at the beginning of this function, use the Schema service to get their descriptions from the data dictionary, put the result into free-form JSON, and include it as part of the system prompt when I chat with the LLM.

  • define-tools: creates a JSON string with the description of the functions available for the LLM to call. The format is the one that all ChatGPT-like LLMs support for function calling; OpenAI's example is here: https://platform.openai.com/docs/guides/function-calling?api-mode=responses&example=get-weather.

This function simply enumerates all functions in the current library whose names do not start with “define-” and adds them to the JSON object. Currently there is only one such function, “run-query”, but technically any function added to the library will become available to the chat application.
A small problem I hit is that Epicor functions do not have a Description field for each parameter, so for now I added the parameter descriptions to the function description.

  • run-query: takes the SQL query provided in the input parameter queryText, converts it into a BAQ using the preview feature SQL to BAQ Generator (which must be enabled in the settings), executes the BAQ, and returns its results. The BAQ is not saved.
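As an illustration, here is roughly what the define-tools output might look like for run-query, in the OpenAI tool-schema format. Only the queryText parameter name comes from the description above; the description texts are my own assumptions:

```python
import json

# A sketch of the tool definition the chat app could send to the LLM for the
# "run-query" function, following the OpenAI chat-completions tool schema.
def build_tools() -> list[dict]:
    return [
        {
            "type": "function",
            "function": {
                "name": "run-query",
                # Parameter docs are folded into the description, because
                # Epicor function parameters have no Description field.
                "description": (
                    "Executes an SQL query as a BAQ and returns its results. "
                    "Parameter queryText: the SQL query text to execute."
                ),
                "parameters": {
                    "type": "object",
                    "properties": {
                        "queryText": {
                            "type": "string",
                            "description": "SQL query to convert to a BAQ and run",
                        }
                    },
                    "required": ["queryText"],
                },
            },
        }
    ]

tools_json = json.dumps(build_tools(), indent=2)  # goes into the "tools" field of the chat request
```

The define-tools function would produce the same shape for every non-“define-” function it finds in the library.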

I don’t promote the function into production; that is why I use the api/v2/efx/staging/ endpoint prefix in the calls. It can be changed in the .env file.
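As a sketch, building the call URL from that prefix might look like this in Python (the path segments after the prefix, library name then function name, are my assumption; adjust them to your instance):

```python
from urllib.parse import urljoin

# Build the URL for a staging (non-promoted) Epicor function call, using the
# endpoint prefix mentioned above. Base URL and prefix would come from .env.
def build_function_url(base_url: str, library: str, function: str,
                       prefix: str = "api/v2/efx/staging/") -> str:
    # e.g. https://server/ERP/ + api/v2/efx/staging/llmfunc/run-query
    return urljoin(base_url, f"{prefix}{library}/{function}")

# The actual call would be a POST with the bearer token obtained at login, e.g.:
# requests.post(url, json={"queryText": sql},
#               headers={"Authorization": f"Bearer {token}"})
```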

So, to run the Python application, unzip the attached zip file into a folder, fill in the settings in the .env file, then set up the usual Python stack: Python itself (I have 3.12), a virtual environment, etc. Use whatever tools you prefer; the dependencies are listed in requirements.txt.

I use a project manager for Python called uv; it is fast and easy to use, see uv.

Step by step commands in the terminal inside the unzipped folder:

  1. uv venv - creates virtual environment

  2. .venv/Scripts/activate - activates the virtual environment (on Windows; on Linux/macOS use source .venv/bin/activate)

  3. uv run ./main.py - installs all dependencies and runs the code.

When you start the app, it opens the login page at http://localhost:8000/ and, after a successful login with Kinetic credentials, shows a usual chat window like all other chat apps.

At the bottom of main.py there are examples of queries that worked for me while I built the app. The demo did not go as I wanted during the presentation, probably because the capacity for my dev instance had been lowered by Microsoft.

I did not use any LLM-related Python libraries like LangChain, LlamaIndex, or many others; only the client package for Azure OpenAI. Those libraries would make my small file even smaller, since they abstract away a lot of code, but I wanted to understand exactly what is sent to the server when function calling is used in the OpenAI REST calls.

The logic is standard for any LLM app: you ask a question and provide the list of tools the LLM can use. If the LLM decides it needs to call a provided function, it returns a response in a special tool-call format. The app then calls the tool with the specified arguments; in our case, the Epicor function with the query text.

Before the call I show a confirmation so the user can see the query text. After the call, if BAQ results are returned, I first show them in a grid for reference, but also append them to the LLM message list and call the LLM again. If an error happened, I add the error text as is; the LLM usually recreates the query differently in that case.

Once the BAQ returns something (or after 5 retries), the LLM processes the BAQ results and answers the question. Because all messages are stored and resent on each call, the LLM usually understands the context of the conversation, so subsequent questions can be answered.
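The tool-call round trip described above can be sketched in Python like this. Here run_epicor_function is a hypothetical stand-in for the REST call to run-query, and the message shapes follow the OpenAI chat-completions format:

```python
import json

# Handle one round of tool calls: keep the assistant's request in the history,
# run each requested tool, and append a matching "tool" message so the next
# chat-completions call can produce the final answer.
def handle_tool_calls(messages, assistant_message, run_epicor_function):
    messages.append(assistant_message)
    for call in assistant_message["tool_calls"]:
        args = json.loads(call["function"]["arguments"])
        try:
            result = run_epicor_function(args["queryText"])  # BAQ rows as JSON
        except Exception as exc:
            # Feed the error text back; the LLM usually rewrites the query.
            result = {"error": str(exc)}
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })
    return messages
```

Because the whole messages list is resent on every call, the conversation context (including earlier query results) is preserved automatically.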

Sometimes SQL2BAQ cannot process even valid queries; some bug fixing is probably required in that feature. You can trace the running queries in the server log with the <add uri="profile://ice/fw/DynamicQuery/BaqStatement" /> flag and report a bug if you find one there.
epicor_func_chainlit.zip (6.5 KB)
llmFunc.efxj (10.3 KB)


The second example shows how to use open-source local language models from a function.

No Python here, just a simple REST call to an Ollama server installed locally (and "locally" means on the Kinetic server, because functions are executed on the server!).

It would probably be better to install it on some machine in the local network and open the port it uses.

So, first of all, you need to install Ollama from https://ollama.com/ and decide which model to use; there are plenty of them here: Ollama Search

The Ollama client uses a docker-like syntax. When you decide which model to use, you can download it from the terminal using the command text from the model page, like ollama run modelname.

That starts a CLI to interact with the model, which we don't need.

So we can just use ollama pull modelname to store the model locally.

The list of models already installed locally can be shown with: ollama list

I don’t have a decent GPU on my work laptop, so I could only do something simple.

I chose phi4-mini for my demo.

I also played with Gemma 3, which has vision capabilities. For example, if you provide an image of a payment check, it can process the image, get all the items with prices from it, and store them in a JSON object you can then work with. It worked with acceptable speed on an NVIDIA GPU or on a Mac, but on my work laptop it was too slow to demo anything like that.

So, I ended up with an example of message template translation. It just calls the Ollama API endpoint http://localhost:11434/api/generate

like they show in their docs: ollama/docs/api.md at main · ollama/ollama · GitHub
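For reference, the request the call-ollama function makes can be sketched in Python (the real function is C# using RestSharp; the translation prompt wording here is my own assumption):

```python
import json
import urllib.request

# Build the JSON body for Ollama's /api/generate endpoint.
def build_translate_payload(model: str, text: str, language: str) -> dict:
    return {
        "model": model,
        "prompt": (
            f"Translate the following text to {language}. "
            f"Return only the translation:\n{text}"
        ),
        "stream": False,  # one JSON object back instead of a token stream
    }

# POST the payload and return the generated text from the "response" field.
def call_ollama(payload: dict,
                url: str = "http://localhost:11434/api/generate") -> str:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

The Epicor function does the same thing twice, once for the message subject and once for the body.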

The library is called translateMessageTemplate and contains 2 functions:

  • call-ollama: makes a POST call to the localhost Ollama endpoint with a model name and a prompt. This is an internal function and is not available from Application Studio. Technically, this function can be reused for any Ollama call, not just translation. The RestSharp library is used for the REST call.

  • translate-message: gets a message ID and a language ID for the message template as parameters from the UI, and translates the subject and body of the message using the call-ollama function.

It gets the message template fields by ID using MessageTemplateSvc.GetByID and the full language name from LangNameSvc, then translates each field and returns both fields back to the caller.

This library must be promoted, because I call it on a button click in an Application Studio layer on the Message Template UI.

After the call, the fields are updated on the screen, and the user can edit them further and then save.

As in the first example, the library was exported from 2025.1, and its version should be edited manually in the file to import it into earlier versions.
translateMessageTemplate.efxj (5.2 KB)


The last demo is an internal function that extends the user copy functionality.

Some time ago, a Copy User menu item was added to the browser User Account Maintenance UI.

You can select any user record, click that menu item, specify a new user ID, name, and email address, and create a copy of the initial user with the same groups, companies, etc.

About a release later, in addition to the Copy User action, the User Account Maintenance UI got a new menu item, now on the landing page, called Setup Epicor Function.

There you can specify a function to run after a user is copied. It must have 2 string parameters: the old user ID and the new user ID. It does not need to be promoted and may remain for internal use only.

I use this feature to demo how to copy employee records for the copied user. It is not that simple, because unlike the user record itself, employee records are created separately in each company and may have different IDs.

I created a simplified version of the employee copy: it only copies the main Erp.EmpBasic table, not all the tables you can see in EmpBasicSvc.GetByID.

At the beginning, I go through the list of companies for the old user in the Erp.UserComp table and store all the empID values defined there.

Next, if this list is not empty, I create a temporary session for each company where this record is defined, using
CallContext.Current.TemporarySessionCreator.SetCompanyID(comp).Create()

Inside that temporary session, I get the existing employee record, make a copy of it, and change the related fields to the new values.

There are a couple of TODOs to fill in, for example how the new empID value should be set. I just concatenate the new user ID and the old emp ID, but this should be changed according to the rules you use. Maybe just use the same user ID for the empID too…

Then I call Update on EmpBasicSvc, and if it succeeds, I store the new EmpID value in my list. I write errors to the server log, but they can be processed any way you want.

After all companies are processed, the new EmpID values for successfully created records are stored in Erp.UserComp for the new user.

When the execution finishes, the new user record should show the new empIDs on the company list screen.

As before, the library was exported from 2025.1, and its version should be edited manually in the file to import it into earlier versions.

user-copy.efxj (8.1 KB)


This was my favorite session. Thanks Olga!


Your session was awesome @Olga! Thanks for posting these!


Thanks for sharing!


@Olga, are there recordings available for your sessions?

I don’t think so. Actually, I did not see any recording of any Insights session.
