For the first example I created a Python application, because all new AI stuff happens in Python first.
For the LLM it uses Azure OpenAI (not OpenAI directly) with the gpt-4o model, see What is Azure OpenAI Service? - Azure AI services | Microsoft Learn.
For the UI it uses the chainlit package - Overview - Chainlit.
I added authentication with the Epicor user name and password: the app gets a token from Epicor and then uses it for Kinetic function calls (or any other Kinetic calls).
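The wiring is roughly like this - a minimal sketch, not the exact code from the attachment. The token endpoint and the response field name are assumptions, and chainlit needs CHAINLIT_AUTH_SECRET set for its login form to work:

```python
import os
import requests
import chainlit as cl


def get_epicor_token(username: str, password: str) -> str | None:
    """Hypothetical token request; use whatever token endpoint your Kinetic instance exposes."""
    resp = requests.post(
        f"{os.environ['EPICOR_BASE_URL']}/TokenResource.svc/",  # assumed endpoint
        auth=(username, password),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    if resp.status_code != 200:
        return None
    return resp.json().get("AccessToken")  # assumed response field name


@cl.password_auth_callback
def auth_callback(username: str, password: str) -> cl.User | None:
    # chainlit shows a login form and calls this; returning None rejects the login.
    token = get_epicor_token(username, password)
    if token is None:
        return None
    # Keep the token in the user metadata and reuse it for every Kinetic call.
    return cl.User(identifier=username, metadata={"token": token})
```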
API keys and all other setup information, like endpoint URLs and names, need to be specified in the .env file located in the same folder.
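For reference, the .env looks something like this; the variable names here are illustrative only, the real keys are whatever main.py in the attachment reads:

```
AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=<your Azure OpenAI key>
AZURE_OPENAI_DEPLOYMENT=gpt-4o
EPICOR_BASE_URL=https://myserver/MyKineticInstance
EPICOR_API_KEY=<your Kinetic API key>
EPICOR_EFX_PREFIX=api/v2/efx/staging
CHAINLIT_AUTH_SECRET=<random secret for chainlit auth cookies>
```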
The Epicor function library is called llmfunc and is exported from the 2025.1 version. To use it in an earlier version, open it in a text editor and change the version from 5.0.100 to something earlier, like 4.3.200.
The library currently contains 3 functions:
- define-tables: creates a JSON string with the description of the tables and columns that I provide to the LLM. For the demo I just took a subset of columns in the Customer and OrderHed tables. I put the list at the beginning of this function, then use the Schema service to get their descriptions from the data dictionary. I put the result into a free-form JSON and include it as part of the system prompt when I chat with the LLM.
- define-tools: creates a JSON string with the description of the functions available for the LLM to call. The format is the one that all ChatGPT-like LLMs support for function calling; their example is here: https://platform.openai.com/docs/guides/function-calling?api-mode=responses&example=get-weather. This function simply enumerates all functions in the current library whose names do not start with “define-” and adds them to the JSON object. Currently I only have one such function, “run-query”, but technically any function can be added to the library and it will become available to the chat application (a sketch of the resulting JSON is shown below). A small problem I had is that our Epicor functions do not have a Description field for each parameter, so for now I put the parameter descriptions in the function description.
- run-query: takes the SQL query provided in the input parameter queryText, converts it into a BAQ using the preview feature SQL to BAQ Generator (it has to be enabled in the settings), executes the BAQ and returns its results. The BAQ is not saved.
I don’t promote the function into production, which is why I use the api/v2/efx/staging/ endpoint prefix in the calls. It can be changed in the .env file.
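To make the tool format concrete, here is roughly what define-tools produces for run-query. This is a sketch of the standard OpenAI function-calling tool format rather than the exact output of the Epicor function, and the description text is illustrative:

```python
# Roughly the tools list define-tools builds for run-query.
# Parameter descriptions live inside the function description, because
# Epicor function parameters have no Description field of their own.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run-query",
            "description": (
                "Runs a SQL query against Kinetic by converting it to a BAQ. "
                "Parameter queryText: the SQL query text to execute."
            ),
            "parameters": {
                "type": "object",
                "properties": {"queryText": {"type": "string"}},
                "required": ["queryText"],
            },
        },
    }
]
```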
So, to run the Python application, unzip the attached zip file into some folder, fill in the settings in the .env file, then set up the usual Python stuff: Python itself (I have 3.12), a virtual environment, etc. Use the tools you prefer; the dependencies are dumped into requirements.txt.
I use a project manager for Python called uv; it is fast and easy to use, see uv.
Step-by-step commands in the terminal inside the unzipped folder:
- uv venv - creates the virtual environment
- .venv/Scripts/activate - activates the virtual environment
- uv run ./main.py - installs all dependencies and runs the code.
When you start the app, it will open the login page at http://localhost:8000/, and after a successful login with Kinetic credentials it should show the usual chat window like any other chat app.
At the bottom of main.py there are examples of queries that worked for me when I built the app. It did not go as I wanted during the presentation, probably because the capacity for my dev instance had been lowered by MS.
I did not use any LLM-related Python libraries like LangChain, LlamaIndex, or many others - only the client package for Azure OpenAI. Those libraries would make my small file even smaller, as they abstract and hide a lot of code, but I wanted to understand exactly what is sent to the server when I use function calling in the OpenAI REST calls.
The logic is standard for any LLM app: you ask a question, providing the list of tools the LLM can use. The LLM decides that it needs to call one of the provided functions, so it returns a response in the special tool-call format. The app then calls the tool with the specified arguments - in our case, the Epicor function with the query text.
Before the call I show a confirmation so you can see the query text. After the call, if BAQ results are returned, I first show them in a grid for reference, but also append them to the LLM message list and call the LLM again. If an error happened, I add the error text as is; the LLM usually recreates the query differently in that case.
Once the BAQ returns something (or after 5 retries), the LLM processes the BAQ results and answers the question. Because all messages are stored and resent on each call, the LLM usually understands the context of the conversation, so subsequent questions can be answered.
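For anyone who wants to see the shape of that loop without unzipping the attachment, here is a minimal sketch. It is not the exact code from main.py: the environment variable names follow the illustrative .env above, the Epicor function URL shape is an assumption beyond the stated api/v2/efx/staging/ prefix, and the api_version is just one of the GA versions:

```python
import json
import os

import requests
from openai import AzureOpenAI

# Azure OpenAI client; variable names match the illustrative .env above.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

EPICOR_BASE = os.environ["EPICOR_BASE_URL"]
EFX_PREFIX = os.environ.get("EPICOR_EFX_PREFIX", "api/v2/efx/staging")


def run_query(query_text: str, token: str, api_key: str) -> dict:
    """Call the llmfunc/run-query Epicor function (URL shape is an assumption)."""
    resp = requests.post(
        f"{EPICOR_BASE}/{EFX_PREFIX}/llmfunc/run-query",
        json={"queryText": query_text},
        headers={"Authorization": f"Bearer {token}", "x-api-key": api_key},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()


def ask(messages: list, tools: list, token: str, api_key: str) -> str:
    """One question/answer round with up to 5 tool-call retries, as described above."""
    for _ in range(5):
        response = client.chat.completions.create(
            model=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o"),
            messages=messages,
            tools=tools,
        )
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content  # plain answer, no tool call needed
        messages.append(msg)  # keep the assistant's tool-call message in the history
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            try:
                result = run_query(args["queryText"], token, api_key)
            except requests.HTTPError as err:
                # Feed the error text back; the LLM usually rewrites the query.
                result = {"error": str(err)}
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
    return "Could not get an answer after 5 attempts."
```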
Sometimes SQL2BAQ cannot process even valid queries; some bug fixing is probably required in that feature. You can trace the running queries in the server log with the <add uri="profile://ice/fw/DynamicQuery/BaqStatement" /> trace flag and report a bug if you find one there.
epicor_func_chainlit.zip (6.5 KB)
llmFunc.efxj (10.3 KB)