I do not have any insider knowledge into how Epicor is specifically building Prism, but based on how essentially everyone else in the industry is doing this, I would expect it to work very similarly to what you described.
In most enterprise AI products, the LLM itself is a generic, pretrained model. It is not trained on your enterprise data. Instead, at runtime, the system retrieves relevant enterprise data and injects it into the prompt or context that is sent to the model. This pattern is often called retrieval-augmented generation (RAG).
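As a rough sketch of that pattern (the retrieval here is naive keyword overlap, and the customer records are made up; real systems use vector search against actual enterprise data):

```python
def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retrieval; real systems use vector search."""
    q_words = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(q_words & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(question, documents):
    """Inject retrieved enterprise data into the prompt sent to a generic LLM."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical enterprise records the model was never trained on
docs = [
    "Customer Verizon has credit terms Net 30.",
    "Customer Acme has credit terms Net 60.",
]
print(build_prompt("Verizon credit terms", docs))
```

The key point is that the model never sees this data at training time; it only sees whatever the retrieval step places into the prompt for that one request.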
On top of that, companies typically build a layer of tooling that teaches the LLM how to interact with their system. This is usually done through a well-defined interface that exposes “tools” the LLM can call. One increasingly common example of this pattern is the Model Context Protocol (MCP), but there are many similar approaches.
Conceptually, this tooling layer tells the LLM things like:
- How to run a BAQ
- How to create or update a sales order
- How to retrieve customer data
- What parameters are required
- What shape the response will have
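Concretely, a tool definition in this style is mostly a name, a natural-language description the LLM reads, and a JSON Schema for the parameters. A hypothetical sketch (the tool name, BAQ ID, and fields are illustrative, not actual Epicor endpoints):

```python
# Hypothetical MCP-style tool definition; names and fields are illustrative only
run_baq_tool = {
    "name": "run_baq",
    "description": "Run a Business Activity Query (BAQ) and return its rows as JSON.",
    "inputSchema": {  # JSON Schema telling the LLM what parameters are required
        "type": "object",
        "properties": {
            "baqId": {"type": "string", "description": "BAQ identifier, e.g. zCustomer01"},
            "filter": {"type": "string", "description": "Filter expression to apply"},
        },
        "required": ["baqId"],
    },
}
```

The description and schema are what the LLM actually “reads” to decide when to call the tool and how to fill in its arguments; the host application is what executes the call.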
See: What is the Model Context Protocol (MCP)? - Model Context Protocol
On top of these tools, you then build agents. An agent is really just a combination of:
- A set of available tools
- A system prompt that explains how and when to use them
- Rules about how to structure responses and what action to take with them. These rules are usually built into, or adjacent to, the host application; here, that would be the Prism interface (front end and back end) built directly on top of the Kinetic UX
The agent walks the LLM through taking one or more actions against the system.
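Put together, an agent is essentially a loop: send the conversation and tool definitions to the model, execute whatever tool call it requests, feed the result back, and stop when it produces a final answer. A minimal sketch with a stubbed model and tool (everything here is illustrative, not a real LLM API):

```python
def run_agent(model, tools, user_message, max_steps=5):
    """Generic agent loop: the LLM picks the tool; the host executes it."""
    messages = [
        {"role": "system", "content": "Use the available tools to answer."},
        {"role": "user", "content": user_message},
    ]
    for _ in range(max_steps):
        reply = model(messages, tools)  # LLM returns either a tool call or final text
        if reply.get("tool_call"):
            call = reply["tool_call"]
            result = tools[call["name"]](**call["arguments"])  # deterministic execution
            messages.append({"role": "tool", "name": call["name"], "content": result})
        else:
            return reply["content"]  # final text answer for the user
    raise RuntimeError("Agent did not finish within max_steps")

# Stubbed model: first turn requests a tool call, second turn answers
def fake_model(messages, tools):
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "get_time", "arguments": {}}}
    return {"content": f"The time is {messages[-1]['content']}"}

print(run_agent(fake_model, {"get_time": lambda: "12:00"}, "What time is it?"))
# -> The time is 12:00
```

Real products add guardrails, multi-tool planning, and error handling, but the shape of the loop is the same.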
A simple conceptual example:
MCP style tools:
- A tool that exposes an endpoint to query customer information (for example a BAQ like zCustomer01) and explains how filtering works.
- Another tool that knows the URL schema for Kinetic so it can generate deep links into specific screens, like Customer Entry.
Agent behavior:
You expose these tools to an LLM via an agent running inside the Kinetic web app.
User prompt:
“What is the Customer ID for my customer named Verizon?”
LLM:
- Sees it has a tool that can query customers
- Calls the tool to run zCustomer01 with a filter like Customer_Name contains “Verizon”
- Receives the results in a known, structured format (described by a JSON Schema)
Response:
“The customer ID for Verizon is 123456.”
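Under the hood, that whole turn reduces to one tool call plus a lookup in the returned rows. A sketch with a stub standing in for the real BAQ endpoint (the BAQ name, filter syntax, and field names are hypothetical):

```python
def query_customers(baq_id, filter_expr):
    """Stub standing in for a real BAQ endpoint; returns JSON-like rows."""
    rows = [
        {"Customer_CustID": "123456", "Customer_Name": "Verizon"},
        {"Customer_CustID": "789012", "Customer_Name": "Acme"},
    ]
    # Pretend the server applied the filter: Customer_Name contains 'Verizon'
    name = filter_expr.split("'")[1]
    return [r for r in rows if name.lower() in r["Customer_Name"].lower()]

results = query_customers("zCustomer01", "contains(Customer_Name, 'Verizon')")
print(results[0]["Customer_CustID"])  # -> 123456
```

Because the rows come back in a known shape, the LLM (or the host) can reliably pull out the ID field and phrase the answer.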
User prompt:
“Open this customer on my Kinetic screen.”
LLM:
- Sees it has a tool for generating deep links
- Calls the tool with CustID = 123456
- Returns a structured response, for example:
```json
{
  "responseType": "action",
  "action": "launch_app",
  "target": "https://tld/instance/customerscreen?CustID=123456"
}
```
The chat or agent host (running inside the Kinetic UI) does not treat this as a normal text response. It deterministically parses the response, sees it is an action, and launches the Customer Entry screen using that URL.
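That host-side handling can be completely deterministic: plain parsing and dispatch, no model involved. A sketch, assuming the `responseType`/`action` field names from the example above (the values and launch mechanism are illustrative):

```python
import json

def handle_llm_response(raw):
    """Parse a structured LLM response; dispatch actions, pass text through."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return ("text", raw)  # ordinary chat text: just display it
    if msg.get("responseType") == "action" and msg.get("action") == "launch_app":
        return ("navigate", msg["target"])  # host opens the screen at this URL
    return ("text", raw)

kind, payload = handle_llm_response(
    '{"responseType": "action", "action": "launch_app", '
    '"target": "https://tld/instance/customerscreen?CustID=123456"}'
)
print(kind, payload)
```

In practice the host would validate the URL against an allow-list before navigating, but the decision of what to do is made by ordinary application code, not by the model.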
None of this requires training a specialized model on Epicor data. A general purpose LLM works fine when you control its behavior using prompting, tooling, and structured response schemas.
Because training a custom LLM from scratch is extremely expensive, most vendors avoid it. Instead they rely on generic models combined with:
- Runtime data retrieval
- REST or service endpoints
- Tool definitions
- Agents
- Deterministic application logic on the client side
Epicor may choose different models or implementation details, but architecturally this is the standard pattern today, and I would be surprised if Prism were fundamentally different.
As for whether there will be a host of new endpoints: that’s a likely scenario, though I’m not sure those would be directly exposed to us (regular users).