These are the pain points (not solutions) that I shared with Epicor:
Single Source of Truth
Users have to manually manage what code is in each environment. There is no way to know the source of the code that is currently running in production. I would guess that most customers work directly in production, aka Click-Ops. Some User Developers may have a personal versioning system where they rename objects with dates, especially if there is only one developer. Once there is more than one developer (within the company, a consultant, Professional Services, etc.), it becomes very difficult to manage where running code comes from.
Loss of Development Work
There are several ways that User Developers lose work:
Clobbered Databases
Since all work is stored in the database, any restore will wipe out work in that instance. In the cloud, hardly an upgrade goes by without a refresh of Pilot wiping out work because a user developer did not export it ahead of time.
With the new Cloud Portal, we can expect more of this as users gain control over restoring instances.
Corrupted layers
As Application Studio is still maturing, there are still several ways to corrupt a layer, forcing you to restore a previous version or start over completely.
Configurator
The Product Configurator has been reported as losing work too.
Product Configurator - Numeric Box - ‘Initial Value’ is Stuck - Kinetic ERP - Epicor User Help Forum
Importing Solutions
Users who use an old Solution Workbench CAB file can accidentally clobber newer work if the CAB file wasn’t built with the latest versions.
In Directive Import, enabling "Replace Existing Group" without knowing which Groups are assigned to various directives can wipe out a lot of work, with no easy way to restore it.
Upgrades
Probably the biggest adjustment for companies is upgrading on a faster schedule. I was in the cloud for three years. As time goes by, users do get quicker at upgrades. But I do worry that they don’t check patches (and even upgrades) as thoroughly as they should. Automation would be very welcome here.
Testing
There are several levels of testing required for upgrades: end-to-end testing as well as some “unit” testing.
Pre/Post Processing
Prior to the upgrade, we would lock out the users (using DMT to set the Active Flag to False for non-Security Manager users) and run several reports: Balance Sheet, Income Statement, Stock Status, Open A/R, Open A/P, Uninvoiced Receipts, Uninvoiced Shipments, etc. After the system is released by the cloud team, we would run the same reports and compare them to make sure no financials changed. We would then reactivate some users to do a sniff test on common quote-to-cash activities.
By the way, we did this for the Pilot upgrade as well. We want to simulate the actual upgrade as closely as possible.
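The pre/post report comparison above is easy to automate once the totals are exported. A minimal sketch, assuming the report totals have already been captured into dictionaries (the report names and amounts here are illustrative sample data, not real figures):

```python
# Compare key financial report totals captured before and after an upgrade.
# In practice these totals would be parsed from exported report output
# (CSV, BAQ results, etc.).

def compare_reports(pre: dict, post: dict, tolerance: float = 0.005) -> list:
    """Return (report, before, after) tuples where a total moved more than tolerance."""
    diffs = []
    for report, before in pre.items():
        after = post.get(report)
        if after is None or abs(after - before) > tolerance:
            diffs.append((report, before, after))
    return diffs

pre_upgrade = {"Balance Sheet": 0.00, "Open A/R": 125_340.22, "Open A/P": 98_111.50}
post_upgrade = {"Balance Sheet": 0.00, "Open A/R": 125_340.22, "Open A/P": 98_611.50}

discrepancies = compare_reports(pre_upgrade, post_upgrade)
```

Running the same comparison against the Pilot upgrade first, as described above, gives two chances to catch a drifting total.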
Development
Functions
Kinetic Functions are one of the most useful tools for the User Developer/Partner. The Promote/Demote functionality is a pain point, though. Demoting a helper library to update the code can throw errors all over the system, and to just view the code, one has to demote it (unless something has changed). Having a staged version of code is an excellent idea, but that only works in REST. I don't think a Function or Directive can call a demoted Function directly in code, but I could be wrong. How should the user developer manage this? Create a UI library and then have that library call other libraries/functions? What is the error-handling story? Should the user developer cycle between A and B versions of a library for easier rollback? Should we handle 404s (the error one gets when a Function is demoted) in the UI by calling a backup Function?
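The A/B fallback idea could look something like this. A minimal sketch, where `FunctionDemoted`, the library names, and the call signatures are all hypothetical stand-ins for whatever exception the REST client raises on an HTTP 404:

```python
# Try the primary (A) Function and fall back to the backup (B) when the
# call fails with a 404, i.e., the Function is demoted.

class FunctionDemoted(Exception):
    """Stand-in for an HTTP 404 from a demoted Kinetic Function."""

def call_with_fallback(primary, backup, *args, **kwargs):
    try:
        return primary(*args, **kwargs)
    except FunctionDemoted:
        return backup(*args, **kwargs)

# Simulated A and B versions of a helper library function.
def get_price_a(part):
    raise FunctionDemoted(f"PriceLib-A/{part} is demoted")

def get_price_b(part):
    return {"part": part, "price": 42.00, "source": "PriceLib-B"}

result = call_with_fallback(get_price_a, get_price_b, "PART-100")
```

Whether the fallback belongs in a UI library or in the caller is exactly the open question above; this only shows the shape of the pattern.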
Having a test harness for Directives/Functions that could mock some data would make upgrade testing (and development) less painful.
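A sketch of the kind of harness being asked for: unit-testing a pricing rule the way a directive might apply it, with mocked lookup data instead of a live database. The rule, field names, and `price_service` interface are hypothetical:

```python
# Test business logic against mocked data rather than a live instance.
from unittest.mock import Mock

def apply_discount(order_line: dict, price_service) -> dict:
    """Hypothetical rule: unit price = base price less the line's discount."""
    base = price_service.get_base_price(order_line["PartNum"])
    order_line["UnitPrice"] = base * (1 - order_line.get("DiscountPct", 0.0))
    return order_line

# Mock stands in for a BO/Function lookup that would normally hit the database.
price_service = Mock()
price_service.get_base_price.return_value = 100.0

line = apply_discount({"PartNum": "PART-100", "DiscountPct": 0.10}, price_service)
```

The same tests could then run against each Pilot refresh to catch upgrade regressions.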
Auditing
For companies that require SOX compliance, it is challenging to prove who installed code, what it does, who approved it, and when. While seriously overkill for most, this is what Capital One does with DevOps to improve their compliance position: Governance in a DevOps Environment | Capital One
Tracking New Development
Documentation around new development doesn’t have a home.
- Who asked for changes? What business problem does it solve? Is there an Issue Log to help people in the future?
- Who else is working in this area? Will there be accidental clobbering by coworkers or consultants?
- AI can discover problems and automatically add issues.
Instance Management/Deployment
Knowing which instances are running which version of Kinetic can be very challenging. When there are multiple instances, there are (potential) steps that people have to take when copying live to another instance:
- Mark the instance as non-prod to indicate to integrations to behave differently (EDI, automatic email, Automation Studio, data ingestion to BI systems, etc.)
- Change Company Name/Colors
- Configure integrations (SMTP, Automation Studio, EDI, Credit Card, etc.) to not interfere with production environments
- Alter the Task Agent to stop running some tasks
Copying all customizations (directives, BAQs, SSRS reports, etc.) is clunky and error-prone.
Some people use direct database manipulation for this, and I have a Start Up function, but again, the source of settings lives in the database and can be clobbered.
Deployment
We can export all the source code in the world, but if it cannot be installed in an automated fashion, its usefulness is limited. When things go badly, being able to run a deployment to build or rebuild an instance would be very helpful.
Monitoring
Not being able to easily observe what is happening in the system is very painful. Is CPU high? Is memory starved? Are there hung processes? Is disk space adequate? Is it time to order more licenses?
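Even a basic probe would answer some of those questions. A minimal sketch using only the standard library; the disk check is real, while the threshold (and anything like CPU or license counters, which would need server-side access) is illustrative:

```python
import shutil

def disk_alert(path: str = "/", min_free_ratio: float = 0.10) -> dict:
    """Flag the volume when free space drops below min_free_ratio."""
    usage = shutil.disk_usage(path)
    free_ratio = usage.free / usage.total
    return {"free_ratio": round(free_ratio, 3), "alert": free_ratio < min_free_ratio}

status = disk_alert("/")
```

The point is less this specific check and more that customers currently have no supported place to run any of them.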
In Development
Debugging code is a slow process. Without a debugger, capturing logs, exporting the log, and then viewing the log is a long cycle. Being able to write logs to a sink that would display in (near) real time would be an improvement.
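The "write logs to a sink" idea maps cleanly onto a custom log handler. A minimal sketch where the sink is an in-memory list; a real sink would push each formatted record to a live viewer (an HTTP endpoint, a dashboard, etc.), which is the hypothetical part:

```python
import logging

class SinkHandler(logging.Handler):
    """Forward each formatted record to a sink instead of a file to export."""
    def __init__(self, sink):
        super().__init__()
        self.sink = sink  # anything with .append(); stands in for an HTTP push

    def emit(self, record):
        self.sink.append(self.format(record))

sink = []
log = logging.getLogger("kinetic.function.demo")
log.setLevel(logging.DEBUG)
log.propagate = False
handler = SinkHandler(sink)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
log.addHandler(handler)

log.info("PriceLib invoked for %s", "PART-100")
log.warning("fallback path used")
```

With something like this on the server side, the capture/export/view cycle collapses into watching the sink.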
There was a recent post on EpiUsers showing the need for better access to logs:
Application+Server Logging/Server File Download/Functions Kinetic Dashboard - Code Review / Sharing - Epicor User Help Forum
In Production
When deploying user code, or after an upgrade, it would be useful to highlight new errors that appeared since the changes were made in code or infrastructure. Non-error logging would be helpful as well: "Is MRP taking longer to run?", "Are there new non-fatal errors?", "Are BAQs running longer?", "Is license usage the same?", etc.
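"New errors since the change" is a diff of error fingerprints. A minimal sketch, with illustrative log lines; a fingerprint here just strips volatile numbers so the same error class matches across occurrences:

```python
import re

def fingerprint(line: str) -> str:
    """Normalize a log line so repeat occurrences compare equal."""
    return re.sub(r"\d+", "#", line)

def new_errors(before: list, after: list) -> list:
    """Return post-change lines whose fingerprint was never seen pre-change."""
    known = {fingerprint(line) for line in before}
    return [line for line in after if fingerprint(line) not in known]

before_deploy = ["Timeout waiting on task 4411", "BPM: null Part on order 1002"]
after_deploy = ["Timeout waiting on task 5522",
                "Function PriceLib not found (404)"]

fresh = new_errors(before_deploy, after_deploy)
```

The pre-existing timeout is filtered out even though its task ID changed; only the genuinely new error surfaces.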
Business Process Monitoring
Utilizing a monitoring system could improve business process adherence. Instead of sending out emails for everything, track objects in invalid states and perform a single notification: parts without standard costs on open purchase orders, material not issued to jobs with part completions, sales order lines with low margins, etc.
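The single-notification idea above can be sketched as a digest builder. The check names and offending objects are illustrative; in practice each check would be a BAQ:

```python
# Collect invalid-state objects from several checks and build one digest
# instead of sending an email per object.

def build_digest(checks: dict) -> str:
    lines = []
    for name, offenders in checks.items():
        if offenders:
            lines.append(f"{name} ({len(offenders)}): {', '.join(offenders)}")
    return "\n".join(lines) if lines else "All clear"

checks = {
    "Parts without standard cost on open POs": ["PART-17", "PART-92"],
    "Jobs completed without material issued": ["J-1001"],
    "SO lines under margin floor": [],
}

digest = build_digest(checks)
```

Checks with no offenders drop out entirely, so a quiet system produces a quiet (or empty) notification.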
Security
It would be wonderful to have testing data that is not actual live data. This is a bigger topic, but it is a security concern to share data with 3rd parties like Epicor and even user developers. It is a pain point but one that could be eased if objects could be exported, anonymized, and imported. The onus would be on the users to build their test data, but having tools to help build a test/dev environment would reduce this pain. The same test data would be used to check for regressions during upgrades.
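One building block for such tooling is deterministic anonymization, so relationships survive (the same customer always maps to the same alias) but real names and emails never leave the environment. A minimal sketch with illustrative field names:

```python
import hashlib

def alias(value: str, prefix: str) -> str:
    """Deterministic pseudonym: same input always yields the same alias."""
    digest = hashlib.sha256(value.encode()).hexdigest()[:8]
    return f"{prefix}-{digest}"

def anonymize_customer(row: dict) -> dict:
    out = dict(row)
    out["Name"] = alias(row["Name"], "CUST")
    out["Email"] = alias(row["Email"], "user") + "@example.com"
    return out

rows = [{"CustNum": 1, "Name": "Acme Corp", "Email": "ap@acme.com"},
        {"CustNum": 2, "Name": "Acme Corp", "Email": "ar@acme.com"}]

anon = [anonymize_customer(r) for r in rows]
```

Keys like CustNum are left intact so the anonymized data still joins across tables during regression testing.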
Configuration Management
Leaning on Configuration as Code would remove the pain of configuration management. If a repository had a list of current Security Managers, the system could check if new ones were added without proper vetting. Keeping settings in code could test for configuration drift to make sure the software’s behavior isn’t altered without proper change management.
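The Security Manager check above reduces to a set diff between the committed list and the live one. A minimal sketch with illustrative usernames; the live list would come from a BAQ over the user table, and the committed list from the repository:

```python
# Detect security-manager drift: who is live but not approved in the repo,
# and who is approved but missing from the system.

def drift(committed: set, live: set) -> dict:
    return {"unapproved": sorted(live - committed),
            "missing": sorted(committed - live)}

committed_managers = {"jdoe", "asmith"}
live_managers = {"jdoe", "asmith", "consultant1"}

report = drift(committed_managers, live_managers)
```

Run on a schedule, a non-empty "unapproved" list becomes the change-management alarm.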
One might also be able to lean on external authorization providers to offer just-in-time security. For example, if a user is on vacation and someone else is doing their job, the capabilities would exist for a given period of time and then be automatically removed. At one point, the Cloud Team added that as a capability for support staff to access a system for a given period of time.
This would be one way to remove the need for impersonation. If someone has access to the repository, simply committing a change would start a process that adds the capabilities, without anyone directly granting access to those people.
AI
Can’t escape this topic! How can user developers take advantage of the new capabilities?
- Prompts to replace wizards: ensure formats of fields, mandatory flags, etc.
- The Recorder requires the developer to manually run the browser to record scripts. Tools like Playwright have an MCP server that navigates pages and creates tests. They also have CLI versions that are more AI-friendly.
- While code in SaaS databases can be scanned to train AI models, we're missing out on the on-prem code. If source were stored in a central repo like GitHub, it could be added to the corpus if companies voluntarily shared it.