How to Get More Details of Issues from Change List

I was reviewing the Change List and came across Jira Issue ERPS-92822.

Improved performance of the Inventory WIP Reconciliation report for posted records

I have a ticket in with Support for more information, but I am trying to understand what the performance baseline is. In other words, how bad does a number have to get before it is accepted as an issue? And how much does it have to improve before it is marked fixed and acceptable?

I don’t know if there is an official statement from the company or not. In general, I have always seen performance issues framed in one of two ways:

  1. It affects your business because of ‘x’. e.g. a fundamental part of your business is failing due to the performance of a feature.
  2. It is slower than the previous release.

That’s just my experience on the inside when a perf issue makes its way to me. Most of the time it comes from an internal tester or cross team, but sometimes from the wild as well.

Each performance issue is different, but broadly they follow this pattern:

XYZ process currently takes X time with my data at Y version and I feel it should take Z time

Many times that Z time is the amount of time it took in a previous release. (number 2 in Bart’s listing).

This specific Jira number/SCR concerns the Inventory WIP recon report taking 5 hours in 10.2 with the customer’s data (4000+ pages generated), where it took ~1 hour in E905. The objective was to get it to perform in 10.2 similarly to how it did in E905, which is what this Jira number/SCR does.

That customer needs to run their Inv/WIP process more frequently!! Unless you come back and say that’s a day’s worth of transactions! Nobody can possibly check 4000+ pages for errors; break it down into smaller, more manageable chunks…

I completely understand the very, very messy details around “performance”. How can we get the additional details from Jira that are lost in the Change List descriptions without having to request them for every closed issue?

Speaking for myself only:

What would be the value to most people of more context about performance issues in the change list, given the way Epicor is positioning point updates (no breaking changes to RDDs, UI customizations, BPMs, etc.)? If someone is on a point update on a given release, Support really does expect our customers to apply the latest point update in their testing environment at some point to determine if it resolves the issue. Ideally this happens before submitting an issue, as that really saves our customers time: if the problem is addressed in the latest update, you don’t even need to open a case with Support, because the update is the solution.

If someone has performance issues in 10.2.200.x with a process where we have listed that we’ve done some work around that process in 10.2.200.y, why wouldn’t one just apply the update in a testing environment and test to see if it has an impact? If not, one would then submit a case to Support so we could review the concern.

If someone has a performance issue in 10.2.100.x with a process that the change list shows was changed in 10.2.200.y, I would still recommend upgrading the testing environment to that release/update. If more context was needed, a Support case could be opened to determine if there would be any impact, though one of the first three things out of Support will be “have you tested to see the actual impact?”

I guess my broader point is that for updates within a release (10.2.200.x to 10.2.200.y), if someone wanted to know whether Jira/SCR XYZ included in 10.2.200.y would have any benefit for their usage, the best (arguably the only) way to know is to actually apply it and test; more context around the issue wouldn’t validate with absolute certainty whether or not it would help.

If someone needs more information about a specific change beyond what is included in the change list, contacting Support is the only way to get it. That isn’t to say that Support can always provide more information.

EDIT: As a Support resource I should say that I do not have direct control over what or how much data is provided via this process, so even if there were really good reasons to include more info that would benefit the majority of customers, all I could do is advocate.


I agree that most customers really just want the problem fixed, and will install the newest build, test, and validate.

My point is that performance is really hard, and my assumption is that whatever caused the poor performance in this case probably exists somewhere else in the code. For example, did they rewrite the BO to use SqlBulkCopy or streaming? Why do the BOs do so poorly with large datasets like MRP, COS/WIP, and the Stock Status Report?
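For what it’s worth, the batching idea behind something like SqlBulkCopy (a .NET API) can be sketched in a few lines. This is purely illustrative Python against an in-memory SQLite database, not Epicor’s actual BO code; the `wip` table and its columns are made up for the example:

```python
import sqlite3
import time

# Hypothetical WIP-style rows, stand-ins for a large dataset.
rows = [(i, f"part-{i}") for i in range(50_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wip (id INTEGER, part TEXT)")

# Row-by-row: one INSERT statement per record -- the pattern that
# tends to crawl as the dataset grows.
t0 = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO wip VALUES (?, ?)", r)
row_by_row = time.perf_counter() - t0

conn.execute("DELETE FROM wip")

# Batched: hand the driver the whole set at once, analogous in
# spirit (not in mechanism) to a bulk-copy API.
t0 = time.perf_counter()
conn.executemany("INSERT INTO wip VALUES (?, ?)", rows)
batched = time.perf_counter() - t0

count = conn.execute("SELECT COUNT(*) FROM wip").fetchone()[0]
print(f"rows loaded: {count}")
print(f"row-by-row: {row_by_row:.3f}s, batched: {batched:.3f}s")
```

Both paths load the same data; the difference is how much per-statement overhead the database pays, which is exactly the kind of cost that balloons on 4000-page datasets.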

In many of the cases I have been pulled into over the years, it’s the unexpected. A 4000-page report? I’d say that’s not normal, as Mark mentioned.
Once we see a need for a design change, it gets reviewed with new parameters. That is the norm in software. Otherwise you see devs over-engineering code with no practical need, while other areas where customers do feel pain get shorted. That’s just resource management for expected need, as in any process.
Responding quickly, in a manner a customer can rely upon, is what is critical. The patch-process improvement has helped a ton, and most give us good feedback, which I truly appreciate.