Upgrade to 10.2.700 Issues

We need to expedite an update of Epicor from 10.2.500.19 to 10.2.700.9. I reviewed the Release Guides and Feature Highlights for .600 and .700; other than some of the financial updates in .600, I didn't see much else that was applicable. Our Finance group is the one pushing this update to implement DocStar AP Automation, so they are ready, willing & able to test the release. The other departments, however, are not as eager or "available".

So my question is: can we perform this update with little UAT? We are a little heavy on the customizations (Service Connect, C#, LINQ, handhelds, APM, etc.). I spun up a new app server with 10.2.700.9, tested the Service Connect integrations and the reports, and everything seems to be running as expected.

I would get full user acceptance testing if I were you, but that’s just me…


You only need to test the things you want to work properly. :wink:


We just moved from 500.26 to 700.9 … we found quite a few things in our customizations that were "broken".

With your description, I would say you need to do a full UAT before moving to prevent major headaches.


@AlexanderDelarge We are prepping for AP automation in 10.2.400 and will move up to 600/700 after that. What is driving the move before automation?

DocStar had "issues" in .600 and, I believe, earlier versions of .700. We engaged Epicor in the fall of 2020 to implement AP Automation. They said we needed to be on .600 to take advantage of a feature set .600 offered. This deployment was delayed when .700 was released, because .700 broke AP Automation and Epicor diverted their deployment/development teams to "fix" it. They basically told us last week that we needed to go directly to the latest release (.700.9) if we wanted it to work without workarounds.

@AlexanderDelarge sounds like you need management intervention.

We upgraded this past weekend from 10.2.500.9 to 10.2.700.6. Not too many issues, but some BPMs needed tweaks. The biggest one I've got currently is that several customizations and personalizations fail validation, or their status is missing (valid for 10.2.700). The workaround for that one is to force-verify them in Customization Maintenance.

It seems the affected customizations have warnings about overlapping fields, which should be a cosmetic issue, but it causes Epicor to remove the status, which prevents the customization from loading.

If I were you, I'd test everything that would impact the business if it can't wait a few days for you to figure out and fix. You don't want to stop shipping!


On our last upgrade we had a test environment where we could work through all the changes beforehand, so you can take your time to correct all the misfits; then, when the upgrade happened, we copied the modified customizations into prod. It went smoothly apart from some BAQs timing out (which we did not test… ;(


@askulte good to know. I remember this was also an issue in previous upgrades I have done.

We are on 10.2.500.8 and will go to 10.2.700 next so it is nice to know you had success with it.


I would copy your live system to a test environment. Do the update on test to see what errors get generated during the update. Once you address those errors, have the users go into the test system and generate some transactions / reports. If you want to speed up the process, have them at the very minimum perform what they consider "essential" transactions / reports. If the essentials work, most likely you will be able to correct other issues on the fly. Just upgrading blindly is risky, since EVERYONE is different.


One thing I noted is that certain menus' arguments get wiped, so if you have customizations launched from those menus, they get wiped too, on certain screens only.

For example, in my scenario the arguments list got completely overwritten by -KineticUI.


I wonder if anyone knows if we should check this checkbox on-prem. Doesn’t provide any field help.

Found more info in Application Help

The JobEntry BO's GetDetails is now GetDetailsWarningMsg. If you have a BPM triggering on GetDetails via the UI, you also need to duplicate your code from GetDetails to this new method. [ FAIL ] It's not documented in the BO Reference Guide, and I couldn't find it in any notes like the Change Lists. It looks like it was introduced in a patch level of .20+ at least, definitely a breaking change for a minor patch.
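For anyone hitting the same gap: the practical workaround is to attach the same directive (or the same custom code) to both methods until you can retire the old one. A minimal sketch, assuming a standard post-processing custom-code block; the tableset variable `ds`, the RowMod filter, and the UD field are illustrative only, so check your own trace for the real signature before copying anything:

```csharp
// Post-processing custom code, attached identically to BOTH directives:
//   Erp.BO.JobEntry.GetDetails           (older clients / integrations)
//   Erp.BO.JobEntry.GetDetailsWarningMsg (the UI now calls this directly)
// Everything below is a placeholder for "your existing GetDetails logic".
foreach (var jobHead in ds.JobHead.Where(j => j.RowMod == "A" || j.RowMod == "U"))
{
    // e.g. stamping a UD field so downstream processes know details were pulled
    jobHead["ShortChar01"] = "DetailsPulled";
}
```

Keeping the two directives byte-identical makes it easy to delete the GetDetails copy later, once you confirm nothing external still calls the old method.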

@timshuwy should customers be expected to fully test their entire workflows after every upgrade? (Even 10.2.700.2 to 10.2.700.20?) We can usually script something with ATE. Also, is there a different file that would show these new changes? The ChangeList no longer has that tab.


This is why I have to test all workflows no matter what the patch is.

This is why we don't (and won't) do dot-level upgrades casually. We bought the Kool-Aid when Epicor said they wouldn't change anything in the underlying structure. Upgrade with confidence, they said. We had a small bug fix in a later .release, so we upgraded. It broke counter sales for a week, until Epicor discovered the dot release had changed something in Company Configuration… So yeah… Management will NEVER consider doing a dot-level release now without full upgrade testing.

That testing takes us a full 8 weeks for a quote-to-cash cycle, with every department involved, testing each of their customizations and processes. We now do this once a year, and lock in the latest version at the time final Q2C testing starts. Then we live with the bugs (err, quirks) the rest of the year… Testing is too large of a burden to do frequently.

@hkeric.wci - How difficult is it to learn ATE? How much time does it take to program each scenario? I’m thinking each BPM would need several tests to validate them for each upgrade. IMHO, Epicor should include it at no charge…

Did you mean, “Testing manually is too large a burden to do frequently?”

I am curious how much effort companies put into automated testing. When you're using Epicor Functions, there are plenty of tools that can help you test those out without user intervention. (Postman, PowerShell, .NET Interactive notebooks, … ATE only works for the .NET client :frowning: )

If one subscribes to modern programming methods like Test Driven Development, then many of these automated tests will already exist and will just have to be run when testing is required.
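To make that concrete (not the poster's actual setup, just an illustration of the idea): a rough console smoke test that calls an Epicor Function over REST, in the spirit of the Postman/PowerShell suggestion. The server URL, company, library/function names, payload, and API key below are all placeholders, and the `/api/v2/efx` route should be verified against your instance's Swagger page.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class FunctionSmokeTest
{
    static async Task Main()
    {
        // Placeholders -- substitute your own server, company, library, function.
        var baseUrl = "https://your-appserver/YourInstance";
        var route   = "/api/v2/efx/YOURCO/TestLib/CheckPart";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("user:pass")));
        client.DefaultRequestHeaders.Add("X-API-Key", "your-api-key");

        var body = new StringContent("{\"partNum\": \"TEST-001\"}",
                                     Encoding.UTF8, "application/json");
        var resp = await client.PostAsync(baseUrl + route, body);

        // Fail loudly so a scheduled task or CI job flags an upgrade break.
        Console.WriteLine(resp.IsSuccessStatusCode
            ? "PASS: " + await resp.Content.ReadAsStringAsync()
            : "FAIL: HTTP " + (int)resp.StatusCode);
    }
}
```

Run a handful of these after each patch and you at least know your Function endpoints still answer, though, as noted above, they won't catch UI-side changes like a screen switching to a different BO method.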

My DevOps message of the day…


I have a function, and the function works… but the function is triggered by JobHead.GetDetails, which still exists, while the UI is now calling JobHead.GetDetailsWarningMsg – so my function still works… but Epicor's triggers don't :slight_smile:

So even if I automated an Epicor Trace test, it would pass; however, the UI is the one now calling a different BO method. Unit tests would be golden, but you still end up needing something like Selenium/ATE to be accurate.

There are now a few additional UpdateExt's which fire; you might need to do a BPM on UpdateExtRowMod now.


@Mark_Wonsil - I’d love to see how we could automate our testing. Are you able to demonstrate something?

If I can get a break even ROI on it with a few upgrade cycles, I’d definitely consider it. Since I’m not a power user with those tools, my learning curve and programming time would be thousands of hours, I’d imagine… And our end users and managers would then expect it to catch any scenario if their testing is replaced with an automated tool.

I could only imagine how long it would take to program each test, let alone a full quote to cash, and then program validation checks to make sure everything is kosher… Heck, it takes enough time to program a simple BPM to check one box!


Yes, you should have test scripts to validate your BPMs, and obviously, the more BPMs and customizations you have, the more that needs to be tested. There is always the possibility that something happens behind the scenes that unintentionally breaks your custom code.
Regarding this specific issue, I checked with Development. I found that there was a need to add new parameters to the GetDetails business object for the new UI, but they didn't want to break or detach the old GetDetails functionality, since it is heavily used. So… GetDetails was modified to call the new GetDetailsWarningMsg process using the new parameter(s). Any process that calls GetDetails will still work without adding the new parameter.
But once the new process was created, they also needed to call it directly (instead of through GetDetails) from the UI… and this had the unintentional result of skipping your BPM. It was missed in the release documentation.
Your choice COULD be to simply move all the BPM logic from GetDetails to GetDetailsWarningMsg, since it should always be triggered. Once moved (and tested), you can get rid of the code in the GetDetails BPM.
