I am working on DMT loads for a go-live and have found the need to switch versions as I work through the list.
Latest one is loading Quote Header - failed on version 126.96.36.199 but worked fine with 188.8.131.52
Noted that Playlist for Purchasing (others?) not working on 184.108.40.206 but worked on 220.127.116.11
Earlier I mentioned BOMs failing on 18.104.22.168 but working on 22.214.171.124
Could we start a list somewhere? Or even a category?
I just stopped updating DMT. Up until a couple of months ago it was rock solid and I just let it update without thinking about it. The version that I have now is 39, but we primarily use it for updating comments on method operations and setting labor standards, so we don’t use it (regularly) for many of the other things it can do.
I really hope they can figure out why it keeps getting messed up.
My opinion: Epicor needs to learn best practices with its regression tests as it encourages customers to migrate to its cloud products. My company uses the Epicor Multi-Tenant Cloud (currently DMT 126.96.36.199 / Epicor Smart Client 10.2.300.12). It is nice to never need to worry about backups, capacity management, server patches, etc. But without a truly robust test cycle, errors can (and do) creep into the system.
While I’ve never seen Epicor’s source code, I know from other systems that adding meaningful automated tests to legacy products (legacy=designed without automated testing in mind) is very difficult.
Yes!!! Another Michael Feathers follower I presume?
DotNetIT (the company that Epicor purchased) wrote DMT and @edge who came from there is now a VP at Epicor. There have been some quality issues recently with DMT and maybe the coding is moving from the UK to other groups at Epicor. Epicor DOES do automated testing but there is NO way to test every combination that each company will try in Epicor. Some testing must come from us - the users.
I wish there was a free version of the ATE utility and a premium version (that does recording for example) so that we can share tests with Epicor and each other.
Paul, the issue you will find is the volume of the source code when you review it. When we went through the conversion to E10, the ‘core’ application services and framework were over 100 million lines of code - with no add-on ‘extension’ products. And we have not been at a standstill in the explosion of new efforts since 10.0; I would guess we have at least doubled that code base since.
We have a massive automated testing system in place, with a variety of automation approaches based upon the product and technology. TFS / VSTS / Azure DevOps is in place for automated ‘unit testing’. We also have ATE and Protractor, along with a few other automation tools we have tried for specific projects and still have in place (I’m not going to remove test coverage even if it’s not a go-forward automation platform!).
We have taken some inspiration from the VSTS folks in their ‘journey to the cloud’. We have met with them many times, as we are one of the largest, if not the largest, dotNet products on the planet. They have an excellent series by the way - https://www.youtube.com/playlist?list=PLReL099Y5nRcEqPp8oMXIL2gFhFadrB8P
Their concept of ‘L0 to L3’ tests, as opposed to ‘unit’, ‘integration’, etc., is especially interesting to us in our CI / CD pipelines.
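For readers who haven’t seen it, the idea is to classify tests by their dependency footprint (and hence run time) rather than by the fuzzy unit/integration labels. A minimal sketch of how a pipeline might use such levels - the level definitions are paraphrased from the Azure DevOps team’s public materials, and the test names are made up for illustration, not actual Epicor tests:

```python
from dataclasses import dataclass
from enum import IntEnum

class TestLevel(IntEnum):
    # Rough meanings per the Azure DevOps team's public taxonomy:
    L0 = 0  # fast, in-memory unit tests; no external dependencies
    L1 = 1  # unit tests that may touch a database or filesystem
    L2 = 2  # functional tests against a deployed, stubbed service
    L3 = 3  # end-to-end tests against a production-like environment

@dataclass
class TestCase:
    name: str
    level: TestLevel

def select_for_stage(tests, max_level):
    """Pick the tests a given pipeline stage can afford to run."""
    return [t for t in tests if t.level <= max_level]

# Hypothetical suite for illustration only.
suite = [
    TestCase("parse_quote_line", TestLevel.L0),
    TestCase("quote_header_sql_roundtrip", TestLevel.L1),
    TestCase("dmt_bom_import_service", TestLevel.L2),
    TestCase("full_order_to_cash", TestLevel.L3),
]

# A pull-request gate might run only L0/L1; a nightly run takes everything.
pr_gate = select_for_stage(suite, TestLevel.L1)
nightly = select_for_stage(suite, TestLevel.L3)
```

The appeal is that the gate for each stage is an objective property of the test, not a naming convention.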
The time-based categorization is important considering the breadth of the testing undertaking. I have mentioned previously the amount of automation we do internally; one of the major consumers is the regression test coverage and code coverage analysis we do nightly. I have not checked recently, but at last glance we were doing 27 server deployments nightly to run all our core automation coverage. We use SonarQube to track quality over time and do some coarse static code analysis as well. I doubt our customers or management would let us go dark and do nothing but test creation for a year, at which point we would probably still not have 100% code coverage across the suite.
It is increasing, which is the critical aspect. We have been focused on targeting efforts in a few areas. The first was patch reliability: it’s been a while since I have seen a major revolt over applying a patch (e.g. 400.5 to 400.6). Not perfect by any means, but it’s extremely stabilized. When issues do occur in a patch, there are now more checks in place to prevent regressions, and we can keep improving on those.
New product development is held to higher thresholds, as it does not carry the almost thirty-year-old code base that we are still refactoring for testability. I think the oldest code I saw during the 9 to 10 conversion was from 1993 - I don’t think NUnit was around back then. We actually have cross-company weekly meetings to discuss testing approaches, refactoring and general code quality. The learning / mentoring / growth environment is very vibrant as we improve these areas.
The issue I have seen over and over is how heavily issue detection depends on raw data. That can be, and is being, mitigated through changes to our coding style, with appropriate refactoring to ensure better code effectiveness. That has to be balanced against destabilizing the code base or introducing performance issues, which is definitely not allowed.
Lastly, I have to mention the telemetry we have been putting in place over the last couple of years. Massively helpful. It lets us know how many times a form, a service, or a report is run, the number of times an error occurs, and the clicks through the system - if users always click this form and this tab, then that form and that tab, it sounds like our form layout is not as efficient as it should be. Those tools have been fantastic at getting hard numbers to back up where to put efforts. There was a lot of review to address privacy concerns and avoid collecting unintended data, but reach out if you have concerns and are not opting into the telemetry ecosystem.
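The core of that kind of usage telemetry is just counting events. A toy sketch of the idea - the event stream and form/tab names here are invented for illustration and have nothing to do with Epicor’s actual telemetry pipeline:

```python
from collections import Counter

# Hypothetical click stream: (user, form, tab) records collected client-side.
events = [
    ("u1", "QuoteEntry", "Lines"),
    ("u1", "QuoteEntry", "Summary"),
    ("u2", "QuoteEntry", "Lines"),
    ("u2", "OrderEntry", "Releases"),
    ("u3", "QuoteEntry", "Lines"),
]

# Aggregate by (form, tab); users are deliberately dropped before counting,
# which is one simple way to avoid retaining who did what.
usage = Counter((form, tab) for _user, form, tab in events)

# The most-visited tab is a candidate for moving earlier in the layout.
hottest_tab, hits = usage.most_common(1)[0]
```

Even this trivial aggregation answers the “everyone clicks this form then that tab” question with hard numbers rather than anecdotes.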
As a multi-tenant cloud customer, I have no say over WHAT gets deployed or WHEN. As a United States-based customer, many recent features (e.g., Legal Numbers) add no value to me; any breakage is simply value reduction. To ensure a smooth migration, I’d pick quality over time-to-release almost every time. So, yes, I would vote for a pause in the Multi-Tenant SaaS cloud deployments if that would reduce post-deployment frustrations.
On a quasi-related note, DMT disappears from the client with each significant Multi-Tenant SaaS release; this feels unnecessary if these changes were made:
All DMT code objects (e.g., DMT.exe) could be migrated to the smart client deployment folder; everyone would get them
To maintain license compliance (I must copy a file named DMT_<MyCompanyName>.lic to C:\Epicor\ERPMT\Client with every release), the license itself could be moved to the server controlled by Epicor; no license = DMT won’t instantiate
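The gate being proposed above is simple in principle: at startup, look for the company’s license wherever it lives, and refuse to run without it. A minimal sketch, assuming the file-naming pattern from the post (DMT_<MyCompanyName>.lic) - the function names and the idea of an injectable existence check are mine, not anything from the actual DMT code:

```python
import os

# Client-side folder from the post; under the proposal this lookup would
# happen on an Epicor-controlled server instead.
LICENSE_DIR = r"C:\Epicor\ERPMT\Client"

def dmt_license_path(company: str) -> str:
    """Build the license file path per the DMT_<CompanyName>.lic pattern."""
    return os.path.join(LICENSE_DIR, f"DMT_{company}.lic")

def can_instantiate_dmt(company: str, exists=os.path.isfile) -> bool:
    """No license file => DMT refuses to instantiate.

    `exists` is injectable so the check can be pointed at a server-side
    store (or stubbed in tests) instead of the local filesystem.
    """
    return exists(dmt_license_path(company))
```

Moving the check server-side would keep the compliance guarantee while removing the copy-the-file-after-every-release chore.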
By the way, as a former software developer, I certainly sympathize with hundreds of millions of lines of 30-year-old code. Been there/done that. Let me tell you about supporting an IBM S/360 application running in compatibility on a DEC VAX… That was “fun.”
As a quasi-related note to the quasi-related note, the DMT “Cloud” version is just the DMT local version. I would like to see a real cloud version of DMT that runs as a service in Azure. Customers would access DMT as a service, so there is nothing to install - just like Epicor ERP. Companies would move their files into the region where Epicor is located and get quick runs instead of operating slowly through the business objects over the wire. As it stands, the DMT “cloud” version is priced at a premium for much lower performance.
Multi-Tenant cannot die fast enough. It reflects poorly on the Epicor Cloud and it is certainly hampering Epicor’s Cloud growth. In the Public Cloud, you cannot choose WHAT is deployed (then again, neither can on-prem users), but you do have some flexibility as to WHEN if you get the Flex option in the Public Cloud version of Epicor ERP. For example, 10.2.300 arrived in October but we went live with it in February.
Mark - that’s a fantastic idea. We could use that to help EpiCare support test things. If the ATE scripts can be saved in a portable format like BAQs and BPMs, we could turn on record, do the steps that show the error, then attach the recording to an EpiCare case. The analyst could then replay it in their environment to see the issue. Or vice versa - they could send a script back that steps through their method. We could also use it to record an issue in LIVE, then replay it in the edu db to see whether it’s our customizations, our db, or native Epicor.
Hey, I started on VMS! I got my MCSE on DEC Alpha NT, so I guess that dates me.
And as far as MT goes - the timing aspect is a HUGE concern and efforts are underway on that. Maybe if you corner me at Insights I can hear some of your comments and see if they align with the efforts I am leading.
Specifics are welcome in the feature suggestions up here as well. I just came out of weeks-long planning efforts, and the E10Help suggestions were part of the input (along with EUG, Customer Advisory, and all the other things you would consider normal).
Running on the Government Cloud - 10.2.300.12
Tried to load BOMs - 4,000 rows.
Version 4.0.41 - locked up at 3,000 rows - 13 RPM
Version 4.0.40 - didn’t run - 0 RPM
Version 4.0.39 - projected 6 hours - 3 RPM
Version 4.0.35 - projected 48 minutes - 69 RPM (I’m running with .35 right now)
Let’s see how it goes.
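For anyone sanity-checking those projections: assuming RPM here means rows per minute, a naive estimate is just total rows divided by throughput. A quick sketch (the numbers won’t match the tool’s projections exactly, since those were presumably taken mid-run):

```python
def projected_minutes(total_rows: int, rows_per_minute: float) -> float:
    """Naive time projection: assumes throughput holds steady for the run."""
    return total_rows / rows_per_minute

# 4,000 BOM rows at the throughputs reported above:
#   69 RPM -> roughly 58 minutes
#   13 RPM -> roughly 5 hours
#    3 RPM -> roughly 22 hours
```

The spread between versions is the point: at 3 RPM the same file takes roughly 20x longer than at 69 RPM, which is why version choice matters so much at go-live.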
I just have Quotes, Orders, Jobs, Purchase Orders, Labor, File Links, Part Cost, Part Qty, and WIP to load - we will see how it goes!
More on Sunday after the go live.
Embedding DMT into the standard client deployment and making its license work within the standard licensing subsystem - along with adding a configuration on the user file to allow DMT’s use, for further lock-down - is in the works with the license and lock-down elements in 10.2.400.
Bruce - with regard to the issues you are having with the latest releases, can you drop me an email and I will get you directly connected to the developers? The DMT team has not changed for the last 2 years, so we will dig into this, as there may be other environmental aspects at play here. I assume for the 4,000 rows you have them split up into different ECO Groups?
Thanks - I will get a note to you on my findings from the upload from Vantage to E10.
Will also see you at Insights.
As others on the thread have said, Epicor is working to improve the software which is appreciated.
The key point is that when something is broken, it is a known bug, and a prior version works, that is what would be really helpful to know. When we are doing uploads, we trust that the software is correct, and we can spend hours trying different configurations just to find out from support that a certain version cannot upload a file type and that we should use an earlier one.
That is where it is very frustrating for the one doing the DMT loads and the users looking over their shoulder.
New issue to report - in the past, once you were logged into Epicor, you typically stayed logged in.
I just had a 9-hour QuoteQty load that I started last night fail because my password expired today!
The process just kept running and all records failed after midnight.
I may be able to use the reprocess CSV, but have not had good luck with the CSV.
(Edit - I was able to go into my Power Query Editor in Excel and modify the query to show quotes >= the first quote that was failing.)
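That same filter can be done outside Excel if Power Query isn’t handy. A sketch, assuming the load file has a QuoteNum column and you know the first quote number that failed (both the column name and function are hypothetical, not part of DMT):

```python
import csv

def rows_to_retry(source_csv: str, first_failed_quote: int):
    """Yield only the rows at or after the first quote that failed,
    mirroring the 'quotes >= first failing quote' Power Query filter."""
    with open(source_csv, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["QuoteNum"]) >= first_failed_quote:
                yield row
```

Writing the surviving rows back out gives a smaller file to re-feed DMT without hand-editing the original.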
To fix this, I will probably have to rerun the script versus trying to unwind the issue.
This is not helpful at go live time.
Put it in your checklist to make sure your account will not expire at cutover time.
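That checklist item is easy to automate as a pre-flight check before kicking off a long load: refuse to start if the password would expire before the run (plus a margin) finishes. A sketch - the function and parameters are mine, and where you get the expiry timestamp from (AD, your identity provider, etc.) is left as an assumption:

```python
from datetime import datetime, timedelta
from typing import Optional

def safe_to_start_load(password_expires: datetime,
                       estimated_hours: float,
                       now: Optional[datetime] = None,
                       margin_hours: float = 2.0) -> bool:
    """Return False if the account's password would expire before a
    long-running load (plus a safety margin) is projected to finish."""
    now = now or datetime.now()
    projected_finish = now + timedelta(hours=estimated_hours + margin_hours)
    return password_expires > projected_finish
```

Run against the 9-hour load above, a check like this would have flagged the midnight expiry before any rows were submitted.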
Did you not get a message that your password would expire in X number of days? (We do not use password expiration… but thinking about it…)
If not, maybe add this to the list: a little message at the top of the login indicating the password will expire in 3, then 2, then 1 days… at least the user will be aware and can change it before the end date!
My experience with the password expiration notifications is that they don’t always appear. My company uses DMT and Information Worker, both of which authenticate to Epicor separately from the Smart Client. Neither of these products appears to support notifications. (Side note: Information Worker re-authenticates many times per day, so long as Outlook is open.) The end result is that most of my users are first notified about password expiration only after it is too late.
I hope Epicor intends to finish rolling out a password notification system across related products, as well as a self-service password reset option.