Data Migration Best Practice Feedback

We are in the validation phase of our project and are now planning our go-live steps and their timing. I wanted to start a conversation with the extended Epicor Community about faster ways of importing data (Part, PartRev, PartWarehouse, PartBinInfo, etc.) and BOMs besides using DMT. What are the community's thoughts on Ascend?

What is that?

It’s a 3rd party data migration tool. https://www.ascend.io/

Ascend looks like an ETL tool rather than a data migration tool.

However you configure Ascend, you will still need to access Epicor’s business objects via something like the RestAPI to get data into Epicor in a state that Epicor will support.
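To make that concrete, here is a minimal Python sketch of composing a business-object endpoint URL in the style of Epicor's REST API. The host name, company ID, and the exact path segments (`api/v2/odata`, `Erp.BO.PartSvc`) are assumptions based on common Kinetic installs; verify them against your own server's Swagger/help page.

```python
from urllib.parse import quote

def bo_url(base: str, company: str, service: str, entity: str) -> str:
    """Compose a business-object endpoint URL, e.g. for Erp.BO.PartSvc.

    Hypothetical path layout -- check your server's REST help page,
    since API version and routing differ between Epicor releases.
    """
    return f"{base}/api/v2/odata/{quote(company)}/{service}/{entity}"

url = bo_url("https://epicor.example.com/EpicorERP", "EPIC06",
             "Erp.BO.PartSvc", "Parts")
print(url)
```

Whatever tool assembles the data, something still has to push each record through a call like this (with authentication and error handling on top), which is the work DMT already wraps for you.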

It sounds like a lot of work when DMT does all this for you out of the box.

You can even script your migration, using PowerShell for example, to streamline your data migration testing and go-live.
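As a rough illustration of scripting a DMT load (the post mentions PowerShell; the same idea is shown here in Python for consistency), here is a hypothetical helper that builds a DMT command line. The switch names (`-NoUI`, `-User`, `-Pass`, `-Import`, `-Source`, `-Add`, `-Update`) are assumptions; confirm them against your DMT version's command-line documentation.

```python
import subprocess

def dmt_command(dmt_exe, user, password, template, source,
                add=True, update=True):
    """Build the argument list for one scripted DMT load.

    Assumed switch names -- verify against your DMT version's
    documented command line before relying on this.
    """
    args = [dmt_exe, "-NoUI",
            f"-User={user}", f"-Pass={password}",
            f"-Import={template}", f"-Source={source}"]
    if add:
        args.append("-Add=true")
    if update:
        args.append("-Update=true")
    return args

cmd = dmt_command(r"C:\Epicor\DMT\DMT.exe", "manager", "secret",
                  "Part", r"C:\migration\part.csv")
# subprocess.run(cmd, check=True)  # uncomment to actually launch DMT
```

Chaining a series of these calls in dependency order is what lets you rehearse the whole migration repeatedly before the real cutover.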

Is there any reason not to use DMT?

Andrew: Thanks for your feedback. You're correct. Our concern is still the total time the DMT data load is taking us, given the amount of data that has to be loaded.

If you are saying that a weekend is not enough time due to the amount of data, is there any reason why you couldn't load the bulk of the data earlier and then monitor for changes and update accordingly? Maybe put an embargo on changes in your current system for the last week to reduce the incidence of this?


How many slices are you running in the DMT? For BOOs and BOMs I think I was running 10 simultaneous uploads. I had other things I would put in groups on several virtual PCs.

No, Ascend is Epicor's rebranding of what used to be called their Cirrus upgrade program; it performs the database conversion for you.


That's exactly what we did. I prepared queries on both systems that fed into scripted comparisons which output any unmigrated or modified records. Where testing discovered broken DMT methods, I scripted REST-formatted outputs. Everything was planned into dependency-ordered checklists to keep cognitive space free to deal with anything unexpected. I eventually added 'delete' outputs too, so I could partially or fully reset Pilot at will instead of waiting on support, which let me iterate testing faster.
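The scripted-comparison idea described above can be sketched roughly like this: a hypothetical Python helper that diffs keyed exports from the legacy and new systems and reports unmigrated or modified records. The field names and key format are illustrative only.

```python
def diff_records(source: dict, target: dict):
    """Compare keyed record dicts exported from two systems.

    Returns (missing, changed): keys absent from the target system,
    and keys whose field values differ between the two systems.
    """
    missing = sorted(k for k in source if k not in target)
    changed = sorted(k for k in source
                     if k in target and source[k] != target[k])
    return missing, changed

# Illustrative exports: legacy system vs. the new Epicor environment.
legacy = {"PART-001": {"Desc": "Widget", "UOM": "EA"},
          "PART-002": {"Desc": "Gadget", "UOM": "EA"}}
epicor = {"PART-001": {"Desc": "Widget", "UOM": "EA"}}

missing, changed = diff_records(legacy, epicor)
# missing -> ["PART-002"], changed -> []
```

Feeding the `missing` and `changed` lists back into fresh DMT or REST loads is what closes the gap between the bulk load and the cutover snapshot.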

Data imports required around 200 hours. I let those trudge along in advance, migrating active data because I was confident I had proven scripts to close the gaps later. This approach enabled stunts like supporting consecutive training days in a Pilot system that was migrated current to each morning.

Actual Friday → Monday migration was a breeze. I had spare time. Went for a walk in the woods. Went out for bibimbap. Easy peasy.
