Reducing DB size, but users want historical data access

Hello All,

I’ve been tasked with running the ‘purge and summarize’ utility, and I’ve been doing my due diligence on how far back we’re supposed to hold onto our data. Based on the compliance guidelines that I’ve found, I have my timeline. After reviewing this with some department heads, there’s a desire to be able to reference all the old data for historical trend analysis.

With all that being said, I understand the need for keeping the historical data, but I also understand the need to clean up the database for performance reasons. We’re leaning more towards the Epicor Cloud solution, so this also presents several challenges. My compromise to the business was that we’d hold onto ‘x’ number of years of data in production, then do a database download every few years and upload it into a separate historical database onsite so that the business can do whatever analysis they want.

My question to everyone here is has anyone else been in this situation before and how did you resolve it?


We moved to the cloud several years ago, and this was one of the reasons why. We have never purged our database (except for the change log), and we have been on Epicor for 27 years. Fortunately we’ve had no bad years of data entry and no reason to redeploy or rethink how we have set up the system and are using it. Even though some of our tables now have a lot of records, we don’t see any performance issues because of table or database size. I would guess 90 percent or more of Epicor users are not pushing SQL size limits.
LaborDtl table record count this morning: 2,316,740.
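If anyone wants to check their own counts, here’s a quick sketch against SQL Server’s partition stats (on-prem only) that returns approximate per-table row counts without scanning anything:

```sql
-- Approximate row counts per table, pulled from partition metadata
-- rather than a full table scan. Works on any on-prem Epicor DB.
SELECT  s.name AS schema_name,
        t.name AS table_name,
        SUM(p.row_count) AS approx_rows
FROM    sys.dm_db_partition_stats AS p
JOIN    sys.tables  AS t ON t.object_id = p.object_id
JOIN    sys.schemas AS s ON s.schema_id = t.schema_id
WHERE   p.index_id IN (0, 1)   -- heap or clustered index only, to avoid double-counting
GROUP BY s.name, t.name
ORDER BY approx_rows DESC;
```

The counts from the DMV are approximate, but close enough for a sizing discussion like this one.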


This is what we did when we reimplemented about 6 years ago, and it works well for historical access and reports. But… it is also now a hurdle we would need to overcome if we ever wanted to move to the Epicor cloud, as we have 2 separate Epicor databases that many users need access to daily.

Do you have any control over backups when cloud-hosted by Epicor?

We’re on-prem (234 GB currently) and started purging the large tables 2 years ago. We keep 2 years of change logs; 3 years for PartTran, TranGLC, GL History, and Journal Details; the rest are not big enough to worry about, so they’re not purged (yet).

Pre-purge, we append the records that are getting purged to a separate database, so a full historical look-back is available.
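As a rough sketch of that pre-purge copy, assuming an archive database named ArchiveDB on the same server and using PartTran/TranDate as a stand-in for whichever table is being purged (the purge utility itself handles the delete side):

```sql
-- Sketch: copy rows older than the retention cutoff into an archive DB
-- before the purge removes them. ArchiveDB and the 3-year cutoff are
-- examples; the archive table must match the live table's columns.
DECLARE @cutoff date = DATEADD(YEAR, -3, CAST(GETDATE() AS date));

INSERT INTO ArchiveDB.Erp.PartTran
SELECT  pt.*
FROM    Erp.PartTran AS pt
WHERE   pt.TranDate < @cutoff
  AND   NOT EXISTS (SELECT 1                      -- skip rows archived on a previous run
                    FROM  ArchiveDB.Erp.PartTran AS a
                    WHERE a.SysRowID = pt.SysRowID);
```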

We also had finance and sales wanting history going back forever. For day-to-day use, we found a 3-year cutoff covered 99% of the scenarios. The rest we can get by combining our archive DB (and also the 2 old ERP databases - go AS400!) in SharePoint…
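For those day-to-day look-ups, one option is a view that unions live and archived rows so reports don’t care where a row lives. A minimal sketch, assuming the archive sits on the same server with matching schemas (names are illustrative):

```sql
-- Hypothetical names: ArchiveDB holds the purged rows, Erp.PartTran
-- holds the live ones; the purge guarantees the two sets don't overlap.
CREATE VIEW dbo.PartTran_AllHistory
AS
SELECT pt.* FROM Erp.PartTran AS pt
UNION ALL
SELECT a.*  FROM ArchiveDB.Erp.PartTran AS a;
```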

Keep in mind that there’s a bug in 2023.2.9 regarding purges:

PRB0280555 - When running the database purge and summarize for orders with a cutoff date of at least 18 months, you receive an error message that states, Multiple statements not allowed.

Also, if you purge your GL, Epicor will also purge the GL records on open (old) invoices. We have up to 6-year terms on our buildings, so you’ll need to get a data fix to add the GLs back, one at a time.

Depending on the service tier you get, the backups are taken fairly regularly (every 30 minutes or so), and RTO/RPO for DR is 8/2 as well. The guidelines I’m following are from the IRS as to which records need to be kept and for how long. We would be housing the historical database on-prem for local users, and we have a solid backup solution in place. The cost for the database download is around $1,000 per download, so I’m figuring on downloading the oldest 3 years from the production database and importing that into the historical database. The export is provided as a SQL .bak file.
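For what it’s worth, standing up the historical database from the provided .bak should be a one-off RESTORE on the on-prem server. A sketch with made-up paths and logical file names (check the real ones with RESTORE FILELISTONLY first):

```sql
-- List the logical file names inside the backup before restoring:
RESTORE FILELISTONLY FROM DISK = N'D:\Downloads\EpicorProd.bak';

-- Restore the download as a separate historical database.
-- Paths and logical names below are examples only.
RESTORE DATABASE EpicorHistory
FROM DISK = N'D:\Downloads\EpicorProd.bak'
WITH MOVE N'EpicorProd'     TO N'E:\Data\EpicorHistory.mdf',
     MOVE N'EpicorProd_log' TO N'F:\Logs\EpicorHistory_log.ldf',
     STATS = 10;
```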

How often, if at all, have you guys needed to buy more SQL cores, DB storage, or app cores to manage the scaling? We’ve only been on Epicor since 2014, so you guys would have run into more of our potential issues by now.

One thing to keep in mind, at least to my understanding unless something has changed: as a cloud customer, you will not be able to access external data sources. So you’ll have your data backed up on an on-prem server, but you may not be able to access/query against it (within Epicor) as a cloud customer.

We have not been asked to pay for more SQL cores, DB storage, or anything else as an extra charge in the cloud. Scaling was one of the reasons we went to the cloud. I felt the acceleration of change (updates, DB growth, hey let’s add MRP) had exceeded my company’s hardware replacement cycle: guess how much hardware to buy on a 3-year lease, hope to replace and reanalyze in 3 years, then find out I have to keep it 8 years and make it work. Cloud works much better for that.

Duly noted. I haven’t asked how the users plan to interrogate the data. My only assumption right now is either just through Excel (I know) or some type of BI tool. That will open up a new path of technology for me to look at and work with the business on. I did make clear to the users that there won’t be any type of communication between the Epicor Cloud system and this on-prem database. I’ll be having many more discussions with them to try and narrow down a plan of action for what they intend to do with it, though.

That’s fantastic to know. I guess my only other issue now is with the usage of the data and not so much the performance of the database server. From discussions with the user base, there still may be a need to connect directly to this historical database. I’m still trying to find out what all they’re wanting to do, so that I can demonstrate that the provided Epicor and Azure tools let them do whatever they need without building this external historical database.

That’s not a huge number in database terms! It’s big enough to get into noticeable performance-issue territory without a maintenance plan, but that’s easy. Storage volume is a large multiple of actual data size due to the denormalized data structure, so storage should impose limits first, as long as maintenance happens.
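By “maintenance plan” I mean the usual index and statistics upkeep. A quick sketch for spotting what needs attention (many shops just point Ola Hallengren’s maintenance scripts at the database instead):

```sql
-- Find the most fragmented indexes in the current database so a
-- maintenance job knows what to rebuild or reorganize.
SELECT  OBJECT_NAME(ips.object_id)        AS table_name,
        i.name                            AS index_name,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id  = ips.index_id
WHERE   ips.avg_fragmentation_in_percent > 30   -- common rebuild threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;
```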

I’m also curious about this, as someone with active use cases for backups. I’ve seen mentions of requesting and receiving database backups as a cloud user, but haven’t had any luck inquiring with support. I don’t expect we could dictate backup frequency and strategy, but I’d like to know what they are.

Just adding that it’s not impossible; you just can’t do it through external BAQs, etc.

There are many alternative means to get access to external data in the cloud; we just have to jump through a few more hoops.
