Massive growth of tempdb after upgrade to 2025.2

Hello.

We upgraded from 2022 to 2025.2.4 over the weekend.

The SQL Server instance is the same.

Yesterday, we found the drive that stores the tempdb was full, and the tempdb was over 1.2 TB in total.

After a restart of the SQL Server service, the space was reclaimed.
I understand how to manage this going forward, but my question is, has anyone encountered this?
Why did the tempdb grow so much? Our Kinetic database is 500 GB.

I will likely cap the max size, but I’d like to understand why, just in case there is an underlying issue.
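
For anyone looking at the same thing, the file settings that allowed the growth can be checked with a standard query like this (nothing Kinetic-specific; a NULL max_size_mb means the file has unrestricted growth):

```sql
-- Current tempdb file sizes, autogrowth, and max size (NULL max = unlimited)
SELECT  name,
        type_desc,
        size * 8 / 1024 AS size_mb,
        CASE max_size WHEN -1 THEN NULL ELSE max_size * 8 / 1024 END AS max_size_mb,
        CASE WHEN is_percent_growth = 1 THEN CONCAT(growth, '%')
             ELSE CONCAT(growth * 8 / 1024, ' MB') END AS growth_setting
FROM    tempdb.sys.database_files;
```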

2 Likes

I just checked with our Kinetic Cloud team to find out if they experienced this at all in the cloud, and they have NOT. So I am wondering if this is a local issue on your server with the way you have SQL set up. Otherwise, our cloud team would have seen it with all the instances they manage.

1 Like

Thank you, Tim!
I would assume the cloud environment is set up more “tightly” to avoid this happening.
I will set a max size to keep this from happening again.
Hopefully, there is no underlying issue.

1 Like

Interesting, I have not noticed significant tempdb growth during an upgrade before. Did you notice if the growth happened during the actual DB upgrade SQL scripts job? Or perhaps the Conversion Workbench steps afterwards?

One thing I always recommend is to set your db’s recovery model to SIMPLE during the upgrade process (it runs faster in my experience), but I’m not sure if that helps tempdb at all.
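
For clarity, that’s just something like this, with your own database name swapped in. Going back to FULL afterwards breaks the log chain, so take a full backup once you flip it back:

```sql
-- Before the upgrade: log truncates on checkpoint, so the log file doesn't balloon
ALTER DATABASE [YourKineticDB] SET RECOVERY SIMPLE WITH NO_WAIT;

-- After the upgrade: restore point-in-time recovery, then take a full backup
ALTER DATABASE [YourKineticDB] SET RECOVERY FULL WITH NO_WAIT;
```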

2 Likes

I haven’t seen this either during upgrades.
Like @TomAlexander mentioned, I wonder if it’s something that went crazy in the Conversion Workbench or in a database migration step.

2 Likes

Additional information.
I don’t know if it was directly related to the database upgrade, but we went live on Monday, the full drive was discovered on Tuesday, and it happened again today.

1 Like

Try to find out what is using tempdb. Maybe your users are running something huge.
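
Something like this will show which sessions are allocating while it’s happening (these DMVs only track tempdb pages; internal objects are mostly sort/hash spills, user objects are temp tables and table variables):

```sql
-- Sessions with active requests that are allocating tempdb space right now
SELECT  ts.session_id,
        ts.internal_objects_alloc_page_count * 8 / 1024 AS internal_mb,
        ts.user_objects_alloc_page_count * 8 / 1024 AS user_mb,
        t.text AS running_sql
FROM    sys.dm_db_task_space_usage AS ts
JOIN    sys.dm_exec_requests AS r ON r.session_id = ts.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   ts.internal_objects_alloc_page_count > 0
     OR ts.user_objects_alloc_page_count > 0
ORDER BY ts.internal_objects_alloc_page_count DESC;
```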

I would try to find out what was consuming it, or at least monitor it and see if it’s growing out of control. It may have been growing over time and you just noticed it because of the upgrade?
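
A simple way to monitor it is to snapshot the breakdown every so often; something like this tells you whether it’s user objects, internal objects (sort/hash spills), or the version store that’s growing:

```sql
-- Breakdown of tempdb space by consumer type, in MB
SELECT  SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
        SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
        SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
        SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM    tempdb.sys.dm_db_file_space_usage;
```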

1 Like

Update.
I capped each tempdb file to 60 GB, with a min of 1 GB, for a total of 480 GB.
While I cannot prove it is a single source, MRP is clearly a culprit.
Before the file cap, the drive was full after an MRP run that took 2.5 hours; total tempdb size was 1.1 TB.
After restarting the SQL Server service and putting in the 60 GB cap, the next MRP run took 5.5 hours, and all tempdb files were at 60 GB.
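
For anyone following along, the cap is just one ALTER per data file, something like the below. Your logical file names may differ from the defaults shown here (check tempdb.sys.database_files):

```sql
-- One statement per tempdb data file; 'tempdev'/'temp2' are the default logical names.
-- Setting SIZE below the current file size fails, so set the initial size right after
-- a restart (or shrink the file first with DBCC SHRINKFILE).
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', SIZE = 1GB, MAXSIZE = 60GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp2',   SIZE = 1GB, MAXSIZE = 60GB);
-- ...and so on for the remaining files.
```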

What do I do with that?
Tim mentioned that it wasn’t happening on the cloud side, but I wonder… how big are the customers in the cloud? Are they all small businesses? Are there some large companies who don’t have BOMs that go down 40 levels? Does the cloud team have routines to restart SQL Server and IIS during late hours?

We usually have the luxury of calling support with issues that we can repeat, and the response is, “we haven’t had any report of this issue”.

Have we uncovered a bug in MRP?

1 Like

I would get a ticket opened with Epicor ASAP.

We have a bunch of “huge” customers that typically expose this type of problem right away, which is why I reached out to the cloud team. They saw nothing like this.

It could be bad indexes, or a bad plan. If the plan it’s choosing is bad (because of parameter sniffing, or an index that’s out of whack), it can spill over into tempdb.
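
If spills are the suspect, the plan cache has tracked them since SQL Server 2016 SP2; something like this will surface the worst cached statements (the stats reset when the cache clears, so run it after an MRP run):

```sql
-- Top cached statements by tempdb spill volume (total_spills is in 8 KB pages)
SELECT TOP (10)
        qs.total_spills * 8 / 1024 AS total_spill_mb,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset END
              - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE   qs.total_spills > 0
ORDER BY qs.total_spills DESC;
```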

3 Likes

Sorry for the delay in responding.
We are working with Epicor support. They are reviewing the logs from the performance tool. I will post if there is any resolution.

2 Likes