Looks like we may have a memory leak in our test app server, as the w3wp process's memory usage climbs rapidly (I found it sitting at 70 GB over the weekend).
I’ve stopped the app pool for longer than the shutdown time to ensure the processes are terminated, and I fully rebooted the server, but the moment that test app server comes back up, the w3wp process immediately starts climbing again.
How can I identify what’s causing the leak and kill it?
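One way to narrow down when the growth happens is to sample the w3wp working set over time and line the timestamps up with whatever starts on the server. Here is a minimal sketch, assuming Python and the psutil package are available on the box (PerfMon counters on the process would do the same job):

```python
# Sample the working set of every w3wp.exe once a minute and print it with a
# timestamp, so memory growth can be correlated with events on the server.
# Assumes: pip install psutil, run directly on the app server.
import time
from datetime import datetime

import psutil

def sample_w3wp():
    """Return (pid, working-set bytes) for each running w3wp.exe process."""
    results = []
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] and proc.info["name"].lower() == "w3wp.exe":
            results.append((proc.pid, proc.info["memory_info"].rss))
    return results

if __name__ == "__main__":
    while True:
        stamp = datetime.now().isoformat(timespec="seconds")
        for pid, rss in sample_w3wp():
            print(f"{stamp}  pid={pid}  working_set={rss / 1024**3:.2f} GB")
        time.sleep(60)
```

Redirecting the output to a file gives a simple growth log you can compare against scheduled job times.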
If it’s coming right up when you restart the app server, I’d guess it’s a scheduled process. Do you run MRP? Any scheduled BAQs or BAQ Reports?
Anything in the Logs? Any big dump files in the EpicorData folder?
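If you’d rather sweep the EpicorData share for anything oversized instead of eyeballing it, a short script can do it. This is only a sketch, and the root path is an assumption; point it at wherever your EpicorData folder actually lives:

```python
# Walk the EpicorData folder and report any files over a size threshold,
# so large dump files stand out. ROOT is a placeholder path.
import os

ROOT = r"C:\EpicorData"   # assumption: replace with your actual EpicorData path
MIN_SIZE_GB = 1

found = []
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            size = os.path.getsize(path)
        except OSError:
            continue  # skip files we can't stat (in use, permissions, etc.)
        if size >= MIN_SIZE_GB * 1024**3:
            found.append((size, path))

for size, path in sorted(found, reverse=True):
    print(f"{size / 1024**3:6.1f} GB  {path}")
```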
The only thing I can think of is to go to your Admin Console and, under Server Settings on the Instance, enable all the trace logging so we get a lot more info in the logs.
Perhaps the trigger logging caused it to go crazy. Maybe when you enabled some other type of logging, the triggers stopped logging? I know some of those logging types override each other, if I recall correctly.
Hmm, interesting. See if you can set it back and replicate the issue.
Sometimes the computer has a “Mind”… it “KNOWS” you are looking, and it changes its behavior without you doing anything…
I KNOW there have been times in the past when I had a suspected bug/misbehavior, so I looked at the code (didn’t recompile), and the problem suddenly went away.
It’s back! Turns out I had stopped the Task Agent and forgotten about it.
Running just the app server is fine, but once I start up the Task Agent, the worker process’s memory starts growing.
I enabled “Display All Tasks” and there’s nothing actively running or scheduled.
Is there something hidden?
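If the UI isn’t showing anything, it may be worth looking at the task data in the database directly. Below is only a rough sketch, assuming pyodbc is installed and that the task history lives in a table named Ice.SysTask; the connection details, table name, and key column are all assumptions, so verify them against your own Epicor schema before trusting the output:

```python
# Pull the most recent rows from the assumed Ice.SysTask table so any task the
# Task Agent is still chewing on shows up, even if the UI filters it out.
import pyodbc

# Placeholder connection details: substitute your own SQL Server and database.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YourSqlServer;DATABASE=YourEpicorDB;Trusted_Connection=yes;"
)

with pyodbc.connect(CONN_STR) as conn:
    cursor = conn.cursor()
    # Table and key column names are assumptions; adjust to match your version.
    cursor.execute("SELECT TOP 50 * FROM Ice.SysTask ORDER BY SysTaskNum DESC")
    columns = [col[0] for col in cursor.description]
    for row in cursor.fetchall():
        print(dict(zip(columns, row)))
```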