We’re new to E10 (live for about a month), and we recently had a problem printing shipping labels after go-live. While troubleshooting with Quick Ship, they said the problem was that we were out of RAM, something along the lines of IIS not being able to spin up a new worker process because of a lack of available memory. We’ve noticed RAM consumption creeping up, and we found that restarting the application pools frees up a tremendous amount of RAM (sorry, I don’t have specific numbers, but I believe it was tens of gigabytes). We’re thinking about creating a script that runs weekly to restart all application pools and free up memory. Do others see this same issue, and if so, how do you handle it?
Also, we are getting ready to roll out a website that interfaces with Epicor to create sales orders, so keeping downtime to a minimum is really important to us.
Do you have your app and DB servers together (on a single server)? If so, did you limit the memory SQL Server can use? If you did not, SQL Server can take all of the memory on the server and impact the memory available to the IIS worker processes.
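If they do share a server, a minimal sketch of capping SQL Server’s memory from a command prompt is below. The local instance, Windows authentication, and the 49152 MB cap are assumptions; pick a value that still leaves headroom for IIS and the OS.

```
REM Hypothetical example: cap SQL Server memory so the IIS worker processes keep headroom.
REM The instance (-S localhost), Windows auth (-E), and the 49152 MB value are assumptions.
sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 49152; RECONFIGURE;"
```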
You can automate the stop/start using the following commands in a batch file.
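A minimal sketch, assuming the standard appcmd location and an app pool named “EpicorAppPool” (substitute the pool name shown in your IIS Manager):

```
@echo off
REM Sketch only: stop and start the Epicor application pool with appcmd.
REM "EpicorAppPool" is a placeholder name.
set APPCMD=%windir%\system32\inetsrv\appcmd.exe
%APPCMD% stop apppool /apppool.name:"EpicorAppPool"
%APPCMD% start apppool /apppool.name:"EpicorAppPool"
```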
Is this normal? Do others restart app pools or reboot periodically? Or is it common practice, as @carlosqt indicates, to limit the memory SQL Server can use?
If it helps: in E9 we did weekly reboots, and if we didn’t, performance suffered big time. I got tech support to send me in writing that this was a best practice, because I didn’t believe them. I come from the *nix world, where uptime is an important metric and periodically rebooting to fix problems is a sign you don’t understand the problem (so you reboot as a workaround/band-aid).
We recycle the app pool every morning at 5 AM. This is set up in IIS under the app pool’s Recycling… settings (a scripted equivalent is sketched at the end of this post). We are on 10.2.400 now, but I started this back in the 10.1 days when we had a memory leak in the system. For us, being offline for 60 seconds at 5 AM is fine.
Be careful not to do this while a scheduled process is running in Epicor (e.g., Generate PO Suggestions).
We only reboot the server when a Windows update requires it or when something has gone drastically wrong. I would guess a reboot happens every three months or so…
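For anyone who prefers scripting it, the same scheduled recycle can be set with appcmd instead of clicking through IIS Manager. A sketch, assuming an app pool named “EpicorAppPool” and the 5 AM time mentioned above:

```
REM Sketch: add a fixed 5:00 AM recycle and clear the default 1740-minute interval.
REM "EpicorAppPool" is a placeholder; run from an elevated command prompt.
%windir%\system32\inetsrv\appcmd set apppool "EpicorAppPool" /+recycling.periodicRestart.schedule.[value='05:00:00']
%windir%\system32\inetsrv\appcmd set apppool "EpicorAppPool" /recycling.periodicRestart.time:00:00:00
```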
Chiming in since this is the latest of the IIS conversations. Does anyone know whether Epicor should run with a Start Mode of “OnDemand” or “AlwaysRunning”? Will that start the app pool again if it fails for some reason?
Also, am I wrong, or does Process Model > Idle Time-Out recycle the pool automatically? I inherited this system, so I did not set it up from scratch; it is set to 1740 minutes (29 hours), supposedly so it does not recycle at the same time every day. The problem with this is that it sometimes recycles during busy times. After doing some research, like @bmanners, I set the recycle time to a specific time of day when I know we are not processing as much. We are a 21-hours-a-day, 7-days-a-week shop and there is always overtime, so I typically do not have a good daily window for this.
Someone let me know if I’m off base. Even better, if someone could post a screenshot of their IIS settings, that would be great. At least once a month I get a call overnight or very early in the morning because users are getting error messages when they try to process transactions or clock in; it’s typically that the app pool stopped and needs to be started manually. I am trying to automate the restart process, if possible.
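For what it’s worth: a Start Mode of AlwaysRunning starts the worker process when IIS starts instead of waiting for the first request, but it does not by itself restart a pool that IIS has stopped (for example, via rapid-fail protection). Also, the 1740-minute default is the Regular Time Interval under Recycling, not the Process Model > Idle Time-Out, which defaults to 20 minutes. A rough batch sketch of those settings plus a check-and-restart step, with “EpicorAppPool” as a placeholder name:

```
@echo off
REM Sketch only: keep the pool warm and start it again if it has stopped.
set APPCMD=%windir%\system32\inetsrv\appcmd.exe

REM Start the worker process with IIS instead of on first request, and disable idle shutdown.
%APPCMD% set apppool "EpicorAppPool" /startMode:AlwaysRunning
%APPCMD% set apppool "EpicorAppPool" /processModel.idleTimeout:00:00:00

REM If the pool has been stopped (e.g., by rapid-fail protection), start it again.
for /f %%S in ('%APPCMD% list apppool "EpicorAppPool" /text:state') do set POOLSTATE=%%S
if /i "%POOLSTATE%"=="Stopped" %APPCMD% start apppool /apppool.name:"EpicorAppPool"
```

Running something like this as a scheduled task every few minutes would cover the overnight failures without a manual start.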
You shouldn’t have to schedule IIS restarts to “make it work”; something is causing this to happen. Have you looked into performing an RCA using the memory leak tester in the Performance & Diagnostics tool?
I’m not sure I have to either. Again, I inherited the setup and am not sure it needs to recycle at all, but the timeout was set, and I noticed in the Event Viewer that it was recycling on its own, often at times when we were busy, so I figured recycling at a less busy time would be best. If you are not recycling daily, what is your Idle Timeout set to?
Epicor 10.1.400 did, and does, have memory leaks. Your best bet is to upgrade when you can. @Bart_Elia can speak to that better; if I recall, they had to get Microsoft involved to fix an issue in Entity Framework 4.
If I recall, the issue was described as follows:
Entity Framework (EF) contains hard-coded tuning levels that control the size of a compiled query cache. Compiled queries are a standard coding approach to speed up database queries, with an up-front cost of preparing the query. The queries are cached so that cost is paid only once.
Any time EF exceeds the 800-count threshold, a new process starts in EF to count usage of all the queries over another hard-coded value, an interval of one minute. Anything not touched in the last minute gets ‘aged’. After five increments, the query is removed from the cache.
With the hard-coded 800 setting, ERP was penalized both for repeatedly recompiling these queries and for constantly scanning the cache to age and remove queries.
Thanks everyone, I really appreciate all the great people on this forum.
In the short term, we’re going to start with a script that resets the IIS app pools weekly to prevent future out-of-memory errors. Longer term, we’ll look into the memory leak tester, since it sounds like standard weekly reboots are no longer considered a best practice in E10 (happy to hear it), which means something must be leaking memory.
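A minimal sketch of that short-term script is below: a batch file that recycles every application pool, registered as a weekly scheduled task. The path, day, and time are assumptions. A recycle is also gentler than a full stop/start or iisreset, because with the default overlapped recycle IIS starts the new worker process before shutting the old one down.

```
@echo off
REM RecycleAllPools.bat (sketch): recycle every IIS application pool.
for /f "tokens=*" %%P in ('%windir%\system32\inetsrv\appcmd list apppool /text:name') do (
    %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"%%P"
)

REM One-time registration as a weekly task (Sunday 2:00 AM is an assumed quiet window):
REM schtasks /create /tn "Recycle Epicor App Pools" /tr "C:\Scripts\RecycleAllPools.bat" /sc weekly /d SUN /st 02:00 /ru SYSTEM
```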