Is anybody running Kinetic on their own Azure servers or another private cloud who can speak to the sizing requirements for adequate performance for ~150 office users + ~50 dc users, geographically distributed? I would guess this requires an app server, task server, and db server at a minimum, but it's not clear what the necessary server specs would be.
Yes I have the hardware sizing guide. I have always considered that more like minimum installation specs than anything else. And it doesn’t really translate to the machines you can pick from in the various cloud services either.
I would rather hear from people who are actually running what size of servers is actually working for your deployment.
We are currently running in Epicor’s cloud in the central us dc and that works fine performance wise for the most part, but I have no idea what’s actually running behind the scenes given the move to AKS.
We did at my last gig, but that was before Azure Kubernetes, of course.
The nice thing is that, unlike on-prem, sizing isn't as big an issue: you can always scale up or scale down. Your timing is perfect. Microsoft Mechanics (a YouTube podcast) just did an episode about the different types of servers available in Azure and why you would choose one over another.
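To give a sense of how lightweight scaling up or down is, here's a minimal Azure CLI sketch. The resource group and VM names are hypothetical; a resize may require a stop/deallocate if the target size isn't available on the VM's current hardware cluster.

```shell
# See which sizes this VM can be resized to (names are hypothetical):
az vm list-vm-resize-options \
  --resource-group kinetic-rg --name kinetic-app01 --output table

# Scale the app server up for month-end, then back down afterward:
az vm resize --resource-group kinetic-rg --name kinetic-app01 \
  --size Standard_E8s_v5
```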
@josecgomez will be your best friend here. They have a similar reach as you do.
BTW, when first looking at Azure, you will get sticker shock. Remember, this is the difference between generating your own electricity (on-prem) vs buying from the grid (cloud). When you generate your own, you can leave the lights on all day for little incremental cost. Not so on the grid. You will want to manage what is running and when.
Savings plans and reservations are the biggest help in reducing costs.
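Beyond reservations, the "manage what is running and when" part can be scripted. A sketch with the Azure CLI, assuming hypothetical resource group and VM names; note that a deallocated VM stops accruing compute charges, though its disks are still billed.

```shell
# Auto-shutdown a Dev/Test box every evening at 7pm:
az vm auto-shutdown --resource-group kinetic-rg \
  --name kinetic-dev01 --time 1900

# Or deallocate/start manually around working hours:
az vm deallocate --resource-group kinetic-rg --name kinetic-dev01
az vm start --resource-group kinetic-rg --name kinetic-dev01
```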
We’re a little smaller but in similar ballpark (~80 concurrent office users, ~50 concurrent EKW mobile app users).
The DB SQL Server box is 32 vCPUs and 256 GB RAM. The “interactive” Kinetic app server lives on the SQL box for best performance. That handles desktop Epicor, EKW, and SSRS printing, i.e. the user-interactive workflows that need to be as fast as possible.
The secondary “task” app server has 8 vCPUs and 32 GB RAM. We route DMT, Service Connect, and other integration workflows through this box to minimize what runs through the interactive app server. These workflows don't need the same responsiveness, since a user isn't sitting there waiting on them.
You will hear fierce debate that the interactive app server should be on its own box, not on SQL, but the people who have actually tried both will disagree. Perhaps your size is where the equation starts to change and you start looking at load balancers, etc., but at our size the SQL monolith approach is vastly more performant. In Azure it's harder to get the boxes as “close” to each other as it was back in the day, when we could put them on the same VMware blade in the datacenter. The network latency between separate boxes in Azure outweighs the benefit of giving SQL more room to breathe.
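If you do end up splitting SQL and the app server onto separate Azure VMs, one way to claw back some of that latency is a proximity placement group, which asks Azure to place the VMs physically close together. A sketch only, with hypothetical names, sizes, and region:

```shell
# Create the proximity placement group:
az ppg create --resource-group kinetic-rg --name kinetic-ppg \
  --location eastus2 --type Standard

# Create both VMs inside it so Azure co-locates them:
az vm create --resource-group kinetic-rg --name kinetic-sql01 \
  --size Standard_E32s_v5 --ppg kinetic-ppg --image Win2022Datacenter

az vm create --resource-group kinetic-rg --name kinetic-app01 \
  --size Standard_E8s_v5 --ppg kinetic-ppg --image Win2022Datacenter
```

This narrows the gap but doesn't eliminate it; same-box (or same-blade, in the old VMware days) will still beat anything over the network.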
I believe we are in a Virginia Azure region. About 65% of users are in NY/NJ/PA area, 25% in CA, and 10% in FL. Performance is pretty good overall.
For the longest time, we ran with a DB server and an App server, just two boxes. In the last upgrade, they convinced us to set up a separate Task Agent server, which we did for our Prod environment, but we left our Dev and Test environments with the two-server approach.
I haven’t noticed ANY gains from having more than the two boxes. But it has been less than a year since we did it. And we have a few more users than you do.