E10 Bindings Performance

Hello Everyone,
With E10.1.600.X we now have a wide range of bindings available for deploying / accessing an Epicor App Server.
We have the following endpoints:
Net.Tcp - UsernameSslChannel, Windows, UsernameWindowsChannel
Http - HttpBinaryUsernameSslChannel, HttpsOffloadBinaryUserNameChannel
Https - HttpsBinaryUserNameChannel, HttpsBinaryWindowsChannel
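For reference, here is a rough C# sketch of what the two main families map to in stock WCF terms. It is illustrative only - the real bindings are defined in the app server's web.config, and the class and method names here are made up:

```csharp
// Minimal sketch (NOT Epicor's config code) contrasting the net.tcp and
// https binding families in plain WCF terms.
using System.ServiceModel;

static class BindingSketch
{
    // net.tcp carrying username credentials over SSL - roughly what the
    // UsernameSslChannel-style endpoints imply.
    public static NetTcpBinding NetTcpUsernameSsl()
    {
        var b = new NetTcpBinding(SecurityMode.TransportWithMessageCredential);
        b.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
        return b;
    }

    // https carrying username credentials - roughly the shape of the
    // HttpsBinaryUserNameChannel-style endpoints.
    public static BasicHttpsBinding HttpsUsername()
    {
        var b = new BasicHttpsBinding(BasicHttpsSecurityMode.TransportWithMessageCredential);
        b.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;
        return b;
    }
}
```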

Has anyone bothered to collect data on performance differences between all the bindings? The https options seem really attractive because of their ability to allow deployment over the WAN, as well as providing what I assume is a more “sturdy” connection mechanism (bypassing a lot of the WCF Windows / domain / security dependencies/nightmares).

Just curious if anyone here or at Epicor (@Bart_Elia, @Edge, @Olga) has any data on performance metrics for each, or just anecdotal evidence.

Thanks!


I found the Architecture Guide, which gives you some detail about each option and covers some performance…
https://epicweb.epicor.com/doc/Docs/Epicor10_ArchitectureGuide_101500.pdf

It appears that Net.Tcp has the BEST latency and very good wire efficiency, while HTTPS/Binary has very good latency and very good wire efficiency and is the easiest to configure. So you pay a small price on latency, but you get easy setup and overall great performance. With HttpsBinary you also get rid of that pesky Windows authentication / domain / WCF security stuff, which is a nightmare across multiple domains / the WAN.
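If anyone wants to gather their own numbers rather than rely on the guide's ratings, a crude harness like this would do - the proxies in the usage comment are hypothetical placeholders for whatever BO call you test:

```csharp
// Crude timing harness: point the same service call at a net.tcp endpoint
// and an https endpoint and compare the averages.
using System;
using System.Diagnostics;

static class BindingBench
{
    public static void Time(string label, Action call, int iterations = 100)
    {
        call(); // warm-up: the first call pays channel-open and JIT costs
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            call();
        sw.Stop();
        Console.WriteLine("{0}: {1:F1} ms/call",
            label, sw.Elapsed.TotalMilliseconds / iterations);
    }
}

// Usage (hypothetical proxies):
//   BindingBench.Time("net.tcp", () => tcpProxy.GetByID("C001"));
//   BindingBench.Time("https",   () => httpsProxy.GetByID("C001"));
```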


I would like to see perf stats as well, if available… =) It makes a difference when you are ultimately serving 1000+ concurrent users.

First - I put this same argument into an internal rant and it got translated into the Bindings section of the help. Please do a search in the help on Bindings and have a good overview read.

Now - sorry to throw cold water on most of this conversation… If you are looking to solve performance needs by choosing net.tcp versus http or https, you are probably looking at the wrong area :slight_smile:. The SQL I/O and the RAM to cache your DB are probably going to have a much larger impact on your performance. RAM is love. Solid-state I/O. All those things Raj is lecturing about at Insights. :slight_smile:

So, some background over my first cup of coffee - hopefully this is coherent…

Time machine back to ye olden days of the .NET 3.5 alpha, when some crazy person volunteered to the VPs that *converting ABL to C# can’t be that bad…* I was practically living in Redmond trying to go through every best practice MS had in the .NET stack coming out in 3.5. As part of that we were looking at http and net.tcp, and at the time their http stack was… suboptimal… compared to net.tcp. Noticeably. They were learning from us during this cooperative development, so a lot of findings from this and other areas got pushed into their v4, which levelled out that performance difference. (Interesting side note - like the auto-compiled EF query perf in vanilla EF? You’re welcome.)
When MS went to v4, a lot of the early assumptions were no longer valid. It was too late to flip to leading with http when E10 released, so that is why you see net.tcp everywhere as the default.

So the whole net.tcp / http argument is basically a dead issue post-.NET 3.5. Our testing at the MS scale labs (twice) has put the difference within the noise level of the timings.

Now … things that WILL make a difference when choosing bindings…
The format.
Those bindings where you see us mentioning ‘Binary’ in the name:
<binding name="HttpBinaryUsernameSslChannel">
  <binaryMessageEncoding compressionFormat="Deflate" />

That means the payload is not something friendly to integrate against. We pack the tableset rows and columns into a highly optimized byte[], which is only possible when ERP is controlling both sides of the wire. Fields are packed in by index - no name lookups to find a field in this firehose of data. This is as fast a serialization approach as possible, at the huge cost of no interop: you need an ERP contract assembly on the other side to deserialize.
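To make that concrete, here is a toy sketch - emphatically not Epicor's actual wire format, and the columns are invented - of why index-based packing is fast: both ends share the contract assembly, so they agree on column order and no field names ever cross the wire:

```csharp
// Toy illustration of index-packed rows: field order is the contract,
// so the payload carries values only, never names.
using System.IO;

static class RowPacker
{
    // Both sides know: index 0 = CustNum, 1 = Name, 2 = CreditLimit.
    public static byte[] Pack(int custNum, string name, decimal creditLimit)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(custNum);     // index 0 - no "CustNum" string on the wire
            w.Write(name);        // index 1
            w.Write(creditLimit); // index 2
            w.Flush();
            return ms.ToArray();
        }
    }

    public static void Unpack(byte[] payload,
        out int custNum, out string name, out decimal creditLimit)
    {
        using (var r = new BinaryReader(new MemoryStream(payload)))
        {
            // Read back in the same agreed order - no name lookups.
            custNum = r.ReadInt32();
            name = r.ReadString();
            creditLimit = r.ReadDecimal();
        }
    }
}
```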

Once you start looking at compression, you will note there are two binding flavors:
<binding name="TcpCompressedUsernameSslChannel">
  <CompressionEncoder />
  <binaryMessageEncoding />

and
<binding name="HttpBinaryUsernameSslChannel">
  <binaryMessageEncoding compressionFormat="Deflate" />

Why two ways of doing almost the same thing on nettcp?

Well, for http, we as an industry have had compression forever. For nettcp, MS never had it as part of the spec. When we asked about this at the WCF design reviews we did with MS, they said it was not in the spec (back before open source and dev in the open).
Our response was a bit of a slap at them - you own the stupid spec, add it!
After a few VPs had a deer-in-the-headlights moment, they got it and added it in … 4.5 or 4.0 - not sure which. In the meantime we had rolled our own custom ‘CompressionEncoder’ to serialize the nettcp data, and someone was clearly paying attention to what we did. MS took some research minds and improved on our performance, so you see us using the vanilla ‘binaryMessageEncoding’ as opposed to our custom CompressionEncoder, which will probably be eased into retirement at some point. The vanilla one is the same perf but better on memory. Use it if you are doing nettcp.
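In WCF terms (.NET 4.5+), "the vanilla one" amounts to something like this - a sketch of a custom binding pairing the stock binary encoder, with compression turned on, with a tcp transport. It mirrors the config fragments above rather than being our actual server code:

```csharp
// Stock binary encoding with Deflate compression over tcp - the
// CompressionFormat property is the piece MS added in .NET 4.5.
using System.ServiceModel.Channels;

static class CompressedTcp
{
    public static CustomBinding Build()
    {
        var encoding = new BinaryMessageEncodingBindingElement
        {
            CompressionFormat = CompressionFormat.Deflate
        };
        var transport = new TcpTransportBindingElement();
        // Encoding element must precede the transport element.
        return new CustomBinding(encoding, transport);
    }
}
```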

Lastly, you can’t get me talking serialization without me bringing up the gorilla in the room - REST. All the serialization we have chatted about is for the full E10 client talking to the E10 server. For integrations, you either need to worry about interop or use the client dlls and deal with the deployment issues. For non-dotNet clients, you have historically been left with SOAP. Very powerful. A great leap forward at the time. Wildly dated today.
REST, and OData specifically, gives us the best serialization performance possible, as it lets you choose what data to send across the tiniest pipe in the entire enterprise - the client-to-server one. Need 6 fields instead of 400 sent to the client? Choose to grab just those 6.
Whether it’s a cell network to a device, a cable internet link, a WAN, a 100M ethernet cable on the LAN, etc. - those are all slower than the links possible within your datacenter between an App Server and DB. Sending less data across the slowest part of your infrastructure is just too obvious to ignore - for you or us.
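As a sketch of what that looks like in practice - the host, service path, and field names below are placeholders, so check your own E10 REST endpoint for the real URL shape:

```csharp
// OData $select: ask the service for only the columns you need, so only
// those cross the slow client-to-server link.
using System.Net.Http;
using System.Threading.Tasks;

static class ODataSelect
{
    public static Task<string> GetSixFieldsAsync(HttpClient client)
    {
        // 6 fields instead of all ~400, capped at 10 rows.
        var url = "https://yourserver/ERP10/api/v1/Erp.BO.CustomerSvc/Customers"
                + "?$select=CustNum,CustID,Name,City,State,CreditLimit&$top=10";
        return client.GetStringAsync(url);
    }
}
```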

If you don’t know it today, put it on your bedtime reading list. It’s the next big thing coming for performance. Several products are already using it, more are in development, and more are under consideration - all with a REST / OData dependency.

I hope that helps. Coffee done, back to work :slight_smile:


Thanks @Bart_Elia - it was more a question of what’s better, not necessarily to address existing performance issues, but to see whether performance would be negatively affected by leaving net.tcp and moving to https.

I am moving this to the experts’ corner for posterity - thanks again as usual! We already owe you many, many beers at next year’s Insights. :beers:
-Cheers

If that’s the short question of http vs nettcp - then no. No difference at that level :slight_smile:


FWIW, having been a single-tenant client for two years now, we have been using the https client and love it. Some users connected via RDP to a terminal server that used net.tcp, so they had no network issues and the speed was great (both servers located at the same hosting location). The only issue we had with the SSL client was that it didn’t handle hot fixes well. This is a bigger issue for 10.0 as we are moving to Dedicated Tenancy and there will be no hot fixes anymore.

Mark W.
