First - I put this same argument into an internal rant and it got translated into the Bindings section of the help. Please do a search in the help on Binding and have a good overview read.
Now - sorry to throw cold water on most of this conversation… If you are looking to solve some performance needs by choosing net.tcp versus http or https, you are probably looking in the wrong area. The SQL I/O and the RAM to cache your DB are probably going to have a much larger impact on your performance. RAM is Love. Solid state I/O. All those things Raj is lecturing about at Insights.
So, some background over my first cup of coffee, so hopefully this is coherent…
Time machine back to ye olden days of the .NET 3.5 alpha, when some crazy person volunteered to the VPs *converting ABL to C# can't be that bad…* I was practically living in Redmond trying to go through every best practice MS had in the .NET stack coming out in 3.5. As a part of that we were looking at http and net.tcp, and at the time their http stack was… suboptimal… compared to net.tcp. Noticeably. They were learning from us during this cooperative development, so a lot of findings from this and other areas got pushed into their v4, which leveled that performance (interesting side note - like the autocompile EF perf in vanilla EF? You're welcome).
When MS went to v4, a lot of the early assumptions were no longer valid. It was too late to flip to leading with http when E10 released, so that is why you see net.tcp everywhere as the default.
So the whole net.tcp / http argument is basically a dead issue past .NET 3.5. Our testing at the MS scale labs (twice) has put the difference within the noise level of the timings.
Now … things that WILL make a difference when choosing bindings…
Those bindings where you see us mentioning 'Binary' in the name:

That means the payload is not something friendly to integrate against. We pack the tableset rows and columns into a highly optimized byte stream that is only possible when ERP is controlling both sides of the wire. Fields are packed in by index - no name lookups to find a field in this firehose of data. This is as fast a serialization approach as possible, at the huge cost of no interop. You need an ERP contract assembly on the other side to deserialize.
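To make that trade-off concrete, here's a toy sketch - in Python, and NOT the actual ERP wire format; the schema and rows are made up - of why packing row fields positionally by index beats keying every value by its field name:

```python
import json
import struct

# Toy illustration (not the real ERP format): both sides agree on the
# schema up front, so the wire can carry values by position only.
schema = ("order_num", "part_num", "qty")
rows = [(1001, "WIDGET-01", 5), (1002, "WIDGET-02", 12)]

# Index-packed: fixed layout, no field names on the wire.
packed = b"".join(struct.pack(">i10si", n, p.encode(), q) for n, p, q in rows)

# Name-keyed: interop-friendly, but every row repeats every field name.
keyed = json.dumps([dict(zip(schema, r)) for r in rows]).encode()

print(len(packed), len(keyed))  # the packed form is far smaller
```

The catch is exactly the one above: the packed bytes are meaningless unless the deserializer has the same schema, which is fine when you own both ends of the wire and hopeless for outside integrators.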
Once you start looking at compression, you will note there are two binding flavors:
Why two bindings doing almost the same thing on net.tcp?
Well, for http, we as an industry have had compression forever. For net.tcp, MS never had that as part of the spec. When we asked about this at the WCF design reviews we did with MS, they said it was not in the spec (back before open source and dev in the open).
Our response was a bit of a slap at them - You own the stupid spec, add it!
After a few VPs got over the deer-in-the-headlights look, they got it and added it in… 4.5 or 4.0 - not sure which. But someone was paying attention to what we did: we had rolled our own custom 'CompressionEncoder' to compress the net.tcp data, and MS took some research minds and improved on our performance. So you now see us using the vanilla 'binaryMessageEncoding' instead of our custom CompressionEncoder, which will probably be eased into retirement at some point. The vanilla one is the same perf but better on memory. Use it if you are doing net.tcp.
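For reference, wiring the vanilla encoder with the built-in compression looks roughly like this custom-binding sketch (a hedged example - the binding name is made up; the compressionFormat attribute on binaryMessageEncoding is the built-in compression MS added, and it needs a 4.5+ WCF stack on both ends):

```
<bindings>
  <customBinding>
    <!-- Hypothetical binding name. binaryMessageEncoding with
         compressionFormat replaces a hand-rolled compression encoder. -->
    <binding name="CompressedNetTcp">
      <binaryMessageEncoding compressionFormat="GZip" />
      <tcpTransport />
    </binding>
  </customBinding>
</bindings>
```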
Lastly, you can't get me talking serialization without bringing up the gorilla in the room - REST. All the serialization we have chatted about is for the full E10 client to the E10 server. For integrations, you either need to worry about interop or use the client DLLs and deal with the deployment issues. For non-.NET clients, historically you were left with SOAP. Very powerful. A great leap forward at the time. Wildly dated today.
REST, and OData specifically, gives us the best serialization performance possible because it lets you choose what data to send across the tiniest pipe in the entire enterprise - the client-to-server one. Need 6 fields instead of 400 sent to the client? Choose to grab just those 6.
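As a sketch of what that field trimming looks like on the wire (the server URL, service, and field names below are hypothetical, not an actual E10 endpoint):

```python
# Hedged sketch: an OData $select clause asks the server to return only
# the listed fields. Endpoint and field names here are made up.
base = "https://myserver/api/v1/CustomerSvc/Customers"
fields = ["CustNum", "Name", "City", "State", "Zip", "Phone"]  # 6 of ~400
url = f"{base}?$select={','.join(fields)}"
print(url)
```

The server then serializes just those six columns per row instead of the whole 400-field payload - the trimming happens before the data ever hits the slow link.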
Whether it's a cell network to a device, a cable internet link, a WAN, or a 100Mb Ethernet cable on the LAN - those are all slower than the fast links possible within your datacenter between an app server and DB. Sending less data across the slowest part of your infrastructure is just too obvious to ignore - for you or us.
If you don't know OData today, put it on your bedtime reading list. It's the next big thing coming for performance. Several products are already using it, more are in development, and more are under consideration - all with a REST / OData dependency.
I hope that helps. Coffee done, back to work