Not sure if you had any responses to this, but this sounds kind of old 
to me. When hyperthreading first came out, it would flush the L1 cache 
when you moved a new thread context onto the CPU. I don't think it 
does this anymore. I used to make that kind of recommendation in low 
latency work so you wouldn't keep squashing the L1. If you had a single 
threaded app, it wouldn't make sense to keep clearing it. Another 
solution to that is to give the app its own CPU. Nowadays, you can 
barely buy a server without a bunch of cores/sockets.
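
If it's useful, here's a minimal sketch of the "give the app its own 
CPU" idea, assuming Linux/glibc; the core number is just an example, 
and you can get the same effect from outside the app with taskset:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      cpu_set_t set;
      int cpu = 3;               /* hypothetical core reserved for this app */

      CPU_ZERO(&set);
      CPU_SET(cpu, &set);

      /* pid 0 = this process; threads created afterwards inherit the mask */
      if (sched_setaffinity(0, sizeof(set), &set) != 0) {
          perror("sched_setaffinity");
          exit(1);
      }

      printf("pinned to CPU %d, now on CPU %d\n", cpu, sched_getcpu());
      /* ... latency-sensitive work here ... */
      return 0;
  }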

The second one is more current. For some time, chips have slowed 
themselves down to conserve power when they don't have work to do, but 
the delay in speeding back up hasn't been an issue for a while now.
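
On Linux that power management shows up as cpufreq; a rough sketch, 
assuming the standard sysfs path and root, that forces one core onto 
the performance governor so it never clocks down (which gets at the 
same goal as the BIOS setting, but from the OS side):

  #include <stdio.h>

  /* standard cpufreq sysfs path for CPU 0; repeat per core in practice */
  #define GOV "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"

  int main(void)
  {
      char cur[64] = "";
      FILE *f = fopen(GOV, "r");

      if (f) {                           /* show what we're changing from */
          if (fgets(cur, sizeof(cur), f))
              printf("current governor: %s", cur);
          fclose(f);
      }

      f = fopen(GOV, "w");               /* needs root */
      if (!f) { perror(GOV); return 1; }
      fputs("performance\n", f);         /* keep the core at full clock */
      fclose(f);
      return 0;
  }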

I have done perf tuning for years (on Solaris, not Linux), but the best 
thing is almost always more/better resources. Also, SSD is probably the 
best thing to come around for performance in years. Low-end disks do 
something like 100-300 IOPS; SSDs can be anywhere from 1,000 to 20,000 
IOPS. Disk is regularly the bottleneck.
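
If you want a quick read on whether disk is the bottleneck without any 
extra tools, here's a rough sketch that samples /proc/diskstats twice 
and prints read+write IOPS for one device (the device name is just an 
example):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* completed reads + writes for one block device, from /proc/diskstats */
  static long long ios(const char *dev)
  {
      FILE *f = fopen("/proc/diskstats", "r");
      char line[256], name[64];
      unsigned long long rd, rdm, rsec, rms, wr;
      long long total = -1;

      if (!f)
          return -1;
      while (fgets(line, sizeof(line), f)) {
          /* fields: major minor name reads reads_merged sectors ms writes ... */
          if (sscanf(line, "%*u %*u %63s %llu %llu %llu %llu %llu",
                     name, &rd, &rdm, &rsec, &rms, &wr) == 6 &&
              strcmp(name, dev) == 0) {
              total = (long long)(rd + wr);
              break;
          }
      }
      fclose(f);
      return total;
  }

  int main(void)
  {
      const char *dev = "sda";           /* hypothetical device; adjust */
      long long a = ios(dev);
      sleep(5);
      long long b = ios(dev);

      if (a < 0 || b < 0) {
          fprintf(stderr, "device %s not found\n", dev);
          return 1;
      }
      printf("%s: ~%lld IOPS over 5s\n", dev, (b - a) / 5);
      return 0;
  }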

My favorite perf tuning for a low-latency network app is to bind 
network interrupts to a CPU or set of CPUs, and not CPU 0 or 1. 
Interrupts will usually cause whatever is running to park and then have 
to reload. The other usual problems are double caching of the same data.
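
The binding itself is just a bitmask write in /proc; a minimal sketch, 
assuming a hypothetical IRQ number (look yours up in /proc/interrupts), 
root, and that irqbalance isn't running to undo it:

  #include <stdio.h>

  int main(void)
  {
      int irq = 77;            /* hypothetical NIC IRQ; see /proc/interrupts */
      const char *mask = "c";  /* bitmask: 0xc = CPUs 2 and 3, away from 0/1 */
      char path[64];
      FILE *f;

      snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
      f = fopen(path, "w");    /* needs root; stop irqbalance first */
      if (!f) { perror(path); return 1; }
      fprintf(f, "%s\n", mask);
      fclose(f);
      return 0;
  }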

The multipath TCP thing sounds really similar to some of the 
technologies around InfiniBand. I'm still trying to think of an 
application for it.

On 7/31/14, 6:59 AM, canito at dalan.us wrote:
> Good Morning-
>
> Last night I watched a couple of performance tuning videos on YouTube, 
> and hearing a couple of suggestions I'd never heard before prompted me 
> to write and ask: what are some of the tunables you find most useful?
>
> Two suggestions for performance that I have not heard up to now:
>
> 1.) Disabling hyper-threading for latency sensitive applications.
> 2.) Disabling power management in the BIOS.
>
> I also just learned of MultipathTCP (MPTCP), which I haven't found 
> support for in any of the "enterprise" distros. One of the speakers 
> discussed performance degradation when using bonding. Has anyone else 
> experienced this? What are the better alternatives for bonding 
> interfaces?
>
> Hope you-all have a good day!
>
> SDA
>
> _______________________________________________
> TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
> tclug-list at mn-linux.org
> http://mailman.mn-linux.org/mailman/listinfo/tclug-list