Azure Storage Persistence and Transport are network I/O intensive. Every operation performed against storage implies one or more network hops, most of which are small HTTP requests to a single IP address (that of the storage cluster). By default, the .NET Framework is configured to be very restrictive for this kind of communication.
Performance can be improved by overriding the settings exposed by the ServicePointManager class:
ServicePointManager.DefaultConnectionLimit = 100;
By default, the .NET Framework allows only 2 simultaneous connections to the same host. A higher connection limit allows more parallel requests and therefore results in higher network throughput. However, setting the connection limit too high bypasses the built-in connection reuse mechanism, which may result in suboptimal resource usage.
The optimal value depends on the physical properties of the host machine and the endpoint's expected workload. The ideal number is lower than the average number of parallel storage operations. It is recommended to start with a value of 10 and adjust it based on the observed performance impact.
ServicePointManager.UseNagleAlgorithm = false;
Nagle's algorithm is a performance optimization for TCP/IP-based networks, but it has a negative impact on the performance of the small requests typical of Azure Storage traffic. See Microsoft's blog post Nagle's Algorithm is Not Friendly towards Small Requests.
ServicePointManager.Expect100Continue = false;
Setting the Expect100Continue property to false configures the client not to wait for a 100-Continue response from the server before transmitting data. Waiting for 100-Continue is an optimization that avoids sending a large payload when the server would reject the request. That optimization isn't necessary for Azure Storage operations, and disabling it may result in faster requests.
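Taken together, the three settings above can be applied once at process startup, before the first request to Azure Storage is made. The following is a minimal sketch; the value of 10 for DefaultConnectionLimit follows the recommendation above and should be tuned per workload:

```csharp
using System.Net;

static class StorageNetworkTuning
{
    // Apply once at startup, before any storage connection is opened.
    // ServicePoints created earlier will not pick up the new defaults.
    public static void Apply()
    {
        // Allow more than the default 2 concurrent connections per host.
        // Start with 10 and adjust based on observed throughput.
        ServicePointManager.DefaultConnectionLimit = 10;

        // Disable Nagle's algorithm; it delays the small requests
        // typical of Azure Storage traffic.
        ServicePointManager.UseNagleAlgorithm = false;

        // Don't wait for a 100-Continue response before sending the body.
        ServicePointManager.Expect100Continue = false;
    }
}
```

Calling this method from the entry point (e.g. Main) ensures the settings are in effect before the endpoint starts.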
Refer to Microsoft's Azure Storage Performance Checklist for more performance tips and design guidelines to optimize Azure Storage performance.
Multiple parallel read operations are used to improve message throughput. The number of parallel read operations defaults to the square root of the configured message processing concurrency. This value can be increased or decreased if needed by using the
DegreeOfReceiveParallelism configuration parameter. See Azure Storage Queues Transport Configuration on how to use this parameter.
DegreeOfReceiveParallelism influences the total number of storage operations performed against Azure Storage services and can result in higher costs.
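The default relationship described above can be sketched as follows. The exact rounding used by the transport is an assumption here, so treat this only as an illustration of how receive parallelism scales with concurrency:

```csharp
using System;

static class ReceiveParallelism
{
    // Illustrative only: the default degree of receive parallelism is
    // the square root of the message processing concurrency. The use of
    // Math.Ceiling for rounding is an assumption, not documented behavior.
    public static int DefaultFor(int maxConcurrency)
    {
        return (int)Math.Ceiling(Math.Sqrt(maxConcurrency));
    }
}

// For example, a processing concurrency of 16 would yield
// 4 parallel read operations under this sketch.
```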