The Azure Storage persistence and transport are network IO intensive. Every operation performed against storage involves one or more network hops, most of which are small HTTP requests to a single IP address (that of the storage cluster). By default, the .NET Framework is configured very restrictively for this kind of communication.
Performance can be improved by overriding the settings exposed by the ServicePointManager class:
ServicePointManager.DefaultConnectionLimit = 100;
By default, the .NET Framework allows only two simultaneous connections to the same resource. A higher connection limit allows more parallel requests and therefore higher network throughput. However, setting the connection limit too high bypasses the built-in connection reuse mechanism, which may result in sub-optimal resource usage.
The optimal value depends on the physical properties of the host machine and the endpoint's expected workload. The ideal number is lower than the average number of parallel storage operations. It is recommended to start with a value of 10 and adjust it based on the observed performance impact.
ServicePointManager.UseNagleAlgorithm = false;
Nagle's algorithm is a performance optimization for TCP/IP-based networks, but it negatively impacts the performance of requests to Azure Storage services. See Microsoft's blog post Nagle's Algorithm is Not Friendly towards Small Requests.
ServicePointManager.Expect100Continue = false;
Setting the Expect100Continue property to false configures the client not to wait for a 100-Continue response from the server before transmitting data. Waiting for 100-Continue is an optimization that avoids sending large payloads when the server would reject the request. That optimization isn't necessary for Azure Storage operations, and disabling it may result in faster requests.
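The three ServicePointManager overrides discussed above can be applied together before the endpoint opens any connections to Azure Storage. A minimal sketch, using the suggested starting connection limit of 10 (to be tuned per workload):

```csharp
using System.Net;

// Apply before any connection to Azure Storage is opened,
// e.g. at the start of Main() or in the endpoint bootstrap code.
ServicePointManager.DefaultConnectionLimit = 10; // starting value; tune based on observed throughput
ServicePointManager.UseNagleAlgorithm = false;   // avoid Nagle delays on small storage requests
ServicePointManager.Expect100Continue = false;   // skip the 100-Continue handshake
```

Because these are process-wide static settings, they affect all HTTP traffic in the process, not only Azure Storage requests.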
A secondary index record was not created by the persister contained in the
NServiceBus. package. To provide backward compatibility, the
NServiceBus. package performs a full table scan across all partitions, looking for secondary index records, before creating a new saga. For systems that have only ever used the
NServiceBus. library, or that have verified that all saga instances have a secondary index record, full table scans can be safely disabled by using the AssumeSecondaryIndicesExist setting.
Refer to Microsoft's Azure Storage Performance Checklist for more performance tips and design guidelines to optimize Azure Storage performance.
Multiple parallel read operations are used to improve message throughput. The number of parallel read operations is the square root of the configured message processing concurrency; for example, a concurrency of 16 results in 4 parallel reads. This value can be increased or decreased if needed by using the
DegreeOfReceiveParallelism configuration parameter. See Azure Storage Queues Transport Configuration for how to use this parameter.
Increasing DegreeOfReceiveParallelism will influence the total number of storage operations performed against Azure Storage services and can result in higher costs.
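Assuming the transport is configured through the standard UseTransport API, overriding the parallelism might look like the sketch below; the endpoint name and the value 4 are illustrative, so check the transport's configuration documentation for the exact shape of the API:

```csharp
var endpointConfiguration = new EndpointConfiguration("SampleEndpoint"); // hypothetical endpoint name

var transport = endpointConfiguration.UseTransport<AzureStorageQueueTransport>();

// Override the default, which is the square root
// of the configured message processing concurrency.
transport.DegreeOfReceiveParallelism(4);
```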