Performance Tuning

It is difficult to give performance tuning guidelines that are generally applicable. Results may vary greatly depending on many factors such as bandwidth, latency, client version, and more. As always with performance tuning: Measure, don't assume.

The Amazon SQS transport uses HTTP/S connections to send and receive messages from the AWS web services. The performance of the operations performed by the transport is subject to the latency of the connection between the endpoint and SQS.

Parallel message retrieval

To increase throughput on a single endpoint it is possible to increase the maximum concurrency. For more information about how to tune endpoint message processing, consult the tuning guide.

In Version 4 and higher, the transport automatically increases the degree of parallelism by applying the following formula:

Degree of parallelism = Math.Ceiling(MaxConcurrency / NumberOfMessagesToFetch)

The following examples illustrate how the formula is applied for various values of the maximum concurrency. The number of messages to fetch is capped at 10, so for concurrency values of 10 or less the degree of parallelism stays at 1.

MaxConcurrency | DegreeOfReceiveParallelism | NumberOfMessagesToFetch
1              | 1                          | 1
2              | 1                          | 2
3              | 1                          | 3
4              | 1                          | 4
5              | 1                          | 5
6              | 1                          | 6
7              | 1                          | 7
8              | 1                          | 8
9              | 1                          | 9
10             | 1                          | 10
19             | 2                          | 10
21             | 3                          | 10
100            | 10                         | 10
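The formula and table above can be sketched in code. The helper below is illustrative only (it is not the transport's actual implementation) and assumes, as the table shows, that the number of messages to fetch per receive request is capped at 10:

```csharp
using System;

class ParallelismExample
{
    // Illustrative sketch of the formula above; not the actual transport code.
    // Assumption: the transport fetches at most 10 messages per receive request.
    static int NumberOfMessagesToFetch(int maxConcurrency) =>
        Math.Min(maxConcurrency, 10);

    static int DegreeOfParallelism(int maxConcurrency) =>
        (int)Math.Ceiling((double)maxConcurrency / NumberOfMessagesToFetch(maxConcurrency));

    static void Main()
    {
        foreach (var maxConcurrency in new[] { 1, 10, 19, 21, 100 })
        {
            Console.WriteLine(
                $"MaxConcurrency={maxConcurrency}: " +
                $"parallelism={DegreeOfParallelism(maxConcurrency)}, " +
                $"fetch={NumberOfMessagesToFetch(maxConcurrency)}");
        }
    }
}
```

For example, a maximum concurrency of 19 yields a fetch size of 10 and therefore a degree of parallelism of Ceiling(19 / 10) = 2, matching the table.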

Each parallel message retrieval requires one long polling connection.

Changing the maximum concurrency will influence the total number of operations against SQS and can result in higher costs.

Number of connections

A single endpoint requires multiple connections. Connections might be established or reused due to connection pooling in the HTTP client infrastructure. By default, a single SQS client has a connection limit of 50 connections. When more than 50 connections are used, connection requests are queued and performance might decrease.

It is possible to set the ConnectionLimit property on the client programmatically by overriding the client factory, or to set ServicePointManager.DefaultConnectionLimit (recommended).
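As a sketch of the first option, the connection limit can be raised when constructing the SQS client. This assumes the AWS SDK for .NET, where AmazonSQSConfig inherits the ConnectionLimit property from ClientConfig (it applies to the .NET Framework HTTP stack); the exact client-factory hook depends on the transport version, so treat this as an outline rather than the definitive API:

```csharp
using Amazon.SQS;

// Sketch only: raise the per-client connection limit from its default of 50.
// ConnectionLimit is exposed by the AWS SDK's ClientConfig on .NET Framework.
var config = new AmazonSQSConfig
{
    ConnectionLimit = 150 // tune based on measurements, not assumptions
};

var client = new AmazonSQSClient(config);
```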

During message handling an endpoint is expected to be able to connect to external resources, such as remote services via HTTP.

If the endpoint is hosted in a process outside IIS, such as a Windows Service, by default the .NET Framework allows 2 concurrent outgoing HTTP requests per process. This can limit the overall throughput of the endpoint: outgoing HTTP requests get queued and, as a consequence, the endpoint's ability to process incoming messages is reduced.

It is possible to change the default connection limit of a process via the static DefaultConnectionLimit property of the ServicePointManager class, as in the following sample:

ServicePointManager.DefaultConnectionLimit = 10;

The above code can be placed in the process startup.

See ServicePointManager on MSDN for more information.

Sending small messages

If the endpoint sends many small messages (HTTP message size < 1460 bytes), it might be beneficial to turn off the Nagle algorithm.

To disable Nagle for a specific endpoint URI, use:

var servicePoint = ServicePointManager.FindServicePoint(new Uri("sqs-endpoint-uri"));
servicePoint.UseNagleAlgorithm = false;

To find the endpoint URIs used, consult the AWS Regions and Endpoints documentation.

It is also possible to disable Nagle globally for the application domain by applying:

ServicePointManager.UseNagleAlgorithm = false;

Known Limitations

  • The transport uses a single client for all operations on SQS. The throughput of a single endpoint is thus limited to the number of connections a single client can handle.
  • Client-side batching is not yet implemented for multiple outgoing messages sent as part of the same received message.
