Endpoints running on the Azure Storage Queues transport with a single storage account are subject to throttling once the maximum number of concurrent requests to the storage account is reached. Multiple storage accounts can be used to overcome this limitation. To better understand scale-out options with storage accounts, first read the Azure storage account scalability and performance targets article carefully.
All messages in a queue are accessed via a single queue partition. A single queue is targeted to process up to 2,000 messages per second. Scalability targets for storage accounts vary by region, reaching up to 20,000 messages per second (throughput achieved using an object size of 1KB). These figures are subject to change and should be verified periodically.
When the number of messages per second exceeds this quota, the storage service responds with an HTTP 503 Server Busy message, indicating that the platform is throttling the queue. If a single storage account cannot handle an application's request rate, the application can use several storage accounts, with a dedicated storage account per endpoint. This keeps the application scalable without saturating a single storage account. It also gives discrete control over queue processing based on the sensitivity and priority of the messages handled by different endpoints. For example, high-priority endpoints could have more dedicated workers than low-priority endpoints.
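The per-endpoint worker allocation described above can be sketched using NServiceBus's message-processing concurrency setting. The endpoint names and concurrency limits below are illustrative assumptions, not prescribed values:

```csharp
// High-priority endpoint: allow more messages to be processed in parallel.
// Endpoint names and limits are hypothetical examples.
var highPriority = new EndpointConfiguration("Sales.HighPriority");
highPriority.LimitMessageProcessingConcurrencyTo(32);

// Low-priority endpoint: fewer concurrent messages, leaving its storage
// account well below the throttling thresholds.
var lowPriority = new EndpointConfiguration("Reporting.LowPriority");
lowPriority.LimitMessageProcessingConcurrencyTo(4);
```

Combined with a dedicated storage account per endpoint, this keeps high-priority traffic isolated from throttling caused by low-priority workloads.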
A typical implementation uses a single storage account to send and receive messages. All endpoints are configured to receive and send messages using the same storage account.
When the number of endpoint instances is increased, all endpoints continue reading from and writing to the same storage account. Once the limit of 2,000 messages/sec per queue or 20,000 messages/sec per storage account is reached, the Azure storage service throttles message throughput.
While an endpoint can only read from a single Azure storage account, it can send messages to multiple storage accounts. This way one can set up a solution using multiple storage accounts where each endpoint uses its own Azure storage account, thereby increasing message throughput.
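As a minimal sketch, each endpoint can be configured to receive from its own storage account by giving it a dedicated connection string. The endpoint name and connection string values below are placeholders:

```csharp
// Endpoint 1 receives from its own storage account; Endpoint 2 would be
// configured analogously with a different account. Values are placeholders.
var endpointConfiguration = new EndpointConfiguration("Endpoint1");
var transport = endpointConfiguration.UseTransport<AzureStorageQueueTransport>();
transport.ConnectionString("DefaultEndpointsProtocol=https;AccountName=[ENDPOINT1_ACCOUNT];AccountKey=[KEY];");
```

Sending to endpoints hosted in other storage accounts is then done by qualifying the destination address, as shown in the sections that follow.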
Scaling out and splitting endpoints over multiple storage accounts works to a certain extent, but it cannot be applied indefinitely while expecting throughput to increase linearly. Each resource and group of resources has throughput limitations.
A suitable technique to overcome this problem is resource partitioning using scale units. A scale unit is a set of resources with a well-defined maximum throughput, where adding more resources to the unit does not increase throughput. Once the scale unit is determined, throughput can be improved by creating more scale units. Scale units do not share resources.
An example of a partitioned application with a different number of deployed scale units is an application deployed in various regions.
NServiceBus allows specifying destination addresses in the `endpoint@physicallocation` format when messages are dispatched, in various places such as the Send and Routing APIs or the `MessageEndpointMappings` configuration section. In this notation, the `physicallocation` part represents the location where the endpoint's infrastructure is hosted, such as a storage account. Using this notation, it is possible to route messages to any endpoint hosted in any storage account.
Message endpoint mappings provide a way to configure message destinations using an XML config section. Configure the destination by specifying a connection string after the endpoint name, separated by an `@` sign. Each endpoint can have its own storage account to overcome the throughput limitations.
Example: Endpoint 1 sends messages to Endpoint 2. Endpoint 1 defines a message mapping with a connection string associated with the Endpoint 2 Azure storage account.
Message mapping for Endpoint 1:
```xml
<MessageEndpointMappings>
  <add Messages="Contracts"
       Namespace="Contracts.Commands.ForEndpoint2"
       Endpoint="Endpoint2@DefaultEndpointsProtocol=https;AccountName=[ACCOUNT];AccountKey=[KEY];" />
</MessageEndpointMappings>
```
The same applies when an endpoint subscribes to an endpoint in another storage account, e.g. Endpoint 2 subscribing to events published by Endpoint 1.
Message mapping for Endpoint 2:
```xml
<MessageEndpointMappings>
  <add Messages="Contracts"
       Namespace="Contracts.Events.FromEndpoint1"
       Endpoint="Endpoint1@DefaultEndpointsProtocol=https;AccountName=[ACCOUNT];AccountKey=[KEY];" />
</MessageEndpointMappings>
```
Send options enable routing messages to any endpoint hosted in another storage account by specifying the storage account as part of the destination address:
```csharp
endpointInstance.Send(
    destination: "sales@DefaultEndpointsProtocol=https;AccountName=[ACCOUNT];AccountKey=[KEY];",
    message: new MyMessage());
```
To prevent accidentally leaking connection string values, it is recommended to use aliases instead of raw connection strings. When applied, raw connection string values are replaced with registered aliases, removing the possibility of leaking a connection string value. When using a single account, aliasing is limited to just enabling it. When multiple accounts are used, an alias has to be registered for each storage account.
To enable sending from `account_B`, the following configuration has to be applied to the endpoint:
```csharp
var transport = endpointConfiguration.UseTransport<AzureStorageQueueTransport>();
transport.ConnectionString("account_A_connection_string");
transport.UseAccountAliasesInsteadOfConnectionStrings();
transport.DefaultAccountAlias("account_A");
var accountRouting = transport.AccountRouting();
accountRouting.AddAccount("account_B", "account_B_connection_string");
```
Aliases can be provided both for the endpoint's own connection string and for other accounts' connection strings. This enables using the alias in the `@` notation for destination addresses:
```csharp
endpointInstance.Send(
    destination: "sales@accountName",
    message: new MyMessage());
```
Using the same alias, e.g. `default`, to represent different storage accounts in different endpoints is highly discouraged, as it introduces ambiguity when resolving addresses like `queue@default` and may cause issues when e.g. replying. When such an address is interpreted as a reply address, the name `default` will point to a different connection string.