
CosmosDB Persistence Saga Concurrency

NuGet Package: NServiceBus.Persistence.CosmosDB (3.x)
Target Version: NServiceBus 9.x

Default behavior

When handling messages simultaneously, conflicts may occur. See below for examples of the exceptions that are thrown in those cases. Saga concurrency explains how these conflicts are handled and contains guidance for high-load scenarios.

A conflict is only detected when the saga data is persisted, which means the relevant Handle method on the saga is still invoked even though the message may later be rolled back. It is therefore important not to perform any work in saga handlers that cannot be rolled back together with the message. Under high levels of concurrency there will be N-1 rollbacks, where N is the number of concurrent messages, which can cause throughput issues and might require design changes.
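
For example, a handler that only modifies saga data and sends messages through the handler context keeps all of its work inside the incoming message's transaction, so a rollback caused by a concurrency conflict leaves no side effects behind. The following is a minimal sketch; the OrderSaga, StartOrder, and ShipOrder types are hypothetical and not part of the persistence package:

using System.Threading.Tasks;
using NServiceBus;

public class StartOrder : ICommand { public string OrderId { get; set; } }
public class ShipOrder : ICommand { public string OrderId { get; set; } }

public class OrderSagaData : ContainSagaData
{
    public string OrderId { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>, IAmStartedByMessages<StartOrder>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        mapper.MapSaga(saga => saga.OrderId)
            .ToMessage<StartOrder>(message => message.OrderId);
    }

    public Task Handle(StartOrder message, IMessageHandlerContext context)
    {
        // Only mutate saga state and send messages via the handler context.
        // Both are rolled back together with the incoming message if a conflict occurs.
        Data.OrderId = message.OrderId;

        // Avoid work that cannot be rolled back here, such as direct HTTP calls
        // to external services; on a conflict that work would already have happened.
        return context.Send(new ShipOrder { OrderId = message.OrderId });
    }
}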

Starting a saga

Example exception:

The 'OrderSagaData' saga with id '7ac4d199-6560-4d1a-b83a-b3dad94b0802' could not be created possibly due to a concurrency conflict.

Updating or deleting saga data

By default, CosmosDB persistence uses optimistic concurrency control when updating or deleting saga data. Starting with NServiceBus.Persistence.CosmosDB version 2.0, the persister can be configured to use pessimistic locking instead; see below for how to enable it.

Example exception:

The 'OrderSagaData' saga with id '7ac4d199-6560-4d1a-b83a-b3dad94b0802' was updated by another process or no longer exists.
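
This conflict detection relies on the ETag of the saga data document: an update is only applied if the ETag still matches the value captured when the saga data was read. The following is a minimal sketch of that general mechanism using the Cosmos SDK directly, not the persister's actual code; container, sagaId, partitionKey, and the OrderSagaData type are assumed to be in scope:

using System.Net;
using Microsoft.Azure.Cosmos;

// Load the saga data and capture the document's ETag.
var readResponse = await container.ReadItemAsync<OrderSagaData>(sagaId, partitionKey);
var sagaData = readResponse.Resource;

// ... the saga handler mutates sagaData here ...

try
{
    // Persist the changes only if the document has not been modified since it was read.
    await container.ReplaceItemAsync(
        item: sagaData,
        id: sagaId,
        partitionKey: partitionKey,
        requestOptions: new ItemRequestOptions { IfMatchEtag = readResponse.ETag });
}
catch (CosmosException exception) when (exception.StatusCode == HttpStatusCode.PreconditionFailed)
{
    // Another process updated or deleted the saga data in the meantime;
    // the persister surfaces this as the exception shown above and the message is retried.
}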

Saga concurrency control

By default, NServiceBus.Persistence.CosmosDB uses optimistic concurrency control. Pessimistic locking can be enabled with the following API:

var persistence = endpointConfiguration.UsePersistence<CosmosPersistence>();
var sagaPersistenceConfiguration = persistence.Sagas();
sagaPersistenceConfiguration.UsePessimisticLocking();

Pessimistic locking internals

CosmosDB does not support pessimistic locking natively. The behavior is implemented as a spin lock that tries to acquire a lease on the saga data document by issuing patch operations via the Container.PatchItemStreamAsync method.
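
A simplified sketch of that general approach using the Cosmos SDK directly is shown below. This is an illustration only, not the persister's actual implementation; the /ReservedUntil property name and the surrounding variables (container, sagaId, partitionKey) are assumptions:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

static async Task<bool> TryAcquireLeaseAsync(Container container, string sagaId, PartitionKey partitionKey,
    TimeSpan leaseLockTime, TimeSpan acquisitionTimeout, CancellationToken cancellationToken)
{
    var random = new Random();
    var deadline = DateTime.UtcNow + acquisitionTimeout;

    while (DateTime.UtcNow < deadline)
    {
        // Try to stamp a lease expiry onto the saga data document,
        // but only if no unexpired lease is currently held.
        var response = await container.PatchItemStreamAsync(
            id: sagaId,
            partitionKey: partitionKey,
            patchOperations: new[]
            {
                PatchOperation.Set("/ReservedUntil", DateTime.UtcNow + leaseLockTime)
            },
            requestOptions: new PatchItemRequestOptions
            {
                FilterPredicate = "FROM c WHERE NOT IS_DEFINED(c.ReservedUntil) OR c.ReservedUntil < GetCurrentDateTime()"
            },
            cancellationToken: cancellationToken);

        if (response.IsSuccessStatusCode)
        {
            return true; // lease acquired; the saga data can now be loaded and handled
        }

        // The precondition failed because another handler holds the lease; each attempt costs RUs.
        // Back off for a randomized delay (the minimum/maximum refresh delay settings) before retrying.
        await Task.Delay(random.Next(500, 1000), cancellationToken);
    }

    return false; // lock acquisition timed out
}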

Choosing pessimistic over optimistic concurrency is recommended whenever a saga experiences a high number of optimistic concurrency control errors.
When using pessimistic locking with provisioned throughput, be aware that the additional patch operations issued while loading the saga lead to higher RU usage. Set the lease lock acquisition minimum and maximum refresh delay in alignment with the expected saga contention to avoid consuming unnecessary RUs.

Pessimistic concurrency control settings

The pessimistic locking behavior can be customized using the following options:

Pessimistic lease lock duration

By default, the persister locks a saga data document for 60 seconds. Although it is not recommended to have sagas execute long-running logic, in some scenarios it might be required to increase the lease duration. The lease duration can be adjusted using the following API:

var pessimisticLockingConfiguration = sagaPersistenceConfiguration.UsePessimisticLocking();
pessimisticLockingConfiguration.SetLeaseLockTime(TimeSpan.FromSeconds(120));

Pessimistic lease lock acquisition timeout

By default, the persister waits 60 seconds to acquire the lock. The value can be adjusted using the following API:

pessimisticLockingConfiguration.SetLeaseLockAcquisitionTimeout(TimeSpan.FromMilliseconds(500));

Pessimistic lease lock acquisition minimum and maximum refresh delay

To prevent request synchronization, the persister randomizes the interval between lock acquisition requests. By default, the interval has a value between 500 and 1000 milliseconds. These values can be adjusted using the following API:

pessimisticLockingConfiguration.SetLeaseLockAcquisitionMinimumRefreshDelay(TimeSpan.FromMilliseconds(50));
pessimisticLockingConfiguration.SetLeaseLockAcquisitionMaximumRefreshDelay(TimeSpan.FromMilliseconds(100));

Related Articles

  • Saga concurrency
    NServiceBus ensures consistency between saga state and messaging.