Usage
endpointConfiguration.UseDataBus<AzureDataBus, SystemJsonDataBusSerializer>();
Cleanup strategies
Discarding old Azure Data Bus attachments can be done in one of the following ways:
- Using an Azure Durable Function
- Using the Blob Lifecycle Management policy
Using an Azure Durable Function
Review the Azure Blob Storage Data Bus cleanup with Azure Functions sample to see how to use a durable function to clean up attachments.
Using the Blob Lifecycle Management policy
Attachment blobs can be cleaned up using the Blob Storage Lifecycle feature. This method allows configuring a single policy for all data bus-related blobs. Those blobs can be either deleted or archived. The policy does not require custom code and is deployed directly to the storage account. This feature can only be used on GPv2 and Blob storage accounts, not on GPv1 accounts.
The lifecycle policy runs only once a day. The newly configured or updated policy can take up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run for the first time.
How lifecycle rules relate to Azure Blob Storage Databus settings
When creating a rule, the blob prefix match filter setting should be set to databus/ by default. If the Container() or BasePath() configuration options have been specified when configuring the data bus, the blob prefix match filter setting must be modified to take the configured container and/or base path values into account.
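As an illustration, suppose the data bus were configured with a hypothetical container name of my-container and base path of my/base/path (illustrative values, not the defaults). The rule's filter section would then need a prefix that includes both:

```json
{
  "filters": {
    "blobTypes": [ "blockBlob" ],
    "prefixMatch": [ "my-container/my/base/path/" ]
  }
}
```

Note that in a lifecycle policy the prefix begins with the container name, which is why the default prefix is databus/ (the default container name).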
Manage the Blob Lifecycle policy via Azure portal
A lifecycle management policy can be set directly on the Azure storage account via the portal. Additional information on the configuration can be found in the Azure blob lifecycle management policy documentation.
Manage the Blob Lifecycle policy via the Azure Command-Line Interface (CLI)
The lifecycle management policy can be set in a JSON document via the Azure CLI.
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-databus-files",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "version": {
            "delete": {
              "daysAfterCreationGreaterThan": 90
            }
          },
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 30
            },
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 90,
              "daysAfterLastTierChangeGreaterThan": 7
            },
            "delete": {
              "daysAfterModificationGreaterThan": 2555
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "databus/"
          ]
        }
      }
    }
  ]
}
The policy rules can be associated with the specified storage account as follows:
az storage account management-policy create --account-name myaccount --policy @policy.json --resource-group myresourcegroup
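Once created, the applied policy can be inspected to confirm the rules took effect (assuming the same account and resource group names as above):

```shell
# Show the lifecycle management policy currently applied to the account
az storage account management-policy show \
  --account-name myaccount \
  --resource-group myresourcegroup
```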
Configuration
Configuring the BlobServiceClient
There are several ways to configure the BlobServiceClient.
Using a preconfigured BlobServiceClient
A fully configured BlobServiceClient can be set through the settings:
var serviceClient = new BlobServiceClient("connectionString");
endpointConfiguration.UseDataBus<AzureDataBus, SystemJsonDataBusSerializer>()
.UseBlobServiceClient(serviceClient);
Using a custom provider
A custom provider can be declared that provides a fully configured BlobServiceClient:
public class CustomProvider : IProvideBlobServiceClient
{
// Leverage dependency injection to use a custom-configured BlobServiceClient
public CustomProvider(BlobServiceClient serviceClient)
{
Client = serviceClient;
}
public BlobServiceClient Client { get; }
}
The provider is then registered in the dependency injection container:
endpointConfiguration.UseDataBus<AzureDataBus, SystemJsonDataBusSerializer>();
endpointConfiguration.RegisterComponents(services => services.AddSingleton<IProvideBlobServiceClient, CustomProvider>());
Providing a connection string and container name
endpointConfiguration.UseDataBus<AzureDataBus, SystemJsonDataBusSerializer>()
.ConnectionString("connectionString")
.Container("containerName");
The container name is optional and will be set to the default when omitted.
Token-credentials
Enables usage of Microsoft Entra ID authentication, such as managed identities for Azure resources, instead of the shared secret in the connection string.
With a preconfigured BlobServiceClient
var serviceClient = new BlobServiceClient(new Uri("https://<account-name>.blob.core.windows.net"), new DefaultAzureCredential());
endpointConfiguration.UseDataBus<AzureDataBus, SystemJsonDataBusSerializer>()
.UseBlobServiceClient(serviceClient);
With Microsoft.Extensions.Azure
builder.Services.AddAzureClients(azureClients =>
{
azureClients.AddBlobServiceClient(new Uri("https://<account-name>.blob.core.windows.net"));
azureClients.UseCredential(new DefaultAzureCredential());
});
builder.Services.AddSingleton<IProvideBlobServiceClient, CustomProvider>();
Behavior
The following extension methods are available for changing the behavior of AzureDataBus defaults:
var dataBus = endpointConfiguration.UseDataBus<AzureDataBus, SystemJsonDataBusSerializer>();
dataBus.ConnectionString(azureStorageConnectionString);
dataBus.Container(containerName);
dataBus.BasePath(basePathWithinContainer);
dataBus.MaxRetries(maxNumberOfRetryAttempts);
dataBus.NumberOfIOThreads(numberOfIoThreads);
dataBus.BackOffInterval(backOffIntervalBetweenRetriesInSecs);
- ConnectionString(): The connection string to the storage account for storing data bus properties; defaults to UseDevelopmentStorage=true.
- Container(): The container name; defaults to databus.
- BasePath(): The blobs' base path within the container; defaults to an empty string.
- MaxRetries(): The number of upload/download retries; defaults to 5 retries.
- NumberOfIOThreads(): The number of blocks that will be uploaded simultaneously; defaults to 1 thread.
- BackOffInterval(): The back-off time between retries; defaults to 30 seconds.
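The defaults above can also be stated explicitly. The following sketch simply restates each documented default value, so it changes no behavior relative to leaving the options unset:

```csharp
var dataBus = endpointConfiguration.UseDataBus<AzureDataBus, SystemJsonDataBusSerializer>();
dataBus.ConnectionString("UseDevelopmentStorage=true"); // default: local development storage
dataBus.Container("databus");                           // default container name
dataBus.BasePath("");                                   // default: empty base path
dataBus.MaxRetries(5);                                  // default retry count
dataBus.NumberOfIOThreads(1);                           // default number of IO threads
dataBus.BackOffInterval(30);                            // default back-off, in seconds
```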