Topology

A messaging topology is a specific arrangement of messaging entities, such as queues, topics, subscriptions, and rules.

The Azure Service Bus transport operates on a topology created on the broker. The topology handles exchanging messages between endpoints by creating and configuring Azure Service Bus entities.

The topic-per-event topology dedicates one Azure Service Bus topic to each concrete event type. This design moves away from the single “bundle” topic and its SQL or Correlation filters, thereby reducing filter overhead and distributing messages more evenly across multiple topics.

In the topic-per-event topology:

  1. Publishers send an event to a specific topic named after the most concrete event type.
  2. Subscribers each create a subscription under each topic that matches the event(s) they are interested in.
  3. Because there is no single, central “bundle” topic to hold all messages, each published event flows to its own dedicated topic.

flowchart LR
    subgraph Publisher
        P[Publishes<br/>ConcreteEventA]
    end
    subgraph Service Bus
        T1[Topic: ConcreteEventA]
        T2[Topic: ConcreteEventB]
    end
    subgraph Subscriber
        S1[Subscribes to<br/>ConcreteEventA]
        S2[Subscribes to<br/>ConcreteEventB]
    end
    P -->|Publish| T1
    S1 -->|Subscribe| T1
    S2 -->|Subscribe| T2

This design can dramatically reduce filtering overhead, boosting performance and scalability. Distributing the messages across multiple topics avoids the single-topic bottleneck and mitigates the risk of hitting per-topic subscription and filter limits.
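
As a rough sketch, enabling this topology on an endpoint might look like the following. Note that TopicTopology.Default and the transport constructor overload shown here are assumptions based on the mapping APIs (SubscribeTo/PublishTo) used later on this page; check the transport documentation for the exact API of the installed version.

// Sketch only: TopicTopology.Default and this constructor overload are assumptions.
var topology = TopicTopology.Default; // topic-per-event

// Event-to-topic mappings (shown in detail below) are declared on the topology instance:
// topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderAccepted");

var transport = new AzureServiceBusTransport(connectionString, topology);
endpointConfiguration.UseTransport(transport);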

Quotas and limitations

A single Azure Service Bus topic can hold up to 2,000 subscriptions, and each Premium namespace (with one messaging unit) can have up to 1,000 topics.

  • Subscriptions per topic: 2,000 (Standard/Premium).
  • Topics per Premium namespace: 1,000 per messaging unit.
  • Topic size: 5 GB quota per topic.

By allocating a separate topic for each concrete event type, the overall system can scale more effectively:

  • Each topic is dedicated to one event type, so message consumption is isolated.
  • Failure domain size is reduced from the entire system to a single topic, so if any single topic hits its 5 GB quota, only that event type is affected.
  • The limit of 1,000 topics per messaging unit can comfortably support hundreds of event types, especially considering that not all event types are high-volume.

Subscription rule matching

In this topology, no SQL or Correlation filtering is required on the topic itself, because all messages in a topic are of the same event type. Each topic subscription can rely on the default catch-all rule (1=1).
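
For illustration, the rules that end up on a subscription can be inspected with the Azure SDK administration client; the connection string and entity names below are placeholders. A freshly created subscription carries only the $Default match-all rule:

using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

// Each subscription in this topology only needs its $Default rule,
// which is a match-all filter (SQL "1=1").
await foreach (RuleProperties rule in adminClient.GetRulesAsync("Shipping.OrderAccepted", "Sales"))
{
    Console.WriteLine($"{rule.Name}: {rule.Filter}");
}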

Since there is only one event type per topic:

  • Subscribers don’t need to manage large numbers of SQL or Correlation filters.
  • Interface-based inheritance does require extra care if multiple interfaces or base classes are in play (see below).

Interface-based inheritance

A published message type can have multiple interfaces in its hierarchy, each representing a valid message type. For example:

namespace Shipping;

interface IOrderAccepted : IEvent { }
interface IOrderStatusChanged : IEvent { }

class OrderAccepted : IOrderAccepted, IOrderStatusChanged { }
class OrderDeclined : IOrderAccepted, IOrderStatusChanged { }

For a handler class OrderAcceptedHandler : IHandleMessages<OrderAccepted>, the subscription will look like:

flowchart LR
    subgraph Publisher
        P[Publishes<br/>OrderAccepted]
    end
    subgraph Service Bus
        T1[Topic: Shipping.OrderAccepted]
    end
    subgraph Subscriber
        S1[Subscribes to<br/>OrderAccepted]
    end
    P -->|Publish| T1
    S1 -->|Subscribe| T1
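
A minimal sketch of that handler. No topology mapping is required here, because the topic name defaults to the concrete event's full type name, as shown in the diagram above:

using NServiceBus;
using System.Threading.Tasks;

public class OrderAcceptedHandler : IHandleMessages<OrderAccepted>
{
    public Task Handle(OrderAccepted message, IMessageHandlerContext context)
    {
        // React to the concrete event; auto-subscribe creates the subscription
        // on the Shipping.OrderAccepted topic.
        return Task.CompletedTask;
    }
}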

If the subscriber is interested only in the interface IOrderStatusChanged, it declares a handler class OrderStatusChangedHandler : IHandleMessages<IOrderStatusChanged> and a mapping to the corresponding topics to which the types implementing that contract are published.

topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderAccepted");

When a publisher starts publishing Shipping.OrderDeclined, the event needs to be mapped as well

topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderAccepted");
topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderDeclined");

so that the subscriber opts into receiving the event in its input queue. This requires a topology change.

flowchart LR
    subgraph Publisher
        P[Publishes<br/>OrderAccepted<br/>OrderDeclined]
    end
    subgraph Service Bus
        T1[Topic: Shipping.OrderAccepted]
        T2[Topic: Shipping.OrderDeclined]
    end
    subgraph Subscriber
        S1[Subscribes to<br/>OrderAccepted<br/>OrderDeclined]
    end
    P -->|Publish| T1
    P -->|Publish| T2
    S1 -->|Subscribe| T1
    S1 -->|Subscribe| T2

Depending on the desired use cases, the mapping can be done in two ways:

  • Subscriber-side
  • Publisher-side

On the subscriber side, the endpoint can be configured so that, although the type accepted by the handler is IOrderStatusChanged, the actual topics of interest are named after the derived types:

topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderAccepted");
topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderDeclined");

This will make auto-subscribe create these two topics instead and wire the subscription to them.
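
The corresponding handler on the subscriber might look like the following sketch (the class name is illustrative):

using NServiceBus;
using System.Threading.Tasks;

public class OrderStatusChangedHandler : IHandleMessages<IOrderStatusChanged>
{
    public Task Handle(IOrderStatusChanged message, IMessageHandlerContext context)
    {
        // Handles any status change, regardless of whether the concrete event
        // published was OrderAccepted or OrderDeclined.
        return Task.CompletedTask;
    }
}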

Alternatively, the publisher can be configured to publish all its derived events onto a single IOrderStatusChanged topic that multiplexes all status-change-related events:

topology.PublishTo<OrderAccepted>("Shipping.IOrderStatusChanged");
topology.PublishTo<OrderDeclined>("Shipping.IOrderStatusChanged");

This second option requires fewer entities on the broker side, but it forces all subscribers to have handlers for every multiplexed derived event that is, or will be, published to that topic, since subscriptions in this topology don't apply filtering by design.

Evolution of the message contract

As mentioned in versioning of shared contracts, and shown in the examples above, NServiceBus uses the assembly-qualified type name in the message header. Evolving a message contract usually means creating an entirely new contract type, adding a version number to the original name. For example, when evolving Shipping.OrderAccepted, the publisher creates a new contract called Shipping.OrderAcceptedV2. When the publisher publishes Shipping.OrderAcceptedV2 events, these are by default published to the Shipping.OrderAcceptedV2 topic, so existing subscribers interested in the previous version would not receive them.

Use one of the following options when evolving message contracts:

  • Publish both versions of the event to individual topics on the publisher side, setting up the subscribers where necessary to receive both, or
  • Multiplex all versions of the event to the same topic and filter the versions on the subscriber with specialized filter rules

When publishing both versions of the event, the subscribers need to opt in to receiving those events by adding an explicit mapping:

topology.SubscribeTo<IOrderAccepted>("Shipping.OrderAccepted");
topology.SubscribeTo<IOrderAccepted>("Shipping.OrderAcceptedV2");

When multiplexing all versions of the event to the same topic, the following configuration needs to be added on the publisher side:

topology.PublishTo<OrderAcceptedV2>("Shipping.OrderAccepted");

and then a customization that promotes the full name to a property of the native message

transport.OutgoingNativeMessageCustomization = (operation, message) =>
{
    if (operation is MulticastTransportOperation multicastTransportOperation)
    {
        // Subject is used for demonstration purposes only, choose a property that fits your scenario
        message.Subject = multicastTransportOperation.MessageType.FullName;
    }
};

which would allow adding either a Correlation filter (preferred) or a SQL filter based on the promoted full name.
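
For example, such a correlation filter rule could be added with the Azure SDK administration client. The topic, subscription, and rule names below are illustrative, and the subscription's $Default match-all rule would also need to be removed for the filter to have an effect:

using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

// Matches only messages whose Subject was set to the V2 full type name
// by the OutgoingNativeMessageCustomization shown above.
var filter = new CorrelationRuleFilter { Subject = "Shipping.OrderAcceptedV2" };

await adminClient.CreateRuleAsync(
    "Shipping.OrderAccepted",   // topic multiplexing both versions
    "Sales",                    // subscription name (illustrative)
    new CreateRuleOptions("OrderAcceptedV2Only", filter));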

Advanced multiplexing strategies

While the topic-per-event topology offers strong benefits for performance, observability, and isolation, certain scenarios may benefit from strategically multiplexing multiple events onto a shared topic. These scenarios include:

  • Multi-tenancy architectures
  • Entity quota constraints
  • Deployment simplification (e.g., when using infrastructure-as-code)
  • Semantic grouping of related events

Publisher-side multiplexing

The publishing side can be configured to route multiple event types to a shared topic using the PublishTo API:

topology.PublishTo<CustomerCreated>("Tenant.CustomerLifecycle");
topology.PublishTo<CustomerUpdated>("Tenant.CustomerLifecycle");
topology.PublishTo<CustomerDeleted>("Tenant.CustomerLifecycle");

In this configuration, all listed events are published to the same Tenant.CustomerLifecycle topic. All subscribers to this topic must be prepared to handle all published event types:

class CustomerCreatedHandler : IHandleMessages<CustomerCreated> { ... }
class CustomerUpdatedHandler : IHandleMessages<CustomerUpdated> { ... }
class CustomerDeletedHandler : IHandleMessages<CustomerDeleted> { ... }

Subscriber-side multiplexing

Alternatively, interface-based event grouping can be employed by subscribing explicitly to multiple topics:

topology.SubscribeTo<ICustomerLifecycleEvent>("Tenant.CustomerCreated");
topology.SubscribeTo<ICustomerLifecycleEvent>("Tenant.CustomerUpdated");
topology.SubscribeTo<ICustomerLifecycleEvent>("Tenant.CustomerDeleted");

This approach preserves per-event topic isolation while grouping handler logic by shared interfaces.
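
A subscriber could then group its handling logic behind the shared interface. The interface and event types below are hypothetical, mirroring the mappings above:

using NServiceBus;
using System.Threading.Tasks;

public interface ICustomerLifecycleEvent : IEvent { }

public class CustomerCreated : ICustomerLifecycleEvent { }
public class CustomerUpdated : ICustomerLifecycleEvent { }
public class CustomerDeleted : ICustomerLifecycleEvent { }

public class CustomerLifecycleHandler : IHandleMessages<ICustomerLifecycleEvent>
{
    public Task Handle(ICustomerLifecycleEvent message, IMessageHandlerContext context)
    {
        // Shared handling for all customer lifecycle events, regardless of concrete type
        return Task.CompletedTask;
    }
}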

Multiplexing derived events

For inheritance scenarios, it is possible to map multiple derived events to a common topic:

topology.PublishTo<OrderAccepted>("Shipping.IOrderStatusChanged");
topology.PublishTo<OrderDeclined>("Shipping.IOrderStatusChanged");

In this case, the topic Shipping.IOrderStatusChanged serves as a common destination for multiple concrete event types. Subscribers will receive all messages sent to that topic and must handle the full range of possible types.

Filtering within a multiplexed topic

If selective consumption is required within a multiplexed topic, a promoted message property such as the full event type name can be added using the following customization (Subject used for demonstration purposes only):

transport.OutgoingNativeMessageCustomization = (operation, message) =>
{
    if (operation is MulticastTransportOperation multicast)
    {
        // Subject is used for demonstration purposes only, choose a property that fits your scenario
        message.Subject = multicast.MessageType.FullName;
    }
};

This property can then be used to define a correlation filter rule on the subscription. The following sample uses Bicep:

resource subscription 'Microsoft.ServiceBus/namespaces/topics/subscriptions@2021-06-01-preview' = {
  name: '${topic.name}/subscriber'
  properties: {}
}

resource customerUpdatedOnlyRule 'Microsoft.ServiceBus/namespaces/topics/subscriptions/rules@2021-06-01-preview' = {
  parent: subscription
  name: 'CustomerUpdatedOnly'
  properties: {
    filterType: 'CorrelationFilter'
    correlationFilter: {
      // 'label' is the ARM property corresponding to the Subject set in the customization above
      label: 'MyNamespace.CustomerUpdated'
    }
  }
}

This configuration enables selective routing while using a shared topic, though it reintroduces filtering overhead and should be applied judiciously.

Strategy comparison

| Strategy | Topic count | Filter required | Subscriber code complexity | Recommended scenarios |
| --- | --- | --- | --- | --- |
| Per-event topic (default) | High | No | Low | General purpose and high isolation |
| Publisher-side multiplexing | Low | No | Medium | All consumers handle all related events |
| Subscriber-side multiplexing | High | No | Medium | Inheritance- or interface-driven subscriptions |
| Multiplexing with filtering | Low | Yes | High | Selective consumption with entity limits |

Handling overflow and scaling

In the single-topic model, a high volume of messages in one event type can degrade overall system performance for all events when the topic is saturated. With the topic-per-event model, each event type has its own 5 GB quota and its own topic partitioning. This provides a more localized failure domain:

  • Failure isolation: If one event type experiences a surge, only that topic will be throttled or fill its quota.
  • Load distribution: The broker spreads load across multiple internal partitions, often improving throughput when compared to a single large topic.

Observability

Monitoring is often simpler because each event type’s topic can be tracked with distinct metrics (message count, size, etc.). You can see which event types are experiencing spikes without needing to filter a single large “bundle” topic.
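
As an illustration, per-topic metrics can be read with the Azure SDK administration client; the connection string and topic names below are placeholders:

using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

foreach (var topicName in new[] { "Shipping.OrderAccepted", "Shipping.OrderDeclined" })
{
    // Runtime properties expose per-topic size and subscription counts
    TopicRuntimeProperties props = await adminClient.GetTopicRuntimePropertiesAsync(topicName);
    Console.WriteLine($"{topicName}: {props.SizeInBytes} bytes, {props.SubscriptionCount} subscriptions");
}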

Topology highlights

| Capability | Supported |
| --- | --- |
| Decoupled publishers / subscribers | Yes |
| Polymorphic events support | Yes (mapping API) |
| Event overflow protection | Yes (per-topic) |
| Subscriber auto-scaling based on queue size | Yes (queues) |
| Reduced complexity for non-inherited events | Yes |
| Fine-grained resource usage / observability | Yes (each topic is distinct) |