
BACnet Secure Connect brings TLS 1.3 encryption and mutual certificate authentication to building automation, and it solves a real problem. BACnet/IP traffic has always been unencrypted, and in an era where operational technology networks are increasingly targeted, encrypting the wire is overdue. The industry needs this, and the ASHRAE committee that developed the standard deserves credit for getting the cryptographic foundations right.

However, SC comes with operational cost, and not every device on a BACnet network warrants that cost. A chiller plant controller managing expensive equipment and interfacing with critical systems should absolutely be encrypted and authenticated. The same is true for the boiler plant, critical air handlers, and lab controllers. These are high-value assets with direct safety implications, and the overhead of certificate management is justified by the risk they represent.

The generic unit heater box in stairwell 3B is a different conversation. These are commodity endpoints that report a temperature, accept a setpoint, and cycle a relay. The risk profile of these devices does not justify the certificate lifecycle overhead that SC introduces, and pretending otherwise leads to deployment plans that never leave the whiteboard.

The realistic adoption path for most campuses is selective: encrypt the high-value, high-risk controllers first, leave the commodity devices on BACnet/IP, and expand SC coverage over the coming years as devices reach end of life and their replacements ship with SC support. This is not a temporary compromise or an intermediate step on the way to full SC adoption. For most campuses, it is the steady state for the next decade, and any platform that claims to support BACnet/SC needs to be designed around this reality rather than treating it as an edge case.

The Problems That a Mixed-Transport Network Creates

A campus that adopts SC selectively ends up with two incompatible transports on the same network, and the consequences of that incompatibility are more significant than they might appear at first glance.

BACnet/IP uses UDP datagrams on port 47808. BACnet/SC uses TLS 1.3 WebSocket connections through a central hub. These transports are incompatible: a BACnet/IP workstation cannot open a WebSocket connection to an SC controller, and an SC controller cannot send a UDP packet to a BACnet/IP controller. The encryption that makes SC valuable is the same mechanism that prevents it from communicating with the unencrypted devices that will coexist on the network for years to come.

Some vendors offer a workaround: BACnet routing between the two transports. This requires inserting a dedicated router inline with the data path at every boundary where IP and SC traffic needs to cross, and every cross-transport request must pass through it. The enterprise voice industry went through the same pattern during the PSTN-to-SIP migration. When a campus had a mix of analog and SIP phones, inline voice gateways were deployed at each boundary to translate between the two signaling protocols. Every call between an analog phone and a SIP phone traversed a gateway, and each gateway was a failure point, a source of latency, and a device that required its own configuration and maintenance. The industry moved away from that model by building multi-protocol support directly into the call management platform, eliminating the inline gateway. Scattering inline protocol translators across a BACnet fabric follows the same pattern that voice engineers already learned to avoid.

Certificate Lifecycle at Scale

Every SC device requires a signed X.509 certificate from a Certificate Authority, and every SC hub requires one as well. All certificates in a deployment must chain to the same CA. Device certificates expire, hub certificates expire, and when the CA itself is regenerated, every subordinate certificate across the entire deployment must be re-issued.

On a campus with 50 SC devices distributed across 8 buildings, certificate management is a non-trivial administrative burden that someone needs to own. At 500 SC devices, it becomes a significant operational workload unless it is automated. The current market offerings largely leave this responsibility to the integrator, with the expectation that certificates will be generated manually using command-line tools like OpenSSL, distributed by copying PEM files to each device and hub, and tracked through whatever means the integrator devises—typically a spreadsheet or a calendar reminder. This approach is workable in a lab environment or a small pilot deployment, but it does not scale to a production campus where certificates are expiring on a rolling basis and a missed renewal takes devices offline without warning.
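The spreadsheet-and-calendar approach described above can be sketched in a few lines, which also makes clear how little it takes to automate the first step. This is an illustrative sketch only: the device names, dates, and 90-day renewal window are assumptions, not values from any real deployment.

```python
# Minimal sketch of the expiry tracking usually left to a spreadsheet:
# given each certificate's notAfter date, flag those inside a renewal window.
# Names, dates, and the window length are illustrative assumptions.
from datetime import date, timedelta

certs = {
    "hub-bldg-07":   date(2025, 3, 1),
    "ahu-12":        date(2025, 9, 15),
    "chiller-plant": date(2026, 1, 20),
}

def due_for_renewal(inventory, today, window_days=90):
    """Return cert names whose notAfter falls within the renewal window."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, expiry in inventory.items() if expiry <= cutoff)

print(due_for_renewal(certs, today=date(2025, 1, 10)))  # flags hub-bldg-07
```

Even this toy version catches the failure mode the text describes: a certificate that expires silently and takes a device offline. A production system would pull notAfter directly from the certificates rather than a hand-maintained inventory.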

The problem is compounded by the cascading nature of CA operations. If the root CA certificate is compromised or simply expires, every device and hub certificate signed by that CA becomes invalid simultaneously. Regenerating the CA requires re-issuing and redistributing every subordinate certificate on the network, a process that, without automation, means physically or remotely touching every SC device on every subnet across the campus.

Broadcast Amplification Under BACnet/SC

The SC hub model centralizes all communication through a single relay point. When any connected device sends a broadcast message—Who-Is, Who-Has, I-Am-Router-To-Network—the hub copies that message to every other connected device. For N devices on a hub, each broadcast generates N-1 copies. At 100 devices, a single Who-Is produces 99 relayed messages. At 500, it produces 499. With periodic discovery running on every device, the hub spends a significant fraction of its processing capacity relaying broadcast traffic that could have been answered locally.
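The arithmetic above is worth making explicit, because the load grows quadratically when every device runs periodic discovery. A back-of-the-envelope sketch, assuming one Who-Is per device per discovery cycle:

```python
# Hub relay fan-out under the SC hub model: each broadcast is copied to
# every other connected node (N - 1 copies). If all N devices broadcast
# once per discovery cycle, the hub relays N * (N - 1) messages per cycle.

def relayed_messages_per_cycle(n_devices: int) -> int:
    """Total messages the hub relays when each of N devices broadcasts once."""
    return n_devices * (n_devices - 1)

for n in (100, 500, 1000):
    print(f"{n} devices -> {relayed_messages_per_cycle(n):,} relayed messages per cycle")
```

At 100 devices that is 9,900 relayed messages per cycle; at 1,000 devices it is 999,000, which is why the relay load dominates the hub's processing budget long before connection limits are reached.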

This is the same fundamental problem described in our previous article on BACnet broadcast scalability, wearing different clothes. The delivery mechanism has changed from UDP broadcast forwarding via BBMDs to directed WebSocket connections via a hub, but the underlying discovery model—in which every device receives and must process every discovery request—remains unchanged.

This is not a deficiency in any vendor’s implementation. It is the architecture defined in the ASHRAE standard, which specifies the SC hub as a message relay. Any conformant hub implementation exhibits this scaling characteristic, and the vendors building SC hubs are implementing the standard correctly. The standard itself simply does not address broadcast containment.

The Centralized Hub as a Single Point of Failure

A single SC hub represents a single point of failure for every device connected to it. If the hub goes down, every SC device on the network loses connectivity until the hub recovers. If the hub reaches its connection limit, the operator must deploy an additional hub and redistribute devices across them, which means reconfiguring the hub URI on every affected device.

Some products offer a failover hub mechanism to mitigate this risk, but failover introduces its own complexity. Every device must be pre-provisioned with a secondary hub URI. When failover occurs, devices must detect the primary hub’s absence, reconnect to the secondary, and re-register. This adds configuration overhead to every deployment and introduces a recovery window during which SC devices are offline and unable to communicate.

If you have spent any time in IP networking, this architectural pattern should look familiar. It is equivalent to routing all inter-subnet traffic through a single core router with static routes—an approach that works at small scale but becomes a bottleneck and a reliability risk as the network grows.

A Familiar Problem: Hub vs. Switch, Then and Now

The parallel with IP networking goes deeper than the core-router analogy. The BACnet/SC hub model is, in effect, an Ethernet hub operating at the application layer.

An Ethernet hub repeats every incoming frame to every connected port. It does not know or care which device is connected where. When one device sends a frame, every other device on the hub receives it and must decide whether the frame is relevant to them. This is exactly what a BACnet/SC hub does with broadcast messages: when one device sends a Who-Is, the hub copies it to every other connected device, and every device must process it.

The networking industry replaced hubs with switches for precisely this reason. A switch maintains a MAC address table — it learns which device is connected to which port, and when a frame arrives for a known destination, it forwards that frame only to the correct port. Broadcast traffic still floods to all ports, which is why VLANs were introduced to contain broadcast domains. But for unicast traffic, the switch eliminated the hub’s fundamental inefficiency: repeating everything to everyone.
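The hub-versus-switch contrast can be modeled in a few lines. This is a toy illustration, not a real forwarding implementation; the port numbers and addresses are made up.

```python
# Toy model of the contrast described above: a hub floods every frame,
# while a switch learns source addresses and delivers known unicast
# destinations to a single port. Ports and MACs are illustrative.

def hub_forward(ports, in_port):
    """A hub repeats every frame to all ports except the one it arrived on."""
    return [p for p in ports if p != in_port]

class LearningSwitch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port

    def forward(self, src, dst, in_port, ports):
        self.mac_table[src] = in_port               # learn where src lives
        if dst in self.mac_table:
            return [self.mac_table[dst]]            # directed delivery
        return [p for p in ports if p != in_port]   # unknown dst: flood

ports = [1, 2, 3, 4]
sw = LearningSwitch()
sw.forward("aa", "bb", in_port=1, ports=ports)         # "bb" unknown: floods
print(sw.forward("bb", "aa", in_port=2, ports=ports))  # "aa" learned: [1]
```

The learned table is exactly the role the device directory plays in the discovery model discussed below: once the answer is known locally, there is no reason to repeat the question to everyone.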

The BACsync Platform (BSP-1000) applies the same principle to BACnet discovery. Each agent maintains a comprehensive directory — analogous to the switch’s MAC address table — and when a Who-Is arrives, the agent looks up the answer and sends a single directed response back to the requester. The broadcast is never forwarded on from the agent. No other device on the network sees it or has to process it.

The parallel extends further when you consider scale and resilience. A network built around a single centralized switch is simple but fragile: that switch is a single point of failure and a scaling bottleneck. The networking industry solved this by distributing switching and routing intelligence across the fabric. Protocols like OSPF share topology information across routers, with each router maintaining its own forwarding table and making local decisions. There is no single bottleneck. A router failure affects only its directly connected subnets. The network scales by adding routers, with each new device extending the fabric rather than loading a central node.

The BACnet/SC hub model is the centralized approach: every device depends on the hub, the hub must relay all traffic, and failure of the hub affects every connected device. BSP-1000 takes the distributed approach. Each agent functions as a lightweight hub at each building or subnet, and all agents share a device directory through a sync engine — analogous to OSPF’s link-state database. An agent failure affects only its locally connected devices. Devices on other agents continue to operate normally, and the failed agent’s devices reconnect automatically when it comes back online. No failover configuration is required because no other agent depends on it.

What the Market Offers Today

The SC hub implementations available today fall into two categories: controllers that double as hubs, and dedicated hub appliances.

In the first category, some building controllers now include SC hub functionality alongside their primary DDC role. These are physical devices with fixed processing capacity, and the number of SC connections they can support is constrained by that hardware. Practical deployments typically cap at a few hundred devices per hub. Some vendors have attempted to raise this ceiling by offering the hub as a virtual machine that can be allocated more resources, but adding horsepower to a relay does not change the fact that it is still a relay. The underlying architecture remains centralized, and every broadcast message is still copied to every connected node regardless of how much CPU is available to do the copying.

In the second category, dedicated SC hub appliances are available as purpose-built hardware. These devices support higher connection counts than controller-based hubs, but they operate independently. There is no shared device directory across hubs, which means that cross-hub discovery requires additional configuration at the IP layer or is simply unsupported. Scaling beyond a single appliance’s capacity means purchasing another unit and manually splitting the device population between them. Some of these products are licensed by connection count in fixed tiers, adding a per-device cost to SC adoption on top of the hardware investment.

Both categories share the same centralized architectural model: a hub that relays broadcasts to all connected devices. The broadcast amplification problem exists in all of them. Scaling means deploying more hubs and manually partitioning devices across them. Cross-hub discovery is either manual, vendor-specific, or unsupported. Certificate management is largely left to the integrator to solve independently.

Critically, none of these approaches eliminate the need for BBMD infrastructure on the IP side. SC solves transport security for the devices that support it, but the BACnet/IP devices that remain — which, for most campuses, will be the majority for years to come — still rely on BBMDs for cross-subnet discovery. A campus that adopts SC selectively does not retire its BBMDs. It runs both: SC hubs for the encrypted devices and a BBMD mesh for everything else, with separate gateway infrastructure to bridge traffic between the two. The operational complexity increases rather than decreases.

How BSP-1000 Handles Mixed-Transport Networks

BSP-1000 was designed from the ground up to operate in the mixed-transport environment described above. Rather than treating BACnet/IP and BACnet/SC as separate infrastructure requiring a gateway between them, BSP-1000 runs both transports natively on every agent.

Distributed hubs with a shared directory. Every BSP-1000 agent can run an SC hub alongside its BACnet/IP proxy, and all agents share a device directory through the sync engine. When an SC device connects to any hub and sends a Who-Is, the hub does not relay that message to other connected devices. Instead, it looks up the answer in its local copy of the directory and sends a single response back to the requester. This eliminates broadcast amplification entirely, regardless of device count. A Who-Is on a hub with 10 devices imposes the same processing cost as one on a hub with 5,000.

Transparent IP-to-SC bridging. Each agent runs both transports simultaneously: UDP on port 47808 for BACnet/IP, and WSS on port 8443 for BACnet/SC. When an IP device needs to communicate with an SC device, the agent translates the BACnet framing between transports while leaving the BACnet payload—the actual application data—untouched. ReadProperty, WriteProperty, COV subscriptions, alarms, and segmented transfers all work across the transport boundary. The IP device is unaware that the target is an SC device, and the SC device is unaware that the request originated from IP. This transparency extends across agents as well: an IP device on one subnet can communicate with an SC device on another subnet through the agents’ shared directory, without requiring a dedicated gateway appliance.
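The bridging behavior amounts to keying delivery on each target's transport while passing the BACnet APDU through untouched. The sketch below illustrates that dispatch shape only; the directory contents, addresses, and function names are hypothetical and are not the BSP-1000 API.

```python
# Illustrative sketch: a bridge that looks up each target's transport
# binding and re-frames the message accordingly, leaving the BACnet APDU
# (the application payload) unmodified. All names here are hypothetical.

DIRECTORY = {  # device instance -> (transport, address), illustrative only
    1001: ("ip", "10.20.0.15"),
    2002: ("sc", "wss://agent-bldg7:8443"),
}

def deliver(apdu: bytes, target: int) -> str:
    transport, address = DIRECTORY[target]
    if transport == "ip":
        # Would wrap the APDU in BACnet/IP framing and send as a UDP datagram.
        return f"udp {address}:47808 <- {len(apdu)} byte APDU"
    # Would wrap the same APDU in BACnet/SC framing over the WebSocket link.
    return f"ws {address} <- {len(apdu)} byte APDU"

print(deliver(b"\x00\x04\x01\x0c", 1001))
print(deliver(b"\x00\x04\x01\x0c", 2002))
```

The key property is that only the outer framing changes between the two branches; the payload bytes are identical in both, which is why services like ReadProperty and COV work unchanged across the boundary.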

Centralized certificate management. SC certificates—including CA, hub, and device certificates—are generated, issued, downloaded, and revoked from the BSP-1000 web interface. The CA is generated once for the entire deployment. Hub certificates are issued per agent, and device certificates are issued per SC controller. All certificates are distributed to agents automatically through the sync engine; each agent pulls its certificates on startup without any manual file copying. Revoking a certificate in the web interface propagates the change to every agent automatically. Certificates can be downloaded in multiple formats—PEM, CRT, and PKCS#12—for devices that require manual installation. The entire certificate lifecycle is managed from a single location.

Migration mode for BBMD coexistence. For sites that still operate BBMD infrastructure, BSP-1000 agents can run in migration mode, participating in the existing BBMD mesh while simultaneously running SC hubs. Legacy devices on BBMD subnets remain discoverable alongside SC devices, which allows for a phased transition: deploy agents alongside existing BBMDs, migrate subnets one at a time, and decommission BBMDs as they become redundant.

What a Realistic Deployment Looks Like

Day one: the entire campus is BACnet/IP. BSP-1000 agents replace the existing BBMD infrastructure. Devices are discovered, quarantined until approved by an operator, and monitored for status changes. Cross-subnet broadcast storms are eliminated because discovery is resolved locally rather than forwarded across the network.

Month six: the internal cybersecurity operations team has mandated that critical systems authenticate and encrypt their traffic across the campus IT network. The new chiller plant controller ships with BACnet/SC and fulfills that mandate. The operator enables the SC hub on that building's agent and generates the necessary certificates in the web interface. The operator chooses whether the BACsync SC bridge is enabled and which devices in the fabric are allowed to communicate with it. From that point forward, the allowed devices across the campus network can communicate with the new chiller controller through the bridge, without any additional hardware and without any changes to any existing device on the network.

Year two: half the campus has been upgraded to SC and half remains on IP. Both transports coexist transparently, with the bridge(s) handling communication across the transport boundaries. As more devices move to SC and begin using Direct Connect for peer-to-peer communication, bridge traffic decreases naturally because devices that can speak the same transport no longer need translation.

Year ten: the last IP devices have been replaced, and SC is deployed across the entire campus. The bridges sit idle. The distributed BACsync agents continue to suppress broadcast traffic. Certificate management remains centralized. The transition is complete, and at no point during the ten-year process did it require a forklift upgrade, a dedicated gateway appliance, or expensive additional licenses.

Summary

Capability | Centralized SC Hub | BSP-1000 Distributed Hubs
Broadcast handling | Relayed to all connected nodes | Suppressed: answered locally from shared directory
Failure impact | All connected devices lose connectivity | Only locally connected devices affected
Scaling | Add hubs, manually partition devices | Add agents; each extends the fabric
IP-to-SC communication | Separate gateway or router required | Built into every agent
Certificate management | Manual (integrator responsibility) | Centralized web UI with automatic distribution
Cross-hub discovery | Manual or vendor-specific | Automatic via shared directory
BBMD coexistence | Not addressed | Migration mode for phased transition
Per-device SC license | Per-connection or per-tier | None

BACnet/SC is the right direction for the industry, and the encryption and authentication it provides are both necessary and overdue. However, the hub relay model that the standard defines creates scaling, reliability, and interoperability challenges that the standard itself does not address. A distributed architecture—with shared state and local intelligence at the edge—solves those problems the same way that OSPF solved them for IP routing thirty years ago.

The campus that begins deploying SC today will have a mixed-transport network for the foreseeable future. The question is not whether to adopt SC, but how to manage the transition without replacing the infrastructure that already works.

If this resonates with what you are planning for your campus, we would welcome the opportunity to discuss it further. More information about BACsync is available at bacsync.com.


About the Author — Mark Van Weert is the founder and director of Humber Horizons Limited, a building automation and cybersecurity consulting firm based in Ontario, Canada. With thirteen years of experience in BACnet systems integration, CCNA/CCNP certifications, and a background in large campus deployments, Mark specializes in IT/OT convergence and network infrastructure for universities, hospitals, and commercial facilities.
