Master’s Thesis Presentation • Systems and Networking • Scaling a Hierarchical Coordination Fabric for Geo-Distributed Consensus

Friday, April 24, 2026 2:00 pm - 3:00 pm EDT (GMT -04:00)

Please note: This master’s thesis presentation will take place in DC 2310.

Shashank Joshi, Master’s candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Bernard Wong, Wojciech Golab

Decentralized Ledger Technology provides system resilience and security but faces significant performance bottlenecks when deployed across geographically distributed replicas. Standard Byzantine Fault Tolerance (BFT) protocols employ a flat network model in which communication costs between all participants are uniform. This abstraction ignores the reality of heterogeneous wide-area network links, where inter-continental latencies can be up to 60 times higher than those of local connections. Consequently, existing systems experience throughput ceilings and high coordination costs that worsen as the network scales geographically.

While recent topology-aware protocols attempt to address these disparities through geo-clustering, they remain constrained by rigid architectural limitations. Systems such as GeoBFT and LT-DBFT suffer from high inter-cluster communication complexity (ranging from O(z²f) to O(z³) for z clusters and f faults) and treat hierarchical layers as decoupled black boxes. Furthermore, they lack mechanisms to detect and recover stragglers. These designs fail to exploit cross-layer visibility, leaving significant performance gains unrealized in the face of regional load variance and network jitter.

This thesis introduces Polaris, a hierarchical coordination framework designed to optimize geo-distributed consensus through cross-layer information sharing. Polaris enhances the geo-scaling of any BFT protocol, assuming a pragmatic limit on the maximum number of faults in each geographic zone, by coordinating the actions of geographically localized BFT groups. Leveraging its hierarchical design, Polaris delegates expensive consensus operations to the local BFT group level, significantly reducing cross-group communication while keeping global coordination simple and lightweight. It elects primary envoys (datacenters delegated to perform global coordination) based on correctness and timeliness, reducing inter-group bandwidth complexity to O(z²). To address the straggler problem, the system employs PredictiveSkip, which leverages metadata embedded in cross-layer messages to synchronize lagging groups with the global execution window in a single round trip. Additionally, a state-aware control loop adjusts the execution pipeline depth based on observed network congestion and regional load drift. Finally, Chained Finality relaxes the client-side transaction finality requirement from cycle C + K to C + 1 (for a transaction ordered in cycle C), decoupling transaction finality from the pipeline depth K.
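To make the PredictiveSkip idea concrete, the following is a minimal sketch, not taken from the thesis: all class names, fields, and the lag threshold are illustrative assumptions. It shows a lagging BFT group using metadata piggybacked on a cross-layer message to jump directly to the global execution window in one step, rather than replaying every missed cycle.

```python
# Hypothetical sketch of PredictiveSkip-style catch-up. Names and the
# threshold value are illustrative, not from the thesis.
from dataclasses import dataclass


@dataclass
class CrossLayerMsg:
    global_cycle: int   # current global execution window
    state_digest: str   # digest of the globally agreed state at that cycle


class LocalGroup:
    """One geographically localized BFT group."""

    def __init__(self) -> None:
        self.cycle = 0
        self.state_digest = "genesis"

    def on_cross_layer_msg(self, msg: CrossLayerMsg, lag_threshold: int = 2) -> bool:
        """If this group lags the global window by more than lag_threshold
        cycles, adopt the advertised cycle and state digest (in a full system,
        fetching the matching snapshot would cost a single round trip).
        Returns True if the group skipped ahead."""
        if msg.global_cycle - self.cycle > lag_threshold:
            self.cycle = msg.global_cycle
            self.state_digest = msg.state_digest
            return True
        return False


g = LocalGroup()
skipped = g.on_cross_layer_msg(CrossLayerMsg(global_cycle=5, state_digest="d5"))
print(skipped, g.cycle)  # True 5
```

Under this sketch's assumptions, a group that is only slightly behind continues normal execution, while a genuine straggler resynchronizes without replaying intermediate cycles.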

Experimental evaluation via network emulation demonstrates that Polaris achieves a peak throughput of more than 200,000 transactions per second, a 2.6x to 3.6x improvement over state-of-the-art topology-aware protocols at a 48-node scale deployed across 16 datacenters. The PredictiveSkip mechanism maintains high system liveness, incurring only a 16% throughput loss under 3x straggler delays, compared to the 41–77% degradation observed in GeoBFT and Steward.