As WiMAX reaches maturity, operators around the world are moving from field trials to actual deployment, and eventually to full operational availability. In Asia, the Indian government has spearheaded the effort to provide broadband connectivity to rural areas, and other Asian governments are likely to follow in the coming months. The emphasis is now shifting from "wireless-related issues" to "network-related issues." Since WiMAX is, by definition, a wireless technology, why does mass deployment shift the focus away from the "inherent" issues of wireless technologies?
The answer is that, once mass deployment kicks in, providing competitive services takes precedence over technology issues. Mass deployment of 802.16d and 802.16e WiMAX networks presents several challenges that are particular to this stage, because they did not appear at the proof-of-concept stage.
Let's look at some of these issues, and how WiMAX addresses them.
Case #1: Backhaul architecture
The WiMAX standard "assumes" that a backhaul network (the ASN, or Access Service Network) exists. However, once the base stations are deployed according to RF planning practices, the question of how to connect them to one another emerges.
This is no trivial question, as it raises issues of scalability, QoS and increased capex.
Deploying a tree topology, the best practice in cellular 3G networks, may limit the ability to scale up. A "Moore's Law" of bandwidth (demand keeps rising as subscribers and Internet traffic grow) can push this architecture to the point where nodes require constant connectivity upgrades.
One may argue that cellular operators face the same problem, but that is not quite true. In traditional cellular networks, bandwidth is allocated according to usage statistics. A WiMAX base station simply reduces the bandwidth (and hence the quality of experience) available to all users whenever a new user signs in, as the sketch below illustrates. Another aspect is, of course, the inability to protect traffic when each node hangs off a single connection.
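A minimal sketch of that dilution effect, with hypothetical numbers (the 30-Mbps sector capacity and user counts are illustrative, not drawn from any real deployment): when a sector's capacity is shared evenly, every new sign-in cuts the rate of every existing user.

```python
# Hypothetical illustration: per-user rate in a shared WiMAX sector.
SECTOR_CAPACITY_MBPS = 30.0  # assumed aggregate sector capacity

def shared_rate(active_users: int) -> float:
    """Per-user rate when sector capacity is split evenly among users."""
    if active_users == 0:
        return SECTOR_CAPACITY_MBPS
    return SECTOR_CAPACITY_MBPS / active_users

for users in (1, 5, 10, 50):
    print(f"{users:3d} active users -> {shared_rate(users):5.2f} Mbps each")
```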
A ring topology addresses the issue of protection, but its scalability might be even worse than a tree's: the ring must now carry the traffic of several base stations, and any capacity upgrade requires upgrading the entire ring.
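A back-of-the-envelope comparison of the two topologies, again with hypothetical figures (eight base stations at 100 Mbps each is an assumption made for illustration), shows why ring upgrades are costlier: every span must be sized for the aggregate, whereas in a tree only the trunk must.

```python
# Hypothetical sizing exercise: tree vs. ring backhaul capacity per link.
BASE_STATIONS = 8
PER_BS_MBPS = 100  # assumed demand per base station

# Ring: in the worst case, each span carries the aggregate of all stations,
# so any capacity upgrade touches every span on the ring.
ring_span_mbps = BASE_STATIONS * PER_BS_MBPS

# Tree: a leaf link carries one station; only the trunk carries the aggregate,
# so most upgrades touch only the path from one station to the aggregation point.
tree_leaf_mbps = PER_BS_MBPS
tree_trunk_mbps = BASE_STATIONS * PER_BS_MBPS

print(f"Ring: every span sized for {ring_span_mbps} Mbps")
print(f"Tree: leaf links {tree_leaf_mbps} Mbps, trunk {tree_trunk_mbps} Mbps")
```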
This brings us to the problem of over-provisioning vs. QoS mechanisms. This problem dates back to the beginning of the decade, and has an inherent effect on capex as well as on network architecture. Over-provisioning means building the network on Day 1 with more than the required resources, while QoS mechanisms require a longer network-design phase and a better understanding of network management.
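The capex trade-off can be sketched numerically. All parameters below are assumptions chosen only to make the comparison concrete: the headroom factors, the cost per Mbps and the extra design spend will vary widely by operator.

```python
# Hypothetical capex comparison: over-provisioning vs. QoS-managed network.
peak_demand_mbps = 400
overprovision_factor = 2.5   # assumed Day-1 headroom for over-provisioning
qos_headroom_factor = 1.2    # assumed modest headroom when QoS is enforced
cost_per_mbps = 50.0         # assumed backhaul cost per Mbps
qos_design_cost = 8_000.0    # assumed extra network-design/management spend

overprov_capex = peak_demand_mbps * overprovision_factor * cost_per_mbps
qos_capex = peak_demand_mbps * qos_headroom_factor * cost_per_mbps + qos_design_cost

print(f"Over-provisioned capex: ${overprov_capex:,.0f}")
print(f"QoS-managed capex:      ${qos_capex:,.0f}")
```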
Case #2: ASN technology
Nowadays, there are two basic options for the ASN: Layer 2 or Layer 3.
A Layer 2 network centralizes the routing, making the network more robust and more manageable, but less flexible. This means the network has difficulty "resolving" congestion on its own, requiring the operator to be more skilled and more hands-on.
A Layer 3 network is highly flexible and, at small scale, operates quite well and is easily managed. However, once a Layer 3 network passes a certain number of nodes, it becomes prone to effects such as "Avalanche" (routing tables change domain after domain in response to a node failure) and "Flip-Flop" (routing-table changes in several domains collide and then change again, several times over). These effects can make the operator's day-to-day operations very complicated.
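A toy model of the avalanche effect: the mesh below is a hypothetical six-node ASN, and the failed node is chosen arbitrarily. Killing one node forces shortest paths to be recomputed everywhere, and the script counts how many routing-table entries change network-wide; the count grows quickly with the size and meshiness of the network.

```python
# Toy avalanche model: count routing-table churn after a single node failure.
from collections import deque

def next_hops(adj, src):
    """First-hop routing table from src, via BFS shortest paths."""
    table, seen, q = {}, {src}, deque([(src, None)])
    while q:
        node, first = q.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                hop = first if first is not None else nbr
                table[nbr] = hop
                q.append((nbr, hop))
    return table

# A small hypothetical ASN mesh (adjacency lists).
adj = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

before = {n: next_hops(adj, n) for n in adj}

# Fail node "D": remove it and every link to it, then recompute all tables.
failed = "D"
adj2 = {n: [m for m in nbrs if m != failed]
        for n, nbrs in adj.items() if n != failed}
after = {n: next_hops(adj2, n) for n in adj2}

changed = sum(1 for n in after for dst, hop in after[n].items()
              if before[n].get(dst) != hop)
print(f"Routing-table entries changed after losing {failed}: {changed}")
```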