In this five-part series, we’re evaluating the realities of 5G: the good, the bad, and the exaggerated. We’ll break the series down into a handful of topics: use cases (this article), radio and spectrum (part 3), go-to-market and business models (part 4), and costs, timeline, and implementation (part 5).
Because 5G is so broad, Figure 1 represents my personal generalization of it. These four categories are roughly independent segments where cellular is gaining traction. In this series, I am focusing on 5G’s fit in the enterprise wireless landscape. Some topics inevitably cross those boundaries, but this series will primarily follow the private enterprise.
Figure 1
We’re thinking through the lens of an IT team that’s been working with Wi-Fi and other local area network (LAN) or personal area network (PAN) technologies. This same IT team is now paddling upstream against a zealous river of marketing to understand 5G and figure out actual 5G use cases in real-world deployments.
First off, 5G is not just one technology for one problem. If you’re an enterprise, most of the 5G hype is not aimed at you, so it requires some technical discernment to see what is for you.
5G includes many separate layers of technology from radio to core, some of which have contradictory use case targets, and thus, contradictory implementation goals. For example, to crack the ultra-low latency nut, operators (I mean anyone that deploys or operates a network) have to reshuffle the traditional architecture and move computing as close as possible to the client device (also known as user equipment, or UE, in telecom speak). Conversely, to support IoT device density, operators push the computing back into central datacenters or hosted cloud systems for scalability. So, which one is it? Near the user, or far away in a datacenter? Figure 2 illustrates this contradiction.
Figure 2
Both low latency and IoT are parts of 5G, but they’re different solutions optimized for different priorities. In the froth and lather of marketing hype, it’s easy to miss the contentious plurality of 5G and mistake it for one harmonious multi-purpose miracle of a new technology.
5G also has many different architectural variants, spectrum models, and radio bands to use, which vary by country, operator, license, site, and deployment model. Additionally, 5G has a catalog of possible radio and core features to adopt across many 3GPP releases. And this functionality may come from an operator, vendor, or solution provider near you in iterative phases of rollout over the course of a decade or more, if it ever comes at all. So, beware that your 5G may not be the same as someone else’s 5G.
Don’t let the acronyms and jargon scare you away. The best place to start making sense of 5G’s plurality is its three broad use cases: enhanced mobile broadband (eMBB), mission-critical services with ultra-reliable low-latency communication (URLLC), and massive machine-type communications (mMTC) for IoT.
Enhanced Mobile Broadband (eMBB) simply means mobile connectivity via 5G is all-around better than 4G: faster, higher density, lower latency, smoother handoffs, and better looking with a great personality.
eMBB represents the variety of 5G enhancements that benefit mobile users in everyday usage scenarios—multimedia and entertainment, communication and collaboration, navigation and mapping, and more. Certainly, 5G should improve user experience in those areas. It will also help the case for using cellular networks for fixed wireless access (FWA), especially for rural home Internet, or as a primary or secondary business Internet option.
There’s a long list of nerdy technical enhancements behind eMBB, like massive MIMO (fancy antenna arrays that improve signal quality and device density); client-focused mobility features to improve battery life and handovers; and latency enhancements as well. But new (to cellular) mmWave spectrum is perhaps responsible for the biggest boost, at least on paper. 5G uses mmWave frequencies above 30 GHz (plus some cmWave in the 20-30 GHz range) because of the huge contiguous blocks of open spectrum found there. With lots of spectrum, you can use wider channel bandwidths to drive those world-beating speeds. But mmWave’s higher radio frequencies come with RF coverage challenges. We’ll dive into this in the next blog.
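To make the spectrum point concrete, here’s a back-of-the-napkin sketch using the Shannon-Hartley bound. The channel widths are real ceilings (20 MHz per LTE carrier, 100 MHz for sub-6 GHz 5G NR, 400 MHz for mmWave NR), but the 20 dB SNR is an assumption, and the outputs are theoretical upper bounds only; real throughput depends on MIMO, modulation, coding, and protocol overhead.

```python
import math

def shannon_capacity_mbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley upper bound on channel throughput, in Mbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Max channel bandwidths: 20 MHz per LTE carrier, 100 MHz for sub-6 GHz
# 5G NR, 400 MHz for mmWave 5G NR. The 20 dB SNR is an assumed value.
for label, bw_hz in [("LTE 20 MHz", 20e6),
                     ("5G NR sub-6 100 MHz", 100e6),
                     ("5G NR mmWave 400 MHz", 400e6)]:
    print(f"{label}: ~{shannon_capacity_mbps(bw_hz, 20):.0f} Mbps theoretical ceiling")
```

Even as a crude bound, the pattern is clear: at the same signal quality, capacity scales linearly with channel width, which is why those wide mmWave channels dominate the headline numbers.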
Despite the obvious improvement for general-purpose mobile connectivity, I also see a lot of cliché 5G marketing with overdone futuristic imagery (like Figure 3), usually claiming 5G as the conduit for AR/VR everywhere, head-mounted displays, remote surgery (it already exists today), and autonomous vehicles. Let's not forget the claims of mass adoption of cellular as the primary connectivity for laptops, tablets, displays, printers, and connected things. The hyperbole of 5G eMBB is fierce. Sometimes the hype exposes a deeper truth, which is that the “killer use case” for 5G is still unknown, so futuristic appropriation runs amok. And for marketing momentum, 5G has also successfully—though not justifiably in many cases—piggybacked on significant industry trends like IoT, AI, and cloud for extra hype factor.
Figure 3
While 5G gobbles up the attention, 4G continues its iterative decade-long plod. New cellular generations don’t automatically replace prior generations. As a matter of fact, LTE (4G) has many broadband capabilities that have yet to be fully utilized, so there’s still interest in expanding LTE for mobile, enterprise, and industrial solutions. Technologies like LTE-Advanced Pro (some call it 4.9G) blur the lines between 4G and 5G, making 5G look like just another increment of progress and supporting the “just another G” argument even further. No big leap here, just an iterative set of steps for another decade.
Building on the features of eMBB, 5G also targets new classes of service reliability and latency, which fit under the umbrella term ultra-reliable low-latency communications (URLLC). Think of this term as a catch-all for important safety-first, ultra-responsive, no-downtime services (I like over-using hyphens too). But, notice its two parts: ultra-reliable and low-latency. They might go together. Or not. It depends on the use case.
There’s an awful lot of pie-in-the-sky on this topic, but also some serious business. Consider the use cases that make the marketing headlines: autonomous vehicles, remote surgery, and other safety-critical, ultra-responsive systems.
All these technologies will materialize in the next decade, but it’s still unclear to me whether they’re realistic 5G drivers. Specifically, I wonder how many devices and critical services will depend on a remote application instead of a local onboard system (such as autonomous cars). I also wonder if devices and critical services will require a wireless link instead of a more deterministic wired link (such as remote surgery, which already occurs today over redundant wired links). It’s one thing for 5G providers to want remote surgery over 5G, but quite another for practitioners to actually use it. And of course, one more critical question is whether there will actually be ubiquitous 5G coverage that some mobile use cases depend on (again, autonomous driving).
The low latency point has been a huge marketing angle for 5G, but low latency is actually a lesser-known value of Wi-Fi. I was chatting with a good friend and Wi-Fi expert recently about 5G, and he casually mirrored the public message that cellular networks are much better than Wi-Fi when it comes to latency. And from the marketing energy about latency, you’d think it were true. Just for kicks, I did a quick informal test in my lab and saw Wi-Fi round-trip pings (to the Ethernet switch) averaging ~4ms. Think about that for a second. Round-trip means it crossed the Wi-Fi link twice, once uplink and once downlink. And, the AP sent the packet to the switch, and the switch replied. In other words, today, Wi-Fi can already deliver <2ms one-way latency in a light traffic load scenario.
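For the curious, here’s a minimal sketch of the kind of informal test I mean, assuming a Linux/macOS host and a hypothetical gateway address (adjust for your own network). Halving the round-trip time assumes symmetric paths, so treat it as a rough estimate, not a proper one-way measurement.

```python
import re
import statistics
import subprocess

# Hypothetical target: the wired switch/gateway just behind the AP.
# "-c 20" is the Linux/macOS ping count flag; adjust for your network.
TARGET = "192.168.1.1"

out = subprocess.run(["ping", "-c", "20", TARGET],
                     capture_output=True, text=True).stdout
rtts_ms = [float(m) for m in re.findall(r"time=([\d.]+)", out)]

avg_rtt = statistics.mean(rtts_ms)
# Halving RTT assumes symmetric up/down paths -- a crude approximation.
print(f"avg RTT: {avg_rtt:.1f} ms -> rough one-way: {avg_rtt / 2:.1f} ms")
```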
All of the fancy 5G latency claims and specifications focus on one-way (not round-trip) measurements of latency for the radio network only (not the backhaul or core). Mobile networks today may not hit 10ms, but the 5G URLLC long-term target is less than 1ms. Note the word “target,” not “reality.” In fairness, I won’t pretend for a minute that my home network is the same as a congested enterprise or public access Wi-Fi network. Still, you could easily apply the same critique to best-case 5G numbers, which won’t happen at load for bulk traffic either. And I know, that’s not the intent of URLLC. It’s not for bulk traffic; it’s for the special devices/apps with special treatment and privileged access slices. But even then, there are still challenges and tradeoffs, as Dean Bubley writes about (very effectively, I might add). My point is simply that we need to be a little more discerning about 5G’s low latency mania. It’s a future; it won’t be the ubiquitous norm; and it’s not really better than Wi-Fi today. And Wi-Fi is only getting better with new QoS mechanisms as well as the incorporation of cellular technologies like OFDMA.
There’s another critical point to the URLLC conversation, which is the reliability and latency of the backhaul network, the core, and the application itself. A good friend of mine always says that QoS is end-to-end or nothing at all, which means you’ve got to follow the reliability and latency trail from start to finish (and back). In other words, an ultra-reliable low-latency mobile service depends on an ultra-reliable low-latency RAN, backhaul, core, AND application, as seen in Figure 4. Fail at one, fail at all.
Figure 4
To that end, URLLC needs mobile edge computing (MEC). The goal of MEC is to bring the application (and all of its computing) into the operator core (or even on-premises at the enterprise) so that it is closer to the user. Today, most applications are delivered remotely across the QoS-less Internet in a public cloud. Despite the many virtues of the public cloud, if URLLC is your ambition, you need to cut out all unnecessary servers, networks, peering links, and intermediate hops by moving the application as close as possible to the device. This means the application might need to live in the operator’s data center (and for some private 5G, probably all the way back on-premises).
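Here’s a toy latency budget that illustrates the end-to-end argument. Every per-segment number below is made up for illustration; the lesson is in the sums, not the individual values.

```python
# Illustrative one-way latency budgets in milliseconds (assumed numbers);
# the point is that the end-to-end sum is what the user experiences.
via_public_cloud = {"RAN": 10, "backhaul": 5, "operator core": 5,
                    "Internet/peering": 20, "cloud application": 10}
via_mec = {"RAN": 2, "backhaul": 1, "local breakout/core": 1,
           "MEC application": 2}

for name, path in [("public cloud", via_public_cloud), ("MEC", via_mec)]:
    print(f"{name}: {sum(path.values())} ms one-way -> " +
          ", ".join(f"{seg} {ms} ms" for seg, ms in path.items()))
```

Even with generous assumptions, a sub-10ms budget leaves no room for an Internet round trip to a distant cloud region, which is exactly why URLLC forces the application toward the edge.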
Adopting MEC may require enterprises to be willing and able to run mission-critical apps in the operator’s network, which has implications for security, trust, control, lock-in, (in)flexibility, geo-distribution, and more. But there’s already been some early market activity here. Hyperscalers (i.e., the tier-1 cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud Platform) do not want to miss the 5G opportunity, so they’ve invested in edge compute platforms like AWS Wavelength, Azure Edge Zones, and Google’s Global Mobile Edge Cloud (GMEC). Enterprises have become accustomed to building in the cloud with toolsets from the hyperscalers. So, the MEC approach puts a more portable version of that same cloud compute stack and toolset in the service provider’s datacenter. That way, it is closer to the RAN/edge while preserving the cloud-native development approach.
Finally, we get to massive Machine Type Communications (mMTC), which is 5G for IoT. IoT is a slippery little pig to define because it has thousands of meanings and applications. For 5G, this topic usually revolves around the forthcoming billions of devices, their demand for connectivity and data, the shift to automation with Industry 4.0, and blah blah blah.
From my perch, it looks like Bluetooth Low Energy (BLE) and Wi-Fi are currently dominant forces in IoT, at least for the bulk of connected things (especially consumer widgets) in a personal area or local area network. The broad enterprise also looks to be leaning heavily towards Wi-Fi, BLE/Bluetooth, and other unlicensed options like Zigbee. At least initially, the cellular IoT appeal is built upon its nearly ubiquitous coverage and mobility. For powered or rechargeable mobile devices, it’s hard to beat LTE/5G’s WAN model, even if it means another subscription. LTE and 5G will also have appeal as a low-power WAN (LP-WAN) alternative for connecting low-bitrate things outdoors. In this space, cellular competes with LoRa and SigFox (among others), which are struggling to get traction because of the ridiculous IoT fragmentation. There’s likely a longer-term future for private LTE/5G on the LAN in a subset of verticals like manufacturing and logistics. But for most applications, 5G does not look like a competitor to Wi-Fi, BLE, or Zigbee.
On that point about LP-WAN, we need to circle back again to LTE. We all know LTE as our mobile phone’s 4G broadband data service, but LTE also supports two variants (NB-IoT and LTE-M) that are modified to serve the long-range, low-cost, low-power, high-device-density, and low-data requirements of IoT. Think about that for a second. LTE/5G networks today are optimized and sold primarily for mobile subscriber data with a heavy focus on smartphones and bandwidth. But this is diametrically opposed to the needs of low-power IoT devices, which don’t need much data, may have a 10-year coin battery, and need a low-cost connection. So, Narrowband-Internet of Things (NB-IoT) and LTE-M (LTE-MTC [Machine Type Communication]) are a significant repackaging of cellular for IoT, both in terms of the tech stack and the sales model. Both variants are forward compatible with 5G, but if you see “5G for IoT” being sold today, it’s probably 4G/LTE.
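A quick back-of-the-envelope shows just how extreme the 10-year battery claim is. The battery capacity below is an assumption (a large lithium primary cell); the arithmetic is the point.

```python
# What average current does a "10-year battery" actually allow?
# The 1000 mAh capacity is an assumed value for illustration.
capacity_mah = 1000
hours = 10 * 365 * 24                        # ~87,600 hours in 10 years
avg_budget_ua = capacity_mah / hours * 1000  # mAh/h -> mA -> microamps

print(f"Average current budget: ~{avg_budget_ua:.0f} uA")
# ~11 uA on average: only possible with deep sleep (PSM/eDRX in NB-IoT
# and LTE-M) and rare, tiny transmissions -- the polar opposite of
# smartphone broadband.
```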
There’s one more architecture point to chew on as well, which is that the MEC approach of URLLC is pretty much the opposite of mMTC. In mMTC, one of the heavy burdens is to refactor the traditional mobile phone-centric approach to device density and start thinking about much higher densities of things. mMTC has many more devices connecting per square mile/kilometer, and that has scalability implications for radio and core services. Thus, the theme of mMTC is to utilize more scalable centralized computing models instead of highly distributed models. Centralized cloud solutions often scale up with greater efficiency (cost, compute, and operational), so the mMTC-optimized network looks quite different from the eMBB- or URLLC-optimized network. Refer back to Figure 2.
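To put the density shift in perspective, here’s a hedged sketch using the commonly cited IMT-2020 mMTC target of one million devices per square kilometer; the 15-minute reporting interval is an assumption for illustration.

```python
# IMT-2020 mMTC density target: 1,000,000 devices per square kilometer.
# The 15-minute reporting interval per device is an assumed value.
devices_per_km2 = 1_000_000
report_interval_s = 15 * 60

msgs_per_sec = devices_per_km2 / report_interval_s
print(f"~{msgs_per_sec:,.0f} small messages/sec per km^2")  # ~1,111/sec
# Each device is trivial on its own; the aggregate connection and
# signaling load is what pushes mMTC toward centralized, scale-out cores.
```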
It’s worth pointing out that mobile networks have other pain points that 5G does not cure. We’ll dive into these in more detail in the fourth blog, focused on business and go-to-market approaches.
Indoor coverage has been a challenge for enterprises and venue operators, and 5G’s prominent technologies straddle both sides of this problem. mmWave gives 5G speed, but it has atrocious penetration properties, much worse than current cellular bands (in lower frequencies), so there’s no hope for outside-in approaches (meaning the outdoor public network reaching the interiors of buildings for indoor coverage). Then there’s the low-band (lower frequency) IoT focus, which is very use case-specific and doesn’t address the broad need for ubiquitous mobile broadband indoors and out. eMBB enhancements may help a tad, but there’s no outside-in game-changer here.
That problem leads to the second, which is neutral host services. Enterprises routinely struggle with the cost and complexity of supporting multiple carriers in their indoor environments when outside-in coverage is poor, even if just to support employee and guest mobile devices. But there’s still no neutral host solution that is reasonably priced, offers simple carrier integration, and doesn’t require racks full of per-carrier equipment and DAS gear.
So, let’s sum it up:
5G is not a sprint, despite the marketing. It is a marathon of incremental improvement that will eventually amass into a great enabler of future use cases and applications, certainly including some we aren’t even thinking about today. Part of its appeal is its diverse application approach, but this is also an area of industry confusion, especially for new audiences who know 5G only as a cellular subscription. But despite its variety, 5G plays only a part in the connectivity portfolio, and it doesn’t displace existing technologies like Wi-Fi, BLE/Bluetooth, Zigbee, NFC, and many other low-cost unlicensed options.
This article is the second of a five-part series. The next blog will focus on the radio and spectrum aspects of 5G in more detail.
This blog was originally authored by Marcus Burton, Architect, Cloud Technology