Will Personal Computers Become Personal Displays?
The current computing era has seen the maturation of on-board personal computing (the PC), the steady miniaturization of processors, and the rise and ubiquity of smartphones. On both computers and smartphones, more applications have moved from running natively to running in the cloud. Yet because networks are often unstable or unavailable, these devices still need on-board computing power to run certain applications. CPUs are performing better than ever, and GPUs have since taken the lead in the race for processing power.
The upcoming intelligent mobile broadband networks are about to change that.
The move to “Personal Displays” will be due to intelligent networks & superfast mobile broadband
Imagine if you didn’t have to worry about running upgrades, updates, or antivirus software, but could instead buy licenses according to what you used. You would just have a display connected to the cloud through always-on, very high bandwidth mobile broadband. Not a personal computer with all the local processing needed to run everything. And not a thin client either.
Upfront hardware costs would be much lower, because the hardware itself would be much simpler.
Those personal displays would also be much more modular and future-proof, since they would have fewer parts: you could change the camera to a 3D sensor or a very high resolution capture device, swap batteries and keyboard types, and so on. Such a display would also always run on the fastest CPU and GPU, since those would sit in the cloud and be the cloud provider’s responsibility.
Examples of thin clients (Source: HP / Insight)
There have been past attempts to move computing tasks to the server side and connect to a video feed. In a way, Citrix, Windows Remote Desktop and other VDI solutions have been doing this for a long time, and a growing number of thin clients are on offer, too. But they have often suffered from latency issues, where the video doesn’t feel real-time and the connection drops. They are essentially a computer within another computer.
More recent trials of the high-bandwidth, low-latency video needed to run VR applications have produced better results, though issues remain. VR has been a great test bed, since it requires the highest possible bandwidth to deliver 90 frames per second, with latency low enough that the user feels immersed in virtual reality without lag.
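Some back-of-the-envelope arithmetic shows why VR is such a demanding test bed. The 90 fps figure comes from the text above; the panel resolution, bits per pixel and compression ratio below are illustrative assumptions, not measured values:

```python
# Rough frame budget and bandwidth estimate for cloud-rendered VR.
# Only the 90 fps target is from the article; everything else is
# an assumed, illustrative figure.

TARGET_FPS = 90
# Total time to render, encode, transmit and decode one frame.
frame_budget_ms = 1000 / TARGET_FPS
print(f"Per-frame budget: {frame_budget_ms:.1f} ms")

# Assumed stream: 2160x1200 panel, 12 bits per pixel after chroma
# subsampling, and a 100:1 video compression ratio.
width, height = 2160, 1200
bits_per_pixel = 12
compression_ratio = 100
mbps = width * height * bits_per_pixel * TARGET_FPS / compression_ratio / 1e6
print(f"Approximate stream bandwidth: {mbps:.0f} Mbps")
```

Even with these generous assumptions, the entire round trip has to fit inside roughly 11 ms per frame, which is why network latency dominates the problem.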
Making those connections stable and real-time would require a large, stable “pipe”: a network infrastructure robust enough to intelligently manage the video signal at every point in the network. Given the number of nodes, masts, obstacles, faults and devices involved, that might seem an impossible task.
Fortunately, the latest mobile broadband technologies combined with artificial intelligence seem to be making it possible. Speeds have increased steadily up to 4.5G, forcing network infrastructure vendors and operators to develop technologies that deliver better, more stable coverage. But 4.5G networks still aren’t fast enough to deliver real-time, low-latency video feeds. 5G promises to do just that, yet the great thing about 5G isn’t just the promise; it’s the challenge it poses: how do we keep bandwidth stable across the whole network to deliver 100 Mbps at under 5 ms latency?
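As a toy illustration of those targets, the sketch below checks hypothetical links against the 100 Mbps / 5 ms figures quoted above. The `Link` class and the sample numbers are invented for illustration, not measurements:

```python
# Toy feasibility check against the 5G targets quoted in the text
# (100 Mbps sustained, under 5 ms latency). All sample figures
# are hypothetical.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    bandwidth_mbps: float  # sustained downlink throughput
    latency_ms: float      # network latency

def meets_5g_targets(link: Link) -> bool:
    """True if the link satisfies both quoted 5G targets."""
    return link.bandwidth_mbps >= 100 and link.latency_ms < 5

links = [
    Link("4.5G cell edge (assumed)", 45, 25),
    Link("5G small cell (assumed)", 400, 3),
]
for link in links:
    verdict = "ok" if meets_5g_targets(link) else "falls short"
    print(f"{link.name}: {verdict}")
```

The point of the check is that both conditions must hold at once: raw peak bandwidth alone does not make a link usable for streamed computing if latency is high.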
As I explained earlier this year in “The importance of AI Machine Learning to deliver cloud based virtual reality experiences over 5G“, machine-learning intelligence combined with IoT network infrastructure is helping to make this possible. Together they enable a stable, high-bandwidth network across all serving locations: home, office, underground and outdoors. And they don’t just put the CPU in the cloud, but also the GPU, the TPU and even the QPU (Quantum Processing Unit), if those ever become a real thing.
That means complex 3D data and very high resolution video can be streamed, including 24K and VR, which have some of the highest requirements in terms of data and latency.
Displays according to function
We would therefore move to personal displays chosen according to function: pocket-sized handhelds, tablets, desks, walls, and immersive AR and VR. In a way, web apps, including some CAD solutions, are already a good example of applications that run in the cloud instead of locally. But the difference is major: with full cloud computing over mobile networks such as 5G, even the most demanding applications can run on any operating system, with fully elastic computing power always available, delivered as a video stream.
To provide backup, distributed computing makes it possible for computers as well as servers to supply local compute when needed, or in areas the current infrastructure doesn’t yet reach. Improbable Ltd’s technology is a good example of using distributed computing to process large amounts of data at scale, and it works.
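The hybrid idea described above can be sketched as a simple fallback policy: prefer the cloud, and degrade gracefully to local computing when the network is unreachable. The `run_in_cloud` and `run_locally` functions are hypothetical stand-ins, not any real API:

```python
# Minimal sketch of a cloud-first, local-fallback execution policy.
# Both task runners below are hypothetical placeholders.

def run_in_cloud(task: str) -> str:
    # Simulate a network outage for this illustration.
    raise ConnectionError("network unreachable")

def run_locally(task: str) -> str:
    # Fallback path: on-board computing handles the task.
    return f"{task} (computed locally)"

def execute(task: str) -> str:
    """Try the cloud first; fall back to local compute on failure."""
    try:
        return run_in_cloud(task)
    except ConnectionError:
        return run_locally(task)

print(execute("render scene"))  # falls back because the cloud call fails
```

A real hybrid setup would add retry logic and sync the local results back once connectivity returns, but the ordering of the two paths is the essence of the design.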
There are many doubts about 5G speeds, and indeed many challenges, but there have also been many advances in mobile broadband computing and in the integration of intelligence into network interfaces and devices. Infrastructure and the network constantly learn from faults, congested areas, black spots and bandwidth usage, and this creates an intelligent network.
Although it may seem theoretical and abstract, proofs of concept and testing have delivered results for high-bandwidth VR and, for example, 8K 360-degree video use cases. The upcoming 2018 MBB (Mobile Broadband) Forum and its Cloud XR Challenge will demonstrate such use cases: VR applications will run in the cloud and be delivered directly to VR headsets without being connected to computers.
Example of a Cloud VR setup (Source: Huawei X Labs)
The near future
This is a transformative change that will reshape the way technology stakeholders operate, as well as their business models. On one side, technology providers will work hand in hand with network carriers to deliver high-performing, reliable connections; on the other, cloud and software companies will work together (expect a lot of M&A) to provide fully elastic software propositions to users. This will also enable fully transparent licensing.
Like the rollout of 5G itself, this will not happen overnight. It will be gradual, to ensure that end users receive a high-performing, stable connection and, especially, that their new computing platform, the personal display, is superior to the personal computer they’re used to. That means there will first be hybrid setups, where personal displays are connected to clouds but also have backup local networks or on-board computing, and thin clients storing the most important files and apps.
Source: Huawei Cloud X / X Labs
For PC companies, this means a big change is coming. The hardware, whether personal displays or hybrids, might be simpler, but these companies will either have to become cloud providers themselves, or work hand in hand with cloud providers and software companies to make that magic happen.
Disclaimer: Any views and/or opinions expressed in this post by individual authors or contributors are their personal views and/or opinions and do not necessarily reflect the views and/or opinions of Huawei Technologies.