The use of augmented reality (AR) in smartphone applications has become relatively common. Still, due to the ergonomic challenges of holding the device at arm’s length, reduced field of view, and lack of stereoscopic visualization, most people don’t use such applications for extended periods. As a result, there has been significant investment and advancement in the development of extended reality (XR) head-mounted devices (HMDs) over the past five years.
Commercially available XR HMDs are commonly designed to tether physically to smartphones to achieve a better physical form factor. This is done by offloading compute, connectivity, and often battery/power responsibilities to the connected smartphone.
While modern smartphones are a valuable part of the XR ecosystem, even the fastest ones have compute limitations that can significantly limit the quality of the rendered user experience.
The Glass House Project
Deutsche Telekom, MobiledgeX and NVIDIA wanted to explore extending XR HMDs to leverage Edge Computing with remote data center GPU resources hosted on 5G edge networks to significantly increase the performance and quality of the rendered user experience. A Glass House visualization app was selected because it presented unique and complex rendering challenges, which served as an excellent testbed.
Deutsche Telekom’s Edge team built the prototype client app using a game engine that runs on a smartphone connected to commercially available XR HMDs. The smartphone was connected over 5G to a Windows virtual machine (VM) server instance running on an Edge Computing node in Deutsche Telekom’s network. The VM ran the Glass House experience, built using another game engine. The MobiledgeX Edge-Cloud platform orchestrated access to the 5G edge data centers hosting the Glass House app, selecting the most performant connection for the user. NVIDIA CloudXR technology coordinated the bidirectional data flow between the user’s XR HMD and the remote rendering tasks running on NVIDIA RTX GPUs.
The main goal in building this prototype application was to demonstrate an end-to-end remote rendering pipeline for high-end augmented reality user experiences using Edge Computing and 5G.
One question that often comes from customers is how remote rendering compares to local device rendering. As part of the application, we wanted to enable users to quickly switch back and forth between local and remote rendering so that the differences between the two became more immediately apparent.
Finally, customers wanted to understand the specific benefits of RTX GPU-based remote rendering (e.g., ray tracing, reflections, shadows).
5G introduces improvements for better network latency behavior. Shorter transmission time intervals (TTI), shorter Hybrid Automatic Repeat Request round-trip times (HARQ RTT), and related mechanisms reduce radio interface latency. 5G also provides broader spectrum for greater capacity and throughput. New methods such as Managed Latency with L4S are also being examined to achieve stable latency with minimal jitter.
The completed project successfully demonstrates the significant qualitative difference between local on-device rendering and RTX GPU-enabled remote rendering. Switching between the different rendering sources is instantaneous – making the comparison easy to visualize and understand. The project showcases the advantages of coordinating remote NVIDIA GPU resources using NVIDIA’s CloudXR technology and MobiledgeX’s edge compute discovery and orchestrating services. And most importantly, the XR HMD user experience is highly immersive, interactive, and at maximum quality and performance.
To better explain how everything works, we created a demonstration video that shows the user experience from a first-person point of view and highlights specific technical aspects of the project:
There are several advantages in utilizing remote rendering based on RTX GPUs. We focused on a few specific qualitative ones.
Remote rendering can properly compute reflections and shadows using ray tracing. Correctly rendered shadows are particularly key to creating highly immersive and blended augmented reality experiences.
Proper shadow rendering also factors in the lighting and geometry of objects. Screen space ambient occlusion darkens creases, holes, and surfaces that are close to each other, creating highly realistic shading.
A third benefit we demonstrate is the ability to compute refractions. As you can see in the example above, the way the glass lampshade bends and distorts the light passing through it creates a more realistic rendering result for the user.
Remote Rendering Process Flow
The diagram above provides a summary overview of how the demonstration project was enabled and the flow between the different enabling components. In particular, this describes how the pipeline works when the user requests remote server rendering of the Glass House application. Let’s get into the details.
User starts the XR application built using Game Engine 1 on their smartphone.
After the user connects their tethered XR HMD to their smartphone, they launch a smartphone app built using Game Engine 1 (GE1) and containing an NVIDIA CloudXR client. The app connects to the network over 5G and utilizes MobiledgeX APIs to locate the best choice Edge Computing cloudlet. A VM instance is spun up in the selected cloudlet, containing an NVIDIA CloudXR server and remote rendering application built using Game Engine 2 (GE2). The application pipeline is now up and running.
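The setup flow above can be sketched in a few lines. This is a minimal, hypothetical Python model; the function names and data fields are illustrative placeholders, not the actual MobiledgeX or NVIDIA CloudXR APIs.

```python
# Hypothetical sketch of the session setup described above. All names
# are illustrative placeholders, not real SDK calls.

def find_best_cloudlet(cloudlets):
    """Pick the cloudlet with the lowest measured latency to the client.
    In the real deployment this selection is made by the MobiledgeX
    Edge-Cloud platform; here it is modelled as a nearest-site search."""
    return min(cloudlets, key=lambda c: c["latency_ms"])

def start_session(cloudlets):
    """Model the setup: discover a cloudlet, then describe the
    client/server pair that the CloudXR connection links together."""
    cloudlet = find_best_cloudlet(cloudlets)
    return {
        "cloudlet": cloudlet["name"],
        "server": "CloudXR server + GE2 renderer (edge VM)",
        "client": "GE1 app + CloudXR client (smartphone)",
        "transport": "5G",
    }

# Example: two candidate edge sites with measured latencies.
cloudlets = [
    {"name": "edge-site-a", "latency_ms": 9},
    {"name": "edge-site-b", "latency_ms": 21},
]
session = start_session(cloudlets)
print(session["cloudlet"])  # the lowest-latency site, "edge-site-a"
```

The key design point is that cloudlet selection happens before the CloudXR session starts, so the streaming connection is already pointed at the lowest-latency site.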
GE1 client sends XR HMD positional tracking data to the GE2 server
Since we are projecting a view for the user based on where they are looking in the real world, we need to track changes in the user’s actual head position, so that we know where the user is looking and can render the proper frame. This head pose data is relatively simple and requires little bandwidth.
GE1, running on the user’s smartphone, gets the user’s head pose data from the XR HMD. GE1 packages this with additional data describing the scene to be rendered, then uses its CloudXR client to connect to the remote CloudXR server and transmit the data.
The NVIDIA CloudXR server then forwards the data package to GE2 via OpenVR.
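To see why the pose data needs so little bandwidth, consider a plausible packet layout. CloudXR’s actual wire format is not public; the layout below (a timestamp, a position vector, and an orientation quaternion) is an assumption for illustration only.

```python
import struct

# Hypothetical head-pose packet layout. Not CloudXR's real wire
# format -- an assumption used to estimate bandwidth.
POSE_FORMAT = "<d3f4f"  # float64 timestamp, x/y/z position, x/y/z/w quaternion

def pack_pose(timestamp, position, orientation):
    """Serialize one head-pose sample into a compact binary packet."""
    return struct.pack(POSE_FORMAT, timestamp, *position, *orientation)

packet = pack_pose(0.0, (0.0, 1.6, 0.0), (0.0, 0.0, 0.0, 1.0))
size = struct.calcsize(POSE_FORMAT)  # 36 bytes per pose sample
upstream_rate = size * 72            # bytes/s at a 72 Hz tracking rate
```

Even at a 72 Hz tracking rate this works out to under 3 kB/s upstream, which is negligible next to the downstream video stream of rendered frames.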
GE2 remotely renders the image using advanced RTX GPU features
Using the package of head pose and other data sent by GE1, GE2 renders the desired scene image using the advanced features of the associated RTX GPU. Now that we have the image, we need to send it back to GE1 to show it to the user!
The remotely rendered image is sent back to the local GE1 client
A simple way to think of this is that we are reversing the flow of the original data package. GE2 packages up the rendered image and sends it to the connected NVIDIA CloudXR server via OpenVR. The NVIDIA CloudXR server then shuttles the rendered frame to the NVIDIA CloudXR client and into the GE1 application.
GE1 presents the rendered image on the XR HMD
GE1 streams the rendered frame directly to the display stack on the XR HMD. Finally, the display optics of the XR HMD present the frame so the user sees it correctly.
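Putting the steps together, one frame of the round trip can be sketched as follows. Each component is modelled as a plain function for clarity; in the real pipeline the data moves through the NVIDIA CloudXR client/server pair and OpenVR.

```python
# Minimal sketch of one frame of the pose-up / frame-down round trip
# described above. Plain functions stand in for the real components.

def client_send_pose(pose):
    """GE1 on the smartphone: package the HMD pose for the remote server."""
    return {"pose": pose}

def server_render(request):
    """GE2 on the edge VM: render a frame for the received pose."""
    return {"frame": ("rendered", request["pose"])}

def client_present(response):
    """GE1: hand the returned frame to the XR HMD display stack."""
    return response["frame"]

def render_one_frame(pose):
    """Pose goes up to the edge; a rendered frame comes back down."""
    request = client_send_pose(pose)   # smartphone -> edge VM
    response = server_render(request)  # remote RTX rendering
    return client_present(response)    # edge VM -> smartphone -> HMD

frame = render_one_frame((0.0, 1.6, 0.0))
```

This loop runs once per displayed frame, which is why the end-to-end network latency discussed earlier matters so much: it sits inside the motion-to-photon path.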
Key Applications for Remote Rendering
In addition to our prototype Glass House project, there are several application areas where high remote rendering quality and performance matter for XR use cases. A key example is Architecture, Engineering & Construction (AEC), where projects commonly require visualizing and interacting with extremely high-resolution CAD/CAM models or point clouds – data no smartphone made today can render locally. Architects can bring clients into virtual models of buildings before construction, preventing costly change requests after the buildings are completed. In construction, 3D plans can be matched against the building in progress, alerting managers to clashes detected between the plans and the actual build-out.
Another powerful use case is remote collaboration, where people work together interactively with complex 3D models. (An excellent resource for learning more about XR collaboration solutions and best practices can be found here: https://xrcollaboration.com)
Other key areas which can significantly benefit from more advanced remote rendering are field support, healthcare and medical, education, business meetings, and trade shows.
The following chart provides context for the qualitative and quantitative differences between local on-device and remote server rendering.
| Feature | Smartphone Adreno GPU | NVIDIA A40 GPU |
| --- | --- | --- |
| Screen Space Ambient Occlusion | No | Yes |
GPU Power/Memory Comparison
Approximate power and memory specifications for mobile GPUs and Systems on a Chip (SoC) versus workstation GPUs. Workstation GPUs offer significantly more computing power and memory.
| | QCOM XR2 (SoC)* | NVIDIA A10 GPU | NVIDIA A40 GPU |
| --- | --- | --- | --- |
| Power (TDP) | 10 W | 150 W | 300 W |
| Memory | 8 GB | 24 GB | 48 GB |
Glossary of Terms
XR HMD – a head-worn device that displays a digital overlay on the user’s view of the real world.
Game Engine 1 – a commercially available game engine
Game Engine 2 – another commercially available game engine
OpenVR – an API and runtime that allows access to VR hardware from multiple vendors without requiring that applications have specific knowledge of the hardware they are targeting.
NVIDIA CloudXR – NVIDIA CloudXR provides a powerful edge computing platform for extended reality. Built on NVIDIA RTX and virtual GPU technology, NVIDIA CloudXR is an advanced streaming technology that delivers VR and AR across 5G and Wi-Fi networks.
MobiledgeX – MobiledgeX SDKs simplify the process of connecting to the best-deployed application instance (closest cloudlet). This ensures that the edge computing connection has low network latency and low network jitter.
RTX – NVIDIA RTX enables real-time ray tracing – generating interactive images that react to lighting, shadows, and reflections.
GPU – A single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines.
5G – Fifth-generation wireless (5G) is the latest iteration of cellular technology, engineered to greatly increase the speed and responsiveness of wireless networks.
Edge Computing – The computational processing of data close to the logical edge of the network, closer to user devices and sources of data.
L4S – Low-Latency, Low-Loss, Scalable Throughput (L4S) is a method for fast congestion indication and can be the basis for a common rate adaptation framework in 5G. As part of this framework, using L4S-capable quality-of-service (QoS) flows can ensure that time-critical high data rate applications work well over 5G.
AML – Adaptive Managed Latency (AML) is a concept that enables low jitter over 5G to support latency-critical applications. It provides Quality of Service (QoS) in combination with a fast feedback-loop from the network to the application in order to perform rate adaptation (using L4S). This results in a stable latency and smoother uniform experience.
Latency – Latency is the time it takes for data to be transferred between its original source and its destination, usually measured in milliseconds. Latency is often measured as the delay between a user’s action and a network application’s response to that action, also referred to as total round trip time.
Jitter – variation in the time between data packets arriving, typically caused by network congestion or route changes.
NVIDIA CloudXR 3.0
This project was created using NVIDIA CloudXR 1.2. Since then, NVIDIA has implemented several improvements. The latest CloudXR release of 3.0 adds bidirectional audio support to streamed XR, enabling users to improve collaboration within any immersive experience. Bidirectional audio delivers real-time communication capability for any XR environment, including immersive automotive design reviews, collaborative AEC approvals, and interactive training. Users can discuss design options with colleagues while immersed in virtual or augmented environments even from a mobile device.
Since this project was initially created, MobiledgeX has launched the 3.0 version of their Edge-Cloud SDKs. Key new features include:
- Support for NVIDIA’s vGPU technology, which enables operators to scale out their GPU infrastructure and allows applications to consume just the GPU resources they need (vGPU Blog)
- Dynamic Edge Connectivity with MobiledgeX Edge Events enables clients to receive events at run time to determine if there is a better server for their client to connect with. (Edge Events Blog)
- “Trusted Cloudlets” for private edge support. Extends MobiledgeX Edge-Cloud’s common key management, security, and management architecture to work with public cloud capabilities. This unlocks multi-cloud support and control necessary to consume edge resources for operator 5G applications alongside third-party applications while still benefiting from cloud economics.
- Improved insights and management of cloudlets for operators. Operators can use MobiledgeX Edge-Cloud 3.0 for better resource management and visibility in the cloud versus relying on hardware to determine resource availability.
Credits and Further Reading
There were several people involved in this collaboration project. Particular thanks go to:
Deutsche Telekom – Dominik Schnieders, Marie Kacerovsky, Mihail Luchian, Jan Wollner, Terry xR. Schussler
MobiledgeX – Vasanth Mohan, Thomas Vits
NVIDIA – Gregory Jones, Veronica Yip
We recommend reading the following web pages and documents to get further information on the topics discussed in this article:
Deutsche Telekom: https://www.telekom.com/en/company/topic-specials/special-5g
MobiledgeX Edge-Cloud 3.0: https://mobiledgex.com/product/
NVIDIA CloudXR 3.0: https://www.nvidia.com/en-us/design-visualization/solutions/cloud-xr/
“Enabling time-critical applications over 5G with rate adaptation” white paper co-authored by Ericsson and Deutsche Telekom: https://www.telekom.com/resource/blob/628060/db8412520298f03744f938dc33b0dc9a/dl-210526-whitepaper-data.pdf