Streaming of High-Resolution Progressive Meshes over the Internet

High-resolution 3D meshes are increasingly available in networked applications such as digital museums, online games, and virtual reality. The amount of data constituting a high-resolution 3D mesh can be huge, leading to long download times. To reduce users' waiting time, a common technique for remote viewing is progressive streaming, which allows a low-resolution version of the mesh to be transmitted and rendered with low latency. The quality of the transmitted mesh is then incrementally improved by continuously transmitting refinement information.
Progressive meshes are commonly used to support progressive streaming. Streaming high-resolution progressive meshes differs considerably from video streaming, which has been extensively studied: frames of a video are usually sent in temporal order, whereas the vertex splits of a progressive mesh can be sent in many different orders. New research problems arise from this flexibility in the sending order of vertex splits. This thesis addresses three such problems.
First, the progressive coding of meshes introduces dependencies among the vertex splits: a vertex split cannot be decoded until all of its ancestors have been decoded. Therefore, when a progressive mesh is transmitted over a lossy network, a packet loss delays the decoding of every subsequent vertex split that depends on a vertex split in the lost packet. The effect of these dependencies thus needs to be considered when choosing the sending order. This thesis proposes an analytical model that quantifies the effect of dependency by modeling the distribution of the decoding time of each vertex split as a function of mesh properties and network parameters. Different sending orders can then be evaluated efficiently without simulation, and the model can guide the development of a sending strategy that improves the quality curve during transmission. The accuracy of the proposed analytical model is validated under a variety of network conditions, including bursty losses, fluctuating RTT, and varying sending rates. The values predicted by the model match the measured values reasonably well in all cases except when losses are extremely bursty.
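To illustrate the idea, the following sketch computes a crude analytical estimate of expected decoding times under a simplified model. It is a hypothetical simplification, not the thesis's actual model: losses are assumed i.i.d. with probability p_loss, each loss is recovered by a retransmission one RTT later, and the expectation of the maximum of arrival and dependency times is approximated by the maximum of expectations.

```python
def expected_decode_times(send_time, parent, p_loss, rtt):
    """Estimate the expected decoding time of each vertex split.

    send_time[i] : time at which split i is first sent
    parent[i]    : index of the split that i depends on (None for roots);
                   splits are indexed so that parents precede children
    Assumptions (illustrative only): i.i.d. losses with probability
    p_loss, one RTT per retransmission, half an RTT one-way delay.
    """
    # Expected extra delay from retransmissions: E[#retx] = p/(1-p)
    retx_delay = rtt * p_loss / (1.0 - p_loss)
    decode = []
    for i, t in enumerate(send_time):
        arrival = t + rtt / 2 + retx_delay
        # A split decodes only after its dependency has decoded
        dep = decode[parent[i]] if parent[i] is not None else 0.0
        decode.append(max(arrival, dep))
    return decode
```

Given a candidate sending order (the `send_time` assignment), such a closed-form estimate can be recomputed cheaply, which is what makes comparing many orders without simulation attractive.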
Second, to quickly improve the quality of the rendered image at the receiver, the user's viewpoint can be taken into account when deciding the sending order. In existing view-dependent streaming solutions, the sender decides the sending order and must maintain the rendering state of each receiver to avoid sending duplicate data. Because of this stateful design, the sender-driven approach cannot easily be extended to support many receivers through caching proxies or peer-to-peer (P2P) systems, two common approaches to scalability. This thesis proposes a receiver-driven protocol to improve scalability. In this protocol, the receiver decides the sending order and explicitly requests vertex splits, while the sender simply sends the requested data. The sending order is computed at the receiver by estimating the visibility and visual contribution of each refinement. The sender becomes stateless, so caching proxies and P2P streaming can be applied to improve scalability without adding more servers.
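A minimal sketch of the receiver-side scheduling step might look as follows. The scoring function is a hypothetical stand-in for the visibility and visual-contribution estimation described above (here: screen-space error divided by squared distance to the viewpoint), and `candidates` is assumed to contain only splits that are already decodable.

```python
import heapq

def next_requests(candidates, received, viewpoint, k=8):
    """Receiver-driven scheduling sketch: rank not-yet-received vertex
    splits by an estimated visual contribution and request the top k.

    candidates : dict mapping split id -> (position, screen_error)
    received   : set of split ids already obtained
    viewpoint  : (x, y, z) position of the camera
    """
    def score(pos, err):
        dx = (pos[0] - viewpoint[0],
              pos[1] - viewpoint[1],
              pos[2] - viewpoint[2])
        dist2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2
        # Nearer splits with larger error contribute more visually
        return err / (dist2 + 1e-9)

    scored = [(-score(pos, err), sid)
              for sid, (pos, err) in candidates.items()
              if sid not in received]
    heapq.heapify(scored)
    return [heapq.heappop(scored)[1] for _ in range(min(k, len(scored)))]
```

The sender's job then reduces to answering each request with the named vertex splits, which is what makes it stateless and cache-friendly.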
Third, building on the proposed receiver-driven protocol, this thesis applies P2P techniques to view-dependent progressive mesh streaming. In implementing the P2P mesh streaming system, two issues are considered: how to partition a progressive mesh into chunks, and how to look up the provider of a chunk. For the latter, we investigated two solutions that trade off server overhead against response time. The first uses a simple centralized lookup service; the second organizes peers into groups according to the hierarchical structure of the progressive mesh to exploit access patterns. We implemented a prototype and tested its performance with synthetic traces generated from real traces logged from 37 users. Simulation results show that the proposed systems are robust under high churn rates, reduce server overhead by more than 90%, and keep control overhead and average response time low.
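The first (centralized) lookup design can be sketched as a simple directory, shown below under illustrative assumptions: peers register after downloading a chunk, query the directory before fetching, and the origin server acts as the fallback provider. The class and method names are hypothetical, not taken from the prototype.

```python
from collections import defaultdict

class ChunkDirectory:
    """Sketch of a centralized chunk lookup service for P2P mesh
    streaming: maps each chunk id to the set of peers holding it."""

    def __init__(self):
        self.holders = defaultdict(set)

    def register(self, chunk_id, peer):
        """Record that `peer` now holds `chunk_id`."""
        self.holders[chunk_id].add(peer)

    def lookup(self, chunk_id):
        """Return peers holding the chunk, or fall back to the origin
        server when no peer has it yet."""
        return sorted(self.holders[chunk_id]) or ["origin-server"]

    def leave(self, peer):
        """Handle churn: remove a departed peer from all chunk entries."""
        for peers in self.holders.values():
            peers.discard(peer)
```

The directory keeps response time low (one round trip) at the cost of server-side bookkeeping; the group-based alternative shifts that bookkeeping onto the peers by aligning groups with the mesh hierarchy.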