We present a generic and real-time time-varying point cloud codec for 3D immersive video. This codec is suitable for mixed reality applications where 3D point clouds are acquired at a fast rate. In this codec, intra frames are coded progressively in an octree subdivision. To further exploit inter-frame dependencies, we present an inter-prediction algorithm that partitions the octree voxel space into N × N × N macroblocks (N = 8, 16, 32). The algorithm codes points in these blocks in the predictive frame as a rigid transform applied to the points in the intra-coded frame. The rigid transform is computed using the iterative closest point (ICP) algorithm and compactly represented using a quaternion quantization scheme. To encode the color attributes, we define a mapping of the per-vertex colors in the traversed octree to an image grid and use a legacy image coding method based on JPEG. As a result, a generic compression framework suitable for real-time 3D tele-immersion is developed. This framework has been optimized to run in real time on commodity hardware for both the encoder and the decoder. Objective evaluation shows that higher rate-distortion (R-D) performance is achieved compared to available point cloud codecs. A subjective study in a state-of-the-art mixed reality system shows that the prediction distortions introduced are negligible compared to the original reconstructed point clouds; it also shows the benefit of reconstructed point cloud video as a representation in the 3D virtual world. The codec is available as open source for integration in immersive and augmented communication applications and serves as a base reference software platform in ISO/IEC JTC1/SC29/WG11 (MPEG) for the further development of standardized point cloud compression solutions.
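The released reference software is a C++ implementation. As a rough, self-contained illustration of one idea summarized above, the sketch below shows how an ICP-estimated per-macroblock rotation could be packed into a small fixed-point code via quaternion quantization; the 10-bit component depth, the function names, and the fixed-point layout are illustrative assumptions, not details taken from the paper or its reference software.

    // Illustrative sketch (not the authors' reference implementation):
    // representing a per-macroblock rigid rotation as a quantized unit quaternion.
    #include <array>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    struct Quaternion { double w, x, y, z; };

    // Quantize each quaternion component to a signed fixed-point code with
    // 'bits' bits, exploiting that unit-quaternion components lie in [-1, 1].
    std::array<int32_t, 4> quantizeQuaternion(const Quaternion& q, int bits) {
        const double scale = static_cast<double>((1 << (bits - 1)) - 1);
        return { static_cast<int32_t>(std::lround(q.w * scale)),
                 static_cast<int32_t>(std::lround(q.x * scale)),
                 static_cast<int32_t>(std::lround(q.y * scale)),
                 static_cast<int32_t>(std::lround(q.z * scale)) };
    }

    // Dequantize and renormalize so the decoder applies a valid rotation.
    Quaternion dequantizeQuaternion(const std::array<int32_t, 4>& c, int bits) {
        const double scale = static_cast<double>((1 << (bits - 1)) - 1);
        Quaternion q{ c[0] / scale, c[1] / scale, c[2] / scale, c[3] / scale };
        const double n = std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
        q.w /= n; q.x /= n; q.y /= n; q.z /= n;
        return q;
    }

    int main() {
        // Example rotation of ~30 degrees about the z-axis, as ICP might
        // estimate between corresponding macroblocks of consecutive frames.
        const double theta = 0.5235987755982988;  // pi / 6
        Quaternion q{ std::cos(theta / 2), 0.0, 0.0, std::sin(theta / 2) };
        auto code = quantizeQuaternion(q, 10);    // 10 bits per component (assumed)
        Quaternion r = dequantizeQuaternion(code, 10);
        std::printf("reconstructed: %f %f %f %f\n", r.w, r.x, r.y, r.z);
        return 0;
    }

In a codec, only the quantized integer codes (plus a quantized translation vector) would be written to the bitstream for each predicted macroblock, which is far more compact than re-encoding the block's points.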
doi.org/10.1109/TCSVT.2016.2543039
IEEE Transactions on Circuits and Systems for Video Technology
Distributed and Interactive Systems

Mekuria, R., Blom, K., & César Garcia, P. S. (2017). Design, implementation and evaluation of a point cloud codec for tele-immersive video. IEEE Transactions on Circuits and Systems for Video Technology, 27(4), 828–842. doi: 10.1109/TCSVT.2016.2543039