3D Reconstruction (12/19/2020)
Xiang Zhiyuan, Wang Shanshan, Lao Baochang Zhang, Beihang University (BUAA). Preprint; early-stage research may not have been peer reviewed yet.

Abstract: How can we perform efficient 3D reconstruction with high accuracy and completeness in the presence of non-Lambertian surfaces and low-textured regions? This paper aims at fast, high-quality 3D reconstruction, ideally near real time. While deep learning approaches perform very well in multi-view stereo (MVS), the high complexity of the models makes them inapplicable in practical applications, and few works have explored accelerating deep learning-based 3D reconstruction. We introduce an efficient channel pruning method for 2D convolutional neural networks (CNNs) based on a mixed back-propagation process, in which a soft mask is learned to prune channels using a fast iterative shrinkage-thresholding algorithm. For 3D CNNs, we train a large multi-scale CNN architecture and observe that utilizing only one of its modules is enough for 3D reconstruction while still maintaining the performance of the full-precision model. The resulting MVS reconstruction system is up to 2x faster than the state of the art while maintaining comparable accuracy and even better completeness.

Pipeline: the reference image and source images pass through an 8-layer 2D CNN with a mask attached to the last layer to generate feature maps and sort out redundant filters. Differentiable homography warps the 2D feature maps into 3D volumes, and a variance-based algorithm aggregates all the volumes into a single cost volume. The output depth map is generated by a 3D CNN similar to U-Net. Figure: illustration of channel pruning.
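The paper does not include code, but the variance-based aggregation step can be sketched in a few lines. This is a minimal NumPy sketch under our own assumptions about shapes and names (the function `variance_cost_volume` is hypothetical, and the homography warping that would precede it is omitted):

```python
import numpy as np

def variance_cost_volume(warped):
    """Aggregate per-view feature volumes into one cost volume.

    warped: array of shape (V, C, D, H, W) -- V feature volumes,
    assumed already aligned by homography warping at D depth
    hypotheses. Where a depth hypothesis is correct the views agree
    and the variance is low; where it is wrong the variance is high.
    """
    mean = warped.mean(axis=0)                  # (C, D, H, W)
    return ((warped - mean) ** 2).mean(axis=0)  # (C, D, H, W)

# Toy check: three identical views give zero cost everywhere.
views = np.ones((3, 4, 2, 5, 5))
cost = variance_cost_volume(views)
```

Because the variance is symmetric in the views, this aggregation works for any number of source images without changing the network that consumes the cost volume.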
The yellow channel shown in the figure denotes a redundant filter, whose corresponding mask is trained to zero. Depth map refinement: the first level consists of two convolutional operations, while the other levels are shown in the picture above. Every level has an output, and we evaluate each of them on the final result, which helps guide pruning. The network consists of an encoder and a decoder whose sub-networks are connected by skip pathways, represented by arrows. The mean error and mean Completeness (error) at different levels with 0.8: the results show that the error stays basically the same, while the Completeness and Overall metrics change distinctly. Qualitative comparison to the ground truth on the DTU dataset and to the reconstruction models generated by other networks.

The most common approaches for depth inference are based on cameras with depth sensors, such as the Kinect, which restricts accessibility in outdoor environments. Also, such methods usually exert other forms of influence on the surface, which may cause other problems. In such a scenario, reconstructing from visible images would be considered a practical choice.
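The soft-mask pruning described in the abstract can be sketched as follows. This is a toy NumPy sketch, not the authors' method: we use a plain ISTA-style update (gradient step plus L1 shrinkage) instead of their full mixed back-propagation with FISTA, and the function names, gradients, and hyperparameters are all our assumptions:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm: the shrinkage step at the
    # heart of (F)ISTA, which drives small mask entries to exactly zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def mask_update(mask, grad, lr=0.1, lam=0.05):
    # One ISTA-style update of the per-channel soft mask: a gradient
    # step on the task loss followed by L1 shrinkage.
    return soft_threshold(mask - lr * grad, lr * lam)

# Toy example: 8 channels; large (fabricated) gradients on channels
# 2, 4, and 7 drive their masks to zero or below, so in this sketch
# those filters are treated as redundant and would be pruned.
mask = np.ones(8)
grad = np.array([0.0, 0.0, 9.0, 0.0, 9.0, 0.0, 0.0, 9.0])
for _ in range(3):
    mask = mask_update(mask, grad)
kept = mask > 0  # surviving channels
```

After training, channels whose mask reaches zero contribute nothing to the layer's output, so the corresponding filters can be removed without changing the network's function.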