TY - GEN
T1 - Active Assembly Guidance with Online Video Parsing
AU - Wang, Bin
AU - Wang, Guofeng
AU - Sharf, Andrei
AU - Li, Yangyan
AU - Zhong, Fan
AU - Qin, Xueying
AU - Cohen-Or, Daniel
AU - Chen, Baoquan
N1 - Funding Information:
The authors thank the anonymous reviewers for their valuable comments. This work is supported by NSF of China (No. 61572290, No. 61672326), the National Key Research and Development Program of China (No. 2016YFB1001501), the Fundamental Research Funds of Shandong University (No. 2015JC051) and the Joint NSFC-ISF Research Program (No. 61561146397) jointly funded by the National Natural Science Foundation of China and the Israel Science Foundation.
Publisher Copyright:
© 2018 IEEE.
PY - 2018
Y1 - 2018
N2 - In this paper, we introduce an online video-based system that actively assists users in assembly tasks. The system guides and monitors the assembly process by providing instructions and feedback on possibly erroneous operations, enabling easy and effective guidance in AR/MR applications. The core of our system is an online video-based assembly parsing method that can understand the assembly process, a problem previously known to be extremely hard. Our method exploits the availability of the participating parts to significantly alleviate the problem, reducing the recognition task to an identification problem within a constrained search space. To further constrain the search space and understand the observed assembly activity, we introduce a tree-based global-inference technique. Our key idea is to incorporate part-interaction rules as powerful constraints that significantly regularize the search space and allow the assembly video to be parsed correctly at interactive rates. Complex examples demonstrate the effectiveness of our method.
AB - In this paper, we introduce an online video-based system that actively assists users in assembly tasks. The system guides and monitors the assembly process by providing instructions and feedback on possibly erroneous operations, enabling easy and effective guidance in AR/MR applications. The core of our system is an online video-based assembly parsing method that can understand the assembly process, a problem previously known to be extremely hard. Our method exploits the availability of the participating parts to significantly alleviate the problem, reducing the recognition task to an identification problem within a constrained search space. To further constrain the search space and understand the observed assembly activity, we introduce a tree-based global-inference technique. Our key idea is to incorporate part-interaction rules as powerful constraints that significantly regularize the search space and allow the assembly video to be parsed correctly at interactive rates. Complex examples demonstrate the effectiveness of our method.
KW - Computing methodologies
KW - Computer graphics
KW - Mixed / augmented reality
UR - http://www.scopus.com/inward/record.url?scp=85053850686&partnerID=8YFLogxK
U2 - 10.1109/VR.2018.8446602
DO - 10.1109/VR.2018.8446602
M3 - Conference contribution
AN - SCOPUS:85053850686
SN - 9781538633656
T3 - 25th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2018 - Proceedings
SP - 459
EP - 466
BT - 25th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2018 - Proceedings
A2 - Steinicke, Frank
A2 - Thomas, Bruce
A2 - Kiyokawa, Kiyoshi
A2 - Welch, Greg
PB - Institute of Electrical and Electronics Engineers
T2 - 25th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2018
Y2 - 18 March 2018 through 22 March 2018
ER -