TY - GEN
T1 - Person Re-ID Testbed with Multi-Modal Sensors
AU - Zhao, Guangliang
AU - Ben-Yosef, Guy
AU - Qiu, Jianwei
AU - Zhao, Yang
AU - Janakaraj, Prabhu
AU - Boppana, Sriram
AU - Schnore, Austars R.
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/11/15
Y1 - 2021/11/15
N2 - Person Re-ID is a challenging problem that is gaining attention due to demands in security, intelligent systems, and other applications. Most person Re-ID work is vision-based, relying on image, video, or, broadly speaking, face recognition techniques. Recently, several multi-modal person Re-ID datasets were released, including RGB+IR, RGB+text, and RGB+WiFi, which show the potential of multi-modal sensor-based person Re-ID approaches. However, public datasets share several common issues, such as short time duration, lack of appearance change, and limited activities, resulting in models that are not robust. For example, vision-based Re-ID models are sensitive to appearance change. In this work, a person Re-ID testbed with multi-modal sensors is created, allowing the collection of sensing modalities including RGB, IR, depth, WiFi, radar, and audio. This novel dataset will cover normal daily office activities over a large time span across multiple seasons. Initial analytic results are obtained for evaluating different person Re-ID models, based on small datasets collected in this testbed.
AB - Person Re-ID is a challenging problem that is gaining attention due to demands in security, intelligent systems, and other applications. Most person Re-ID work is vision-based, relying on image, video, or, broadly speaking, face recognition techniques. Recently, several multi-modal person Re-ID datasets were released, including RGB+IR, RGB+text, and RGB+WiFi, which show the potential of multi-modal sensor-based person Re-ID approaches. However, public datasets share several common issues, such as short time duration, lack of appearance change, and limited activities, resulting in models that are not robust. For example, vision-based Re-ID models are sensitive to appearance change. In this work, a person Re-ID testbed with multi-modal sensors is created, allowing the collection of sensing modalities including RGB, IR, depth, WiFi, radar, and audio. This novel dataset will cover normal daily office activities over a large time span across multiple seasons. Initial analytic results are obtained for evaluating different person Re-ID models, based on small datasets collected in this testbed.
KW - Computer Vision
KW - Deep Learning
KW - Face Recognition
KW - Multi-Modal
KW - Neural Network
KW - Person Re-ID
KW - WiFi
UR - http://www.scopus.com/inward/record.url?scp=85120900153&partnerID=8YFLogxK
U2 - 10.1145/3485730.3494113
DO - 10.1145/3485730.3494113
M3 - Conference contribution
AN - SCOPUS:85120900153
T3 - SenSys 2021 - Proceedings of the 2021 19th ACM Conference on Embedded Networked Sensor Systems
SP - 526
EP - 531
BT - SenSys 2021 - Proceedings of the 2021 19th ACM Conference on Embedded Networked Sensor Systems
PB - Association for Computing Machinery, Inc
T2 - 19th ACM Conference on Embedded Networked Sensor Systems, SenSys 2021
Y2 - 15 November 2021 through 17 November 2021
ER -