Performance of RGB-D camera for different object types in greenhouse conditions

Ola Ringdahl, Polina Kurtser, Yael Edan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

RGB-D cameras play an increasingly important role in the localization and autonomous navigation of mobile robots. Reasonably priced commercial RGB-D cameras have recently been developed for operation in greenhouse and outdoor conditions. They can be employed for agricultural and horticultural operations such as harvesting, weeding, pruning, and phenotyping. However, the depth information extracted from these cameras varies significantly between objects and sensing conditions. This paper presents an evaluation protocol applied to a commercially available Fotonic F80 time-of-flight RGB-D camera for eight object types, using autonomous sweet pepper harvesting as an exemplary agricultural task. Each chosen object is an item that an autonomous agricultural robot must detect and localize in order to perform well. A total of 340 rectangular regions of interest (ROIs), 30-100 per object type, were marked for the extraction of two performance measures: point cloud density and variability around the center of mass. An additional 570 ROIs were generated (57 manually and 513 replicated) to evaluate the repeatability and accuracy of the point cloud. A statistical analysis was performed to evaluate the significance of differences between object types. The results show that different objects have significantly different point densities. Specifically, metallic materials and black-colored objects had significantly lower point density than the organic and other artificial materials introduced to the scene, as expected. The point cloud variability measures showed no significant differences between object types, except for the metallic knife, which presented significant outliers in the collected measures. The accuracy and repeatability analysis showed that 1-3 cm of error is due to the difficulty for a human to annotate exactly the same area, and up to ±4 cm of error is due to the sensor not generating exactly the same point cloud when sensing a fixed object.
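The two performance measures named in the abstract can be illustrated with a minimal sketch. Assuming the point cloud arrives as an N×3 NumPy array and that ROIs are axis-aligned rectangles in the x-y plane (both assumptions made here for illustration; the paper's exact definitions may differ), density and variability around the center of mass could be computed as:

```python
import numpy as np

def roi_measures(points, roi_min, roi_max):
    """Illustrative ROI measures (not the paper's exact protocol).

    points  -- (N, 3) array of XYZ coordinates from the depth sensor
    roi_min -- (x, y) lower corner of an axis-aligned rectangular ROI
    roi_max -- (x, y) upper corner of the ROI
    Returns (density, variability): points per unit ROI area, and the
    standard deviation of point distances to the ROI's center of mass.
    """
    roi_min = np.asarray(roi_min, dtype=float)
    roi_max = np.asarray(roi_max, dtype=float)
    # Keep only points whose x/y coordinates fall inside the rectangle.
    mask = np.all((points[:, :2] >= roi_min) & (points[:, :2] <= roi_max), axis=1)
    roi_points = points[mask]
    area = np.prod(roi_max - roi_min)
    density = len(roi_points) / area if area > 0 else 0.0
    if len(roi_points) == 0:
        return density, float("nan")
    # Variability: spread of the points around their centroid.
    centroid = roi_points.mean(axis=0)
    distances = np.linalg.norm(roi_points - centroid, axis=1)
    return density, distances.std()
```

For example, `roi_measures(cloud, (0.0, 0.0), (0.1, 0.1))` would return the density and spread for a hypothetical 10 cm × 10 cm ROI; objects that return few depth points (e.g., metallic or black surfaces, per the results above) would show low density.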

Original language: English
Title of host publication: 2019 European Conference on Mobile Robots, ECMR 2019 - Proceedings
Editors: Libor Preucil, Sven Behnke, Miroslav Kulich
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728136059
DOIs
State: Published - 1 Sep 2019
Event: 2019 European Conference on Mobile Robots, ECMR 2019 - Prague, Czech Republic
Duration: 4 Sep 2019 - 6 Sep 2019

Publication series

Name: 2019 European Conference on Mobile Robots, ECMR 2019 - Proceedings

Conference

Conference: 2019 European Conference on Mobile Robots, ECMR 2019
Country/Territory: Czech Republic
City: Prague
Period: 4/09/19 - 6/09/19

ASJC Scopus subject areas

  • Artificial Intelligence
  • Control and Optimization
  • Mechanical Engineering
