MLP-based multimodal tomato detection in complex scenarios: Insights from task-specific analysis of feature fusion architectures

Wenjun Chen, Yuan Rao, Fengyi Wang, Yu Zhang, Tan Wang, Xiu Jin, Wenhui Hou, Zhaohui Jiang, Wu Zhang

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Accurate and efficient tomato detection is essential for the practical deployment of robotic picking in agricultural applications, yet it remains significantly challenging to detect tomatoes in complex scenarios with fluctuating light, overlapping fruits, and occlusion from branches and leaves when relying solely on RGB images. The recent development of RGB-D sensors offers a promising opportunity to adopt multimodal fusion for high-quality fruit detection. However, the feasibility of existing multimodal fusion and feature extraction architectures for lightweight tomato detection, especially in complex agricultural scenarios, remains an open question. As a remedy, we proposed a multimodal fusion encoder that leverages depth and near-infrared modalities to assist RGB images in making full use of multimodal data. Moreover, the encoder contains a plug-and-play structure that can be implemented with MLP-based (Multi-Layer Perceptron), ViT-based (Vision Transformer), or CNN-based (Convolutional Neural Network) architectures. Furthermore, we developed a lightweight experimental detection framework based on YOLOv7-tiny by integrating the multimodal fusion encoder, and YOLO-DNA (Depth and Near-infrared Assisted) was put forward on the MLP-based architecture after a comprehensive analysis of the three aforementioned architectures. In addition, a tomato multimodal dataset containing visible, depth, and near-infrared images was established. Experimental results demonstrated that YOLO-DNA achieved an mAP0.5 of 98.13% and an mAP0.5:0.95 of 74.0%, an average increase of 5.01% in mAP0.5 and 14.55% in mAP0.5:0.95 over mainstream lightweight detection models, with a detection speed of 37.12 FPS, meeting the demand of real-time tomato detection. These findings have the potential to advance research on fruit detection for intelligent agricultural harvesting.
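To make the fusion idea concrete, the sketch below shows one possible MLP-based block that lets depth and near-infrared feature maps assist an RGB stream. It is an illustrative assumption, not the authors' YOLO-DNA encoder: the class name MLPFusionBlock, the channel count, and the hidden_ratio parameter are hypothetical, and the 1x1-convolution MLPs and residual RGB connection are only one plausible reading of "depth and NIR assisting RGB".

```python
# Hypothetical sketch of an MLP-based multimodal fusion block (not the
# paper's exact YOLO-DNA encoder): per-modality channel MLPs followed by
# a fused projection, with a residual path keeping RGB dominant.
import torch
import torch.nn as nn


class MLPFusionBlock(nn.Module):
    """Fuse RGB, depth, and NIR feature maps of shape (B, C, H, W)."""

    def __init__(self, channels: int, hidden_ratio: int = 2):
        super().__init__()
        hidden = channels * hidden_ratio
        # One channel-mixing MLP per modality (1x1 convs act as per-pixel MLPs).
        self.modality_mlps = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, hidden, kernel_size=1),
                nn.GELU(),
                nn.Conv2d(hidden, channels, kernel_size=1),
            )
            for _ in range(3)
        )
        # Projection that mixes the concatenated modalities back to C channels.
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )

    def forward(self, rgb, depth, nir):
        feats = [mlp(x) for mlp, x in zip(self.modality_mlps, (rgb, depth, nir))]
        # Residual connection: depth/NIR features only refine the RGB stream.
        return rgb + self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    block = MLPFusionBlock(channels=64)
    x = [torch.randn(1, 64, 40, 40) for _ in range(3)]
    print(block(*x).shape)  # torch.Size([1, 64, 40, 40])
```

Because the block preserves the input tensor shape, it could in principle be dropped between backbone stages of a detector such as YOLOv7-tiny, which matches the plug-and-play intent described in the abstract; the actual integration points used in the paper are not specified here.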

Original language: English
Article number: 108951
Journal: Computers and Electronics in Agriculture
Volume: 221
State: Published - 1 Jun 2024
Externally published: Yes

Keywords

  • Complex scenarios
  • Feature fusion
  • Multimodal
  • Tomato detection
  • YOLO

ASJC Scopus subject areas

  • Forestry
  • Agronomy and Crop Science
  • Computer Science Applications
  • Horticulture
