A CNN-based framework for estimation of root length, diameter, and color from in situ minirhizotron images

Faina Khoroshevsky, Kaining Zhou, Aharon Bar-Hillel, Ofer Hadar, Shimon Rachmilevitch, Jhonathan E. Ephrath, Naftali Lazarovitch, Yael Edan

Research output: Contribution to journal › Article › peer-review

Abstract

This work presents a framework based on convolutional neural networks (CNNs) to estimate root traits (length, diameter, and color) from minirhizotron (MR) imagery. The proposed framework uses a set of reusable sub-network modules to compose different networks for object (i.e., root) detection and attribute (i.e., trait) estimation for per-root and per-image root phenotyping tasks. It provides a solution without requiring root segmentation. The first step in per-root phenotyping involves detecting the roots in the image; the traits of each detected root are then estimated. Per-image root phenotyping estimates aggregated root trait values, including total root length (TRL), mean root diameter, and percentage of white root. Both regression-based and point-detection-based variants are demonstrated for per-root and per-image root trait estimation. Five network architectures are presented: two were previously used for TRL estimation (and are now evaluated for estimating mean root diameter and white root percentage), and three are new. The proposed framework is demonstrated on an annotated grapevine root dataset comprising 531 images, made publicly available as part of this paper. All images were acquired in situ using an MR system and annotated with Rootfly software. Regression-based modules applied to individual detected roots yielded errors of 8.8%, 15.5%, and 23.5% for color, length, and diameter, respectively. The point-detection-based modules resulted in errors of 9.1%, 14.9%, and 25.0% for the same traits. The image-level estimates showed errors of 11.5%–16.5% for white root percentage, 13.7%–16.0% for TRL, and 17.6%–22.1% for mean root diameter. We demonstrate that aggregating per-root estimations of diameter and color obtained with the newly suggested architectures improves the per-image estimations of these traits relative to direct per-image estimation that does not include per-root estimations.
To demonstrate further the practicality of the suggested framework in deriving the vertical distribution of various root traits, an additional dataset of 132 root images from two different grapevine graft combinations was annotated (and also made publicly available as part of this paper). In this dataset, the per-image root traits were estimated for different soil depths and visually compared with human annotation results.
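The abstract reports that aggregating per-root estimates improves per-image trait estimation. As a rough illustration only (the paper's actual aggregation rule is not given here), the sketch below assumes TRL is the sum of per-root lengths and that mean diameter and white root percentage are length-weighted averages of the per-root estimates; the `RootEstimate` type and field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class RootEstimate:
    """Hypothetical container for one detected root's estimated traits."""
    length: float          # root length (e.g., mm)
    diameter: float        # root diameter (e.g., mm)
    white_fraction: float  # fraction of the root that is white, in [0, 1]


def aggregate(roots):
    """Combine per-root estimates into per-image traits.

    Returns (total root length, length-weighted mean diameter,
    length-weighted white root percentage).
    """
    trl = sum(r.length for r in roots)  # total root length (TRL)
    if trl == 0.0:
        return 0.0, 0.0, 0.0
    mean_diameter = sum(r.diameter * r.length for r in roots) / trl
    white_pct = 100.0 * sum(r.white_fraction * r.length for r in roots) / trl
    return trl, mean_diameter, white_pct
```

For example, an image with a 10 mm fully white root of diameter 1.0 mm and a 30 mm non-white root of diameter 2.0 mm would yield a TRL of 40 mm, a weighted mean diameter of 1.75 mm, and 25% white root under these assumptions.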

Original language: English
Article number: 109457
Journal: Computers and Electronics in Agriculture
Volume: 227
DOIs
State: Published - 1 Dec 2024

Keywords

  • Convolutional neural network
  • Grapevine root dataset
  • Minirhizotron
  • Root phenotyping
  • Segmentation free

ASJC Scopus subject areas

  • Forestry
  • Agronomy and Crop Science
  • Computer Science Applications
  • Horticulture
