Smartphone apps under development to aid pest monitoring

Take home message

  • A silverleaf whitefly and cotton aphid detection app is currently under development for the cotton industry to enhance pest management decision making
  • Preliminary analysis of smartphone images demonstrated the ability to achieve detection rates of up to 75% for silverleaf whitefly nymphs on the underside of cotton leaves
  • Machine vision can be transferred to pest and crop sensing in grains with mobile or static camera installations (e.g. traps).

Introduction

Cotton pests such as silverleaf whitefly (SLW) and cotton aphids cause yield loss through plant feeding as well as lint contamination from their waste secretions. Managing these pests to prevent economic loss currently depends on manually counting them on leaves sampled from cotton fields to determine whether changes in pest abundance warrant control action. This is a time-consuming process requiring 20-30 leaves to be sampled per 25 hectares of cotton and examined by eye for the presence and density of each pest. As a result, crop agronomists sample and examine hundreds of leaves each week to complete a very menial task. The current decision chart in the Australian Cotton Pest Management Guide (CRDC 2019), which maps SLW numbers against day degrees, is shown in Figure 1.


Figure 1. SLW management matrix. Source: CRDC (2019)

Image analysis techniques have the potential to autonomously interpret images of cotton leaves, detecting and logging pest density to inform pest management decisions. Deep learning has been deployed in similar settings to detect whitefly adults (mCROPS 2019), pests on beat sheets (Xuesong et al. 2017) and insects in pest traps (Sun et al. 2017). These deep learning models are typically trained on images captured under controlled lighting and achieve F-scores (an indicator of accuracy) of 90-95%. Further development is required to adapt these techniques and models for in-field use by agronomists collecting and analysing SLW samples. The accuracy of deep learning models trained on images from outdoor, commercial conditions is expected to be lower than 90-95% because of lower consistency between images.

Embedded processing of a deep learning model in a smartphone app that classifies and counts insects would allow agronomists to reduce the time spent in the field. Image collection would replace the need to examine each leaf and manually record insect presence. This would underpin larger samples and greater efficiency and accuracy in decision making.
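The following is a minimal sketch, assuming a detection model converted to TensorFlow Lite, of how such on-device inference could look. The model file name, input size and output layout are illustrative assumptions, not the app's actual implementation.

```python
# Minimal sketch (not the project's actual pipeline): running a converted
# detection model with the TensorFlow Lite interpreter and counting detections.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="slw_detector.tflite")  # hypothetical model file
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

# Resize the leaf photo to the model's expected input and normalise to [0, 1]
h, w = input_detail["shape"][1:3]
leaf = Image.open("leaf.jpg").resize((int(w), int(h)))
x = np.expand_dims(np.asarray(leaf, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_detail["index"], x)
interpreter.invoke()

# Assumed output layout: one detection score per candidate nymph location;
# counting scores above a threshold gives the per-leaf nymph count.
scores = interpreter.get_tensor(output_details[0]["index"])
nymph_count = int(np.sum(scores > 0.5))
print(f"Estimated SLW nymphs on this leaf: {nymph_count}")
```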

Materials and method

Initial glasshouse trials and commercial field trials were conducted over the 2018/19 cotton season to enable the development of a sampling protocol. The two initial questions were (i) which smartphones could capture suitable images, and (ii) what detection accuracy could be achieved with smartphone models that have different camera resolutions. Three smartphone models were compared to identify a suitable minimum standard for camera quality.

An initial training dataset of 480 images and an evaluation dataset of 80 images were collected from QDAF glasshouse cultures and Queensland commercial cotton farms near Goondiwindi, St George and on the Darling Downs. In the evaluation dataset, approximately half of the images from each smartphone contained nymphs and the other half were clean leaves. All images were ground truthed by manually recording the numbers and locations of SLW nymphs. Potential nymph locations were also recorded where nymphs could not be confirmed from the image because of their small size or a lack of focus. An example of nymph and potential nymph locations on a leaf in the evaluation dataset is shown in a smartphone image in Figure 2.


Figure 2. Three whitefly nymphs on an image from the evaluation dataset (blue circles), and one potential nymph (red circle) that is difficult to classify from the image
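As an illustration of the ground-truthing protocol described above, the sketch below shows one possible way a per-leaf annotation record could be structured. The field names and the separate "potential nymph" list are assumptions for illustration, not the project's actual annotation schema.

```python
# Illustrative sketch only: one way the ground-truth annotations could be stored.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LeafAnnotation:
    image_file: str                     # smartphone image of the leaf underside
    phone_model: str                    # device used to capture the image
    nymphs: List[Tuple[int, int]] = field(default_factory=list)     # confirmed nymph (x, y) pixel locations
    potential: List[Tuple[int, int]] = field(default_factory=list)  # unclear detections (small or out of focus)

    def confirmed_count(self) -> int:
        return len(self.nymphs)

# Example record in the spirit of Figure 2: three confirmed nymphs, one potential nymph
example = LeafAnnotation("leaf_0042.jpg", "iPhone",
                         nymphs=[(412, 308), (560, 621), (733, 290)],
                         potential=[(150, 480)])
```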

Deep learning models were trained using the ground-truth data and images from different combinations of the smartphones to determine the influence of each device on model accuracy. The accuracy of the deep learning models was calculated using precision, recall and F-score (Equations 1-3). The F-score reflects both the precision and recall of a model and is the metric used to evaluate models in the current literature, which enables comparison between the developed models and those in the literature. The F-score was calculated using true positives, false positives (Type I errors) and false negatives (Type II errors); true negatives were not used as these could not be quantified in this application. Higher F-scores indicate a more accurate deep learning model.

Precision = Tp / (Tp + Fp)    (Eq. 1)

Recall = Tp / (Tp + Fn)    (Eq. 2)

F-score = 2 × Precision × Recall / (Precision + Recall)    (Eq. 3)

where:
Tp = true positives;
Fp = false positives; and
Fn = false negatives.
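The following sketch implements Equations 1-3 directly. The counts in the usage example are illustrative only and are not data from this study.

```python
# Worked sketch of Equations 1-3: precision, recall and F-score from true
# positives (Tp), false positives (Fp) and false negatives (Fn).
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f_score(tp: int, fp: int, fn: int) -> float:
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Illustrative counts: 75 nymphs found correctly, 25 false alarms, 30 nymphs missed
tp, fp, fn = 75, 25, 30
print(f"Precision {precision(tp, fp):.1%}, recall {recall(tp, fn):.1%}, F-score {f_score(tp, fp, fn):.1%}")
# Precision 75.0%, recall 71.4%, F-score 73.2%
```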

Results

The best F-score from each training set is presented in Table 1. Deep learning models trained on the iPhone images, either alone or combined with other smartphone models, produced the highest F-scores of all the combinations (71.7-75.8%). In contrast, models trained on the Sony and/or Samsung images without the iPhone produced F-scores of 44.0-55.6%. The performance disparity between smartphone models is likely due to dataset size and quality, which will be addressed by engaging external collaborators to greatly increase image collection for the database. The variation in performance may also be caused by the different on-board pre-processing applied by each smartphone brand and model, which adds further variability to the appearance of the insects.

Overall, the trained models produced F-scores lower than the 90-95% reported in the literature. This is likely because the images were captured outdoors, whereas those in the cited literature were captured in controlled indoor conditions. Controlled lighting provides a more consistent view of the subject, reducing the amount of training data required and improving accuracy. In addition, this study performed validation using multiple sensors, whereas the earlier studies trained and validated with a single image sensor. Detection accuracy will be brought as close to 90% as possible by revising the algorithm and defining an image capture protocol that limits poorly lit field images (a minimal sketch of such a check is shown after Table 1).

Table 1. Deep learning SLW detection test using optimal training and detection parameters.

Smartphone models used for training   Positive samples   Precision (%)   Recall (%)   F-score (%)
Samsung                                            330            46.0         42.1          44.0
Sony                                               718            31.7         33.7          32.7
Samsung and Sony                                  1048            51.8         60.0          55.6
iPhone                                             609            72.8         70.5          71.7
Samsung and iPhone                                 939            73.7         73.7          73.7
iPhone and Sony                                   1527            75.8         75.8          75.8
All                                               1857            76.7         69.5          72.9
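As a minimal sketch of the image capture protocol idea mentioned above, a simple brightness check could flag poorly lit field images before they are analysed. The thresholds shown are arbitrary assumptions for illustration, not values used in this project.

```python
# Illustrative capture-protocol check: reject images that are too dark or too bright.
import numpy as np
from PIL import Image

def acceptable_exposure(path: str, low: float = 40.0, high: float = 215.0) -> bool:
    """Return True if the mean grey level of the image falls in a usable range."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return low <= grey.mean() <= high

if not acceptable_exposure("leaf.jpg"):
    print("Image too dark or too bright - please retake the photo in better light.")
```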

App deployment

A closed alpha version of the app was deployed to a small test group of 8-10 agronomists for the 2019/20 cotton season. The alpha app included logging features so that agronomists could create farm units and pest management units within them, and then log samples under each management unit. All images were collated into a new database for algorithm training, which is ongoing in preparation for next season. Agronomists responded positively to the logging system and expect the app to be much faster than manually referencing the pest management guide.

For the 2020/21 cotton season, the app will enter a closed beta stage that includes an updated algorithm with cotton aphid detection and attempts to sub-classify SLW nymphs into key groups (i.e. healthy, dead, emerged). This additional information will further aid agronomists and better inform pest control management decisions.
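As an illustration of the logging hierarchy described above (farm units containing pest management units containing dated samples with sub-classified counts), the sketch below shows one possible data model. The class and field names are assumptions for illustration, not the app's actual implementation.

```python
# Hedged sketch of a farm / management unit / sample logging hierarchy.
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class Sample:
    sampled_on: date
    slw_counts: Dict[str, int]          # e.g. {"healthy": 12, "dead": 3, "emerged": 5}
    aphid_count: int = 0

@dataclass
class ManagementUnit:
    name: str
    samples: List[Sample] = field(default_factory=list)

@dataclass
class Farm:
    name: str
    units: List[ManagementUnit] = field(default_factory=list)

# Example: one farm with one management unit and one logged sample
farm = Farm("Example farm", [ManagementUnit("Field 3 east",
            [Sample(date(2020, 12, 14), {"healthy": 12, "dead": 3, "emerged": 5}, aphid_count=7)])])
```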

Conclusion

A silverleaf whitefly and cotton aphid detection app is currently under development and testing for the Australian cotton industry. It will benefit IPM by reducing sampling times, enabling more precise detection and recording of pests, increasing sampling consistency between field personnel, and providing a digital storage platform on which future area-wide management strategies could be based. There is potential for this technology to be transferred to other crops (e.g. grains) for insect pest counting and the detection of symptoms or stress.

Acknowledgements

The authors are grateful to the Cotton Research and Development Corporation for funding this research (project code NEC1901) and facilitating field testing, Jacob Balzer and Leisa Bradburn from QDAF for assistance with sample collection and analysis, and X-Lab for facilitating workshops with agronomists that assisted with project direction.

References

CRDC (2019) Cotton Research and Development Corporation's Cotton Pest Management Guide. (Accessed 11 June 2020)

mCROPS (2019) mCROPS: Automating pest count and symptom measurement. (Accessed 11 June 2020)

Sun, Y., Cheng, H., Cheng, Q., Zhou, H., Li, M., Fan, Y., Shan, G., Damerow, L., Lammers, P.S. and Jones, S.B. (2017) A smart-vision algorithm for counting whiteflies and thrips on sticky traps using two-dimensional Fourier transform spectrum, Biosystems Engineering, 153:82-88

Xuesong, S., Zi, L., Lei, Su., Jiao, W. and Yang, Z. (2017) Aphid Identification and Counting Based on Smartphone and Machine Vision, Journal of Sensors, vol. 2017

Contact details

Alison McCarthy
Centre for Agricultural Engineering
P9 Building, University of Southern Queensland, West St, Toowoomba 4350
Ph: 07 4631 2189
Email: alison.mccarthy@usq.edu.au

Derek Long
Centre for Agricultural Engineering
P9 Building, University of Southern Queensland, West St, Toowoomba 4350
Ph: 07 4631 1515
Email: derek.long@usq.edu.au

Paul Grundy
Department of Agriculture, Fisheries (Queensland)
203 Tor Street, Toowoomba 4350
Ph: 0427 929 172
Email: paul.grundy@daf.qld.gov.au