Source: KANSAS STATE UNIV submitted to NRP
DEEP LEARNING FOR BEE IDENTIFICATION: DEVELOPING AN AUTOMATED TOOL FOR FAST AND RELIABLE BEE IDENTIFICATION FROM IMAGES
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
COMPLETE
Funding Source
Reporting Frequency
Annual
Accession No.
1022366
Grant No.
2020-67013-31862
Cumulative Award Amt.
$429,988.00
Proposal No.
2019-06143
Multistate No.
(N/A)
Project Start Date
Jul 1, 2020
Project End Date
Jun 30, 2023
Grant Year
2020
Program Code
[A1113]- Pollinator Health: Research and Application
Recipient Organization
KANSAS STATE UNIV
(N/A)
MANHATTAN, KS 66506
Performing Department
Entomology
Non Technical Summary
Striving to ensure pollinator populations remain healthy and capable of providing important pollination services requires research by the scientific community and application by the agricultural community, the public, and conservationists. Reliable identification of pollinators, such as bees, is critical to maintaining bee health. However, because bees can have only subtle morphological differences, species-level identification can be difficult, requiring specialized taxonomic knowledge. This results in a bottleneck that is expensive and time consuming, which slows the pace of research and adoption of new applications. Moreover, setbacks for pollinator research can result from errors based on misidentification if experts are unavailable or if funds are insufficient. However, new technologies currently being developed in the fields of machine learning and computer vision are enabling fast and reliable automated identification of objects from images. Cutting-edge techniques, such as convolutional neural networks (CNNs), are being employed in diverse fields and helping to drive advances in precision agriculture and insect identification. We propose to develop a large image dataset of expertly identified bees to use as input to CNNs for automated bee identification. We will focus on (1) the bumble bees (Bombus) of North America and (2) the bees (Anthophila) of Kansas, USA, which represent bee subsets of interest for addressing bee health. We will also produce a mobile app that will allow non-experts to identify bees to species from images. Using state-of-the-art technology, our project will thus put a greatly needed tool into the hands of those acting to maintain pollinator health.
Animal Health Component
25%
Research Effort Categories
Basic
25%
Applied
25%
Developmental
50%
Classification

Knowledge Area (KA): 136
Subject of Investigation (SOI): 3085
Field of Science (FOS): 2080
Percent: 100%
Goals / Objectives
There is a critical need for quick and reliable methods of bee identification. State-of-the-art deep learning technology can provide for this. We propose to gather image data on North American bees to complete three objectives:
1. Develop a CNN model to classify the 46 bumble bee species of North America.
2. Develop a CNN model to classify the 305 bee species of Kansas.
3. Create a mobile app for novices to experts capable of identifying bees from images of pinned specimens and active bees in nature.
To complete the proposed objectives, we will develop an image dataset composed of expertly identified images of pinned specimens from museum and research collections, combined with images of live bees active in nature. Models capable of classifying the complete bee fauna of the world (>20,000 species) or even North America (>4,000 species) are infeasible for this study. Therefore, in objectives 1 and 2 we will focus on two subsets of North American bee species: all bumble bee (Bombus) species occurring in North America (north of Mexico) and all bee species recorded within the state of Kansas since 1950. Focusing on these subsets of North American bees will allow us to address two goals: (1) to provide a reliable model for anyone in the US and Canada to use to identify bumble bees and (2) to take the first steps toward developing a complete dataset and set of models to classify all bee species in North America and beyond. Objective 2 will thus provide a scalable framework for expanding coverage of bee species to ever larger regions. With objective 3, we will provide stakeholders with a tool - a mobile app - for identifying bees. Our long-term goal is to put expert-level bee identification capability in the hands of anyone who needs it.
Project Methods
Dataset development: We will generate a dataset composed of images of pinned specimens with at least 510 images per species. Depending on availability, we will photograph 30 intact and high-quality individuals of each species. Specimens will be obtained from the K-State Department of Entomology collections, PD Spiesman's research collection, and specimens obtained on loan from the University of Kansas Biodiversity Institute, as well as other museum and research collections. Each individual will be photographed from 17 different viewing angles: one from the top, 8 images in a parallel plane (face, rear, left side, right side, and the left and right front and back oblique angles), and 8 images looking down at 45° angles (the same 8 positions). So, for the 341 species to be imaged (North American bumble bees + other Kansas bees), this will require imaging 10,230 individual bees, resulting in 173,910 images. Preliminary lab testing indicated that, after initial setup, it takes less than 4 minutes to acquire the 17 images, record the specimen and image ID numbers, and replace the specimen with the next in line. Thus, conservatively, working at it for 32 hrs./week would take approximately 21 person-weeks to complete the imaging. This is quite feasible to complete over the course of a summer, especially when dividing the work between a graduate student and an undergraduate assistant. Images will be taken using a digital SLR camera on a secure stand set so that the subject fills the frame and with a focal length that allows the subject to be completely in focus. Images will be stored as 12-megapixel JPEGs, a resolution that is more than enough for CNN model input, which is usually less than 500 × 500 pixels.
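The imaging-workload figures above can be sanity-checked with a few lines of Python; the 4-minute-per-specimen and 32-hour-week values are the estimates stated above, not measured constants:

```python
# Sanity check of the imaging workload described above.
species = 341              # North American bumble bees + other Kansas bees
specimens_per_species = 30
angles = 17                # 1 top view + 8 parallel-plane + 8 at 45 degrees

specimens = species * specimens_per_species
images = specimens * angles
print(specimens, images)   # 10230 specimens, 173910 images

minutes_per_specimen = 4   # conservative estimate from preliminary lab testing
hours_total = specimens * minutes_per_specimen / 60
weeks = hours_total / 32   # working 32 hrs./week
print(round(weeks, 1))     # about 21 person-weeks
```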
A rotating stage will be set at a fixed distance so that specimens can be quickly replaced and rotated in front of the camera for multi-angle imaging. We will also attempt to gather 500 labeled images per species of live bees in nature. In combination with images of pinned specimens, this will result in a total of over 1,000 images per species, which puts us well over the often-reported guideline that at least 100 images per class are required for CNN analysis. Images of live bees will be acquired from online databases including the Wisconsin Bumble Bee Brigade, Bumble Bee Watch, BugGuide (https://bugguide.net), iNaturalist (https://www.inaturalist.org/), and others. We will only include images that have been verified by experts. Image acquisition has already begun for bumble bees, and to date we have assembled more than 6,700 images belonging to 43 of the 46 North American species. To validate CNN models, we will take new photographs of bees visiting flowers in the field. We will target bee groups with rare representation in field-based image datasets. For the targeted bee groups, we will take multiple photos of each individual before they are captured and returned to the lab for identification. Captured bees will also be photographed after they have been pinned for identification. The quantity and targets of these photos will be determined by data needs once our image datasets are complete.
Model development for objectives 1 and 2: We will implement our CNN models for objectives 1 and 2 in the Python programming language using Keras with a TensorFlow backend. As a starting point, we will use the VGG16 CNN architecture, which is frequently used as a base model because of its ability to generalize and for its relatively simple structure compared to other models such as Google's Inception or Microsoft's ResNet. VGG16 is composed of five blocks of convolutional layers.
Each block comprises 2 or 3 convolutional layers, which apply a number of 3×3 filters to the preceding input. Blocks end with subsampling using 2×2 max pooling. Three fully connected layers follow the five blocks of convolutional layers, ending with the class prediction layer. The input images will be subsampled to 224×224 pixels, which is the standard VGG16 input size. We will explore variations of the CNN architecture that may better fit our application to bee classification. For example, we will explore different image input sizes and convolutional layer dimensions, as well as ways to improve performance and limit overfitting by using dropout and image standardization. This set of model architectures will be trained on the full dataset, including both images of pinned specimens and bees in the field. We will divide the dataset into smaller subsets for more efficient model exploration, with each subset including 80% of the data for training and 20% for testing. Additional image data from field-caught specimens will be used for validation. Similar proportions will be used for training and testing on the entire dataset. We will assess model performance based on model loss, accuracy, precision, and recall, as well as run time.
Mobile app development for objective 3: The proposed work depends on deployed mechanisms for citizen data science, specifically a mobile application for rapid data acquisition, preparation, and transmission to a hybrid cloud platform. Basic requirements for this application are that it supports a wide variety of smartphones and other mobile devices, such as tablets and camera watches, producing images on the order of up to 10 megapixels (10 Mb file size, for 0.1 - 10 Gb per day bandwidth per user); scales well for the intended use by hundreds to hundreds of thousands of users (10 Gb - 1 Pb per day enterprise-wide); is usable by nontechnical bee-watchers; and is extensible by enthusiasts and mobile app developers.
We plan to develop feature-parallel and interoperable iOS (Swift) and Android (Java) versions with a shared metadata format. These will be backed by custom cloud services running on Beocat, the K-State high-performance computing (HPC) cluster. Co-PI Hsu's lab has experience with mobile application development and citizen data science for social good, and extensive experience with web-enabled front ends for uploading both training data for deep learning and individual images or batches of images for inferencing using previously or incrementally trained models.
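As a rough illustration of the VGG16 architecture described above (not the project's training code), the spatial dimensions can be traced through the five convolutional blocks: each 3×3 convolution uses same-padding, so only the 2×2 max pooling at the end of each block changes the feature-map size, halving the standard 224×224 input down to 7×7 before the fully connected layers. The filter counts below are the published VGG16 defaults, not project-specific choices:

```python
# Trace of feature-map sizes through VGG16's five convolutional blocks.
# 3x3 convolutions use same-padding (spatial size unchanged); each block
# ends with 2x2 max pooling, which halves the spatial dimensions.
blocks = [
    (2, 64),    # block 1: 2 conv layers, 64 filters each
    (2, 128),   # block 2
    (3, 256),   # block 3
    (3, 512),   # block 4
    (3, 512),   # block 5
]

size = 224  # standard VGG16 input: 224x224-pixel images
for i, (n_convs, n_filters) in enumerate(blocks, start=1):
    size //= 2  # 2x2 max pooling at the end of the block
    print(f"block {i}: {n_convs} convs of {n_filters} filters -> {size}x{size}")

# Features flattened into the three fully connected layers:
print(size * size * 512)  # 7 * 7 * 512 = 25088
```

A variant with a different input size, one of the explorations mentioned above, can be traced the same way by changing the starting value of `size`.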

Progress 07/01/20 to 06/30/23

Outputs
Target Audience: We have reached scientists interested in pollinators and computer science through invited presentations and through social media. We have also participated in working groups, attended by pollinator scientists, that were aimed at leveraging machine learning and computer vision for pollinator science. We have reached non-scientist pollinator enthusiasts through invited presentations, social media (Facebook and Twitter), and in-person outreach events. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? Over the past year, we have mentored three undergraduate students as part of a research experience program. They have learned skills in computer vision model development, database management, and app development. Graduate students in Entomology and Computer Science have used the datasets in their research and presented their findings at conferences. We are collaborating with a K-State graduate student and a former postdoc in computer science (now faculty at Villanova) on a USDA proposal for additional funds to develop a system that uses drones to autonomously capture information about pollinator habitat and bumble bee diversity. How have the results been disseminated to communities of interest? We have a paper in revision at PLOS ONE describing a new bee species classification model aimed at identifying bees that are traditionally much more difficult to identify than Bombus. We have presented results of our work at 2 professional conferences, at 3 invited seminars, on social media, and at local outreach events. Outreach events have included educational events for grade school students and larger outreach events with the general public. What do you plan to do during the next reporting period to accomplish the goals? Nothing Reported

Impacts
What was accomplished under these goals? 1. We have continued to update our CNN classification model to include well over the 46 species originally proposed. Our model now includes 108 Bombus species and 78 other bee taxa from around the world. We have developed a pipeline so that our classification model will be improved (more taxa and greater reliability) as new image data for training become available. 2. We continue to build the dataset necessary to classify other native bee species besides Bombus. Our manuscript describing a CNN classification model for smaller and more difficult-to-identify bees, such as Lasioglossum (Dialictus), received positive reviews and is in the revision process. Although the period of this project is over, we are continuing the process of image data collection for updating and expanding classification algorithms. For example, we are now partnering with the USDA ARS facility in Logan, Utah, to image additional bee groups from their large collection of pinned specimens. Other opportunities for collaboration and imaging have been identified. We have two proposals pending for funds to continue our work. 3. Our mobile app has been publicly released. The app allows users to take new photos or input stored photos to receive an identification from our classification algorithm. User sightings are saved in their accounts so that they can review them and plot them on a map. Sightings for all users are stored in a cloud database that will soon be freely available for public use. Although officially released, the app will continue to be updated with new classification models and to include additional educational material.

Publications

  • Type: Journal Articles Status: Under Review Year Published: 2023 Citation: Spiesman BJ, C Gratton, E Gratton, and H Hines. 2023. Deep learning for identifying bee species from images of wings and pinned specimens. PLOS ONE. Under revision.


Progress 07/01/21 to 06/30/22

Outputs
Target Audience: We have reached scientists interested in pollinators and computer science through invited presentations and through social media. We have also participated in three working groups, attended by pollinator scientists, that were aimed at leveraging machine learning and computer vision for pollinator science. We have reached non-scientist pollinator enthusiasts through invited presentations, social media (Facebook and Twitter), and in-person outreach events. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? We have mentored two undergraduate students as part of a research experience program. They have learned skills in computer vision model development, database management, and app development. Graduate students in Entomology and Computer Science have used the datasets in their research and presented their findings at conferences. A graduate student and a postdoc in computer science have used preliminary data and written an NSF proposal for additional funds to develop a web-based annotation tool to help build image datasets for continued vision model training. How have the results been disseminated to communities of interest? We have submitted a paper to PLOS ONE describing our classification model of small, difficult-to-identify bees. We have presented our research findings at 2 conferences and at one invited seminar talk. We have also promoted our research products on social media and at in-person outreach events on the K-State campus and in the surrounding community. What do you plan to do during the next reporting period to accomplish the goals? During the next reporting period we will continue to develop image datasets to improve and expand our bee identification models. We will continue to improve our mobile app and promote it for wide use.
We will also begin to develop the website so that it can be a companion to the mobile app, where account holders can access their data and interact online by verifying machine-generated results. We will provide a way to download anonymized user sighting data for scientific research.

Impacts
What was accomplished under these goals? 1. Our first objective was to develop a deep learning classification model to identify the Bombus of North America. Our current model classifies most North American species. We continue to develop the image dataset so we can include the few remaining North American species. We have also expanded the scope of the Bombus classifier so that it now classifies 100 species from around the world, including South America, Europe, and parts of Asia. The updated model performs better than previous versions, with an overall accuracy of 93.7%. 2. We continue to build the dataset necessary to classify other native bee species besides Bombus. We recently submitted a manuscript describing a CNN classification model for smaller and more difficult-to-identify bees, such as Lasioglossum (Dialictus). This model performs very well, with an overall accuracy of 95%. We will continue to develop the datasets necessary for enhancing and expanding our bee species classification models. 3. We have developed a mobile app for Android and iOS devices that is capable of identifying bumble bee species from around the world. It uses our Bombus classifier (objective 1). The app allows users to take new photos or input stored photos of bees for identification. Sightings are stored in user accounts along with location, date, and other metadata, which will be provided freely for conservation science. The app is currently available on the app stores, but we are working out a few bugs that cause performance issues before we promote the app widely. We also continue to update and improve the web app, https://beemachine.ai, which has been updated with the current classification model.

Publications

  • Type: Journal Articles Status: Under Review Year Published: 2022 Citation: Spiesman BJ and C Gratton. 2022. Deep learning for identifying bee species from images of wings and pinned specimens. PLOS ONE. Under review.


Progress 07/01/20 to 06/30/21

Outputs
Target Audience: We have reached scientists interested in pollinators and computer science through a publication (Spiesman et al. 2021, Sci Reports), through invited presentations, and through social media. We have also participated in four working groups, attended by pollinator scientists, that were aimed at leveraging machine learning and big data for pollinator science. We have reached non-scientist pollinator enthusiasts through invited presentations and social media (Facebook and Twitter). Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? The image dataset that we have put together is being used by three other graduate students as part of their dissertations. They are working on novel image classification and object detection systems. A recently graduated MS student is now using our image acquisition pipeline to be trained in image annotation and database management. How have the results been disseminated to communities of interest? We have published a paper describing our bumble bee classification model results (Spiesman et al. 2021, Scientific Reports). We have also established social media accounts to promote the BeeMachine website on Facebook and Twitter. We have used the accounts to provide update notices, describe best practices for using our website, and engage with users. What do you plan to do during the next reporting period to accomplish the goals? Over the next year we will continue to develop the mobile app and hope to have a version available by March 2022, in time for the spring bee emergence. We will process images and specimens that were gathered in summer 2021 and incorporate these images into computer vision model updates.

Impacts
What was accomplished under these goals? 1. We have developed a computer vision model capable of classifying 41 of the 46 bumble bee species in North America at an overall accuracy of >92%. 2. We have developed a preliminary model that includes 25 species from Kansas. We collected physical specimens and corresponding image data over the summer of 2021 that are currently being processed for identification. These images will be incorporated into an update of the vision model. 3. In addition to the website, which can be accessed and used on mobile devices, we have started development of a dedicated mobile app. The app is being developed using React Native so that it can be used natively on both Android and iOS devices. The app is currently capable of using device cameras to take a photo and process the image using our bumble bee vision algorithm to classify the species.

Publications

  • Type: Websites Status: Published Year Published: 2020 Citation: https://beemachine.ai
  • Type: Conference Papers and Presentations Status: Other Year Published: 2021 Citation: Spiesman, BJ. 2021. AI and computer vision in bee ecology, conservation, and citizen science. Linda Hall Library. Invited Talk.
  • Type: Conference Papers and Presentations Status: Other Year Published: 2021 Citation: Spiesman, BJ. 2021. Deep learning for bee identification. Department of Computer Science, Kansas State University. Invited Talk.
  • Type: Conference Papers and Presentations Status: Other Year Published: 2020 Citation: Spiesman, BJ. 2020. Grassland pollinators: How disturbance impacts biodiversity and plant-pollinator interactions, Department of Entomology, University of Nebraska. Invited Talk.
  • Type: Journal Articles Status: Published Year Published: 2021 Citation: Spiesman BJ, C Gratton, RG Hatfield, WH Hsu, S Jepsen, B McCornack, K Patel, G Wang. 2021. Assessing the potential for deep learning and computer vision to identify bumble bee species from images. Scientific Reports 11:7580.