Performing Department
Medical Sciences
Non Technical Summary
Delayed detection of lameness and Digital dermatitis (DD, syn. Hairy Heel Warts) in cattle is a major animal health and well-being concern that affects more than 95% of US cattle. Tools for the early detection, treatment, prevention and management of DD-affected cattle are crucial for good antimicrobial treatment practice, residue avoidance and DD prevention through customized footbathing without hazardous chemicals. Because DD is considered an infectious claw disease of cattle, disinfecting footbaths are used to prevent it. Early detection of DD reduces the number of topical treatments, and systematic detection enables improved, strategic use of smaller amounts of disinfecting footbath chemicals such as the environmentally problematic copper sulfate and the carcinogenic formalin. In current practice, lameness detection and DD treatment and prevention are often initiated too late for full recovery, and many cows' lives and much production potential are lost as a consequence. Too many caustic and antimicrobial topical treatment agents are applied to chronic DD lesions with a poor prognosis for recovery, and footbath chemicals are wasted on severely lame and DD-affected cattle. Deep Learning and Computer Vision are excellent approaches to image and video classification for the early detection of health events. We therefore propose to further improve, validate and implement an existing Computer Vision tool (i) to automatically detect DD, (ii) to generate treatment lists in real time for dairy and beef cattle, and (iii) to provide a web-based application that classifies DD lesions in real time from images supplied by IP Webcams on tablets and phones. Such tools would be extremely important assets for producers in Wisconsin and the US, enhancing the sustainable productivity and quality of livestock and agriculture. The benefit of this project for the Wisconsin cattle industry is the creation of tools for the early detection of DD in dairy and beef cattle.
Animal Health Component
100%
Research Effort Categories
Basic
(N/A)
Applied
100%
Developmental
(N/A)
Goals / Objectives
Digital dermatitis is the most frequently identified infectious claw disease of cattle in the North American and global cattle industries. The disease causes outbreaks of lameness and severely impacts cattle well-being, production, and food security. The overall goal of the project is to apply and automate a Computer Vision assisted tool for the early, automated detection and prevention of the clinical stages of Digital dermatitis, the so-called M-stages, in dairy and beef cattle. This detection system will prevent animal suffering and optimize prevention and control measures. The following four objectives are formulated to reach this goal:
Obj. 1: Add labeled pictures of DD M-stage scores for dairy and beef cattle to the existing collection of 5,000 labeled pictures.
Obj. 2: Improve our existing Computer Vision tools for DD scoring and detection in order to validate and optimize DD detection on dairy and beef cattle farms.
Obj. 3: Package the existing Computer Vision tool for DD detection into a web application that generates treatment lists based on automated DD detection; detection images are fed into the model using IP Webcams on tablets or phones.
Obj. 4: Apply the automated Computer Vision prediction model to new DD and lameness images and videos from dairy farms and one feed yard to validate the predictions in practice, resulting in a written publication of the results in a peer-reviewed journal.
Project Methods
Obj 1: More images will be added to the existing 5,000 labeled images, with emphasis on the feet of beef cattle. Pictures and video sequences used for the proposed Deep Learning models originate from rotary and robotic milking parlors, parlor exit lanes and cow pens. Additional images and videos come from DD in beef cattle on feed yards. In addition, we have obtained new image and video data from 8 hours each of 12 robots on a 36-robot dairy farm, from 8 hours x 6 visits on a 4,400-cow rotary parlor dairy farm, and from 2 cycles of 1,000 beef cattle. On the beef feed yard, 8 hours at the beginning and 8 hours at the end of the finishing period have been recorded on a 10,000-head feed yard in IA (there are about 150 days between the beginning and end of the finishing period, and image data are taken when cattle exit monthly foot baths or when they are processed in restraining chutes). We have completed the collection of herd management data from the respective herds for the period when the images and videos were generated, and the data have been merged with the labeled image and video files by their time stamps using freeware R. Additional costs beyond the budgeted mileage in years 1 and 2 for travel to collect the new data are covered by routine herd investigation visits aimed at the prevention of DD and lameness.

Obj 2: Deep learning approaches in Keras (40) and TensorFlow using YOLOv2 or YOLOv3 classifiers (https://pjreddie.com/darknet/yolo/) score DD lesions in feet using a minimum of about 1,000 training images labeled for more than 2 DD lesion categories; we will expand this to detecting 5 DD lesion types (M0 = healthy, M2 = active ulcerative lesions, M2P = active ulcers with proliferations, M4H = chronic lesions with hyperkeratosis, and M4P = chronic DD with proliferations). As described under Obj 1, we will expand the number of images in order to achieve better predictions of DD M-stages with less risk of overfitting the models. A 10-fold cross-validation process with three iterations of splitting the data into 70% training and 30% test subsets will be used to optimize the balanced accuracy of the model predictions, and unbalanced categories within features will be accounted for using resampling techniques (a data-splitting sketch is given below). GPU-enhanced computing capacity for Transfer Learning and training of the models is available at the School of Veterinary Medicine, or free cloud computing services such as Google Colab will be used.

Feature extraction for the Computer Vision model is facilitated by the fact that cattle and feet are presented systematically in the same positions: ear tag and box numbers are imaged from the front or partial side view of the cattle, toplines of cows are always presented from the back (conventional and rotary milking parlors) or from the front and back (robotic milkers), and hind feet with DD lesions are always presented from the back. Labeling of the 'ground truth' for the observed DD lesions and lameness scores is done by trained personnel in the Dopfer lab. Cows are identified by antennas reading their electronic ear tags (RFIDs). Time stamps of cows recorded while exiting and standing in the rotary parlor or milking robots have been matched to images of ear tags and to readings from RFID antennas in the walking alleys. Time stamps were extracted using the software FFmpeg called via system commands in R, as sketched below.
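For illustration, a minimal R sketch of this time-stamp workflow is given below: the recording start time is read with ffprobe (part of the FFmpeg suite) through a system command, frame-level time stamps are derived from it, and labeled images are matched to RFID readings by nearest time stamp. The file names, column names and the 60-second matching window are assumptions for the sketch, not the validated farm pipeline.

```r
# Minimal sketch (assumed file and column names): read the recording start
# time with ffprobe via a system command, derive frame-level time stamps,
# and merge labeled images with herd RFID readings by nearest time stamp.

video_file <- "robot_camera_01.mp4"   # hypothetical clip from a milking robot
video_start <- as.POSIXct(
  system(paste("ffprobe -v quiet -show_entries format_tags=creation_time",
               "-of default=noprint_wrappers=1:nokey=1", video_file),
         intern = TRUE),
  format = "%Y-%m-%dT%H:%M:%S", tz = "UTC")

# labeled images: one row per scored foot with its offset (s) into the clip
images <- read.csv("labeled_images.csv")   # image_id, frame_offset_sec, m_stage
images$timestamp <- video_start + images$frame_offset_sec

# herd management data: RFID readings at the parlor or robot exit
herd <- read.csv("rfid_readings.csv")      # cow_id, rfid_time
herd$rfid_time <- as.POSIXct(herd$rfid_time, tz = "UTC")

# match each labeled image to the closest RFID reading within 60 seconds
images$cow_id <- sapply(seq_len(nrow(images)), function(i) {
  d <- abs(as.numeric(difftime(herd$rfid_time, images$timestamp[i], units = "secs")))
  if (min(d) <= 60) herd$cow_id[which.min(d)] else NA
})
write.csv(images, "merged_labels_herd.csv", row.names = FALSE)
```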
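The repeated 70/30 splitting, the resampling of unbalanced M-stage categories, and the balanced-accuracy metric described under Obj 2 can be sketched in base R as follows. The label column m_stage and the file name are assumptions carried over from the sketch above, and the prediction step is only a placeholder where the YOLO transfer learning and inference would run; this is an outline of the evaluation logic, not the trained model.

```r
# Base-R sketch of the repeated 70%/30% splitting with upsampling of minority
# M-stages and balanced-accuracy evaluation; the model fit/prediction step is
# a placeholder for the YOLO transfer learning described in the text.

labels <- read.csv("merged_labels_herd.csv")   # assumes an m_stage label column
lv <- sort(unique(labels$m_stage))             # e.g. M0, M2, M2P, M4H, M4P
set.seed(42)
n_iter <- 3
balanced_acc <- numeric(n_iter)

for (i in seq_len(n_iter)) {
  # stratified 70/30 split by M-stage
  idx_train <- unlist(lapply(split(seq_len(nrow(labels)), labels$m_stage),
                             function(ix) ix[sample.int(length(ix), floor(0.7 * length(ix)))]))
  train <- labels[idx_train, ]
  test  <- labels[-idx_train, ]

  # upsample minority M-stages in the training set to the largest class size
  max_n     <- max(table(train$m_stage))
  train_bal <- do.call(rbind, lapply(split(train, train$m_stage), function(d)
    d[sample(seq_len(nrow(d)), max_n, replace = TRUE), ]))

  # placeholder: train/refine the detector on 'train_bal', then predict 'test';
  # random labels are used here only so the sketch runs end to end
  pred <- sample(lv, nrow(test), replace = TRUE)

  # balanced accuracy = mean per-class recall from the confusion matrix
  cm <- table(truth = factor(test$m_stage, levels = lv),
              pred  = factor(pred, levels = lv))
  balanced_acc[i] <- mean(diag(cm) / rowSums(cm), na.rm = TRUE)
}
mean(balanced_acc)
```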
Images from cameras attached to herdsmen during pen walks, pen rides and quad drives, and images from cattle in stand-up restraining chutes or walking alleys, present feet and cattle so that the images can be readily used for the classification of DD in dairy as well as in beef cattle. The same Deep Learning prediction models applied to dairy cattle are applicable to beef cattle. Time stamps of images of the visual identification of beef cattle, together with their electronic ID (RFID) readings in the processing chutes, can be used to identify cattle in feed yards. We will implement strategies similar to those used for dairy cattle to generate images and videos of beef cattle while they are being processed upon receiving at the beginning of their finishing period in feed yards and again at the end of it. We propose to validate and optimize the Transfer Learning of the YOLO models through multiple iterations of 10-fold cross-validation and hyperparameter tuning (6) to increase prediction accuracy, precision and recall, and to flag cattle with all M2 lesions and proliferative M4 lesions as opposed to the other DD stages. The regression-based flagging results in treatment lists as output from the prediction models (a rule-based sketch of this flagging step is given at the end of this section). Our preliminary DD classifiers distinguish DD lesions from healthy claw skin, but the models need to recognize all 5 DD stages in dairy and beef cattle.

Obj 3 and Obj 4: Once the prediction models are validated and optimized under Obj 2, we will automate them using the shiny library in R, deploy the models using Docker, and make use of GPU-enhanced computers and free cloud computing services such as Google Colab (a skeleton of the shiny wrapper follows below). Images and videos are already being fed into the DD detection model in real time using IP Webcams on tablets and cell phones, with DD detection also performed in real time. This results in the availability of pre-trained prediction models built from customized data sets of old and new images and videos (see Obj 1 for information about the new images and videos). Predictions of DD on the new images and videos from Obj 1 will validate the application of the Computer Vision tools on commercial dairy and beef cattle farms. As an additional activity under Obj 4, we will disseminate the results in the form of at least one peer-reviewed paper; one paper reporting the current YOLO DD detection model in dairy cattle has already been accepted for publication in the Journal of Dairy Science.
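For the flagging step described under Obj 2, the sketch below shows how per-image detector output might be turned into a treatment list. The detection table, its column names (cow_id, m_stage, confidence, timestamp) and the 0.5 confidence cut-off are assumptions for illustration, and a simple rule stands in here for the regression-based flagging.

```r
# Sketch: flag cows with active (M2, M2P) or proliferative chronic (M4P)
# lesions and write a treatment list; column names and the 0.5 confidence
# cut-off are assumed placeholders, and a rule stands in for the regression.

detections <- read.csv("dd_detections.csv")   # cow_id, m_stage, confidence, timestamp

flag_stages <- c("M2", "M2P", "M4P")
flagged <- subset(detections, m_stage %in% flag_stages & confidence >= 0.5)

# keep one row per cow: the highest-confidence flagged detection
flagged        <- flagged[order(flagged$cow_id, -flagged$confidence), ]
treatment_list <- flagged[!duplicated(flagged$cow_id), ]

write.csv(treatment_list, "treatment_list.csv", row.names = FALSE)
```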
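For the web application under Obj 3, a skeleton of the shiny wrapper is sketched below. The scoring function score_dd() is a placeholder for the call to the pre-trained detector, and the file-upload control stands in for frames streamed from IP Webcams on tablets and phones; deployment details such as the Docker image are not shown.

```r
# Skeleton of the shiny wrapper around the pre-trained DD detector (Obj 3).
# score_dd() is a placeholder for the real model call; the file upload stands
# in for frames streamed from IP Webcams on tablets and cell phones.
library(shiny)

score_dd <- function(image_path) {
  # placeholder: the deployed app would run the pre-trained YOLO model here
  data.frame(m_stage = "M2", confidence = 0.93)
}

ui <- fluidPage(
  titlePanel("Digital dermatitis M-stage scoring"),
  fileInput("frame", "Camera frame (image file)"),
  tableOutput("score")
)

server <- function(input, output) {
  output$score <- renderTable({
    req(input$frame)
    score_dd(input$frame$datapath)
  })
}

shinyApp(ui = ui, server = server)
```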