Progress 06/01/24 to 05/31/25
Outputs Target Audience: The target audiences for this project are producers, animal scientists, academia, and animal industry professionals, as well as participants in the conference. Changes/Problems: No changes as of now. What opportunities for training and professional development has the project provided? Dr. Lalman has hired a post-doctoral fellow, and Dr. Mariana is handling the animal studies. We have also hired three graduate students to work on the AI, biosystems engineering, and animal science studies. How have the results been disseminated to communities of interest? Results have been presented at a national meeting and discussed with the Industry Advisory Group. What do you plan to do during the next reporting period to accomplish the goals? Continue and make progress on the proposed research.
Impacts What was accomplished under these goals?
Approximately 150 spring-calving and 150 fall-calving Angus-sired cows were used in year 1 of the experiment. The cows and their progeny were maintained at the Range Cow Research Center near Stillwater, Oklahoma. To investigate the influence of genetic capacity for growth rate and the impact of calving season on sudden death and stress in feedlot cattle, Angus cows were mated to sires that were divergent in genetic capacity for growth rate. Approximately six sires ranked in the top 5th percentile for growth and six sires below the 50th percentile for growth were used in year 1 to create high- and moderate-growth-rate cattle, respectively. All spring- and fall-calving cows were synchronized for timed artificial insemination. High- and moderate-growth natural-service sires were turned out with the cows beginning 10 days following timed artificial insemination. Within each calving season, all cows were managed as a contemporary group throughout the production cycle. Winter feeding and supplementation practices were managed to maintain a herd-average body condition score of 5.0 through the winter months.

Spring-born calves (born in February and March 2024) were weaned using the fence-line weaning technique at approximately 205 days of age. Calves were backgrounded for 60 days at the Range Cow Research Center before being shipped to the Willard Sparks Beef Cattle Research Center for finishing. During the weaning/backgrounding phase, calves were fed grass hay and approximately five pounds of a concentrate supplement. Fall-born calves (born August, September, and October 2024) will follow the same weaning and backgrounding protocol.

Finishing phase: Following the 60-day preconditioning phase, spring-born steers were transported to the Willard Sparks Beef Research Center (WSBRC) for finishing under typical feedlot conditions, with unrestricted group access to feed, water, and shade. Upon arrival at the WSBRC, steers were fitted with radio-frequency identification (RFID) tags, received preventive sanitary care, and were randomly assigned to two pens balanced for number of steers. Steers followed a common step-up routine, beginning with a 30% roughage starter diet and progressing through four step-up diets to an 8% roughage finishing diet over 28 days. After adaptation to the finishing diet, the two groups of steers were relocated to new pens fitted with Insentec Roughage Intake Control (RIC) feeders to monitor and record daily individual feeding behavior. Steers were adapted to the Insentec RIC feeders for a 28-day period. Following training, once all steers were confirmed to be using the RIC feeding units regularly, the data collection period was initiated. Steer weights were collected at 14-day intervals at 0700, before feed delivery, and growth performance was estimated by regressing body weight on days on feed to determine average daily gain. Growth rate and feed intake were monitored for 70 days following the start of the collection period, and feed efficiency (kg gain per kg of feed) and residual feed intake were determined for each steer. The studies and data collection are ongoing.

Citation: Vavilthota, Venkata Ragavendra, Ranjith Ramanathan, and Sathyanarayanan N. Aakur. "Capturing Temporal Components for Time Series Classification." In International Conference on Pattern Recognition, pp. 215-230. Cham: Springer Nature Switzerland, 2024.
https://doi.org/10.1007/978-3-031-78107-0
Abstract: Analyzing sequential data is crucial in many domains, particularly due to the abundance of data collected from the Internet of Things paradigm. Time series classification, the task of categorizing sequential data, has gained prominence, with machine learning approaches demonstrating remarkable performance on public benchmark datasets. However, progress has primarily been in designing architectures for learning representations from raw data at fixed (or ideal) time scales, which can fail to generalize to longer sequences. This work introduces a compositional representation learning approach trained on statistically coherent components extracted from sequential data. Based on a multiscale change space, an unsupervised approach is proposed to segment the sequential data into chunks with similar statistical properties. A sequence-based encoder model is trained in a multi-task setting to learn compositional representations from these temporal components for time series classification. We demonstrate its effectiveness through extensive experiments on publicly available time series classification benchmarks. Evaluating the coherence of segmented components shows its competitive performance on the unsupervised segmentation task.
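As a minimal sketch of the growth-performance calculations described for the finishing phase above, the snippet below computes average daily gain by regression of body weight on days on feed, feed efficiency as kg of gain per kg of feed, and residual feed intake. The column names and the residual-feed-intake model (observed intake minus intake predicted from average daily gain and metabolic mid-test body weight) are illustrative assumptions; the report does not specify the exact model used.

```python
import numpy as np
import pandas as pd

def average_daily_gain(days, weights):
    """ADG (kg/d) as the slope of a linear regression of body weight on days on feed."""
    slope, _intercept = np.polyfit(days, weights, deg=1)
    return slope

def performance_summary(weigh_data: pd.DataFrame, intake_data: pd.DataFrame) -> pd.DataFrame:
    """Per-steer ADG, feed efficiency (kg gain / kg feed), and residual feed intake.

    weigh_data:  columns ['steer_id', 'day', 'body_wt_kg']   (14-day interval weights)
    intake_data: columns ['steer_id', 'day', 'dmi_kg']       (daily intake from RIC feeders)
    Column names and the RFI model (DMI ~ ADG + mid-test BW^0.75) are illustrative assumptions.
    """
    rows = []
    for steer, grp in weigh_data.groupby('steer_id'):
        adg = average_daily_gain(grp['day'].values, grp['body_wt_kg'].values)
        dmi = intake_data.loc[intake_data['steer_id'] == steer, 'dmi_kg']
        total_gain = adg * (grp['day'].max() - grp['day'].min())
        # Mid-test body weight from the fitted regression, evaluated at the mid-point day.
        mid_wt = np.polyval(np.polyfit(grp['day'], grp['body_wt_kg'], 1), grp['day'].mean())
        rows.append({'steer_id': steer,
                     'adg_kg_d': adg,
                     'mean_dmi_kg_d': dmi.mean(),
                     'feed_efficiency': total_gain / dmi.sum(),   # kg gain per kg feed
                     'mid_metabolic_wt': mid_wt ** 0.75})
    perf = pd.DataFrame(rows)

    # Residual feed intake: observed minus expected daily intake, where expected intake
    # is the least-squares prediction from ADG and metabolic mid-test body weight.
    X = np.column_stack([np.ones(len(perf)), perf['adg_kg_d'], perf['mid_metabolic_wt']])
    beta, *_ = np.linalg.lstsq(X, perf['mean_dmi_kg_d'], rcond=None)
    perf['rfi_kg_d'] = perf['mean_dmi_kg_d'] - X @ beta
    return perf
```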
Publications
- Type: Conference Papers and Presentations
Status: Published
Year Published: 2024
Citation: Vavilthota, Venkata Ragavendra, Ranjith Ramanathan, and Sathyanarayanan N. Aakur. "Capturing Temporal Components for Time Series Classification." In International Conference on Pattern Recognition, pp. 215-230. Cham: Springer Nature Switzerland, 2024.
Progress 06/01/23 to 05/31/24
Outputs Target Audience: The target audiences for this project are producers, animal scientists, academia, and animal industry professionals. Changes/Problems:
Nothing Reported
What opportunities for training and professional development has the project provided? Dr. Lalman has hired a post-doctoral fellow, who is working with the sensors to be used in this research. How have the results been disseminated to communities of interest?
Nothing Reported
What do you plan to do during the next reporting period to accomplish the goals? We are starting the animal experiments in October. In addition, we will start working on Objective 2.
Impacts What was accomplished under these goals?
We started working on Objective 1a. Development of various algorithms for multimodal data is in progress. Multimodal data, such as sensor, video, and audio data collected from farm animals, provide a rich representation of the various factors that affect stress levels in animals. However, while these data provide a comprehensive view of an animal's behavior, they can contain noisy observations and, sometimes, irrelevant information that does not contribute to inferring the animal's stress level. As a first step in AI-based livestock monitoring, we worked on two aspects of multimodal livestock activity monitoring and evaluated them on publicly available datasets. Specifically, we explored how time series data, such as sensor data (ECG, EEG, IMU, etc.), are temporally structured and exploited these structures to learn robust representations using deep learning frameworks. Additionally, we explored how individual behaviors evolve within a social dynamic. Specifically, we aim to understand how livestock behavior changes as animals move and interact with other individuals in a group setting. This will allow us to model their stress levels as a function of social experience. We briefly describe the two works below.

In the first work, we addressed the problem of Social Activity Recognition (SAR), a critical component in real-world tasks like livestock activity surveillance and stress monitoring. This work is currently under review at an international conference focused on machine learning and pattern recognition and provides one of the first approaches to tackle SAR in an unsupervised manner and from streaming videos, i.e., without any labeled data and without storing the data locally or in the cloud. Unlike traditional event understanding approaches, SAR necessitates modeling individual actors' appearance and motions and contextualizing them within their social interactions. Traditional action localization methods fall short due to their single-actor, single-action assumption. Previous SAR research has relied heavily on densely annotated data, but privacy concerns limit its applicability in real-world settings. In this work, we propose a self-supervised approach based on multi-actor predictive learning for SAR in streaming videos. Using a visual-semantic graph structure, we model social interactions, enabling relational reasoning for robust performance with minimal labeled data. We make three specific contributions toward action understanding from videos. First, we are the first to tackle the problem of self-supervised social activity detection in streaming videos. Second, we show that relational reasoning over the proposed visual-semantic graph structure, via spatial and temporal graph smoothing, can help learn the social structure of cluttered scenes in a self-supervised manner, requiring only a single pass through the training data to achieve robust performance. Third, we show that the framework generalizes to arbitrary action localization without bells and whistles, achieving competitive performance on publicly available benchmarks. The proposed framework achieves competitive performance on standard human group activity recognition benchmarks, and evaluation on three publicly available human action localization benchmarks demonstrates its generalizability to arbitrary action localization. In the second work, we tackle the problem of time series classification, the task of categorizing sequential data.
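To make the relational-reasoning step concrete, the sketch below shows one simplified form of spatial and temporal graph smoothing over a visual-semantic graph: actor detections are nodes, spatial edges connect actors that co-occur in a frame, temporal edges link the same actor across adjacent frames, and node features are propagated over the normalized adjacency. This is a hedged illustration of the general idea only, not the model under review; the graph construction, edge weights, and number of smoothing steps are assumptions.

```python
import numpy as np

def build_adjacency(frame_ids, actor_ids):
    """Spatial edges between actors that co-occur in a frame; temporal edges between
    detections of the same actor in consecutive frames (assumed graph construction)."""
    n = len(frame_ids)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            same_frame = frame_ids[i] == frame_ids[j]
            same_actor_adjacent = (actor_ids[i] == actor_ids[j]
                                   and abs(frame_ids[i] - frame_ids[j]) == 1)
            if same_frame or same_actor_adjacent:
                A[i, j] = A[j, i] = 1.0
    return A

def graph_smoothing(features, adjacency, alpha=0.5, steps=1):
    """Propagate node features over the symmetrically normalized adjacency.

    features : (N, D) node features (one node per actor detection per frame)
    adjacency: (N, N) binary adjacency combining spatial and temporal edges
    alpha    : mixing weight between a node's own features and its neighbors'
    """
    # Add self-loops so isolated nodes keep their own features, then normalize D^-1/2 A D^-1/2.
    A = adjacency + np.eye(adjacency.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    smoothed = features
    for _ in range(steps):
        smoothed = (1 - alpha) * smoothed + alpha * (A_norm @ smoothed)
    return smoothed
```

In such a scheme, repeated smoothing steps let each actor's representation absorb context from spatially nearby actors and from its own recent history, which is one way to express the "social structure" reasoning described above.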
This work is also under review at an international conference focused on machine learning and pattern recognition. Analyzing sequential data is crucial for producing actionable outcomes from data collected under the Internet of Things paradigm. Machine learning approaches demonstrate remarkable performance on public benchmark datasets. However, progress has primarily been in designing architectures for learning representations from raw data at fixed (or ideal) time scales, which can fail to generalize to longer sequences. This work introduces a compositional representation learning approach trained on statistically coherent components extracted from sequential data. Based on a multi-scale change space, an unsupervised approach is proposed to segment the sequential data into chunks with similar statistical properties. A sequence-based encoder model is trained in a multi-task setting to learn compositional representations from these temporal components for time series classification. We make four specific contributions to multimodal understanding in this work. First, we are, to the best of our knowledge, the first to introduce a multi-scale change space for time series data to segment them into statistically atomic components. Second, we introduce the notion of compositional feature learning from temporally segmented components in time series data rather than modeling the raw data points. Third, we show that the temporal components detected by the algorithm are highly correlated with natural boundaries in time series data by evaluating it on the time series segmentation task, achieving state-of-the-art performance compared with other non-learning-based approaches. Finally, we establish a baseline that provides competitive performance with state-of-the-art approaches on benchmark datasets for both time series classification and segmentation, with limited training needs and without explicit handcrafting. We demonstrate its effectiveness through extensive experiments on publicly available time series classification benchmarks. Evaluating the coherence of segmented components shows its competitive performance on the unsupervised segmentation task.
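The multi-scale change space and unsupervised segmentation can likewise be illustrated with a simplified sketch: change scores are computed from differences in local mean and standard deviation between adjacent windows at several window sizes, and segment boundaries are placed at the strongest peaks. This is only a rough stand-in for the method described above, which is under review; the window sizes, number of segments, and peak-picking rule are illustrative assumptions.

```python
import numpy as np

def change_scores(x, window):
    """Change score at each index: difference in mean and standard deviation between
    the window immediately before and the window immediately after that index."""
    n = len(x)
    scores = np.zeros(n)
    for t in range(window, n - window):
        left, right = x[t - window:t], x[t:t + window]
        scores[t] = abs(left.mean() - right.mean()) + abs(left.std() - right.std())
    return scores

def segment(series, windows=(16, 32, 64), min_gap=20, n_segments=5):
    """Split a 1-D series into statistically coherent chunks by picking the strongest
    peaks of a multi-scale change score (scores averaged over several window sizes)."""
    x = np.asarray(series, dtype=float)
    multi = np.mean([change_scores(x, w) for w in windows], axis=0)

    boundaries = []
    for t in np.argsort(multi)[::-1]:            # consider strongest change points first
        if all(abs(t - b) >= min_gap for b in boundaries):
            boundaries.append(int(t))
        if len(boundaries) >= n_segments - 1:
            break

    edges = [0] + sorted(boundaries) + [len(x)]
    return [x[edges[i]:edges[i + 1]] for i in range(len(edges) - 1)]
```

Each returned chunk could then be encoded separately and the component representations composed for classification, mirroring the compositional feature learning described above.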
Publications