Theses and Dissertations
Permanent URI for this collection: https://hdl.handle.net/10217/100389
Recent Submissions
Item Embargo
Scalable predictive modeling for spatiotemporally evolving phenomena (Colorado State University. Libraries, 2025)
Khandelwal, Paahuni, author; Pallickara, Sangmi Lee, advisor; Pallickara, Shrideep, committee member; Ghosh, Sudipto, committee member; Andales, Allan, committee member

Spatiotemporally evolving phenomena occur in epidemiology, atmospheric sciences, agriculture, and traffic management, among other domains. Models can be used to understand these phenomena and inform decision-making. There has been a growth in both mechanistic and physics-informed methods to model phenomena. A challenge with such models is the extensive parametrization and calibration they require, which can be difficult at the continental scale. This growth has occurred alongside the availability of diverse data that model-fitting algorithms can leverage. This dissertation focuses on leveraging deep learning methods to model spatiotemporally evolving phenomena by combining sparse but high-precision in situ measurement data with voluminous, low-precision satellite imagery. We explore several techniques to integrate scientific models and harness diverse data sources despite disparities in their precision, spatial coverage, and temporal resolution. We also regulate how the networks learn by designing custom multipart loss functions that combine traditional measures of accuracy with physics/domain-informed variability. As data volumes increase, there is a corresponding increase in the resource requirements – GPU, memory, disk, and network I/O – for model training.
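The multipart loss idea mentioned above can be illustrated with a minimal sketch (our hypothetical illustration, not the dissertation's actual loss; all names and weights are assumptions): a traditional accuracy term combined with a physics/domain-informed penalty.

```python
def multipart_loss(preds, targets, physics_residuals, alpha=1.0, beta=0.1):
    """Toy multipart loss: a traditional accuracy term (MSE) plus a
    physics/domain-informed penalty term, combined with weights."""
    # Traditional accuracy term: mean squared error against observations.
    mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
    # Domain-informed term: penalize residuals of a known physical constraint.
    physics = sum(r ** 2 for r in physics_residuals) / len(physics_residuals)
    return alpha * mse + beta * physics

# Perfect predictions still incur loss when physics residuals are nonzero.
loss = multipart_loss([1.0, 2.0], [1.0, 2.0], [0.5, -0.5])
```

The two weights let the training procedure trade off fidelity to sparse in situ measurements against consistency with domain knowledge.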
To address scalability issues, we designed a framework that manages multi-dimensional data volumes, partitions data effectively, curtails modeling costs, and incorporates transfer learning schemes to improve the efficiency of model training workflows. By incorporating scientific knowledge into the learning process, this research addresses the challenges of limited data availability and the data-intensive nature of deep neural networks. The methods generalize effectively, paving the way for scalable and accurate models in data-scarce domains.

Item Embargo
Multi-stream deep learning for isolated sign language recognition in videos (Colorado State University. Libraries, 2025)
Alsharif, Muhammad H., author; Anderson, Charles, advisor; Kirby, Michael, committee member; Blanchard, Nathaniel, committee member; Peterson, Christopher, committee member

Isolated sign language recognition is the task of identifying signs performed in isolation across multiple frames of a video. Advances in this field have significant implications, such as improving visual communication between humans and machines and bridging the communication gap between deaf and hearing individuals. However, practical applications have been limited by two key challenges: the computational complexity of current models and the limited availability of training data for many sign language vocabularies. This dissertation addresses these challenges, driven by the goals of improving recognition accuracy and computational efficiency. 3D convolutional models with RGB and optical flow inputs have been widely utilized in state-of-the-art methods for action recognition. Despite their significant computational costs, systematic evaluation of their contribution to sign recognition has been limited. We first evaluate the effectiveness of 3D convolutional networks, showing that they significantly outperform their 2D counterparts on several sign recognition datasets, even when compared to a deeper 2D architecture.
Additionally, this research challenges conventional assumptions about optical flow, demonstrating through ablation studies that its primary value lies in masking irrelevant (static) regions rather than in improving the learning of motion patterns for sign recognition. In addition to RGB and optical flow, this work investigates skeleton-based sign language recognition using recurrent, transformer, and spatiotemporal convolutional graph networks. Our experimental results demonstrate the importance of the spatiotemporal sparse graph representation of skeleton data (coordinates of body and hand joints) in improving accuracy and interpretability through edge-importance weighting. To address the limited amount of training data for many signs, we propose a coarse-to-fine transfer learning approach that adapts spatiotemporal features learned from large action recognition and Turkish sign language datasets to American Sign Language (ASL) datasets. This approach yields significant improvements across multiple modalities and benchmarks. To combine different models in a multi-stream network, we propose several methods for fusing the stream outputs before and after classification. To find the best combination of models using RGB, optical flow, or skeleton input modalities, we train and evaluate all possible combinations of two- and three-stream networks on three sign recognition datasets. Our findings show that combining RGB and skeleton-based streams provides the most significant gain over the RGB baseline, owing to greater diversity in stream predictions. In contrast, combining RGB and optical flow-based streams significantly increases the computational cost, due to optical flow extraction, without improving accuracy over two RGB streams. Our two- and three-stream networks, using only RGB and skeleton data as input modalities, achieve new state-of-the-art accuracy on the two largest ASL video datasets, which include 1,000 and 2,000 signs.
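Fusing stream outputs after classification (late fusion) can be sketched as averaging per-stream class scores; this is a toy illustration under our own assumptions, not the dissertation's exact fusion rule.

```python
def late_fusion(*stream_scores):
    """Average class-score vectors from multiple streams (e.g., an RGB
    stream and a skeleton stream) and return the winning class index."""
    n_classes = len(stream_scores[0])
    fused = [sum(s[c] for s in stream_scores) / len(stream_scores)
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# The RGB stream is unsure; the skeleton stream is confident in class 2.
pred, fused = late_fusion([0.40, 0.35, 0.25], [0.10, 0.10, 0.80])
```

When the streams disagree in informative ways, as with RGB and skeleton inputs, averaging their scores can flip the decision toward the more confident stream, which is one intuition for why diverse stream predictions help.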
Our approach achieves over 90% top-5 recognition accuracy on all benchmarks while significantly reducing computational costs compared to state-of-the-art methods. These findings facilitate real-time applications on mobile devices aimed at improving convenience in the daily lives of deaf individuals and helping to overcome communication barriers.

Item Open Access
Optimizations of polyhedral reductions and their use in algorithm-based fault tolerance (Colorado State University. Libraries, 2025)
Narmour, Louis, author; Rajopadhye, Sanjay, advisor; Pouchet, Louis-Noël, committee member; Prabhu, Vinayak, committee member; Pezeshki, Ali, committee member

In this dissertation, we study the optimization of programs containing reductions and motivate a deeper connection between two ostensibly unrelated problems: one involving techniques for algorithmic improvement, and another in the domain of Algorithm-Based Fault Tolerance. Reductions combine collections of inputs with an associative, and often commutative, operator to produce collections of outputs. Such operations are interesting because they often require special handling to obtain good performance. When the same value contributes to multiple outputs, there is an opportunity to reuse partial results, enabling reduction simplification. Prior work showed how to exploit this and obtain a reduction (pun intended) in the program's asymptotic complexity through a program transformation called simplification. We extend prior work on simplification, provide the first complete push-button implementation of reduction simplification in a compiler, and show how to handle a strictly more general class of programs than previously supported. We evaluate its effectiveness and show that simplification rediscovers several key results in algorithmic improvement across multiple domains, results previously obtained only through clever manual analysis and effort.
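The reuse of partial results that simplification exploits can be seen in a classic example (our illustration, not the compiler's output): sliding-window sums, where the naive reduction recomputes overlapping work and the simplified version reuses the previous window's result, dropping the cost from O(NW) to O(N).

```python
def window_sums_naive(xs, w):
    # Each output recomputes its reduction from scratch: O(N * W) work.
    return [sum(xs[i:i + w]) for i in range(len(xs) - w + 1)]

def window_sums_simplified(xs, w):
    # Reuse the previous window's partial result: O(N) work.
    out = [sum(xs[:w])]
    for i in range(1, len(xs) - w + 1):
        out.append(out[-1] - xs[i - 1] + xs[i + w - 1])
    return out

assert window_sums_naive([1, 2, 3, 4, 5], 3) == window_sums_simplified([1, 2, 3, 4, 5], 3)
```

The simplified version works because the window-sum operator (addition) has an inverse; simplification in the polyhedral setting generalizes this kind of rewriting to multi-dimensional reductions.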
Additionally, we complement this work by studying generalized, automated fault tolerance against transient errors, such as those caused by cosmic radiation or by hardware component aging and degradation, using Algorithm-Based Fault Tolerance (ABFT). ABFT methods typically work by adding redundant computation in the form of invariant checksums (i.e., reductions), which, by definition, should not change as the program executes. By computing and monitoring checksums, it is possible to detect errors by observing differences in the checksum values. However, this is challenging for two key reasons: (1) it requires careful manual analysis of the input program, and (2) the checksum computations must subsequently be carried out efficiently enough to be worthwhile. We propose automation techniques for a class of scientific codes called stencil computations and give methods to carry out this analysis at compile time. This is the first work to propose such an analysis in a compiler.

Item Open Access
Formal verification of source-to-source transformations for high-level synthesis (Colorado State University. Libraries, 2025)
Tucker, Emily, author; Pouchet, Louis-Noël, advisor; Prabhu, Vinayak, committee member; Ortega, Francisco, committee member; Wilson, James, committee member

Hardware processors are designed using a complex optimization flow, starting from a high-level description of the functionalities to be implemented. This description is then progressively lowered to concrete hardware: Register-Transfer Level (RTL) functional behavior, timing between operations, and eventually actual logic gates are produced. High-level synthesis (HLS) can greatly facilitate the description of complex hardware implementations by raising the level of abstraction up to a classical imperative language such as C/C++, usually augmented with vendor-specific pragmas and APIs. HLS automatically compiles a large class of C/C++ programs to highly optimized RTL.
Despite these productivity improvements, attaining high performance for the final design remains a challenge, and higher-level tools such as source-to-source compilers have been developed to generate programs targeting HLS toolchains. These tools may generate highly complex HLS-ready C/C++ code, reducing the programming effort and enabling critical optimizations. However, whether these HLS-friendly programs are produced by a human or a tool, validating their correctness, or otherwise exposing bugs, remains a fundamental challenge. In this work we target the problem of efficiently checking the semantic equivalence between two programs written in C/C++ as a means of ensuring the correctness of the description provided to the HLS toolchain, by proving that an optimized code version fully preserves the semantics of the unoptimized one. We introduce a novel formal verification approach that combines concrete and abstract interpretation with a hybrid symbolic analysis. Notably, our approach is mostly agnostic to how control flow, data storage, and dataflow are implemented in the two programs. It can prove equivalence under complex bufferization and loop/syntax transformations for a rich class of programs with statically interpretable control flow. We present our techniques and their complete end-to-end implementation, demonstrating how our system can verify the correctness of highly complex programs generated by source-to-source compilers for HLS and detect bugs that may elude co-simulation.

Item Open Access
Time series analysis over sparse, non-stationary datasets with variational mode decomposition and transfer learning (Colorado State University. Libraries, 2025)
Patterson, Katherine, author; Pallickara, Shrideep, advisor; Pallickara, Sangmi, advisor; Andales, Allan, committee member

Data volumes have been growing exponentially across many domains. However, in fields such as ecology and environmental monitoring, data remain sparse, creating unique challenges.
One such challenge is detecting extreme events (sudden spikes or anomalies in the data) and understanding their causes based on spatiotemporal patterns. The difficulty is exacerbated by time lags between an observed outlier and its underlying trigger, making causal attribution and forecasting difficult. These challenges have significant implications, particularly for environmental protection and regulatory compliance. This thesis explores time-series analysis over sparse, non-stationary datasets to support outlier detection and forecasting. We mitigate non-stationarity using variational mode decomposition (VMD) to break the signal into multiple seasonal components. To tackle the challenges of long-term seasonality, we leverage information from the frequency domain about dominant lagged relationships within these signals. Finally, we leverage transfer learning to warm-start models at spatial extents where the data are sparse. We validate these ideas in the context of nutrient runoff into surface waters, where identifying and explaining anomalies is critical for the protection of ecosystems. Challenges arise from three main factors: (1) nutrient time series are naturally non-stationary, which complicates the identification of underlying patterns; (2) temporal models often struggle over an entire season's span; and (3) water quality measurements are often sporadic and sparse. Results showed that historical similarity mapping of these spatiotemporal profiles and their frequency-motivated seasonality characteristics improved prediction performance on each target series. Additionally, the final proposed model captured more series fluctuations than the base models.

Item Embargo
Harnessing large language models for permission fidelity analysis from Android application descriptions (Colorado State University. Libraries, 2025)
Tamrakar, Yunik, author; Ray, Indrakshi, advisor; Banerjee, Ritwik, advisor; Ghosh, Sudipto, committee member; Simske, Steve, committee member

Android applications are enormously popular; as of mid-2024, the Google Play Store hosts over 2 million applications. With such a large number of applications available for download, the threat of privacy leakage increases considerably, primarily because users have limited ability to judge which app permissions are necessary. This makes accurate and consistent checking of the permissions collected by applications necessary to protect user privacy. Studies have indicated that inferring permissions from app descriptions is an effective way to determine whether the collected permissions are necessary. Previous research in the permission inference space has explored techniques such as keyword-based matching, Natural Language Processing methods (including part-of-speech tagging and named entity recognition), and deep learning-based approaches using Recurrent Neural Networks. However, app descriptions are often vague and may omit details to meet sentence length restrictions, resulting in suboptimal performance of these models. This limitation motivated our choice of large language models (LLMs), as their advanced contextual understanding and ability to infer implicit information can directly address the weaknesses observed in previous approaches. In this work, we explore various LLM architectures for the permission inference task and provide a detailed comparison across models. We evaluate both zero-shot learning and fine-tuning based approaches, demonstrating that fine-tuned models can achieve state-of-the-art performance. Additionally, by employing targeted generative-AI-based training data augmentation techniques, we show that these fine-tuned models can significantly outperform baseline methods.
Furthermore, we illustrate the potential of leveraging paraphrasing to boost fine-tuned performance by over 50 percent, all while using only a very small number of annotated samples, a rarity for LLMs.

Item Open Access
Resiliency analysis of mission-critical systems using formal methods (Colorado State University. Libraries, 2025)
Abdelgawad, Mahmoud A., author; Ray, Indrakshi, advisor; Malaiya, Yashwant, committee member; Sreedharan, Sarath, committee member; Daily, Jeremy, committee member

Mission-critical systems, such as navigational spacecraft and drone surveillance systems, play a crucial role in a nation's safety and security. These systems consist of heterogeneous subsystems that work together to accomplish critical missions. However, they are susceptible to cyberattacks and physical incidents that can have devastating consequences. Missions must therefore be designed so that mission-critical systems can withstand adverse events and continue to operate effectively even when such events occur. In other words, critical-mission engineers must specify, analyze, and anticipate potential threats, identify where adverse events may occur, and develop mitigation strategies before deploying a mission-critical system. This work presents an end-to-end methodology for analyzing the resiliency of critical missions. The methodology first specifies a mission in the form of a workflow. The mission workflow is then converted into a formal representation using Colored Petri Nets (CPN). Threat models are also extracted from the mission specification and represented as CPN attacks, which subject the CPN mission to various attack scenarios. The methodology exploits the state transitions of the CPN mission, with CPN attacks attached, to analyze the resiliency of the mission. The analysis identifies the states in which the mission succeeds, fails, or remains incomplete.
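The state-transition analysis can be illustrated with a toy reachability check (a drastic simplification of CPN semantics; the state names and transitions below are hypothetical, not from the dissertation).

```python
from collections import deque

# Toy mission transition system: states and (possibly attack-induced) moves.
TRANSITIONS = {
    "start":       ["recon", "jammed"],        # an attack may jam communications
    "recon":       ["report", "gps_spoofed"],  # or spoof GPS during reconnaissance
    "jammed":      [],                         # mission stalls: incomplete
    "gps_spoofed": ["failed"],
    "report":      ["succeeded"],
    "failed":      [],
    "succeeded":   [],
}

def classify_reachable(start="start"):
    """Explore all reachable states and record which terminal outcomes
    (succeeded / failed / incomplete) the mission can end in."""
    seen, frontier = set(), deque([start])
    outcomes = {"succeeded": False, "failed": False, "incomplete": False}
    while frontier:
        s = frontier.popleft()
        if s in seen:
            continue
        seen.add(s)
        successors = TRANSITIONS[s]
        if not successors:  # terminal state: classify the outcome
            if s in ("succeeded", "failed"):
                outcomes[s] = True
            else:
                outcomes["incomplete"] = True
        frontier.extend(successors)
    return outcomes
```

A full CPN analysis tracks colored tokens and firing rules rather than bare state names, but the end product is the same kind of partition of reachable states into success, failure, and incompleteness.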
We established a mission for a mission-critical formation consisting of a military vehicle and two route-reconnaissance drones that collaborate to monitor a national border and respond promptly to physical threats. The effectiveness of the methodology is demonstrated in identifying vulnerabilities, modeling adversarial conditions, and evaluating mission continuity under disruptions. The results show how to refine the mission to enhance the resilience of such formations. The findings contribute to an early-stage resilience analysis framework and help address the limitations associated with manual verification of mission-critical systems.

Item Open Access
Enabling programmatic interfaces for explorations over voluminous spatiotemporal data collections (Colorado State University. Libraries, 2025)
Barram, Kassidy M., author; Pallickara, Shrideep, advisor; Pallickara, Sangmi, advisor; Arabi, Mazdak, committee member

This thesis focuses on enabling programmatic interfaces for exploratory analyses over voluminous data collections. The data we consider can be encoded in diverse formats and managed using diverse data storage frameworks. Our framework, Scrybe, manages the competing pulls of expressive computations and the need to conserve resource utilization in shared clusters. The framework includes support for differentiated quality of service, allowing preferentially higher resource utilization for certain users. We have validated our methodology with voluminous data collections housed in relational, NoSQL/document, and hybrid storage systems. Our benchmarks demonstrate the effectiveness of our methodology across evaluation metrics such as latencies, throughputs, preservation of resource thresholds, and differentiated services.
These quantitative performance measures are complemented by qualitative metrics that profile user interactions with the framework.

Item Open Access
Preventing malicious modifications to firmware using hardware root of trust (HRoT) (Colorado State University. Libraries, 2025)
Podder, Rakesh, author; Ray, Indrajit, advisor; Sreedharan, Sarath, advisor; Ray, Indrakshi, committee member; Jayasumana, Anura, committee member

As computing devices such as servers, workstations, laptops, and embedded systems are transported from one site to another, they are susceptible to unauthorized firmware modifications. Additionally, traditional over-the-air (OTA) firmware update mechanisms often lack robust security features, exposing devices to threats such as unauthorized updates and malware injection. While the industry has made efforts to secure the boot process using a hardware root of trust (HRoT), post-boot firmware tampering remains a significant risk. In this work, we introduce a comprehensive framework that addresses firmware security across both the transit and remote-update phases by leveraging HRoT and cryptographic techniques. To prevent unauthorized firmware modifications during device shipment, we propose the PIT-Cerberus (Protection In Transit) framework, which extends the HRoT's attestation capabilities to securely lock and unlock BIOS/UEFI. In addition, we introduce the Secure Remote Firmware Update Protocol (S-RFUP) to fortify OTA firmware updates by incorporating industry standards such as the Platform Level Data Model (PLDM) and the Management Component Transport Protocol (MCTP). These standards enable interoperability across diverse platforms while reducing management complexity. The protocol enhances security and operational integrity during updates, ensuring that only authenticated and verified firmware modifications occur.
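The core check in any secure update path, accepting an image only if its digest authenticates against a trusted key, can be sketched with Python's standard library. This is a generic illustration, not S-RFUP: HMAC over a SHA-256 digest stands in for the HRoT's signing machinery, and all names are hypothetical.

```python
import hashlib
import hmac

DEVICE_KEY = b"shared-secret-provisioned-at-manufacture"  # stand-in for an HRoT key

def sign_manifest(firmware: bytes) -> bytes:
    """Vendor side: authenticate the firmware image's digest with the device key."""
    digest = hashlib.sha256(firmware).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_and_apply(firmware: bytes, tag: bytes) -> bool:
    """Device side: accept the update only if the tag authenticates the
    received image's digest (constant-time comparison)."""
    digest = hashlib.sha256(firmware).digest()
    expected = hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

image = b"\x7fFIRMWARE v2.1"
tag = sign_manifest(image)
assert verify_and_apply(image, tag)             # authentic update accepted
assert not verify_and_apply(image + b"!", tag)  # tampered image rejected
```

A production protocol would use asymmetric signatures and hardware-protected keys rather than a shared secret, so the device never holds material that could forge an update.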
Both frameworks are implemented within a trusted microcontroller as part of Project Cerberus, an open-source security platform for server hardware. We present a security analysis, implementation details, and validation results, demonstrating the effectiveness of our approach in securing firmware both in transit and during remote updates.

Item Open Access
Rapid interactive explorations of voluminous spatial temporal datasets (Colorado State University. Libraries, 2025)
Young, Matthew Branley, author; Pallickara, Shrideep, advisor; Pallickara, Sangmi, advisor; Arabi, Mazdak, committee member

Spatial data volumes have grown exponentially alongside the proliferation of sensing equipment and networked observational devices. In this thesis, we describe aQua, a framework for visualizing and exploring spatiotemporally evolving phenomena at scale, and Rubiks, which supports effective summarization and exploration at scale over arbitrary spatiotemporal scopes: the spatial extents, temporal bounds, or combinations thereof over the data space of interest. We validate these ideas in the context of data from the National Hydrology Database (NHD) and the Environmental Protection Agency (EPA) to support longitudinal analysis (53 years of data) for the vast majority of water bodies in the United States. Our methodology addresses issues relating to preserving interactivity, effective analysis, dynamic query generation, and scaling. We extend the concept of data cubes to encompass high-dimensional spatiotemporal datasets in which there may be significant gaps because measurements (or observations) of diverse variables are not synchronized and may occur at diverse rates. We consider optimizations and refinements on the server side, on the client side, and in how information is exchanged between client and server.
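A data cube extended to sparse, unsynchronized observations can be sketched as a mapping keyed on (region, time, variable) that stores measurements only where they exist and aggregates over arbitrary scopes. This is a toy model of the idea, not aQua or Rubiks themselves; the region and variable names are hypothetical.

```python
from collections import defaultdict

class SparseCube:
    """Toy spatiotemporal cube that tolerates gaps: cells exist only where
    observations were actually made, and aggregation ignores missing cells."""

    def __init__(self):
        self.cells = defaultdict(list)  # (region, time, variable) -> measurements

    def add(self, region, t, var, value):
        self.cells[(region, t, var)].append(value)

    def mean(self, var, regions=None, t_range=None):
        """Aggregate over an arbitrary spatiotemporal scope (regions, time range)."""
        vals = [v for (r, t, vr), vs in self.cells.items() for v in vs
                if vr == var
                and (regions is None or r in regions)
                and (t_range is None or t_range[0] <= t <= t_range[1])]
        return sum(vals) / len(vals) if vals else None

cube = SparseCube()
cube.add("lake_A", 2001, "nitrate", 1.0)
cube.add("lake_A", 2003, "nitrate", 3.0)   # 2002 was never observed; no cell exists
cube.add("lake_B", 2001, "nitrate", 5.0)
```

Because absent cells simply do not appear, unsynchronized variables sampled at different rates can share one structure without imputation; a real system would add indexing and pre-aggregation to keep queries interactive.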
We report both quantitative and qualitative assessments of several aspects of our tool to demonstrate its suitability. Finally, our methodology is broadly applicable to domains where visualization-driven exploration of spatiotemporally evolving phenomena is needed.

Item Embargo
Privacy threats to mobile health apps: an analysis of data collection practices (Colorado State University. Libraries, 2025)
Myers, Charles Ethan, author; Ray, Indrakshi, advisor; Ortega, Francisco, committee member; Ray, Indrajit, committee member; Jayasumana, Anura, committee member

Users often install mobile health applications (mHealth apps) to improve their health and lifestyle. mHealth apps collect sensitive personal health-related information and may share it with various stakeholders. Many of the mHealth apps that consumers use for personal lifestyle benefits are not required to comply with any regulation, such as the Health Insurance Portability and Accountability Act (HIPAA) or the General Data Protection Regulation (GDPR). Our investigation reveals a mismatch between what an app's description states about privacy, the permissions it requests from the end user as declared in its manifest file, privacy regulations (GDPR), and the privacy practices the app actually enforces. We provide a formal definition of mHealth apps and discuss an automated approach that uses a pre-trained language model to identify and analyze 13,177 mHealth apps from the Google Play Store. We identify the ten most common privacy threats in mHealth apps and map them to GDPR policy violations. GDPR-related privacy violations include the absence of a consent management system, permissions inconsistent with the app description, and sharing personally identifiable information (PII) without consent.
Our analysis reveals that only 4.28% of apps had a consent mechanism, over 88% of app network transmissions shared some form of personally identifiable information (PII) without consent, and nearly 83.7% requested permissions from users without explaining their use cases. Our research has been successful in building automated tools for detecting privacy violations for some, but not all, of the identified threats.

Item Open Access
MAGELLAN: enabling effective search over voluminous, high-dimensional scientific datasets (Colorado State University. Libraries, 2025)
Larrieu, Federico A., author; Pallickara, Shrideep B., advisor; Pallickara, Sangmi L., advisor; Vijayasarathy, Leo R., committee member; Ghosh, Sudipto, committee member

As high-dimensional, voluminous datasets continue to become available, they present opportunities for users to perform richer explorations that lead to insights. Most explorations, however, are limited by the query semantics enforced by the underlying storage system, which precludes identification of connections that exist within and across datasets. This study describes Magellan, a system designed for richer, iterative explorations that allow users to explore connections within and across datasets. Our methodology combines aspects of ontologies and metadata to support analyses that are domain-informed and statistically richer. Our performance benchmarks demonstrate the suitability of our methodology for informing explorations interactively and at scale.

Item Embargo
Cooking up a better AR experience: notification design and the liabilities of imperfect cues in augmented reality (Colorado State University. Libraries, 2024)
Raikwar, Aditya R., author; Ortega, Francisco R., advisor; Ray, Indrakshi, committee member; Moraes, Marcia, committee member; Soto, Hortensia, committee member

This dissertation investigates optimizing user experience in Augmented Reality (AR).
A virtual cooking environment (ARtisan Bistro) serves as a testbed for exploring factors that influence user interaction with AR interfaces. The research starts with notification design, examining strategically placed visual and audio notifications in ARtisan Bistro (Chapter 4). Building on this, Chapter 5 explores optimizing these designs for user awareness and the delivery of critical information, especially when audio is impractical. This involved exploring visual-only notifications, which yielded user performance and attention capture comparable to combined visual-audio notifications (no significant difference was found). The research demonstrates that well-designed notifications can significantly improve user experience, but it also raises a crucial question: can users always trust the information presented in AR environments? The possibility of imperfect information delivery underscores the importance of reliable information delivery. Chapter 6 explores the impact of imperfect cues generated by machine learning (ML) on user performance in AR visual search tasks, highlighting the potential for automation bias when users rely heavily on unreliable cues. By investigating both notification design and the limitations of ML systems for reliable information delivery, this dissertation emphasizes the importance of creating a well-rounded user experience in AR environments. The findings underscore the need for further research on optimizing visual notifications, mitigating automation bias, and ensuring reliable information delivery in AR applications.

Item Open Access
Smart transfers: challenges and opportunities in boosting low-resource language models with high-resource language power (Colorado State University. Libraries, 2024)
Manafi, Shadi, author; Krishnaswamy, Nikhil, advisor; Ortega, Francisco R., committee member; Blanchard, Nathaniel, committee member; Chong, Edwin K. P., committee member

Large language models (LLMs) are predominantly built for high-resource languages (HRLs), leaving low-resource languages (LRLs) underrepresented. To bridge this gap, knowledge transfer from HRLs to LRLs is crucial, but it must be sensitive to LRL-specific traits and not biased toward an HRL with larger training data. This dissertation addresses the opportunities and challenges of cross-lingual transfer in two main streams. The first stream explores cross-lingual zero-shot learning in multilingual language models (MLLMs) such as mBERT and XLM-R for tasks such as Named Entity Recognition (NER) and section-title prediction. The research introduces adversarial test sets, created by replacing named entities and modifying common words, to evaluate transfer accuracy. Results show that word overlap between languages is essential for both tasks, highlighting the need to account for language-specific features and biases. The second stream develops sentence Transformers, which generate sentence embeddings by mean-pooling contextualized word embeddings. However, these embeddings often struggle to capture sentence similarities effectively. To address this, we fine-tuned an English sentence Transformer using a word-to-word translation approach and a triplet loss function. Despite using a pre-trained English BERT model and only word-by-word translations, without accounting for sentence structure, the results were competitive. This suggests that mean-pooling may weaken attention mechanisms, causing the model to rely more on word embeddings than on sentence structure, potentially limiting comprehension of sentence meaning. Together, these streams reveal the complexities of cross-lingual transfer, guiding more effective and equitable use of HRLs to support LRLs in NLP applications.

Item Open Access
From neuro-inspired attention methods to generative diffusion: applications to weather and climate (Colorado State University. Libraries, 2024)
Stock, Jason, author; Anderson, Chuck, advisor; Ebert-Uphoff, Imme, committee member; Krishnaswamy, Nikhil, committee member; Sreedharan, Sarath, committee member

Machine learning presents new opportunities for addressing the complexities of atmospheric science, where high-dimensional, sparse, and variable data challenge traditional methods. This dissertation introduces a range of algorithms motivated specifically by the intricacies of weather and climate applications. These challenges complement those that are fundamental in machine learning, such as extracting relevant features, generating high-quality imagery, and providing interpretable model predictions. To this end, we propose methods to integrate adaptive wavelets and spatial attention into neural networks, showing improvements on tasks with limited data. We design a memory-based model of sequential attention to expressively contextualize a subset of image regions. Additionally, we explore transformer models for image translation, with an emphasis on explainability, that overcome the limitations of convolutional networks. Lastly, we discover meaningful long-range dynamics in oscillatory data from an autoregressive generative diffusion model, a very different approach from current physics-based models. These methods collectively improve predictive performance and deepen our understanding of both the underlying algorithmic and physical processes. The generality of most of these methods is demonstrated on synthetic data and classical vision tasks, but we place particular emphasis on their impact in weather and climate modeling. Notable examples include estimating synthetic radar from satellite imagery, predicting the intensity of tropical cyclones, and modeling global climate variability from observational data for intraseasonal predictability.
These approaches, however, are flexible and hold potential for adaptation across various application domains and data modalities.Item Embargo Learning technical Spanish with virtual environments(Colorado State University. Libraries, 2024) Siebert, Caspian, author; Ortega, Francisco R., advisor; Miller De Rutté, Alyssia, committee member; Krishnaswamy, Nikhil, committee memberAs the world becomes increasingly interconnected through the internet and travel, foreign language learning is essential for accurate communication and a deeper appreciation of diverse cultures. This study explores the effectiveness of a virtual learning environment employing Artificial Intelligence (AI) designed to facilitate Spanish language acquisition among veterinary students in the context of diagnosing a pet. Students' engagement with virtual scenarios that simulate real-life veterinary consultations in Spanish is examined using a qualitative thematic analysis. Participants have conversations with a virtual pet owner, discussing symptoms, diagnosing conditions, and recommending treatments, all in Spanish. Data were collected through recorded interactions with the application and a semi-structured interview. Findings suggest that immersive virtual environments enhance user engagement and interest, and several suggestions were made to improve the application's features. The study highlights the potential for virtual simulations to bridge the gap between language learning and professional training in specialized fields such as veterinary medicine. Finally, a set of design implications for future systems is provided.Item Open Access Towards generating a pre-training image transformer framework for preserving spatio-spectral properties in hyperspectral satellite images(Colorado State University. Libraries, 2024) Faruk, Tanjim Bin, author; Pallickara, Sangmi Lee, advisor; Pallickara, Shrideep, advisor; Cotrufo, M. 
Francesca, committee memberHyperspectral images facilitate advanced geospatial analysis without the need for expensive ground surveys. Machine learning approaches are particularly well-suited for handling the geospatial coverage required by these applications. While self-supervised learning is a promising methodology for managing voluminous datasets with limited labels, existing encoders in self-supervised learning face challenges when applied to hyperspectral images due to the large number of spectral channels. We propose a novel hyperspectral image encoding framework designed to generate highly representative embeddings for subsequent geospatial analysis. Our framework extends the Vision Transformer model with dynamic masking strategies to enhance model performance in regions with high spatial variability. We introduce a novel loss function that incorporates spectral quality metrics and employs a channel-grouping strategy to leverage spectral similarity across channels. We demonstrate the effectiveness of our approach through a downstream model for estimating soil texture at a 30-meter resolution.Item Open Access Towards heterogeneity-aware automatic optimization of time-critical systems via graph machine learning(Colorado State University. Libraries, 2024) Canizales Turcios, Ronaldo Armando, author; McClurg, Jedidiah, advisor; Rajopadhye, Sanjay, committee member; Pasricha, Sudeep, committee memberModern computing's hardware architecture is increasingly heterogeneous, making optimization challenging, particularly for time-critical systems where correct results are as important as low execution time. First, we present a case study of the manual optimization of an earthquake-engineering application, in which we parallelized the processing of accelerographic records. Second, we present egg-no-graph, our novel code-to-graph representation based on equality saturation, which outperforms state-of-the-art methods at estimating execution time. 
Third, we show how our 150M+-instance heterogeneity-aware dataset was built. Lastly, we redesign a graph-level embedding algorithm, making it converge orders of magnitude faster while maintaining accuracy comparable to the state of the art on our downstream task, making it feasible for use on time-critical systems.Item Open Access In pursuit of industrial like MAXSAT with reduced MAX-3SAT random generation(Colorado State University. Libraries, 2024) Floyd, Noah R., author; Whitley, Darrell, advisor; Sreedharan, Sarath, committee member; Aristoff, David, committee memberIn the modern landscape of MAXSAT, there are two broad classifications of problems: Random MAX-3SAT and Industrial MAX-SAT. Random MAX-3SAT problems are generated by randomly sampling variables with uniform probability and randomly assigning signs to the variables, one clause at a time. Industrial MAX-SAT consists of MAX-3SAT problems as encountered in the real world, and these generally have a lower nonlinearity than random MAX-3SAT instances. One of the goals of recent research has been to figure out which rules and structures these industrial problems follow and how to replicate them randomly. This paper builds off of the paper "Reduction-Based MAX-3SAT with Low Nonlinearity and Lattices Under Recombination," implementing its approach to MAX-3SAT clause generation and determining what it can reveal about industrial MAX-3SAT and random MAX-3SAT. This work builds on the transformation from SAT to MAX-SAT problems and aims to create random MAXSAT problems that are more representative of industrial MAXSAT problems. The goal is random MAX-3SAT generation that more accurately maps onto real-world MAX-3SAT instances, so that more efficient MAX-3SAT solvers can be produced.Item Open Access Exploring remote sensing data with high temporal resolutions for wildfire spread prediction(Colorado State University. 
Libraries, 2024) Fitzgerald, Jack, author; Blanchard, Nathaniel, advisor; Krishnaswamy, Nikhil, committee member; Zimmerle, Dan, committee memberThe severity of wildfires has been steadily increasing in the United States over the past few decades, burning millions of acres and costing billions of dollars in suppression efforts each year. However, in the same few decades there have been great strides made to advance our technological capabilities. Machine learning is one such technology that has seen spectacular improvements in many areas such as computer vision and natural language processing, and is now being used extensively to model spatiotemporal phenomena such as wildfires via deep learning. Leveraging deep learning to model how wildfires spread can help facilitate evacuation efforts and assist wildland firefighters by highlighting key areas where containment and suppression efforts should be focused. Many recent works have examined the feasibility of using deep learning models to predict when and where wildfires will spread, enabled in part by the wealth of geospatial information that is now publicly available and easily accessible on platforms such as Google Earth Engine. In this work, the First Week Wildfire Spread dataset is introduced, which seeks to address limitations of previously released datasets through an increased focus on geospatial data with high temporal resolutions. The new dataset contains weather, fuel, topography, and fire location data for the first 7 days of 56 megafires that occurred in the Contiguous United States from 2020 to 2024. Fire location data is collected by the Advanced Baseline Imager aboard the GOES-16 satellite, which provides updates every 5 minutes. Baseline experiments are performed using U-Net and ConvLSTM models to demonstrate some of the various ways that the First Week Wildfire Spread dataset can be used and to highlight its versatility.
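The 5-minute fire detections described in this last abstract naturally aggregate into daily spread masks before model training. As a minimal sketch (not the dissertation's actual pipeline; the array shapes, the `frames_per_day` constant, and both function names are assumptions for illustration), cumulative daily burn masks and next-day spread targets for a U-Net or ConvLSTM baseline might be derived like this:

```python
import numpy as np

def aggregate_fire_masks(detections, frames_per_day=288):
    """Collapse 5-minute fire-detection frames into daily cumulative burn masks.

    detections: (T, H, W) boolean array of per-frame fire pixels.
    GOES-16 ABI updates every 5 minutes, i.e. 288 frames per day.
    Returns a (D, H, W) boolean array of cumulative daily masks.
    """
    T, H, W = detections.shape
    days = T // frames_per_day
    daily = np.zeros((days, H, W), dtype=bool)
    burned = np.zeros((H, W), dtype=bool)
    for d in range(days):
        chunk = detections[d * frames_per_day:(d + 1) * frames_per_day]
        burned |= chunk.any(axis=0)  # a pixel stays "burned" once detected
        daily[d] = burned
    return daily

def next_day_targets(daily):
    """Per-pixel 'newly burned on day d+1' labels for spread prediction."""
    return daily[1:] & ~daily[:-1]
```

A model would then take the state through day d (plus weather, fuel, and topography channels) as input and predict the `next_day_targets` mask for day d+1; the cumulative-OR step reflects the common simplifying assumption that burned pixels do not unburn within the first week.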