Browsing by Author "Ortega, Francisco R., committee member"
Now showing 1 - 2 of 2
Item Open Access
Smart transfers: challenges and opportunities in boosting low-resource language models with high-resource language power (Colorado State University. Libraries, 2024)
Manafi, Shadi, author; Krishnaswamy, Nikhil, advisor; Ortega, Francisco R., committee member; Blanchard, Nathaniel, committee member; Chong, Edwin K. P., committee member

Large language models (LLMs) are predominantly built for high-resource languages (HRLs), leaving low-resource languages (LRLs) underrepresented. To bridge this gap, knowledge transfer from HRLs to LRLs is crucial, but it must be sensitive to LRL-specific traits and not biased toward an HRL with larger training data. This dissertation addresses the opportunities and challenges of cross-lingual transfer in two main streams. The first stream explores cross-lingual zero-shot learning in Multilingual Language Models (MLLMs) such as mBERT and XLM-R for tasks such as Named Entity Recognition (NER) and section-title prediction. The research introduces adversarial test sets, built by replacing named entities and modifying common words, to evaluate transfer accuracy. Results show that word overlap between languages is essential for both tasks, highlighting the need to account for language-specific features and biases. The second stream develops sentence Transformers, which generate sentence embeddings by mean-pooling contextualized word embeddings. These embeddings, however, often struggle to capture sentence similarities effectively. To address this, we fine-tune an English sentence Transformer using a word-to-word translation approach and a triplet loss function. Despite relying on a pre-trained English BERT model and only word-by-word translations, with no account of sentence structure, the results were competitive. This suggests that mean-pooling may weaken attention mechanisms, causing the model to rely more on word embeddings than on sentence structure and potentially limiting comprehension of sentence meaning. Together, these streams reveal the complexities of cross-lingual transfer, guiding more effective and equitable use of HRLs to support LRLs in NLP applications.
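To make the mean-pooling and triplet-loss setup described in this abstract concrete, here is a minimal PyTorch sketch; the model name, the embed helper, and the toy triplet sentences are illustrative assumptions, not the dissertation's actual training configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    # Mean-pool contextualized token embeddings into one vector per sentence.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state              # (batch, tokens, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # zero out padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (batch, hidden)

# Triplet loss pulls the anchor toward the positive (here, a stand-in for a
# word-by-word translation) and pushes it away from the negative.
loss_fn = torch.nn.TripletMarginLoss(margin=1.0)
anchor = embed(["the weather is nice today"])
positive = embed(["today the weather nice is"])      # hypothetical word-level rearrangement
negative = embed(["the stock market fell sharply"])
loss = loss_fn(anchor, positive, negative)
loss.backward()  # gradients reach BERT's weights, enabling fine-tuning
```

Note that the mean-pooling step discards token order entirely, which is consistent with the abstract's observation that such embeddings may lean on word identity rather than sentence structure.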
Item Open Access
Something is fishy! - How ambiguous language affects generalization of video action recognition networks (Colorado State University. Libraries, 2022)
Patil, Dhruva Kishor, author; Beveridge, J. Ross, advisor; Krishnaswamy, Nikhil, advisor; Ortega, Francisco R., committee member; Clegg, Benjamin, committee member

Modern neural networks designed for video action recognition can classify video snippets with high confidence and accuracy. The success of these models lies in the complex feature representations they learn from the training data, but their limitations are rarely traced back to the inconsistent quality of that training data. Although newer and better approaches pride themselves on higher evaluation metrics, this dissertation questions whether these networks are merely recognizing the peculiarities of dataset labels. One reason for these peculiarities is deviation from standardized data collection and curation protocols that ensure quality labels. Consequently, the models may learn data properties that are irrelevant or even undesirable when trained using only a forced-choice technique. One solution to these shortcomings is to reinspect the training data and use the resulting insights to design more efficient algorithms.

The Something-Something dataset, a popular video action recognition benchmark, has large semantic overlaps, both visual and linguistic, between the different labels provided for each video sample. It can be argued that the actions in a video admit multiple plausible interpretations, and that restricting each video to one label can limit, or even negatively impact, the network's ability to generalize even to the dataset's own testing data. To validate this claim, this dissertation introduces a human-in-the-loop procedure to review the legacy labels and relabel the Something-Something validation data. When the new labels are used to reassess the performance of video action recognition networks, significant gains of almost 12% and 3% in top-1 and top-5 accuracy, respectively, are reported. The hypothesis is further validated by using Grad-CAM to visualize the layer-wise internals of the networks, showing that the model focuses on relevant salient regions when predicting an action in a video.
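As a reference point for the reassessment described above, the sketch below shows a generic top-1/top-5 accuracy computation in PyTorch; the random logits, labels, and evaluation loop are placeholder assumptions rather than the dissertation's evaluation code, though the 174-way label space matches the Something-Something class count.

```python
import torch

def topk_accuracy(logits, labels, ks=(1, 5)):
    # Fraction of samples whose true label appears among the k top-scoring classes.
    _, pred = logits.topk(max(ks), dim=1)    # (N, max_k) predicted class indices
    hits = pred.eq(labels.unsqueeze(1))      # (N, max_k) boolean matches per sample
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}

# Placeholder tensors; Something-Something has 174 action classes.
logits = torch.randn(8, 174)             # one score per class per video clip
labels = torch.randint(0, 174, (8,))     # hypothetical (re)labeled ground truth
print(topk_accuracy(logits, labels))     # e.g., {1: 0.0, 5: 0.125}
```

Rerunning such a computation with the relabeled validation set in place of the legacy labels is what yields the reported top-1 and top-5 gains.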