PhD Defence • Natural Language Processing • Leveraging the Knowledge of Generalist LLMs for Diverse Objective and Subjective NLP Tasks

Friday, December 6, 2024 10:00 am - 1:00 pm EST (GMT -05:00)

Please note: This PhD defence will take place in DC 2310 and online.

Gaurav Sahu, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Olga Vechtomova

Recent advances in natural language processing (NLP), particularly in large language modeling, have led to a major paradigm shift. Large language models (LLMs), such as the GPT and LLaMA families of models, are trained on massive Internet corpora covering a wide range of domains. In addition, the billions of parameters in these models give rise to emergent capabilities, yielding strong improvements across diverse NLP tasks with little task-specific tuning. However, effectively harnessing the knowledge of these generalist models for real-world data remains a major challenge, as LLMs can produce inconsistent, biased, and unsatisfactory outputs. In this thesis, we propose task-specific strategies for effectively leveraging LLMs for a number of challenging NLP tasks, including (low-resource) text classification, text summarization, and modeling the artistic preferences of creative individuals.

Our results suggest that LLMs can serve as excellent data generators and data labelers for well-defined tasks such as classification and summarization, especially in data-scarce settings, where models trained on LLM-generated data achieve performance competitive with oracle models trained on much larger labeled datasets. For more subjective tasks, such as modeling the artistic preferences of creative individuals, we demonstrate that while LLMs may not reliably discern the likes and dislikes of artists, they can still be used to extract key linguistic and poetic properties from text, which can then be employed to infer the artistic preferences of different individuals. Overall, this work provides a set of effective strategies for adapting LLMs to diverse NLP tasks so that their knowledge can be used effectively.
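For intuition, below is a minimal sketch (not taken from the thesis) of the "LLM as data labeler" idea for low-resource classification: an instruction-tuned LLM assigns labels to unlabeled text, and the resulting silver-labeled pairs can then be used to train a small task-specific model. The model name, prompt, and label set are illustrative assumptions rather than the author's exact setup.

# Minimal sketch: zero-shot data labeling with a generalist LLM.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

LABELS = ["positive", "negative", "neutral"]  # hypothetical label set

def llm_label(text: str) -> str:
    """Ask the LLM to assign one label to an unlabeled example."""
    prompt = (
        f"Classify the sentiment of the following text as one of {LABELS}.\n"
        f"Text: {text}\n"
        "Answer with the label only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any instruction-tuned LLM works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "neutral"  # fall back on parse failure

# Silver-labeled pairs that could train a smaller task-specific classifier.
unlabeled = ["The plot was gripping from start to finish.", "Utterly forgettable."]
silver_data = [(t, llm_label(t)) for t in unlabeled]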


To attend this PhD defence in person, please go to DC 2310. You can also attend virtually using Zoom.