Description
Sarcasm detection in news headlines poses a unique challenge: the formal register of reporting language leaves few overt markers, so classification must rely on subtle cues and contextual dependencies. In our project, we explore transformer-based embeddings such as BERT, graph convolutional networks (GCNs), and pretrained large language models (LLMs) to enhance sarcasm detection. By leveraging contextualized representations and structural relationships between words, we aim to improve both interpretability and performance over traditional models. We hope this work contributes to the broader fields of NLP and human-computer interaction by automating sarcasm detection with AI.
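To illustrate the GCN component mentioned above, here is a minimal NumPy sketch of a single graph-convolution layer over token features. The adjacency matrix, feature dimensions, and chain-shaped token graph are illustrative assumptions, not the project's actual configuration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy example: 4 headline tokens linked in a chain, 8-dim input
# features, projected to 3 output dimensions (all sizes arbitrary).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 3))
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 3)
```

In a full pipeline, H would hold contextualized token embeddings (e.g. from BERT) and the pooled node outputs would feed a classification head.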