Graduate Seminar | Diving Deep into Disinformation: From Sample-level Detection to Fine-grained Explanation
Dr. Chris Thomas
Friday, December 2, 2022
2150 Torgersen Hall
The prevalence of continuously created and rapidly spread disinformation online poses major societal risks. Sophisticated malicious actors employ a range of techniques to create disinformation, including generative language and vision models, while simpler actors may manually fabricate false stories or use images out of context. Although much recent work has gone into systems that automatically detect disinformation, such content is often created by misusing or falsifying only a small number of the claims made in a document. The ability to explain precisely which claims are false is critical both to analysts studying disinformation campaigns at scale and to end users who want to know what in their social media post caused it to be flagged. Such fine-grained detection is challenging due to the lack of fine-grained training data, the continual emergence of new false claims unseen at training time, and the complex reasoning required to understand how specific claims relate to other modalities, other documents, and background knowledge.

In this talk, I will present three avenues of our recent research in fine-grained disinformation detection. First, I will discuss our work on detecting machine-generated disinformation by identifying fine-grained inconsistencies across modalities. Our method substantially improves the state of the art in detecting machine-generated articles and, more importantly, generates fine-grained explanations of why a particular article is inconsistent. Next, I will present our work on predicting how fine-grained textual claims logically relate to visual content in a new multimodal reasoning task. Our model identifies which claims are entailed or contradicted by visual content, or neither, without any fine-grained labeled training data.
Finally, I will present our ongoing work on fine-grained contextual claim entailment, which considers how specific claims relate to background multimodal context.
Chris Thomas is an Assistant Professor in the Department of Computer Science at Virginia Tech. His research interests lie at the intersection of computer vision and natural language processing. A primary focus of his research is building robust AI systems that can understand and reason about real-world multimedia content (images, text, video, and audio) as humans do. His work has appeared in top conferences and journals, including CVPR, ACL, NeurIPS, ECCV, IEEE PAMI, and IJCV. He maintains interdisciplinary collaborations with faculty across fields and institutions, including Columbia, UIUC, and the University of Pennsylvania.