Overview

Explanation has long been a part of human communication: people use language to clarify ideas for one another and convey information about the mechanisms behind events. Numerous works have studied the structure of explanations and their utility to humans.

At the same time, explanation relates to a collection of research directions in natural language processing (and, more broadly, computer vision and machine learning) in which researchers develop computational approaches to explain models, usually deep neural networks. This line of work has received growing attention.

In recent months, advances in large language models (LLMs) have provided unprecedented opportunities to leverage their reasoning abilities, both as tools for producing explanations and as subjects of explanation analysis. However, the sheer size and opaque nature of LLMs introduce challenges for explanation methods. In this tutorial, we review these opportunities and challenges of explanation in the era of LLMs, connect lines of research previously pursued by different research groups, and hope to spark ideas for new research directions.

Speakers

Zining Zhu (SIT)
Hanjie Chen (Rice)
Xi Ye (UAlberta)
Chenhao Tan (UChicago)
Ana Marasović (Utah)
Sarah Wiegreffe (AI2)
Veronica Qing Lyu (UPenn)