Open Theses and Projects

Can ChatGPT Replace Literary and Cultural Studies Scholars?

ChatGPT and other large language models (LLMs) have recently exhibited impressive capabilities, including some kind of “self-understanding”, causal reasoning, and the annotation and even interpretation of texts. But they must have their limits, right? In this Master's thesis, we are interested in finding these limits and will test the capabilities of ChatGPT and other LLMs when faced with tasks from the field of literary studies, in particular text analysis and interpretation.

Specifically, relying on a large, multilingual corpus of TEI(XML)-encoded issues of 18th-century periodicals, we will ask these LLMs to answer questions about gender stereotypes in these historical texts and compare their answers to the body of knowledge accumulated by literary scholars. Examples of such questions are (a minimal querying sketch follows the list):

  • Which female characters are named in this issue?
  • Are female characters depicted positively or negatively in this issue?
  • Can you summarize the depiction of female characters in this issue?
  • How did the depiction of female characters change between these two issues?
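
To make the setup concrete, here is a minimal sketch of how such a question could be posed programmatically. It is only an illustration under several assumptions: it uses the openai Python package (v1 client style) with an API key in the environment, the file name spectator_issue_042.xml and the model name are placeholders, and the TEI extraction keeps only the plain text of the <body> element.

```python
import xml.etree.ElementTree as ET
from openai import OpenAI  # assumes the openai package (>= 1.0) is installed

TEI_NS = "{http://www.tei-c.org/ns/1.0}"  # default TEI namespace

def tei_body_text(path: str) -> str:
    """Extract the plain text of the <body> of a TEI-encoded issue."""
    # Assumes the issue has a <body> element in the TEI namespace.
    body = ET.parse(path).getroot().find(f".//{TEI_NS}body")
    return " ".join(body.itertext())

client = OpenAI()  # reads OPENAI_API_KEY from the environment

issue_text = tei_body_text("spectator_issue_042.xml")  # placeholder file name
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat model can be substituted
    messages=[
        {"role": "system", "content": "You are an assistant for literary text analysis."},
        {"role": "user", "content": "Which female characters are named in this issue?\n\n" + issue_text},
    ],
)
print(response.choices[0].message.content)
```

Note that long issues may exceed the model's context window, so chunking or truncation will likely be part of the experimental setup.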

Tasks:

  • Set up at least two LLMs for experiments (e.g., ChatGPT via its API, a locally running version of Alpaca, etc.)
  • Check if (or under which conditions) these LLMs can process TEI/XML-encoded texts (see the sketch below)
  • Set up a series of questions (and their answers) together with literary scholars
  • Qualitatively compare the answers provided by the LLM with answers from literary scholars
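
For the TEI/XML task, a simple first experiment is to pose the same question once with the raw TEI markup and once with the extracted plain text, and compare the answers. A hedged sketch, reusing tei_body_text and client from the example above (file and model names remain placeholders):

```python
from pathlib import Path

question = "Are female characters depicted positively or negatively in this issue?"
raw_tei = Path("spectator_issue_042.xml").read_text(encoding="utf-8")
plain_text = tei_body_text("spectator_issue_042.xml")

# Ask the same question on both input formats and compare the answers.
for label, document in [("raw TEI/XML", raw_tei), ("extracted plain text", plain_text)]:
    answer = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": f"{question}\n\n{document}"}],
    ).choices[0].message.content
    print(f"--- {label} ---\n{answer}\n")
```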

The thesis will be supervised by an interdisciplinary team including Roman Kern (rkern@tugraz.at; NLP, LLM), Yvonne Völkl (yvonne.voelkl@tugraz.at; literary studies), Elisabeth Hobisch (elisabeth.hobisch@tugraz.at; literary studies), and Bernhard Geiger (bgeiger@know-center.at; machine learning).

Is MC Dropout Sensitive to Dead Neurons?

Monte Carlo (MC) Dropout is a popular method for uncertainty estimation in neural networks that, in essence, performs dropout in the inference phase (rather than only in the training phase). It has been shown that MC Dropout can be interpreted as approximate inference in certain variants of Gaussian processes. On the other hand, it was shown (arXiv:2008.02627) that the uncertainty estimate, i.e., the posterior variance, depends on the mean of the posterior and on the dropout rate. Furthermore, MC Dropout seems to underestimate the uncertainty "in between" input regions with high data density, at least for shallow networks (arXiv:1909.00719). Finally, if all neurons are inactive, then the uncertainty estimate is zero (arXiv:2008.02627).
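
For illustration, here is a minimal MC Dropout sketch in PyTorch (a toy 1D regression network; layer widths and dropout rate are assumed for the example): dropout is kept active at inference time, and the empirical mean and variance over T stochastic forward passes serve as prediction and uncertainty estimate.

```python
import torch
import torch.nn as nn

# Toy regression network; widths and dropout rate are illustrative choices.
net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(net: nn.Module, x: torch.Tensor, T: int = 100):
    """Predictive mean and variance from T stochastic forward passes."""
    # train() keeps the Dropout layers stochastic at inference time;
    # harmless here because the net contains no BatchNorm layers.
    net.train()
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(T)])  # shape (T, N, 1)
    return samples.mean(dim=0), samples.var(dim=0)

x = torch.linspace(-3, 3, 50).unsqueeze(1)  # untrained net, illustration only
mean, var = mc_dropout_predict(net, x)
```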

The purpose of this thesis is to evaluate whether the uncertainty estimate provided by MC Dropout depends on the number of inactive neurons inside the network. Such a dependence would be difficult to control and could potentially limit the utility of MC Dropout.

Tasks:

  • Literature survey on the capabilities and limitations of MC Dropout
  • Evaluating the (nonlinear) correlation between the fraction of inactive neurons and the posterior variance determined by MC Dropout for several network architectures (see the sketch after this list), e.g.,
    • deep linear networks in a simple Gaussian regression setting that admits a closed-form posterior
    • single-layer ReLU networks with different layer widths
    • multi-layer ReLU networks with different sets of layer widths
    • etc.
  • (It may be necessary to train these networks in a special way to tune the number of inactive neurons for a given architecture.)
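
As a starting point for the measurement itself, the following hedged sketch (reusing net, x, and mc_dropout_predict from the MC Dropout example above) counts ReLU units that are inactive, i.e., output zero, for every input in a reference set, which is one possible operationalization of "inactive neurons":

```python
import torch
import torch.nn as nn

def dead_relu_fraction(net: nn.Module, x: torch.Tensor) -> float:
    """Fraction of ReLU units that output zero for every input in x."""
    acts = []
    hooks = [m.register_forward_hook(lambda mod, inp, out: acts.append(out))
             for m in net.modules() if isinstance(m, nn.ReLU)]
    net.eval()  # one deterministic pass (dropout off) for counting
    with torch.no_grad():
        net(x)
    for h in hooks:
        h.remove()
    dead_per_layer = [(a <= 0).all(dim=0) for a in acts]  # dead on the whole set
    return torch.cat(dead_per_layer).float().mean().item()

frac = dead_relu_fraction(net, x)    # net and x from the sketch above
_, var = mc_dropout_predict(net, x)  # MC Dropout posterior variance per input
print(f"inactive fraction: {frac:.3f}, mean posterior variance: {var.mean().item():.4f}")
```

Repeating this over many (re)trained networks and architectures yields (fraction, variance) pairs whose nonlinear correlation, e.g., Spearman rank correlation or mutual information, can then be evaluated.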

Requirements:

  • Basic understanding of Bayesian probability, Bayesian neural networks, and/or Gaussian processes
  • Good programming skills (PyTorch, TensorFlow, etc.)
  • Interest in theoretical work