My research stems from the idea that while machine learning is a wonderful tool for understanding and working with written human communication, it is hampered by the fact that even the most powerful models fail, often in ways that are difficult for humans to anticipate or even recognize in situ. That makes it hard for people to effectively and ethically collaborate with these models on the types of tasks we’d like to use them for, such as making high-stakes decisions about pieces of text or extracting domain insights from large corpora.
I see model interpretability as a big part of fixing this problem. If we can decompose an unverifiable model prediction into individually verifiable pieces, then we can more effectively incorporate it into our downstream decision-making. If we look at corpus-level explanations rather than just corpus-level predictions, we can extract deeper domain insights from our content analysis.
The upshot of all this is that there are a few types of projects I am particularly excited to supervise (although this list is non-exhaustive, and I’m always open to hearing about a cool NLP idea a student would like to work on with me):
- Novel interpretability or uncertainty estimation methods for NLP models
- Human-subject experimentation to better understand or improve human-model collaboration
- Human-subject experimentation to better understand human collaborative decision-making more generally
- Interdisciplinary applications of interpretable NLP
If you are a current or incoming UNH master’s or undergraduate student, or a prospective UNH PhD student, send me an email at samuel DOT carton AT unh.edu with your CV, and I’ll be happy to set up a meeting to chat further.