Supervision
Ongoing
Maximilian Bleick (with Aljoscha Burchardt) – BA thesis @ TU Berlin: An Investigation of LLM Chatbots Concerning the Echo Chamber Effect
Yi-Sheng Hsu (with Sherzod Hakimov) – MSc project @ Uni Potsdam: Engineering LLM-generated Explanations with Metric-based Readability Control
Completed
Qianli Wang (with Leonhard Hennig) – MSc thesis @ TU Berlin: A Singular LLM Is All You Need for Dialogue-based Explanation Regarding NLP Tasks
Konstantin Biskupski (with Eleftherios Avramidis) – MSc thesis @ TU Berlin: Quality estimation of machine-translated texts with fine-grained classification of errors
Kiran Rohra (with Philippe Thomas) – MSc thesis @ TU Berlin: Comparative error analysis of biomedical image labelling and captioning models
Ajay Madhavan Ravichandran (with Philippe Thomas) – MSc thesis @ TU Berlin: Evaluating text quality of generated radiology reports
Mika Rebensburg (with Tim Polzehl & Stefan Hillmann) – BSc thesis @ TU Berlin: Automatic Evaluation of Chatbot Dialogs Using Pre-Trained Language Models in the Customer Support Domain
Daniel Fernau (with Tim Polzehl & Stefan Hillmann) – MSc thesis @ TU Berlin: Towards Adaptive Conversational Agents: Fine-tuning Language Models for User Classification to Enhance Usability
Research assistant supervisions
Ongoing
Yi-Sheng Hsu – Rationalization with Metric-based Readability Control
Maximilian Dustin Nasert & Christopher Ebert – Data-based Interpretability and Evaluation of Self-Rationalizing LLMs
Completed
Qianli Wang – Interactive NLP model exploration through dialogue systems
João Lucas Mendes de Lemos Lins – Instructional explanations
Sahil Chopra – Rationale generation for dialogue-based explanations
Ajay Madhavan Ravichandran – Conceptualizing dialogue-based explanations
Courses
2022-10 – 2023-03: Explainability in Natural Language Processing @ TU Berlin. Topics: (1) Contrastive Explanations of Text Generation Models. (2) Explainable Fact Checking.
2021-10 – 2022-03: MSc/BSc software project @ TU Berlin: Assessing the Quality of Machine-translated Text (with Eleftherios Avramidis & Vivien Macketanz)
Open topics
Conversational Model Refinement
References: Yao et al. (2021); Gu et al. (2023); Malaviya et al. (2023); He et al. (2023)
Analyzing User Behavior in Fact Checking Systems
References: Si et al. (2023); Mohseni et al. (2021); Linder et al. (2021)
LLM-based Evaluation of Instructional Explanations
References: Wachsmuth & Alshomary (2022); Kupor et al. (2023); Lee et al. (2023); Rooein et al. (2023)
Efficiency for Explanation Evaluation
References: Parcalabescu & Frank (2023); Larionov et al. (2023); Schwarzenberg et al. (2021)
Explaining Disagreements in Automated Text Evaluation Metrics
References: Jiang et al. (2023); Ribeiro et al. (2023); Wadhwa et al. (2023); He et al. (2023)
Learning from Human Feedback to Natural Language Explanations
References: Li et al. (2022); Wang et al. (2023); Ye et al. (2022)