Kolloquium FS 2021: Berichte aus der aktuellen Forschung am Institut, Bachelor- und Master-Arbeiten, Programmierprojekte, Gastvorträge
Speaker & Topic
Overview of New Projects
Sarah Ebling: EU project EASIER, SNF project SMILE II
Anastassia Shaitarova & Anne Göhring: The Impact of Computer-generated Language (NCCR Evolving Language)
Gerold Schneider: Hate-speech Detection (UFSP Digital Religion)
Martin Volk: Bullinger digital
Sebastian Ruder: Cross-lingual Transfer Learning (Thursday, 18 March, 17.15h)
Hanna Fechner: Search in Semantic Memory across the Lifespan
Nora Hollenstein: Leveraging Cognitive Processing Signals for Natural Language Understanding
Susie Rao: Fraud Detection in E-commerce
Duygu Ataman: Character-level Neural Machine Translation: History and Challenges
Phillip Ströbel: Learning to read the Bullinger Correspondence - Handwritten Text Recognition for 16th Century Letters
Chantal Amrhein: Subwords or Characters? - Evaluating Segmentation Strategies on Morphological Phenomena in Machine Translation
Jannis Vamvas: Evaluating Disambiguation in Machine Translation
Ann-Sophie Gnehm: Text Mining on Job Advertisements
Norbert Fuchs: The Law of Inertia and the Frame Problem in Attempto Controlled English
Anne Göhring & Manfred Klenner: Sentiment Inference: The Identification of Negative Actors
Dr. Norbert Fuchs, "The Law of Inertia and the Frame Problem in Attempto Controlled English", 25.05.2021
Daily experience shows that a situation remains unchanged unless somebody or something changes it. Leibniz called this experience the law of inertia. Early attempts to formalise the law of inertia failed because they offered no easy way to state that, after a partial change to a situation, the unaffected rest remains unchanged. This so-called frame problem was efficiently solved by later approaches, two of which I will present: the event calculus and default logic. I will show how these approaches can be expressed not only in first-order logic but also quite naturally in Attempto Controlled English. Furthermore, I will use the Attempto reasoner RACE to reason with the law of inertia.
Anne Göhring & Dr. Manfred Klenner, "Sentiment Inference: The Identification of Negative Actors", 25.05.2021
We give an overview of a running SNF project dealing with Sentiment Inference. Currently, we pursue two strands:
a) fixing the writer perspective in terms of pro (in favour) and con (against) relations given an author's response to some question, using x-stance (Vamvas & Sennrich) as data; and
b) detecting noun phrases that denote actors and quantifying their polar load.
While task a) is a classification problem, b) also includes regression. We discuss our approaches, the results, open problems and next steps.
Jannis Vamvas, "Evaluating Disambiguation in Machine Translation", 11. May 2021
Disambiguation is a challenging problem in machine translation. Disambiguation failures lead to mistranslations and, where they occur systematically, can cause model bias. A well-known example is English occupation nouns with ambiguous gender. However, evaluating disambiguation across several languages is a time-consuming task. In my talk, I will present our recently concluded research on source scoring, which we propose as a reference-free black-box method for evaluating disambiguation in machine translation. I will also present our case study of bias in distillation, where we used source scoring to highlight a phenomenon undetected by BLEU: student models that are trained on data generated by a teacher imitate its disambiguation biases and further amplify them.
Ann-Sophie Gnehm, "Text Mining on Job Advertisements", 11. May 2021
The Swiss Job Market Monitor extracts information from job advertisements to monitor and analyze trends on the Swiss job market. For the NRP77 project “Digital Transformation”, we analyze how digitalization is changing the job tasks and skill requirements of workers over time. Our multilingual data consists of print and online job advertisements in German, French, English and Italian, and covers the time span from 1950 up to today.
To answer this question on the basis of job ad texts, two goals are equally important. First, in a top-down approach, we recognize and classify concepts from labor market ontologies or classification systems in job ads. Second, detecting shifts over time and adapting ontologies or classification systems accordingly calls for data-driven, bottom-up methods.
In this talk, I present experiments on the benefits of domain-adapted, contextualized embeddings. We structure job ads into text zones and classify them into professions, industries and management functions. Challenges arise from data shift over time and from the fact that our data is multilingual.
Phillip Ströbel, "Learning to read the Bullinger correspondence", 27. April 2021
During his lifetime, the Swiss reformer Heinrich Bullinger (1504-1575) exchanged a huge number of letters with colleagues all over Europe. Preserved are 2,000 letters that he wrote and 10,000 letters that he received, which makes this one of the largest letter collections of the 16th century. While around 3,000 letters are available as edited texts and another 5,700 letters have been transcribed (albeit with varying quality), the texts of approx. 3,300 letters are not yet available. In order to close this gap, we will use handwritten text recognition (HTR) to extract the text from the scanned letters. In a first step, we establish a baseline with the popular tool Transkribus. We show not only how the quantity of training material influences the quality of the results, but also how the use of base models influences the learning and recognition process, especially in the case of infrequent authors. In the outlook, we present the subsequent steps: (a) finding methods for choosing the best model for a letter (i.e., a handwriting), (b) training models that generalise well with a meta-learning approach, and (c) including language models to improve the output.
Chantal Amrhein, "Subwords or Characters? - Evaluating Segmentation Strategies on Morphological Phenomena in Machine Translation", 27. April 2021
Subword-level representations are well-known and widely used in machine translation. Since unknown words can be split into smaller, known units, this largely solves the open vocabulary problem. However, data-driven subword segmentation may not coincide with morpheme boundaries, and mere segmentation may not be sufficiently abstract to learn non-concatenative morphological phenomena such as reduplication or vowel harmony. In this talk, I will present our ongoing research comparing how well subword-level and character-level translation models can translate such morphological phenomena in different settings. I will talk about our design choices for a test suite for evaluating novel segmentation strategies and show preliminary results of our experiments with three segmentation strategies.
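To make the contrast between the two representations concrete, here is a toy greedy longest-match segmenter (a WordPiece-style sketch with a made-up vocabulary, not one of the segmentation strategies evaluated in the talk); the same word can be split into subword units or into bare characters:

```python
def greedy_segment(word, vocab):
    """Split a word into subwords by greedy longest match against a vocabulary."""
    pieces, i = [], 0
    while i < len(word):
        # Try the longest possible piece starting at position i first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # back off to a single character
            i += 1
    return pieces

# Hypothetical subword vocabulary for illustration.
vocab = {"un", "trans", "translat", "able"}
print(greedy_segment("untranslatable", vocab))  # ['un', 'translat', 'able']
print(list("untranslatable"))                   # pure character-level segmentation
```

Note that the subword split here happens to align with morpheme boundaries; with a different, purely frequency-driven vocabulary it often would not, which is exactly the concern raised above.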
Susie Rao, "Fraud Detection in E-commerce", 13. April 2021
A Graph Neural Network (GNN) is a type of neural network that operates directly on graph-structured data via message passing and aggregation. Popular variants include graph convolutional networks (GCN), graph attention networks (GAT), and GraphSAGE. GNNs are found in many state-of-the-art applications such as recommender systems, knowledge graphs, and fraud detection. In the NLP community, many researchers have adopted GNNs for tasks where organizing data as a graph is natural, e.g., text classification, sequence labeling, neural machine translation, relation extraction, event extraction, and text generation.
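The message passing and aggregation mentioned above can be sketched in a few lines. The following toy example (my own illustration with random data, not the xFraud model) implements a single GCN-style layer: add self-loops, symmetrically normalize the adjacency matrix, aggregate neighbour features, and apply a weight matrix with a ReLU nonlinearity:

```python
import numpy as np

np.random.seed(0)

# Toy undirected graph: 4 nodes, edges 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 8)  # node feature matrix (4 nodes, 8 features each)
W = np.random.rand(8, 4)  # weight matrix mapping 8 -> 4 dimensions

# GCN propagation rule: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
A_hat = A + np.eye(4)                                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # degree normalization
H = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)
print(H.shape)  # (4, 4): one updated 4-dimensional representation per node
```

Each row of H mixes a node's own features with those of its neighbours; stacking several such layers lets information propagate over longer paths in the graph, which is what makes relational signals in transaction logs exploitable.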
In this talk, I present our work with eBay on fraud detection using GNNs. For online retail platforms, it is crucial to actively detect the risk of fraudulent transactions in order to improve the customer experience, minimize loss, and prevent unauthorized chargebacks. Traditional rule-based methods and simple feature-based models are either inefficient or brittle and uninterpretable.
The graph structure that exists among the heterogeneously typed entities of the transaction logs is informative and difficult to fake. To exploit these heterogeneous graph relationships and improve explainability, we present xFraud, an explainable fraud transaction prediction system. In our experiments on two real transaction networks with up to ten million transactions, we achieved an area under the curve (AUC) score that outperforms baseline models and graph embedding methods. In addition, we show how the explainer benefits model predictions and enhances model trustworthiness for real-world fraud transaction cases.
Dr. Duygu Ataman, "Character-level Neural Machine Translation: History and Challenges", 13. April 2021
Neural machine translation (NMT) models typically operate over a fixed-size vocabulary which can be problematic in morphologically-rich languages with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter approach has shown significant benefits for translating morphologically-rich languages, although practical applications are still limited due to increased requirements in terms of model capacity. In this talk, we present an overview of recent character-based approaches to NMT and open challenges related to their future deployment.
Dr. Hanna B. Fechner, «Search in Semantic Memory across the Lifespan», 23.03.2021
Search in memory is essential for mastering tasks of everyday life. Search success depends on two aspects of cognition: people’s capacities for retrieving and maintaining information, and their strategies for navigating the search using retrieval cues. During normal aging, cognitive capacities change substantially. Age-related differences in memory search have been found in semantic-fluency tasks, in which participants recall as much information as possible from a specific domain in a limited amount of time. In these studies, older (compared with younger) adults recalled less information, switched between semantic categories less frequently, and switched more frequently between pieces of information that often occur together. Previous research concluded that mechanisms of decline in capacities impair memory search in older adults.
This project contrasts mechanisms of declining capacities with mechanisms of compensatory strategic adjustments to age-related changes in retrieval efficiency. These strategic mechanisms rely on retrieval quality of recalled information to determine the best cues for guiding the search. We test the candidate mechanisms implemented as computational cognitive models in a Bayesian framework on cross-sectional and longitudinal data from semantic-fluency tasks of the Berlin Aging Study. The models share computational assumptions from the cognitive architecture ACT-R and receive activation levels for semantic memories that are estimated from large text corpora. Our model comparison will show which candidate mechanisms best explain the data. This will reveal to what extent strategic adjustments beyond decline contribute to our understanding of older adults’ memory search.
Dr. Nora Hollenstein, "A human-centered approach to Natural Language Processing", 23.03.2021
Natural language processing (NLP) has made great progress in recent years, and the progress of NLP and machine learning (ML) is intricately linked with human behavior. In this talk, I will present two projects on opposite ends of the NLP pipeline related to human-centered ML. First, we analyze human labelling behavior. What happens in the brain while we annotate? What patterns of human behavior emerge in crowd-sourcing settings? On the other end of the pipeline, we use human language processing signals to evaluate NLP models. How well do word embeddings align with brain representations? Do state-of-the-art transformer language models reflect human reading behavior? Within these projects, we approach two key challenges in machine learning: gathering labelled data and explaining the inner workings of NLP models.