| title | authors | abstract | url | detail_url | abs | OpenReview | Download PDF | tags | source_dataset | source_config | source_split |
|---|---|---|---|---|---|---|---|---|---|---|---|
Neural Media Bias Detection Using Distant Supervision With BABE - Bias Annotations By Experts
|
Timo Spinde, Manuel Plank, Jan-David Krieger, Terry Ruas, Bela Gipp, Akiko Aizawa
|
Media coverage has a substantial effect on the public perception of events. Nevertheless, media outlets are often biased. One way to bias news articles is by altering the word choice. The automatic identification of bias by word choice is challenging, primarily due to the lack of a gold standard data set and high context dependencies. This paper presents BABE, a robust and diverse data set created by trained experts, for media bias research. We also analyze why expert labeling is essential within this domain. Our data set offers better annotation quality and higher inter-annotator agreement than existing work. It consists of 3,700 sentences balanced among topics and outlets, containing media bias labels on the word and sentence level. Based on our data, we also introduce a way to detect bias-inducing sentences in news articles automatically. Our best performing BERT-based model is pre-trained on a larger corpus consisting of distant labels. Fine-tuning and evaluating the model on our proposed supervised data set, we achieve a macro F1-score of 0.804, outperforming existing methods.
|
https://aclanthology.org/2021.findings-emnlp.101
|
https://aclanthology.org/2021.findings-emnlp.101.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Learning and Evaluating a Differentially Private Pre-trained Language Model
|
Shlomo Hoory, Amir Feder, Avichai Tendler, Sofia Erell, Alon Peled-Cohen, Itay Laish, Hootan Nakhost, Uri Stemmer, Ayelet Benjamini, Avinatan Hassidim, Yossi Matias
|
Contextual language models have led to significantly better results, especially when pre-trained on the same data as the downstream task. While this additional pre-training usually improves performance, it can lead to information leakage and therefore risks the privacy of individuals mentioned in the training data. One method to guarantee the privacy of such individuals is to train a differentially-private language model, but this usually comes at the expense of model performance. Also, in the absence of a differentially private vocabulary training, it is not possible to modify the vocabulary to fit the new data, which might further degrade results. In this work we bridge these gaps, and provide guidance to future researchers and practitioners on how to improve privacy while maintaining good model performance. We introduce a novel differentially private word-piece algorithm, which allows training a tailored domain-specific vocabulary while maintaining privacy. We then experiment with entity extraction tasks from clinical notes, and demonstrate how to train a differentially private pre-trained language model (i.e., BERT) with a privacy guarantee of ε=1.1 and with only a small degradation in performance. Finally, as it is hard to tell given a privacy parameter ε what was the effect on the trained representation, we present experiments showing that the trained model does not memorize private information.
|
https://aclanthology.org/2021.findings-emnlp.102
|
https://aclanthology.org/2021.findings-emnlp.102.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Simulated Chats for Building Dialog Systems: Learning to Generate Conversations from Instructions
|
Biswesh Mohapatra, Gaurav Pandey, Danish Contractor, Sachindra Joshi
|
Popular dialog datasets such as MultiWOZ are created by providing crowd workers with an instruction, expressed in natural language, that describes the task to be accomplished. Crowd workers play the roles of a user and an agent to generate dialogs that accomplish tasks such as booking restaurant tables or calling a taxi. In this paper, we present a data creation strategy that uses the pre-trained language model GPT2 to simulate the interaction between crowd workers by creating a user bot and an agent bot. We train the simulators using a smaller percentage of actual crowd-generated conversations and their corresponding instructions. We demonstrate that by using the simulated data, we achieve significant improvements in low-resource settings on two publicly available datasets: the MultiWOZ dataset and the Persona chat dataset.
|
https://aclanthology.org/2021.findings-emnlp.103
|
https://aclanthology.org/2021.findings-emnlp.103.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Past, Present, and Future: Conversational Emotion Recognition through Structural Modeling of Psychological Knowledge
|
Jiangnan Li, Zheng Lin, Peng Fu, Weiping Wang
|
Conversational Emotion Recognition (CER) is a task to predict the emotion of an utterance in the context of a conversation. Although modeling the conversational context and interactions between speakers has been studied broadly, it is important to consider the speaker’s psychological state, which controls the action and intention of the speaker. The state-of-the-art method introduces CommonSense Knowledge (CSK) to model psychological states in a sequential way (forwards and backwards). However, it ignores the structural psychological interactions between utterances. In this paper, we propose a pSychological-Knowledge-Aware Interaction Graph (SKAIG). In the locally connected graph, the targeted utterance will be enhanced with the information of action inferred from the past context and intention implied by the future context. The utterance is self-connected to consider the present effect from itself. Furthermore, we utilize CSK to enrich edges with knowledge representations and process the SKAIG with a graph transformer. Our method achieves state-of-the-art and competitive performance on four popular CER datasets.
|
https://aclanthology.org/2021.findings-emnlp.104
|
https://aclanthology.org/2021.findings-emnlp.104.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
An unsupervised framework for tracing textual sources of moral change
|
Aida Ramezani, Zining Zhu, Frank Rudzicz, Yang Xu
|
Morality plays an important role in social well-being, but people’s moral perception is not stable and changes over time. Recent advances in natural language processing have shown that text is an effective medium for informing moral change, but no attempt has been made to quantify the origins of these changes. We present a novel unsupervised framework for tracing textual sources of moral change toward entities through time. We characterize moral change with probabilistic topical distributions and infer the source text that exerts prominent influence on the moral time course. We evaluate our framework on a diverse set of data ranging from social media to news articles. We show that our framework not only captures fine-grained human moral judgments, but also identifies coherent source topics of moral change triggered by historical events. We apply our methodology to analyze the news in the COVID-19 pandemic and demonstrate its utility in identifying sources of moral change in high-impact and real-time social events.
|
https://aclanthology.org/2021.findings-emnlp.105
|
https://aclanthology.org/2021.findings-emnlp.105.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization
|
Junpeng Liu, Yanyan Zou, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Caixia Yuan, Xiaojie Wang
|
Unlike well-structured text, such as news reports and encyclopedia articles, dialogue content often comes from two or more interlocutors exchanging information with each other. In such a scenario, the topic of a conversation can shift as it progresses, and the key information for a certain topic is often scattered across multiple utterances of different speakers, which poses challenges for abstractive dialogue summarization. To capture the topic information of a conversation and outline salient facts for the captured topics, this work proposes two topic-aware contrastive learning objectives, namely coherence detection and sub-summary generation, which are expected to implicitly model topic changes and handle the information scattering challenge for the dialogue summarization task. The proposed contrastive objectives are framed as auxiliary tasks for the primary dialogue summarization task, united via an alternative parameter updating strategy. Extensive experiments on benchmark datasets demonstrate that the proposed simple method significantly outperforms strong baselines and achieves new state-of-the-art performance. The code and trained models are publicly available.
|
https://aclanthology.org/2021.findings-emnlp.106
|
https://aclanthology.org/2021.findings-emnlp.106.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
TWT: Table with Written Text for Controlled Data-to-Text Generation
|
Tongliang Li, Lei Fang, Jian-Guang Lou, Zhoujun Li
|
Large pre-trained neural models have recently shown remarkable progress in text generation. In this paper, we propose to generate text conditioned on structured data (a table) and a prefix (the written text) by leveraging pre-trained models. We present a new data-to-text dataset, Table with Written Text (TWT), by repurposing two existing datasets: ToTTo and TabFact. TWT contains both factual and logical statements that are faithful to the structured data, aiming to serve as a useful benchmark for controlled text generation. Compared with existing data-to-text task settings, TWT is more intuitive: the prefix (usually provided by the user) controls the topic of the generated text. On TWT, existing methods usually output hallucinated text that is not faithful to the input. Therefore, we design a novel approach with table-aware attention visibility and a copy mechanism over the table. Experimental results show that our approach outperforms state-of-the-art methods under both automatic and human evaluation metrics.
|
https://aclanthology.org/2021.findings-emnlp.107
|
https://aclanthology.org/2021.findings-emnlp.107.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
ArabicTransformer: Efficient Large Arabic Language Model with Funnel Transformer and ELECTRA Objective
|
Sultan Alrowili, Vijay Shanker
|
Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.
|
https://aclanthology.org/2021.findings-emnlp.108
|
https://aclanthology.org/2021.findings-emnlp.108.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Which is Making the Contribution: Modulating Unimodal and Cross-modal Dynamics for Multimodal Sentiment Analysis
|
Ying Zeng, Sijie Mai, Haifeng Hu
|
Multimodal sentiment analysis (MSA) draws increasing attention with the availability of multimodal data. Further performance gains of MSA models are mainly hindered by two problems. On the one hand, recent MSA works mostly focus on learning cross-modal dynamics, but neglect to explore an optimal solution for the unimodal networks, which determines the lower limit of MSA models. On the other hand, noisy information hidden in each modality interferes with the learning of correct cross-modal dynamics. To address the above-mentioned problems, we propose a novel MSA framework, the Modulation Model for Multimodal Sentiment Analysis (M3SA), to identify the contribution of modalities and reduce the impact of noisy information, so as to better learn unimodal and cross-modal dynamics. Specifically, a modulation loss is designed to modulate the loss contribution based on the confidence of individual modalities in each utterance, so as to explore an optimal update solution for each unimodal network. Besides, contrary to most existing works, which fail to explicitly filter out noisy information, we devise a modality filter module to identify and filter out modality noise for the learning of correct cross-modal embeddings. Extensive experiments on public datasets demonstrate that our approach achieves state-of-the-art performance.
|
https://aclanthology.org/2021.findings-emnlp.109
|
https://aclanthology.org/2021.findings-emnlp.109.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
CVAE-based Re-anchoring for Implicit Discourse Relation Classification
|
Zujun Dou, Yu Hong, Yu Sun, Guodong Zhou
|
Training implicit discourse relation classifiers suffers from data sparsity. Variational AutoEncoder (VAE) appears to be the proper solution. It is because ideally VAE is capable of generating inexhaustible varying samples, and this facilitates selective data augmentation. However, our experiments show that coupling VAE with the RoBERTa-based classifier results in severe performance degradation. We ascribe the unusual phenomenon to erroneous sampling that would happen when VAE pursued variations. To overcome the problem, we develop a re-anchoring strategy, where Conditional VAE (CVAE) is used for estimating the risk of erroneous sampling, and meanwhile migrating the anchor to reduce the risk. The test results on PDTB v2.0 illustrate that, compared to the RoBERTa-based baseline, re-anchoring yields substantial improvements. Besides, we observe that re-anchoring can cooperate with other auxiliary strategies (transfer learning and interactive attention mechanism) to further improve the baseline, obtaining the F-scores of about 55%, 63%, 80% and 44% for the four main relation types (Comparison, Contingency, Expansion, Temporality) in the binary classification (Yes/No) scenario.
|
https://aclanthology.org/2021.findings-emnlp.110
|
https://aclanthology.org/2021.findings-emnlp.110.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Combining Curriculum Learning and Knowledge Distillation for Dialogue Generation
|
Qingqing Zhu, Xiuying Chen, Pengfei Wu, JunFei Liu, Dongyan Zhao
|
Curriculum learning, a machine training strategy that feeds training instances to the model from easy to hard, has been proven to facilitate the dialogue generation task. Meanwhile, knowledge distillation, a methodology for transferring knowledge from teacher to student networks, can yield significant performance boosts for student models. Hence, in this paper, we introduce a combination of curriculum learning and knowledge distillation for efficient dialogue generation models, where curriculum learning can help knowledge distillation from both the data and model aspects. To start with, from the data aspect, we cluster the training cases according to their complexity, which is calculated by various types of features such as sentence length and coherence between dialog pairs. Furthermore, we employ an adversarial training strategy to identify the complexity of cases at the model level. The intuition is that if a discriminator can tell whether a generated response comes from the teacher or the student, then the case is one that the student model has not yet adapted to and is therefore difficult. Finally, we use self-paced learning, an extension of curriculum learning, to assign weights for distillation. In summary, we arrange a hierarchical curriculum based on the above two aspects for the student model under the guidance of the teacher model. Experimental results demonstrate that our methods achieve improvements compared with competitive baselines.
|
https://aclanthology.org/2021.findings-emnlp.111
|
https://aclanthology.org/2021.findings-emnlp.111.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Improving End-to-End Task-Oriented Dialog System with A Simple Auxiliary Task
|
Yohan Lee
|
The paradigm of leveraging large pre-trained language models has made significant progress on benchmarks for task-oriented dialogue (TOD) systems. In this paper, we combine this paradigm with a multi-task learning framework for end-to-end TOD modeling by adopting span prediction as an auxiliary task. In the end-to-end setting, our model achieves new state-of-the-art results with combined scores of 108.3 and 107.5 on MultiWOZ 2.0 and MultiWOZ 2.1, respectively. Furthermore, we demonstrate that multi-task learning improves not only the performance of the model but also its generalization capability through domain adaptation experiments in the few-shot setting. The code is available at github.com/bepoetree/MTTOD.
|
https://aclanthology.org/2021.findings-emnlp.112
|
https://aclanthology.org/2021.findings-emnlp.112.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
EDTC: A Corpus for Discourse-Level Topic Chain Parsing
|
Longyin Zhang, Xin Tan, Fang Kong, Guodong Zhou
|
Discourse analysis has long been known to be fundamental in natural language processing. In this research, we present our insight on discourse-level topic chain (DTC) parsing which aims at discovering new topics and investigating how these topics evolve over time within an article. To address the lack of data, we contribute a new discourse corpus with DTC-style dependency graphs annotated upon news articles. In particular, we ensure the high reliability of the corpus by utilizing a two-step annotation strategy to build the data and filtering out the annotations with low confidence scores. Based on the annotated corpus, we introduce a simple yet robust system for automatic discourse-level topic chain parsing.
|
https://aclanthology.org/2021.findings-emnlp.113
|
https://aclanthology.org/2021.findings-emnlp.113.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Multilingual Neural Machine Translation: Can Linguistic Hierarchies Help?
|
Fahimeh Saleh, Wray Buntine, Gholamreza Haffari, Lan Du
|
Multilingual Neural Machine Translation (MNMT) trains a single NMT model that supports translation between multiple languages, rather than training separate models for different languages. Learning a single model can enhance low-resource translation by leveraging data from multiple languages. However, the performance of an MNMT model is highly dependent on the type of languages used in training, as transferring knowledge from a diverse set of languages degrades the translation performance due to negative transfer. In this paper, we propose a Hierarchical Knowledge Distillation (HKD) approach for MNMT which capitalises on language groups generated according to typological features and phylogeny of languages to overcome the issue of negative transfer. HKD generates a set of multilingual teacher-assistant models via a selective knowledge distillation mechanism based on the language groups, and then distills the ultimate multilingual model from those assistants in an adaptive way. Experimental results derived from the TED dataset with 53 languages demonstrate the effectiveness of our approach in avoiding the negative transfer effect in MNMT, leading to improved translation performance (about 1 BLEU point on average) compared to strong baselines.
|
https://aclanthology.org/2021.findings-emnlp.114
|
https://aclanthology.org/2021.findings-emnlp.114.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Self Question-answering: Aspect-based Sentiment Analysis by Role Flipped Machine Reading Comprehension
|
Guoxin Yu, Jiwei Li, Ling Luo, Yuxian Meng, Xiang Ao, Qing He
|
The pivot of unified Aspect-based Sentiment Analysis (ABSA) is to couple aspect terms with their corresponding opinion terms, which in turn can make sentiment prediction easier. In this paper, we investigate the unified ABSA task from the perspective of Machine Reading Comprehension (MRC) by observing that the aspect and the opinion terms can serve as the query and answer in MRC interchangeably. We propose a new paradigm named Role Flipped Machine Reading Comprehension (RF-MRC) to resolve it. At its heart, the predicted results of either Aspect Term Extraction (ATE) or Opinion Terms Extraction (OTE) are regarded as the queries, and the matched opinion or aspect terms are considered as answers. The queries and answers can be flipped for multi-hop detection. Finally, every matched aspect-opinion pair is predicted by the sentiment classifier. RF-MRC can solve the ABSA task without any additional data annotation or transformation. Experiments on three widely used benchmarks and a challenging dataset demonstrate the superiority of the proposed framework.
|
https://aclanthology.org/2021.findings-emnlp.115
|
https://aclanthology.org/2021.findings-emnlp.115.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Generalization in Text-based Games via Hierarchical Reinforcement Learning
|
Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Chengqi Zhang
|
Deep reinforcement learning provides a promising approach for text-based games in studying natural language communication between humans and artificial agents. However, generalization remains a big challenge, as agents depend critically on the complexity and variety of training tasks. In this paper, we address this problem by introducing a hierarchical framework built upon a knowledge graph (KG)-based RL agent. At the high level, a meta-policy is executed to decompose the whole game into a set of subtasks specified by textual goals and to select one of them based on the KG. Then a low-level sub-policy is executed to conduct goal-conditioned reinforcement learning. We carry out experiments on games with various difficulty levels and show that the proposed method enjoys favorable generalizability.
|
https://aclanthology.org/2021.findings-emnlp.116
|
https://aclanthology.org/2021.findings-emnlp.116.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
A Finer-grain Universal Dialogue Semantic Structures based Model For Abstractive Dialogue Summarization
|
Yuejie Lei, Fujia Zheng, Yuanmeng Yan, Keqing He, Weiran Xu
|
Although abstractive summarization models have achieved impressive results on document summarization tasks, their performance on dialogue modeling is much less satisfactory due to the crude and straightforward methods used for dialogue encoding. To address this problem, we propose a novel end-to-end Transformer-based model, FinDS, for abstractive dialogue summarization that leverages Finer-grain universal Dialogue semantic Structures to model dialogue and generates better summaries. Experiments on the SAMSum dataset show that FinDS outperforms various dialogue summarization approaches and achieves new state-of-the-art (SOTA) ROUGE results. Finally, we apply FinDS to a more complex scenario, showing the robustness of our model. We also release our source code.
|
https://aclanthology.org/2021.findings-emnlp.117
|
https://aclanthology.org/2021.findings-emnlp.117.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Constructing contrastive samples via summarization for text classification with limited annotations
|
Yangkai Du, Tengfei Ma, Lingfei Wu, Fangli Xu, Xuhong Zhang, Bo Long, Shouling Ji
|
Contrastive Learning has emerged as a powerful representation learning method and facilitates various downstream tasks especially when supervised data is limited. How to construct efficient contrastive samples through data augmentation is key to its success. Unlike vision tasks, the data augmentation method for contrastive learning has not been investigated sufficiently in language tasks. In this paper, we propose a novel approach to construct contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to gain better text representations which greatly benefit text classification tasks with limited annotations. To further improve the method, we mix up samples from different classes and add an extra regularization, named Mixsum, in addition to the cross-entropy-loss. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News, and IMDb) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and Mixsum regularization.
|
https://aclanthology.org/2021.findings-emnlp.118
|
https://aclanthology.org/2021.findings-emnlp.118.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
End-to-end Neural Information Status Classification
|
Yufang Hou
|
Most previous studies on information status (IS) classification and bridging anaphora recognition assume that the gold mention or syntactic tree information is given (Hou et al., 2013; Roesiger et al., 2018; Hou, 2020; Yu and Poesio, 2020). In this paper, we propose an end-to-end neural approach for information status classification. Our approach consists of a mention extraction component and an information status assignment component. During the inference time, our system takes a raw text as the input and generates mentions together with their information status. On the ISNotes corpus (Markert et al., 2012), we show that our information status assignment component achieves new state-of-the-art results on fine-grained IS classification based on gold mentions. Furthermore, our system performs significantly better than other baselines for both mention extraction and fine-grained IS classification in the end-to-end setting. Finally, we apply our system on BASHI (Roesiger, 2018) and SciCorp (Roesiger, 2016) to recognize referential bridging anaphora. We find that our end-to-end system trained on ISNotes achieves competitive results on bridging anaphora recognition compared to the previous state-of-the-art system that relies on syntactic information and is trained on the in-domain datasets (Yu and Poesio, 2020).
|
https://aclanthology.org/2021.findings-emnlp.119
|
https://aclanthology.org/2021.findings-emnlp.119.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
EventKE: Event-Enhanced Knowledge Graph Embedding
|
Zixuan Zhang, Hongwei Wang, Han Zhao, Hanghang Tong, Heng Ji
|
Relations in most of the traditional knowledge graphs (KGs) only reflect static and factual connections, but fail to represent the dynamic activities and state changes about entities. In this paper, we emphasize the importance of incorporating events in KG representation learning, and propose an event-enhanced KG embedding model EventKE. Specifically, given the original KG, we first incorporate event nodes by building a heterogeneous network, where entity nodes and event nodes are distributed on the two sides of the network inter-connected by event argument links. We then use entity-entity relations from the original KG and event-event temporal links to inner-connect entity and event nodes respectively. We design a novel and effective attention-based message passing method, which is conducted on entity-entity, event-entity, and event-event relations to fuse the event information into KG embeddings. Experimental results on real-world datasets demonstrate that events can greatly improve the quality of the KG embeddings on multiple downstream tasks.
|
https://aclanthology.org/2021.findings-emnlp.120
|
https://aclanthology.org/2021.findings-emnlp.120.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Modeling Concentrated Cross-Attention for Neural Machine Translation with Gaussian Mixture Model
|
Shaolei Zhang, Yang Feng
|
Cross-attention is an important component of neural machine translation (NMT), which is always realized by dot-product attention in previous methods. However, dot-product attention only considers the pair-wise correlation between words, resulting in dispersion when dealing with long sentences and neglect of source neighboring relationships. Inspired by linguistics, the above issues are caused by ignoring a type of cross-attention, called concentrated attention, which focuses on several central words and then spreads around them. In this work, we apply Gaussian Mixture Model (GMM) to model the concentrated attention in cross-attention. Experiments and analyses we conducted on three datasets show that the proposed method outperforms the baseline and has significant improvement on alignment quality, N-gram accuracy, and long sentence translation.
|
https://aclanthology.org/2021.findings-emnlp.121
|
https://aclanthology.org/2021.findings-emnlp.121.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Inconsistency Matters: A Knowledge-guided Dual-inconsistency Network for Multi-modal Rumor Detection
|
Mengzhu Sun, Xi Zhang, Jianqiang Ma, Yazheng Liu
|
Rumor spreaders are increasingly utilizing multimedia content to attract the attention and trust of news consumers. Though a number of rumor detection models have exploited multi-modal data, they seldom consider the inconsistent relationships between images and texts. Moreover, they also fail to find a powerful way to spot inconsistencies between the post contents and background knowledge. Motivated by the intuition that rumors are more likely to contain semantic inconsistencies, a novel Knowledge-guided Dual-inconsistency network is proposed to detect rumors with multimedia content. It can capture inconsistent semantics at both the cross-modal level and the content-knowledge level in one unified framework. Extensive experiments on two public real-world datasets demonstrate that our proposal can outperform the state-of-the-art baselines.
|
https://aclanthology.org/2021.findings-emnlp.122
|
https://aclanthology.org/2021.findings-emnlp.122.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation
|
Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, Xiaodan Liang
|
Pre-trained language models have shown remarkable results on various NLP tasks. Nevertheless, due to their bulky size and slow inference speed, it is hard to deploy them on edge devices. In this paper, we have a critical insight that improving the feed-forward network (FFN) in BERT has a higher gain than improving the multi-head attention (MHA) since the computational cost of FFN is 2~3 times larger than MHA. Hence, to compact BERT, we are devoted to designing efficient FFN as opposed to previous works that pay attention to MHA. Since FFN comprises a multilayer perceptron (MLP) that is essential in BERT optimization, we further design a thorough search space towards an advanced MLP and perform a coarse-to-fine mechanism to search for an efficient BERT architecture. Moreover, to accelerate searching and enhance model transferability, we employ a novel warm-up knowledge distillation strategy at each search stage. Extensive experiments show our searched EfficientBERT is 6.9× smaller and 4.4× faster than BERT_BASE, and has competitive performances on GLUE and SQuAD Benchmarks. Concretely, EfficientBERT attains a 77.7 average score on the GLUE test set, 0.7 higher than MobileBERT_TINY, and achieves an 85.3/74.5 F1 score on the SQuAD v1.1/v2.0 dev sets, 3.2/2.7 higher than TinyBERT_4 even without data augmentation. The code is released at https://github.com/cheneydon/efficient-bert.
|
https://aclanthology.org/2021.findings-emnlp.123
|
https://aclanthology.org/2021.findings-emnlp.123.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Uni-FedRec: A Unified Privacy-Preserving News Recommendation Framework for Model Training and Online Serving
|
Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
|
News recommendation techniques can help users on news platforms obtain their preferred news information. Most existing news recommendation methods rely on centrally stored user behavior data to train models and serve users. However, user data is usually highly privacy-sensitive, and centrally storing them in the news platform may raise privacy concerns and risks. In this paper, we propose a unified news recommendation framework, which can utilize user data locally stored in user clients to train models and serve users in a privacy-preserving way. Following a widely used paradigm in real-world recommender systems, our framework contains a stage for candidate news generation (i.e., recall) and a stage for candidate news ranking (i.e., ranking). At the recall stage, each client locally learns multiple interest representations from clicked news to comprehensively model user interests. These representations are uploaded to the server to recall candidate news from a large news pool, which are further distributed to the user client at the ranking stage for personalized news display. In addition, we propose an interest decomposer-aggregator method with perturbation noise to better protect private user information encoded in user interest representations. Besides, we collaboratively train both recall and ranking models on the data decentralized in a large number of user clients in a privacy-preserving way. Experiments on two real-world news datasets show that our method can outperform baseline methods and effectively protect user privacy.
|
https://aclanthology.org/2021.findings-emnlp.124
|
https://aclanthology.org/2021.findings-emnlp.124.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Mapping Language to Programs using Multiple Reward Components with Inverse Reinforcement Learning
|
Sayan Ghosh, Shashank Srivastava
|
Mapping natural language instructions to programs that computers can process is a fundamental challenge. Existing approaches focus on likelihood-based training or using reinforcement learning to fine-tune models based on a single reward. In this paper, we pose program generation from language as Inverse Reinforcement Learning. We introduce several interpretable reward components and jointly learn (1) a reward function that linearly combines them, and (2) a policy for program generation. Fine-tuning with our approach achieves significantly better performance than competitive methods using Reinforcement Learning (RL). On the VirtualHome framework, we get improvements of up to 9.0% on the Longest Common Subsequence metric and 14.7% on recall-based metrics over previous work on this framework (Puig et al., 2018). The approach is data-efficient, showing larger gains in performance in the low-data regime. Generated programs are also preferred by human evaluators over an RL-based approach, and rated higher on relevance, completeness, and human-likeness.
|
https://aclanthology.org/2021.findings-emnlp.125
|
https://aclanthology.org/2021.findings-emnlp.125.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Topic-Guided Abstractive Multi-Document Summarization
|
Peng Cui, Le Hu
|
A critical point of multi-document summarization (MDS) is to learn the relations among various documents. In this paper, we propose a novel abstractive MDS model, in which we represent multiple documents as a heterogeneous graph, taking semantic nodes of different granularities into account, and then apply a graph-to-sequence framework to generate summaries. Moreover, we employ a neural topic model to jointly discover latent topics that can act as cross-document semantic units to bridge different documents and provide global information to guide the summary generation. Since topic extraction can be viewed as a special type of summarization that “summarizes” texts into a more abstract format, i.e., a topic distribution, we adopt a multi-task learning strategy to jointly train the topic and summarization module, allowing the promotion of each other. Experimental results on the Multi-News dataset demonstrate that our model outperforms previous state-of-the-art MDS models on both Rouge scores and human evaluation, meanwhile learns high-quality topics.
|
https://aclanthology.org/2021.findings-emnlp.126
|
https://aclanthology.org/2021.findings-emnlp.126.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
An Edge-Enhanced Hierarchical Graph-to-Tree Network for Math Word Problem Solving
|
Qinzhuo Wu, Qi Zhang, Zhongyu Wei
|
Math word problem solving has attracted considerable research interest in recent years. Previous works have shown the effectiveness of utilizing graph neural networks to capture the relationships in the problem. However, these works did not carefully take the edge label information and the long-range word relationship across sentences into consideration. In addition, during generation, they focus on the most relevant areas of the currently generated word, while neglecting the rest of the problem. In this paper, we propose a novel Edge-Enhanced Hierarchical Graph-to-Tree model (EEH-G2T), in which the math word problems are represented as edge-labeled graphs. Specifically, an edge-enhanced hierarchical graph encoder is used to incorporate edge label information. This encoder updates the graph nodes hierarchically in two steps: sentence-level aggregation and problem-level aggregation. Furthermore, a tree-structured decoder with a split attention mechanism is applied to guide the model to pay attention to different parts of the input problem. Experimental results on the MAWPS and Math23K dataset showed that our EEH-G2T can effectively improve performance compared with state-of-the-art methods.
|
https://aclanthology.org/2021.findings-emnlp.127
|
https://aclanthology.org/2021.findings-emnlp.127.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
SciXGen: A Scientific Paper Dataset for Context-Aware Text Generation
|
Hong Chen, Hiroya Takamura, Hideki Nakayama
|
Generating text in scientific papers requires not only capturing the content contained within the given input but also frequently acquiring external information, referred to as context. We push scientific text generation forward by proposing a new task, namely context-aware text generation in the scientific domain, aiming at exploiting the contributions of context to generated texts. To this end, we present a novel, challenging, large-scale Scientific Paper Dataset for ConteXt-Aware Text Generation (SciXGen), consisting of 205,304 well-annotated papers with full references to widely-used objects (e.g., tables, figures, algorithms) in a paper. Using state-of-the-art models, we comprehensively benchmark the efficacy of our newly constructed SciXGen dataset for description and paragraph generation. Our dataset and benchmarks will be made publicly available in the hope of facilitating research on scientific text generation.
|
https://aclanthology.org/2021.findings-emnlp.128
|
https://aclanthology.org/2021.findings-emnlp.128.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Don’t Miss the Potential Customers! Retrieving Similar Ads to Improve User Targeting
|
Yi Feng, Ting Wang, Chuanyi Li, Vincent Ng, Jidong Ge, Bin Luo, Yucheng Hu, Xiaopeng Zhang
|
User targeting is an essential task in the modern advertising industry: given a package of ads for a particular category of products (e.g., green tea), identify the online users to whom the ad package should be targeted. A (ad package specific) user targeting model is typically trained using historical clickthrough data: positive instances correspond to users who have clicked on an ad in the package before, whereas negative instances correspond to users who have not clicked on any ads in the package that were displayed to them. Collecting a sufficient amount of positive training data for training an accurate user targeting model, however, is by no means trivial. This paper focuses on the development of a method for automatic augmentation of the set of positive training instances. Experimental results on two datasets, including a real-world company dataset, demonstrate the effectiveness of our proposed method.
|
https://aclanthology.org/2021.findings-emnlp.129
|
https://aclanthology.org/2021.findings-emnlp.129.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph
|
Nuttapong Chairatanakul, Noppayut Sriwatanasakdi, Nontawat Charoenphakdee, Xin Liu, Tsuyoshi Murata
|
In cross-lingual text classification, it is required that task-specific training data in high-resource source languages be available, where the task is identical to that of a low-resource target language. However, collecting such training data can be infeasible because of the labeling cost, task characteristics, and privacy concerns. This paper proposes an alternative solution that uses only task-independent word embeddings of high-resource languages and bilingual dictionaries. First, we construct a dictionary-based heterogeneous graph (DHG) from bilingual dictionaries. This opens up the possibility of using graph neural networks for cross-lingual transfer. The remaining challenge is the heterogeneity of the DHG because multiple languages are considered. To address this challenge, we propose a dictionary-based heterogeneous graph neural network (DHGNet) that effectively handles the heterogeneity of the DHG via two-step aggregations: word-level and language-level aggregations. Experimental results demonstrate that our method outperforms pretrained models even though it does not have access to large corpora. Furthermore, it performs well even when dictionaries contain many incorrect translations. This robustness allows the use of a wider range of dictionaries, such as automatically constructed and crowdsourced dictionaries, which are convenient for real-world applications.
|
https://aclanthology.org/2021.findings-emnlp.130
|
https://aclanthology.org/2021.findings-emnlp.130.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Improving Distantly-Supervised Named Entity Recognition with Self-Collaborative Denoising Learning
|
Xinghua Zhang, Bowen Yu, Tingwen Liu, Zhenyu Zhang, Jiawei Sheng, Xue Mengge, Hongbo Xu
|
Distantly supervised named entity recognition (DS-NER) efficiently reduces labor costs but intrinsically suffers from label noise due to the strong assumption of distant supervision. Typically, the wrongly labeled instances include large numbers of incomplete and inaccurate annotations, while most prior denoising works are only concerned with one kind of noise and fail to fully explore useful information in the training set. To address this issue, we propose a robust learning paradigm named Self-Collaborative Denoising Learning (SCDL), which jointly trains two teacher-student networks in a mutually-beneficial manner to iteratively perform noisy label refinement. Each network is designed to exploit reliable labels via self-denoising, and the two networks communicate with each other to explore unreliable annotations by collaborative denoising. Extensive experimental results on five real-world datasets demonstrate that SCDL is superior to state-of-the-art DS-NER denoising methods.
|
https://aclanthology.org/2021.findings-emnlp.131
|
https://aclanthology.org/2021.findings-emnlp.131.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Entity-Based Semantic Adequacy for Data-to-Text Generation
|
Juliette Faille, Albert Gatt, Claire Gardent
|
While powerful pre-trained language models have improved the fluency of text generation models, semantic adequacy (the ability to generate text that is semantically faithful to the input) remains an unsolved issue. In this paper, we introduce a novel automatic evaluation metric, Entity-Based Semantic Adequacy, which can be used to assess to what extent generation models that verbalise RDF (Resource Description Framework) graphs produce text that contains mentions of the entities occurring in the RDF input. This is important as RDF subject and object entities make up 2/3 of the input. We use our metric to compare 25 models from the WebNLG Shared Tasks and examine its correlation with results from human evaluations of semantic adequacy. We show that while our metric correlates with human evaluation scores, this correlation varies with the specifics of the human evaluation setup. This suggests that, in order to measure the entity-based adequacy of generated texts, an automatic metric such as the one proposed here might be more reliable than human evaluation measures, as it is less subjective and more focused on correct verbalisation of the input.
|
https://aclanthology.org/2021.findings-emnlp.132
|
https://aclanthology.org/2021.findings-emnlp.132.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization
|
Xinnuo Xu, Ondřej Dušek, Shashi Narayan, Verena Rieser, Ioannis Konstas
|
One of the most challenging aspects of current single-document news summarization is that the summary often contains ‘extrinsic hallucinations’, i.e., facts that are not present in the source document and are often derived via world knowledge. This causes summarisation systems to act more like open-ended language models that tend to hallucinate erroneous facts. In this paper, we mitigate this problem with the help of multiple supplementary resource documents assisting the task. We present a new dataset, MiraNews, and benchmark existing summarisation models on it. In contrast to multi-document summarization, which addresses multiple events from several source documents, we still aim at generating a summary for a single document. We show via data analysis that the models are not solely to blame: more than 27% of facts mentioned in the gold summaries of MiraNews are better grounded in the assisting documents than in the main source articles. An error analysis of summaries generated by pretrained models fine-tuned on MiraNews reveals that this has an even bigger effect on models: assisted summarisation reduces hallucinations by 55% compared to single-document summarisation models trained on the main article only.
|
https://aclanthology.org/2021.findings-emnlp.133
|
https://aclanthology.org/2021.findings-emnlp.133.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
A Conditional Generative Matching Model for Multi-lingual Reply Suggestion
|
Budhaditya Deb, Guoqing Zheng, Milad Shokouhi, Ahmed Hassan Awadallah
|
We study the problem of a multilingual automated reply suggestion (RS) model serving many languages simultaneously. Multilingual models are often challenged by model capacity and severe data distribution skew across languages. While prior works largely focus on monolingual models, we propose Conditional Generative Matching models (CGM), optimized within a Variational Autoencoder framework, to address challenges arising from multilingual RS. CGM does so with expressive message-conditional priors, mixture densities to enhance multilingual data representation, latent alignment for language discrimination, and effective variational optimization techniques for training multilingual RS. The enhancements result in performance that exceeds competitive baselines in relevance (ROUGE score) by more than 10% on average, and by 16% for low-resource languages. CGM also shows remarkable improvements in diversity (80%), illustrating its expressiveness in representing multilingual data.
|
https://aclanthology.org/2021.findings-emnlp.134
|
https://aclanthology.org/2021.findings-emnlp.134.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Rethinking Sentiment Style Transfer
|
Ping Yu, Yang Zhao, Chunyuan Li, Changyou Chen
|
Though remarkable efforts have been made in non-parallel text style transfer, the evaluation system is unsatisfactory. It always evaluates over samples from only one checkpoint of the model and compares three metrics, i.e., transfer accuracy, BLEU score, and PPL score. In this paper, we argue the inappropriateness of both existing evaluation metrics and the evaluation method. Specifically, for evaluation metrics, we make a detailed analysis and comparison from three aspects: style transfer, content preservation, and naturalness; for the evaluation method, we reiterate the fallacy of picking one checkpoint for model comparison. As a result, we establish a robust evaluation method by examining the trade-off between style transfer and naturalness, and between content preservation and naturalness. Notably, we elaborate the human evaluation and automatically identify the inaccurate measurement of content preservation computed by the BLEU score. To overcome this issue, we propose a graph-based method to extract attribute content and attribute-independent content from input sentences in the YELP dataset and IMDB dataset. With the modified datasets, we design a new evaluation metric called “attribute hit” and propose an efficient regularization to leverage the attribute-dependent content and attribute-independent content as guiding signals. Experimental results have demonstrated the effectiveness of the proposed strategy.
|
https://aclanthology.org/2021.findings-emnlp.135
|
https://aclanthology.org/2021.findings-emnlp.135.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
HypoGen: Hyperbole Generation with Commonsense and Counterfactual Knowledge
|
Yufei Tian, Arvind krishna Sridhar, Nanyun Peng
|
A hyperbole is an intentional and creative exaggeration not to be taken literally. Despite its ubiquity in daily life, the computational explorations of hyperboles are scarce. In this paper, we tackle the under-explored and challenging task: sentence-level hyperbole generation. We start with a representative syntactic pattern for intensification and systematically study the semantic (commonsense and counterfactual) relationships between each component in such hyperboles. We then leverage commonsense and counterfactual inference to generate hyperbole candidates based on our findings from the pattern, and train neural classifiers to rank and select high-quality hyperboles. Automatic and human evaluations show that our generation method is able to generate hyperboles with high success rate, intensity, funniness, and creativity.
|
https://aclanthology.org/2021.findings-emnlp.136
|
https://aclanthology.org/2021.findings-emnlp.136.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Profiling News Discourse Structure Using Explicit Subtopic Structures Guided Critics
|
Prafulla Kumar Choubey, Ruihong Huang
|
We present an actor-critic framework to induce subtopical structures in a news article for news discourse profiling. The model uses multiple critics that act according to known subtopic structures while the actor aims to outperform them. The content structures constitute sentences that represent latent subtopic boundaries. Then, we introduce a hierarchical neural network that uses the identified subtopic boundary sentences to model multi-level interaction between sentences, subtopics, and the document. Experimental results and analyses on the NewsDiscourse corpus show that the actor model learns to effectively segment a document into subtopics and improves the performance of the hierarchical model on the news discourse profiling task.
|
https://aclanthology.org/2021.findings-emnlp.137
|
https://aclanthology.org/2021.findings-emnlp.137.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection
|
Iftitahu Nimah, Meng Fang, Vlado Menkovski, Mykola Pechenizkiy
|
The ability to detect Out-of-Domain (OOD) inputs has been a critical requirement in many real-world NLP applications, for example, intent classification in dialogue systems, because the inclusion of unsupported OOD inputs may lead to catastrophic failure of systems. However, it remains an empirical question whether current methods can tackle such problems reliably in a realistic scenario where zero OOD training data is available. In this study, we propose ProtoInfoMax, a new architecture that extends Prototypical Networks to simultaneously process in-domain and OOD sentences via a Mutual Information Maximization (InfoMax) objective. Experimental results show that our proposed method can substantially improve performance, by up to 20%, for OOD detection in low-resource settings of text classification. We also show that ProtoInfoMax is less prone to the typical overconfidence errors of neural networks, leading to more reliable prediction results.
|
https://aclanthology.org/2021.findings-emnlp.138
|
https://aclanthology.org/2021.findings-emnlp.138.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Learning from Language Description: Low-shot Named Entity Recognition via Decomposed Framework
|
Yaqing Wang, Haoda Chu, Chao Zhang, Jing Gao
|
In this work, we study the problem of named entity recognition (NER) in a low resource scenario, focusing on few-shot and zero-shot settings. Built upon large-scale pre-trained language models, we propose a novel NER framework, namely SpanNER, which learns from natural language supervision and enables the identification of never-seen entity classes without using in-domain labeled data. We perform extensive experiments on 5 benchmark datasets and evaluate the proposed method in the few-shot learning, domain transfer and zero-shot learning settings. The experimental results show that the proposed method can bring 10%, 23% and 26% improvements in average over the best baselines in few-shot learning, domain transfer and zero-shot learning settings respectively.
|
https://aclanthology.org/2021.findings-emnlp.139
|
https://aclanthology.org/2021.findings-emnlp.139.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
BERT might be Overkill: A Tiny but Effective Biomedical Entity Linker based on Residual Convolutional Neural Networks
|
Tuan Lai, Heng Ji, ChengXiang Zhai
|
Biomedical entity linking is the task of linking entity mentions in a biomedical document to referent entities in a knowledge base. Recently, many BERT-based models have been introduced for the task. While these models achieve competitive results on many datasets, they are computationally expensive and contain about 110M parameters. Little is known about the factors contributing to their impressive performance and whether the over-parameterization is needed. In this work, we shed some light on the inner workings of these large BERT-based models. Through a set of probing experiments, we have found that the entity linking performance only changes slightly when the input word order is shuffled or when the attention scope is limited to a fixed window size. From these observations, we propose an efficient convolutional neural network with residual connections for biomedical entity linking. Because of the sparse connectivity and weight sharing properties, our model has a small number of parameters and is highly efficient. On five public datasets, our model achieves comparable or even better linking accuracy than the state-of-the-art BERT-based models while having about 60 times fewer parameters.
|
https://aclanthology.org/2021.findings-emnlp.140
|
https://aclanthology.org/2021.findings-emnlp.140.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Char2Subword: Extending the Subword Embedding Space Using Robust Character Compositionality
|
Gustavo Aguilar, Bryan McCann, Tong Niu, Nazneen Rajani, Nitish Shirish Keskar, Thamar Solorio
|
Byte-pair encoding (BPE) is a ubiquitous algorithm in the subword tokenization process of language models as it provides multiple benefits. However, this process is solely based on pre-training data statistics, making it hard for the tokenizer to handle infrequent spellings. On the other hand, though robust to misspellings, pure character-level models often lead to unreasonably long sequences and make it harder for the model to learn meaningful words. To alleviate these challenges, we propose a character-based subword module (char2subword) that learns the subword embedding table in pre-trained models like BERT. Our char2subword module builds representations from characters out of the subword vocabulary, and it can be used as a drop-in replacement for the subword embedding table. The module is robust to character-level alterations such as misspellings, word inflection, casing, and punctuation. We further integrate it with BERT through pre-training while keeping the BERT transformer parameters fixed, thus providing a practical method. Finally, we show that incorporating our module into mBERT significantly improves the performance on the social media linguistic code-switching evaluation (LinCE) benchmark.
|
https://aclanthology.org/2021.findings-emnlp.141
|
https://aclanthology.org/2021.findings-emnlp.141.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Exploring Multitask Learning for Low-Resource Abstractive Summarization
|
Ahmed Magooda, Diane Litman, Mohamed Elaraby
|
This paper explores the effect of using multitask learning for abstractive summarization in the context of small training corpora. In particular, we incorporate four different tasks (extractive summarization, language modeling, concept detection, and paraphrase detection) both individually and in combination, with the goal of enhancing the target task of abstractive summarization via multitask learning. We show that for many task combinations, a model trained in a multitask setting outperforms a model trained only for abstractive summarization, with no additional summarization data introduced. Additionally, we do a comprehensive search and find that certain tasks (e.g. paraphrase detection) consistently benefit abstractive summarization, not only when combined with other tasks but also when using different architectures and training corpora.
|
https://aclanthology.org/2021.findings-emnlp.142
|
https://aclanthology.org/2021.findings-emnlp.142.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Conical Classification For Efficient One-Class Topic Determination
|
Sameer Khanna
|
As the Internet grows in size, so does the amount of text based information that exists. For many application spaces it is paramount to isolate and identify texts that relate to a particular topic. While one-class classification would be ideal for such analysis, there is a relative lack of research regarding efficient approaches with high predictive power. By noting that the range of documents we wish to identify can be represented as positive linear combinations of the Vector Space Model representing our text, we propose Conical classification, an approach that allows us to identify if a document is of a particular topic in a computationally efficient manner. We also propose Normal Exclusion, a modified version of Bi-Normal Separation that makes it more suitable within the one-class classification context. We show in our analysis that our approach not only has higher predictive power on our datasets, but is also faster to compute.
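A minimal sketch of the positive-linear-combination idea follows (not the paper's exact algorithm, and the Normal Exclusion feature-selection step is omitted): membership in the conical hull of in-topic vectors can be approximated with a non-negative least-squares fit, where a small residual suggests the document belongs to the topic.

```python
# Minimal sketch (not the paper's exact algorithm): test whether a new document
# vector lies approximately in the conical hull of known in-topic vectors,
# i.e. whether it can be written as a positive linear combination of them.
import numpy as np
from scipy.optimize import nnls

def conical_score(topic_docs: np.ndarray, query: np.ndarray) -> float:
    """topic_docs: (n_docs, n_features) vector-space representations of in-topic
    documents; query: (n_features,). Returns the non-negative least-squares
    residual; small residuals suggest the query falls inside the topic's cone."""
    _, residual = nnls(topic_docs.T, query)
    return residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    topic = rng.random((20, 50))                  # in-topic document vectors
    in_cone = topic[:3].sum(axis=0) * 0.7         # a positive combination of them
    off_topic = rng.random(50) * 2.0
    print(conical_score(topic, in_cone))          # ~0: inside the cone
    print(conical_score(topic, off_topic))        # noticeably larger residual
```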
|
https://aclanthology.org/2021.findings-emnlp.143
|
https://aclanthology.org/2021.findings-emnlp.143.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Improving Dialogue State Tracking with Turn-based Loss Function and Sequential Data Augmentation
|
Jarana Manotumruksa, Jeff Dalton, Edgar Meij, Emine Yilmaz
|
While state-of-the-art Dialogue State Tracking (DST) models show promising results, all of them rely on a traditional cross-entropy loss function during the training process, which may not be optimal for improving the joint goal accuracy. Although several approaches recently propose augmenting the training set by copying user utterances and replacing the real slot values with other possible or even similar values, they are not effective at improving the performance of existing DST models. To address these challenges, we propose a Turn-based Loss Function (TLF) that penalises the model if it inaccurately predicts a slot value at the early turns more so than in later turns in order to improve joint goal accuracy. We also propose a simple but effective Sequential Data Augmentation (SDA) algorithm to generate more complex user utterances and system responses to effectively train existing DST models. Experimental results on two standard DST benchmark collections demonstrate that our proposed TLF and SDA techniques significantly improve the effectiveness of the state-of-the-art DST model, yielding approximately 7-8% relative reduction in error, and achieve a new state-of-the-art joint goal accuracy of 59.50 and 54.90 on MultiWOZ2.1 and MultiWOZ2.2, respectively.
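A hedged sketch of the turn-weighting idea (the exact scheme in the paper may differ): a cross-entropy loss whose per-example weights decay with the dialogue turn, so early-turn mistakes are penalised more heavily.

```python
# Minimal sketch (the exact weighting in the paper may differ): a turn-aware
# cross-entropy that penalises slot-value mistakes at early turns more heavily.
import torch
import torch.nn.functional as F

def turn_based_loss(logits: torch.Tensor, targets: torch.Tensor,
                    turn_ids: torch.Tensor, decay: float = 0.9) -> torch.Tensor:
    """logits: (N, num_values) slot-value scores, targets: (N,) gold value ids,
    turn_ids: (N,) 0-based dialogue turn of each prediction. Early turns get
    weight 1.0, later turns are geometrically down-weighted (hypothetical scheme)."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.tensor(decay) ** turn_ids.float()
    return (weights * per_example).sum() / weights.sum()

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(6, 10)
    targets = torch.randint(0, 10, (6,))
    turns = torch.tensor([0, 0, 1, 2, 3, 5])
    print(turn_based_loss(logits, targets, turns))
```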
|
https://aclanthology.org/2021.findings-emnlp.144
|
https://aclanthology.org/2021.findings-emnlp.144.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling
|
Huiyuan Xie, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, Ann Copestake
|
Human conversations naturally evolve around different topics and fluently move between them. In research on dialog systems, the ability to actively and smoothly transition to new topics is often ignored. In this paper we introduce TIAGE, a new topic-shift aware dialog benchmark constructed utilizing human annotations on topic shifts. Based on TIAGE, we introduce three tasks to investigate different scenarios of topic-shift modeling in dialog settings: topic-shift detection, topic-shift triggered response generation and topic-aware dialog generation. Experiments on these tasks show that the topic-shift signals in TIAGE are useful for topic-shift response generation. On the other hand, dialog systems still struggle to decide when to change topic. This indicates further research is needed in topic-shift aware dialog modeling.
|
https://aclanthology.org/2021.findings-emnlp.145
|
https://aclanthology.org/2021.findings-emnlp.145.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Optimal Neural Program Synthesis from Multimodal Specifications
|
Xi Ye, Qiaochu Chen, Isil Dillig, Greg Durrett
|
Multimodal program synthesis, which leverages different types of user input to synthesize a desired program, is an attractive way to scale program synthesis to challenging settings; however, it requires integrating noisy signals from the user, like natural language, with hard constraints on the program’s behavior. This paper proposes an optimal neural synthesis approach where the goal is to find a program that satisfies user-provided constraints while also maximizing the program’s score with respect to a neural model. Specifically, we focus on multimodal synthesis tasks in which the user intent is expressed using a combination of natural language (NL) and input-output examples. At the core of our method is a top-down recurrent neural model that places distributions over abstract syntax trees conditioned on the NL input. This model not only allows for efficient search over the space of syntactically valid programs, but it allows us to leverage automated program analysis techniques for pruning the search space based on infeasibility of partial programs with respect to the user’s constraints. The experimental results on a multimodal synthesis dataset (StructuredRegex) show that our method substantially outperforms prior state-of-the-art techniques in terms of accuracy and efficiency, and finds model-optimal programs more frequently.
|
https://aclanthology.org/2021.findings-emnlp.146
|
https://aclanthology.org/2021.findings-emnlp.146.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Sent2Span: Span Detection for PICO Extraction in the Biomedical Text without Span Annotations
|
Shifeng Liu, Yifang Sun, Bing Li, Wei Wang, Florence T. Bourgeois, Adam G. Dunn
|
The rapid growth in published clinical trials makes it difficult to maintain up-to-date systematic reviews, which require finding all relevant trials. This leads to policy and practice decisions based on out-of-date, incomplete, and biased subsets of available clinical evidence. Extracting and then normalising Population, Intervention, Comparator, and Outcome (PICO) information from clinical trial articles may be an effective way to automatically assign trials to systematic reviews and avoid searching and screening, the two most time-consuming systematic review processes. We propose and test a novel approach to PICO span detection. The major difference between our proposed method and previous approaches comes from detecting spans without needing annotated span data and using only crowdsourced sentence-level annotations. Experiments on two datasets show that our PICO span detection achieves much higher recall than fully supervised methods, with PICO sentence detection at least as good as human annotations. By removing the reliance on expert annotations for span detection, this work could be used in a human-machine pipeline for turning low-quality, crowdsourced, and sentence-level PICO annotations into structured information that can be used to quickly assign trials to relevant systematic reviews.
|
https://aclanthology.org/2021.findings-emnlp.147
|
https://aclanthology.org/2021.findings-emnlp.147.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
When in Doubt: Improving Classification Performance with Alternating Normalization
|
Menglin Jia, Austin Reiter, Ser-Nam Lim, Yoav Artzi, Claire Cardie
|
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification. CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution using the predicted class distributions of high-confidence validation examples. CAN is easily applicable to any probabilistic classifier, with minimal computation overhead. We analyze the properties of CAN using simulated experiments, and empirically demonstrate its effectiveness across a diverse set of classification tasks.
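A minimal sketch of alternating normalization follows (the paper's exact procedure, e.g. its use of class priors and a scaling exponent, is not reproduced): the uncertain example's distribution is stacked with high-confidence distributions and alternately normalized over classes and examples.

```python
# Minimal sketch (the paper's exact procedure is not reproduced): alternating
# column/row normalization of a probability matrix built from high-confidence
# examples plus one uncertain example whose distribution gets re-adjusted.
import numpy as np

def alternating_normalization(dists: np.ndarray, steps: int = 3) -> np.ndarray:
    """dists: (n_examples, n_classes) rows are predicted class distributions,
    with the uncertain example in the last row. Returns its re-adjusted distribution."""
    a = dists.copy()
    for _ in range(steps):
        a = a / a.sum(axis=0, keepdims=True)   # column step: rebalance mass per class
        a = a / a.sum(axis=1, keepdims=True)   # row step: back to per-example distributions
    return a[-1]

if __name__ == "__main__":
    high_conf = np.array([[0.90, 0.05, 0.05],
                          [0.85, 0.10, 0.05],
                          [0.05, 0.90, 0.05]])
    uncertain = np.array([[0.40, 0.35, 0.25]])
    adjusted = alternating_normalization(np.vstack([high_conf, uncertain]))
    print(adjusted, adjusted.sum())
```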
|
https://aclanthology.org/2021.findings-emnlp.148
|
https://aclanthology.org/2021.findings-emnlp.148.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
APGN: Adversarial and Parameter Generation Networks for Multi-Source Cross-Domain Dependency Parsing
|
Ying Li, Meishan Zhang, Zhenghua Li, Min Zhang, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan
|
Thanks to the strong representation learning capability of deep learning, especially pre-training techniques with language model loss, dependency parsing has achieved great performance boost in the in-domain scenario with abundant labeled training data for target domains. However, the parsing community has to face the more realistic setting where the parsing performance drops drastically when labeled data only exists for several fixed out-domains. In this work, we propose a novel model for multi-source cross-domain dependency parsing. The model consists of two components, i.e., a parameter generation network for distinguishing domain-specific features, and an adversarial network for learning domain-invariant representations. Experiments on a recently released NLPCC-2019 dataset for multi-domain dependency parsing show that our model can consistently improve cross-domain parsing performance by about 2 points in averaged labeled attachment accuracy (LAS) over strong BERT-enhanced baselines. Detailed analysis is conducted to gain more insights on contributions of the two components.
|
https://aclanthology.org/2021.findings-emnlp.149
|
https://aclanthology.org/2021.findings-emnlp.149.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
“Let Your Characters Tell Their Story”: A Dataset for Character-Centric Narrative Understanding
|
Faeze Brahman, Meng Huang, Oyvind Tafjord, Chao Zhao, Mrinmaya Sachan, Snigdha Chaturvedi
|
When reading a literary piece, readers often make inferences about various characters’ roles, personalities, relationships, intents, actions, etc. While humans can readily draw upon their past experiences to build such a character-centric view of the narrative, understanding characters in narratives can be a challenging task for machines. To encourage research in this field of character-centric narrative understanding, we present LiSCU – a new dataset of literary pieces and their summaries paired with descriptions of characters that appear in them. We also introduce two new tasks on LiSCU: Character Identification and Character Description Generation. Our experiments with several pre-trained language models adapted for these tasks demonstrate that there is a need for better models of narrative comprehension.
|
https://aclanthology.org/2021.findings-emnlp.150
|
https://aclanthology.org/2021.findings-emnlp.150.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Towards Developing a Multilingual and Code-Mixed Visual Question Answering System by Knowledge Distillation
|
Humair Raj Khan, Deepak Gupta, Asif Ekbal
|
Pre-trained language-vision models have shown remarkable performance on the visual question answering (VQA) task. However, most pre-trained models are trained by only considering monolingual learning, especially for resource-rich languages like English. Training such models for multilingual setups demands high computing resources and a multilingual language-vision dataset, which hinders their application in practice. To alleviate these challenges, we propose a knowledge distillation approach to extend an English language-vision model (teacher) into an equally effective multilingual and code-mixed model (student). Unlike the existing knowledge distillation methods, which only use the output from the last layer of the teacher network for distillation, our student model learns and imitates the teacher from multiple intermediate layers (language and vision encoders) with appropriately designed distillation objectives for incremental knowledge extraction. We also create a large-scale multilingual and code-mixed VQA dataset in eleven different language setups considering multiple Indian and European languages. Experimental results and in-depth analysis show the effectiveness of the proposed VQA model over the pre-trained language-vision models on eleven diverse language setups.
|
https://aclanthology.org/2021.findings-emnlp.151
|
https://aclanthology.org/2021.findings-emnlp.151.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis
|
Yunlong Liang, Fandong Meng, Jinchao Zhang, Yufeng Chen, Jinan Xu, Jie Zhou
|
Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner. However, previous approaches do not well exploit the interactive relations among three subtasks and do not pertinently leverage the easily available document-level labeled domain/sentiment knowledge, which restricts their performances. To address these issues, we propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA. For one thing, through the interactive correlations between the ABSA subtasks, our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm, that is, any two of the three subtasks will help the third one. For another, our IMKTN pertinently transfers the document-level knowledge, i.e., domain-specific and sentiment-related knowledge, to the aspect-level subtasks to further enhance the corresponding performance. Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.
|
https://aclanthology.org/2021.findings-emnlp.152
|
https://aclanthology.org/2021.findings-emnlp.152.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Semantic Alignment with Calibrated Similarity for Multilingual Sentence Embedding
|
Jiyeon Ham, Eun-Sol Kim
|
Measuring the similarity score between a pair of sentences in different languages is the essential requisite for multilingual sentence embedding methods. Predicting the similarity score consists of two sub-tasks, which are monolingual similarity evaluation and multilingual sentence retrieval. However, conventional methods have mainly tackled only one of the sub-tasks and therefore showed biased performances. In this paper, we suggest a novel and strong method for multilingual sentence embedding, which shows performance improvement on both sub-tasks, consequently resulting in robust predictions of multilingual similarity scores. The suggested method consists of two parts: to learn semantic similarity of sentences in the pivot language and then to extend the learned semantic structure to different languages. To align semantic structures across different languages, we introduce a teacher-student network. The teacher network distills the knowledge of the pivot language to different languages of the student network. During the distillation, the parameters of the teacher network are updated with the slow-moving average. Together with the distillation and the parameter updating, the semantic structure of the student network can be directly aligned across different languages while preserving the ability to measure the semantic similarity. Thus, the multilingual training method drives performance improvement on multilingual similarity evaluation. The suggested model achieves the state-of-the-art performance on extended STS 2017 multilingual similarity evaluation as well as two sub-tasks, which are extended STS 2017 monolingual similarity evaluation and Tatoeba multilingual retrieval in 14 languages.
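A hedged sketch of the two training mechanisms described above, with a stand-in encoder: the teacher's parameters track a slow-moving (exponential) average of the student, and a distillation loss pulls the student's embedding of a target-language sentence toward the teacher's embedding of the pivot-language sentence. The encoder, loss, and momentum value here are assumptions for illustration only.

```python
# Minimal sketch (hypothetical encoder and hyperparameters): an EMA teacher whose
# parameters are a slow-moving average of the student, plus a distillation loss that
# aligns the student's embedding of a target-language sentence with the teacher's
# embedding of the pivot-language (e.g. English) sentence.
import copy
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):            # stand-in for a real multilingual encoder
    def __init__(self, dim: int = 32):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999):
    # teacher <- momentum * teacher + (1 - momentum) * student
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def distillation_loss(teacher, student, pivot_batch, target_batch):
    with torch.no_grad():
        anchor = teacher(pivot_batch)        # pivot-language embedding, no gradient
    return nn.functional.mse_loss(student(target_batch), anchor)

if __name__ == "__main__":
    student = SentenceEncoder()
    teacher = copy.deepcopy(student)
    pivot, target = torch.randn(4, 32), torch.randn(4, 32)
    loss = distillation_loss(teacher, student, pivot, target)
    loss.backward()
    ema_update(teacher, student)
    print(float(loss))
```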
|
https://aclanthology.org/2021.findings-emnlp.153
|
https://aclanthology.org/2021.findings-emnlp.153.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
fBERT: A Neural Transformer for Identifying Offensive Content
|
Diptanu Sarkar, Marcos Zampieri, Tharindu Ranasinghe, Alexander Ororbia
|
Transformer-based models such as BERT, XLNET, and XLM-R have achieved state-of-the-art performance across various NLP tasks including the identification of offensive language and hate speech, an important problem in social media. In this paper, we present fBERT, a BERT model retrained on SOLID, the largest English offensive language identification corpus available with over 1.4 million offensive instances. We evaluate fBERT’s performance on identifying offensive content on multiple English datasets and we test several thresholds for selecting instances from SOLID. The fBERT model will be made freely available to the community.
|
https://aclanthology.org/2021.findings-emnlp.154
|
https://aclanthology.org/2021.findings-emnlp.154.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
WIKIBIAS: Detecting Multi-Span Subjective Biases in Language
|
Yang Zhong, Jingfeng Yang, Wei Xu, Diyi Yang
|
Biases continue to be prevalent in modern text and media, especially subjective bias – a special type of bias that introduces improper attitudes or presents a statement with the presupposition of truth. To tackle the problem of detecting and further mitigating subjective bias, we introduce a manually annotated parallel corpus WIKIBIAS with more than 4,000 sentence pairs from Wikipedia edits. This corpus contains annotations towards both sentence-level bias types and token-level biased segments. We present systematic analyses of our dataset and results achieved by a set of state-of-the-art baselines in terms of three tasks: bias classification, tagging biased segments, and neutralizing biased text. We find that current models still struggle with detecting multi-span biases despite their reasonable performances, suggesting that our dataset can serve as a useful research benchmark. We also demonstrate that models trained on our dataset can generalize well to multiple domains such as news and political speeches.
|
https://aclanthology.org/2021.findings-emnlp.155
|
https://aclanthology.org/2021.findings-emnlp.155.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
UnClE: Explicitly Leveraging Semantic Similarity to Reduce the Parameters of Word Embeddings
|
Zhi Li, Yuchen Zhai, Chengyu Wang, Minghui Qiu, Kailiang Li, Yin Zhang
|
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, which limits their application on mobile devices. Researchers have employed many approaches, e.g. adaptive inputs, to reduce the parameters of word embeddings. However, existing methods rarely pay attention to semantic information. In this paper, we propose a novel method called Unique and Class Embeddings (UnClE), which explicitly leverages semantic similarity with weight sharing to reduce the dimensionality of word embeddings. Inspired by the fact that words with similar semantics can share part of their weights, we divide the embeddings of words into two parts: a unique embedding and a class embedding. The former is a one-to-one mapping like a traditional embedding, while the latter is a many-to-one mapping that learns the representation of class information. Our method is suitable for both word-level and sub-word level models and can be used to reduce both input and output embeddings. Experimental results on the standard WMT 2014 English-German dataset show that our method is able to reduce the parameters of word embeddings by more than 11x while retaining about 93% of the performance in BLEU. For the language modeling task, our model can reduce word embeddings by 6x or 11x on the PTB/WT2 datasets at the cost of a certain degree of performance degradation.
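A minimal, hypothetical sketch of the unique-plus-class embedding idea (sizes and the word-to-class assignment are assumptions, not the paper's): each word's representation concatenates a small per-word embedding with a class embedding shared by semantically similar words, so most embedding parameters are shared.

```python
# Minimal sketch (hypothetical sizes and lookup scheme): a word representation that
# combines a small per-word "unique" embedding with a shared "class" embedding,
# so semantically similar words share most of their parameters.
import torch
import torch.nn as nn

class UniqueClassEmbedding(nn.Module):
    def __init__(self, vocab_size: int, num_classes: int,
                 unique_dim: int, class_dim: int, word_to_class: torch.Tensor):
        super().__init__()
        self.unique = nn.Embedding(vocab_size, unique_dim)     # one-to-one mapping
        self.shared = nn.Embedding(num_classes, class_dim)     # many-to-one mapping
        self.register_buffer("word_to_class", word_to_class)   # (vocab_size,)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        u = self.unique(token_ids)
        c = self.shared(self.word_to_class[token_ids])
        return torch.cat([u, c], dim=-1)

if __name__ == "__main__":
    vocab_size, num_classes = 10000, 256
    word_to_class = torch.randint(0, num_classes, (vocab_size,))
    emb = UniqueClassEmbedding(vocab_size, num_classes, 16, 112, word_to_class)
    tokens = torch.randint(0, vocab_size, (2, 5))
    print(emb(tokens).shape)   # (2, 5, 128), with most parameters shared across words
```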
|
https://aclanthology.org/2021.findings-emnlp.156
|
https://aclanthology.org/2021.findings-emnlp.156.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Grounded Graph Decoding improves Compositional Generalization in Question Answering
|
Yu Gai, Paras Jain, Wendi Zhang, Joseph Gonzalez, Dawn Song, Ion Stoica
|
Question answering models struggle to generalize to novel compositions of training patterns. Current end-to-end models learn a flat input embedding which can lose input syntax context. Prior approaches improve generalization by learning permutation invariant models, but these methods do not scale to more complex train-test splits. We propose Grounded Graph Decoding, a method to improve compositional generalization of language representations by grounding structured predictions with an attention mechanism. Grounding enables the model to retain syntax information from the input that significantly improves generalization to complex inputs. By predicting a structured graph containing conjunctions of query clauses, we learn a group invariant representation without making assumptions on the target domain. Our model performs competitively on the Compositional Freebase Questions (CFQ) dataset, a challenging benchmark for compositional generalization in question answering. Especially, our model effectively solves the MCD1 split with 98% accuracy. All source is available at https://github.com/gaiyu0/cfq.
|
https://aclanthology.org/2021.findings-emnlp.157
|
https://aclanthology.org/2021.findings-emnlp.157.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Enhancing Visual Dialog Questioner with Entity-based Strategy Learning and Augmented Guesser
|
Duo Zheng, Zipeng Xu, Fandong Meng, Xiaojie Wang, Jiaan Wang, Jie Zhou
|
Considering the importance of building a good Visual Dialog (VD) Questioner, many researchers study the topic under a Q-Bot-A-Bot image-guessing game setting, where the Questioner needs to raise a series of questions to collect information about an undisclosed image. Although progress has been made in Supervised Learning (SL) and Reinforcement Learning (RL), issues still exist. Firstly, previous methods do not provide explicit and effective guidance for the Questioner to generate visually related and informative questions. Secondly, the effect of RL is hampered by an incompetent component, i.e., the Guesser, who makes image predictions based on the generated dialogs and assigns rewards accordingly. To enhance the VD Questioner: 1) we propose a Related entity enhanced Questioner (ReeQ) that generates questions under the guidance of related entities and learns an entity-based questioning strategy from human dialogs; 2) we propose an Augmented Guesser that is strong and specifically optimized for VD. Experimental results on the VisDial v1.0 dataset show that our approach achieves state-of-the-art performance on both the image-guessing task and question diversity. A human study further verifies that our model generates more visually related, informative and coherent questions.
|
https://aclanthology.org/2021.findings-emnlp.158
|
https://aclanthology.org/2021.findings-emnlp.158.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
A Pretraining Numerical Reasoning Model for Ordinal Constrained Question Answering on Knowledge Base
|
Yu Feng, Jing Zhang, Gaole He, Wayne Xin Zhao, Lemao Liu, Quan Liu, Cuiping Li, Hong Chen
|
Knowledge Base Question Answering (KBQA) is to answer natural language questions posed over knowledge bases (KBs). This paper targets at empowering the IR-based KBQA models with the ability of numerical reasoning for answering ordinal constrained questions. A major challenge is the lack of explicit annotations about numerical properties. To address this challenge, we propose a pretraining numerical reasoning model consisting of NumGNN and NumTransformer, guided by explicit self-supervision signals. The two modules are pretrained to encode the magnitude and ordinal properties of numbers respectively and can serve as model-agnostic plugins for any IR-based KBQA model to enhance its numerical reasoning ability. Extensive experiments on two KBQA benchmarks verify the effectiveness of our method to enhance the numerical reasoning ability for IR-based KBQA models.
|
https://aclanthology.org/2021.findings-emnlp.159
|
https://aclanthology.org/2021.findings-emnlp.159.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
RoR: Read-over-Read for Long Document Machine Reading Comprehension
|
Jing Zhao, Junwei Bao, Yifan Wang, Yongwei Zhou, Youzheng Wu, Xiaodong He, Bowen Zhou
|
Transformer-based pre-trained models, such as BERT, have achieved remarkable results on machine reading comprehension. However, due to the constraint of encoding length (e.g., 512 WordPiece tokens), a long document is usually split into multiple chunks that are independently read. It results in the reading field being limited to individual chunks without information collaboration for long document machine reading comprehension. To address this problem, we propose RoR, a read-over-read method, which expands the reading field from chunk to document. Specifically, RoR includes a chunk reader and a document reader. The former first predicts a set of regional answers for each chunk, which are then compacted into a highly-condensed version of the original document, guaranteeing to be encoded once. The latter further predicts the global answers from this condensed document. Eventually, a voting strategy is utilized to aggregate and rerank the regional and global answers for final prediction. Extensive experiments on two benchmarks QuAC and TriviaQA demonstrate the effectiveness of RoR for long document reading. Notably, RoR ranks 1st place on the QuAC leaderboard (https://quac.ai/) at the time of submission (May 17th, 2021).
|
https://aclanthology.org/2021.findings-emnlp.160
|
https://aclanthology.org/2021.findings-emnlp.160.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic Parsing
|
Akshat Shrivastava, Pierce Chuang, Arun Babu, Shrey Desai, Abhinav Arora, Alexander Zotov, Ahmed Aly
|
An effective recipe for building seq2seq, non-autoregressive, task-oriented parsers to map utterances to semantic frames proceeds in three steps: encoding an utterance x, predicting a frame’s length |y|, and decoding a |y|-sized frame with utterance and ontology tokens. Though empirically strong, these models are typically bottlenecked by length prediction, as even small inaccuracies change the syntactic and semantic characteristics of resulting frames. In our work, we propose span pointer networks, non-autoregressive parsers which shift the decoding task from text generation to span prediction; that is, when imputing utterance spans into frame slots, our model produces endpoints (e.g., [i, j]) as opposed to text (e.g., “6pm”). This natural quantization of the output space reduces the variability of gold frames, therefore improving length prediction and, ultimately, exact match. Furthermore, length prediction is now responsible for frame syntax and the decoder is responsible for frame semantics, resulting in a coarse-to-fine model. We evaluate our approach on several task-oriented semantic parsing datasets. Notably, we bridge the quality gap between non-autogressive and autoregressive parsers, achieving 87 EM on TOPv2 (Chen et al. 2020). Furthermore, due to our more consistent gold frames, we show strong improvements in model generalization in both cross-domain and cross-lingual transfer in low-resource settings. Finally, due to our diminished output vocabulary, we observe 70% reduction in latency and 83% reduction in memory at beam size 5 compared to prior non-autoregressive parsers.
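A simplified sketch of the span-prediction target described above (framing only, not the authors' model or data format): the gold slot value is converted from text to inclusive [start, end] token indices over the utterance, which is what a span-pointer decoder predicts instead of generating the value text.

```python
# Minimal sketch (simplified framing): convert a gold slot value from text to the
# [start, end] token indices of its occurrence in the utterance, the kind of target
# a span-pointer decoder predicts instead of generating the value text.
from typing import List, Optional, Tuple

def find_span(utterance_tokens: List[str], value_tokens: List[str]) -> Optional[Tuple[int, int]]:
    n, m = len(utterance_tokens), len(value_tokens)
    for i in range(n - m + 1):
        if utterance_tokens[i:i + m] == value_tokens:
            return i, i + m - 1            # inclusive endpoints [i, j]
    return None                            # value not literally present in the utterance

if __name__ == "__main__":
    utterance = "set an alarm for 6 pm tomorrow".split()
    print(find_span(utterance, "6 pm".split()))      # (4, 5)
    print(find_span(utterance, "tomorrow".split()))  # (6, 6)
```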
|
https://aclanthology.org/2021.findings-emnlp.161
|
https://aclanthology.org/2021.findings-emnlp.161.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Language Resource Efficient Learning for Captioning
|
Jia Chen, Yike Wu, Shiwan Zhao, Qin Jin
|
Due to the complex cognitive and inferential efforts involved in the manual generation of one caption per image/video input, human annotation resources are very limited for captioning tasks. We define language resource efficiency as reaching the same performance with fewer annotated captions per input. We first study the performance degradation of caption models in different language resource settings. Our analysis of caption models with self-critical (SC) loss shows that the performance degradation is caused by the increasingly noisy estimation of reward and baseline with fewer language resources. To mitigate this issue, we propose to reduce the variance of noise in the baseline by generalizing the single pairwise comparison in SC loss and using multiple generalized pairwise comparisons. The generalized pairwise comparison (GPC) measures the difference between the evaluation scores of two captions with respect to an input. Empirically, we show that the model trained with the proposed GPC loss is language-resource efficient and achieves performance similar to the state-of-the-art models on MSCOCO by using only half of the language resources. Furthermore, our model significantly outperforms the state-of-the-art models on a video caption dataset that has only one labeled caption per input in the training set.
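A hedged sketch of a generalized-pairwise-comparison style loss (not the paper's exact formulation): the single sampled-versus-baseline comparison of self-critical training is replaced by averaging score differences against several comparator captions, which reduces the variance of the baseline estimate.

```python
# Minimal sketch (not the paper's exact formulation): a self-critical-style loss
# where the single sampled-vs-greedy comparison is replaced by several pairwise
# comparisons against multiple comparator captions, averaging the score differences.
import torch

def gpc_style_loss(log_prob_sampled: torch.Tensor,
                   score_sampled: torch.Tensor,
                   comparator_scores: torch.Tensor) -> torch.Tensor:
    """log_prob_sampled: (B,) log-likelihood of each sampled caption;
    score_sampled: (B,) its evaluation score (e.g. CIDEr);
    comparator_scores: (B, K) scores of K comparator captions per input."""
    advantage = (score_sampled.unsqueeze(1) - comparator_scores).mean(dim=1)
    return -(advantage.detach() * log_prob_sampled).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    log_p = torch.randn(4, requires_grad=True)
    scores = torch.tensor([0.8, 0.5, 0.9, 0.3])
    comparators = torch.rand(4, 5)
    loss = gpc_style_loss(log_p, scores, comparators)
    loss.backward()
    print(float(loss))
```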
|
https://aclanthology.org/2021.findings-emnlp.162
|
https://aclanthology.org/2021.findings-emnlp.162.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Translation as Cross-Domain Knowledge: Attention Augmentation for Unsupervised Cross-Domain Segmenting and Labeling Tasks
|
Ruixuan Luo, Yi Zhang, Sishuo Chen, Xu Sun
|
The absence of word delimiters or inflections that could indicate segment boundaries or word semantics increases the difficulty of Chinese text understanding, and also intensifies the demand for word-level semantic knowledge to accomplish the tagging goal in Chinese segmenting and labeling tasks. However, for unsupervised Chinese cross-domain segmenting and labeling tasks, the model trained on the source domain frequently suffers from deficient word-level semantic knowledge of the target domain. To address this issue, we propose a novel paradigm based on attention augmentation to introduce crucial cross-domain knowledge via a translation system. The proposed paradigm enables the model attention to draw cross-domain knowledge indicated by the implicit word-level cross-lingual alignment between the input and its corresponding translation. Aside from the model requiring cross-lingual input, we also establish an off-the-shelf model which avoids the dependency on cross-lingual translations. Experiments demonstrate that our proposal significantly advances the state-of-the-art results of cross-domain Chinese segmenting and labeling tasks.
|
https://aclanthology.org/2021.findings-emnlp.163
|
https://aclanthology.org/2021.findings-emnlp.163.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts
|
Yuta Koreeda, Christopher Manning
|
Reviewing contracts is a time-consuming procedure that incurs large expenses to companies and social inequality to those who cannot afford it. In this work, we propose “document-level natural language inference (NLI) for contracts”, a novel, real-world application of NLI that addresses such problems. In this task, a system is given a set of hypotheses (such as “Some obligations of Agreement may survive termination.”) and a contract, and it is asked to classify whether each hypothesis is “entailed by”, “contradicting to” or “not mentioned by” (neutral to) the contract as well as identifying “evidence” for the decision as spans in the contract. We annotated and release the largest corpus to date consisting of 607 annotated contracts. We then show that existing models fail badly on our task and introduce a strong baseline, which (a) models evidence identification as multi-label classification over spans instead of trying to predict start and end tokens, and (b) employs more sophisticated context segmentation for dealing with long documents. We also show that linguistic characteristics of contracts, such as negations by exceptions, are contributing to the difficulty of this task and that there is much room for improvement.
|
https://aclanthology.org/2021.findings-emnlp.164
|
https://aclanthology.org/2021.findings-emnlp.164.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Japanese Zero Anaphora Resolution Can Benefit from Parallel Texts Through Neural Transfer Learning
|
Masato Umakoshi, Yugo Murawaki, Sadao Kurohashi
|
Parallel texts of Japanese and a non-pro-drop language have the potential of improving the performance of Japanese zero anaphora resolution (ZAR) because pronouns dropped in the former are usually mentioned explicitly in the latter. However, rule-based cross-lingual transfer is hampered by error propagation in an NLP pipeline and the frequent lack of transparency in translation correspondences. In this paper, we propose implicit transfer by injecting machine translation (MT) as an intermediate task between pretraining and ZAR. We employ a pretrained BERT model to initialize the encoder part of the encoder-decoder model for MT, and eject the encoder part for fine-tuning on ZAR. The proposed framework empirically demonstrates that ZAR performance can be improved by transfer learning from MT. In addition, we find that the incorporation of the masked language model training into MT leads to further gains.
|
https://aclanthology.org/2021.findings-emnlp.165
|
https://aclanthology.org/2021.findings-emnlp.165.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Grouped-Attention for Content-Selection and Content-Plan Generation
|
Bayu Distiawan Trisedya, Xiaojie Wang, Jianzhong Qi, Rui Zhang, Qingjun Cui
|
Content-planning is an essential part of data-to-text generation to determine the order of data mentioned in generated texts. Recent neural data-to-text generation models employ Pointer Networks to explicitly learn a content-plan given a set of attributes as input. They use an LSTM to encode the input, which assumes a sequential relationship in the input. This may be sub-optimal for encoding a set of attributes, where the attributes have a composite structure: the attributes are disordered while each attribute value is an ordered list of tokens. We handle this problem by proposing a neural content-planner that can capture both local and global contexts of such a structure. Specifically, we propose a novel attention mechanism called GSC-attention. A key component of the GSC-attention is grouped-attention, which is token-level attention constrained within each input attribute and enables our proposed model to capture both local and global context. Moreover, our content-planner explicitly learns content-selection, which is integrated into the content-planner to select the most important data to be included in the generated text via an attention masking procedure. Experimental results show that our model outperforms the competitors by 4.92%, 4.70%, and 16.56% in terms of Damerau-Levenshtein Distance scores on three real-world datasets.
|
https://aclanthology.org/2021.findings-emnlp.166
|
https://aclanthology.org/2021.findings-emnlp.166.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
An Explicit-Joint and Supervised-Contrastive Learning Framework for Few-Shot Intent Classification and Slot Filling
|
Han Liu, Feng Zhang, Xiaotong Zhang, Siyang Zhao, Xianchao Zhang
|
Intent classification (IC) and slot filling (SF) are critical building blocks in task-oriented dialogue systems. These two tasks are closely related and can benefit each other. Since only a few utterances can be utilized for identifying fast-emerging new intents and slots, the data scarcity issue often occurs when implementing IC and SF. However, few IC/SF models perform well when the number of training samples per class is quite small. In this paper, we propose a novel explicit-joint and supervised-contrastive learning framework for few-shot intent classification and slot filling. Its highlights are as follows. (i) The model extracts intent and slot representations via bidirectional interactions, and extends prototypical networks to achieve explicit-joint learning, which guarantees that the IC and SF tasks can mutually reinforce each other. (ii) The model integrates with supervised contrastive learning, which ensures that samples from the same class are pulled together and samples from different classes are pushed apart. In addition, the model follows an uncommon but practical way to construct episodes, departing from the traditional fixed-way, fixed-shot setting and allowing for unbalanced datasets. Extensive experiments on three public datasets show that our model can achieve promising performance.
|
https://aclanthology.org/2021.findings-emnlp.167
|
https://aclanthology.org/2021.findings-emnlp.167.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Retrieve, Discriminate and Rewrite: A Simple and Effective Framework for Obtaining Affective Response in Retrieval-Based Chatbots
|
Xin Lu, Yijian Tian, Yanyan Zhao, Bing Qin
|
Obtaining affective response is a key step in building empathetic dialogue systems. This task has been studied a lot in generation-based chatbots, but the related research in retrieval-based chatbots is still in the early stage. Existing works in retrieval-based chatbots are based on Retrieve-and-Rerank framework, which have a common problem of satisfying affect label at the expense of response quality. To address this problem, we propose a simple and effective Retrieve-Discriminate-Rewrite framework. The framework replaces the reranking mechanism with a new discriminate-and-rewrite mechanism, which predicts the affect label of the retrieved high-quality response via discrimination module and further rewrites the affect unsatisfied response via rewriting module. This can not only guarantee the quality of the response, but also satisfy the given affect label. In addition, another challenge for this line of research is the lack of an off-the-shelf affective response dataset. To address this problem and test our proposed framework, we annotate a Sentimental Douban Conversation Corpus based on the original Douban Conversation Corpus. Experimental results show that our proposed framework is effective and outperforms competitive baselines.
|
https://aclanthology.org/2021.findings-emnlp.168
|
https://aclanthology.org/2021.findings-emnlp.168.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Span Fine-tuning for Pre-trained Language Models
|
Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
|
Pre-trained language models (PrLM) have to carefully manage input units when training on a very large text with a vocabulary consisting of millions of words. Previous works have shown that incorporating span-level information over consecutive words in pre-training could further improve the performance of PrLMs. However, given that span-level clues are introduced and fixed in pre-training, previous methods are time-consuming and lack flexibility. To alleviate the inconvenience, this paper presents a novel span fine-tuning method for PrLMs, which allows the span setting to be adaptively determined by specific downstream tasks during the fine-tuning phase. In detail, any sentence processed by the PrLM is segmented into multiple spans according to a pre-sampled dictionary. Then the segmentation information is sent through a hierarchical CNN module together with the representation outputs of the PrLM to ultimately generate a span-enhanced representation. Experiments on the GLUE benchmark show that the proposed span fine-tuning method significantly enhances the PrLM and, at the same time, offers more flexibility in an efficient way.
|
https://aclanthology.org/2021.findings-emnlp.169
|
https://aclanthology.org/2021.findings-emnlp.169.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
DIRECT: Direct and Indirect Responses in Conversational Text Corpus
|
Junya Takayama, Tomoyuki Kajiwara, Yuki Arase
|
We create a large-scale dialogue corpus that provides pragmatic paraphrases to advance technology for understanding the underlying intentions of users. While neural conversation models acquire the ability to generate fluent responses through training on a dialogue corpus, previous corpora have mainly focused on the literal meanings of utterances. However, in reality, people do not always present their intentions directly. For example, if a person said to the operator of a reservation service “I don’t have enough budget.”, they, in fact, mean “please find a cheaper option for me.” Our corpus provides a total of 71,498 indirect–direct utterance pairs accompanied by a multi-turn dialogue history extracted from the MultiWoZ dataset. In addition, we propose three tasks to benchmark the ability of models to recognize and generate indirect and direct utterances. We also investigated the performance of state-of-the-art pre-trained models as baselines.
|
https://aclanthology.org/2021.findings-emnlp.170
|
https://aclanthology.org/2021.findings-emnlp.170.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Retrieval, Analogy, and Composition: A framework for Compositional Generalization in Image Captioning
|
Zhan Shi, Hui Liu, Martin Renqiang Min, Christopher Malon, Li Erran Li, Xiaodan Zhu
|
Image captioning systems are expected to have the ability to combine individual concepts when describing scenes with concept combinations that are not observed during training. In spite of significant progress in image captioning with the help of the autoregressive generation framework, current approaches fail to generalize well to novel concept combinations. We propose a new framework that revolves around probing several similar image caption training instances (retrieval), performing analogical reasoning over relevant entities in retrieved prototypes (analogy), and enhancing the generation process with reasoning outcomes (composition). Our method augments the generation model by referring to the neighboring instances in the training set to produce novel concept combinations in generated captions. We perform experiments on the widely used image captioning benchmarks. The proposed models achieve substantial improvement over the compared baselines on both composition-related evaluation metrics and conventional image captioning metrics.
|
https://aclanthology.org/2021.findings-emnlp.171
|
https://aclanthology.org/2021.findings-emnlp.171.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
TURINGBENCH: A Benchmark Environment for Turing Test in the Age of Neural Text Generation
|
Adaku Uchendu, Zeyu Ma, Thai Le, Rui Zhang, Dongwon Lee
|
Recent progress in generative language models has enabled machines to generate astonishingly realistic texts. While there are many legitimate applications of such models, there is also a rising need to distinguish machine-generated texts from human-written ones (e.g., fake news detection). However, to our best knowledge, there is currently no benchmark environment with datasets and tasks to systematically study the so-called "Turing Test" problem for neural text generation methods. In this work, we present the TURINGBENCH benchmark environment, which is comprised of (1) a dataset with 200K human- or machine-generated samples across 20 labels: Human, GPT-1, GPT-2_small, GPT-2_medium, GPT-2_large, GPT-2_xl, GPT-2_PyTorch, GPT-3, GROVER_base, GROVER_large, GROVER_mega, CTRL, XLM, XLNET_base, XLNET_large, FAIR_wmt19, FAIR_wmt20, TRANSFORMER_XL, PPLM_distil, PPLM_gpt2, (2) two benchmark tasks, i.e., Turing Test (TT) and Authorship Attribution (AA), and (3) a website with leaderboards. Our preliminary experimental results using TURINGBENCH show that GPT-3 and FAIR_wmt20 are the current winners, among all language models tested, in generating the most human-like indistinguishable texts with the lowest F1 score by five state-of-the-art TT detection models. The TURINGBENCH is available at: https://turingbench.ist.psu.edu/
|
https://aclanthology.org/2021.findings-emnlp.172
|
https://aclanthology.org/2021.findings-emnlp.172.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Say ‘YES’ to Positivity: Detecting Toxic Language in Workplace Communications
|
Meghana Moorthy Bhat, Saghar Hosseini, Ahmed Hassan Awadallah, Paul Bennett, Weisheng Li
|
Workplace communication (e.g. email, chat, etc.) is a central part of enterprise productivity. Healthy conversations are crucial for creating an inclusive environment and maintaining harmony in an organization. Toxic communications at the workplace can negatively impact overall job satisfaction and are often subtle, hidden, or demonstrate human biases. The linguistic subtlety of mild yet hurtful conversations has made it difficult for researchers to quantify and extract toxic conversations automatically. While offensive language or hate speech has been extensively studied in social communities, there has been little work studying toxic communication in emails. Specifically, the lack of corpus, sparsity of toxicity in enterprise emails, and well-defined criteria for annotating toxic conversations have prevented researchers from addressing the problem at scale. We take the first step towards studying toxicity in workplace emails by providing (1) a general and computationally viable taxonomy to study toxic language at the workplace (2) a dataset to study toxic language at the workplace based on the taxonomy and (3) analysis on why offensive language and hate-speech datasets are not suitable to detect workplace toxicity.
|
https://aclanthology.org/2021.findings-emnlp.173
|
https://aclanthology.org/2021.findings-emnlp.173.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Natural SQL: Making SQL Easier to Infer from Natural Language Specifications
|
Yujian Gan, Xinyun Chen, Jinxia Xie, Matthew Purver, John R. Woodward, John Drake, Qiaofu Zhang
|
Addressing the mismatch between natural language descriptions and the corresponding SQL queries is a key challenge for text-to-SQL translation. To bridge this gap, we propose an SQL intermediate representation (IR) called Natural SQL (NatSQL). Specifically, NatSQL preserves the core functionalities of SQL, while it simplifies the queries as follows: (1) dispensing with operators and keywords such as GROUP BY, HAVING, FROM, and JOIN ON, for which it is usually hard to find counterparts in the text descriptions; (2) removing the need for nested subqueries and set operators; and (3) making schema linking easier by reducing the required number of schema items. On Spider, a challenging text-to-SQL benchmark that contains complex and nested SQL queries, we demonstrate that NatSQL outperforms other IRs, and significantly improves the performance of several previous SOTA models. Furthermore, for existing models that do not support executable SQL generation, NatSQL easily enables them to generate executable SQL queries, and achieves a new state-of-the-art execution accuracy.
|
https://aclanthology.org/2021.findings-emnlp.174
|
https://aclanthology.org/2021.findings-emnlp.174.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Mitigating Data Scarceness through Data Synthesis, Augmentation and Curriculum for Abstractive Summarization
|
Ahmed Magooda, Diane Litman
|
This paper explores three simple data manipulation techniques (synthesis, augmentation, curriculum) for improving abstractive summarization models without the need for any additional data. We introduce a method of data synthesis with paraphrasing, a data augmentation technique with sample mixing, and curriculum learning with two new difficulty metrics based on specificity and abstractiveness. We conduct experiments to show that these three techniques can help improve abstractive summarization across two summarization models and two different small datasets. Furthermore, we show that these techniques can improve performance when applied in isolation and when combined.
|
https://aclanthology.org/2021.findings-emnlp.175
|
https://aclanthology.org/2021.findings-emnlp.175.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension
|
Yiyang Li, Hai Zhao
|
Multi-party dialogue machine reading comprehension (MRC) brings tremendous challenges since it involves multiple speakers in one dialogue, resulting in intricate speaker information flows and noisy dialogue contexts. To alleviate such difficulties, previous models focus on how to incorporate this information using complex graph-based modules and additional manually labeled data, which is usually rare in real scenarios. In this paper, we design two labour-free self- and pseudo-self-supervised prediction tasks on speaker and key-utterance to implicitly model the speaker information flows, and capture salient clues in a long dialogue. Experimental results on two benchmark datasets have justified the effectiveness of our method over competitive baselines and current state-of-the-art models.
|
https://aclanthology.org/2021.findings-emnlp.176
|
https://aclanthology.org/2021.findings-emnlp.176.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Few-Shot Novel Concept Learning for Semantic Parsing
|
Soham Dan, Osbert Bastani, Dan Roth
|
Humans are capable of learning novel concepts from very few examples; in contrast, state-of-the-art machine learning algorithms typically need thousands of examples to do so. In this paper, we propose an algorithm for learning novel concepts by representing them as programs over existing concepts. This way the concept learning problem is naturally a program synthesis problem and our algorithm learns from a few examples to synthesize a program representing the novel concept. In addition, we perform a theoretical analysis of our approach for the case where the program defining the novel concept over existing ones is context-free. We show that given a learned grammar-based parser and a novel production rule, we can augment the parser with the production rule in a way that provably generalizes. We evaluate our approach by learning concepts in the semantic parsing domain extended to the few-shot novel concept learning setting, showing that our approach significantly outperforms end-to-end neural semantic parsers.
|
https://aclanthology.org/2021.findings-emnlp.177
|
https://aclanthology.org/2021.findings-emnlp.177.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Compositional Data and Task Augmentation for Instruction Following
|
Soham Dan, Xinran Han, Dan Roth
|
Executing natural language instructions in a physically grounded domain requires a model that understands both spatial concepts such as “left of” and “above”, and the compositional language used to identify landmarks and articulate instructions relative to them. In this paper, we study instruction understanding in the blocks world domain. Given an initial arrangement of blocks and a natural language instruction, the system executes the instruction by manipulating selected blocks. The highly compositional instructions are composed of atomic components and understanding these components is a necessary step to executing the instruction. We show that while end-to-end training (supervised only by the correct block location) fails to address the challenges of this task and performs poorly on instructions involving a single atomic component, knowledge-free auxiliary signals can be used to significantly improve performance by providing supervision for the instruction’s components. Specifically, we generate signals that aim at helping the model gradually understand components of the compositional instructions, as well as those that help it better understand spatial concepts, and show their benefit to the overall task for two datasets and two state-of-the-art (SOTA) models, especially when the training data is limited—which is usual in such tasks.
|
https://aclanthology.org/2021.findings-emnlp.178
|
https://aclanthology.org/2021.findings-emnlp.178.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization
|
Yiran Chen, Pengfei Liu, Xipeng Qiu
|
With the continuous improvement of summarization systems driven by deep neural networks, researchers have higher requirements on the quality of the generated summaries, which should be not only fluent and informative but also factually correct. As a result, the field of factual evaluation has developed rapidly recently. Despite its initial progress in evaluating generated summaries, the meta-evaluation methodologies of factuality metrics are limited by their opacity, leading to an insufficient understanding of factuality metrics’ relative advantages and their applicability. In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, and (ii) search for directions for further improvement by data augmentation. Our observations from this work motivate us to propose several calls for future research. We make all code, diagnostic test datasets, and trained factuality models available at https://github.com/zide05/AdvFact.
|
https://aclanthology.org/2021.findings-emnlp.179
|
https://aclanthology.org/2021.findings-emnlp.179.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
On the Effects of Transformer Size on In- and Out-of-Domain Calibration
|
Soham Dan, Dan Roth
|
Large, pre-trained transformer language models, which are pervasive in natural language processing tasks, are notoriously expensive to train. To reduce the cost of training such large models, prior work has developed smaller, more compact models which achieve a significant speedup in training time while maintaining accuracy competitive with the original model on downstream tasks. Though these smaller pre-trained models have been widely adopted by the community, it is not known how well they are calibrated compared to their larger counterparts. In this paper, focusing on a wide range of tasks, we thoroughly investigate the calibration properties of pre-trained transformers as a function of their size. We demonstrate that when evaluated in-domain, smaller models are able to achieve competitive, and often better, calibration compared to larger models, while achieving a significant speedup in training time. Post-hoc calibration techniques further reduce calibration error for all models in-domain. However, when evaluated out-of-domain, larger models tend to be better calibrated, and label smoothing instead is an effective strategy to calibrate models in this setting.
|
https://aclanthology.org/2021.findings-emnlp.180
|
https://aclanthology.org/2021.findings-emnlp.180.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
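A note on the calibration record above (Dan and Roth): the abstract reports in- and out-of-domain calibration as a function of model size but does not spell out the metric. Below is a minimal NumPy sketch of expected calibration error (ECE), the standard measure such studies typically report; the equal-width binning and the bin count here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        bin_acc = correct[mask].mean()       # empirical accuracy in this bin
        bin_conf = confidences[mask].mean()  # mean predicted confidence in this bin
        ece += mask.mean() * abs(bin_acc - bin_conf)
    return ece

# Toy usage: a slightly overconfident model on four examples.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 0, 1, 1]))
```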
|||
Detecting Polarized Topics Using Partisanship-aware Contextualized Topic Embeddings
|
Zihao He, Negar Mokhberian, António Câmara, Andres Abeliuk, Kristina Lerman
|
Growing polarization of the news media has been blamed for fanning disagreement, controversy and even violence. Early identification of polarized topics is thus an urgent matter that can help mitigate conflict. However, accurate measurement of topic-wise polarization is still an open research challenge. To address this gap, we propose Partisanship-aware Contextualized Topic Embeddings (PaCTE), a method to automatically detect polarized topics from partisan news sources. Specifically, utilizing a language model that has been finetuned on recognizing partisanship of the news articles, we represent the ideology of a news corpus on a topic by corpus-contextualized topic embedding and measure the polarization using cosine distance. We apply our method to a dataset of news articles about the COVID-19 pandemic. Extensive experiments on different news sources and topics demonstrate the efficacy of our method to capture topical polarization, as indicated by its effectiveness of retrieving the most polarized topics.
|
https://aclanthology.org/2021.findings-emnlp.181
|
https://aclanthology.org/2021.findings-emnlp.181.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
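A note on the PaCTE record above: the abstract measures topical polarization as the cosine distance between corpus-contextualized topic embeddings from two partisan corpora. The sketch below covers only that final scoring step; representing a corpus by the mean of already-encoded sentence vectors is my simplifying assumption, not the paper’s exact construction.

```python
import numpy as np

def corpus_topic_embedding(sentence_vectors):
    """Assumed simplification: represent a corpus's stance on a topic by the
    mean of the (already encoded) vectors of its sentences about that topic."""
    return np.mean(np.asarray(sentence_vectors, dtype=float), axis=0)

def polarization(left_vectors, right_vectors):
    """Cosine distance between the two corpora's topic embeddings:
    0 = identical framing, values near 2 = strongly opposed directions."""
    a = corpus_topic_embedding(left_vectors)
    b = corpus_topic_embedding(right_vectors)
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos_sim

# Toy usage with random 4-dimensional "sentence embeddings".
rng = np.random.default_rng(0)
left = rng.normal(size=(5, 4))
right = rng.normal(size=(5, 4))
print(polarization(left, right))
```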
|||
GenerativeRE: Incorporating a Novel Copy Mechanism and Pretrained Model for Joint Entity and Relation Extraction
|
Jiarun Cao, Sophia Ananiadou
|
Previous neural Seq2Seq models have shown their effectiveness for jointly extracting relation triplets. However, most of these models suffer from incompletion and disorder problems when they extract multi-token entities from input sentences. To tackle these problems, we propose a generative, multi-task learning framework named GenerativeRE. We first propose a special entity labelling method on both input and output sequences. During the training stage, GenerativeRE fine-tunes the pre-trained generative model and learns the special entity labels simultaneously. During the inference stage, we propose a novel copy mechanism, equipped with three mask strategies, to generate the most probable tokens by diminishing the scope of the model decoder. Experimental results show that our model achieves 4.6% and 0.9% F1 score improvements over the current state-of-the-art methods on the NYT24 and NYT29 benchmark datasets respectively.
|
https://aclanthology.org/2021.findings-emnlp.182
|
https://aclanthology.org/2021.findings-emnlp.182.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Re-entry Prediction for Online Conversations via Self-Supervised Learning
|
Lingzhi Wang, Xingshan Zeng, Huang Hu, Kam-Fai Wong, Daxin Jiang
|
In recent years, online discussion and opinion sharing on social media have been booming worldwide. The re-entry prediction task is thus proposed to help people keep track of the discussions they wish to continue. Nevertheless, existing works focus only on exploiting chatting history and context information, and ignore potentially useful learning signals underlying conversation data, such as conversation thread patterns and repeated engagement of target users, which help better understand the behavior of target users in conversations. In this paper, we propose three interesting and well-founded auxiliary tasks, namely Spread Pattern, Repeated Target user, and Turn Authorship, as the self-supervised signals for re-entry prediction. These auxiliary tasks are trained together with the main task in a multi-task manner. Experimental results on two datasets newly collected from Twitter and Reddit show that our method outperforms the previous state of the art with fewer parameters and faster convergence. Extensive experiments and analysis show the effectiveness of our proposed models and also point out some key ideas in designing self-supervised tasks.
|
https://aclanthology.org/2021.findings-emnlp.183
|
https://aclanthology.org/2021.findings-emnlp.183.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
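A note on the re-entry prediction record above: the abstract trains three self-supervised auxiliary tasks jointly with the main task in a multi-task manner. The sketch below shows one plausible form of such a joint objective; the equal loss weights and the specific loss functions are illustrative assumptions rather than the paper’s configuration.

```python
import torch
import torch.nn.functional as F

def multitask_loss(main_logits, main_labels, aux_outputs, aux_labels, aux_weight=1.0):
    """Combine the main re-entry prediction loss with auxiliary self-supervised
    losses (e.g., spread pattern, repeated target user, turn authorship)."""
    loss = F.binary_cross_entropy_with_logits(main_logits, main_labels.float())
    for logits, labels in zip(aux_outputs, aux_labels):
        loss = loss + aux_weight * F.cross_entropy(logits, labels)
    return loss

# Toy usage: one main binary task and two auxiliary 3-way tasks.
main_logits = torch.randn(8)
main_labels = torch.randint(0, 2, (8,))
aux_outputs = [torch.randn(8, 3), torch.randn(8, 3)]
aux_labels = [torch.randint(0, 3, (8,)), torch.randint(0, 3, (8,))]
print(multitask_loss(main_logits, main_labels, aux_outputs, aux_labels).item())
```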
|||
proScript: Partially Ordered Scripts Generation
|
Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, Yejin Choi
|
Scripts – prototypical event sequences describing everyday activities – have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated information. However, to date they have proved hard to author or extract from text. In this work, we demonstrate for the first time that pre-trained neural language models can be finetuned to generate high-quality scripts, at varying levels of granularity, for a wide range of everyday scenarios (e.g., bake a cake). To do this, we collect a large (6.4k) crowdsourced dataset of partially ordered scripts (named proScript) that is substantially larger than prior datasets, and develop models that generate scripts by combining language generation and graph structure prediction. We define two complementary tasks: (i) edge prediction: given a scenario and unordered events, organize the events into a valid (possibly partial-order) script, and (ii) script generation: given only a scenario, generate events and organize them into a (possibly partial-order) script. Our experiments show that our models perform well (e.g., F1=75.7 on task (i)), illustrating a new approach to overcoming previous barriers to script collection. We also show that there is still significant room for improvement toward human-level performance. Together, our tasks, dataset, and models offer a new research direction for learning script knowledge.
|
https://aclanthology.org/2021.findings-emnlp.184
|
https://aclanthology.org/2021.findings-emnlp.184.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Speaker Turn Modeling for Dialogue Act Classification
|
Zihao He, Leili Tavabi, Kristina Lerman, Mohammad Soleymani
|
Dialogue Act (DA) classification is the task of classifying utterances with respect to the function they serve in a dialogue. Existing approaches to DA classification model utterances without incorporating the turn changes among speakers throughout the dialogue, therefore treating it no differently than non-interactive written text. In this paper, we propose to integrate the turn changes in conversations among speakers when modeling DAs. Specifically, we learn conversation-invariant speaker turn embeddings to represent the speaker turns in a conversation; the learned speaker turn embeddings are then merged with the utterance embeddings for the downstream task of DA classification. With this simple yet effective mechanism, our model is able to capture the semantics from the dialogue content while accounting for different speaker turns in a conversation. Validation on three benchmark public datasets demonstrates the superior performance of our model.
|
https://aclanthology.org/2021.findings-emnlp.185
|
https://aclanthology.org/2021.findings-emnlp.185.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
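A note on the dialogue-act record above: the abstract merges learned speaker-turn embeddings with utterance embeddings before classification. Below is a minimal PyTorch sketch of that merge-and-classify step; the dimensions, the concatenation choice, and the two-speaker setting are my assumptions, not the paper’s.

```python
import torch
import torch.nn as nn

class TurnAwareDAClassifier(nn.Module):
    """Concatenate an utterance vector with a learned speaker-turn embedding,
    then classify the dialogue act with a linear head."""
    def __init__(self, utter_dim=768, turn_dim=32, n_speakers=2, n_acts=10):
        super().__init__()
        self.turn_embedding = nn.Embedding(n_speakers, turn_dim)
        self.classifier = nn.Linear(utter_dim + turn_dim, n_acts)

    def forward(self, utterance_vec, speaker_id):
        turn_vec = self.turn_embedding(speaker_id)             # (batch, turn_dim)
        merged = torch.cat([utterance_vec, turn_vec], dim=-1)  # (batch, utter_dim + turn_dim)
        return self.classifier(merged)                         # (batch, n_acts) logits

# Toy usage on random utterance encodings.
model = TurnAwareDAClassifier()
logits = model(torch.randn(4, 768), torch.tensor([0, 1, 0, 1]))
print(logits.shape)  # torch.Size([4, 10])
```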
|||
Unsupervised Domain Adaptation Method with Semantic-Structural Alignment for Dependency Parsing
|
Boda Lin, Mingzheng Li, Si Li, Yong Luo
|
Unsupervised cross-domain dependency parsing aims to accomplish domain adaptation for dependency parsing without using labeled data in the target domain. Existing methods are often of the pseudo-annotation type, which generates data through self-annotation with the base model and iterative training. However, these methods fail to consider the change of model structure for domain adaptation. In addition, the structural information contained in the text cannot be fully exploited. To remedy these drawbacks, we propose a Semantics-Structure Adaptative Dependency Parser (SSADP), which accomplishes unsupervised cross-domain dependency parsing without relying on pseudo-annotation or data selection. In particular, we design two feature extractors to extract semantic and structural features respectively. For each type of feature, a corresponding feature adaptation method is utilized to align the domain distributions, which effectively enhances the unsupervised cross-domain transfer capability of the model. We validate the effectiveness of our model by conducting experiments on CODT1 and CTB9 respectively, and the results demonstrate that our model can achieve consistent performance improvements. Besides, we verify the structure transfer ability of the proposed model by introducing the Weisfeiler-Lehman test.
|
https://aclanthology.org/2021.findings-emnlp.186
|
https://aclanthology.org/2021.findings-emnlp.186.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Devil’s Advocate: Novel Boosting Ensemble Method from Psychological Findings for Text Classification
|
Hwiyeol Jo, Jaeseo Lim, Byoung-Tak Zhang
|
We present a new form of ensemble method, Devil’s Advocate, which uses a deliberately dissenting model to force the other submodels within the ensemble to collaborate better. Our method consists of two different training settings: one follows the conventional training process (Norm), and the other is trained with artificially generated labels (DevAdv). After training the models, the Norm models are fine-tuned through an additional loss function which uses the DevAdv model as a constraint. In making a final decision, the proposed ensemble model sums the scores of the Norm models and then subtracts the score of the DevAdv model. The DevAdv model improves the overall performance of the other models within the ensemble. Besides being grounded in psychological findings, our ensemble framework also shows comparable or improved performance on 5 text classification tasks when compared to conventional ensemble methods.
|
https://aclanthology.org/2021.findings-emnlp.187
|
https://aclanthology.org/2021.findings-emnlp.187.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
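A note on the Devil’s Advocate record above: the abstract states that the final ensemble decision sums the Norm models’ scores and subtracts the DevAdv model’s score. A minimal sketch of that decision rule follows; the number of submodels and the per-class score vectors are illustrative toy values.

```python
import numpy as np

def devils_advocate_decision(norm_scores, devadv_scores):
    """norm_scores: list of per-class score vectors from the Norm submodels.
    devadv_scores: per-class score vector from the dissenting DevAdv model.
    Sum the Norm scores, subtract the DevAdv score, and pick the argmax class."""
    combined = (np.sum(np.asarray(norm_scores, dtype=float), axis=0)
                - np.asarray(devadv_scores, dtype=float))
    return int(np.argmax(combined)), combined

# Toy usage: two Norm models vs. one DevAdv model over three classes.
norm = [[0.2, 0.5, 0.3], [0.1, 0.6, 0.3]]
devadv = [0.4, 0.3, 0.3]
print(devils_advocate_decision(norm, devadv))
```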
|||
SideControl: Controlled Open-domain Dialogue Generation via Additive Side Networks
|
Wanyu Du, Yangfeng Ji
|
Transformer-based pre-trained language models boost the performance of open-domain dialogue systems. Prior works leverage Transformer-based pre-trained language models to generate texts with desired attributes in two general approaches: (1) gradient-based methods, which update all latent representations of pre-trained models with gradients from attribute models; and (2) weighted-decoding methods, which re-rank beam candidates from pre-trained models with attribute functions. However, gradient-based methods incur high computation cost and can easily overfit on small training sets, while weighted-decoding methods are inherently constrained by the low-variance, high-bias pre-trained model. In this work, we propose a novel approach to control the generation of Transformer-based pre-trained language models: the SideControl framework, which leverages a novel control attributes loss to incorporate useful control signals, and is shown to perform well with very limited training samples. We evaluate our proposed method on two benchmark open-domain dialogue datasets, and results show that the SideControl framework has better controllability, higher generation quality and better sample-efficiency than existing gradient-based and weighted-decoding baselines.
|
https://aclanthology.org/2021.findings-emnlp.188
|
https://aclanthology.org/2021.findings-emnlp.188.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
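A note on the SideControl record above: the abstract steers a pre-trained language model with an additive side network rather than by updating the base model. The PyTorch sketch below shows one plausible additive arrangement (side-network output added to the base model’s hidden states); the control-attribute loss itself and the exact attachment point are assumptions, not the paper’s specification.

```python
import torch
import torch.nn as nn

class AdditiveSideNetwork(nn.Module):
    """A small trainable network whose output is added to the (frozen) base
    language model's hidden states, leaving the base parameters untouched."""
    def __init__(self, hidden_dim=768, bottleneck=128):
        super().__init__()
        self.side = nn.Sequential(
            nn.Linear(hidden_dim, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, hidden_dim),
        )

    def forward(self, base_hidden_states):
        # Residual, additive correction supplied by the side network.
        return base_hidden_states + self.side(base_hidden_states)

# Toy usage on random "base model" hidden states.
side = AdditiveSideNetwork()
print(side(torch.randn(2, 10, 768)).shape)  # torch.Size([2, 10, 768])
```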
|||
Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models’ Transferability
|
Wei-Tsung Kao, Hung-yi Lee
|
This paper investigates whether the power of models pre-trained on text data, such as BERT, can be transferred to general token-sequence classification applications. To verify pre-trained models’ transferability, we test the pre-trained models on text classification tasks in which the meanings of tokens are mismatched, and on real-world non-text token-sequence classification data, including amino acid, DNA, and music. We find that even on non-text data, the models pre-trained on text converge faster, perform better than randomly initialized models, and are only slightly worse than models using task-specific knowledge. We also find that the representations of the text and non-text pre-trained models share non-trivial similarities.
|
https://aclanthology.org/2021.findings-emnlp.189
|
https://aclanthology.org/2021.findings-emnlp.189.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Geo-BERT Pre-training Model for Query Rewriting in POI Search
|
Xiao Liu, Juan Hu, Qi Shen, Huan Chen
|
Query Rewriting (QR) is proposed to solve the problem of word mismatch between queries and documents in Web search. Existing approaches usually model QR with an end-to-end sequence-to-sequence (seq2seq) model. State-of-the-art Transformer-based models can effectively learn textual semantics from user session logs, but they often ignore users’ geographic location information, which is crucial for the Point-of-Interest (POI) search of map services. In this paper, we propose a pre-training model, called Geo-BERT, to integrate semantics and geographic information in the pre-trained representations of POIs. First, we model the real-world POI distribution as a graph, in which nodes represent POIs and multiple geographic granularities. Then we use graph representation learning methods to obtain geographic representations. Finally, we train a BERT-like pre-training model with text and POIs’ graph embeddings to get an integrated representation of both geographic and semantic information, and apply it to the QR of POI search. The proposed model achieves excellent accuracy on a wide range of real-world datasets of map services.
|
https://aclanthology.org/2021.findings-emnlp.190
|
https://aclanthology.org/2021.findings-emnlp.190.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Leveraging Bidding Graphs for Advertiser-Aware Relevance Modeling in Sponsored Search
|
Shuxian Bi, Chaozhuo Li, Xiao Han, Zheng Liu, Xing Xie, Haizhen Huang, Zengxuan Wen
|
Recently, sponsored search has become one of the most lucrative channels for marketing. As the fundamental basis of sponsored search, relevance modeling has attracted increasing attention due to its tremendous practical value. Most existing methods solely rely on the query-keyword pairs. However, keywords are usually short texts with scarce semantic information, which may not precisely reflect the underlying advertising intents. In this paper, we investigate the novel problem of advertiser-aware relevance modeling, which leverages the advertisers’ information to bridge the gap between the search intents and advertising purposes. Our motivation lies in incorporating the unsupervised bidding behaviors as complementary graphs to learn desirable advertiser representations. We further propose a Bidding-Graph augmented Triple-based Relevance model (BGTR) with three towers to deeply fuse the bidding graphs and semantic textual data. Empirically, we evaluate the BGTR model over a large industry dataset, and the experimental results consistently demonstrate its superiority.
|
https://aclanthology.org/2021.findings-emnlp.191
|
https://aclanthology.org/2021.findings-emnlp.191.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation
|
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, Woomyoung Park
|
Large-scale language models such as GPT-3 are excellent few-shot learners, allowing them to be controlled via natural text prompts. Recent studies report that prompt-based direct classification eliminates the need for fine-tuning but lacks data and inference scalability. This paper proposes a novel data augmentation technique that leverages large-scale language models to generate realistic text samples from a mixture of real samples. We also propose utilizing soft-labels predicted by the language models, effectively distilling knowledge from the large-scale language models and creating textual perturbations simultaneously. We perform data augmentation experiments on diverse classification tasks and show that our method hugely outperforms existing text augmentation methods. We also conduct experiments on our newly proposed benchmark to show that the augmentation effect is not only attributed to memorization. Further ablation studies and a qualitative analysis provide more insights into our approach.
|
https://aclanthology.org/2021.findings-emnlp.192
|
https://aclanthology.org/2021.findings-emnlp.192.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
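A note on the GPT3Mix record above: the abstract trains a classifier on generated samples using soft labels predicted by the large language model, which amounts to a knowledge-distillation objective. The sketch below shows a standard soft-label cross-entropy of that kind; the temperature and the absence of any hard-label term are assumptions, and the prompt-based generation step itself is omitted.

```python
import torch
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_soft_labels, temperature=1.0):
    """Cross-entropy between the student's (temperature-scaled) predictive
    distribution and the soft labels supplied by the large language model."""
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return -(teacher_soft_labels * log_probs).sum(dim=-1).mean()

# Toy usage: 4 augmented samples, 3 classes; teacher soft-label rows sum to 1.
student_logits = torch.randn(4, 3)
teacher = torch.softmax(torch.randn(4, 3), dim=-1)
print(soft_label_loss(student_logits, teacher).item())
```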
|||
Context-aware Entity Typing in Knowledge Graphs
|
Weiran Pan, Wei Wei, Xian-Ling Mao
|
Knowledge graph entity typing aims to infer entities’ missing types in knowledge graphs, which is an important but under-explored issue. This paper proposes a novel method for this task by utilizing entities’ contextual information. Specifically, we design two inference mechanisms: i) N2T: independently use each neighbor of an entity to infer its type; ii) Agg2T: aggregate the neighbors of an entity to infer its type. These mechanisms produce multiple inference results, and an exponentially weighted pooling method is used to generate the final inference result. Furthermore, we propose a novel loss function to alleviate the false-negative problem during training. Experiments on two real-world KGs demonstrate the effectiveness of our method. The source code and data of this paper can be obtained from https://github.com/CCIIPLab/CET.
|
https://aclanthology.org/2021.findings-emnlp.193
|
https://aclanthology.org/2021.findings-emnlp.193.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
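A note on the entity-typing record above: the abstract combines multiple per-neighbor inference results with an exponentially weighted pooling step. The sketch below shows one common reading of that operation (a softmax-style weighting of per-mechanism type scores); the exact weighting used in the paper may differ, so treat this as an assumption.

```python
import numpy as np

def exponentially_weighted_pool(score_matrix, temperature=1.0):
    """score_matrix: (n_inferences, n_types) scores, one row per inference
    (e.g., one per neighbor via N2T plus the aggregated Agg2T result).
    Pools rows with weights proportional to exp(score / temperature),
    so confident inferences dominate the final type distribution."""
    scores = np.asarray(score_matrix, dtype=float)
    weights = np.exp(scores / temperature)
    weights = weights / weights.sum(axis=0, keepdims=True)  # normalize per type
    return (weights * scores).sum(axis=0)

# Toy usage: three inference results over four candidate types.
print(exponentially_weighted_pool([[0.1, 0.9, 0.2, 0.0],
                                   [0.3, 0.7, 0.1, 0.2],
                                   [0.0, 0.2, 0.8, 0.1]]))
```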
|||
Attribute Alignment: Controlling Text Generation from Pre-trained Language Models
|
Dian Yu, Zhou Yu, Kenji Sagae
|
Large language models benefit from training with a large amount of unlabeled text, which gives them increasingly fluent and diverse generation capabilities. However, using these models for text generation that takes into account target attributes, such as sentiment polarity or specific topics, remains a challenge. We propose a simple and flexible method for controlling text generation by aligning disentangled attribute representations. In contrast to recent efforts on training a discriminator to perturb the token level distribution for an attribute, we use the same data to learn an alignment function to guide the pre-trained, non-controlled language model to generate texts with the target attribute without changing the original language model parameters. We evaluate our method on sentiment- and topic-controlled generation, and show large performance gains over previous methods while retaining fluency and diversity.
|
https://aclanthology.org/2021.findings-emnlp.194
|
https://aclanthology.org/2021.findings-emnlp.194.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Generate & Rank: A Multi-task Framework for Math Word Problems
|
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, Qun Liu
|
Math word problem (MWP) solving is a challenging and critical task in natural language processing. Many recent studies formalize MWP as a generation task and have adopted sequence-to-sequence models to transform problem descriptions to mathematical expressions. However, mathematical expressions are prone to minor mistakes while the generation objective does not explicitly handle such mistakes. To address this limitation, we devise a new ranking task for MWP and propose Generate & Rank, a multi-task framework based on a generative pre-trained language model. By joint training with generation and ranking, the model learns from its own mistakes and is able to distinguish between correct and incorrect expressions. Meanwhile, we perform tree-based disturbance specially designed for MWP and an online update to boost the ranker. We demonstrate the effectiveness of our proposed method on the benchmark and the results show that our method consistently outperforms baselines in all datasets. Particularly, on the classical Math23k, our method is 7% (78.4% to 85.4%) higher than the state-of-the-art. Code can be found at https://github.com/huawei-noah/noah-research.
|
https://aclanthology.org/2021.findings-emnlp.195
|
https://aclanthology.org/2021.findings-emnlp.195.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
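A note on the Generate & Rank record above: the framework pairs a sequence-to-sequence generator with a ranker that scores candidate expressions so near-miss generations can be rejected. The sketch below covers only the final rank-and-select step over already-generated candidates; the scorer and the candidate list here are placeholders, not the paper’s trained ranker.

```python
def rank_and_select(candidates, scorer):
    """candidates: candidate math expressions (strings) from the generator.
    scorer: callable mapping an expression to a correctness score (higher is better).
    Returns the highest-scoring expression, mimicking the ranking stage."""
    return max(candidates, key=scorer)

# Toy usage with a stand-in scorer that simply prefers shorter expressions.
candidates = ["3*(4+5)", "3*4+5", "(3*4)+5"]
print(rank_and_select(candidates, scorer=lambda expr: -len(expr)))
```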
|||
MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering
|
Junjie Wang, Yatai Ji, Jiaqi Sun, Yujiu Yang, Tetsuya Sakai
|
In Visual Question Answering (VQA), existing bilinear methods focus on the interaction between images and questions. As a result, the answers are either spliced into the questions or utilized as labels only for classification. On the other hand, trilinear models such as the CTI model efficiently utilize the inter-modality information between answers, questions, and images, while ignoring intra-modality information. Inspired by this observation, we propose a new trilinear interaction framework called MIRTT (Learning Multimodal Interaction Representations from Trilinear Transformers), incorporating the attention mechanisms for capturing inter-modality and intra-modality relationships. Moreover, we design a two-stage workflow where a bilinear model reduces the free-form, open-ended VQA problem into a multiple-choice VQA problem. Furthermore, to obtain accurate and generic multimodal representations, we pre-train MIRTT with masked language prediction. Our method achieves state-of-the-art performance on the Visual7W Telling task and VQA-1.0 Multiple Choice task and outperforms bilinear baselines on the VQA-2.0, TDIUC and GQA datasets.
|
https://aclanthology.org/2021.findings-emnlp.196
|
https://aclanthology.org/2021.findings-emnlp.196.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
UniteD-SRL: A Unified Dataset for Span- and Dependency-Based Multilingual and Cross-Lingual Semantic Role Labeling
|
Rocco Tripodi, Simone Conia, Roberto Navigli
|
Multilingual and cross-lingual Semantic Role Labeling (SRL) have recently garnered increasing attention as multilingual text representation techniques have become more effective and widely available. While recent work has attained growing success, results on gold multilingual benchmarks are still not easily comparable across languages, making it difficult to grasp where we stand. For example, in CoNLL-2009, the standard benchmark for multilingual SRL, language-to-language comparisons are affected by the fact that each language has its own dataset which differs from the others in size, domains, sets of labels and annotation guidelines. In this paper, we address this issue and propose UniteD-SRL, a new benchmark for multilingual and cross-lingual, span- and dependency-based SRL. UniteD-SRL provides expert-curated parallel annotations using a common predicate-argument structure inventory, allowing direct comparisons across languages and encouraging studies on cross-lingual transfer in SRL. We release UniteD-SRL v1.0 at https://github.com/SapienzaNLP/united-srl.
|
https://aclanthology.org/2021.findings-emnlp.197
|
https://aclanthology.org/2021.findings-emnlp.197.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
Enhancing Dual-Encoders with Question and Answer Cross-Embeddings for Answer Retrieval
|
Yanmeng Wang, Jun Bai, Ye Wang, Jianfei Zhang, Wenge Rong, Zongcheng Ji, Shaojun Wang, Jing Xiao
|
Dual-Encoders is a promising mechanism for answer retrieval in question answering (QA) systems. Currently, most conventional Dual-Encoders learn the semantic representations of questions and answers merely through the matching score. Researchers have proposed introducing QA interaction features into the scoring function, but at the cost of low efficiency in the inference stage. To keep the encoding of questions and answers independent during the inference stage, a variational auto-encoder is further introduced to reconstruct answers (questions) from question (answer) embeddings as an auxiliary task to enhance QA interaction in representation learning during the training stage. However, the needs of text generation and answer retrieval are different, which makes training difficult. In this work, we propose a framework to enhance the Dual-Encoders model with question-answer cross-embeddings and a novel Geometry Alignment Mechanism (GAM) to align the geometry of embeddings from Dual-Encoders with that from Cross-Encoders. Extensive experimental results show that our framework significantly improves the Dual-Encoders model and outperforms the state-of-the-art method on multiple answer retrieval datasets.
|
https://aclanthology.org/2021.findings-emnlp.198
|
https://aclanthology.org/2021.findings-emnlp.198.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
A Neural Graph-based Local Coherence Model
|
Mohsen Mesgar, Leonardo F. R. Ribeiro, Iryna Gurevych
|
Entity grids and entity graphs are two frameworks for modeling local coherence. These frameworks represent entity relations between sentences and then extract features from such representations to encode coherence. The benefits of convolutional neural models for extracting informative features from entity grids have been recently studied. In this work, we study the benefits of Relational Graph Convolutional Networks (RGCN) to encode entity graphs for measuring local coherence. We evaluate our neural graph-based model for two benchmark coherence evaluation tasks: sentence ordering (SO) and summary coherence rating (SCR). The results show that our neural graph-based model consistently outperforms the neural grid-based model for both tasks. Our model performs competitively with a strong baseline coherence model, while our model uses 50% fewer parameters. Our work defines a new, efficient, and effective baseline for local coherence modeling.
|
https://aclanthology.org/2021.findings-emnlp.199
|
https://aclanthology.org/2021.findings-emnlp.199.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
|||
GiBERT: Enhancing BERT with Linguistic Information using a Lightweight Gated Injection Method
|
Nicole Peinelt, Marek Rei, Maria Liakata
|
Large pre-trained language models such as BERT have been the driving force behind recent improvements across many NLP tasks. However, BERT is only trained to predict missing words – either through masking or next sentence prediction – and has no knowledge of lexical, syntactic or semantic information beyond what it picks up through unsupervised pre-training. We propose a novel method to explicitly inject linguistic information in the form of word embeddings into any layer of a pre-trained BERT. When injecting counter-fitted and dependency-based embeddings, the performance improvements on multiple semantic similarity datasets indicate that such information is beneficial and currently missing from the original model. Our qualitative analysis shows that counter-fitted embedding injection is particularly beneficial, with notable improvements on examples that require synonym resolution.
|
https://aclanthology.org/2021.findings-emnlp.200
|
https://aclanthology.org/2021.findings-emnlp.200.pdf
|
EMNLP 2021
|
AIM-Harvard/EMNLP-Accepted-Papers
|
default
|
emnlp_findings_2021
|
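A note on the GiBERT record above: the abstract injects external word embeddings into a BERT layer through a lightweight gate. The PyTorch sketch below shows one plausible form of such a gated, additive injection (project the external embedding, gate it against the hidden state, add the result); the paper’s exact parameterization may differ, so treat this as an assumption.

```python
import torch
import torch.nn as nn

class GatedInjection(nn.Module):
    """Add externally provided word embeddings (e.g., counter-fitted or
    dependency-based vectors) into transformer hidden states via a sigmoid gate."""
    def __init__(self, hidden_dim=768, inject_dim=300):
        super().__init__()
        self.project = nn.Linear(inject_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, hidden_states, injected_embeddings):
        projected = self.project(injected_embeddings)  # (batch, seq, hidden)
        gate = torch.sigmoid(self.gate(torch.cat([hidden_states, projected], dim=-1)))
        return hidden_states + gate * projected        # gated residual update

# Toy usage: a batch of 2 sequences of length 5.
layer = GatedInjection()
out = layer(torch.randn(2, 5, 768), torch.randn(2, 5, 300))
print(out.shape)  # torch.Size([2, 5, 768])
```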