-
CoLV: A Collaborative Latent Variable Model for Knowledge-Grounded Dialogue Generation
Haolan Zhan, Lei Shen, Hongshen Chen and Hainan Zhang
EMNLP 2021
-
Adaptive Bridge between Training and Inference for Dialogue Generation
Haoran Xu, Hainan Zhang, Yanyan Zou, Hongshen Chen, Zhuoye Ding and Yanyan Lan
EMNLP 2021
-
Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization
Junpeng Liu, Yanyan Zou, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Caixia Yuan and Xiaojie Wang
Findings of EMNLP 2021
-
FCM: A Fine-grained Comparison Model for Multi-turn Dialogue Reasoning
Xu Wang, Hainan Zhang, Shuai Zhao, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Bo Cheng and Yanyan Lan
Findings of EMNLP 2021
-
Identifying Untrustworthy Samples: Data Filtering for Open-domain Dialogues with Bayesian Optimization
Lei Shen, Haolan Zhan, Xin Shen, Hongshen Chen, Xiaofang Zhao and Xiaodan Zhu
CIKM 2021
-
Improving Sequential Recommendation Consistency with Self-Supervised Imitation.
Xu Yuan, Hongshen Chen, Yonghao Song, Xiaofang Zhao, Zhuoye Ding, Zhen He and Bo Long
IJCAI 2021
-
Augmenting Knowledge-grounded Dialog Generation with Sequential Knowledge Transition.
Haolan Zhan, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Yongjun Bao and Yanyan Lan
NAACL 2021
-
Collaborative Group Learning.
Shaoxiong Feng, Hongshen Chen, Xuancheng Ren, Zhuoye Ding, Kan Li and Xu Sun
AAAI 2021
pdf
motivation
Abstract: Collaborative learning has successfully applied knowledge transfer to guide a pool of small student networks towards robust local minima. However, previous approaches typically struggle with drastically aggravated student homogenization when the number of students rises. In this paper, we propose Collaborative Group Learning, an efficient framework that aims to diversify the feature representation and conduct effective regularization. Intuitively, similar to the human group-study mechanism, we induce students to learn and exchange different parts of course knowledge as collaborative groups. First, each student is established by randomly routing on a modular neural network, which facilitates flexible knowledge communication between students due to random levels of representation sharing and branching. Second, to resist student homogenization, students first compose diverse feature sets by exploiting the inductive bias from sub-sets of training data, and then aggregate and distill complementary knowledge by imitating a random sub-group of students at each time step. Moreover, these mechanisms make it possible to maximize the student population and further improve model generalization without sacrificing computational efficiency. Empirical evaluations on both image and text tasks indicate that our method significantly outperforms various state-of-the-art collaborative approaches whilst enhancing computational efficiency.
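The sub-group imitation step above lends itself to a compact illustration: each student distills from the averaged soft predictions of a randomly sampled sub-group of peers. A minimal PyTorch sketch under assumed shapes and names, not the paper's implementation:

import random
import torch
import torch.nn.functional as F

# Illustrative sketch only; shapes and names are assumptions, not the paper's code.
def subgroup_distillation_loss(student_logits, k=2, tau=2.0):
    # Each student imitates the averaged soft predictions of a random
    # sub-group of k peers (temperature-scaled KL distillation).
    n = len(student_logits)
    total = 0.0
    for i, logits in enumerate(student_logits):
        peers = random.sample([j for j in range(n) if j != i], k)
        with torch.no_grad():
            target = torch.stack(
                [F.softmax(student_logits[j] / tau, dim=-1) for j in peers]
            ).mean(dim=0)
        total = total + F.kl_div(
            F.log_softmax(logits / tau, dim=-1), target, reduction="batchmean"
        ) * (tau * tau)
    return total / n

# toy usage: 4 students, batch of 8 examples, 10 classes
students = [torch.randn(8, 10, requires_grad=True) for _ in range(4)]
subgroup_distillation_loss(students).backward()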
-
Probing Product Description Generation via Posterior Distillation.
Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Zhuoye Ding, Yongjun Bao, Weipeng Yan and Yanyan Lan
AAAI 2021
pdf
motivation
Abstract: In product description generation (PDG), the user-cared aspects are critical for the recommendation system, as they can not only improve the user's experience but also obtain more clicks. High-quality customer reviews can be considered an ideal source for mining user-cared aspects. However, in reality, a large number of new products (known as long-tailed commodities) cannot gather a sufficient amount of customer reviews, which poses a big challenge for the product description generation task. Existing works tend to generate the product description solely based on item information, i.e., product attributes or title words, which leads to tedious content that cannot attract customers effectively. To tackle this problem, we propose an adaptive posterior network based on the Transformer architecture that can utilize user-cared information from customer reviews. Specifically, we first extend the self-attentive Transformer encoder to encode product titles and attributes. Then, we apply an adaptive posterior distillation module to utilize useful review information, which integrates user-cared aspects into the generation process. Finally, we apply a Transformer-based decoder with a copy mechanism to automatically generate the product description. Besides, we also collect a large-scale Chinese product description dataset to support our work and further research in this field. Experimental results show that our model is superior to traditional generative models in both automatic metrics and human evaluation.
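The adaptive posterior distillation module can be pictured as two attention distributions over candidate aspects, where the posterior one additionally conditions on the review encoding and the prior one is trained to imitate it, so reviews are not needed at inference time. A minimal sketch; all module names and shapes are my own assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only; not the authors' architecture.
class PosteriorDistillation(nn.Module):
    # Posterior attention sees the review encoding; prior attention does not.
    # Training minimizes KL(posterior || prior) so the prior alone can pick
    # user-cared aspects when reviews are unavailable.
    def __init__(self, d):
        super().__init__()
        self.prior_q = nn.Linear(d, d)
        self.post_q = nn.Linear(2 * d, d)

    def forward(self, product, review, memory):
        # product, review: [B, d]; memory: [B, T, d] candidate aspect vectors
        prior = torch.softmax(
            self.prior_q(product).unsqueeze(1) @ memory.transpose(1, 2), dim=-1)
        post = torch.softmax(
            self.post_q(torch.cat([product, review], dim=-1)).unsqueeze(1)
            @ memory.transpose(1, 2), dim=-1)
        kl = F.kl_div(prior.clamp_min(1e-9).log(), post.detach(),
                      reduction="batchmean")
        context = (post @ memory).squeeze(1)  # posterior read at training time
        return context, kl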
-
Multi-resolution Interactive Empathetic Dialogue Generation.
Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu and Zhumin Chen
COLING 2020
pdf
-
Group-wise Contrastive Learning for Neural Dialogue Generation.
Hengyi Cai, Hongshen Chen, Yonghao Song, Zhuoye Ding, Yongjun Bao, Weipeng Yan and Xiaofang Zhao
EMNLP 2020
pdf
code
-
Regularizing Dialogue Generation by Imitating Implicit Scenarios.
Shaoxiong Feng, Xuancheng Ren, Hongshen Chen, Bin Sun, Kan Li and Xu Sun
EMNLP 2020
pdf
-
User-Inspired Posterior Network for Recommendation Reason Generation.
Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Yanyan Lan, Zhuoye Ding and Dawei Yin.
In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020)
motivation
pdf
Abstract: Recommendation reason generation, which aims to show the selling points of products to customers, plays a vital role in attracting customers' attention as well as improving the user experience. A simple and effective way is to extract keywords directly from the knowledge base of products, i.e., attributes or title, as the recommendation reason. However, generating recommendation reasons from product knowledge does not naturally respond to users' interests. Fortunately, on some E-commerce websites, there exists more and more user-generated content (user-content for short), i.e., product question-answering (QA) discussions, which reflect user-cared aspects. Therefore, in this paper, we consider generating the recommendation reason by taking into account not only the product attributes but also the customer-generated product QA discussions. In reality, adequate user-content is only available for the most popular commodities, whereas large numbers of long-tail or new products cannot gather a sufficient amount of user-content. To tackle this problem, we propose a user-inspired multi-source posterior transformer (MSPT), which induces the model to reflect users' interests with a posterior multiple-QA-discussions module and to generate recommendation reasons containing both the product attributes and the user-cared aspects. Experimental results show that our model is superior to traditional generative models. Additionally, analysis shows that our model focuses on the user-cared aspects more than the baselines.
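For intuition, fusing several QA-discussion encodings with the product-attribute encoding can be sketched as a small gated pooling module. This is an illustrative sketch of multi-source fusion in general, with assumed names and shapes, not the MSPT architecture itself:

import torch
import torch.nn as nn

# Illustrative sketch only; not the paper's model.
class MultiSourceFusion(nn.Module):
    # Attentively pools N QA-discussion encodings against the product
    # attribute encoding, then adds the summary back as user-cared evidence.
    def __init__(self, d):
        super().__init__()
        self.gate = nn.Linear(2 * d, 1)

    def forward(self, attr, qa):
        # attr: [B, d]; qa: [B, N, d]
        paired = torch.cat([attr.unsqueeze(1).expand_as(qa), qa], dim=-1)
        weights = torch.softmax(self.gate(paired), dim=1)  # [B, N, 1]
        return attr + (weights * qa).sum(dim=1)

# toy usage: batch of 2 products, 5 QA discussions each, 8-dim encodings
fuse = MultiSourceFusion(8)
out = fuse(torch.randn(2, 8), torch.randn(2, 5, 8))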
-
Modeling Topical Relevance for Multi-Turn Dialogue Generation
Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin.
In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)
pdf
-
Exemplar Guided Neural Dialogue Generation.
Hengyi Cai, Hongshen Chen, Yonghao Song, Xiaofang Zhao, Dawei Yin.
In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)
motivation
pdf
Abstract: Humans benefit from previous experiences when taking actions. Similarly, related examples from the training data also provide exemplary information for neural dialogue models when responding to a given input message. However, effectively fusing such exemplary information into dialogue generation is non-trivial: useful exemplars are required to be not only literally similar but also topic-related to the given context. Noisy exemplars impair the neural dialogue model's understanding of the conversation topics and can even corrupt the response generation. To address these issues, we propose an exemplar-guided neural dialogue generation model where exemplar responses are retrieved in terms of both text similarity and topic proximity through a two-stage exemplar retrieval model. In the first stage, a small subset of conversations is retrieved from the training set given a dialogue context. These candidate exemplars are then finely ranked by topical proximity to choose the best-matched exemplar response. To further induce the neural dialogue generation model to consult the exemplar response and the conversation topics more faithfully, we introduce a multi-source sampling mechanism that provides the dialogue model with both local exemplary semantics and global topical guidance during decoding. Empirical evaluations on a large-scale conversation dataset show that the proposed approach significantly outperforms the state-of-the-art in terms of both quantitative metrics and human evaluations.
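The two-stage retrieval is straightforward to sketch: a coarse lexical recall pass followed by topical re-ranking. A minimal NumPy sketch, assuming precomputed topic vectors (e.g., from a topic model); all names are illustrative:

import numpy as np

# Illustrative sketch only; similarity measures are my own stand-ins.
def retrieve_exemplar(context, context_topic, corpus, k=20):
    # corpus: list of dicts with "context" (str), "response" (str) and
    # "topic" (np.ndarray topic vector of the conversation).
    ctx_tokens = set(context.split())

    def jaccard(record):  # stage 1: coarse lexical recall
        other = set(record["context"].split())
        return len(ctx_tokens & other) / max(len(ctx_tokens | other), 1)

    candidates = sorted(corpus, key=jaccard, reverse=True)[:k]

    def topic_sim(record):  # stage 2: fine topical re-ranking
        t = record["topic"]
        return float(np.dot(context_topic, t) /
                     (np.linalg.norm(context_topic) * np.linalg.norm(t) + 1e-9))

    return max(candidates, key=topic_sim)["response"]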
-
Data Manipulation: Towards Effective Instance Learning for Neural Dialogue Generation via Learning to Augment and Reweight.
Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, Dawei Yin
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), Seattle, Washington.
motivation
pdf
Abstract: Current state-of-the-art neural dialogue models learn from human conversations following the data-driven paradigm. As such, a reliable training corpus is the crux of building a robust and well-behaved dialogue model. However, due to the open-ended nature of human conversations, the quality of user-generated training data varies greatly, and effective training samples are typically insufficient while noisy samples frequently appear. This impedes the learning of those data-driven neural dialogue models. Therefore, effective dialogue learning requires not only more reliable learning samples, but also fewer noisy samples. In this paper, we propose a data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the effect of inefficient samples simultaneously. In particular, the data manipulation model selectively augments the training samples and assigns an importance weight to each instance to reform the training data. Note that the proposed data manipulation framework is fully data-driven and learnable. It not only manipulates training samples to optimize the dialogue generation model, but also learns to increase its manipulation skills through gradient descent with validation samples. Extensive experiments show that our framework can improve the dialogue generation performance with respect to 13 automatic evaluation metrics and human judgments.
Motivation:
- Training data for neural dialogue models is quite noisy.
- Enable the model to learn to choose and modify the training data by itself.
- Choose better learning instances, and infer other instances from them.
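The reweighting half of the framework can be sketched as a learned scorer that turns per-instance features into importance weights on the loss. In the paper the manipulation model is additionally trained through gradients from validation samples, which this minimal, assumption-laden sketch omits:

import torch
import torch.nn as nn

# Illustrative sketch only; feature design and scorer are assumptions.
class InstanceReweighter(nn.Module):
    # Scores each training instance from its features and produces a
    # normalized importance weight for the per-example loss.
    def __init__(self, d):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, 1))

    def forward(self, instance_feats, per_example_loss):
        # instance_feats: [B, d]; per_example_loss: [B]
        w = torch.sigmoid(self.scorer(instance_feats)).squeeze(-1)
        return (w * per_example_loss).sum() / w.sum().clamp_min(1e-9)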
-
Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation.
Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Yangxi Li, Dongsheng Duan, Dawei Yin.
In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), New York, USA.
motivation
pdf
Abstract: Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effectiveness of the neural dialogue generation models. What is more, there is so far no unified dialogue complexity measurement, and dialogue complexity embodies multiple attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity from multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula over the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments.
Motivation:
- Training data for neural dialogue models is quite noisy.
- Learn from clean and easy samples first, and then gradually increase the data complexity (the spirit of curriculum learning).
- Organize the curriculum in terms of multiple empirical attributes---specificity, repetitiveness, relevance, etc.
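The scheduler can be illustrated as a softmax bandit over the attribute-specific curricula, rewarded by validation improvement. A minimal sketch with hypothetical names, not the paper's exact RL formulation:

import math
import random

# Illustrative sketch only; reward design is an assumption.
class CurriculumBandit:
    # Softmax policy over attribute-specific curricula; the reward is the
    # change in a validation score after training on the sampled batch.
    def __init__(self, n_curricula, lr=0.1):
        self.pref = [0.0] * n_curricula
        self.lr = lr

    def sample(self):
        z = [math.exp(p) for p in self.pref]
        s = sum(z)
        probs = [x / s for x in z]
        arm = random.choices(range(len(probs)), weights=probs)[0]
        return arm, probs

    def update(self, arm, probs, reward):
        # REINFORCE-style update: raise the chosen arm's preference in
        # proportion to the observed reward.
        for j in range(len(self.pref)):
            grad = (1.0 - probs[j]) if j == arm else -probs[j]
            self.pref[j] += self.lr * reward * grad

# toy usage over 5 curricula (specificity, repetitiveness, relevance, ...)
bandit = CurriculumBandit(5)
arm, probs = bandit.sample()
bandit.update(arm, probs, reward=0.01)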
-
Posterior-GAN: Towards Informative and Coherent Response Generation with Posterior Generative Adversarial Network.
Shaoxiong Feng, Hongshen Chen, Kan Li, Dawei Yin.
In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), New York, USA.
motivation
pdf
Abstract: Neural conversational models learn to generate responses by taking into account the dialog history. These models are typically optimized over query-response pairs with a maximum likelihood estimation objective. However, query-response tuples are naturally loosely coupled, and there exist multiple responses that can respond to a given query, which makes learning burdensome for the conversational model. Besides, the general dull-response problem is worsened further when the model is confronted with meaningless response training instances. Intuitively, a high-quality response not only responds to the given query but also links up to future conversations. In this paper, we therefore leverage query-response-future turn triples to induce generated responses that consider both the given context and the future conversations. To facilitate the modeling of these triples, we further propose a novel encoder-decoder based generative adversarial learning framework, Posterior Generative Adversarial Network (Posterior-GAN), which consists of a forward and a backward generative discriminator that cooperatively encourage the generated response to be informative and coherent from two complementary assessment perspectives. Experimental results demonstrate that our method effectively boosts the informativeness and coherence of the generated responses on both automatic and human evaluation, which verifies the advantages of considering two assessment perspectives.
Motivation:
- A high-quality response not only responds to the given query but also links up to future conversations.
- Leverage query-response-future turn triples for training instead of query-response pairs.
- Posterior-GAN enables triples training and improves the informativeness and coherence.
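The two complementary assessment perspectives can be sketched as a pair of discriminators over the query-response-future triple. A minimal PyTorch sketch under assumed utterance encodings, not the authors' implementation:

import torch
import torch.nn as nn

# Illustrative sketch only; encoders and reward weighting are assumptions.
class PairDiscriminator(nn.Module):
    # Bilinear scorer over a pair of utterance encodings.
    def __init__(self, d):
        super().__init__()
        self.bilinear = nn.Bilinear(d, d, 1)

    def forward(self, left, right):  # left, right: [B, d]
        return torch.sigmoid(self.bilinear(left, right)).squeeze(-1)

def generator_reward(d_fwd, d_bwd, query, response, future):
    # forward view: does the response answer the query?
    # backward view: does the response lead naturally to the future turn?
    return 0.5 * (d_fwd(query, response) + d_bwd(response, future))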
-
Adaptive Parameterization for Neural Dialogue Generation.
Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao and Dawei Yin.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019), Hong Kong, China, Nov. 2019.
motivation
pdf
bibtex
code
Abstract: Neural conversation systems generate responses based on the sequence-to-sequence (SEQ2SEQ) paradigm. Typically, the model is equipped with a single set of learned parameters to generate responses for given input contexts. When confronted with diverse conversations, its adaptability is rather limited and the model is hence prone to generate generic responses. In this work, we propose an Adaptive Neural Dialogue generation model, AdaND, which manages various conversations with conversation-specific parameterization. For each conversation, the model generates parameters of the encoder-decoder by referring to the input context. In particular, we propose two adaptive parameterization mechanisms: a context-aware and a topic-aware parameterization mechanism. The context-aware parameterization directly generates the parameters by capturing the local semantics of the given context. The topic-aware parameterization enables parameter sharing among conversations with similar topics by first inferring the latent topics of the given context and then generating the parameters with respect to the distributional topics. Extensive experiments conducted on a large-scale real-world conversational dataset show that our model achieves superior performance in terms of both quantitative metrics and human evaluations.
Motivation:
- Neural dialogue generation models are prone to generating generic responses when conversations are extremely diverse.
- A single model with adaptive, conversation-specific parameters manages diverse conversations.
- A context-sensitive local parameterization mechanism and a topic-aware global parameterization mechanism are introduced.
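The context-aware parameterization is in essence a hypernetwork: a small generator produces a layer's parameters from the context encoding. A minimal single-layer sketch with assumed names and shapes, not the AdaND code:

import torch
import torch.nn as nn

# Illustrative sketch only; AdaND parameterizes a full encoder-decoder.
class ContextAdaptiveLinear(nn.Module):
    # A linear layer whose weight and bias are generated on the fly from the
    # conversation-context encoding (a small hypernetwork).
    def __init__(self, d_ctx, d_in, d_out):
        super().__init__()
        self.d_in, self.d_out = d_in, d_out
        self.gen = nn.Linear(d_ctx, d_in * d_out + d_out)

    def forward(self, x, ctx):
        # x: [B, d_in]; ctx: [B, d_ctx]
        params = self.gen(ctx)
        W = params[:, : self.d_in * self.d_out].view(-1, self.d_out, self.d_in)
        b = params[:, self.d_in * self.d_out:]
        return torch.bmm(W, x.unsqueeze(-1)).squeeze(-1) + b

# toy usage
layer = ContextAdaptiveLinear(d_ctx=16, d_in=8, d_out=8)
y = layer(torch.randn(4, 8), torch.randn(4, 16))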
@inproceedings{cai-etal-2019-adaptive,
title = "Adaptive Parameterization for Neural Dialogue Generation",
author = "Cai, Hengyi and Chen, Hongshen and Zhang, Cheng and Song, Yonghao and Zhao, Xiaofang and Yin, Dawei",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1188",
doi = "10.18653/v1/D19-1188",
pages = "1793--1802"
}
-
A Dynamic Product-aware Learning Model for E-commerce Query Intent Understanding.
Jiashu Zhao, Hongshen Chen and Dawei Yin.
In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM 2019), Beijing, China, Oct. 2019.
motivation
pdf
bibtex
Abstract: Query intent understanding is a fundamental and essential task in search, which promotes personalized retrieval results and user satisfaction. In E-commerce, query understanding particularly refers to bridging the gap between query representations and product representations. In this paper, we aim to map queries into tens of thousands of predefined fine-grained categories extracted from product descriptions. The problem is challenging in several aspects. First, a query may be related to multiple categories, and identifying all the best-matching categories could eventually drive the search engine towards high recall and diversity. Second, the same query may have dynamic intents under various scenarios, and these differences need to be distinguished to promote accurate product categories. Third, tail queries are particularly difficult to understand due to noise and the lack of customer feedback. To better understand the queries, we first conduct an analysis of search queries and behaviors in the E-commerce domain and identify the uniqueness of our problem (e.g., longer sessions). We then propose a Dynamic Product-aware Hierarchical Attention (DPHA) framework to capture the explicit and implied meanings of a query given its context information in the session. Specifically, DPHA automatically learns bidirectional query-level and self-attentional session-level representations, which capture both complex long-range dependencies and structural information. Extensive experimental results on a real E-commerce query dataset demonstrate the effectiveness of DPHA compared to state-of-the-art baselines.
Motivation:
- Understand query intent through session-level representations with a self-attention mechanism.
- Illustrate query-intent distributions.
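The bidirectional query-level and self-attentional session-level encoding can be sketched in a few lines. This is an illustrative reconstruction from the abstract, with assumed shapes, not the DPHA code:

import torch
import torch.nn as nn

# Illustrative sketch only; layer sizes are assumptions.
class SessionEncoder(nn.Module):
    # Bidirectional GRU over each query's tokens, then multi-head
    # self-attention across the query vectors of the whole session.
    def __init__(self, d_emb, d_hid, n_heads=2):
        super().__init__()
        self.query_rnn = nn.GRU(d_emb, d_hid, batch_first=True,
                                bidirectional=True)
        self.session_attn = nn.MultiheadAttention(2 * d_hid, n_heads,
                                                  batch_first=True)

    def forward(self, session):
        # session: [B, Q, T, d_emb] -- Q queries of T tokens each
        B, Q, T, E = session.shape
        _, h = self.query_rnn(session.reshape(B * Q, T, E))
        q_vecs = h.transpose(0, 1).reshape(B, Q, -1)  # [B, Q, 2*d_hid]
        out, _ = self.session_attn(q_vecs, q_vecs, q_vecs)
        return out

# toy usage: 2 sessions, 5 queries, 7 tokens, 16-dim embeddings
enc = SessionEncoder(d_emb=16, d_hid=32)
out = enc(torch.randn(2, 5, 7, 16))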
@inproceedings{zhao2019dynamic,
title={A Dynamic Product-aware Learning Model for E-commerce Query Intent Understanding},
author={Zhao, Jiashu and Chen, Hongshen and Yin, Dawei},
booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
pages={1843--1852},
year={2019},
organization={ACM}
}
-
Fine-Grained Product Categorization in E-commerce.
Hongshen Chen, Jiashu Zhao and Dawei Yin.
In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM 2019), Beijing, China, Oct. 2019.
motivation
pdf
bibtex
Abstract: E-commerce sites usually leverage taxonomies for better organizing products. The fine-grained categories, i.e., the leaf categories in taxonomies, are defined by the most descriptive and specific words of products. Fine-grained product categorization remains challenging, due to the blurred concepts of fine-grained categories (i.e., multiple equivalent or synonymous categories), the unstable category vocabulary (i.e., emerging new products and evolving language habits), and the lack of labelled data. To address these issues, we propose a novel Neural Product Categorization model, NPC, to identify fine-grained categories from the product content. NPC is equipped with a character-level convolutional embedding layer that learns compositional word representations, and a spiral residual layer that extracts word context annotations capturing complex long-range dependencies and structural information. To perform categorization beyond predefined categories, NPC categorizes a product by jointly recognizing categories from the product content and predicting categories from predefined category vocabularies. Furthermore, to avoid extensive human labor, NPC can adapt to weak labels generated by mining search logs, where customers' behaviors naturally connect products with categories. Extensive experiments performed on a real e-commerce platform dataset illustrate the effectiveness of the proposed model.
Motivation:
- Product categories can be recognized from the product content and predicted from the product category vocabulary.
- Instead of manually labelling a corpus, a large-scale corpus with weak labels can be mined from search logs.
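The character-level convolutional embedding layer can be sketched as character convolutions with max-over-time pooling. An illustrative sketch with assumed dimensions, not the NPC implementation:

import torch
import torch.nn as nn

# Illustrative sketch only; dimensions and filter width are assumptions.
class CharConvEmbedding(nn.Module):
    # Compositional word representations from character convolutions with
    # max-over-time pooling, robust to new products and spelling variants.
    def __init__(self, n_chars, d_char=16, d_word=64, width=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)
        self.conv = nn.Conv1d(d_char, d_word, width, padding=width // 2)

    def forward(self, char_ids):
        # char_ids: [B, W, C] -- W words of C characters each
        B, W, C = char_ids.shape
        x = self.char_emb(char_ids).view(B * W, C, -1).transpose(1, 2)
        return torch.relu(self.conv(x)).max(dim=-1).values.view(B, W, -1)

# toy usage: 2 titles, 6 words, 10 chars per word, 50-char vocabulary
emb = CharConvEmbedding(n_chars=50)
vecs = emb(torch.randint(0, 50, (2, 6, 10)))  # -> [2, 6, 64]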
@inproceedings{chen2019fine,
title={Fine-Grained Product Categorization in E-commerce},
author={Chen, Hongshen and Zhao, Jiashu and Yin, Dawei},
booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
pages={2349--2352},
year={2019},
organization={ACM}
}
-
Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation.
Xisen Jin, Wenqiang Lei, Zhaochun Ren, Hongshen Chen, Shangsong Liang, Yihong Eric Zhao, Dawei Yin.
In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM 2018), Turin, Italy, Oct. 2018.
pdf
bibtex
@inproceedings{jin2018explicit,
title={Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation},
author={Jin, Xisen and Lei, Wenqiang and Ren, Zhaochun and Chen, Hongshen and Liang, Shangsong and Zhao, Yihong and Yin, Dawei},
booktitle={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
pages={1403--1412},
year={2018},
organization={ACM}
}
-
Knowledge Diffusion for Neural Dialogue Generation.
Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu and Dawei Yin.
ACL 2018, Melbourne, Australia.
pdf
bibtex
corpus
@inproceedings{liu-etal-2018-knowledge,
title = "Knowledge Diffusion for Neural Dialogue Generation",
author = "Liu, Shuman and Chen, Hongshen and Ren, Zhaochun and Feng, Yang and Liu, Qun and Yin, Dawei",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P18-1138",
doi = "10.18653/v1/P18-1138",
pages = "1489--1498",
abstract = "End-to-end neural dialogue generation has shown promising results recently, but it does not employ knowledge to guide the generation and hence tends to generate short, general, and meaningless responses. In this paper, we propose a neural knowledge diffusion (NKD) model to introduce knowledge into dialogue generation. This method can not only match the relevant facts for the input utterance but diffuse them to similar entities. With the help of facts matching and entity diffusion, the neural dialogue generation is augmented with the ability of convergent and divergent thinking over the knowledge base. Our empirical study on a real-world dataset prove that our model is capable of generating meaningful, diverse and natural responses for both factoid-questions and knowledge grounded chi-chats. The experiment results also show that our model outperforms competitive baseline models significantly."
}
-
Learning Tag Dependencies for Sequence Tagging.
Yuan Zhang, Hongshen Chen, Yihong Eric Zhao, Qun Liu, Dawei Yin.
IJCAI, 2018.
pdf
bibtex
@inproceedings{ijcai2018-0637,
title = {Learning Tag Dependencies for Sequence Tagging},
author = {Yuan Zhang and Hongshen Chen and Yihong Zhao and Qun Liu and Dawei Yin},
booktitle = {Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, {IJCAI-18}},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
pages = {4581--4587},
year = {2018},
month = {7},
doi = {10.24963/ijcai.2018/637},
url = {https://doi.org/10.24963/ijcai.2018/637}
}
-
Hierarchical Variational Memory Network for Dialogue Generation.
Hongshen Chen, Zhaochun Ren, Jiliang Tang, Yihong Eric Zhao and Dawei Yin.
WWW, 2018.
pdf
bibtex
code&corpus
@inproceedings{chen2018hierarchical,
title={Hierarchical variational memory network for dialogue generation},
author={Chen, Hongshen and Ren, Zhaochun and Tang, Jiliang and Zhao, Yihong Eric and Yin, Dawei},
booktitle={Proceedings of the 2018 World Wide Web Conference},
pages={1653--1662},
year={2018},
organization={International World Wide Web Conferences Steering Committee}
}
-
A Survey on Dialogue Systems: Recent Advances and New Frontiers.
Hongshen Chen, Xiaorui Liu, Dawei Yin and Jiliang Tang.
SIGKDD Explorations, 2017.
pdf
bibtex
@article{chen2017survey,
title={A survey on dialogue systems: Recent advances and new frontiers},
author={Chen, Hongshen and Liu, Xiaorui and Yin, Dawei and Tang, Jiliang},
journal={Acm Sigkdd Explorations Newsletter},
volume={19},
number={2},
pages={25--35},
year={2017},
publisher={ACM}
}
-
Learning dependency edge transfer rule representation using encoder-decoder (in Chinese).
Hongshen Chen, Qun Liu.
Scientia Sinica Informationis, 2017.
-
Neural Network for Heterogeneous Annotations.
Hongshen Chen, Yue Zhang, Qun Liu.
EMNLP, 2016.
pdf
bibtex
code&corpus
@inproceedings{chen-etal-2016-neural,
title = "Neural Network for Heterogeneous Annotations",
author = "Chen, Hongshen and
Zhang, Yue and
Liu, Qun",
booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2016",
address = "Austin, Texas",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D16-1070",
doi = "10.18653/v1/D16-1070",
pages = "731--741",
}
-
A Dependency Edge Transfer Translation Rule Generator.
Hongshen Chen, Qun Liu.
CCL, 2016.
-
A Dependency Edge-based Transfer Model for Statistical Machine Translation.
Hongshen Chen, Jun Xie, Fandong Meng, Wenbin Jiang, Qun Liu.
COLING, 2014.
pdf
bibtex
@inproceedings{chen-etal-2014-dependency,
title = "A Dependency Edge-based Transfer Model for Statistical Machine Translation",
author = "Chen, Hongshen and
Xie, Jun and
Meng, Fandong and
Jiang, Wenbin and
Liu, Qun",
booktitle = "Proceedings of {COLING} 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
month = aug,
year = "2014",
address = "Dublin, Ireland",
publisher = "Dublin City University and Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/C14-1104",
pages = "1103--1113",
}