
1 Never-Ending Learning for Open-Domain Question Answering over Knowledge Bases
Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, Gerhard Weikum
WWW '18

2 Existing methods rely on a clear separation between an offline training phase, where a model is learned, and an online phase, where this model is deployed.
Shortcomings:
They require access to a large annotated training set that is not always readily available.
They fail on questions from previously unseen domains.
They are limited to the language learned at training time.

3 Contributions
A KB-QA system that can be seeded with a small number of training examples and supports continuous learning to improve its answering performance over time.
A similarity function-based answering mechanism that enables NEQA to answer questions with previously unseen syntactic structures, thereby extending its coverage.
A user feedback component that judiciously asks non-expert users to select satisfactory answers, thus closing the loop between users and the system and enabling continuous learning.
Extensive experimental results on two benchmarks demonstrating the viability of the continuous learning approach, and the ability to answer questions from previously unseen domains.

4 Abstract
KB-QA: translate natural language questions into a semantic representation (such as SPARQL).
Offline, NEQA automatically learns templates from a small number of training question-answer pairs.
Once deployed, continuous learning is triggered on cases where templates are insufficient.
NEQA periodically re-trains its underlying models.

5 Bank: u1 = “which film awards was bill carraro nominated for?”
unew = “which president was lincoln succeeded by?”
unew = “what are the film award nominations that bill carraro received?”
NEQA maintains two banks: a template bank and a question-query bank.

6 Offline training (template bank)
Example: “which film awards was bill carraro nominated for?” maps to the query graph BillCarraro —nominatedFor→ ?x, ?x —type→ movieAward.
Weakly supervised: a training instance is a pair (u, answer set Au).
For the entities found in the question and an answer a from Au, the query is taken to be the smallest connected subgraph of the KG that contains those entities as well as a.
Given the question and the query, generating a template is an alignment process (see “Automated Template Generation for Question Answering over Knowledge Graphs”). A rough sketch of the subgraph step follows below.
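As a rough illustration of the subgraph step (not the paper's exact algorithm), the sketch below approximates the smallest connected subgraph covering the question entity and one answer entity by taking a union of shortest paths over a toy networkx graph; the graph, node names, and predicate labels are invented for the running example.

```python
# Sketch: derive a query as a small connected KG subgraph that covers the
# question entity and one answer entity. Toy graph and labels are illustrative;
# a union of shortest paths stands in for the minimal connected subgraph.
import itertools
import networkx as nx

kg = nx.Graph()
kg.add_edge("BillCarraro", "SomeFilmAward", predicate="nominatedFor")
kg.add_edge("SomeFilmAward", "movieAward", predicate="type")
kg.add_edge("BillCarraro", "SomeFilm", predicate="produced")  # unrelated fact

def small_connected_subgraph(graph, terminals):
    """Approximate the smallest connected subgraph containing all terminals
    via the union of pairwise shortest paths (a Steiner-tree-style heuristic)."""
    nodes = set(terminals)
    for a, b in itertools.combinations(terminals, 2):
        nodes.update(nx.shortest_path(graph, a, b))
    return graph.subgraph(nodes)

# Question entity: BillCarraro; one answer a from the answer set: SomeFilmAward.
sub = small_connected_subgraph(kg, ["BillCarraro", "SomeFilmAward"])
for u, v, data in sub.edges(data=True):
    print(u, data["predicate"], v)
```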

7 Alignment
Example: “which film awards was bill carraro nominated for?”
Use the Stanford dependency parser to build a dependency parse tree.
Predicate and class lexicons (built from web pages and Freebase): for a fact (e1 p e2) expressed in text as “e1 r e2”, map the relational phrase r to the predicate p; for the pattern “e and other np” with the fact (e type c), map the noun phrase np to the class c. Weights come from corpus frequency, giving a weighted bipartite mapping from text phrases to predicates and classes.
Named entity recognition maps text spans to entities.
Phrases and KG items are not in one-to-one correspondence, so an ILP selects the alignment that maximizes the total weight of the mapped phrases; a toy version of this ILP is sketched below.
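A toy version of the alignment ILP (the pulp package and the candidate phrase-to-KG mappings with their weights are my assumptions, not the paper's): binary variables select at most one KG item per phrase while maximizing the total mapping weight.

```python
# Sketch of the alignment step as a small integer linear program:
# each question phrase maps to at most one KG item, and we maximize the
# total weight of the chosen mappings. Weights here are invented.
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

# (phrase, KG item) candidates with lexicon/NER weights.
candidates = {
    ("nominated for", "nominatedFor"): 0.9,
    ("nominated for", "award.nominee"): 0.4,
    ("film awards", "movieAward"): 0.8,
    ("bill carraro", "BillCarraro"): 1.0,
}

prob = LpProblem("phrase_alignment", LpMaximize)
x = {pair: LpVariable(f"x_{i}", cat=LpBinary) for i, pair in enumerate(candidates)}

# Objective: total weight of selected mappings.
prob += lpSum(candidates[pair] * x[pair] for pair in candidates)

# Each phrase maps to at most one KG item.
for phrase in {p for p, _ in candidates}:
    prob += lpSum(x[pair] for pair in candidates if pair[0] == phrase) <= 1

prob.solve()
alignment = [pair for pair in candidates if x[pair].value() == 1]
print(alignment)
```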

8 Question-query bank
u1 = “which film awards was bill carraro nominated for?”
q1 = “BillCarraro nominatedFor ?x . ?x type movieAward” (query)
Template: u1 = “which film awards was ENTITY nominated for?”
q1 = “ENTITY nominatedFor ?x . ?x type movieAward” (query)
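A minimal sketch of how a concrete (question, query) pair could be turned into such a template by replacing the detected entity mention with a placeholder; the helper name `templatize` and the direct string replacement are simplifications, not the paper's code.

```python
# Sketch: derive a template from a concrete (question, query) pair by
# substituting the recognized entity mention with an ENTITY placeholder.
def templatize(question: str, query: str, mention: str, kb_entity: str):
    u_template = question.replace(mention, "ENTITY")
    q_template = query.replace(kb_entity, "ENTITY")
    return u_template, q_template

u1 = "which film awards was bill carraro nominated for?"
q1 = "BillCarraro nominatedFor ?x . ?x type movieAward"
print(templatize(u1, q1, "bill carraro", "BillCarraro"))
# ('which film awards was ENTITY nominated for?',
#  'ENTITY nominatedFor ?x . ?x type movieAward')
```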

9 Answering with templates
unew = “which president was lincoln succeeded by?”
Match templates.
Generate the top-K queries (learning to rank).
Fetch answer sets.
User feedback: the user chooses an answer from the answer sets; the corresponding query q* and the question are added to the question-query bank.
If no answer is chosen, answer via the similarity function instead; a compact sketch of this loop follows below.
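A compact sketch of this answering loop, assuming hypothetical stand-in callables for template matching, ranking, answer fetching, and user feedback (none of these names come from the paper):

```python
# Sketch of the online answering loop: try templates first, fall back to the
# similarity function, and let user feedback close the loop.
def answer_question(u_new, match_templates, rank_queries, fetch_answers,
                    user_picks_answer, question_query_bank, answer_by_similarity):
    queries = rank_queries(match_templates(u_new))  # top-K candidate queries
    for q in queries:
        answers = fetch_answers(q)
        if answers and user_picks_answer(u_new, answers):
            question_query_bank.append((u_new, q))  # q* joins the bank
            return answers
    # No template produced a confirmed answer: use semantic similarity.
    return answer_by_similarity(u_new)

# Toy usage with trivial stand-ins.
bank = []
ans = answer_question(
    "which president was lincoln succeeded by?",
    match_templates=lambda u: ["Lincoln succeededBy ?x"],
    rank_queries=lambda qs: qs,
    fetch_answers=lambda q: ["Andrew Johnson"],
    user_picks_answer=lambda u, a: True,
    question_query_bank=bank,
    answer_by_similarity=lambda u: [],
)
print(ans, bank)
```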

10 Answering via the similarity function
Bank: u1 = “which film awards was ENTITY nominated for?”
unew = “what are the film award nominations that bill carraro received?”
Template-based answering failed, so NEQA uses a semantic similarity function to retrieve the k questions most semantically similar to unew from its question-query bank.
Their queries are instantiated with the entities detected in unew, and the answer sets are fetched.
User feedback: the user chooses an answer from the answer sets; q* is added to the question-query bank, and a new template (ut, qt) is derived and added to the template bank. A small retrieval sketch follows below.
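A minimal retrieval sketch, assuming some similarity function `sim(u_new, u_bank)` like the one described on the next slide, and a bank stored as a plain list of (question, query) pairs; these are my simplifications, not the paper's implementation.

```python
# Sketch: retrieve the k bank questions most similar to the new utterance.
def top_k_similar(u_new, question_query_bank, sim, k=5):
    """Return the k (question, query) pairs whose question is most similar to u_new."""
    ranked = sorted(question_query_bank, key=lambda uq: sim(u_new, uq[0]), reverse=True)
    return ranked[:k]
```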

11 Similarity function
Two signals are combined: question likelihood based on a language model, and word embedding-based similarity obtained through word2vec.
Language model: w ranges over unigrams, bigrams, and trigrams; for each w in the new question, take the maximum-likelihood probability of w under the bank question ui, smoothed with the maximum-likelihood probability of w in the corpus.
Word embeddings: the sum of pairwise cosine similarities between the word vectors of the two questions. A simplified implementation is sketched below.
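A simplified implementation of the two signals, assuming unigrams only, Jelinek-Mercer-style smoothing with a weight `beta`, and a plain dict of word vectors standing in for word2vec embeddings; all of these simplifications are mine, not the paper's exact formulation.

```python
# Sketch of the two similarity signals: smoothed language-model likelihood
# and summed pairwise cosine similarity of word embeddings.
import numpy as np

def lm_likelihood(u_new, u_i, corpus_counts, beta=0.5):
    """Smoothed probability of generating u_new's words from bank question u_i."""
    bank_words = u_i.split()
    corpus_total = sum(corpus_counts.values())
    prob = 1.0
    for w in u_new.split():
        p_bank = bank_words.count(w) / len(bank_words)      # ML estimate from u_i
        p_corpus = corpus_counts.get(w, 0) / corpus_total    # smoothing term
        prob *= (1 - beta) * p_bank + beta * p_corpus
    return prob

def embedding_similarity(u_new, u_i, vectors):
    """Sum of pairwise cosine similarities between the two questions' word vectors."""
    total = 0.0
    for w1 in u_new.split():
        for w2 in u_i.split():
            v1, v2 = vectors.get(w1), vectors.get(w2)
            if v1 is not None and v2 is not None:
                total += float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return total
```

Presumably the two signals are then interpolated, with the combination weights tuned on the development set (the α and γ mentioned on slide 12).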

12 Experimental setup
1. Training set: used to build the banks, the learning-to-rank model, and the language model.
2. Development set: used to tune the hyperparameters γ and α.

13 System performance in two modes
No-user-feedback mode: 1. take the top-ranking answer; 2. use the similarity function only if the list obtained using templates is empty.
Results compare performance with and without user feedback on the two benchmarks.
Error analysis: with the similarity function, the top-1 answer can be wrong; alignment problems can prevent a template from being generated; and for complex questions, users often judge that none of the returned answers is correct.

14 Comparison with the state of the art
(a) online learning of templates plus the similarity function; (b) the similarity function only.

15 Static learning: continuous learning disabled

16 Open-domain question answering
Training data for three domains was removed from the training set.
Method | F1
NEQA | 50.3
NEQA without user feedback | 41.5
AQQU | 20.3

17 Analysis: impact of templates and the similarity function
The two branches cannot be completely decoupled.
On WQ, with user feedback, 1184 questions were answered with templates, while 848 were answered via the similarity function.
In the no-feedback configuration, 1788 out of 2032 questions were handled by the learned templates, and the similarity function answered 244.
Similarity function ablation study.
