Lecture 5 : Sequence Tagging and Language Processing


1 Lecture 5 : Sequence Tagging and Language Processing
Prof. 楊立偉, Department of Business Administration (工管系), National Taiwan University, 2017

2 So Far What We Have
L1: Document Similarity
    "a Bag of Words" Model, Term Weighting (TF-IDF, χ2, MI), Vector Space Model (VSM)
L2: Co-occurrence
    Association, Link analysis: Co-citation & Coupling
L3: Classification
    Naïve Bayes, k Nearest Neighbors, Support Vector Machine, Decision Tree, Bagging and Boosting, Random Forest, Neural Networks

3 L4: Clustering
    k Means, DBSCAN, Hierarchical Agglomerative Clustering
    Topic Modeling

4 Agenda
    Sequence tagging
    Chinese processing
    Other language issues

5 Sequence Tagging
Sequence tagging assigns a label to each member of a sequence of observed values. Examples: part-of-speech tagging and speech recognition in language processing; sequence analysis or prediction in finance and bioinformatics.
This can be done:
    as a set of independent classification tasks, stepping forward one member of the sequence at a time; or
    by making the optimal label for a given element depend on the labels chosen for nearby elements.

6 Example (1) "美國是個自由的國家"
Moving a cursor from left to right, use the last n words (characters) to tag the current one, treating each position as a classification task (e.g., with a Decision Tree or SVM).
Window: Wi-4 Wi-3 Wi-2 Wi-1 Wi, i.e., use the last 4 words to tag the current word Wi. A sketch follows below.
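A minimal sketch of this sliding-window idea (not from the slides): each character is classified from the current character and the preceding ones, here with scikit-learn's DictVectorizer and LinearSVC. The toy sentence, its B/E/S labels, and the feature names are illustrative assumptions.

```python
# Sketch: per-position classification with a window of preceding characters.
# Toy data and labels are illustrative only; a real tagger needs a large corpus.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def window_features(sentence, i, n=4):
    """Features for position i: the current character plus the n preceding ones."""
    feats = {"w0": sentence[i]}
    for k in range(1, n + 1):
        feats[f"w-{k}"] = sentence[i - k] if i - k >= 0 else "<PAD>"
    return feats

# One toy training sentence with hypothetical B/E/S segmentation labels.
train = [("美國是個自由的國家", list("BESSBESBE"))]
X = [window_features(s, i) for s, tags in train for i in range(len(s))]
y = [t for _, tags in train for t in tags]

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X), y)

test = "自由的國家"
print(list(zip(test, clf.predict(vec.transform(
    [window_features(test, i) for i in range(len(test))])))))
```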

7 Example (1) "美國是個自由的國家"
Estimate the conditional probability P(Wi | Wi-1, Wi-2, …, Wi-n).
For the whole sequence, with n = 1 (a bigram model), the probability is
P("美國是個自由的國家") = P(國|美) × P(是|國) × P(個|是) × P(自|個) × P(由|自) × P(的|由) × P(國|的) × P(家|國)
* In programming, the sum of log probabilities is usually used instead of the product of the probabilities, to avoid numerical underflow; see the sketch below.
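A small sketch of this computation; the bigram probabilities below are made-up values, only the mechanics (summing logs instead of multiplying probabilities) follow the slide.

```python
import math

# Hypothetical bigram probabilities P(current | previous); the values are invented.
bigram_p = {
    ("美", "國"): 0.20, ("國", "是"): 0.05, ("是", "個"): 0.10, ("個", "自"): 0.02,
    ("自", "由"): 0.30, ("由", "的"): 0.08, ("的", "國"): 0.04, ("國", "家"): 0.25,
}

def sequence_log_prob(seq, smooth=1e-8):
    """Sum of log P(w_i | w_{i-1}); summing logs avoids numerical underflow."""
    return sum(math.log(bigram_p.get((prev, cur), smooth))
               for prev, cur in zip(seq, seq[1:]))

print(sequence_log_prob("美國是個自由的國家"))
```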

8 Example (1) "美國是個自由的國家"
In speech recognition, the probabilities of the candidate transcriptions are evaluated and the highest one is chosen:
    P("美國是個自由的國家") = 0.08 ← chosen
    P("美國似個自由的國家") = 0.05
    P("美國是個製油的國家") = 0.07

9 Example (2)
To relate a stock's price and trading volume today to tomorrow's price movement, encode the data as a sequence as below; the same classification or conditional-probability approaches can then be applied.
(Table in the slide: price 價格 and trading volume 成交量 for Day1 through Day8.)

10 Hidden Markov Model Ref. Lei Zhang and Bing Liu, "Aspect and Entity Extraction for Opinion Mining", 2014

11 (figure)

12 Conditional Random Fields
Ref. Lei Zhang and Bing Liu, "Aspect and Entity Extraction for Opinion Mining", 2014

13 CRF implementation
    CRFsuite: http://www.chokkan.org/software/crfsuite/
    MALLET
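A minimal training/tagging sketch, assuming the python-crfsuite binding for CRFsuite (pip install python-crfsuite); the character features, toy sentence, labels, and hyperparameters are illustrative and not part of the lecture.

```python
import pycrfsuite

def char_features(sent, i):
    """Very small feature set: current, previous and next character."""
    feats = {"c0": sent[i]}
    if i > 0:
        feats["c-1"] = sent[i - 1]
    if i < len(sent) - 1:
        feats["c+1"] = sent[i + 1]
    return feats

train = [("美國是個自由的國家", list("BESSBESBE"))]   # toy B/E/S labels

trainer = pycrfsuite.Trainer(verbose=False)
for sent, tags in train:
    trainer.append([char_features(sent, i) for i in range(len(sent))], tags)
trainer.set_params({"c1": 0.1, "c2": 0.01, "max_iterations": 50})
trainer.train("toy_seg.crfsuite")          # writes the model file to disk

tagger = pycrfsuite.Tagger()
tagger.open("toy_seg.crfsuite")
print(tagger.tag([char_features("自由的國家", i) for i in range(5)]))
```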

14 Chinese Processing

15 Problems in Chinese Processing
"小明日記:今日王叔叔來我家玩媽媽,說我做完 作業後,可以吃點心。然後,王叔叔誇我作業做 的好,於是抱起了我媽,媽叫叔叔小心一點,之 後叔叔又親了我媽媽,也親了我。" "老師批復:拿回家讓你爸看看,是標點符號有問 題,還是你王叔叔和你媽媽有問題。"

16 Problems in Chinese Processing
新詞 New words (out-of-vocabulary, OOV): 九把刀拍了部新電影叫等一個人咖啡
斷詞 Term segmentation and 消除歧義 disambiguation:
    我國代表現在正面臨很大的壓力
    不可以營利為目的
    在書中,蔡英文筆下的政治理念清晰可見

17 Problems in Chinese Processing
Segmentation is hard: sometimes wider context is needed (斷詞的困難:有時需依照更多的上下文意)
    全台大停電: power failure at NTU vs. power failure across Taiwan
Same characters or words with multiple senses (同字(詞)多意)
    統一(公司)大陸廠: here 統一 is a company name, not the verb
    喜歡上一個人: like someone / like to be alone / like the last one

18 What's the Difference in Chinese ?
Algorithms for English are based on "terms" (前述主要演算法均基於詞做運算)
    Document → Paragraph → Sentence → Term
    Some expand to phrases (片語); some switch to n-grams
The major differences in Chinese
    The character space is much larger (中文字元個數遠多過於其它語言)
    There is no obvious boundary between characters/terms (中文字或詞之間無明顯分隔符號)

19 What's the Difference in Chinese ?
Unit hierarchy
    中文 Chinese: 字(元) Character → 詞 Term → 片語 Phrase → 句子 Sentence → 段落 Paragraph → 文件 Document
    英文 English: 字母 Letter → 字 Word → 片語 Phrase → 句子 Sentence → 段落 Paragraph → 文件 Document
統計資料 Statistics
    BIG5: about 5,000 common characters plus about 8,000 less common ones
    Unicode: about 40,000 Han characters
    注音 (Bopomofo): 376 syllables (not counting tones)
    中研院 CKIP lexicon: about 130,000 words of two or more characters
    Webster Dictionary: about 470,000 entries

20 Problem in Chinese Processing (1)
Term Segmentation 斷詞 (i.e., the competing-word problem 搶詞問題)
Example
    我國 代表 現在 正 面臨 很 大 的 壓力
    我 到 達文西 博物館
Solution
    1. Dictionary-based: e.g., longest-match-first (長詞優先法); see the sketch below
    2. Rule-based: e.g., grammar rules, morphological rules, ambiguity-resolution rules
    3. Statistical (trained): e.g., word-frequency methods (maximum-frequency combination)
    4. Classification-based: recast segmentation as a classification problem
Result
    Methods 3 and 4 dominate today; accuracy can exceed 90%
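A sketch of method 1, dictionary-based longest-match-first segmentation; the tiny lexicon is illustrative only.

```python
def forward_max_match(text, lexicon, max_len=4):
    """Greedy left-to-right segmentation: always take the longest dictionary word."""
    words, i = [], 0
    while i < len(text):
        for L in range(min(max_len, len(text) - i), 0, -1):
            cand = text[i:i + L]
            if L == 1 or cand in lexicon:   # fall back to a single character
                words.append(cand)
                i += L
                break
    return words

lexicon = {"我國", "代表", "現在", "面臨", "很大", "壓力"}
print(forward_max_match("我國代表現在正面臨很大的壓力", lexicon))
# -> ['我國', '代表', '現在', '正', '面臨', '很大', '的', '壓力']
```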

21 Problem in Chinese Processing (2)
Part-of-Speech Tagging 詞性標定
Example
    我國 代表 現在 正 面臨 很 大 的 壓力
    (The slide's table lists the candidate POS tags for each word, e.g., Nc, Na, Nd, Neqb, Nv, Dfa, VC, VK, VA, VH, VJ, Di, De, T, A, Da, ...)
Tag key: N.. noun (名詞), V.. verb (動詞), D.. adverb (副詞), A adjective (形容詞), T particle (語助詞)

22 Problem in Chinese Processing (2)
Part-of-Speech Tagging 詞性標定
Solution
    1. Statistical (trained): e.g., Markov probability models
    2. Classification-based: recast POS tagging as a classification problem
Result
    Accuracy can exceed 90%, and many applications can be built on top of it
(Table in the slide: POS tag set of the Academia Sinica Balanced Corpus 中研院平衡語料庫詞類標記)

23 Problem in Chinese Processing (3)
Unknown Terms 未知詞 (also called out-of-vocabulary)
Example
    新鮮人倪安東見面簽唱會 歌迷熱情喊凍蒜
    國際運動仲裁庭祕書長瑞伯表示世跆盟可拒仲裁
Solution
    1. Segment first, then handle the unknown parts with morphological rules or n-gram statistical learning
    2. Skip segmentation and process directly with a trained statistical model
Result
    Accuracy of about 70-80% (including POS tagging)

24 Tool for Chinese Processing (1)
Jieba 結巴中文分詞
    Open source, with bindings for many programming languages
    Known words: dictionary word-frequency scores, searching for the maximum-score path
    Unknown words: tagged with an HMM
    Accuracy depends heavily on the dictionary, e.g., 「蔡英文」 may not be segmented as one word because 「英文」 has the higher dictionary score (see the usage sketch below)
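A usage sketch for Jieba (pip install jieba); the add_word call illustrates one way to work around the dictionary bias mentioned above, and the resulting segmentation may differ between Jieba versions and dictionaries.

```python
import jieba

print("/".join(jieba.cut("我國代表現在正面臨很大的壓力")))

# Add an out-of-vocabulary name so it is not split because 英文 scores higher.
jieba.add_word("蔡英文")
print("/".join(jieba.cut("在書中,蔡英文筆下的政治理念清晰可見")))
```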

25 Tool for Chinese Processing (2)
eLand ETool
    Provides a full open API
    Main functions: automatic keyword extraction (自動關鍵詞), automatic summarization (自動摘要), segmentation and POS tagging (斷詞與詞性標定), sentiment classification (情緒判定)
    Trial demo (試用展示)

26 Automatic Keyword Extraction (自動關鍵字): Principle
Collect statistics over character bigrams (雙連文), then merge all potentially meaningful substrings.
Example: 今日馬英九總統…有馬英九粉絲忽然…馬英九立刻回應…
    Under a random-character model, the chance that 馬英 appears k times is roughly (1/n)^k, so:
    p(馬英 | freq=1) ≈ 1/n … > t (threshold 門檻值)
    p(馬英 | freq=2) ≈ (1/n)^2 … > t
    p(馬英 | freq=3) ≈ (1/n)^3 … < t, found
    p(馬英 | freq=4) ≈ (1/n)^4 …
    Once the chance probability falls below the threshold t, 馬英 is judged to be a potentially meaningful substring.
Merging: since p(馬英 | freq=3) and p(英九 | freq=3) are both < t, merge(馬英, 英九) = 馬英九.

27 Automatic Keyword Extraction (自動關鍵字): Algorithm
Using n-grams, find the longest character strings that most often appear together; a frequency threshold for "most often" must be specified.
Example: for the string BACDBCDABACD with threshold T=1, the FinalList contains CD:3 and BACD:2, i.e., two keywords are extracted (see the sketch below).
Note: this is similar to the LCS (Longest Common Subsequence) problem, but restricted to contiguous characters (longest repeated substrings).
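A sketch of the repeated-substring idea on this example: count all n-grams, keep those above the threshold, and drop substrings subsumed by a longer string with the same frequency. The exact algorithm in the lecture may differ; this reproduces the FinalList for BACDBCDABACD.

```python
from collections import Counter

def repeated_ngrams(text, threshold=1):
    """Longest, frequently co-occurring character strings (contiguous n-grams)."""
    counts = Counter(text[i:i + n]
                     for n in range(2, len(text) + 1)
                     for i in range(len(text) - n + 1))
    kept = {s: c for s, c in counts.items() if c > threshold}
    # Drop a string if a longer kept string contains it with the same frequency.
    return {s: c for s, c in kept.items()
            if not any(s != t and s in t and c == kept[t] for t in kept)}

print(repeated_ngrams("BACDBCDABACD", threshold=1))   # {'CD': 3, 'BACD': 2}
```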

28 Automatic Summarization (自動摘要)
Recombine the important sentences, using the sentence as the unit: score each sentence by the keywords it contains, then select a fixed proportion of the highest-scoring sentences as the summary (a sketch follows below).
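A sketch of this keyword-based extractive summarization; the keyword weights, scoring, and selection ratio are illustrative choices.

```python
def summarize(sentences, keyword_weights, ratio=0.3):
    """Score sentences by the keywords they contain, keep the top fraction in original order."""
    scored = [(sum(w for kw, w in keyword_weights.items() if kw in s), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:max(1, int(len(sentences) * ratio))]
    return [s for _, i, s in sorted(top, key=lambda t: t[1])]

doc = ["馬英九總統今日出席活動。", "現場湧入許多民眾。", "馬英九立刻回應媒體提問。"]
print(summarize(doc, {"馬英九": 2.0, "總統": 1.0}))
```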

29 HMM Segmentation/Tagging (HMM 斷詞)
Hidden Markov Model: a statistical, probabilistic model for describing sequence data.
(Figure in the slide: hidden states S0, S1, S2 with transition probabilities Pij between them; observations Ot-1, Ot, Ot+1, Ot+2 emitted from the states with observation probabilities, e.g., a Gaussian distribution.)

30 HMM Segmentation/Tagging (HMM 斷詞)
Application to Chinese: the hidden states are POS tags (詞性) and the observation values are the words (詞).
Example: 缺乏(V) 耐性(N) 是(SHI) 一(N) 項(N) 莫大(A) 的(D) 致命傷(N)
(Figure in the slide: the tag sequence V N SHI N … as hidden states emitting the word sequence 缺乏, 耐性, 是, 一, …)

31 HMM Segmentation/Tagging (HMM 斷詞)
Application to Chinese: decode by taking the path (tag sequence, e.g., V N A …) with the highest probability, typically with the Viterbi algorithm (sketched below).
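A Viterbi sketch for "take the most probable path"; the states, transition, and emission probabilities below are invented toy values, and only the decoding procedure is the point.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, path) of the most probable hidden-state sequence."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-8), [s]) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((p * trans_p[prev].get(s, 1e-8) * emit_p[s].get(o, 1e-8), path + [s])
                          for prev, (p, path) in V[-1].items()),
                         key=lambda x: x[0])
                  for s in states})
    return max(V[-1].values(), key=lambda x: x[0])

states = ["V", "N", "SHI"]
start_p = {"V": 0.5, "N": 0.4, "SHI": 0.1}
trans_p = {"V": {"N": 0.7, "V": 0.2, "SHI": 0.1},
           "N": {"SHI": 0.4, "N": 0.3, "V": 0.3},
           "SHI": {"N": 0.8, "V": 0.1, "SHI": 0.1}}
emit_p = {"V": {"缺乏": 0.6}, "N": {"耐性": 0.5, "一": 0.3}, "SHI": {"是": 0.9}}

prob, path = viterbi(["缺乏", "耐性", "是", "一"], states, start_p, trans_p, emit_p)
print(path, prob)   # expected best path: ['V', 'N', 'SHI', 'N']
```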

32 Sentiment Classification (情緒判別)
A bag-of-words approach with Okapi BM25 term weighting: the document's term set (e.g., 引人入勝, 上好, 方便, 主流, 太過, 白目) is matched against sentiment lexicons.
    Positive terms: 一流, 公道, 引人入勝, 方便, 主流, 叫好, 卓越
    Negative terms: 引誘, 太過, 出錯, 失常, 白目, 丟臉, 劣質
The BM25 weight of each matched term depends on its frequency, the document length, and the average document length (see the sketch below).
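A sketch of lexicon matching with Okapi BM25-style weighting; k1, b, the toy IDF values, and the example document are assumptions, while the lexicons reuse the slide's example terms.

```python
import math

def bm25_score(doc_terms, lexicon, idf, avg_len, k1=1.5, b=0.75):
    """Sum the BM25 weights of lexicon terms that occur in the document."""
    dl, score = len(doc_terms), 0.0
    for t in lexicon:
        f = doc_terms.count(t)
        if f:
            score += idf.get(t, 0.0) * f * (k1 + 1) / (f + k1 * (1 - b + b * dl / avg_len))
    return score

positive = {"一流", "公道", "引人入勝", "方便", "主流", "叫好", "卓越"}
negative = {"引誘", "太過", "出錯", "失常", "白目", "丟臉", "劣質"}
idf = {"方便": 1.2, "主流": 0.9, "太過": 1.1, "白目": 2.0}       # toy IDF values
doc = ["這", "產品", "方便", "又", "主流", "但", "太過", "貴"]     # already segmented

pos = bm25_score(doc, positive, idf, avg_len=8.0)
neg = bm25_score(doc, negative, idf, avg_len=8.0)
print("positive" if pos >= neg else "negative", round(pos, 3), round(neg, 3))
```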

33 Sentiment Classification (情緒判別)
Associated attitude: build a lexicon that associates aspect terms with attitude words and their polarities, e.g., for 服務 (service): 親切 (1.0), 周全 (1.0), 敷衍 (-1.0), 傲慢 (-1.0).
Example: in "…這家代理商的服務一點也不周全…", the attitude word 周全 (1.0) is matched for the aspect 服務, but the negation 一點也不 reverses the polarity (態度反轉), so the expressed attitude is negative.
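A sketch of the association-lexicon-plus-polarity-reversal idea on this slide; the lexicon entries, negation cues, and window size are illustrative assumptions.

```python
# Aspect -> attitude word -> polarity; entries here are only examples.
attitude = {"服務": {"親切": 1.0, "周全": 1.0, "敷衍": -1.0, "傲慢": -1.0}}
negations = ["一點也不", "不", "沒有"]

def aspect_polarity(text, aspect):
    """Find an attitude word for the aspect and flip its score if a negation precedes it."""
    for word, score in attitude.get(aspect, {}).items():
        pos = text.find(word)
        if pos >= 0:
            window = text[max(0, pos - 6):pos]          # look a few characters back
            if any(neg in window for neg in negations):
                score = -score                           # 態度反轉 (polarity flip)
            return word, score
    return None, 0.0

print(aspect_polarity("這家代理商的服務一點也不周全", "服務"))   # ('周全', -1.0)
```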

34 Appendix

35 Chinese Text Retrieval without Using a Dictionary (Chen et al, SIGIR97)
Segmentation: break a string of characters into words.
Chinese characters and words: most Chinese words consist of two characters.
    (趙元任) 26.7% unigrams, 69.8% bigrams, 2.7% trigrams
    (北京, 現代漢語頻率辭典) 5% unigrams, 75% bigrams, 14% trigrams, 6% others (Liu)
Word segmentation approaches: statistical methods (e.g., mutual information statistics), rule-based methods (e.g., morphological rules, longest-match rules, ...), hybrid methods.
Hsin-Hsi Chen

36 Indexing Techniques
Unigram Indexing: break a sequence of Chinese characters into individual ones and regard each individual character as an indexing unit (GB: 6763 characters).
Bigram Indexing: regard all adjacent pairs of hanzi characters in the text as indexing terms.
Trigram Indexing: regard every consecutive sequence of three hanzi characters as an indexing term.
Hsin-Hsi Chen

37 Examples Hsin-Hsi Chen

38 Indexing Techniques (Continued)
Statistical Indexing
    Collect occurrence frequencies for all Chinese characters occurring at least once in the collection.
    Collect occurrence frequencies for all Chinese bigrams occurring at least once in the collection.
    Compute the mutual information for all Chinese bigrams:
        I(x,y) = log2( p(x,y) / (p(x)·p(y)) ) = log2( (f(x,y)/N) / ((f(x)/N)·(f(y)/N)) ) = log2( f(x,y)·N / (f(x)·f(y)) )
    Strongly related: much larger value. If x and y always co-occur, p(x,y) = p(x), so I(x,y) = log2( p(x) / (p(x)·p(y)) ) = log2( 1 / p(y) ), which is large.
    Not related: close to 0. If x and y are independent, I(x,y) = log2( p(x|y) / p(x) ) = log2(1) = 0.
    Negatively related: negative value.
Hsin-Hsi Chen
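A small sketch of the bigram mutual-information computation, following the formula above on a toy string; a real system would compute the counts over the whole collection.

```python
import math
from collections import Counter

# Toy "collection" of text; in practice the counts come from the full corpus.
corpus = "美國是個自由的國家美國的國民"
N = len(corpus)
f_char = Counter(corpus)
f_bigram = Counter(corpus[i:i + 2] for i in range(N - 1))

def mutual_information(x, y):
    """I(x, y) = log2( f(x,y) * N / (f(x) * f(y)) )."""
    return math.log2(f_bigram[x + y] * N / (f_char[x] * f_char[y]))

print(mutual_information("美", "國"))   # strongly related characters -> clearly positive
```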

39 f(c1): the occurrence frequency of the first Chinese character of a bigram
f(c2): the occurrence frequency of the second Chinese character
f(c1c2): the occurrence frequency of the bigram
I(c1,c2): mutual information
    I(c1,c2) >> 0: c1 and c2 have a strong relationship
    I(c1,c2) ~ 0: c1 and c2 have no relationship
    I(c1,c2) << 0: c1 and c2 have a complementary relationship
Hsin-Hsi Chen

40 (figure) Hsin-Hsi Chen

41 Segmentation as Classification
Tag each character with B (begin), I (inside), E (end), or S (single-character word):
    我國 代表 現在 正 面臨 很大 的 壓力
    B E  B E  B E  S  B E  B E  S  B E
    九把刀 不 同意
    B I E  S  B E
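A helper sketch that produces the character tags shown above from a segmented sentence; the tag names B/I/E/S follow the slide's examples.

```python
def to_bies(words):
    """Convert segmented words into per-character B/I/E/S tags."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags += ["B"] + ["I"] * (len(w) - 2) + ["E"]
    return tags

print(to_bies("我國 代表 現在 正 面臨 很大 的 壓力".split()))
print(to_bies(["九把刀", "不", "同意"]))   # -> ['B', 'I', 'E', 'S', 'B', 'E']
```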

42 (figure)

43 Training data: input features and the target
Each training example uses the surrounding characters and their tags as input features, with the current character's tag as the target column (目標欄位):
    features: C-2 T-2, C-1 T-1, C0, C+1 T+1, C+2 T+2
    target: T0, e.g., B or E
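One plausible reading of the feature template above, sketched as code: the surrounding characters and their tags are the input features and the current character's tag is the target; the padding symbol and feature names are assumptions.

```python
def make_example(chars, tags, i, pad="<PAD>"):
    """Build (features, target) for position i from a window of +/- 2 characters."""
    get = lambda seq, k: seq[i + k] if 0 <= i + k < len(seq) else pad
    feats = {}
    for k in (-2, -1, 1, 2):
        feats[f"C{k:+d}"] = get(chars, k)
        feats[f"T{k:+d}"] = get(tags, k)
    feats["C0"] = chars[i]
    return feats, tags[i]              # target column 目標欄位 = T0

chars, tags = list("我國代表現在"), list("BEBEBE")
print(make_example(chars, tags, 2))
```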

44 from "Chinese Word Segmentation by Classification of Characters", 2005

45 Discussion: Extending the Approach (討論:擴大應用)
Word Vectors and Word Embeddings: treat each word as a vector and compute similarity between words.
Going further, using the surrounding context (rather than the whole document) as the unit, substitutable words can be found, e.g.:
    今天的天氣不錯,明天的天氣不錯: 今天 ←→ 明天
    馬英九總統…,蔡英文總統…: 馬英九 ←→ 蔡英文
A sketch with word2vec follows below.
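A word-embedding sketch assuming gensim's Word2Vec (gensim 4.x API); the four-sentence corpus only demonstrates the calls, and finding substitutable words such as 今天 ←→ 明天 requires a large, segmented corpus.

```python
from gensim.models import Word2Vec

# Tiny toy corpus of already-segmented sentences; results here are not meaningful.
sentences = [["今天", "的", "天氣", "不錯"],
             ["明天", "的", "天氣", "不錯"],
             ["馬英九", "總統", "出席"],
             ["蔡英文", "總統", "出席"]]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=200, seed=1)
print(model.wv.most_similar("今天", topn=3))
```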

46 Recap: the weight matrix
(Table in the slide: tf-idf weights of terms such as ANTHONY, BRUTUS, CAESAR, CALPURNIA, CLEOPATRA, MERCY, WORSER across the documents Anthony and Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth.)
Each document is now represented as a real-valued vector of tf-idf weights ∈ R^|V|. Each word may be represented as a real-valued vector, too.
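A recap sketch of building such a tf-idf matrix, assuming scikit-learn's TfidfVectorizer; the tiny document list is illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["Anthony and Cleopatra", "Julius Caesar", "The Tempest", "Hamlet"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)                 # rows: documents, columns: vocabulary V
print(X.shape)                              # each row is a tf-idf vector in R^|V|
print(vec.get_feature_names_out()[:5])      # requires scikit-learn >= 1.0
```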

47 全聯 ←→ 頂好、家樂福、大潤發、愛買、美廉社、量販店、寶雅、屈臣氏、楓康
(Substitutable terms found for the supermarket chain 全聯: other retail chains and related terms.)

48 Other Language Issues

49 Tokenization
Input: "Friends, Romans and Countrymen" → Output: tokens
Each such token is now a candidate for further processing (normalization and other language processing).
Which information do we discard (or keep) already at this stage?
Indexing-time and query-time (analysis) processing must be consistent.

50 Tokenization: n-gram or n-word?
    German: Taiwan verbietet Verzehr von Hunden und Katzen
    Indonesian: Taiwan Segera Berlakukan Undang-undang Larangan Makan Daging Anjing dan Kucing
    Italian: mangiare cani e gatti diventa illegale
    Vietnamese: Cộng hòa xã hội chủ nghĩa Việt Nam
    Japanese: シンガポール
    Korean: 대만쇼케이스
    Thai: ฟุตบอลหญิงชิงแชมป์เอเชีย

51 Tokenization Issues in tokenization:
Finland's capital → Finland? Finlands? Finland's?
Hewlett-Packard → Hewlett and Packard as two tokens?
San Francisco: one token or two? How do you decide it is one token?
maneuver: 策略

52 Numbers
    3/12/91, Mar. 12, 1991, 55 B.C., B-52, My PGP key is 324a3df234cb23e
    Often, numbers are not indexed as text, but they are very useful mixed with text, e.g., product model (產品型號) Nikon D700
    (One answer is using n-grams)

53 Tokenization: Language issues
French: L'ensemble → one token or two? L? L'? Le?
    Want l'ensemble to match with un ensemble
German noun compounds are not segmented: Lebensversicherungsgesellschaftsangestellter ('life insurance company employee')

54 Tokenization: language issues
Chinese and Japanese have no spaces between words: 莎拉波娃现在居住在美国东南部的佛罗里达。
    A unique tokenization is not always guaranteed (斷詞問題)
Further complicated in Japanese, with multiple alphabets intermingled: フォーチュン500社は情報不足のため時間あた$500K(約6,000万円)
    Dates/amounts in multiple formats
    Katakana 片假名, Hiragana 平假名, Kanji 漢字, Romaji 羅馬拼音

55 Normalization
Need to "normalize" terms in the indexed text as well as query terms into the same form, e.g., we want to match U.S.A. and USA.
Indexing-time and query-time (analysis) processing must be consistent.
An alternative is to keep multiple tokenizations, e.g., mixed language processing plus an n-gram approach.

56 Normalization: other languages
Accents: résumé vs. resume.
    Most important criterion: even in languages that standardly have accents, users often may not type them.
    How would you like to present them in the final result?
German: Tuebingen vs. Tübingen should be equivalent.
Dates: 7月30日 vs. 7/30

57 Case folding
Reduce all letters to lower case.
    Exception: upper case in mid-sentence? e.g., General Motors, Fed vs. fed, SAIL vs. sail
One approach is to lower-case everything during analysis while keeping the original form for display.

58 Stop words
With a stop list, the commonest words are excluded from the dictionary entirely.
    Intuition: they have little semantic content (the, a, and, to, be) and take a lot of space (~30% of postings for the top 30 words).
But the trend is away from doing this, because stop words are needed for:
    Phrase queries: "King of Denmark"
    Various song titles, etc.: "Let it be", "To be or not to be"
    "Relational" queries: "flights to London"

59 Thesauri and soundex Handle synonyms 同義字 and homonyms 同音字
Hand-constructed equivalence classes, e.g., car = automobile, color = colour; rewrite terms to form equivalence classes.
Two ways to apply them: at indexing time or at query time.
    (1) Index the equivalences: when a document contains automobile, index it under car as well (usually also vice versa).
    (2) Expand the query: when the query contains automobile, look under car as well.

60 Soundex
A traditional class of heuristics to expand a query into phonetic equivalents. Language specific, used mainly for names, e.g., chebyshev → tchebycheff.

61 Lemmatization Reduce inflectional/variant forms to base form E.g.,
am, are, is → be
car, cars, car's, cars' → car
the boy's cars are different colors → the boy car be different color
Lemmatization implies doing "proper" reduction to the dictionary headword form (a sketch follows below).
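A lemmatization sketch assuming NLTK's WordNet lemmatizer (requires nltk and the 'wordnet' data package); irregular forms such as are → be need the verb POS hint.

```python
from nltk.stem import WordNetLemmatizer     # run nltk.download('wordnet') beforehand

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("cars"))            # -> car
print(lemmatizer.lemmatize("are", pos="v"))    # -> be
print(lemmatizer.lemmatize("colors"))          # -> color
```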

62 Stemming Reduce terms to their “roots” before indexing
"Stemming" suggests crude affix chopping (很粗略地將字首字尾去除) and is language dependent.
    e.g., automate(s), automatic, automation all reduce to automat.
    e.g., "for example compressed and compression are both accepted as equivalent to compress" stems to "for exampl compress and compress ar both accept as equival to compress" (see the sketch below).
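A stemming sketch assuming NLTK's PorterStemmer; the printed output should be close to the slide's example, though exact forms can vary with the implementation.

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
sentence = "for example compressed and compression are both accepted as equivalent to compress"
print(" ".join(stemmer.stem(w) for w in sentence.split()))
# roughly: for exampl compress and compress ar both accept as equival to compress
```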

63 Porter's algorithm Commonest algorithm for stemming English
Results suggest it is at least as good as other stemming options.
Conventions + 5 phases of reductions:
    phases are applied sequentially
    each phase consists of a set of commands (由一群合併或判斷規則所組成)
    sample convention: of the rules in a compound command, select the one that applies to the longest suffix (挑選適用最長字尾的規則)

64 Typical rules in Porter
sses  ss ies  i ational  ate tional  tion Weight of word sensitive rules (m>1) EMENT → replacement → replac cement → cement

65 Other stemmers
Other stemmers exist, e.g., the Lovins stemmer: single-pass, longest-suffix removal (about 250 rules), computationally more expensive.
Do stemming and other normalizations help?
    They help recall for some queries (more relevant documents are found), e.g., a search for 意大利 also finds 義大利.
    But they can harm precision on others (wrong documents are retrieved), e.g., a search for Steve Jobs also matches steve's job.

