Computer Vision Chapter 4


Computer Vision Chapter 4: Statistical Pattern Recognition. Presenter: 傅楸善 & 李承翰. Cell phone: 0972720929. E-mail: r01922087@ntu.edu.tw. Advisor: Prof. 傅楸善, Ph.D. Digital Camera and Computer Vision Laboratory, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, R.O.C.

Introduction Units: image regions and projected segments. Each unit has an associated measurement vector. A decision rule assigns each unit optimally to a class or category. DC & CV Lab. CSIE NTU

Introduction (Cont.) Feature selection and extraction techniques; decision rule construction techniques; techniques for estimating decision rule error.

Simple Pattern Discrimination Also called the pattern identification process: a unit is observed or measured, and a category assignment is made that names or classifies the unit as a type of object. The category assignment is based only on the observed measurement (pattern).

Simple Pattern Discrimination (cont.) a: assigned category from a set of categories C. t: true category identification from C. d: observed measurement from a set of measurements D. (t, a, d): event of classifying the observed unit. P(t, a, d): probability of the event (t, a, d).

Economic Gain Matrix e(t, a): economic gain/utility with true category t and assigned category a. A mechanism to evaluate a decision rule. The identity gain matrix is a special case.

An Instance

Another Instance P(g, g): probability of true good, assigned good; P(g, b): probability of true good, assigned bad; ... e(g, g): economic consequence for event (g, g), ... A positive e is a profit consequence; a negative e is a loss consequence. There are four possible cases.

Another Instance (cont.) We obtain two tables; one gives the joint probabilities of the true state and the detected state.

Another Instance (cont.) The other table gives the economic gain for each pair of true state and detected state.

Another Instance (cont.) Fraction of good objects manufactured: P(g) = P(g, g) + P(g, b); P(b) = P(b, g) + P(b, b). Expected profit per object: E = P(g, g)e(g, g) + P(g, b)e(g, b) + P(b, g)e(b, g) + P(b, b)e(b, b).
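The expected-profit computation can be sketched numerically. The joint probabilities and gain values below are hypothetical, chosen only so that P(g) = 0.95 and P(b) = 0.05 match the priors used later in Example 4.1:

```python
# Expected profit per object: E = sum over (t, a) of P(t, a) * e(t, a).
# All numbers are hypothetical, for illustration only.

# Joint probabilities P(t, a) for true/assigned state in {good, bad}
P = {("g", "g"): 0.90, ("g", "b"): 0.05,
     ("b", "g"): 0.02, ("b", "b"): 0.03}

# Economic gain e(t, a): profit when a good object is passed,
# losses when an object is misclassified
e = {("g", "g"): 1.0, ("g", "b"): -0.2,
     ("b", "g"): -5.0, ("b", "b"): 0.0}

expected_profit = sum(P[ta] * e[ta] for ta in P)
print(expected_profit)
```

Note that the marginals recover the manufacturing fractions: P(g) = 0.90 + 0.05 = 0.95 and P(b) = 0.02 + 0.03 = 0.05.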

Conditional Probability P(b|g) = P(g, b) / P(g); P(g|b) = P(b, g) / P(b).

Conditional Probability (cont.) P(b|g): false-alarm rate. P(g|b): misdetection rate. Another formula for expected profit per object: E = P(g)[P(g|g)e(g, g) + P(b|g)e(g, b)] + P(b)[P(g|b)e(b, g) + P(b|b)e(b, b)].

Example 4.1 P(g) = 0.95, P(b) = 0.05

Example 4.1 (cont.)

Example 4.2 P(g) = 0.95, P(b) = 0.05. Another example.

Example 4.2 (cont.)

Decision Rule Construction P(t, a): obtained by summing P(t, a, d) over every measurement d. Therefore the average economic gain is E[e] = Σ_t Σ_a e(t, a) P(t, a).

Decision Rule Construction (cont.) The measurement axis d has already been collapsed into the table on the right. In this book, "average" usually means the expected value, which is obtained as the sum of products of the corresponding entries.

Decision Rule Construction (cont.) We can use the identity matrix as the economic gain matrix to compute the probability of correct assignment: E = Σ_t P(t, t).

Fair Game Assumption The decision rule uses only the measurement data in making an assignment; nature and the decision rule are not in collusion. In other words, P(a | t, d) = P(a | d): the true category and the decision rule do not influence each other. From this assumption we can derive the following identities.

Fair Game Assumption (cont.) From the definition of conditional probability, P(t, a, d) = P(a | t, d) P(t, d).

Fair Game Assumption (cont.) By the fair game assumption, P(t, a, d) = P(a | d) P(t, d). By definition, P(a | d) = P(a, d) / P(d), so P(t, a, d) = P(a, d) P(t, d) / P(d).

Deterministic Decision Rule We use the notation f(a|d) to completely define a decision rule; f(a|d) represents all the conditional probabilities associated with the decision rule. A deterministic decision rule takes only the values 0 and 1: f(a|d) ∈ {0, 1} for every a and d. Decision rules that are not deterministic are called probabilistic/nondeterministic/stochastic.

Expected Value on f(a|d) Previous formula: E[e] = Σ_t Σ_a e(t, a) P(t, a). By P(t, a) = Σ_d P(t, a, d) and P(t, a, d) = f(a|d) P(t, d), substitution yields a new formula for the expected value: E[e; f] = Σ_t Σ_a e(t, a) Σ_d f(a|d) P(t, d).

Expected Value on f(a|d) (cont.) Rearranging the summations, we finally obtain E[e; f] = Σ_d Σ_a f(a|d) Σ_t e(t, a) P(t, d). This expression shows how the expected value depends on the decision rule.
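The rearranged formula E[e; f] = Σ_d Σ_a f(a|d) Σ_t e(t, a) P(t, d) can be evaluated directly. The two-category, two-measurement numbers below are hypothetical, for illustration:

```python
# Expected gain of a decision rule f:
#   E[e; f] = sum_d sum_a f(a|d) * sum_t e(t, a) * P(t, d)
# All numbers below are hypothetical, for illustration.

T = A = ("t1", "t2")          # true / assigned categories
D = ("d1", "d2")              # measurements

P_td = {("t1", "d1"): 0.4, ("t1", "d2"): 0.2,   # joint P(t, d)
        ("t2", "d1"): 0.1, ("t2", "d2"): 0.3}
e = {("t1", "t1"): 1.0, ("t1", "t2"): -1.0,     # gain matrix e(t, a)
     ("t2", "t1"): -1.0, ("t2", "t2"): 1.0}
# Deterministic rule: assign t1 on seeing d1, t2 on seeing d2
f = {("t1", "d1"): 1.0, ("t1", "d2"): 0.0,
     ("t2", "d1"): 0.0, ("t2", "d2"): 1.0}

expected = sum(f[a, d] * sum(e[t, a] * P_td[t, d] for t in T)
               for a in A for d in D)
print(expected)
```

Changing the entries of f while holding P and e fixed shows exactly how the expected gain depends on the decision rule.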

Bayes Decision Rules A Bayes decision rule maximizes the expected economic gain: it satisfies E[e; f] ≥ E[e; g] for every decision rule g.

Bayes Decision Rules (cont.)

Bayes Decision Rules (cont.)
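A Bayes rule follows directly from the rearranged expected-gain formula: for each measurement d, assign the category a that maximizes Σ_t e(t, a) P(t, d). A minimal sketch, with hypothetical numbers and the identity gain matrix:

```python
# Bayes rule: for each measurement d, choose the category a that
# maximizes sum_t e(t, a) * P(t, d).  Numbers are hypothetical.

T = A = ("t1", "t2")
P_td = {("t1", "d1"): 0.4, ("t1", "d2"): 0.2,
        ("t2", "d1"): 0.1, ("t2", "d2"): 0.3}
e = {("t1", "t1"): 1.0, ("t1", "t2"): 0.0,     # identity gain matrix
     ("t2", "t1"): 0.0, ("t2", "t2"): 1.0}

def bayes_assign(d):
    return max(A, key=lambda a: sum(e[t, a] * P_td[t, d] for t in T))

print(bayes_assign("d1"), bayes_assign("d2"))
```

With the identity gain matrix, the quantity being maximized is just P(t, d), so the resulting gain Σ_d max_t P(t, d) = 0.4 + 0.3 = 0.7 is the probability of correct assignment, as noted earlier.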

Continuous Measurement For the same example, try continuous density functions for the measurements: f(x|t1) = 3x² and f(x|t2) = (3/2)(1 − x²) on [0, 1]. To prove that they are indeed density functions, integrate each over the interval [0, 1]; both integrals equal 1. If the measurement space is continuous rather than discrete, we use continuous density functions and replace summation with integration.

Continuous Measurement (cont.) Suppose the prior probability of t1 is P(t1) = 1/3 and the prior probability of t2 is P(t2) = 2/3, where x is the measurement. Then P(t1, x) = (1/3)(3x²) = x² and P(t2, x) = (2/3)(3/2)(1 − x²) = 1 − x², so a Bayes decision rule assigns an observed unit to t1 when x² ≥ 1 − x², i.e. when x ≥ 1/√2.

Continuous Measurement (cont.) Since 0.805 > 0.68, the continuous measurement has a larger expected economic gain than the discrete one.
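The continuous example can be checked numerically: the densities f(x|t1) = 3x² and f(x|t2) = (3/2)(1 − x²) with priors 1/3 and 2/3 (as given in the slides) should each integrate to 1 over [0, 1], and the Bayes rule should switch categories at x = 1/√2. A sketch using a simple midpoint Riemann sum:

```python
# Verify the continuous-measurement example numerically.
from math import sqrt

def f1(x):                      # density f(x | t1) = 3 x^2 on [0, 1]
    return 3.0 * x * x

def f2(x):                      # density f(x | t2) = (3/2)(1 - x^2) on [0, 1]
    return 1.5 * (1.0 - x * x)

# Midpoint Riemann sums: both areas should be (very close to) 1
n = 100_000
dx = 1.0 / n
area1 = sum(f1((i + 0.5) * dx) for i in range(n)) * dx
area2 = sum(f2((i + 0.5) * dx) for i in range(n)) * dx

# Bayes rule with priors 1/3 and 2/3: assign t1 iff (1/3) f1(x) >= (2/3) f2(x)
def assign(x):
    return "t1" if (1/3) * f1(x) >= (2/3) * f2(x) else "t2"

threshold = 1.0 / sqrt(2.0)     # analytic switch-over point
print(round(area1, 4), round(area2, 4), round(threshold, 4))
```

Evaluating assign just above and below the threshold confirms the rule assigns t1 for x ≥ 1/√2 and t2 otherwise.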

Prior Probability The Bayes rule: replace P(t, d) with P(d|t) P(t). The Bayes rule can then be determined by assigning any category that maximizes the resulting expression. P(t) is the prior: the probability, before we observe the unit's measurement, that its true category is t. Why replace a perfectly good P(t, d) with P(d|t)? Consider the formula: often even P(t) is unknown, let alone P(t, d). In general, P(d|t) is easier to measure.

Economic Gain Matrix The identity matrix; a variant in which each incorrect assignment loses 1; a more balanced instance.

Maximin Decision Rule Maximizes the average gain over the worst prior probability. The maximin (maximize-the-minimum) criterion guarantees a result at least as good as the worst-case expected value.

Example 4.3

Example 4.3 (cont.)

Example 4.3 (cont.)

Example 4.3 (cont.) The lowest Bayes gain is achieved where the two lines intersect: E[e; f5] = (0.8 − 0.5) P(c1) + 0.5 and E[e; f7] = (0.5 − 0.9) P(c1) + 0.9. Setting them equal gives P(c1) = 4/7. Substituting back, the Bayes gain = 4/7 × 0.8 + 3/7 × 0.5 = 0.6714, so the lowest gain is 0.6714.
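The intersection computation for Example 4.3 can be verified with exact rational arithmetic:

```python
# Verify Example 4.3: the gain lines E[e; f5] = (0.8 - 0.5) p + 0.5 and
# E[e; f7] = (0.5 - 0.9) p + 0.9 intersect at p = P(c1) = 4/7, where the
# lowest Bayes gain is attained.
from fractions import Fraction

def E_f5(p):
    return (Fraction(8, 10) - Fraction(5, 10)) * p + Fraction(5, 10)

def E_f7(p):
    return (Fraction(5, 10) - Fraction(9, 10)) * p + Fraction(9, 10)

# Solve 0.3 p + 0.5 = -0.4 p + 0.9  =>  0.7 p = 0.4  =>  p = 4/7
p = Fraction(4, 7)
assert E_f5(p) == E_f7(p)       # the two lines really do meet here

gain = E_f5(p)
print(p, float(gain))           # about 0.6714
```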

Example 4.3 (cont.) This figure shows the expected values of all the decision rules; the middle line, fm, is the maximin decision rule.

Example 4.4 Two categories, c1 and c2, and two measurements, d1 and d2.

Example 4.4 (cont.)

Example 4.4 (cont.) f2 is the maximin decision rule.

Example 4.4 (cont.) In this case, the maximin decision rule is also a deterministic decision rule.

Example 4.5 This example is more complicated.

Example 4.5 (cont.) It demonstrates three things: 1. The Bayes rule is best (naturally, since the prior is known). 2. The general maximin rule can always do as well as the worst-case Bayes rule. 3. The general maximin rule is not necessarily a Bayes decision rule.

Example 4.5 (cont.)

Decision Rule Error The misidentification error αk and the false-identification error βk. Alpha (Type I error): mistakenly treating a good unit as bad. Beta (Type II error): treating a bad unit as good.

An Instance β = (0.05 × 1 + 0.05 × 1) / (0.5 + 0.5 + 0.1 + 0.1 + 0.15 + 0.5)
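The two error rates can be computed from a joint probability table. The table below is hypothetical (the slide's own table is an image not preserved in the transcript); it illustrates α as the fraction of good units assigned bad and β as the fraction of bad units assigned good:

```python
# Type I / Type II error rates from a joint probability table P(t, a).
# alpha (misidentification): true good assigned bad  = P(b | g)
# beta (false identification): true bad assigned good = P(g | b)
# The table below is hypothetical, for illustration.
P = {("g", "g"): 0.90, ("g", "b"): 0.05,
     ("b", "g"): 0.02, ("b", "b"): 0.03}

alpha = P["g", "b"] / (P["g", "g"] + P["g", "b"])   # P(b | g)
beta = P["b", "g"] / (P["b", "g"] + P["b", "b"])    # P(g | b)
print(round(alpha, 4), round(beta, 4))
```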

Reserving Judgment The decision rule may withhold judgment for some measurements. The rule is then characterized by the fraction of the time it withholds judgment and by the error rate on those measurements it does assign. Reserving judgment is an important technique for controlling the error rate.

Reserving Judgment (cont.) Let αk be the maximum Type I error we can tolerate for category k, and let βk be the maximum Type II error we can tolerate for category k. The measurements that will not be rejected form the acceptance region.

Nearest Neighbor Rule Assign pattern x to the class of the closest vector in the training set, where "closest" is defined by a metric d on the measurement space. Chief difficulty: the brute-force nearest neighbor algorithm has computational complexity proportional to the number of patterns in the training set.
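The brute-force rule is a one-liner over the training set; its linear scan per query is exactly the complexity problem the slide mentions. A minimal sketch with hypothetical 2-D training data:

```python
# Brute-force nearest neighbor: assign x the category of the closest
# training vector under Euclidean distance.  Each query scans the whole
# training set, so cost grows linearly with its size.
from math import dist           # Euclidean distance (Python 3.8+)

# Hypothetical training set: (measurement vector, category) pairs
training = [((0.0, 0.0), "c1"), ((0.1, 0.2), "c1"),
            ((1.0, 1.0), "c2"), ((0.9, 1.1), "c2")]

def nearest_neighbor(x):
    return min(training, key=lambda pair: dist(x, pair[0]))[1]

print(nearest_neighbor((0.2, 0.1)), nearest_neighbor((0.8, 0.9)))
```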

Binary Decision Tree Classifier Assigns a category by a hierarchical decision procedure.

Major Problems Choosing the tree structure; choosing the features used at each non-terminal node; choosing the decision rule at each non-terminal node.

Decision Rules at the Non-terminal Node Thresholding the measurement component; Fisher's linear decision rule; Bayes quadratic decision rule; Bayes linear decision rule; linear decision rule from the first principal component. 1. Set a threshold: larger values go left, smaller values go right. 2. Fisher's rule uses a mathematical discriminant to decide. 3. The Bayes quadratic rule is likewise a formula. 4. When the covariance is the same for all classes, the Bayes quadratic decision rule becomes linear. 5. PCA is an orthogonal linear transformation that maps the original data into a new coordinate system.

Error Estimation An important way to characterize the performance of a decision rule. The training data set must be independent of the testing data set. Hold-out method, a common technique: construct the decision rule with half the data set, and test with the other half. Notes: 1. Error estimation is an important way to measure the performance of a decision rule. 2. To test a decision rule, we evaluate it on a data set and see how well it does; that data set must be independent of the training set (testing on the training data itself is the resubstitution method, 重新帶入法). 3. Hold-out method: split the data in half, train the decision rule on one half and test on the other, then swap and repeat; combine the two error rates for the error estimate.
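The hold-out procedure described above can be sketched as follows. The data and the trivial majority-class "decision rule" are hypothetical, purely to show the split-train-test-swap structure:

```python
# Hold-out error estimation: train on one half, test on the other half,
# swap, and average the two error rates.  The decision rule here is a
# trivial majority-class classifier, for illustration only.
import random
from collections import Counter

def train(samples):
    # Fit the trivial rule: always predict the majority class
    majority = Counter(label for _, label in samples).most_common(1)[0][0]
    return lambda x: majority

def error_rate(rule, samples):
    wrong = sum(1 for x, label in samples if rule(x) != label)
    return wrong / len(samples)

random.seed(0)
# Hypothetical labeled data: 1-D measurements, two classes
data = [((random.random(),), "c1" if i % 3 else "c2") for i in range(100)]
random.shuffle(data)
half = len(data) // 2
first, second = data[:half], data[half:]

est = (error_rate(train(first), second) + error_rate(train(second), first)) / 2
print(est)
```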

Neural Network A set of units, each of which takes a linear combination of values from either an input vector or the outputs of other units.

Neural Network (cont.) Has a training algorithm: responses are observed, reinforcement algorithms are applied, and back propagation is used to change the weights.
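A single unit of the kind the slide describes takes a linear combination of its inputs and passes it through a nonlinearity. The weights, bias, and sigmoid choice below are hypothetical, for illustration:

```python
# One neural-network unit: a weighted linear combination of the inputs
# plus a bias, squashed by a sigmoid.  Weights and bias are hypothetical.
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def unit(inputs, weights, bias):
    # linear combination of the input values, then the nonlinearity
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

out = unit([1.0, 0.5], [0.4, -0.2], 0.1)
print(round(out, 4))
```

During training, back propagation would adjust the weights and bias by the gradient of an error measure through units like this one.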

Summary Bayesian approach; maximin decision rule; misidentification and false-alarm error rates; nearest neighbor rule; construction of decision trees; estimation of decision rule error; neural networks.