Computer and Robot Vision I, Chapter 8: The Facet Model. Presented by: 張蓉蓉 r07922152@ntu.edu.tw. Advisor: Prof. 傅楸善. Digital Camera and Computer Vision Laboratory, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, R.O.C.
8.0 Outline
8.1 Introduction: infer from local information a mathematical model (an expression or function) that best approximates the underlying image
8.2 Relative Maxima: detecting relative maxima of a one-dimensional observation sequence with the facet model
8.3 Sloped Facet Parameter and Error Estimation: parameter and noise-variance estimation for the sloped facet model
8.4 Facet-Based Peak Noise Removal: removing peak noise with the sloped facet model
8.5 Iterated Facet Model: using the facet model to partition a gray level image surface into regions
8.6 Gradient-Based Facet Edge Detection: classical gradient-based edge detection with facets
8.7 Bayesian Approach to Gradient Edge Detection: deciding edge versus nonedge from observed gradient magnitude statistics with Bayes' rule
8.0 Outline (cont'd)
8.8 Zero-Crossing Edge Detector: edge detection via zero crossings of the second directional derivative
8.9 Integrated Directional Derivative Gradient Operator
8.10 Corner Detection: corner detection with the facet approach
8.11 Isotropic Derivative: computing higher-order isotropic derivative magnitudes with facets
8.12 Ridges and Ravines on Digital Images
8.13 Topographic Primal Sketch: labeling each pixel with a topographic category
8.1 Introduction
The facet model principle states that the image can be thought of as an underlying continuum or piecewise continuous gray level intensity surface. The observed digital image is a noisy, discretized sampling of a distorted version of this surface.
A gray level image is a matrix in which each entry stores the gray value of a pixel. If we treat gray value as terrain height, brighter regions sit higher. The true surface should be continuous, but a digital image is a discrete sampling of it contaminated by noise, so what we observe is a distorted version of the true image.
DC & CV Lab. CSIE NTU
8.1 Introduction
To actually carry out processing with the observed digital image requires a model that describes the general form of the surface in the neighborhood of any pixel.
That is, from the information in a local neighborhood we infer the mathematical expression that best approximates the original gray values, and then use that expression to represent the gray values within the neighborhood.
8.1 Introduction
General forms (mathematical formulas):
1. piecewise constant (flat facet model); ideal region: constant gray level
2. piecewise linear (sloped facet model); ideal region: sloped-plane gray level
3. piecewise quadratic; ideal region: bivariate quadratic gray level surface
4. piecewise cubic; ideal region: bivariate cubic gray level surface
8.2 Relative Maxima
A simple labeling application: detect and locate all relative maxima.
A relative maximum requires two conditions: (1) the first derivative is zero, and (2) the second derivative is negative.
8.2 Relative Maxima
Given a one-dimensional observation sequence $f_1, f_2, f_3, \ldots, f_N$. To find the relative maxima, we least-squares fit a quadratic $\hat c m^2 + \hat b m + \hat a$, where $-k \le m \le k$, to each group of $2k+1$ successive observations.
Take $2k+1$ consecutive points and fit a quadratic by least squares (only after fitting can we locate the maximum analytically). The origin is placed at the middle point of the group, so $m$ runs from $-k$ to $k$; the hats denote estimated values.
8.2 Relative Maxima
The squared fitting error $\varepsilon_n^2$ for the $n$th group of $2k+1$ observations is
$\varepsilon_n^2 = \sum_{m=-k}^{k} \left(\hat c m^2 + \hat b m + \hat a - f_{n+m}\right)^2$
Least squares asks for $\hat a, \hat b, \hat c$ that minimize this sum of squared differences between the fitted quadratic and the observations; with $N$ observations taken $2k+1$ at a time, there are $n$ such groups.
8.2 Relative Maxima
The error is smallest where its partial derivatives vanish, so we take the partial derivatives of $\varepsilon_n^2$ with respect to the free parameters $\hat a, \hat b, \hat c$ and set them to zero.
8.2 Relative Maxima
Assume $k = 1$ (the simple case of taking only three points at a time). Setting the partial derivatives to zero and simplifying yields the equations
$2\hat c + 3\hat a = f_{n-1} + f_n + f_{n+1}$
$2\hat b = f_{n+1} - f_{n-1}$
$2\hat c + 2\hat a = f_{n+1} + f_{n-1}$
Solving for $\hat a, \hat b, \hat c$ gives the fitted quadratic: $\hat a = f_n$, $\hat b = (f_{n+1} - f_{n-1})/2$, $\hat c = (f_{n+1} - 2f_n + f_{n-1})/2$.
8.2 Relative Maxima
The quadratic $y = \hat c m^2 + \hat b m + \hat a$ has a relative extremum at $m_0 = -\hat b / 2\hat c$, and the extremum is a relative maximum when $\hat c < 0$.
This is exactly the first-derivative-zero, second-derivative-negative condition. The algorithm then marks the center point of a group as a relative maximum when $\hat c < 0$ and $|m_0| \le 1/2$, i.e. when the fitted maximum lies closest to the origin of the group of $2k+1$ points.
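The peak-labeling algorithm above can be sketched in a few lines. This is not code from the text, only a minimal illustration of the quadratic fit with the $|m_0| \le 1/2$ acceptance rule described on the slides:

```python
import numpy as np

def relative_maxima(f, k=1):
    """Label relative maxima of a 1-D sequence by least-squares fitting
    a quadratic c*m^2 + b*m + a, -k <= m <= k, to each group of 2k+1
    successive observations (origin at the group center)."""
    f = np.asarray(f, dtype=float)
    m = np.arange(-k, k + 1)
    A = np.stack([m**2, m, np.ones_like(m)], axis=1)  # columns: m^2, m, 1
    maxima = []
    for n in range(k, len(f) - k):
        c, b, a = np.linalg.lstsq(A, f[n - k:n + k + 1], rcond=None)[0]
        # relative maximum: curvature c < 0 and the fitted extremum
        # m0 = -b/(2c) lies nearest this group's center point
        if c < 0 and abs(-b / (2 * c)) <= 0.5:
            maxima.append(n)
    return maxima

print(relative_maxima([0, 1, 4, 1, 0, 2, 3, 2, 1]))  # [2, 6]
```

With $k = 1$ the three-point fit is exact, so the detector reduces to checking the discrete second difference and the location of the vertex.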
8.3 Sloped Facet Parameter and Error Estimation
We employ a least-squares procedure both to estimate the parameters of the sloped facet model for a given two-dimensional rectangular neighborhood, whose row index set is R and whose column index set is C, and to estimate the noise variance.
Having treated the one-dimensional case, we now turn to the two-dimensional sloped facet model; R indexes the rows and C the columns of the rectangular neighborhood.
8.3 Sloped Facet Parameter and Error Estimation
Assume the coordinates of the given pixel are (0,0) at the center of its neighborhood, and that each $(r,c) \in R \times C$. The image function g is modeled by
$g(r,c) = \alpha r + \beta c + \gamma + \eta(r,c)$
where $\eta$ is a random variable indexed on $R \times C$ representing noise. We assume $\eta \sim N(0, \sigma^2)$, i.e. noise with mean 0 and variance $\sigma^2$, and that the noise at any two pixels is independent.
Here g(r,c) is the observed value; $\alpha, \beta, \gamma$ carry no hats because they are the true (unknown) parameters, not estimates.
8.3 Sloped Facet Parameter and Error Estimation
The least-squares procedure determines parameters $\hat\alpha, \hat\beta, \hat\gamma$ that minimize the sum of squared differences between the fitted surface and the observed one:
$\varepsilon^2 = \sum_{r \in R} \sum_{c \in C} \left[\hat\alpha r + \hat\beta c + \hat\gamma - g(r,c)\right]^2$
As before, we minimize by taking the partial derivatives of $\varepsilon^2$ and setting them to zero.
8.3 Sloped Facet Parameter and Error Estimation
Without loss of generality, we choose coordinates so that the center of the neighborhood $R \times C$ has coordinates (0,0). The symmetry of this coordinate system gives
$\sum_{r \in R} r = 0, \qquad \sum_{c \in C} c = 0$
and hence
$\sum_{r \in R} \sum_{c \in C} rc = \Big(\sum_{r \in R} r\Big)\Big(\sum_{c \in C} c\Big) = 0$
8.3 Sloped Facet Parameter and Error Estimation
Since $\sum_r \sum_c rc = 0$, the normal equations decouple:
$\hat\alpha \sum_r \sum_c r^2 = \sum_r \sum_c r\, g(r,c)$
$\hat\beta \sum_r \sum_c c^2 = \sum_r \sum_c c\, g(r,c)$
$\hat\gamma \sum_r \sum_c 1 = \sum_r \sum_c g(r,c)$
8.3 Sloped Facet Parameter and Error Estimation
Solving for $\hat\alpha$, $\hat\beta$, and $\hat\gamma$, we obtain
$\hat\alpha = \frac{\sum_r \sum_c r\, g(r,c)}{\sum_r \sum_c r^2}, \qquad \hat\beta = \frac{\sum_r \sum_c c\, g(r,c)}{\sum_r \sum_c c^2}, \qquad \hat\gamma = \frac{\sum_r \sum_c g(r,c)}{\sum_r \sum_c 1}$
For example, the numerator of $\hat\alpha$ is the sum of products of r with the observed values g(r,c), and the denominator is the sum of $r^2$.
8.3 Sloped Facet Parameter and Error Estimation
Example 8.1: consider the following 3×3 region, with coordinates (r,c) over {−1,0,1}×{−1,0,1}:
3 5 9
4 7 7
0 3 7
$\hat\alpha = \frac{-1 \times (3+5+9) + 0 \times (4+7+7) + 1 \times (0+3+7)}{(-1)^2 + (-1)^2 + (-1)^2 + 0^2 + 0^2 + 0^2 + 1^2 + 1^2 + 1^2} = -1.17$
$\hat\beta = \frac{-1 \times (3+4+0) + 0 \times (5+7+3) + 1 \times (9+7+7)}{6} = 2.67$
$\hat\gamma = \frac{3+5+9+4+7+7+0+3+7}{9} = 5$
8.3 Sloped Facet Parameter and Error Estimation
The estimated gray level surface is $\hat\alpha r + \hat\beta c + \hat\gamma = -1.17r + 2.67c + 5$. Evaluating at each (r,c), e.g. $(-1)\times(-1.17) + (-1)\times 2.67 + 5 = 3.5$ at the top-left corner, gives the fitted values (rounded to two decimals):
3.50 6.17 8.83
2.33 5.00 7.67
1.17 3.83 6.50
8.3 Sloped Facet Parameter and Error Estimation
Subtracting the observed values from the fitted surface gives the residuals:
 0.50  1.17 −0.17
−1.67 −2.00  0.67
 1.17  0.83 −0.50
Summing the squared residuals yields $\varepsilon^2 \approx 11.17$ (the value 11.19 quoted in the text comes from squaring residuals that were first rounded to two decimals).
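Example 8.1 is easy to verify numerically. The sketch below (not code from the text) implements the decoupled estimators from the slides for an odd square neighborhood:

```python
import numpy as np

def sloped_facet_fit(g):
    """Sloped facet fit g(r,c) ~ alpha*r + beta*c + gamma over an odd
    square neighborhood with the origin at the center.
    Returns (alpha, beta, gamma, squared residual error eps2)."""
    g = np.asarray(g, dtype=float)
    half = g.shape[0] // 2
    r, c = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing='ij')
    alpha = (r * g).sum() / (r**2).sum()   # sum r*g / sum r^2
    beta = (c * g).sum() / (c**2).sum()    # sum c*g / sum c^2
    gamma = g.mean()                       # sum g / number of pixels
    eps2 = ((alpha * r + beta * c + gamma - g)**2).sum()
    return alpha, beta, gamma, eps2

# the 3x3 region of Example 8.1
a, b, gma, e2 = sloped_facet_fit([[3, 5, 9], [4, 7, 7], [0, 3, 7]])
print(round(a, 2), round(b, 2), gma, round(e2, 2))  # -1.17 2.67 5.0 11.17
```

The exact estimates are $\hat\alpha = -7/6$, $\hat\beta = 8/3$, $\hat\gamma = 5$, with $\varepsilon^2 \approx 11.17$ before rounding.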
8.3 Sloped Facet Parameter and Error Estimation
Recall the estimators:
$\hat\alpha = \frac{\sum_r \sum_c r\, g(r,c)}{\sum_r \sum_c r^2}, \qquad \hat\beta = \frac{\sum_r \sum_c c\, g(r,c)}{\sum_r \sum_c c^2}, \qquad \hat\gamma = \frac{\sum_r \sum_c g(r,c)}{\sum_r \sum_c 1}$
8.3 Sloped Facet Parameter and Error Estimation
Replacing g(r,c) by $\alpha r + \beta c + \gamma + \eta(r,c)$ and simplifying lets us see explicitly how $\hat\alpha$, $\hat\beta$, and $\hat\gamma$ depend on the noise:
$\hat\alpha = \alpha + \frac{\sum_r \sum_c r\, \eta(r,c)}{\sum_r \sum_c r^2}, \qquad \hat\beta = \beta + \frac{\sum_r \sum_c c\, \eta(r,c)}{\sum_r \sum_c c^2}, \qquad \hat\gamma = \gamma + \frac{\sum_r \sum_c \eta(r,c)}{\sum_r \sum_c 1}$
8.3 Sloped Facet Parameter and Error Estimation
Since $\eta$ has mean 0 and variance $\sigma^2$, the estimators are unbiased:
$E[\hat\alpha] = \alpha, \qquad E[\hat\beta] = \beta, \qquad E[\hat\gamma] = \gamma$
8.3 Sloped Facet Parameter and Error Estimation
Review of variance rules: adding a constant leaves the variance unchanged; multiplying by a constant a multiplies the variance by $a^2$; if X and Y are independent, their covariance is 0.
8.3 Sloped Facet Parameter and Error Estimation
$V[\hat\alpha] = V\!\left[\frac{\sum_r \sum_c r\, \eta(r,c)}{\sum_r \sum_c r^2}\right] = \frac{\sum_r \sum_c r^2\, V[\eta(r,c)]}{\left(\sum_r \sum_c r^2\right)^2} = \frac{\sigma^2}{\sum_r \sum_c r^2}$
8.3 Sloped Facet Parameter and Error Estimation
We now examine the squared residual error $\varepsilon^2$: subtract the observed values from the fitted surface, square, and sum.
8.3 Sloped Facet Parameter and Error Estimation
Using the fact that $\hat\alpha - \alpha = \sum_r \sum_c r\, \eta(r,c) \big/ \sum_r \sum_c r^2$, and the analogous expressions for $\hat\beta - \beta$ and $\hat\gamma - \gamma$, we can express $\varepsilon^2$ in terms of the noise alone.
8.3 Sloped Facet Parameter and Error Estimation
Review of the chi-squared distribution: if $Z_1, Z_2, \ldots, Z_n$ are independent random variables with $Z_i \sim N(0,1)$, then $Y = Z_1^2 + Z_2^2 + \cdots + Z_n^2$ is distributed according to the chi-squared distribution with n degrees of freedom, written $Y \sim \chi_n^2$.
8.3 Sloped Facet Parameter and Error Estimation
Since $\eta \sim N(0, \sigma^2)$, each $\eta(r,c)/\sigma \sim N(0,1)$, so
$\frac{\sum_r \sum_c \eta^2(r,c)}{\sigma^2}$
has a chi-squared distribution with $\sum_r \sum_c 1$ (the number of pixels in the neighborhood) degrees of freedom.
8.3 Sloped Facet Parameter and Error Estimation
Since $\hat\alpha \sim N\!\left(\alpha, \frac{\sigma^2}{\sum_r \sum_c r^2}\right)$, we have
$\frac{(\hat\alpha - \alpha)\sqrt{\sum_r \sum_c r^2}}{\sigma} \sim N(0,1)$
Because $\hat\alpha$, $\hat\beta$, and $\hat\gamma$ are independent normals, the sum of the three squared standardized estimates has a chi-squared distribution with 3 degrees of freedom.
8.3 Sloped Facet Parameter and Error Estimation
$\varepsilon^2 / \sigma^2$ is distributed as a chi-squared variate with $\sum_r \sum_c 1 - 3$ degrees of freedom. This means that $\varepsilon^2 \big/ \left(\sum_r \sum_c 1 - 3\right)$ can be used as an unbiased estimator for $\sigma^2$.
8.4 Facet-Based Peak Noise Removal
A peak noise pixel is a pixel whose gray level intensity significantly differs from those of its neighbors. It is difficult to judge whether the center pixel in part (b) is peak noise, whereas it is easy in part (a); we cannot say (b) is noise-free, only that it is harder to decide than (a). This indicates that the spatial statistics of the gray levels are important.
8.4 Facet-Based Peak Noise Removal
Let N be the set of neighborhood pixels that does NOT contain the center pixel. We again use the sloped facet model $g(r,c) = \alpha r + \beta c + \gamma + \eta(r,c)$, where $\eta(r,c)$ is independent additive Gaussian noise with mean 0 and variance $\sigma^2$. Here N is a square neighborhood with the center pixel excluded.
8.4 Facet-Based Peak Noise Removal
The least-squares procedure determines $\hat\alpha$, $\hat\beta$, and $\hat\gamma$ minimizing the sum of squared differences between the fitted surface and the observed one, now summed over N:
$\hat\alpha = \frac{\sum_{(r,c) \in N} r\, g(r,c)}{\sum_{(r,c) \in N} r^2}, \qquad \hat\beta = \frac{\sum_{(r,c) \in N} c\, g(r,c)}{\sum_{(r,c) \in N} c^2}, \qquad \hat\gamma = \frac{\sum_{(r,c) \in N} g(r,c)}{\#N}$
A single sum over the pixel set N replaces the double sum over $R \times C$, since N is a set of pixels rather than a full rectangle.
8.4 Facet-Based Peak Noise Removal
Under the hypothesis that g(0,0) is not peak noise, $g(0,0) - \hat\gamma$ has a Gaussian distribution with mean 0 and variance $\sigma^2\left(1 + \frac{1}{\#N}\right)$:
$g(0,0) - \hat\gamma = \alpha \cdot 0 + \beta \cdot 0 + \gamma + \eta(0,0) - \hat\gamma = \gamma + \eta(0,0) - \gamma - \frac{\sum_{(r,c) \in N} \eta(r,c)}{\#N} = \eta(0,0) - \frac{\sum_{(r,c) \in N} \eta(r,c)}{\#N}$
Here g(0,0) is the (possibly noisy) observation at the origin, and $\hat\gamma$ is the fitted value there (since r = c = 0). Because $\hat\gamma$ is computed from the neighbors, it should be close to them; if g(0,0) is consistent with $\hat\gamma$, we judge it not to be noise. This calls for a statistical hypothesis test.
8.4 Facet-Based Peak Noise Removal
The variance follows from the independence of $\eta(0,0)$ and the neighborhood noise:
$V[g(0,0) - \hat\gamma] = V\!\left[\eta(0,0) - \frac{\sum_{(r,c) \in N} \eta(r,c)}{\#N}\right] = \sigma^2 + \frac{\#N}{(\#N)^2}\sigma^2 = \sigma^2\left(1 + \frac{1}{\#N}\right)$
8.4 Facet-Based Peak Noise Removal
Hence
$\frac{g(0,0) - \hat\gamma}{\sigma\sqrt{1 + \frac{1}{\#N}}}$
has mean 0 and variance 1.
8.4 Facet-Based Peak Noise Removal
Review of the t-distribution: if $Z \sim N(0,1)$, $Y \sim \chi_n^2$, and Z and Y are independent, then
$T = \frac{Z}{\sqrt{Y/n}}$
has a t-distribution with n degrees of freedom, denoted $T \sim t_n$.
References:
https://en.wikipedia.org/wiki/Student%27s_t-distribution
https://zh.wikipedia.org/wiki/卡方分佈
8.4 Facet-Based Peak Noise Removal
We have already established that $\varepsilon^2 / \sigma^2$ has a chi-squared distribution with $\#N - 3$ degrees of freedom:
$Y = \frac{\varepsilon^2}{\sigma^2} \sim \chi^2_{\#N-3}, \qquad Z = \frac{g(0,0) - \hat\gamma}{\sigma\sqrt{1 + \frac{1}{\#N}}} \sim N(0,1)$
Hence
$t = \frac{Z}{\sqrt{Y/(\#N-3)}} = \frac{(g(0,0) - \hat\gamma)\sqrt{\#N-3}}{\sqrt{\left(1 + \frac{1}{\#N}\right)\varepsilon^2}}$
has a t-distribution with $\#N - 3$ degrees of freedom; note the unknown $\sigma$ cancels.
8.4 Facet-Based Peak Noise Removal
The center pixel is judged to be a peak noise pixel if a test of the hypothesis $g(0,0) \le \hat\gamma$ rejects the hypothesis. Let $T_{\#N-3,\,p}$ be the number satisfying $P(t \le T_{\#N-3,\,p}) = 1 - p$; this is the threshold. Reasonable values for p are 0.05 to 0.01.
8.4 Facet-Based Peak Noise Removal
[Figure: t-distribution density with $\#N - 3$ degrees of freedom; the threshold $T_{\#N-3,\,p}$ cuts off the upper tail of probability p, leaving probability 1−p below it.]
8.4 Facet-Based Peak Noise Removal
If $t > T_{\#N-3,\,p}$, the hypothesis of the equality of g(0,0) and $\hat\gamma$ is rejected, and the output value for the center pixel is $\hat\gamma$.
If $t \le T_{\#N-3,\,p}$, the hypothesis is not rejected, and the output value for the center pixel is g(0,0).
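The whole test can be sketched as below. This is an illustrative example, not code from the text: the neighborhood is 3×3 so $\#N = 8$ and $\#N - 3 = 5$ degrees of freedom, and the one-sided critical value $T_{5,\,0.05} \approx 2.015$ is taken from a standard t-table.

```python
import numpy as np

def peak_noise_test(g, t_threshold):
    """Facet-based peak-noise test for the center pixel of an odd square
    neighborhood g.  The sloped facet is fitted to the pixel set N that
    EXCLUDES the center; the center value is replaced by gamma-hat when
    the t statistic exceeds the threshold."""
    g = np.asarray(g, dtype=float)
    half = g.shape[0] // 2
    r, c = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing='ij')
    mask = ~((r == 0) & (c == 0))          # N: every pixel but the center
    n = mask.sum()                          # #N
    alpha = (r[mask] * g[mask]).sum() / (r[mask]**2).sum()
    beta = (c[mask] * g[mask]).sum() / (c[mask]**2).sum()
    gamma = g[mask].mean()
    eps2 = ((alpha * r[mask] + beta * c[mask] + gamma - g[mask])**2).sum()
    # t = (g(0,0) - gamma) * sqrt(#N - 3) / sqrt((1 + 1/#N) * eps2)
    t = (g[half, half] - gamma) * np.sqrt((n - 3) / ((1 + 1.0 / n) * eps2))
    return gamma if t > t_threshold else g[half, half]

noisy = [[10, 10, 10],
         [10, 90, 10],
         [10, 11, 10]]
print(peak_noise_test(noisy, t_threshold=2.015))  # 10.125 (spike replaced)
```

For the spike above, $\hat\gamma = 81/8 = 10.125$ and the t statistic is huge, so the center is replaced; with a center value of 10 the hypothesis is not rejected and the pixel passes through unchanged.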
8.5 Iterated Facet Model
The iterated facet model for an ideal image assumes that the spatial domain of the image can be partitioned into connected regions called facets, each of which satisfies certain gray level and shape constraints.
8.5 Iterated Facet Model
Gray level constraint: the gray levels in each facet must be a polynomial function of the row-column coordinates.
Shape constraint: each facet must be sufficiently smooth in shape.
8.6 Gradient-Based Facet Edge Detection
Gradient-based facet edge detection looks for high values of the first partial derivatives (sharp discontinuities). An edge is where one side is bright and the other side is dark.
8.6 Gradient-Based Facet Edge Detection
When a neighborhood is fitted with the sloped facet model $\hat\alpha r + \hat\beta c + \hat\gamma$, the resulting gradient magnitude is $\sqrt{\hat\alpha^2 + \hat\beta^2}$, which we use as the basis for edge detection.
If the neighborhood contains no edge, its gray values are nearly equal, so $\hat\alpha$, $\hat\beta$, and hence the gradient magnitude, are close to 0. To decide how large the gradient must be before declaring an edge, we again turn to statistics.
8.6 Gradient-Based Facet Edge Detection
$\hat\alpha \sim N\!\left(\alpha, \frac{\sigma^2}{\sum_r \sum_c r^2}\right)$, $\hat\beta \sim N\!\left(\beta, \frac{\sigma^2}{\sum_r \sum_c c^2}\right)$, and $\hat\alpha$, $\hat\beta$ are independent, so
$\frac{(\hat\alpha - \alpha)^2 \sum_r \sum_c r^2 + (\hat\beta - \beta)^2 \sum_r \sum_c c^2}{\sigma^2} \sim \chi_2^2$
8.6 Gradient-Based Facet Edge Detection
It follows that to test the hypothesis of no edge, i.e. $\alpha = \beta = 0$, we use the statistic G:
$G = \frac{\hat\alpha^2 \sum_r \sum_c r^2 + \hat\beta^2 \sum_r \sum_c c^2}{\sigma^2} \sim \chi_2^2$
8.6 Gradient-Based Facet Edge Detection
If $\varepsilon_n^2$ is the squared residual fitting error of the n-th neighborhood and the image has N neighborhoods, we may use
$\hat\sigma^2 = \frac{1}{N} \sum_{n=1}^{N} \frac{\varepsilon_n^2}{\sum_r \sum_c 1 - 3}$
in place of $\sigma^2$. Strictly, the ratio of two chi-squared variates would give an F distribution, but because $\hat\sigma^2$ has very many degrees of freedom, G can still be treated as a chi-squared variate with 2 degrees of freedom.
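As an illustration (not from the text), the G statistic for a 3×3 neighborhood with $\sigma^2$ assumed known; the $\chi_2^2$ upper 5% point is about 5.99:

```python
import numpy as np

def gradient_G(g, sigma2):
    """Facet gradient edge test statistic
    G = (alpha^2 * sum r^2 + beta^2 * sum c^2) / sigma2,
    chi-squared with 2 df under the no-edge hypothesis alpha = beta = 0."""
    g = np.asarray(g, dtype=float)
    half = g.shape[0] // 2
    r, c = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing='ij')
    alpha = (r * g).sum() / (r**2).sum()
    beta = (c * g).sum() / (c**2).sum()
    return (alpha**2 * (r**2).sum() + beta**2 * (c**2).sum()) / sigma2

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
step = [[1, 1, 9], [1, 1, 9], [1, 1, 9]]   # vertical step edge
print(gradient_G(flat, sigma2=1.0))  # 0.0 -> below 5.99, no edge
print(gradient_G(step, sigma2=1.0))  # 96.0 -> above 5.99, edge
```

The flat neighborhood yields $\hat\alpha = \hat\beta = 0$ and G = 0, while the step edge yields $\hat\beta = 4$ and $G = 16 \times 6 = 96$, far beyond the 5% chi-squared threshold.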
8.7 Bayesian Approach to Gradient Edge Detection
The Bayesian approach to deciding whether an observed gradient magnitude G is statistically significant, and therefore participates in some edge, is to declare an edge (statistically significant gradient) when
$P(\text{edge} \mid G) > P(\text{nonedge} \mid G)$
where $P(\text{edge} \mid G)$ is the conditional probability of edge given the gradient magnitude, and $P(\text{nonedge} \mid G)$ the conditional probability of nonedge.
8.7 Bayesian Approach to Gradient Edge Detection
Hence a decision for edge is made whenever
$P(G \mid \text{edge})\,P(\text{edge}) > P(G \mid \text{nonedge})\,P(\text{nonedge})$
$P(G \mid \text{nonedge})$ is known to be the density function of a $\chi_2^2$ variate.
8.7 Bayesian Approach to Gradient Edge Detection
$P(G \mid \text{edge})$ can be inferred from P(G), the density function of the histogram of observed gradient magnitudes. $P(\text{nonedge})$ is a user-specified prior probability of nonedge; values from 0.9 to 0.95 are reasonable.
8.7 Bayesian Approach to Gradient Edge Detection
The threshold t for G is the value at which
$P(t \mid \text{edge})\,P(\text{edge}) = P(t \mid \text{nonedge})\,P(\text{nonedge})$
Since $P(t) = P(t \mid \text{edge})P(\text{edge}) + P(t \mid \text{nonedge})P(\text{nonedge})$, the threshold t must satisfy
$P(t) = 2\,P(t \mid \text{nonedge})\,P(\text{nonedge})$
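The threshold computation can be sketched under assumed densities. $P(G \mid \text{nonedge})$ is the $\chi_2^2$ density $\frac{1}{2}e^{-G/2}$ as stated above; for $P(G \mid \text{edge})$ we substitute a hypothetical exponential density with mean 10, and the priors 0.1/0.9 are likewise made up for illustration. Bisection then solves the equality condition:

```python
import math

def edge_threshold(f_edge, p_edge, f_nonedge, p_nonedge, lo=0.0, hi=50.0):
    """Bisect for the threshold t where
    P(t|edge)*P(edge) = P(t|nonedge)*P(nonedge).
    Assumes the difference changes sign exactly once on [lo, hi]."""
    def h(t):
        return f_edge(t) * p_edge - f_nonedge(t) * p_nonedge
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

chi2_2 = lambda t: 0.5 * math.exp(-t / 2)      # P(G|nonedge): chi-squared, 2 df
edge_pdf = lambda t: 0.1 * math.exp(-t / 10)   # hypothetical P(G|edge)
t_star = edge_threshold(edge_pdf, 0.1, chi2_2, 0.9)

# for this pair of exponentials the closed form is t = ln(45)/0.4
print(round(t_star, 3))  # 9.517
```

Gradient magnitudes above $t^\star$ are declared edges; below it, the nonedge posterior dominates.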
8.8 Zero-Crossing Edge Detector
Comparison: the gradient edge detector looks for high values of the first derivative, whereas the zero-crossing edge detector looks for relative maxima of the first derivative; it marks a pixel as an edge at a zero crossing of the second directional derivative. The underlying gray level intensity function f is taken to be a polynomial, expressed in the basis developed in the next subsection.
8.8.1 Discrete Orthogonal Polynomials
The underlying functions from which the directional derivatives are computed are easy to represent as linear combinations of the polynomials of any polynomial basis set. The basis set used here is the discrete orthogonal polynomials.
8.8.1 Discrete Orthogonal Polynomials
The discrete orthogonal polynomial basis set of size N consists of polynomials of degrees 0 through N−1; these unique polynomials are the discrete Chebyshev polynomials. Below we show how to generate them.
8.8.1 Discrete Orthogonal Polynomials (cont')
The discrete orthogonal polynomials can be generated recursively. For an index set R symmetric about 0:
$P_0(r) = 1, \qquad P_1(r) = r, \qquad P_{n+1}(r) = r\,P_n(r) - \beta_n P_{n-1}(r), \qquad \beta_n = \frac{\sum_{r \in R} r\,P_n(r)\,P_{n-1}(r)}{\sum_{r \in R} P_{n-1}^2(r)}$
8.8.1 Discrete Orthogonal Polynomials (cont')
Example: find $P_2(r)$.
$P_2(r) = r\,P_1(r) - \beta_1 P_0(r) = r \cdot r - \frac{\sum_{r \in R} r\,P_1(r)\,P_0(r)}{\sum_{r \in R} P_0^2(r)} \cdot 1 = r^2 - \frac{\sum_{r \in R} r^2}{\sum_{r \in R} 1} = r^2 - \frac{\mu_2}{\mu_0}$
8.8.2 Two-Dimensional Discrete Orthogonal Polynomials
Two-dimensional discrete orthogonal polynomials can be created from tensor products of the one-dimensional ones above. For the index set {−1,0,1}:
$\mu_2 = (-1)^2 + 0^2 + 1^2 = 2, \qquad \mu_0 = 1 + 1 + 1 = 3$
$P_2(r) = r^2 - \frac{\mu_2}{\mu_0} = r^2 - \frac{2}{3}$
8.8.2 Two-Dimensional Discrete Orthogonal Polynomials
Index set: {−1,0,1} × {−1,0,1}
Discrete orthogonal polynomial set:
$\{1, r, r^2 - \tfrac{2}{3}\} \times \{1, c, c^2 - \tfrac{2}{3}\} = \{1,\ r,\ r^2 - \tfrac{2}{3},\ c,\ c^2 - \tfrac{2}{3},\ rc,\ c(r^2 - \tfrac{2}{3}),\ r(c^2 - \tfrac{2}{3}),\ (r^2 - \tfrac{2}{3})(c^2 - \tfrac{2}{3})\}$
(On the five-point set {−2,−1,0,1,2} the cubic basis polynomial is $r^3 - \frac{17}{5}r$.)
8.8.3 Equal-Weighted Least-Squares Fitting Problem
The exact fitting problem is to determine coefficients $a_0, \ldots, a_{N-1}$ such that $d(r) = \sum_{n=0}^{N-1} a_n P_n(r)$. The approximate fitting problem is to determine coefficients $a_0, \ldots, a_K$, $K \le N-1$, such that
$e^2 = \sum_{r \in R} \Big(d(r) - \sum_{n=0}^{K} a_n P_n(r)\Big)^2$
is minimized. The result is that for each index r the data value d(r) is multiplied by a weight; this means each fitting coefficient $\hat a_m$ can be computed as a linear combination of the data values.
8.8.3 Equal-Weighted Least-Squares Fitting Problem
$\hat a_m = \frac{\sum_{r \in R} P_m(r)\,d(r)}{\sum_{r \in R} P_m^2(r)}$
The numerator is the correlation of the data with $P_m$; the denominator is the normalizing constant $\sum_r P_m^2(r)$.
8.8.3 Equal-Weighted Least-Squares Fitting Problem (cont')
For a 5×5 neighborhood, the center represents r = 0, c = 0; the top-left corner is r = −2, c = −2, and the bottom-right corner is r = 2, c = 2. Substituting these indices into the basis polynomials yields the fitting masks; for example, the mask for $k_{10}$ corresponds to $c^3 - 3.4c$.
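The recursion of Section 8.8.1 and the fitting weights above can be checked numerically. A sketch (not code from the text), assuming a symmetric index set so the recursion takes the simple form on the slides:

```python
import numpy as np

def chebyshev_basis(R, max_deg):
    """Values of the discrete orthogonal (Chebyshev) polynomials
    P_0..P_max_deg on a symmetric index set R, via the recursion
    P_{n+1}(r) = r*P_n(r) - beta_n*P_{n-1}(r)."""
    R = np.asarray(R, dtype=float)
    polys = [np.ones_like(R), R.copy()]            # P_0 = 1, P_1 = r
    for n in range(1, max_deg):
        beta = (R * polys[-1] * polys[-2]).sum() / (polys[-2]**2).sum()
        polys.append(R * polys[-1] - beta * polys[-2])
    return polys[:max_deg + 1]

R = np.array([-2, -1, 0, 1, 2])
P = chebyshev_basis(R, 3)
# pairwise orthogonality over R
for i in range(4):
    for j in range(i + 1, 4):
        assert abs((P[i] * P[j]).sum()) < 1e-12
# P_3 agrees with r^3 - (17/5) r on this set
assert np.allclose(P[3], R**3 - 3.4 * R)
# fitting weight vector for the degree-1 coefficient: P_1(r) / sum P_1^2
print(P[1] / (P[1]**2).sum())  # [-0.2 -0.1  0.   0.1  0.2]
```

The printed weight vector is the familiar 1-D first-derivative mask; 2-D masks follow by taking tensor products of such row and column vectors.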
8.8.4 Directional Derivative Edge Finder
We define the directional derivative edge finder as the operator that places an edge in all pixels having a negatively sloped zero crossing of the second directional derivative taken in the direction of the gradient. Notation: r is the row and c the column; $\rho$ is the radius in polar coordinates and $\alpha$ the polar angle, measured clockwise from the column axis.
8.8.4 Directional Derivative Edge Finder (cont')
The directional derivative of f at point (r,c) in direction $\alpha$ is
$f'_\alpha(r,c) = \frac{\partial f}{\partial r}(r,c)\sin\alpha + \frac{\partial f}{\partial c}(r,c)\cos\alpha$
References:
http://blog.ncue.edu.tw/sys/lib/read_attach.php?id=13236
http://blog.ncue.edu.tw/sys/lib/read_attach.php?id=13237
8.8.4 Directional Derivative Edge Finder (cont')
$f'_\alpha(r,c) = \lim_{h \to 0} \frac{f(r + h\sin\alpha,\, c + h\cos\alpha) - f(r,c)}{h}$
$= \lim_{h \to 0} \frac{f(r + h\sin\alpha,\, c + h\cos\alpha) - f(r,\, c + h\cos\alpha)}{h\sin\alpha}\,\sin\alpha + \lim_{h \to 0} \frac{f(r,\, c + h\cos\alpha) - f(r,c)}{h\cos\alpha}\,\cos\alpha$
$= \frac{\partial f}{\partial r}(r,c)\sin\alpha + \frac{\partial f}{\partial c}(r,c)\cos\alpha$
8.8.4 Directional Derivative Edge Finder (cont')
Applying the same operator twice, the second directional derivative of f at (r,c) in direction $\alpha$ is
$f''_\alpha(r,c) = \frac{\partial^2 f}{\partial r^2}\sin^2\alpha + 2\,\frac{\partial^2 f}{\partial r\,\partial c}\sin\alpha\cos\alpha + \frac{\partial^2 f}{\partial c^2}\cos^2\alpha$
8.8.4 Directional Derivative Edge Finder (cont')
A pixel is declared an edge pixel when, at some $\rho$ within the pixel along the gradient direction,
$f''_\alpha(\rho) = 0, \qquad f'''_\alpha(\rho) < 0, \qquad f'_\alpha(\rho) \ne 0$
that is, a negatively sloped zero crossing of the second directional derivative at which the first directional derivative is nonzero.
8.9 Integrated Directional Derivative Gradient Operator
An operator that measures integrated directional derivative strength as the integral of the first directional derivative taken over a square area; it yields a more accurate step edge direction.
8.10 Corner Detection
Purpose:
corners: to detect buildings in aerial images
corner points: to determine displacement vectors from an image pair
A corner requires two conditions: (1) the occurrence of an edge, and (2) a significant change in edge direction.
Gray scale corner detectors detect corners directly from the gray scale image.
Aerial Images
[Figure: example aerial images]
Stereo Vision Images
[Figure: example stereo image pair]
8.10.1 Corner Detection
Let $\theta(r,c)$ be the gradient direction at coordinate (r,c), and let $\theta_0 = \theta(0,0)$; then $(\sin\theta_0, \cos\theta_0)$ is a unit vector in the direction of the gradient at the origin. The origin (0,0) is a corner point when the following two conditions are met.
8.11 Isotropic Derivative Magnitudes
A gradient edge can be understood as arising from a first-order isotropic derivative magnitude. We determine the linear combinations of squared partial derivatives of a two-dimensional function that are invariant under rotation of its domain. Rotate the coordinate system by $\theta$ and call the resulting function g in the new (r', c') coordinates:
$r' = r\cos\theta + c\sin\theta, \qquad c' = -r\sin\theta + c\cos\theta$
8.11 Isotropic Derivative Magnitudes
$g(r', c') = k_1 + k_2 r' + k_3 c' = k_1 + (k_2\cos\theta - k_3\sin\theta)\,r + (k_2\sin\theta + k_3\cos\theta)\,c$
8.11 Isotropic Derivative Magnitudes
$\left[\frac{\partial g}{\partial r}\right]^2 + \left[\frac{\partial g}{\partial c}\right]^2 = (k_2\cos\theta - k_3\sin\theta)^2 + (k_2\sin\theta + k_3\cos\theta)^2 = k_2^2 + k_3^2 = \left[\frac{\partial f}{\partial r}\right]^2 + \left[\frac{\partial f}{\partial c}\right]^2$
The sum of the squares of the first partials, i.e. the squared gradient magnitude, is the same constant $k_2^2 + k_3^2$ for the original function and for the rotated function.
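The rotation invariance above is easy to verify numerically; a small check (the values of $k_2$, $k_3$ and the angles are arbitrary choices for illustration):

```python
import math

# For the linear facet f(r,c) = k1 + k2*r + k3*c, the rotated-frame partials
# are k2*cos(theta) - k3*sin(theta) and k2*sin(theta) + k3*cos(theta);
# their squared sum should always equal k2^2 + k3^2.
k2, k3 = 1.5, -2.0
for theta in (0.0, 0.3, 1.1, 2.0, math.pi / 2):
    g_r = k2 * math.cos(theta) - k3 * math.sin(theta)
    g_c = k2 * math.sin(theta) + k3 * math.cos(theta)
    assert abs((g_r**2 + g_c**2) - (k2**2 + k3**2)) < 1e-12
print("squared gradient magnitude is rotation invariant")
```

The identity holds for every angle because the rotation is an orthogonal transformation of the gradient vector.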
8.12 Ridges and Ravines on Digital Images
8.12 Ridges and Ravines on Digital Images
A digital ridge (ravine) occurs on a digital image where there is a simply connected sequence of pixels whose gray level intensity values are significantly higher (lower) than those neighboring the sequence. The facet model can be used to help accomplish ridge and ravine identification.
A ridge occurs where the gray values are higher than the neighbors'; a ravine where they are lower.
8.13 Topographic Primal Sketch
8.13.1 Introduction
The basis of the topographic primal sketch consists of labeling and grouping the underlying image-intensity surface patches according to categories defined in terms of monotonic, gray level, invariant functions of directional derivatives.
Categories: peak, pit, ridge, ravine, saddle, flat, hillside.
8.13.1 Introduction (cont')
Invariance requirement: histogram normalization and equal probability quantization are nonlinear enhancements. For example, edges based on zero crossings of second derivatives change position as a monotonic gray level transformation changes, and convexity of a gray level intensity surface is not preserved under such transformations. The categories peak, pit, ridge, ravine, saddle, flat, and hillside do have the required invariance.
8.13.2 Mathematical Classification of Topographic Structure
We use the following notation to describe the mathematical properties of the topographic categories for continuous surfaces:
$\nabla f$: gradient vector of f
$\|\nabla f\|$: gradient magnitude
$\omega^{(1)}$: unit vector in the direction in which the second directional derivative has greatest magnitude
$\omega^{(2)}$: unit vector orthogonal to $\omega^{(1)}$
$\lambda_1$, $\lambda_2$: values of the second directional derivative in the directions $\omega^{(1)}$ and $\omega^{(2)}$
8.13.2 Mathematical Classification of Topographic Structure
$H = \begin{pmatrix} \frac{\partial^2 f}{\partial r^2} & \frac{\partial^2 f}{\partial r\,\partial c} \\ \frac{\partial^2 f}{\partial c\,\partial r} & \frac{\partial^2 f}{\partial c^2} \end{pmatrix}$
Without loss of generality, assume $|\lambda_1| \ge |\lambda_2|$. The second directional derivatives can be computed from the 2×2 Hessian matrix H: the eigenvalues of the Hessian are the extremal values of the second directional derivative, and their associated eigenvectors are the directions in which the second directional derivative is extremized.
8.13.2 Peak
A peak (knob) occurs where there is a local maximum in all directions: the curvature is downward in all directions. At a peak the gradient is zero and the second directional derivative is negative in all directions. A point is classified as a peak when
$\|\nabla f\| = 0, \qquad \lambda_1 < 0, \qquad \lambda_2 < 0$
8.13.2 Peak
(Here $f''_\beta$ denotes the second directional derivative in direction $\beta$; at a peak it is negative for every direction $\beta$.)
8.13.2 Pit
A pit (sink, bowl) is the opposite of a peak: a local minimum in all directions. At a pit the gradient is zero and the second directional derivative is positive in all directions.
8.13.2 Ridge
A ridge occurs on a ridge line, a curve consisting of a series of ridge points; walking along the ridge line, the points to the right and left are lower. The ridge line itself may be flat, sloped upward, sloped downward, curved upward, etc. A ridge is a local maximum in one direction.
8.13.2 Ravine
A ravine (valley) is identical to a ridge except that it is a local minimum in one direction: walking along the ravine line, the points to the right and left are higher.
8.13.2 Saddle
A saddle is a local maximum in one direction and a local minimum in the perpendicular direction: positive curvature in one direction, negative in the perpendicular direction. At a saddle the gradient magnitude is zero and the extrema of the second directional derivative have opposite signs.
8.13.2 Flat
A flat (plain) is a simple horizontal surface: it must have zero gradient and no curvature. If these conditions hold, the point may further be classified as a foot or a shoulder:
foot: where the flat begins to turn up into a hill
shoulder: where the flat ends, turning down from a hill
8.13.2 Hillside
A hillside point is anything not covered by the previous categories: nonzero gradient and no strict extrema. Subcategories:
slope: a tilted flat (constant gradient)
convex hill: positive curvature (upward)
8.13.2 Hillside
concave hill: negative curvature (downward)
saddle hill: curvature up in one direction, down in the perpendicular direction
inflection point: a zero crossing of the second directional derivative
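The continuous-surface categories can be sketched as a decision rule on the gradient and the Hessian eigenvalues. This is a simplified illustration only: the actual classifier of Section 8.13.3 searches for zero crossings of the first directional derivative within the pixel (so, for example, ridge pixels need not have zero gradient at their centers); only the zero-gradient cases of the table are modeled here.

```python
import numpy as np

def classify_point(grad, hessian, eps=1e-6):
    """Toy topographic label for one point from its gradient and Hessian
    (zero-gradient rows of the continuous-surface table; every point with
    nonzero gradient is lumped into 'hillside')."""
    # eigenvalues ordered so that |lam1| >= |lam2|, as on the slides
    lam1, lam2 = sorted(np.linalg.eigvalsh(hessian), key=abs, reverse=True)
    if np.linalg.norm(grad) >= eps:
        return "hillside"            # slope / convex / concave / saddle hill
    if lam1 < -eps and lam2 < -eps:
        return "peak"                # local maximum in all directions
    if lam1 > eps and lam2 > eps:
        return "pit"                 # local minimum in all directions
    if lam1 < -eps and abs(lam2) <= eps:
        return "ridge"               # local maximum in one direction
    if lam1 > eps and abs(lam2) <= eps:
        return "ravine"              # local minimum in one direction
    if lam1 * lam2 < 0:
        return "saddle"              # opposite-signed extrema
    return "flat"                    # zero gradient, no curvature

assert classify_point([0, 0], [[-2, 0], [0, -1]]) == "peak"
assert classify_point([0, 0], [[2, 0], [0, -1]]) == "saddle"
assert classify_point([0, 0], [[0, 0], [0, 0]]) == "flat"
assert classify_point([0.5, 0], [[1, 0], [0, 1]]) == "hillside"
print("ok")
```

In the full algorithm, gradient, Hessian, and the zero-crossing search are all computed from the cubic facet fit coefficients $k_1$ through $k_{10}$.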
8.13.2 Summary of the Topographic Categories
Mathematical properties of topographic structures on continuous surfaces
8.13.2 Invariance of the Topographic Categories
1. Topographic labels (peak, pit, ridge, ravine, saddle, flat, and hillside)
2. Gradient direction
3. Directions of the second directional derivative extrema for peak, pit, ridge, ravine, and saddle
All three are invariant under monotonically increasing gray level transformations (monotonically increasing: positive derivative everywhere).
8.13.3 Topographic Classification Algorithm
Peak, pit, ridge, ravine, and saddle are likely not to occur exactly at a pixel center; if one occurs within the pixel's area, the pixel carries that label.
8.13.3 Topographic Classification Algorithm
We search for zero crossings of the first directional derivative in the directions of the extremized second directional derivative, $\omega^{(1)}$ and $\omega^{(2)}$.
8.13.3 Case One: No Zero Crossing
If there is no zero crossing along either of the two directions, the pixel is flat or hillside: if the gradient is zero, flat; if nonzero, hillside. A hillside may further be an inflection point, slope, convex hill, concave hill, etc.
8.13.3 Case Two: One Zero Crossing
If there is one zero crossing of the first directional derivative, the assigned label must be peak, pit, ridge, ravine, or saddle.
8.13.3 Case Three: Two Zero Crossings
LABEL1, LABEL2: assign a label to each of the two zero crossings.
8.13.3 Case Four: More Than Two Zero Crossings
If there are more than two zero crossings in some direction, choose the one closest to the pixel center; after ignoring the others, proceed as in Case Three.
8.13.4 Summary of Topographic Classification Scheme
One pass through the image; at each pixel:
1. calculate the fitting coefficients $k_1$ through $k_{10}$ of the cubic polynomial
2. use these coefficients to find the gradient, the gradient magnitude, and the eigenvalues and eigenvectors of the Hessian
3. search in the eigenvector directions for zero crossings of the first directional derivative
4. recompute the gradient, gradient magnitude, and second derivative at each zero crossing of the first directional derivative, then classify
END