-Artificial Neural Network (ANN)- Self-Organizing Map (SOM)


-Artificial Neural Network (ANN)- Self-Organizing Map (SOM)
朝陽科技大學 資訊管理系 李麗華 教授 (Prof. 李麗華, Department of Information Management, Chaoyang University of Technology)

Outline
- Review: Types of ANN
- Introduction
- SOM Application
- Network Structure
- Concept of Neighborhood
- Learning Process and Reuse Process
- Computation Process: Training
- SOM Exercise

Review: Types of ANN
- Supervised learning: the network learns from a training set together with the expected outputs. Ex: Perceptron, BPN, PNN, LVQ, CPN.
- Unsupervised learning: the network is learned and adjusted based only on the input patterns (data); there is no expected output for fine-tuning the network. Ex: SOM, ART.
- Associative-memory learning: the network uses the weight matrix to memorize all training patterns; the output is recalled based on the learned patterns. Ex: Hopfield, Bidirectional Associative Memory (BAM), Hopfield-Tank.
- Optimization application: the network is designed to find an optimized or global solution. Ex: ANN, HTN.

Introduction
- SOM was proposed by Kohonen in 1980.
- SOM is an unsupervised, two-layered network that can organize a topological map from a random starting point.
- SOM is also called Kohonen's self-organizing feature map.
- The resulting map shows the natural relationships among the patterns given to the network.
- It is well suited to clustering analysis.

SOM Application
- Ex: handwriting pattern classification
- Ex: feature extraction
- Ex: simulating topological features
- Ex: discovering web-usage patterns
- Ex: simulating voice patterns
- Ex: finding clusters before applying data-mining techniques


Network Structure
- One input layer
- One competitive layer, which is usually a 2-D grid
[Figure: input nodes X1…Xi fully connected through weights Wijk to output nodes Yjk arranged on the 2-D grid]
- Input layer: f(x) = x
- Output layer: competitive layer with a topological map relationship
- Weights: randomly assigned

Concept of Neighborhood
- Center: the winning node C is the center.
- Distance: r is the distance from a node N to the center C; R is the radius of the neighborhood.
- R factor (neighborhood coefficient): RFjk = f(rjk, R) = e^(-rjk/R)
  - when rjk = 0, e^(-rjk/R) = 1
  - when rjk = ∞, e^(-rjk/R) → 0
  - when rjk = R, e^(-rjk/R) ≈ 0.368
  (*) The longer the distance, the smaller the neighborhood factor.
- R adjustment: Rn = R_rate × Rn-1, with R_rate < 1.0
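The decay of the neighborhood factor can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not from the slides):

```python
import math

def neighborhood_factor(r, R):
    """Neighborhood factor RF = e^(-r/R): decays with distance r from the winner."""
    return math.exp(-r / R)

# Spot-check the three cases from the slide, with R as the neighborhood radius:
R = 1.0
print(round(neighborhood_factor(0.0, R), 3))   # 1.0   (at the winning node)
print(round(neighborhood_factor(R, R), 3))     # 0.368 (at distance r = R)
print(round(neighborhood_factor(1e9, R), 3))   # 0.0   (far away)
```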

Learning Process
1. Set up the network.
2. Randomly assign the weights W.
3. Set the coordinate values of the output layer Y(j,k).
4. Send in the input vector X.
5. Find the winning node.
6. Update the weights ΔW using the R factor.
7. ηn = η_rate × ηn-1; Rn = R_rate × Rn-1.
8. Repeat steps 4-7 until convergence.
(*) After the network is trained, the weight matrix is obtained.
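The eight steps above can be sketched as a minimal training loop (a hypothetical Python/NumPy sketch; the function name, array shapes, and default rates are our assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, rows, cols, eta=1.0, R=2.0, eta_rate=0.5, r_rate=0.5, epochs=10):
    """Minimal SOM training loop. X: (n_samples, n_features); grid is rows x cols."""
    # Steps 1-3: set up the network, random weights, and grid coordinates.
    W = rng.uniform(-1, 1, size=(rows, cols, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in X:                                 # Step 4: present input vector
            net = ((x - W) ** 2).sum(axis=-1)       # squared distance to each node
            win = np.unravel_index(net.argmin(), net.shape)  # Step 5: winning node
            r = np.linalg.norm(coords - np.array(win), axis=-1)
            RF = np.exp(-r / R)                     # neighborhood factor
            W += eta * RF[..., None] * (x - W)      # Step 6: update all weights
        eta *= eta_rate                             # Step 7: shrink eta and R
        R *= r_rate
    return W                                        # the trained weight matrix
```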

Reuse the network
1. Set up the SOM network.
2. Read the weight matrix.
3. Set the coordinate values of the output layer Y(j,k).
4. Read the input vector.
5. Find the winning node Yjk.
6. Output the clustering result of Y.
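The reuse (recall) phase reduces to finding the winning node under the trained weight matrix; a minimal sketch, assuming the weights are stored as a rows × cols × features array:

```python
import numpy as np

def som_cluster(W, x):
    """Recall phase: given trained weights W (rows x cols x features),
    return the grid coordinate (j, k) of the winning node for input x."""
    net = ((x - W) ** 2).sum(axis=-1)          # squared distance to every node
    return np.unravel_index(net.argmin(), net.shape)  # coordinate of the minimum
```

An input is assigned to the cluster of whichever output node it wins, so calling `som_cluster` over all inputs yields the clustering result.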

Computation process--Training
1. Set up the network. Input vector: X1~Xi; output (2-D) map: Yjk.
2. Compute the winning node (j*, k*): net_jk = Σi (Xi − Wijk)², and the winner is the node with the minimum net_jk.
3. Yjk = 1 if j = j* and k = k*; Yjk = 0 otherwise.

Computation process (cont.)
4. Update the weights: ΔWijk = η · (Xi − Wijk) · RFjk, where Wijk ← ΔWijk + Wijk.
5. Repeat steps 2-4 for all inputs.
6. ηn = η_rate × ηn-1; Rn = R_rate × Rn-1.
7. Repeat until convergence.

SOM Exercise-1
Consider a 2-dimensional clustering problem. The vector space includes 5 different clusters, each with 2 sample sets, as shown in the graph below.

No.  X1    X2
1   -0.9  -0.8
2          0.6
3    0.9
4    0.7  -0.4
5   -0.2   0.2
6   -0.7  -0.6
7    0.8
8
9
10   0.1

[Figure: the 10 samples plotted on the X1-X2 plane over [-1, 1] × [-1, 1]]

SOM Exercise-2
Sol: Set up a 2×9 SOM network and randomly assign the weights:

Node   w1    w2
Wi00  -0.2  -0.8
Wi01   0.2  -0.4
Wi02   0.3   0.6
Wi10    .6
Wi11  -0.3
Wi12  -0.6
Wi20   0.7
Wi21   0.8
Wi22

[Figure: the nine weight vectors plotted on the x1-x2 plane over [-1, 1] × [-1, 1]]

SOM Exercise-3
Let R = 2.0 and η = 1.0, and feed in the first pattern [-0.9, -0.8]:

net00 = (-0.9+0.2)² + (-0.8+0.8)² = 0.49
net01 = (-0.9-0.2)² + (-0.8+0.4)² = 1.37
net02 = (-0.9+0.3)² + (-0.8-0.6)² = 2.32
net10 = (-0.9+0.4)² + (-0.8-0.6)² = 1.71
net11 = (-0.9+0.3)² + (-0.8-0.2)² = 1.36
net12 = (-0.9+0.6)² + (-0.8+0.2)² = 0.45
net20 = (-0.9+0.7)² + (-0.8-0.2)² = 1.04
net21 = (-0.9-0.8)² + (-0.8+0.6)² = 2.93
net22 = (-0.9+0.8)² + (-0.8+0.6)² = 0.05  ← the minimum
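As a quick check, the winning value net22 can be recomputed from the squared-distance definition (a sketch; the weight w22 = (-0.8, -0.6) is read off the terms shown in the slide):

```python
# Recompute net22 for the first input pattern x = (-0.9, -0.8).
# Each net_jk is the squared Euclidean distance between x and that node's weights.
x = (-0.9, -0.8)
w22 = (-0.8, -0.6)   # weight vector of node (2,2), from the slide's terms
net22 = (x[0] - w22[0]) ** 2 + (x[1] - w22[1]) ** 2
print(round(net22, 2))   # 0.05 -> the smallest net, so node (2,2) wins
```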

SOM Exercise-4
net22 is the minimum, so the winning node is j* = 2, k* = 2. Update the weights using the grid distance rjk from each node (j,k) to the winner (2,2):
r00 ≈ 2.83, r01 ≈ 2.24, …, r22 = 0
RF00 = e^(-2.83/2) = 0.243
RF01 = e^(-2.24/2) ≈ 0.327
⋮
RF22 = e^(0/2) = 1
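The neighborhood factors follow from the Euclidean distance on the output grid to the winner (2,2); for example, RF00 can be verified as follows (a sketch assuming Euclidean grid distance, which reproduces the slide's value 0.243):

```python
import math

R = 2.0
# Grid distance from node (0,0) to the winning node (2,2):
r00 = math.hypot(0 - 2, 0 - 2)           # = 2*sqrt(2) ~ 2.83
rf00 = math.exp(-r00 / R)                # neighborhood factor e^(-r/R)
print(round(rf00, 3))                    # 0.243, matching the slide
```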

SOM Exercise-5
Update weights (continued). For each j, k we update the weight using ΔWijk = η · (Xi − Wijk) · RFjk, then Wijk ← ΔWijk + Wijk, where rjk is the grid distance to the winner:
ΔWi00 = η · (X1 − Wi00) · RF00 = 1.0 × (-0.9 + 0.2) × 0.243 = -0.17
ΔWi01 = η · (X1 − Wi01) · RF01
⋮
[Figure: the weight vectors on the x1-x2 plane after the update; Wi00 moves by -0.17 toward the input]
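The first update can be verified numerically (a sketch using the slide's values for the first weight component):

```python
# Delta-W for node (0,0), first component, with the slide's values:
eta, rf00 = 1.0, 0.243
x1, w = -0.9, -0.2                 # input component and current weight
d_w = eta * (x1 - w) * rf00        # update rule dW = eta * (x - w) * RF
print(round(d_w, 2))               # -0.17, matching the slide
```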

[Figure: input vectors and initial weights plotted side by side; jk numbering legend: 1→Y00, 2→Y01, 3→Y02, 4→Y10, 5→Y11, 6→Y12, 7→Y20, 8→Y21, 9→Y22]

4Y10 ; 5Y11; 6Y12; 1Y00 ; 2Y01; 3Y02; (Y22) jk編號對照: 7Y20 ; 8Y21; 9Y22; 4Y10 ; 5Y11; 6Y12; 1Y00 ; 2Y01; 3Y02; (Y22) Y22 win and weights are updated towards the first input vector


SOM network supplementary videos
- Touch-based SOM: http://vimeo.com/7714780
- Demo program simulating 3-D color on a 2-D map: http://cartwright.chem.ox.ac.uk/aistuff/runcoloursom.html
- Self-organizing topological map learning: http://www.dailymotion.com/video/x6i7aq_selforganizing-maps_tech
- Solving the Travelling Salesman Problem: http://video.google.com/videoplay?docid=6179747342401102483#
- Matlab SOM video tutorials:
  - http://www.youtube.com/watch?v=MUWGogdiAdY (image recognition)
  - http://www.youtube.com/watch?v=7ecSO4fwP_o (3-D graphics)
  - http://www.youtube.com/watch?v=dASyjPQtbS8
  - http://www.youtube.com/watch?v=ZWtKjp3mWZ8&feature=related (3-D color bar chart)
  - http://www.youtube.com/watch?v=kcG58hwoqag&feature=related (traditional grid map)
  - http://www.youtube.com/watch?v=kcG58hwoqag&feature=related (video compilation)