-Artificial Neural Network- Hopfield Neural Network (HNN)
朝陽科技大學 (Chaoyang University of Technology), Department of Information Management, Prof. 李麗華

Associative Memory (AM) - 1

Def: Associative memory (AM) is any device that associates a set of predefined output patterns with specific input patterns.

Two types of AM:
- Auto-associative memory: converts a corrupted input pattern into the stored pattern it most resembles.
- Hetero-associative memory: produces the output pattern that was stored as corresponding to the most similar input pattern.

Associative Memory (AM) - 2

Models: AM is the associative mapping of an input vector X into the output vector V:

   X = (X1, X2, X3, ..., Xn)  →  [Associative Memory]  →  V = (v1, v2, v3, ..., vm)

EX: Hopfield Neural Network (HNN)
EX: Bidirectional Associative Memory (BAM)

Introduction

The Hopfield Neural Network (HNN) was proposed by Hopfield in 1982. HNN is an auto-associative memory network. It is a one-layer, fully connected network.

[Figure: a single layer of fully connected nodes X1, X2, ..., Xn]

HNN Architecture

Input: Xi ∈ {-1, +1}
Output: same as the input (∵ single-layer network)
Transfer function:

   Xi^new = +1        if net_i > 0
            Xi^old    if net_i = 0
            -1        if net_i < 0

(Xi^old is the previous value of Xi.)

Weights: W_ij = Σ_p Xi^(p) · Xj^(p) for i ≠ j, with W_ii = 0.
Connections: every node is connected to every other node; there are no self-connections.
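A minimal sketch of this transfer function in Python (the function name and signature are illustrative, not from the slides):

```python
def transfer(net, x_prev):
    """Xi^new: +1 if net > 0, -1 if net < 0, else keep the previous Xi."""
    if net > 0:
        return 1
    if net < 0:
        return -1
    return x_prev  # net == 0: hold the previous state

print(transfer(0.7, -1))   # -> 1
print(transfer(0.0, -1))   # -> -1 (previous state held)
```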

HNN Learning Process

Learning process:
a. Set up the network, i.e., design the input nodes and connections.
b. Calculate and derive the weight matrix.
c. Store the weight matrix.

The learning process is done when the weight matrix is derived. We obtain an n×n weight matrix, W_nxn.
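The slide's weight formula did not survive extraction; the standard Hopfield outer-product (Hebbian) rule, W = Σ_p x_p x_p^T with the diagonal zeroed, is consistent with the W_ii = 0 noted in the example below. A minimal sketch, assuming bipolar NumPy patterns (names are illustrative):

```python
import numpy as np

def hnn_train(patterns):
    """Derive the n x n weight matrix from bipolar (+1/-1) patterns:
    W = sum over patterns of outer(x, x), with W_ii = 0."""
    patterns = np.asarray(patterns, dtype=float)
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W
```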

HNN Recall Process

Recall:
a. Read the n×n weight matrix, W_nxn.
b. Input the test pattern X for recalling.
c. Compute the new input (i.e., the output):

   net_j = Σ_i W_ij · Xi   (or net = W · X)

   Xj^new = +1        if net_j > 0
            Xj^old    if net_j = 0
            -1        if net_j < 0

d. Repeat step c until the network converges (i.e., the net value is not changed or the error is very small).
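A sketch of the recall loop (synchronous updates for brevity; the slides do not specify an update order, and asynchronous updates are also common). Names are illustrative:

```python
import numpy as np

def hnn_recall(W, x, max_iters=100):
    """Repeat Xnew = f(W . X) until the state stops changing."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iters):
        net = W @ x                                   # net_j = sum_i W_ij * Xi
        x_new = np.where(net > 0, 1.0,
                         np.where(net < 0, -1.0, x))  # hold state when net_j == 0
        if np.array_equal(x_new, x):                  # converged
            break
        x = x_new
    return x
```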

Example: Use HNN to memorize patterns (1)

Use HNN to memorize the following patterns. Let green be represented by "1" and white by "-1". The input data is shown in the table below.

[Table: the training patterns, each a row of six bipolar values X1-X6; the original values did not survive extraction.]

Example: Use HNN to memorize patterns (2)

Compute the weight matrix from the training patterns, with W_ii = 0.

[Table: the training patterns and the computed weight values; the original values did not survive extraction.]

Example: Use HNN to memorize patterns (3)

Recall: feed a test pattern into the network and apply the transfer function until convergence. The pattern is recalled as:

[Figure: the recalled pattern; the original computation did not survive extraction.]
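Since the slide's pattern values were lost in extraction, the end-to-end demo below uses hypothetical 6-pixel stand-ins (green = +1, white = -1, as defined above); hnn_train and hnn_recall are the earlier sketches, repeated so the snippet runs standalone:

```python
import numpy as np

def hnn_train(patterns):
    W = np.zeros((patterns.shape[1],) * 2)
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    return W

def hnn_recall(W, x, max_iters=100):
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iters):
        net = W @ x
        x_new = np.where(net > 0, 1.0, np.where(net < 0, -1.0, x))
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Hypothetical stand-ins for the slide's table (actual values not recoverable).
patterns = np.array([[ 1,  1,  1, -1, -1, -1],
                     [ 1, -1,  1, -1,  1, -1]], dtype=float)
W = hnn_train(patterns)                    # learning: store the weight matrix
noisy = np.array([1, 1, 1, -1, -1, 1.0])   # first pattern with one pixel flipped
print(hnn_recall(W, noisy))                # -> [ 1.  1.  1. -1. -1. -1.]
```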

Lyapunov or Liapunov (1/5)

Aleksandr Mikhailovich Lyapunov (1857-1918) was a Russian mathematician and physicist. His work in the fields of differential equations, potential theory, the stability of systems, and probability theory is very important. His methods, today named Lyapunov methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations.

Liapunov function (1/5)

This is an energy function that has been utilized to construct the method of HNN:

   E = -1/2 Σ_i Σ_j W_ij · Xi · Xj

where W_ij are the connection weights and Xi, Xj are the node states. In order to let the function converge, we need to achieve ΔE ≤ 0, so that E keeps decreasing until E = constant.
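A short NumPy sketch of this energy (assuming the quadratic form above; any threshold term the original slide may have had is omitted):

```python
import numpy as np

def energy(W, x):
    """Liapunov / energy function: E = -1/2 * sum_ij W_ij * Xi * Xj."""
    x = np.asarray(x, dtype=float)
    return -0.5 * x @ W @ x
```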

Liapunov function (2/5)

To prove that the Liapunov function can be used in HNN, we let

   E = -1/2 Σ_i Σ_j W_ij · Xi · Xj

pf: Consider an update in which a single node j changes its state, so that ΔXj = Xj^new - Xj^old while all other nodes keep their values. We examine the resulting change ΔE = E^new - E^old.

Liapunov function (3/5)

Since the weights are symmetric (W_ij = W_ji) and W_ii = 0, only the terms containing Xj change:

   ΔE = -1/2 Σ_i W_ij · Xi · ΔXj - 1/2 Σ_i W_ji · ΔXj · Xi
      = -ΔXj · Σ_i W_ij · Xi
      = -ΔXj · net_j

Liapunov function (4/5)

Discussion of ΔE = -ΔXj · net_j:

Case 1: when net_j > 0, the transfer function gives Xj^new = +1, so ΔXj ≥ 0 and ΔE = -ΔXj · net_j ≤ 0.

Case 2: when net_j < 0, the transfer function gives Xj^new = -1, so ΔXj ≤ 0 and ΔE = -ΔXj · net_j ≤ 0.

(When net_j = 0, Xj is unchanged and ΔE = 0.) ∴ In every case ΔE ≤ 0, so E never increases.
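A quick numeric check of the two cases, using hypothetical random symmetric weights and asynchronous one-node updates: the energy never increases, matching ΔE = -ΔXj · net_j ≤ 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0        # HNN requires symmetric weights...
np.fill_diagonal(W, 0.0)   # ...and W_ii = 0

def energy(W, x):
    return -0.5 * x @ W @ x

x = rng.choice([-1.0, 1.0], size=n)
for _ in range(200):                    # update one randomly chosen node at a time
    j = rng.integers(n)
    net_j = W[j] @ x
    e_before = energy(W, x)
    if net_j > 0:
        x[j] = 1.0
    elif net_j < 0:
        x[j] = -1.0                     # net_j == 0: previous state is held
    assert energy(W, x) <= e_before + 1e-9   # Cases 1 and 2: dE <= 0
print("energy never increased")
```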

Liapunov function (5/5)

Liapunov → auto-associative:
1. Let the network start from a test (possibly corrupted) input pattern.
2. Every update satisfies ΔE ≤ 0, so E keeps decreasing or stays the same.
3. When E reaches a minimum, the state no longer changes and the network has converged to a stored pattern.

∴ The energy E is at a minimum while the resemblance to the stored pattern is at a maximum.