-Artificial Neural Network- Adaline & Madaline


-Artificial Neural Network- Adaline & Madaline. Chaoyang University of Technology, Department of Information Management, Prof. Li-Hua Li.

Outline: ADALINE; MADALINE; Least-Square Learning Rule; Proof of the Least-Square Learning Rule.

Introduction to ADALINE (1/2) ADALINE (Adaptive Linear Neuron, or Adaptive Linear Element) is a single-layer neural network. It was developed by Professor Bernard Widrow and his graduate student Ted Hoff at Stanford University in 1960. It is based on the McCulloch-Pitts neuron, and it consists of weights, a bias, and a summation function. Reference: http://en.wikipedia.org/wiki/ADALINE

Introduction to ADALINE (2/2) The difference between Adaline and the standard (McCulloch-Pitts) perceptron is that in the learning phase the weights are adjusted according to the weighted sum of the inputs (the net). In the standard perceptron, the net is passed through the activation (transfer) function, and the function's output is used for adjusting the weights. There is also an extension known as Madaline. Reference: http://en.wikipedia.org/wiki/ADALINE
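The distinction above can be sketched in code. This is a minimal illustration, not from the slides (the function names, learning rate, and test values are assumptions): the Adaline delta rule computes its error from the raw net, while the perceptron rule computes it from the thresholded output.

```python
def sign(net):
    # Bipolar threshold used by Adaline's output stage.
    return 1 if net >= 0 else -1

def adaline_update(w, x, target, lr=0.1):
    # Adaline (delta / LMS rule): the error uses the raw weighted sum (net),
    # before any activation function is applied.
    net = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * (target - net) * xi for wi, xi in zip(w, x)]

def perceptron_update(w, x, target, lr=0.1):
    # Standard perceptron rule: the error uses the activation's output.
    out = sign(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi + lr * (target - out) * xi for wi, xi in zip(w, x)]
```

For example, with w = [0, 0], x = [1, 1], target = 1, the perceptron rule makes no update (the thresholded output already matches the target), while the delta rule still moves the weights to reduce the residual error in the net.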

ADALINE (1/3) ADALINE (Adaptive Linear Neuron), proposed in 1959 by Bernard Widrow. [Figure: inputs X1, X2, ..., Xn with weights w1, w2, ..., wn feeding a single processing element (PE).] (*) ADALINE cannot solve problems that are not linearly separable. To deal with such problems, a MADALINE network built from two ADALINE units can solve the XOR problem.

ADALINE (2/3) Method: the output value of each unit must be +1 or -1 (the perceptron outputs 0 and 1). The transfer function is: output +1 if net ≥ 0, otherwise output -1, which differs from the perceptron's transfer function.

ADALINE (3/3)

Introduction to MADALINE Madaline (Multiple Adaline) uses a set of ADALINEs in parallel as its input layer and a single PE (processing element) in its output layer. A network of ADALINEs can span many layers. For problems with multiple input variables and one output, each input is applied to one Adaline. For similar problems with multiple outputs, a Madaline with parallel processing can be used. The Madaline network is useful for problems that involve prediction based on multiple inputs, such as weather forecasting (input variables: barometric pressure, change in pressure; output classes: rain, cloudy, sunny). Reference: http://en.wikipedia.org/wiki/ADALINE

MADALINE MADALINE is composed of many ADALINEs (Multilayer Adaline). [Figure: inputs feed first-layer ADALINE units through weights Wij, producing values net_j; a second layer combines these into the output Y.] In the second layer, if more than half of the net_j values are ≥ 0, the output is +1; otherwise the output is -1. That is, after the second layer a majority vote is applied.
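The majority-vote rule described on this slide can be sketched as follows. This is a minimal illustration under stated assumptions (the function names and the example weight rows are hypothetical, not from the slides):

```python
def sign(net):
    # Bipolar threshold: +1 if net >= 0, else -1.
    return 1 if net >= 0 else -1

def madaline_output(weight_rows, x):
    # Each row of weights defines one first-layer ADALINE unit.
    votes = [sign(sum(w * xi for w, xi in zip(row, x))) for row in weight_rows]
    # Second layer: majority vote over the units' bipolar outputs.
    return 1 if sum(votes) >= 0 else -1
```

With three units and weight rows [[1, 1], [1, -1], [-1, 1]], the input (1, 1) yields three +1 votes and hence output +1; with rows [[1, 0], [0, 1], [1, 1]], the input (-1, -1) yields three -1 votes and hence output -1.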

Least-Square Learning Rule (1/6) Notation:
X_j = [x_1, x_2, ..., x_n]^t, 1 ≤ j ≤ L (with the bias input x_1 = 1)
  j : index of the j-th input pattern
  t : vector transpose
  L : number of input patterns
W = [w_1, w_2, ..., w_n]^t
net_j = W^t X_j = Σ_i w_i x_i = w_1 x_1 + w_2 x_2 + ... + w_n x_n

Least-Square Learning Rule (2/6) By applying the least-square learning rule, the weights can be obtained from the formula R W* = P, i.e., W* = R^{-1} P, where
R = (1/L) Σ_{j=1}^{L} R'_j = (1/L)(R'_1 + R'_2 + ... + R'_L), with R'_j = X_j X_j^t (R is the correlation matrix)
P^t = (1/L) Σ_{j=1}^{L} T_j X_j^t

Least-Square Learning Rule (3/6) Example: train an ADALINE on the following patterns (X1 = 1 is the bias input):

X1  X2  X3 | Tj
 1   1   0 |  1
 1   0   1 |  1
 1   1   1 | -1

Least-Square Learning Rule (4/6) Sol. First compute R.

Least-Square Learning Rule (5/6)
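The computation on slides (2/6)-(5/6) can be reproduced numerically. A minimal sketch using NumPy, with the three training patterns from the example (the first column is the bias input X1 = 1):

```python
import numpy as np

# Training patterns from the example; the first column is the bias input X1 = 1.
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
T = np.array([1, 1, -1], dtype=float)      # desired outputs T_j
L = len(X)

R = sum(np.outer(xj, xj) for xj in X) / L  # correlation matrix R = (1/L) * sum X_j X_j^t
P = X.T @ T / L                            # P = (1/L) * sum T_j X_j
W = np.linalg.solve(R, P)                  # solve R W* = P, i.e. W* = R^{-1} P
# W = [3, -2, -2], matching the weights used on the verification slide
```

Solving the linear system directly (rather than forming the explicit inverse) is numerically preferable, but gives the same W* = R^{-1} P.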

Least-Square Learning Rule (6/6) Verify the net:
Substituting (1,1,0): net = 3X1 - 2X2 - 2X3 = 1, so Y = 1 (correct)
Substituting (1,0,1): net = 3X1 - 2X2 - 2X3 = 1, so Y = 1 (correct)
Substituting (1,1,1): net = 3X1 - 2X2 - 2X3 = -1, so Y = -1 (correct)
[Figure: an ADALINE with weights 3, -2, -2 on inputs X1, X2, X3 producing output Y.]
(*) Exercise: find a fast method for computing the inverse matrix.
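The verification above is easy to mechanize. A small sketch of the trained net (the weights are those given on the slide; the helper names are assumptions):

```python
W = [3, -2, -2]  # weights obtained by the least-square rule

def net(x):
    # Weighted sum W^t x; x[0] is the bias input X1 = 1.
    return sum(w * xi for w, xi in zip(W, x))

def y(x):
    # Bipolar transfer function: +1 if net >= 0, else -1.
    return 1 if net(x) >= 0 else -1

# Check all three training patterns against their targets.
for pattern, target in [((1, 1, 0), 1), ((1, 0, 1), 1), ((1, 1, 1), -1)]:
    assert y(pattern) == target
```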

Proof of Least Square Learning Rule (1/3) We use the least mean square error to ensure the minimum total error. When the total error approaches zero, the best solution is found. Therefore, we look for the minimum of the mean squared error ⟨ε_j²⟩. Proof:
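The derivation, whose equations do not survive in this transcript, can be reconstructed in outline (a sketch consistent with the notation of slides (1/6)-(2/6), not a verbatim copy of the original): expand the mean squared error and set its gradient with respect to W to zero.

```latex
\langle \varepsilon_j^2 \rangle
  = \langle (T_j - W^t X_j)^2 \rangle
  = \langle T_j^2 \rangle - 2\,\langle T_j X_j^t \rangle W + W^t \langle X_j X_j^t \rangle W
```

Writing R = ⟨X_j X_j^t⟩ and P^t = ⟨T_j X_j^t⟩, setting the gradient to zero gives -2P + 2RW = 0, hence R W* = P and W* = R^{-1} P, which is the least-square learning rule of slide (2/6).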

Proof of Least Square Learning Rule (2/3) (Continued from the previous page.) Let this term be R.

Proof of Least Square Learning Rule (3/3) (Continued from the previous page.)