Deep Learning Summer Training (2017)
Teaching Assistants. Head TA: 萬家宏. Homework 1: 許宗嫄; Homework 2: 陳奕禎; Homework 3: 蔡哲平; Homework 4: 萬家宏; Homework 5: 黃淞楓. Homework questions are discussed in the Facebook group.
Homework Submission. There are five homework assignments; since this is the summer vacation, submission is not mandatory. Please join the Facebook group 「深度學習暑期訓練」. The homework will be introduced briefly in a moment; all of it is simple, and the detailed instructions and procedures have already been posted in the group. We will use the Keras toolkit (you are welcome to learn TensorFlow on your own). Course material: Prof. 李宏毅's twelve-hour lecture recordings at the NTU computer center, announced in the group.
Workstation. The project workstation's IP is 140.112.21.80. The initial password for everyone is speech; please change it yourself. Register in the following Google form (deadline: 8/3 23:59): https://docs.google.com/forms/d/1-J6rbxFMN2yBNUnApgJYxvjM7v-brbiiVqucNGhOsnY/viewform?edit_requested=true
Your group will be happier if it has its own Linux system, and happier still if it has its own GPU.
Machine Learning
Machine Learning (speech): you said "Hello". The machine learns from a large amount of audio data ("Hi", "How are you", "Good bye", ...). You write the program for learning.
Machine Learning (images): this is a "cat". The machine learns from a large amount of images ("monkey", "cat", "dog", ...). You write the program for learning.
Machine Learning ≈ Looking for a Function
Speech recognition: f(audio) = "How are you"
Image recognition: f(image) = "Cat"
Playing Go: f(board position) = "5-5" (next move)
Dialogue system: f("Hi", what the user said) = "Hello" (system response)
Framework, Step 1: Model. Image recognition: we want f(image) = "cat". The model is a set of functions; different functions in the set map the same image to different outputs, e.g. "cat", "monkey", "dog", "snake".
Framework, Step 2: Goodness of a function f. Supervised learning: the training data consists of (function input, function output) pairs, e.g. images labeled "monkey", "cat", "dog". A function is better when its outputs match the labels on the training data.
Framework, Step 3: Pick the "best" function. Training: Step 1, define a set of functions (the model, e.g. a neural network); Step 2, define the goodness of a function f using the training data ("monkey", "cat", "dog"); Step 3, pick the "best" function f*. Testing: apply f* to new inputs, e.g. f*(image) = "cat".
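The three steps above can be sketched on a toy problem. This is a minimal illustration (the data and the linear model are made up, not part of the homework): the "set of functions" is all lines f(x) = w*x + b, the goodness is the mean squared error, and gradient descent picks the best one.

```python
import numpy as np

# Training data: inputs x with targets y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.01, size=100)

# Step 1: a set of functions f(x) = w*x + b, indexed by (w, b).
def f(x, w, b):
    return w * x + b

# Step 2: goodness of a function = mean squared error on the training data.
def loss(w, b):
    return np.mean((f(x, w, b) - y) ** 2)

# Step 3: pick the "best" function by gradient descent.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(200):
    err = f(x, w, b) - y
    w -= lr * np.mean(2 * err * x)
    b -= lr * np.mean(2 * err)

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

A neural network replaces the line with a much richer set of functions, but the three steps are exactly the same.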
Neural Network: deep means many hidden layers. The network has an input layer, hidden layers (Layer 1, Layer 2, ..., Layer L), and an output layer (y1, y2, ..., yM). For digit recognition, each output dimension corresponds to one digit, so a 10-dimensional output is needed. How many layers counts as "deep"? There is no fixed threshold. You can always connect the neurons in your own way; CNN is just another way to connect the neurons.
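A forward pass through such a network can be sketched in a few lines. The layer sizes and random weights below are illustrative assumptions (a real network would have trained weights); the point is the layered structure and the 10-dimensional output, one entry per digit.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
sizes = [784, 256, 256, 10]   # input layer, two hidden layers, output layer
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    # Each hidden layer computes activation(W @ a + b);
    # the output layer uses softmax to give a distribution over digits.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    return softmax(weights[-1] @ a + biases[-1])

y = forward(rng.uniform(0, 1, 784))
print(y.shape)   # (10,) -- one probability per digit, summing to 1
```

Adding more hidden layers just means a longer `sizes` list; the forward pass is unchanged.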
Homework Overview
Homework 1 (Lectures 1-5), Task 1: handwritten digit recognition ("5", "0", "4", "1"). Basic: reach higher than a specific accuracy (98.0%). Option: analyze the output of each layer.
Homework 1, Task 2: news classification. A network classifies articles (e.g. from http://top-breaking-news.com/) into categories such as politics, sports, and finance. Basic: reach higher than a specific accuracy (78.0%). Option: analyze the output of each layer.
Homework 1, more references:
Example code for Task 1: https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py
Example code for Task 2: https://github.com/fchollet/keras/blob/master/examples/reuters_mlp.py
Neural Networks and Deep Learning, Chapters 1-3: http://neuralnetworksanddeeplearning.com/
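The idea behind Task 2 can be sketched without any framework: represent each document as a bag-of-words count vector and train a softmax classifier on it. The vocabulary, documents, and labels below are made up for illustration; the real homework uses a much larger corpus and a Keras network.

```python
import numpy as np

vocab = ["election", "vote", "game", "score", "stock", "market"]
docs = [
    ("election vote vote", 0),     # politics
    ("game score game", 1),        # sports
    ("stock market stock", 2),     # finance
    ("vote election election", 0),
    ("score game score", 1),
    ("market stock market", 2),
]

def bow(text):
    # Bag-of-words: count how often each vocabulary word appears.
    v = np.zeros(len(vocab))
    for w in text.split():
        v[vocab.index(w)] += 1
    return v

X = np.array([bow(t) for t, _ in docs])
y = np.array([label for _, label in docs])
Y = np.eye(3)[y]                   # one-hot labels

W = np.zeros((len(vocab), 3))
for _ in range(200):               # softmax regression by gradient descent
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - Y) / len(X)

pred = int(np.argmax(bow("vote vote election") @ W))
print(pred)  # 0, the politics class
```

A hidden layer between the counts and the softmax turns this into the MLP of the reference code.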
Homework 2 (Lecture 6): image recognition. A network classifies images into categories such as "monkey", "cat", and "dog".
Homework 2. Basic: reach higher than a specific accuracy (81.0%). Option: analyze the functionality of the "filters".
More references:
Example code: https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py
http://cs231n.github.io/convolutional-networks/
Neural Networks and Deep Learning, Chapter 6
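To see what a CNN "filter" does, here is a minimal sketch of the underlying operation: sliding a small kernel over an image and taking dot products (no padding, stride 1). The image and the hand-made vertical-edge kernel are illustrative; in a trained CNN the kernel values are learned.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid convolution (really cross-correlation, as in most DL libraries):
    # dot product of the kernel with every kernel-sized window of the image.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                  # left half dark, right half bright
kernel = np.array([[-1., 0., 1.],   # responds to vertical edges
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

response = conv2d(image, kernel)
print(response.shape)               # (4, 4)
print(response[:, 2])               # strong response where the edge is
```

Analyzing filters in the homework means looking at which input patterns produce strong responses like this.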
Homework 3 (Lectures 7, 9; instructions and corpus provided): the machine learns human language and writes documents, e.g. "The life is .......". You do not have to teach the machine grammar. Basic: let the machine generate an English sentence. Option: let the machine generate a Chinese sentence.
Homework 3, the machine learns human language; more references:
Example code: https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
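One piece of character-level text generation is easy to show in isolation: the sampling step. The network outputs a probability distribution over the next character, and a "temperature" controls how adventurous the samples are. The character set and probabilities below are made up; a real model would produce them at every step.

```python
import numpy as np

def sample(probs, temperature=1.0, rng=None):
    # Rescale log-probabilities by the temperature, renormalize, and sample.
    rng = rng or np.random.default_rng()
    logits = np.log(np.asarray(probs) + 1e-12) / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

chars = ['e', 't', 'a', 'q']
probs = [0.5, 0.3, 0.15, 0.05]   # model's next-character distribution

rng = np.random.default_rng(0)
low_t = [chars[sample(probs, 0.2, rng)] for _ in range(20)]
high_t = [chars[sample(probs, 2.0, rng)] for _ in range(20)]
print(''.join(low_t))    # low temperature: almost always the likeliest char
print(''.join(high_t))   # high temperature: more varied, riskier output
```

Generating a sentence just repeats this step, feeding each sampled character back in as input.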
Homework 4 (Lecture 8): auto-encoder, unsupervised learning. An NN encoder maps the input to a "code", and an NN decoder maps the code back to a reconstruction. Basic: visualize the "code". Are different digits represented by different "codes"? Option: given a "code", can the machine write a digit?
Homework 4, auto-encoder (unsupervised learning); more references:
https://blog.keras.io/building-autoencoders-in-keras.html
Advanced, replacing "digits" with "images":
Auto-Encoding Variational Bayes, https://arxiv.org/abs/1312.6114
Generative Adversarial Networks, http://arxiv.org/abs/1406.2661
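The encoder-decoder idea can be sketched with single linear layers and a 2-D "code" (all shapes, data, and hyper-parameters here are illustrative toys; the homework uses deeper networks on digit images). Training minimizes the reconstruction error, with no labels at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8-D points that secretly lie near a 2-D plane plus noise.
latent = rng.normal(size=(200, 2))
mix = rng.normal(0, 0.5, size=(2, 8))
X = latent @ mix + rng.normal(0, 0.01, size=(200, 8))

W_enc = rng.normal(0, 0.1, size=(8, 2))   # NN Encoder: input -> 2-D code
W_dec = rng.normal(0, 0.1, size=(2, 8))   # NN Decoder: code -> reconstruction

lr = 0.01
for _ in range(5000):
    code = X @ W_enc                      # encode
    X_hat = code @ W_dec                  # decode
    err = X_hat - X                       # reconstruction error
    W_dec -= lr * 2 * code.T @ err / len(X)
    W_enc -= lr * 2 * X.T @ (err @ W_dec.T) / len(X)

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(code.shape)   # (200, 2): each input is compressed to a 2-D code
```

With a 2-D code, visualizing it is just a scatter plot; in the homework, points for different digits should form different clusters.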
Homework 5: Anime Face Generation (復旦大學). Homework instructions (何之源's Zhihu article): https://zhuanlan.zhihu.com/p/24767059
DCGAN: https://github.com/carpedm20/DCGAN-tensorflow
The evolution of generation: Generator v1 → v2 → v3, each trained against Discriminator v1 → v2 → v3. The discriminator is a binary classifier that distinguishes real images from generated ones; it is used to help train the generator.
Basic Idea of GAN, Step 1: fix the generator and train the discriminator. Generator v1 maps samples from a normal distribution to images. Discriminator v1 learns to output 1 for real images and 0 for generated ones, by minimizing the discriminator's loss.
Basic Idea of GAN, Step 2: fix the discriminator and update the generator so that its output is classified as "real" (discriminator output as close to 1 as possible). Generator + discriminator together form one network; the generator's weights are updated while the discriminator's weights stay fixed. In theory, this procedure minimizes the JS divergence between the real and generated distributions.
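The two alternating steps can be sketched on 1-D data. Everything here is a toy assumption, not the DCGAN used for anime faces: real samples come from N(3, 1), the "generator" just shifts noise by a learnable offset, and the "discriminator" is logistic regression. The loop structure, though, is exactly the Step 1 / Step 2 alternation above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = 0.0        # generator parameter: fake x = z + theta, z ~ N(0, 1)
a, b = 0.0, 0.0    # discriminator: D(x) = sigmoid(a*x + b)
d_lr, g_lr = 0.05, 0.1

for _ in range(2000):
    real = rng.normal(3.0, 1.0, 64)
    fake = rng.normal(0.0, 1.0, 64) + theta

    # Step 1: update the discriminator (push D(real) -> 1, D(fake) -> 0).
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    grad_a = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    a -= d_lr * grad_a
    b -= d_lr * grad_b

    # Step 2: update the generator with the discriminator fixed
    # (make the fakes look "real" to D, i.e. push D(fake) -> 1).
    d_fake = sigmoid(a * fake + b)
    theta -= g_lr * np.mean(-(1 - d_fake) * a)

print(round(theta, 1))  # drifts toward 3, the mean of the real data
```

The generator never sees a real image directly; it only gets gradients through the discriminator, which is the whole point of the scheme.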
Anime Face Generation: generated samples after 100, 1,000, 2,000, 5,000, 10,000, 20,000, and 50,000 updates (images omitted).
Have Fun!