<html>
<head>
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
<link href='http://fonts.googleapis.com/css?family=Crimson+Text:400,400italic,700&v2' rel='stylesheet' type='text/css'>
<link href="assets/css/sylstyle.css" rel="stylesheet" type="text/css" />
<title>CS584: Deep Learning</title>
</head>
<body>
<h1>CS584: Deep Learning</h1>
<div id="header">
Emory University, Spring 2017<br/>
</div>
Prof. Shamim Nemati (OH: Mon 1:00pm-2:00pm in BMI (36 Eagle Row, 5th Floor South) 579)<br/>
TF: Supreeth Prajwal (OH: Wed 10:15am-11:15am BMI 581)<br/>
TF: Ali Ahmadvand (OH: Tue 9am-10am BMI 581)<br/>
Time: Monday and Wednesday, 11:30am-12:45pm<br/>
Location: MSc E408<br/>
Contact: Instructor firstname dot Instructor lastname at emory.edu<br/>
Course Website: http://nematilab.info/CS584.html <br/>
GitHub: <a target="_blank" href="https://github.com/NematiLab/CS584">https://github.com/NematiLab/CS584</a> (permission only)
<hr/>
<span class="menu">
<a href="#schedule">schedule</a> |
<a href="#assignments">assignments</a> |
<a href="#grading">grading</a> |
<a href="#books">books</a> |
<a href="#faq">faq</a>
</span>
<h3>Announcements</h3>
<ul>
<li>January 11, 2017: Please bring your laptops for Lecture 2 (Wed 18 Jan 2017). We will have a hands-on demonstration of various computational resources available at Emory for running large scale deep learning computations.</li>
<li>January 23, 2017: Please complete <a href="#Assignment1">Assignment 1</a> by Sunday, 01/29/2017.</li>
<li>January 30, 2017: Please complete <a href="#Assignment2">Assignment 2</a> by Sunday, 02/05/2017.</li>
<li>February 8, 2017: Please complete <a href="#Assignment3">Assignment 3</a> by Sunday, 02/12/2017.</li>
<li>February 16, 2017: Please complete <a href="#Assignment4">Assignment 4</a> by Sunday, 02/26/2017.</li>
<li>February 20, 2017: Midterm project presentations are on Wednesday, 22 March. Midterm project reports are due Sunday, 26 March.</li>
<li>March 7, 2017: Optional <a href="#Assignment5">Assignment 5</a> has been posted.</li>
<li>March 15, 2017: Please complete <a href="#Assignment6">Assignment 6</a> by Wednesday, 03/29/2017.</li>
<li>April 3, 2017: Please complete <a href="#Assignment7">Assignment 7</a> by Monday, 04/10/2017.</li>
<li>April 16, 2017: Please complete <a href="#Assignment8">Assignment 8</a> by Monday, 04/23/2017.</li>
</ul>
<hr/>
<a name="schedule">
<h3>Schedule</h3>
Subject to change.
<div class="lecture">
<span class="lecture-date">Wed 11 Jan 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 1: Introduction to Deep Learning</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapters 1-3 -- <i>Introduction, Linear Algebra, Probability and Info. Theory</i></li>
<li>[<span class="optional">optional</span>] Book: Murphy -- Chapter 28 -- <i>Deep Learning</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/NfnWJUyUJYU?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC&t=63">Introduction to Deep Learning and Historical Context</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/PlhFWT7vAEw?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu&t=1">Introduction to Deep Learning</a></li>
<li>[<span class="optional">optional</span>] Paper: LeCun, Bengio & Hinton <i><a target="_blank" href="http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html">Deep Learning</a></i></li>
<li>[<span class="optional">optional</span>] Paper: Jürgen Schmidhuber <i><a target="_blank" href="http://people.idsia.ch/~juergen/deep-learning-conspiracy.html">Deep Learning Conspiracy</a></i></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 18 Jan 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 2: Intro to Deep Learning Software and Hardware</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapters 4-5 -- <i>Numerical Computation, Machine Learning Basics</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://www.youtube.com/watch?v=Vf_-OkqbwPo&list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC&index=1">Deep Learning libraries</a></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://www.youtube.com/watch?v=8inugqHkfvE/">Data-driven approach, kNN, Linear Classification 1</a></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://www.youtube.com/watch?v=qlLChbHhbg4/">Linear Classification 2, Optimization</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/DHspIG64CVM?list=PLVS8_c-GvidyYbxrsODI5Cz45U3w0D1gp&t=199">Linear Models</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/kPrHqQzCkg0?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu&t=1">Maximum likelihood and information</a></li>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Feedforward Networks</h4>
<span class="lecture-date">Mon 23 Jan 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 3: Deep Feedforward Networks</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapters 6-7 -- <i>Deep Feedforward Networks, Regularization for Deep Learning</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://www.youtube.com/watch?v=i94OvYb6noo&index=4&list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC">Backpropagation, Neural Networks 1</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/VR0W_PNwLGw?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu">Regularization, model complexity and data complexity (part 1)</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/qz9bKfOqd0Y?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu&t=1">Regularization, model complexity and data complexity (part 2)</a></li>
<li>[<span class="optional">optional</span>] Paper: Jaderberg et al, <i><a target="_blank" href="https://deepmind.com/blog/decoupled-neural-networks-using-synthetic-gradients/">Decoupled Neural Interfaces Using Synthetic Gradients</a></i></li>
<li>[<span class="optional">optional</span>] Paper: Lillicrap et al., <i><a target="_blank" href="http://www.nature.com/articles/ncomms13276">Random synaptic feedback weights support error backpropagation for deep learning</a></i></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 25 Jan 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 4: Optimization</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 8 -- <i>Optimization for Training Deep Models</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/gYpoJMlgyXA?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC&t=1">Neural Networks Part 2</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/0qUAb94CpOw?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu">Optimization</a></li>
<li>[<span class="optional">optional</span>] Paper: LeCun, <i><a target="_blank" href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf">Efficient BackProp</a></i></li>
<li>[<span class="optional">optional</span>] Paper: Bengio, <i><a target="_blank" href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf">Understanding the difficulty of training deep feedforward neural networks</a></i></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Mon 30 Jan 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 5: Convolutional Neural Networks</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 9 -- <i>Convolutional Networks</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/hd_KFJ5ktUc?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC">Neural Networks Part 3 / Intro to ConvNets</a></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/LxfUGhug-iQ?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC">Convolutional Neural Networks</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://www.youtube.com/watch?v=bEUX_56Lojc&list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu&index=13">Convolutional Neural Networks</a></li>
<li>[<span class="optional">optional</span>] Paper: Deshpande -- <i><a target="_blank" href="https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html">The 9 Deep Learning Papers You Need To Know About</a></i></li>
</ul>
</div>
</p>
<span class="lecture-date">Wed 1 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 6: Convolutional Neural Networks</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 9 -- <i>Convolutional Networks</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/GxZrEKZfW2o?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC">Localization and Detection</a></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/ta5fdaqDT3M?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC&t=1">Visualization, Deep Dream, Neural Style, Adversarial Examples</a></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/pA4BsUK3oP4?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC&t=137">ConvNets in practice</a></li>
<li>[<span class="optional">optional</span>] Paper: Gulshan, Peng et al., <i><a target="_blank" href="http://jamanetwork.com/journals/jama/article-abstract/2588763">Deep Learning Algorithm for Detection of Diabetic Retinopathy</a></i></li>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Time Series Models</h4>
<span class="lecture-date">Mon 6 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 7: Recurrent Neural Networks</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 10 -- <i>Sequence Modeling</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/yCC09vCHzF8?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC&t=1">Recurrent Neural Networks, Image Captioning, LSTM</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/56TYLaQN4N8?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu">Recurrent Neural Nets and LSTMs</a></li>
<li>[<span class="optional">optional</span>] Paper: Greff et al. -- <a target="_blank" href="https://arxiv.org/pdf/1503.04069v1.pdf">LSTM: A Search Space Odyssey</a></li>
<li>[<span class="optional">optional</span>] Paper: Lipton et al. -- <a target="_blank" href="https://arxiv.org/pdf/1511.03677.pdf">Learning To Diagnose With LSTM Recurrent Neural Networks</a></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 8 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 8: Dynamic Bayesian Networks</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Murphy -- Chapter 17, Section 17.1-17.5 -- <i>Markov and Hidden Markov Models</i></li>
<li>[<span class="optional">optional</span>] Book: Murphy -- Chapter 10, Sections 10.1-10.5 -- <i>Directed Graphical Models (Bayes Nets)</i></li>
<li>[<span class="optional">optional</span>] Video: Zoubin Ghahramani -- <a target="_blank" href="http://videolectures.net/mlss09uk_ghahramani_gm/">Graphical Models</a></li>
<li>[<span class="optional">optional</span>] Paper: Choi et al. -- <a target="_blank" href="http://jamia.oxfordjournals.org/content/jaminfo/early/2016/08/13/jamia.ocw112.full.pdf">Using recurrent neural network models for early detection of heart failure onset</a></li>
<li>[<span class="optional">optional</span>] Paper: Liao and Ahn, <i><a target="_blank" href="http://www.phmsociety.org/sites/phmsociety.org/files/phm_submission/2016/ijphm_16_020.pdf">Combining Deep Learning and Survival Analysis for Asset Health Management</a></i></li>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Reinforcement Learning</h4>
<span class="lecture-date">Mon 13 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 9: Deep RL, Discrete Action Spaces </span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Paper: Mnih et al., <i><a target="_blank" href="http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html">Human-level control through deep reinforcement learning</a></i></li>
<li>[<span class="required">required</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/dV80NAlEins?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu">Reinforcement learning and neuro-dynamic</a></li>
<li>[<span class="optional">optional</span>] Video: Pineau -- <a target="_blank" href="http://videolectures.net/deeplearning2016_pineau_reinforcement_learning/">Introduction to Reinforcement Learning</a></li>
<li>[<span class="optional">optional</span>] Paper: Lipton, <i><a target="_blank" href="https://arxiv.org/pdf/1611.01211v2.pdf">Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear</a></i></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 15 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 10: Deep RL, Continuous Action Spaces </span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Paper: Mnih et al., <i><a target="_blank" href="https://arxiv.org/pdf/1406.6247v1.pdf">Recurrent Models of Visual Attention</a></i></li>
<li>[<span class="required">required</span>] Video: de Freitas -- <a target="_blank" href="https://youtu.be/kUiR0RLmGCo?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu">Deep Reinforcement Learning - Policy search</a></li>
<li>[<span class="required">required</span>] Repo: <i><a target="_blank" href="https://github.com/junhyukoh/deep-reinforcement-learning-papers">A List of Deep Reinforcement Learning Papers</a></i></li>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Applications &amp; Practicals</h4>
<span class="lecture-date">Mon 20 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 11: Deep Learning in Practice</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 11 -- <i>Practical Methodology</i></li>
<li>[<span class="optional">optional</span>] Paper: Bengio, <i><a target="_blank" href="https://arxiv.org/pdf/1206.5533v2.pdf">Practical recommendations for gradient-based training of deep architectures</a></i></li>
<li>[<span class="optional">optional</span>] Paper: Lipton, <i><a target="_blank" href="https://arxiv.org/pdf/1606.03490v2.pdf">The Mythos of Model Interpretability</a></i></li>
<li>[<span class="optional">optional</span>] Video: de Freitas, <i><a target="_blank" href="https://www.youtube.com/watch?v=vz3D36VXefI">Bayesian Optimization</a></i></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 22 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 12: Biomedical Applications</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 12 -- <i>Applications</i></li>
<li>[<span class="required">required</span>] Paper: Min, Lee et al., <i><a target="_blank" href="https://arxiv.org/pdf/1603.06430.pdf">Deep Learning in Bioinformatics</a></i></li>
<li>[<span class="optional">optional</span>] Paper: Ranganath et al., <i><a target="_blank" href="https://arxiv.org/pdf/1608.02158v1.pdf">Deep Survival Analysis</a></i></li>
<li>[<span class="optional">optional</span>] Paper: Katzman et al., <i><a target="_blank" href="https://arxiv.org/pdf/1606.00931v2.pdf">Deep Survival: A Deep Cox Proportional Hazards Network</a></i></li>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Unsupervised Deep Learning</h4>
<span class="lecture-date">Mon 27 Feb 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 13: Linear Factor Models</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 13 -- <i>Linear Factor Models</i></li>
<li>[<span class="optional">optional</span>] Book: Murphy -- Chapter 12 -- <i>Latent Linear Models</i></li>
<li>[<span class="optional">optional</span>] Video: Zoubin Ghahramani -- <a target="_blank" href="http://videolectures.net/mlss09uk_ghahramani_gm/">Graphical Models</a></li>
<li>[<span class="required">required</span>] Paper: Sam Roweis and Zoubin Ghahramani. <i><a target="_blank" href="papers/lds.pdf">A Unifying Review of Linear Gaussian Models</a></i>. Neural Computation 11(2), 1999.</li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Mon 13 March 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 14: Autoencoders</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 14 -- <i>Autoencoders</i></li>
<li>[<span class="required">required</span>] Video: Andrej Karpathy -- <a target="_blank" href="https://youtu.be/ekyBklxwQMU?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC&t=1950">Unsupervised Learning</a></li>
<li>[<span class="required">required</span>] Paper: Miotto et al., <i><a target="_blank" href="http://www.nature.com/articles/srep26094">Deep Patient</a></i></li>
<li>[<span class="required">required</span>] Paper: Cheng et al., <i><a target="_blank" href="http://www.nature.com/articles/srep24454">Computer-Aided Diagnosis with Deep Learning </a></i></li>
<li>[<span class="required">required</span>] Paper: Lasko et al, <i><a target="_blank" href="http://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0066341&type=printable">Computational Phenotype Discovery Using Unsupervised Feature Learning over Noisy, Sparse, and Irregular Clinical Data</a></i></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 15 March 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 15: Introduction to Boltzmann Machines</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Sections 16.7, 20.1, 20.2 -- <i>Structured Probabilistic Models for Deep Learning</i></li>
<li>[<span class="optional">optional</span>] Paper: Montavon, Muller -- <a target="_blank" href="http://gregoire.montavon.name/publications/montavon-lncs12.pdf">Deep Boltzmann Machines and the Centering Trick.</a></li>
<li>[<span class="optional">optional</span>] Paper: Hinton -- <a target="_blank" href="https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf">A Practical Guide to Training Restricted Boltzmann Machines.</a></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Mon 20 March 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 16: Unsupervised Time Series Modeling </span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Paper: Fraccaro et al., <i><a target="_blank" href="https://arxiv.org/pdf/1605.07571v2.pdf">Sequential Neural Models with Stochastic Layers</a></i></li>
<li>[<span class="required">required</span>] Paper: Längkvist et al., <i><a target="_blank" href="http://www.diva-portal.org/smash/get/diva2:710518/FULLTEXT02">A Review of Unsupervised Feature Learning and Deep Learning for Time Series Modeling</a></i></li>
<li>[<span class="required">required</span>] Paper: Bayer and Osendorfer <i><a target="_blank" href="https://arxiv.org/pdf/1411.7610v3.pdf">Learning Stochastic Recurrent Networks</a></i></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 22 March 2017</span>
<p class="lecture">
<span class="lecture-title">Midterm Project Presentations</span>
<div class="lecture-prep">
<ul>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Sampling and Inference Procedures</h4>
<span class="lecture-date">Mon 27 March 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 17: Sampling Techniques</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 17 -- <i>Monte Carlo Methods</i></li>
<li>[<span class="optional">optional</span>] Book: Murphy -- Chapter 23, Section 23.1-23.4 -- <i>Monte Carlo Inference</i></li>
<li>[<span class="optional">optional</span>] Book: Murphy -- Chapter 24, Sections 24.1-24.4 -- <i>Markov Chain Monte Carlo (MCMC) Inference</i></li>
<li>[<span class="optional">optional</span>] Book: Murphy -- Chapter 24, Sections 24.5-24.7 -- <i>Markov Chain Monte Carlo (MCMC) Inference</i></li>
<li>[<span class="optional">optional</span>] Video: Iain Murray -- <a target="_blank" href="http://videolectures.net/mlss09uk_murray_mcmc/">Markov Chain Monte Carlo</a></li>
<li>[<span class="optional">optional</span>] Video: de Freitas -- <a target="_blank" href="http://videolectures.net/mlss08au_freitas_asm/">Monte Carlo Simulation for Statistical Inference</a></li>
<li>[<span class="optional">optional</span>] Video: Christian Robert -- <a target="_blank" href="http://videolectures.net/mlss04_robert_mcmcm/">Markov Chain Monte Carlo Methods</a></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 29 March 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 18: Approximate Inference</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapters 18-19 -- <i>Partition Function and Approximate Inference</i></li>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Deep Generative Models</h4>
<span class="lecture-date">Mon 3 April 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 19: Restricted Boltzmann Machine</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Sections 20.3-20.5 -- <i>Deep Boltzmann Machines</i></li>
<li>[<span class="optional">optional</span>] Book: Murphy -- Chapter 27, Section 27.7 -- <i>Latent Variable Models for Discrete Data</i></li>
<li>[<span class="optional">optional</span>] Video: Geoffrey Hinton -- <a target="_blank" href="http://videolectures.net/mlss09uk_hinton_dbn/">Deep Belief Networks</a></li>
<li>[<span class="optional">optional</span>] Video: Yoshua Bengio and Yann LeCun -- <a target="_blank" href="http://videolectures.net/icml09_bengio_lecun_tldar/">Tutorial on Deep Learning Architectures</a></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 5 April 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 20: Recurrent Temporal RBM</span>
<div class="lecture-prep">
<ul>
<li>[<span class="required">required</span>] Book: Goodfellow -- Chapter 20 -- <i>Deep Generative Models</i></li>
<li>[<span class="required">required</span>] Paper: Sutskever, Hinton et al. -- <a target="_blank" href="http://www.cs.toronto.edu/~fritz/absps/rtrbm.pdf">The Recurrent Temporal Restricted Boltzmann Machine</a></li>
<li>[<span class="required">required</span>] Paper: Mittelman et al. -- <a target="_blank" href="http://cvgl.stanford.edu/papers/srtrbm_icml14.pdf">Structured Recurrent Temporal Restricted Boltzmann Machines</a></li>
<li>[<span class="required">required</span>] Paper: Chung et al. -- <a target="_blank" href="https://arxiv.org/pdf/1506.02216v3.pdf">A Recurrent Latent Variable Model for Sequential Data</a></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Mon 10 April 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 21: Helmholtz Machines I</span>
<div class="lecture-prep">
<ul>
<li>[<span class="optional">optional</span>] Paper: Kirby -- <a target="_blank" href="http://www.nku.edu/~kirby/docs/HelmholtzTutorialKoeln.pdf">A Tutorial on Helmholtz Machines</a></li>
<li>[<span class="optional">optional</span>] Paper: Hinton -- <a target="_blank" href="http://science.sciencemag.org/content/268/5214/1158.full.pdf">The "wake-sleep" algorithm for unsupervised neural networks</a></li>
<li>[<span class="optional">optional</span>] Paper: Lin and Tegmark -- <a target="_blank" href="https://arxiv.org/pdf/1608.08225v2.pdf">Why does deep and cheap learning work so well?</a></li>
</ul>
</div>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 12 April 2017</span>
<p class="lecture">
<span class="lecture-title">Lecture 22: Helmholtz Machines II</span>
<div class="lecture-prep">
<ul>
<li>[<span class="optional">optional</span>] Paper: Kirby -- <a target="_blank" href="http://www.nku.edu/~kirby/docs/HelmholtzTutorialKoeln.pdf">A Tutorial on Helmholtz Machines </a></li>
<li>[<span class="optional">optional</span>] Paper: Bornschein -- <a target="_blank" href="https://arxiv.org/pdf/1506.03877v5.pdf">Bidirectional Helmholtz Machines </a></li>
<li>[<span class="optional">optional</span>] Paper: Mehta and Schwab -- <a target="_blank" href="https://arxiv.org/pdf/1410.3831v1.pdf">An exact mapping between the Variational Renormalization Group and Deep Learning.</a></li>
</ul>
</div>
</p>
</div>
<div class="lecture">
<h4>Deep Learning Research</h4>
<span class="lecture-date">Mon 17 April 2017</span>
<p class="lecture-prep">
<ul>
<li>Student-requested topics, such as <a target="_blank" href="http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/">Generative Adversarial Networks</a> and <a target="_blank" href="http://benjaminbolte.com/blog/2016/keras-resnet.html">Deep Residual Networks</a>.</li>
</ul>
</p>
<hr width="80%" />
<span class="lecture-date">Wed 19 April 2017</span>
<p class="lecture-prep">
<ul>
<li>Student-requested topics, such as <a target="_blank" href="http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/">Generative Adversarial Networks</a> and <a target="_blank" href="http://benjaminbolte.com/blog/2016/keras-resnet.html">Deep Residual Networks</a>.</li>
</ul>
</p>
</div>
<div class="lecture">
<h4>Final Presentations and Reports</h4>
<span class="lecture-date">Mon 24, Wed 26 April 2017</span>
<p class="lecture-prep">
<ul>
<li><span class="lecture-assign">Final Project Presentations</span></li>
</ul>
</p>
<hr width="80%" />
<span class="lecture-date">May 1 2017</span>
<p class="lecture-prep">
<ul>
<li><span class="lecture-assign">Final Project Reports Due</span></li>
</ul>
</p>
</div>
<a name="assignments">
<h3>Assignments</h3>
Weekly reports should be kept in a single <a target="_blank" href="https://www.google.com/docs/about/">Google Doc</a>, with a clear, dated heading for each week. Please share the link with your instructor.
<br>
Each report should document your progress on the hands-on coding tutorials, including any obstacles preventing you from completing them.
<br>
Additionally, you are expected to review and critique one paper every week (minimum 300 words).
<br>
Finally, your weekly report should note any concepts from that week's lectures that you find difficult to understand.<br><br>
<ul>
<li><a name="Assignment1"></a>[<span class="required">required</span>] Assignment 1: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/lab1_FFN">Feedforward Neural Networks</a></li>
<li><a name="Assignment2"></a>[<span class="required">required</span>] Assignment 2: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/lab2_CNN">Convolutional Neural Networks</a></li>
<li><a name="Assignment3"></a>[<span class="required">required</span>] Assignment 3: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/lab3_RNN">Recurrent Neural Networks</a></li>
<li><a name="Assignment4"></a>[<span class="required">required</span>] Assignment 4: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/Lab4_RNN_CNN">A Practical Application</a></li>
<li><a name="Assignment5"></a>[<span class="optional">optional</span>] Assignment 5: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/Lab5_visualization">Visualization</a></li>
<li><a name="Assignment6"></a>[<span class="required">required</span>] Assignment 6: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/lab6_AE">Autoencoders (AE)</a></li>
<li><a name="Assignment7"></a>[<span class="required">required</span>] Assignment 7: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/Lab7_AE_CNN">Unsupervised Learning</a></li>
<li><a name="Assignment8"></a>[<span class="optional">optional</span>] Assignment 8: <a target="_blank" href="https://github.com/NematiLab/CS584/tree/master/Lab8_GAN_AE_LSTM">Generative Networks</a></li>
</ul>
</a>
<hr/>
<a name="project">
<h3>Final Project</h3> Use <a target="_blank" href="https://www.overleaf.com/">Overleaf</a> to write a journal-style report and share the link with your instructor. <br/>
You may use the NIPS style files available <a target="_blank" href="http://nips.cc/Conferences/2013/PaperInformation/StyleFiles">here</a>.
<ul>
<li>Final Report: Due 1 May 2017</li>
</ul>
</a>
<hr/>
<a name="grading">
<h3>Grading</h3>
<ul>
<li>Weekly Assignments and Reports (lowest dropped): 20%</li>
<li>Midterm Project Report: 30%</li>
<li>Final Project Presentation and Report: 45%</li>
<li>Attendance and Contribution to Class Discussions: 5%</li>
</ul>
</a>
<hr/>
<a name="books">
<h3>General Deep Learning and Machine Learning Books</h3>
<ul>
<li>[<span class="required">required</span>] Book: Ian Goodfellow and Yoshua Bengio and Aaron Courville, <a target="_blank" href="https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618">Deep Learning</a>, MIT Press.</li>
<li>[<span class="optional">optional</span>] Book: Kevin Murphy, <a target="_blank" href="https://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020">Machine Learning: A Probabilistic Perspective</a>, MIT Press.</li>
<li>[<span class="optional">optional</span>] eBook: Michael Nielsen, <a target="_blank" href="http://neuralnetworksanddeeplearning.com">Neural Networks and Deep Learning</a></li>
</ul>
</a>
<a name="books">
<h3>Other Resources</h3>
<ul>
<li>[<span class="optional">optional</span>] Codes: <a target="_blank" href="https://www.tensorflow.org/tutorials/">Official Tensorflow Tutorials</a></li>
<li>[<span class="optional">optional</span>] Lectures &amp; Codes: <a target="_blank" href="https://classroom.udacity.com/courses/ud730/">Udacity Deep Learning Course</a></li>
<li>[<span class="optional">optional</span>] Codes: <a target="_blank" href="http://deeplearning.net/tutorial/">IFT6266 Deep Learning Tutorials with Python codes</a></li>
<li>[<span class="optional">optional</span>] Codes: <a target="_blank" href="http://www.vlfeat.org/matconvnet/">MatConvNet: CNNs for MATLAB</a></li>
<li>[<span class="optional">optional</span>] Codes: <a target="_blank" href="http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial">Stanford’s Unsupervised Feature and Deep Learning Matlab tutorials</a> on <a target="_blank" href="https://github.com/amaas/stanford_dl_ex">GitHub</a></li>
</ul>
</a>
<hr/>
<a name="faq">
<h3>Frequently Asked Questions</h3>
<ul>
<li><b>Can I collaborate with another student on my term project?</b> I expect a unique midterm report from each student. You may form teams of up to three students (n &le; 3) for the final project with written permission (i.e., by email) from your instructor.
</li>
<li><b>What are the prerequisites for this course?</b>
Knowledge of linear algebra, multivariate calculus, and basic statistics and probability theory, as well as Machine Learning (CS534) and Artificial Intelligence (CS557). Homework and the project will require programming in Python and MATLAB.
</li>
<li><b>Are there standard techniques for handling missing data?</b> This is an area of active research. For starters, it helps to distinguish between MCAR (missing completely at random), MAR (missing at random), and MNAR (missing not at random). You may find the following papers useful (or at least food for thought!): <br>
(1) Rezende et al., <i><a target="_blank" href="https://arxiv.org/pdf/1401.4082.pdf">Stochastic Backpropagation and Approximate Inference in Deep Generative Models</a></i><br>
(2) Bengio et al., <i><a target="_blank" href="https://www.iro.umontreal.ca/~lisa/pointeurs/miss-nips8.pdf">Recurrent Neural Networks for Missing or Asynchronous Data</a></i><br>
(3) Lipton et al., <i><a target="_blank" href="http://www.jmlr.org/proceedings/papers/v56/Lipton16.pdf">Modeling Missing Data in Clinical Time Series with RNNs</a></i><br>
(4) Che et al., <i><a target="_blank" href="https://arxiv.org/pdf/1606.01865.pdf">Recurrent Neural Networks for Multivariate Time Series with Missing Values</a></i><br>
(5) Uchihashi and Kanemura, <i><a target="_blank" href="http://link.springer.com/chapter/10.1007/978-3-319-46681-1_32">Modeling the Propensity Score with Statistical Learning</a></i><br>
(6) Beaulieu-Jones et al., <i><a target="_blank" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5144587/pdf/nihms831925.pdf">Missing Data Imputation in the Electronic Health Record Using Deeply Learned Autoencoders</a></i><br>
(7) Leke et al., <i><a target="_blank" href="https://arxiv.org/pdf/1512.01362.pdf">Missing Data and Deep Learning</a></i>
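<br><br>Before reaching for any of the model-based approaches above, a common simple baseline is column-mean imputation combined with binary missingness indicators, so a downstream model can learn from the missingness pattern itself. A minimal sketch in NumPy (the function name is illustrative, not from any of the references):

```python
import numpy as np

def impute_with_indicators(X):
    """Column-mean imputation plus binary missingness indicators.

    Fill each NaN with its column mean and append one 0/1 "was missing"
    column per feature. Under MCAR the indicator columns carry no signal,
    but under MAR/MNAR they often do.
    """
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X).astype(float)      # 1.0 where a value was missing
    col_means = np.nanmean(X, axis=0)     # per-feature means, ignoring NaNs
    X_filled = np.where(np.isnan(X), col_means, X)
    return np.hstack([X_filled, mask])

X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])
print(impute_with_indicators(X))
```

This is only a starting point; the RNN- and autoencoder-based methods in the papers above can exploit temporal structure that simple imputation discards.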
</li>
<li><b>What do you recommend for visualizing high-dimensional datasets?</b> <br>
<a target="_blank" href="https://lvdmaaten.github.io/tsne/"> t-SNE</a> is a popular technique. Use it with <a target="_blank" href="http://distill.pub/2016/misread-tsne/">care</a>!
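<br><br>A minimal sketch of running t-SNE via scikit-learn (the data here are random and purely illustrative). Perplexity is the main knob to vary, and, as the article linked above explains, cluster sizes and inter-cluster distances in the 2-D map should not be over-interpreted:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # 200 points in 50 dimensions

# Embed into 2-D; perplexity is roughly the effective number of neighbors.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(emb.shape)  # one 2-D coordinate per input point
```

In practice, rerun with several perplexity values (e.g., 5 to 50) and a fixed random seed before drawing conclusions from any single map.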
</li>
<li><b>What are some good references on model comparison?</b> <br>
(1) <a target="_blank" href="https://www.jstor.org/stable/2531595?seq=1#page_scan_tab_contents">DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988 Sep 1:837-45.</a><br>
(2) <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pubmed/17569110">Pencina MJ, D'Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Statistics in medicine. 2008 Jan 30;27(2):157-72.</a><br>
(3) <a target="_blank" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2782591/">Cook NR, Ridker PM. The use and magnitude of reclassification measures for individual predictors of global cardiovascular risk. Annals of internal medicine. 2009 Jun 2;150(11):795.</a><br>
(4) <a target="_blank" href="http://jmlr.csail.mit.edu/proceedings/papers/v8/airola10a/airola10a.pdf">Airola A, Pahikkala T, Waegeman W, De Baets B, Salakoski T. A comparison of AUC estimators in small-sample studies. In MLSB 2010 (pp. 3-13).</a><br>
(5) <a target="_blank" href="http://machinelearning.wustl.edu/mlpapers/paper_files/icml2006_DavisG06.pdf">Davis J, Goadrich M. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd international conference on Machine learning 2006 Jun 25 (pp. 233-240). ACM.</a><br>
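<br>To see why the ROC vs. precision-recall distinction matters, here is a small sketch on synthetic, class-imbalanced data (the labels, scores, and numbers are purely illustrative): with only about 5% positives, ROC AUC can look comfortable while average precision, a precision-recall summary, stays much lower.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(42)
# Heavily imbalanced labels: roughly 5% positives out of 2000 samples.
y_true = (rng.random(2000) < 0.05).astype(int)
# Classifier scores: positives shifted upward by 1.5 on average.
scores = rng.normal(loc=y_true * 1.5, scale=1.0)

roc = roc_auc_score(y_true, scores)
ap = average_precision_score(y_true, scores)
print(f"ROC AUC = {roc:.3f}, average precision = {ap:.3f}")
```

When classes are rare, report a PR-based summary alongside ROC AUC; a formal comparison of two correlated ROC curves can then use the DeLong test from reference (1).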
</li>
</ul>
</a>
</div>
</body>
</html>