|
83 | 83 | 2, |
84 | 84 | None, |
85 | 85 | 'good-books-with-hands-on-material-and-codes'), |
86 | | - ('Twst yourself: Deep learning 1', |
| 86 | + ('Test yourself: Deep learning 1', |
87 | 87 | 2, |
88 | 88 | None, |
89 | | - 'twst-yourself-deep-learning-1'), |
| 89 | + 'test-yourself-deep-learning-1'), |
90 | 90 | ('Test yourself: Deep learning 2', |
91 | 91 | 2, |
92 | 92 | None, |
|
376 | 376 | <!-- navigation toc: --> <li><a href="#gaussian-processes-and-bayesian-analysis" style="font-size: 80%;">Gaussian processes and Bayesian analysis</a></li> |
377 | 377 | <!-- navigation toc: --> <li><a href="#hpc-path" style="font-size: 80%;">HPC path</a></li> |
378 | 378 | <!-- navigation toc: --> <li><a href="#good-books-with-hands-on-material-and-codes" style="font-size: 80%;">Good books with hands-on material and codes</a></li> |
379 | | - <!-- navigation toc: --> <li><a href="#twst-yourself-deep-learning-1" style="font-size: 80%;">Twst yourself: Deep learning 1</a></li> |
| 379 | + <!-- navigation toc: --> <li><a href="#test-yourself-deep-learning-1" style="font-size: 80%;">Test yourself: Deep learning 1</a></li> |
380 | 380 | <!-- navigation toc: --> <li><a href="#test-yourself-deep-learning-2" style="font-size: 80%;">Test yourself: Deep learning 2</a></li> |
381 | 381 | <!-- navigation toc: --> <li><a href="#test-yourself-optimization-part" style="font-size: 80%;">Test yourself: Optimization part</a></li> |
382 | 382 | <!-- navigation toc: --> <li><a href="#test-yourself-analysis-of-results" style="font-size: 80%;">Test yourself: Analysis of results</a></li> |
@@ -491,12 +491,12 @@ <h2 id="overview-of-first-week-january-19-23-2026" class="anchor">Overview of fi |
491 | 491 | <div class="panel-body"> |
492 | 492 | <!-- subsequent paragraphs come in larger fonts, so start with a paragraph --> |
493 | 493 | <ol> |
494 | | - <li> Presentation of course</li> |
495 | | - <li> Discussion of possible projects</li> |
496 | | - <li> Deep learning methods, mathematics and review of neural networks |
| 494 | +<li> Presentation of course</li> |
| 495 | +<li> Discussion of possible projects</li> |
| 496 | +<li> Deep learning methods, mathematics and review of neural networks |
497 | 497 | <!-- o "Video of lecture at <a href="https://youtu.be/SY57dC46L9o" target="_self"><tt>https://youtu.be/SY57dC46L9o</tt></a> --></li> |
498 | | - <li> Recommended reading first three weeks: Raschka et al chapter 11 and Goodfellow et al chapters 6 and 7</li> |
499 | | - <li> Permanent Zoom link for the whole semester is <a href="https://uio.zoom.us/my/mortenhj" target="_self"><tt>https://uio.zoom.us/my/mortenhj</tt></a></li> |
| 498 | +<li> Recommended reading first three weeks: Raschka et al chapters 11-12 and Goodfellow et al chapters 6-8</li> |
| 499 | +<li> Permanent Zoom link for the whole semester is <a href="https://uio.zoom.us/my/mortenhj" target="_self"><tt>https://uio.zoom.us/my/mortenhj</tt></a></li> |
500 | 500 | </ol> |
501 | 501 | </div> |
502 | 502 | </div> |
@@ -564,6 +564,8 @@ <h2 id="additional-topics-kernel-regression-gaussian-processes-and-bayesian-stat |
564 | 564 | variable). |
565 | 565 | </p> |
566 | 566 |
|
| 567 | +<p>These topics are not covered by the lectures but can be used to define projects.</p> |
| 568 | + |
567 | 569 | <!-- !split --> |
568 | 570 | <h2 id="project-paths-overarching-view" class="anchor">Project paths, overarching view </h2> |
569 | 571 |
|
@@ -671,14 +673,18 @@ <h2 id="good-books-with-hands-on-material-and-codes" class="anchor">Good books w |
671 | 673 | </p> |
672 | 674 |
|
673 | 675 | <!-- !split --> |
674 | | -<h2 id="twst-yourself-deep-learning-1" class="anchor">Twst yourself: Deep learning 1 </h2> |
| 676 | +<h2 id="test-yourself-deep-learning-1" class="anchor">Test yourself: Deep learning 1 </h2> |
| 677 | + |
| 678 | +<p>These questions cover deep learning (essentially neural network)
| 679 | +background knowledge we deem important to be familiar with.
| 680 | +</p>
675 | 681 |
|
676 | 682 | <ol> |
677 | | -<li> Describe the architecture of a typical feed forward Neural Network (NN).</li> |
| 683 | +<li> Can you describe the architecture of a typical feed forward Neural Network (NN)?</li>
678 | 684 | <li> What is an activation function, and what is it used for?</li>
679 | 685 | <li> Can you name and explain three different types of activation functions?</li> |
680 | 686 | <li> You are using a deep neural network for a prediction task. After training your model, you notice that it is strongly overfitting the training set and that the performance on the test set isn’t good. What can you do to reduce overfitting?</li>
681 | | -<li> How would you know if your model is suffering from the problem of exploding Gradients?</li> |
| 687 | +<li> How would you know if your model is suffering from the problem of exploding gradients?</li> |
682 | 688 | <li> Can you name and explain a few hyperparameters used for training a neural network?</li> |
683 | 689 | </ol> |
684 | 690 | <!-- !split --> |
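As a companion to question 3 in the self-test list above, here is a minimal sketch of three commonly named activation functions. This is an illustrative example only, not code from the course repository; the function names are our own.

```python
# Illustrative sketch of three common activation functions
# (sigmoid, tanh, ReLU) referenced in the self-test questions.
import math

def sigmoid(x):
    # Squashes input to (0, 1); smooth, but saturates for large |x|,
    # which can lead to vanishing gradients.
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes input to (-1, 1); a zero-centered relative of the sigmoid.
    return math.tanh(x)

def relu(x):
    # Rectified linear unit: identity for positive input, zero otherwise;
    # cheap to compute and less prone to saturation.
    return max(0.0, x)

print(sigmoid(0.0), tanh(0.0), relu(-2.0))  # 0.5 0.0 0.0
```

Each function maps a neuron's weighted input to its output; the choice affects both expressiveness and gradient behavior during training (compare with questions 4 and 5 on overfitting and exploding gradients).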
@@ -1960,7 +1966,7 @@ <h2 id="updating-the-gradients" class="anchor">Updating the gradients </h2> |
1960 | 1966 | </footer> |
1961 | 1967 | --> |
1962 | 1968 | <center style="font-size:80%"> |
1963 | | -<!-- copyright --> © 1999-2025, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license |
| 1969 | +<!-- copyright --> © 1999-2026, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license |
1964 | 1970 | </center> |
1965 | 1971 | </body> |
1966 | 1972 | </html> |
|