UINA500 Deep Learning

Filozoficko-přírodovědecká fakulta v Opavě
winter 2022
Extent
2/2/0. 6 credits. Completion: examination.
Teachers
doc. Ing. Petr Sosík, Dr. (lecturer)
Mgr. Tomáš Filip (seminar tutor)
doc. Ing. Petr Sosík, Dr. (seminar tutor)
Guaranteed by
doc. Ing. Petr Sosík, Dr.
Institute of Computer Science – Filozoficko-přírodovědecká fakulta v Opavě
Timetable
Tue 8:55–10:30 B2
  • Timetable of seminar/parallel groups:
UINA500/A: Tue 16:25–18:00 B3b, P. Sosík
Course enrolment limitations
The course is also offered to students outside its home degree programmes.
Fields of study/plans
Course objectives
This classical branch of Artificial Intelligence covers a range of machine learning algorithms, typically benefiting from gradient-based learning methods. The most typical learning model is the artificial neural network, with many efficient algorithms capable of learning from examples, generalizing knowledge, and searching for approximate solutions to intractable problems. These algorithms can be run on specialized parallel hardware as well as on conventional computers.
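To illustrate the gradient-based learning the course builds on, here is a minimal sketch (plain Python, not course material) of gradient descent minimizing a simple quadratic function; function names and parameters are illustrative:

```python
# Minimize f(w) = (w - 3)^2 by gradient descent.
# The analytic gradient is f'(w) = 2 * (w - 3).
def gradient_descent(lr=0.1, steps=100):
    w = 0.0                      # initial parameter value
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)   # gradient of f at the current w
        w -= lr * grad           # step opposite to the gradient
    return w

w_opt = gradient_descent()
print(round(w_opt, 4))  # converges toward the minimum at w = 3
```

Training a neural network follows the same pattern, with the gradient of the loss with respect to all weights computed by backpropagation instead of by hand.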
Learning outcomes
The student will become acquainted with the basic mathematical and structural principles of deep learning. They will be able to design and test deep learning networks for a variety of tasks such as classification, image analysis, text comprehension and generation, or strategic decision making.
Syllabus
  • 1. The structure of a biological neuron, mathematical model of a simple neuron and of a multi-layer neural network. Advantages and applications of artificial neural networks in deep learning.
  • 2. Active, adaptive and organizational dynamics, neural training schemes (supervised/unsupervised/reinforcement). Training and testing sets, the training process, the overfitting problem.
  • 3. The perceptron and its training algorithm. Implementation of simple logic functions. Limited capabilities of the single-layer perceptron.
  • 4. Multilayer networks and the Backpropagation (BP) algorithm. Modifications and improvements of the BP algorithm, conjugate-gradient methods, resilient propagation, further training methods.
  • 5. Deep feedforward networks, architecture design, regularization methods.
  • 6. Optimization of deep learning, parameter initialization methods, adaptive learning rates, meta-algorithms.
  • 7. Deep learning in recurrent networks, topology and training algorithms, recursive networks. The problem of long-term dependencies, LSTM networks and related models.
  • 8. Radial Basis Function networks, organization and active dynamics. Three phases of training, properties, applications, a comparison with the multilayer perceptron.
  • 9. Competitive networks and the vector quantization problem, Lloyd's algorithm. The Kohonen training rule, the UCL, DCL and SCL algorithms. Self-organizing maps (SOM). The ART networks, principles and properties.
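The perceptron training rule from topic 3 can be sketched as follows (a plain-Python illustration under assumed notation, not course material): on each example, the weights move by lr·(t − y)·x, so only misclassified examples cause an update. Logical AND is linearly separable, so the rule converges:

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Single perceptron with a step activation; bias as an extra weight."""
    w = [0.0, 0.0, 0.0]          # weights for x1, x2 and the bias input
    for _ in range(epochs):
        for x1, x2, t in samples:
            x = (x1, x2, 1.0)    # append a constant bias input
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(3):   # no-op when y == t (correctly classified)
                w[i] += lr * (t - y) * x[i]
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Truth table of logical AND: (x1, x2, target)
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(data)
print([predict(w, a, b) for a, b, _ in data])  # [0, 0, 0, 1]
```

Replacing the data with XOR shows the limitation also named in topic 3: no single-layer perceptron can separate it, which motivates the multilayer networks of topic 4.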
Literature
    required literature
  • CHOLLET, François. Deep Learning with Python. Simon and Schuster, 2017.
  • GOODFELLOW, Ian, BENGIO, Yoshua, COURVILLE, Aaron. Deep Learning. Cambridge, Massachusetts: MIT Press, 2016.
    recommended literature
  • MIRJALILI, Seyedali. Evolutionary Algorithms and Neural Networks: Theory and Applications. New York, NY: Springer International Publishing, 2018.
Teaching methods
Interactive lecture; lecture with video analysis.
Assessment methods
Individual projects and exercises to be solved at home.
Language of instruction
English
Further comments
Study materials
The course is also listed in the terms winter 2021, winter 2023, and winter 2024.