Students' mathematical seminar - FJFI/ČVUT - 2023/24



Time and place: Wednesdays at 2 p.m. in T-211, Trojanova 13

The seminar is open to all, regular and occasional participants alike.

Upcoming program:


15. 5. 2024: Andrzej Grzesik (Poland)
Title: t.b.a.
Abstract: t.b.a.

24. 4. 2024: Ekkehard Schnoor (RWTH Aachen/Fraunhofer Heinrich-Hertz-Institut)
Title: A statistical learning perspective on the iterative soft-thresholding algorithm: from sparse linear classifiers to unfolded neural networks for sparse reconstruction
Abstract: This talk explores applications of the LASSO in machine learning, with a particular focus on the iterative soft-thresholding algorithm (ISTA) as a practical solution algorithm. Two statistical learning problems based on ISTA will be introduced. The first problem adopts a compressive sensing perspective and utilizes trained ISTA-based neural networks for sparse reconstruction. The second problem investigates sparse linear classifiers obtained by ISTA. In both cases, the aim is to bound or predict the generalization error. Various tools from high-dimensional probability theory are employed, for instance to derive bounds on the Rademacher complexity of the hypothesis classes of ISTA-inspired neural networks, including a probabilistic variant of the Banach fixed point theorem.
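As background, the ISTA iteration for the LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 alternates a gradient step with componentwise soft-thresholding. A minimal Python sketch (a generic illustration, not the speaker's code; the step size and iteration count are placeholder choices):

    import numpy as np

    def soft_threshold(x, t):
        # proximal operator of t*||.||_1 (componentwise soft-thresholding)
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, b, lam, n_iter=500):
        # ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2, Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
        return x

    # toy sparse-recovery example
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200); x_true[:5] = 1.0
    x_hat = ista(A, A @ x_true, lam=0.1)
    print(np.nonzero(x_hat > 0.5)[0])   # approximately recovers the support [0 1 2 3 4]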

17. 4. 2024: Daniel Khol (FJFI ČVUT)
Title: Theory of Potentials
Abstract: The lecture covers the fundamental aspects of potential theory. It introduces the basics of potential theory on graphs and d-dimensional tori. Key discussions include the analysis of Gaussian measures, polymer partition functions, and the use of the Feynman-Kac formula. The lecture also touches on practical computation methods such as the Wick formula and random walks.

Past lectures

10. 4. 2024 (14:30): Kristina Jarůšková (FJFI ČVUT, CERN)
Title: Generative deep learning for detector simulations at CERN
Abstract: Computer simulations of particle collisions in a detector are an essential part of the particle physics experiment lifecycle. But why does CERN need to simulate its detectors and why should we use deep learning to do so? This talk introduces a generative DL use case for high energy physics with a focus on a masked model based on the transformer architecture.

3. 4. 2024: Vojtěch Kužel (FJFI ČVUT)
Title: Understanding how porosity gradients can make a better filter using homogenization theory
Abstract: We will demonstrate how to use homogenization theory, particularly multiple-scale methods, to effectively describe diffusive motion in a material with spatially nonhomogeneous properties. This approach serves as a pathway to solving numerous optimization problems related to filters of all kinds.

27. 3. 2024: Zdeněk Mihula (FEL ČVUT)
Title: Noncompact Sobolev embeddings, quantitative aspects
Abstract: Sobolev embeddings that are in a sense optimal, or nearly optimal, are typically noncompact. There are various quantities measuring ``how bad'' noncompactness of operators (e.g., of Sobolev embeddings) is, such as the ball measure of noncompactness or some so-called s-number. We will investigate some (nearly) optimal Sobolev embeddings from such a quantitative point of view.

20. 3. 2024: doc. Ivana Pultarová (FSv ČVUT)
Title: Preconditioning with guaranteed bounds to every eigenvalue of the resulting matrix
Abstract: Numerical solution of partial differential equations often leads to systems of linear equations with specific features, such as symmetry, positive definiteness and sparsity. We present a preconditioning method yielding a system with a bounded spectrum for which, moreover, bounds on every eigenvalue can be obtained. The method is applicable to finite element, finite difference, discontinuous Galerkin, stochastic Galerkin and other methods. Our approach is motivated by recent results on operator preconditioning by Nielsen, Tveito, Hackbusch, Strakoš and Gergelits.

14. 3. 2024: Martin Kunz (FJFI ČVUT)
Title: A Sampling Procedure for the Generalized Inverse Gaussian (GIG)
Abstract: Monte Carlo (MC) methods have been used to solve enormous and influential statistical problems that escape the capabilities of deterministic computational methods. The ability to generate independent samples (variates) from random variables with a specified distribution is an essential part of, and a prerequisite for, the use of MC methods.
Following an introduction to the topic of random sampling and a motivational example, which demonstrates why some researchers argue that the MC approach beats the “curse of dimensionality”, a sampling method for the GIG distribution will be developed by applying two standard methods for variate generation. The theory for the two methods, the Ratio of Uniforms (RoU) and the Accept-Reject Sampling method, will be presented and they will be used for the development of a uniformly efficient GIG variate generator. Due to the computational nature of the topic, a Julia implementation will be shown along with a concluding example to demonstrate the potential use of this sampler for statistical inference.
Notebook: here
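For context on the Ratio of Uniforms method used in the talk: if (U, V) is uniform on the set {(u, v) : 0 < u <= sqrt(f(v/u))} for an unnormalized density f, then V/U is distributed according to f, and the set can be sampled by rejection from a bounding rectangle. A minimal Python sketch for a standard normal target (a textbook illustration, not the speaker's GIG generator from the notebook):

    import numpy as np

    def rou_normal(n, seed=0):
        # ratio-of-uniforms sampling from f(x) = exp(-x^2/2) (standard normal, unnormalized)
        rng = np.random.default_rng(seed)
        u_max = 1.0                    # sup sqrt(f(x)), attained at x = 0
        v_max = np.sqrt(2.0 / np.e)    # sup |x|*sqrt(f(x)), attained at x = +-sqrt(2)
        out = []
        while len(out) < n:
            u = rng.uniform(0.0, u_max)
            v = rng.uniform(-v_max, v_max)
            # accept iff u <= sqrt(f(v/u))
            if u > 0.0 and u * u <= np.exp(-0.5 * (v / u) ** 2):
                out.append(v / u)
        return np.array(out)

    s = rou_normal(10000)
    print(s.mean(), s.std())   # approximately 0 and 1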

6. 3. 2024: Václav Šmídl (ÚTIA AV ČR)
Title: Bayesian optimization: motivation and potential research directions
Abstract: Bayesian optimization is a successful tool for the optimization of objective functions with expensive evaluations. However, empirical experiments reveal that its domain of superiority is rather limited. The rule of thumb is that the problem should have fewer than 20 dimensions and be otherwise suitable for the method. In this talk, I will introduce the basics of Bayesian optimization and outline its weaknesses and their potential solutions. I will briefly summarize my research activities in this domain and the space for collaboration.
Slides and other materials: Slides, jl-Notebook
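For orientation, the basic loop of Bayesian optimization - fit a Gaussian-process surrogate, maximize an acquisition function, evaluate, repeat - can be sketched in a few lines. A toy 1-D Python illustration with an RBF kernel and expected improvement (my own sketch, not from the talk; the kernel width, noise level and candidate grid are arbitrary choices):

    import numpy as np
    from scipy.stats import norm

    def rbf(a, b, ell=0.3):
        # squared-exponential kernel on 1-D inputs
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    def gp_posterior(X, y, Xs, noise=1e-6):
        # GP posterior mean and standard deviation at the query points Xs
        K = rbf(X, X) + noise * np.eye(len(X))
        ks = rbf(X, Xs)
        mu = ks.T @ np.linalg.solve(K, y)
        var = 1.0 - np.sum(ks * np.linalg.solve(K, ks), axis=0)
        return mu, np.sqrt(np.maximum(var, 1e-12))

    f = lambda x: np.sin(3.0 * x) + x ** 2      # the "expensive" black box (toy)
    X = np.array([-0.9, 0.8]); y = f(X)         # initial design
    grid = np.linspace(-1.0, 1.0, 401)          # candidates for the acquisition
    for _ in range(10):
        mu, sd = gp_posterior(X, y, grid)
        z = (y.min() - mu) / sd
        ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        x_next = grid[np.argmax(ei)]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    print(X[np.argmin(y)], y.min())             # approximate minimizer and value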

28. 2. 2024: Jan Volec (FJFI ČVUT)
Title: Subgraphs with a positive minimum semidegree in digraphs with large outdegree
Abstract: We show that every d-out-regular directed graph G has a subgraph with minimum semidegree at least d(d+1)/(2*|V(G)|). On the other hand, for every c > 0 we construct infinitely many tournaments T with minimum outdegree d and the property that every subgraph of T has minimum semidegree at most (1+c)*d(d+1)/(2*|V(T)|).
Based on joint work with Andrzej Grzesik and Vojta Rödl.

21. 2. 2024: Matěj Trödler (FJFI ČVUT)
Title: Dispersion of a point set - a lower bound
Abstract: The dispersion of a point cloud in the unit cube is the size of the largest axis-parallel box that does not intersect the cloud. We provide a new lower bound for this quantity, based on a combinatorial argument.
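In symbols (the standard definition, added for reference): for a finite point set P in [0,1]^d,

    \mathrm{disp}(P) \;=\; \sup\bigl\{\, \mathrm{vol}(B) \;:\; B \subseteq [0,1]^d \text{ an axis-parallel box},\; B \cap P = \emptyset \,\bigr\},

where vol denotes the volume of the box.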

Previous semesters:

12. 12. 2023: Jan Hladký (ÚI AV ČR Prague)
Title: Graphons
Abstract: Around 2004, a group of mathematicians around László Lovász began to develop an approach to large graphs based on the methods of mathematical analysis. A very elegant theory has grown out of this approach. In the lecture I will explain the notion of a graphon, which is the central object of the whole theory.

5. 12. 2023: Nikola Drnková (FJFI ČVUT Prague)
Title: Mathematical modelling of the diffusion of components in a multicomponent mixture using the Maxwell-Stefan theory
Abstract: Derivation of the Maxwell-Stefan law and formulation of a problem describing diffusion in a multicomponent mixture. Description of a numerical algorithm developed for solving the problem, followed by a demonstration of testing the algorithm on concrete cases of two- and three-component mixtures.

28. 11. 2023: Jakub Malášek (FJFI ČVUT Prague)
Title: Estimates of the Hausdorff measure of invariant sets
Abstract: The lecture will introduce basic notions of geometric measure theory, such as the Hausdorff dimension, the Cantor set and the Sierpiński triangle. We will then present estimates for the Hausdorff measure of the Sierpiński triangle.
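For reference, the s-dimensional Hausdorff measure appearing in these estimates is defined (standard definition) by

    \mathcal{H}^s(A) \;=\; \lim_{\delta \to 0^+} \inf\Bigl\{\, \sum_i (\operatorname{diam} U_i)^s \;:\; A \subseteq \bigcup_i U_i,\ \operatorname{diam} U_i \le \delta \Bigr\},

and the Hausdorff dimension of A is the threshold value of s at which \mathcal{H}^s(A) jumps from infinity to zero.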

21. 11. 2023: Tomáš Růžek (FJFI ČVUT Prague)
Title: Mathematical modelling of pathogen evolution
Abstract: The lecture will introduce the SIR model for the spread of infectious diseases and its subsequent modification for modelling infectious diseases that persist long-term in a host population. Building on this, we will use the method of adaptive dynamics to discuss how the population of a disease-causing pathogen can evolve (in the biological sense).

14. 11. 2023: Matěj Trödler (FJFI ČVUT Prague)
Title: Kernel methods in machine learning
Abstract: This lecture explores the mathematical derivation of Principal Component Analysis (PCA) to enhance understanding of dimensionality reduction. We delve into the steps of extracting principal components and discuss the necessity of introducing kernel functions for constructing Kernel PCA, addressing complex data relationships. The presentation aims to elucidate PCA's derivation process, emphasizing its extension into Kernel PCA for handling nonlinearities in dataset representation. Lastly, the foundational principles behind these techniques are outlined.
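A compact Python sketch of the Kernel PCA computation (a generic illustration with an RBF kernel, not the speaker's material; the data and kernel width are placeholder choices):

    import numpy as np

    def kernel_pca(X, n_components=2, gamma=1.0):
        # Kernel PCA with RBF kernel k(x,y) = exp(-gamma*||x-y||^2)
        sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        K = np.exp(-gamma * sq)
        n = len(X)
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
        vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
        vals, vecs = vals[::-1], vecs[:, ::-1]       # reorder to descending
        alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
        return Kc @ alphas                           # projections of the training data

    # two concentric circles: nonlinear structure that plain PCA cannot separate
    theta = np.linspace(0.0, 2.0 * np.pi, 100)
    X = np.vstack([np.c_[np.cos(theta), np.sin(theta)],
                   3.0 * np.c_[np.cos(theta), np.sin(theta)]])
    Z = kernel_pca(X, n_components=2, gamma=0.5)
    print(Z.shape)   # (200, 2)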

7. 11. 2023: Daniel Khol (FJFI ČVUT Prague)
Title: Cluster expansions of abstract polymer models
Abstract: The talk will try to explain how one can pass from sums over all connected graphs in polymer models to a sum over clusters. It aims to cover the beginning of the unpublished paper Resummation of cluster expansion sums in abstract polymer models by Oliver Nagy and Miloš Zahradník.

31. 10. 2023: RNDr. Věra Kůrková, DrSc. (ÚI AV)
Title: Approximation of classifiers of large data sets by deep ReLU networks
Abstract: Rapid development of experimental research and successful practical applications of deep networks inspire many theoretical questions. In this talk, we will focus on the approximation capabilities of deep ReLU networks, which are one of the most popular network architectures. We will explore the effect of network depths and numbers of their parameters on the behavior of approximation errors. To obtain probabilistic bounds on approximation errors, we will employ concepts from statistical learning theory (growth function, VC dimension) and high-dimensional geometry (concentration of measure). We will address the dilemma between approximation accuracy and consistency in learning from random samples of data. We will discuss limitations of the approximation capabilities of networks of finite VC dimension in distribution-agnostic settings.

24. 10. 2023: Pavel Jakš (FJFI ČVUT Prague)
Title: Transformers
Abstract: We will introduce transformers, a neural network architecture that uses the so-called attention mechanism, which builds on the representation of words by vectors.
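As a pointer to the key formula: the attention mechanism mentioned above computes softmax(Q K^T / sqrt(d)) V for query, key and value matrices built from the word vectors. A minimal numpy sketch (illustrative only; single head, random toy inputs):

    import numpy as np

    def attention(Q, K, V):
        # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                      # query-key similarities
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)              # each row sums to one
        return w @ V                                       # weighted mixture of values

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
    print(attention(Q, K, V).shape)                        # (4, 8)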

17. 10. 2023: David Rendl (FJFI ČVUT Prague)
Title: Deconvolution and image reconstruction
Abstract: The lecture will be about deconvolution and image reconstruction. We will describe how images are represented and how blur is modelled, and we will derive some deconvolution algorithms. We will then derive the Richardson-Lucy algorithm in two regularized versions. We will also present some tricks for reducing the computational cost. The talk will be accompanied by practical demonstrations. Time permitting, we will show methods for measuring sharpness and for evaluating the results of image reconstruction.
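For reference, the basic (unregularized) Richardson-Lucy iteration has the multiplicative form u <- u * (p~ * (d / (p * u))), where p is the point-spread function, p~ its mirror image and * denotes convolution. A minimal 1-D Python sketch (a generic illustration; the regularized versions from the talk are not shown):

    import numpy as np

    def richardson_lucy(d, psf, n_iter=50, eps=1e-12):
        # u <- u * ( psf_mirror conv ( d / (psf conv u) ) )
        u = np.full_like(d, d.mean())          # flat, positive initial estimate
        psf_mirror = psf[::-1]
        for _ in range(n_iter):
            denom = np.convolve(u, psf, mode="same") + eps
            u = u * np.convolve(d / denom, psf_mirror, mode="same")
        return u

    x = np.zeros(100); x[30] = 1.0; x[60] = 2.0            # sparse 1-D "image"
    psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)     # Gaussian blur kernel
    psf = psf / psf.sum()
    d = np.convolve(x, psf, mode="same")                   # blurred observation
    print(np.argmax(richardson_lucy(d, psf)))              # peak restored near 60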

10. 10. 2023: Václav Klika (FJFI ČVUT Prague)
Title: t.b.a.
Abstract: t.b.a.

3. 10. 2023: Jan Volec (FJFI ČVUT Prague)
Title: Turán-type problems
Abstract: In the lecture we will present Turán's theorem - a fundamental result of extremal graph theory - and several diverse techniques for proving it. We will then present a natural generalization of Turán's theorem to higher dimensions and formulate the so-called hypergraph Turán conjecture - one of the most important open problems in extremal combinatorics. For the smallest instance of the hypergraph Turán conjecture - what is the maximum number of triples on a ground set of n points that can be selected so that the selected system of triples contains no tetrahedron - we will mention the best known upper bound and sketch the basic idea of its proof.

9. 5. 2023 (Tuesday): Stanislav Hencl (MFF UK, Prague)
Title: Models of nonlinear elasticity: Questions and progress
Abstract: The lecture is an introduction to nonlinear elasticity, its mathematical formulation, basic results and open questions.

27. 4. 2023 (Thursday): Jan Haskovec (KAUST, Saudi Arabia)
Title: Functional Differential Equations in Models of Collective Behavior
Abstract: I will give an overview of recent results for models of collective behavior governed by functional differential equations. The talk will focus on models of interacting agents with applications in biology (flocking, swarming), social sciences (opinion formation) and engineering (swarm robotics), where latency (delay) plays a significant role. I will explain that there are two main sources of delay - inter-agent communications and information processing - and show that they have qualitatively different impacts on the group dynamics. I will give an overview of analytical methods for studying the asymptotic behavior of the models and their mean-field limits. Finally, motivated by situations where finite speed of information propagation is significant, I will introduce an interesting class of problems where the delay depends nontrivially and nonlinearly on the state of the system, and discuss the available analytical results and open problems here.

24. 4. 2023: Martin Kunz (FJFI ČVUT)
Title: The Tale of a Boundary-Layer Problem
Abstract: I will demonstrate the solution, with all its steps (and missteps), of the boundary-layer problem eps y'' = yy' - y with y(0) = 1 and y(1) = -1, where eps << 1. The journey towards the solution will take us on an excursion through some basics of perturbation theory and asymptotic matching. I will show visualizations that give insight into the reasoning behind the steps taken towards the solution, serving as an exhibition of the leverage in problem solving accessible to us through the computational power at our fingertips.
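For the curious, the stated problem can also be explored numerically. A Python sketch using scipy's collocation solver (my own illustration of the equation from the abstract; eps = 0.05 is a moderate choice - much smaller eps requires a finer mesh and a better initial guess):

    import numpy as np
    from scipy.integrate import solve_bvp

    eps = 0.05   # moderate value; the abstract's regime is eps << 1

    def rhs(x, y):
        # first-order system for eps*y'' = y*y' - y
        return np.vstack([y[1], (y[0] * y[1] - y[0]) / eps])

    def bc(ya, yb):
        # boundary conditions y(0) = 1, y(1) = -1
        return np.array([ya[0] - 1.0, yb[0] + 1.0])

    x = np.linspace(0.0, 1.0, 200)
    y_guess = np.vstack([1.0 - 2.0 * x, -2.0 * np.ones_like(x)])   # linear guess
    sol = solve_bvp(rhs, bc, x, y_guess, max_nodes=20000)
    print(sol.status, sol.y[0, 0], sol.y[0, -1])   # status 0 means converged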

17. 4. 2023: David Sychrovský (MFF UK)
Title: t.b.a.
Abstract: t.b.a.

3. 4. 2023: Aleksej Gaj (FJFI ČVUT)
Title: Quantum decision making: an introduction
Abstract: Decision making (DM) is one of the basic disciplines in contemporary AI and ML. DM is a purposeful choice among several alternatives based on the available information. Classical subjective expected utility theory (Savage, 1954) serves as the main paradigm currently in use. Confrontation with a human-like style of thinking (see the Ellsberg and Allais paradoxes) has stimulated a new line of research towards a quantum version of DM (see Yukalov and Sornette, 2010). In the talk we will discuss how the apparatus of quantum mechanics can be used to modify and improve classical decision theory.

27. 3. 2023: Erin Carson (MFF UK)
Title: Using Mixed Precision in Numerical Linear Algebra
Abstract: Support for floating point arithmetic in multiple precisions is becoming increasingly common in emerging architectures. Mixed precision capabilities are already included in many machines on the TOP500 list and will be a crucial hardware feature in exascale machines. From a computational scientist's perspective, our goal is to determine how and where we can exploit mixed precision computation in our codes. This requires both an understanding of performance characteristics as well as an understanding of the numerical behavior of algorithms in finite precision arithmetic.

After giving an introduction to floating point computation, mixed precision hardware, and current work in mixed precision numerical linear algebra, we present examples that demonstrate what can go wrong if we use low precision blindly. This motivates the need for rigorous rounding error analysis in algorithms used in scientific computing and data science applications.

Understanding the behavior of algorithms in finite precision is necessary not only for illuminating potential dangers, but also for revealing opportunities. As an example of where rounding error analysis can lead to new insights and improved algorithms, we present a general algorithm for solving linear systems based on mixed-precision iterative refinement. From this, we develop a mixed-precision GMRES-based iterative refinement scheme that works even for ill-conditioned systems. We then present recent extensions of this theoretical analysis to least squares problems and practical settings in which approximate and randomized preconditioners are used.
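The refinement pattern described above is easy to state in code: factorize once in low precision, then iterate residual correction in high precision. A minimal Python/numpy sketch (a generic illustration of classical mixed-precision iterative refinement, not the GMRES-based scheme from the talk):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def mixed_precision_ir(A, b, n_ref=5):
        # factorize once in float32; compute residuals and updates in float64
        lu = lu_factor(A.astype(np.float32))             # low-precision LU
        x = lu_solve(lu, b.astype(np.float32)).astype(np.float64)
        for _ in range(n_ref):
            r = b - A @ x                                # residual in high precision
            dx = lu_solve(lu, r.astype(np.float32))      # correction in low precision
            x = x + dx.astype(np.float64)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    x_true = rng.standard_normal(200)
    x = mixed_precision_ir(A, A @ x_true)
    print(np.linalg.norm(x - x_true))   # small forward error for this well-conditioned A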

20. 3. 2023: Pavel Jaks (FJFI ČVUT)
Title: Hypercubes and the Sensitivity Conjecture

13. 3. 2023: Magdalena Prorok (AGH University of Science and Technology)
Title: Directed graphs without rainbow triangles
Abstract: One of the most fundamental questions in graph theory is Mantel's theorem, which determines the maximum number of edges in a triangle-free graph of order n. Recently a colourful variant of this problem has been solved. In this variant we consider k graphs on a common vertex set, thinking of each graph as edges in a distinct colour, and want to determine the smallest number of edges in each colour which guarantees the existence of a rainbow triangle. In this talk we solve the analogous problem for directed graphs without rainbow triangles, either directed or transitive, for any number of colours. The constructions and proofs differ essentially for k = 3 and k >= 4 and for the type of the forbidden triangle.
This is joint work with Sebastian Babinski and Andrzej Grzesik.

6. 3. 2023: Adam Janich (FJFI ČVUT)
Title: word2vec
Abstract: The word2vec method will be presented.

27. 2. 2023: RNDr. Věra Kůrková, DrSc. (ÚI AV)
Title: Some implications of high-dimensional geometry for classification by neural networks
Abstract: Computational difficulties of multidimensional tasks, called the ``curse of dimensionality'', have long been known. On the other hand, the almost deterministic behavior of some randomized models and algorithms depending on large numbers of variables can be attributed to the ``blessing of dimensionality''. These phenomena can be explained by rather counter-intuitive properties of the geometry of high-dimensional spaces. They imply concentration of values of sufficiently smooth functions of many variables around their mean values and the possibility of reducing the dimensionality of data by random projections. In the lecture, it will be shown how these properties of high-dimensional geometry can be employed to obtain insights into the suitability of various types of neural networks for the classification of large data sets. Probabilistic bounds on network complexity will be derived using concentration properties of approximation errors based on the Azuma and McDiarmid inequalities. Consequences for the choice of network architectures will be analyzed in terms of growth functions and VC dimensions of sets of network input-output functions. General results will be illustrated by examples of deep perceptron networks with various piecewise polynomial activation functions (ReLU, RePU).

20. 2. 2023: Jan Vybíral (FJFI ČVUT)
Title: A multivariate Riesz basis of ReLU neural networks
Abstract: We consider the trigonometric-like system of piecewise linear functions introduced recently by Daubechies, DeVore, Foucart, Hanin, and Petrova. We provide an alternative proof that this system forms a Riesz basis of L2([0,1]) based on the Gershgorin theorem. We also generalize this system to higher dimensions d > 1 by a construction which avoids using (tensor) products. As a consequence, the functions from the new Riesz basis of L2([0,1]^d) can be easily represented by neural networks. Moreover, the Riesz constants of this system are independent of d, making it an attractive building block for future multivariate analysis of neural networks.




Proposed topics (so far):


Honza Volec:
Decades-Old Computer Science Conjecture Solved in Two Pages
A 53-Year-Old Network Coloring Conjecture Is Disproved
Google Researcher, Long Out of Math, Cracks Devilish Problem About Sets


Honza Vybíral:
Kernel Principal Component Analysis
word2vec
Spherical codes and Borsuk’s conjecture
Optimal asymptotic bounds for spherical designs
Approximation of infinitely differentiable multivariate functions is intractable


Vašek Klika:
Surprises in a Classic Boundary-Layer Problem
Deep Learning: An Introduction for Applied Mathematicians
An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations
Period Three Implies Chaos
The chemical basis of morphogenesis