Online Seminars on Artificial Intelligence and Mathematics, 2022 Edition – Wed May 25th

Organizers: Italia De Feis and Flavio Lombardi (Cnr-Iac)

Marco Cuturi
Professor of Statistics, CREST – ENSAE, Institut Polytechnique de Paris
Learning through ambiguity: differentiable matchings and mappings
Wednesday June 22, 2022 – 14.30

Youtube Channel

Optimal transport (OT) theory is the branch of mathematics that
studies and generalizes the fundamental problem of optimally matching
two groups of observations, covered in all CS 101 courses
(remember the Hungarian algorithm). Following a short introduction to that
theory, I will present use cases where optimal matchings pop up in various
applied areas of ML, and where matchings are used to resolve labelling
ambiguities. I will then show why a direct resolution of OT problems (using
e.g. the Hungarian algorithm or more general network flows/linear programs)
runs into several issues: computational complexity, sample complexity, poor
parallelization and the lack of a meaningful notion of differentiability (e.g.
how an optimal matching varies with changes in inputs). I will then detail
how regularization can help solve these issues, and present the
implementation of these approaches in the ott-jax toolbox.
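For the curious reader: in the entropic case, the regularization mentioned in the abstract leads to Sinkhorn's matrix-scaling iterations, which are smooth in their inputs and parallelize well. The NumPy sketch below is an illustrative toy version of that scheme, not the speaker's code; the point sets and parameters are made up, and ott-jax provides a full, differentiable implementation.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iters=200):
    """Entropy-regularized OT: returns an approximate optimal coupling
    between histograms a and b for the given cost matrix."""
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # alternating scaling updates
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # coupling P = diag(u) K diag(v)

# Toy matching problem: 3 points on each side, squared distances as cost.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.1, 1.1, 2.1])
C = (x[:, None] - y[None, :]) ** 2
a = b = np.ones(3) / 3
P = sinkhorn(C, a, b, eps=0.05)
# For small eps, P concentrates near the optimal (here: identity) matching.
```

As eps grows the coupling P blurs away from a hard permutation, which is exactly what restores differentiability at the cost of some bias.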

AI and mathematical imaging – the what, why and how

Carola-Bibiane Schönlieb

Department of Applied Mathematics and Theoretical Physics (DAMTP) University of Cambridge

Wednesday May 25, 2022 – 14.30

Youtube Channel


Mathematical imaging is a topic that touches upon several areas of mathematics, engineering and computer science, including functional and non-smooth analysis, the theory and numerical analysis of partial differential equations, harmonic, stochastic and statistical analysis, optimisation and machine learning. In this talk we will learn about some of these mathematical problems, about variational models for image analysis and their connection to partial differential equations and about a new paradigm in mathematical imaging using deep neural networks. The talk is furnished with applications to art restoration, forest conservation and cancer research.

On Some Research Lines in Optimization Methods for Machine Learning

Daniela di Serafino

Professor of Numerical Analysis
Dipartimento di Matematica e Applicazioni “Renato Caccioppoli” – Università degli Studi di Napoli Federico II

Wednesday May 18, 2022 – 14.30

Youtube Channel

Machine Learning (ML) and related “intelligent” computing systems, e.g. search engines, software for image and speech detection and classification, social media filtering devices and recommendation platforms, are widely used in today’s society. They belong to an interdisciplinary research area involving computer science, mathematics, statistics and application domains. Mathematical optimization and related numerical methods constitute one of the main pillars of this area, providing tools for computing the parameters that identify systems aimed at making decisions based on as-yet-unseen data. In this talk, I give a basic overview of optimization methods for ML, from first- to second-order approaches, together with their main properties, pros and cons, and future research directions.
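To make the first- vs second-order distinction concrete, here is a small self-contained sketch (illustrative, not from the talk) comparing plain gradient descent with Newton's method on a one-dimensional convex objective: Newton's use of curvature buys much faster local convergence, at the price of computing and inverting second derivatives.

```python
import math

# Objective f(x) = e^x - 2x, convex, with minimizer x* = ln 2.
def f_grad(x):
    return math.exp(x) - 2.0

def f_hess(x):
    return math.exp(x)

# First-order: gradient descent with a fixed step size; many cheap steps.
x_gd = 0.0
for _ in range(100):
    x_gd -= 0.3 * f_grad(x_gd)

# Second-order: Newton's method rescales the step by the curvature,
# converging quadratically near the minimizer in very few iterations.
x_nt = 0.0
for _ in range(6):
    x_nt -= f_grad(x_nt) / f_hess(x_nt)
```

In ML the same trade-off appears at scale: stochastic first-order methods dominate when gradients are cheap and dimensions are huge, while (quasi-)Newton methods pay off when curvature information can be approximated cheaply.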

For more info about these seminars:
Seminar activity partially supported by TAILOR, an EU-funded ICT-48 Network
(GA 952215)

How Framelets Enhance Graph Neural Networks (Seminar in English)

Tuesday May 4, 2021 – 14.30

Yu Guang Wang

Max Planck Institute for Mathematics in the Sciences & University of New South Wales

Leipzig – Germany & Sydney – Australia

Youtube Live


This work presents a new approach for assembling graph neural networks based on framelet transforms. The latter provide a multi-scale representation for graph-structured data. With the framelet system, we can decompose the graph feature into low-pass and high-pass frequencies as extracted features for network training, which then defines a framelet-based graph convolution. The framelet decomposition naturally induces a graph pooling strategy by aggregating the graph feature into low-pass and high-pass spectra, which considers both the feature values and the geometry of the graph data and conserves the total information. Graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance in many types of node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds the high-frequency information at different scales. Compared to ReLU, shrinkage in framelet convolution improves the graph neural network model in terms of denoising and signal compression: noise in both node features and graph structure can be significantly reduced by accurately cutting off the high-pass coefficients from the framelet decomposition, and the signal can be compressed to less than half its original size with the prediction performance well preserved.
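For readers unfamiliar with shrinkage: in its simplest form it is soft-thresholding of coefficients. The toy NumPy snippet below (illustrative, not the authors' code) contrasts it with ReLU: shrinkage zeroes small coefficients of either sign and shrinks the rest toward zero, which is what makes it suitable for denoising high-pass framelet coefficients, while ReLU simply discards all negative values regardless of magnitude.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def shrinkage(x, thresh=0.5):
    """Soft-thresholding: coefficients with |x| <= thresh are set to zero,
    larger ones are shrunk toward zero while keeping their sign."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

# Toy high-pass coefficients: large entries carry signal, small ones noise.
coeffs = np.array([-2.0, -0.3, 0.1, 0.4, 3.0])
out_relu = relu(coeffs)         # keeps small positive noise, kills -2.0
out_shrink = shrinkage(coeffs)  # suppresses small noise of either sign
```

The thresholding level per scale is a tunable quantity; here `thresh=0.5` is an arbitrary illustrative choice.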

Specialization of deep learning architectures for modeling brain activity and function

Nicola Toschi

Associate Professor in Medical Physics at the University of Rome Tor Vergata and Research Staff and Associate Investigator at the A.A. Martinos Center for Biomedical Imaging – Harvard Medical School/MGH

Wednesday March 23, 2022 – 14.30

Youtube Channel

In the last 5 years, the advent of deep learning (DL) paradigms has revolutionized the field of artificial intelligence (AI). Previously, the contribution of AI across disciplines was confined to applications whose overall complexity is low, dwarfed by that of human physiology. However, modern DL paradigms offer an unprecedented ability to integrate multimodal, multi-domain, and multiscale data with previously unimaginable prediction performance and little or no need for data preprocessing. With the advent of ‘big’, publicly available, and often multidomain data repositories, it is now possible to build and validate AI frameworks with a tangible potential to boost neuroscience research, enhance decision-making in disease management algorithms, and enrich next-generation clinical trials. This seminar will focus on the latest developments and deep architectures for modeling and studying biomedical data in general and brain structure and function in particular, as measured through e.g. MEG, EEG, or MRI. Particular emphasis will be given to key architectural developments such as separable convolutions, multi-head attention networks, spiking neural networks, and graph and contrastive learning. Every example will include a use case stemming from an ongoing or published study.

Representing non-negative functions, with applications in non-convex optimization and beyond

Alessandro Rudi

Researcher at INRIA and École Normale Supérieure, Paris

Wednesday March 9, 2022 – 14.30

Youtube Channel

Many problems in applied mathematics are naturally expressed in terms of non-negative functions. While linear models are well suited to represent functions with output in R, being at the same time very expressive and flexible, the situation is different for non-negative functions, where existing models lack at least one of these good properties. In this talk we present a rather flexible and expressive model for non-negative functions. We will show direct applications in probability representation and non-convex optimization. In particular, the model allows us to derive an algorithm for non-convex optimization that is adaptive to the degree of differentiability of the objective function and achieves optimal rates of convergence. Finally, we show how to apply the same technique to other interesting problems in applied mathematics that can be easily expressed in terms of inequalities.
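One such model from the speaker's line of work, to the best of our understanding, parameterizes a non-negative function as a quadratic form in a feature map with a positive semidefinite matrix. The minimal NumPy sketch below (the Gaussian feature map and dimensions are illustrative assumptions, not the talk's setup) shows why non-negativity then holds by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, centers, sigma=0.5):
    # Gaussian feature map evaluated at scalar x; an illustrative choice.
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

centers = np.linspace(-1.0, 1.0, 5)
B = rng.normal(size=(5, 5))
A = B @ B.T                      # A = B B^T is positive semidefinite

def f(x):
    v = phi(x, centers)
    # Quadratic form in the features: f(x) = phi(x)^T A phi(x)
    # = ||B^T phi(x)||^2 >= 0, so f is non-negative everywhere.
    return float(v @ A @ v)
```

Unlike clamping or exponentiating a linear model, this parameterization stays linear in the matrix A, which is what keeps it both expressive and convenient to optimize over.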
