Owing to their immense success, Artificial Intelligence, and in particular Machine Learning, algorithms are playing an increasingly important role in our lives. They make high-stakes decisions on our behalf, yet the highly complex structure of these mathematical models makes their decisions hard for humans to interpret and, therefore, to control.
The resulting lack of trust in these models gave birth to the research area of Explainable Artificial Intelligence (XAI), which is concerned with providing post-training explanations for the decisions of complex models, or with producing highly accurate yet interpretable (simple) models by means of optimization algorithms. These techniques should also be complemented by algorithms that preserve the privacy of the individuals whose data are used in the training process.
These active research areas have produced very successful applications, but their mathematical theory has only recently begun to develop. Mathematicians and theoreticians have made very interesting contributions to this line of work over the last three to four years. The main aim of this course is to introduce these results as well as their applications.
The courses are aimed mainly at MSc and PhD students. Knowledge of multivariable calculus, linear algebra, and probability, as well as familiarity with the basic concepts of machine learning, is assumed.
Day 1: (6 hours) Introduction to Machine Learning (ML). Optimization for ML. Generalization of optimization algorithms.
Day 2: (6 hours) Introduction to Explainable Artificial Intelligence (XAI). Model-agnostic explanation methods (attribution-based methods, counterfactuals). Intrinsically interpretable machine learning models. Locally interpretable generalized linear models.
Day 3: (6 hours) Attribution-based model-agnostic explanations and popular algorithms such as LIME and SHAP. Rule-based intrinsically interpretable models. How to obtain optimal rule sets. How to measure fairness and create fair models.
Day 4: (6 hours) Attribution-based intrinsically interpretable machine learning models: GAMs, EBMs, etc. Statistical learning theory and generalization of ML models. VC dimension and error bounds.
Day 5: (6 hours) Continuation of statistical learning theory and generalization. Rademacher complexity, error bounds and applications. Existence of interpretable models. Rashomon sets and Rashomon ratio. Impossibility results on continuous recourse functions.
Instructors
İlker Birbil (University of Amsterdam, School of Business, Amsterdam)
Rukiye Kara (Mimar Sinan Fine Arts University, Mathematics Department, Istanbul)
Özgür Martin (Mimar Sinan Fine Arts University, Mathematics Department, Istanbul)
Figen Öztoprak (Gebze Technical University, Department of Industrial Engineering, Istanbul)
Mustafa Topkara (Mimar Sinan Fine Arts University, Mathematics Department, Istanbul)
Tentative schedule
10/09/2023: The participants will arrive at the Nesin Mathematics Village
11/09/2023 – 15/09/2023: Classes run from 09:00 to 12:00 and from 14:00 to 17:00, for a total of 6 hours per day.
15/09/2023: The participants leave the Nesin Mathematics Village.
Application information
Financial support is available, especially for young participants, to partially or fully cover accommodation and travel expenses. Female participants and participants from developing countries are especially encouraged to apply.
In order to attend the summer school, please fill out the following form:
Application Form
The deadline for applications is 15 June 2023.
Scientific committee
İlker Birbil (University of Amsterdam, School of Business, Amsterdam)
Özgür Martin (Mimar Sinan Fine Arts University, Mathematics Department, Istanbul)
Figen Öztoprak (Gebze Technical University, Department of Industrial Engineering, Istanbul)
Organizers
Rukiye Kara (Mimar Sinan Fine Arts University, rukiye.kara (at) msgsu.edu.tr)
Özgür Martin (Mimar Sinan Fine Arts University, ozgur.martin (at) msgsu.edu.tr)
Mustafa Topkara (Mimar Sinan Fine Arts University, mustafa.topkara (at) msgsu.edu.tr)