Title of the Course: PAC Learning
Instructor: Dr. Uğur Efem
Institution: Dyson Institute of Engineering and Technology
Dates: 19-25 August 2024
Prerequisites: Linear algebra, elementary single- and multi-variable calculus/analysis, and probability. In addition, some familiarity with measure-theoretic probability is preferable, but this can be addressed in the lectures if needed.
Level: Advanced undergraduate and graduate students.
Abstract: In Machine Learning, a major aim is to design and develop algorithms that can make accurate predictions based on sample data. This aim raises several important questions: What does it mean to learn? What can be learned effectively? How can one quantify and measure accuracy? What sample size is sufficient to guarantee that an algorithm learns a concept and makes accurate predictions? Is it possible to learn (effectively) from noisy data?
Probably Approximately Correct (PAC) Learning is a mathematical framework that allows us to formalise the notions of learning, accuracy, and concept classes. In turn, this allows us to give mathematical answers to the questions above. This short course will introduce the mathematical foundations of the PAC Learning framework and give some answers to the above questions. We will discuss PAC-Learnability, generalisation and empirical errors, PAC-Learnable concepts, and VC-dimension. We will see some elementary examples of PAC-Learnable concepts, such as Disjunctive and Conjunctive Normal Forms and graph colourings. Our main aim is to prove that a concept class is PAC-Learnable if and only if it has finite VC-dimension. Using this result, if time permits, I will give some more involved examples of PAC-Learnable concepts from algebra and analysis.
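For orientation, here is a minimal sketch of the standard definitions in the usual textbook notation; the precise conventions adopted in the lectures may differ slightly. A concept class $\mathcal{C}$ over an instance space $X$ is PAC-learnable if there is an algorithm $\mathcal{A}$ and a polynomial $\mathrm{poly}(\cdot,\cdot)$ such that for every $\varepsilon, \delta > 0$, every distribution $D$ on $X$, and every target concept $c \in \mathcal{C}$, running $\mathcal{A}$ on an i.i.d. sample $S \sim D^m$ of size $m \geq \mathrm{poly}(1/\varepsilon, 1/\delta)$ yields a hypothesis $h_S$ satisfying
$$\Pr_{S \sim D^m}\!\big[\, R(h_S) \leq \varepsilon \,\big] \;\geq\; 1 - \delta,$$
where the generalisation error and the empirical error on $S = (x_1, \dots, x_m)$ are
$$R(h) = \Pr_{x \sim D}\big[\, h(x) \neq c(x) \,\big], \qquad \widehat{R}_S(h) = \frac{1}{m} \sum_{i=1}^{m} \mathbf{1}\big[\, h(x_i) \neq c(x_i) \,\big].$$
The VC-dimension of $\mathcal{C}$ is the size of the largest subset of $X$ shattered by $\mathcal{C}$, i.e. on which $\mathcal{C}$ realises all possible binary labellings.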
Languages: TR, EN