Advanced Topics in Artificial Intelligence
Keywords
| Classification | Keyword |
| --- | --- |
| OFFICIAL | Informatics Engineering |
| OFFICIAL | Computer Science |
Instance: 2025/2026 - 1S 
Cycles of Study/Courses
| Acronym | No. of Students | Study Plan | Curricular Years | Credits UCN | Credits ECTS | Contact hours | Total Time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M.IA | 52 | Syllabus | 1 | - | 6 | 42 | 162 |
Teaching Staff - Responsibilities
Teaching language
English
Objectives
Provide students with knowledge about new AI developments that involve advances in areas as diverse as logic, statistics, and operations research. Emphasis will be placed on:
- probabilistic search, with a focus on Monte Carlo Tree Search;
- directed and undirected probabilistic graphical models, including inference and learning of parameters and structure, and their connection to linear classifiers and neural networks;
- logical representation: first-order logic (FOL) and Datalog for structure representation; learning logic programs with Inductive Logic Programming (ILP);
- integration: statistical relational learning (SRL) and neural-symbolic networks (NeSy).
The course requires skills acquired in Algorithm Design and Analysis, Artificial Intelligence, and Data Mining.
Learning outcomes and competences
Students will develop competences on the usage of artificial intelligence and search / optimization methods in practical situations, in which a part of the knowledge is available in data sets or databases.
Working method
In-person
Pre-requirements (prior knowledge) and co-requirements (common knowledge)
Algorithm Design and Analysis, Artificial Intelligence, Data Mining I
Program
This course unit (UC) is organized into three modules, as follows:
Module 1: Monte Carlo Tree Search
1.1. Introduction to Reinforcement Learning:
- Overview of reinforcement learning concepts
- Markov decision processes (MDPs)
- Exploration vs. exploitation trade-off
1.2. Bandit Algorithms:
- Multi-armed bandit problem
- Epsilon-greedy, UCB, Thompson sampling
- Contextual bandits
- Selected applications
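As a concrete illustration of the bandit strategies listed above, here is a minimal sketch (not course material): an epsilon-greedy rule and a UCB1 rule run against a Bernoulli multi-armed bandit. The arm probabilities, function names, and the incremental-mean bookkeeping are illustrative choices of this sketch.

```python
import math
import random

def eps_greedy(eps):
    """Return a selection rule: explore uniformly with probability eps,
    otherwise exploit the arm with the best estimated mean."""
    def select(counts, values, t):
        if random.random() < eps:
            return random.randrange(len(values))
        return max(range(len(values)), key=lambda a: values[a])
    return select

def ucb1(counts, values, t):
    """UCB1 index: estimated mean plus exploration bonus sqrt(2 ln t / n_a)."""
    for a, n in enumerate(counts):
        if n == 0:  # play every arm once before trusting the index
            return a
    return max(range(len(values)),
               key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))

def run_bandit(select, probs, steps, seed=0):
    """Simulate a Bernoulli bandit; return total reward and per-arm pull counts."""
    random.seed(seed)
    k = len(probs)
    counts, values = [0] * k, [0.0] * k
    total = 0
    for t in range(1, steps + 1):
        a = select(counts, values, t)
        r = 1 if random.random() < probs[a] else 0
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
        total += r
    return total, counts
```

With arms of success probability [0.2, 0.5, 0.8], both rules concentrate their pulls on the best arm over time; UCB1 does so without a tuning parameter, which is the trade-off the exploration bonus encodes.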
1.3. Monte Carlo Tree Search:
- Basics of MCTS
- Tree policies and default policies
- Upper confidence bound for trees (UCT)
- Applications (game playing, robotics)
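The four MCTS phases named above (selection via a tree policy, expansion, a random default-policy rollout, backpropagation) can be sketched on a toy single-player game. The game (count from a start state to exactly 5 by adding 1 or 2; overshooting loses) and the exploration constant c = 1.4 are illustrative assumptions of this sketch, not course material.

```python
import math
import random

TARGET = 5
ACTIONS = (1, 2)

def is_terminal(s):
    return s >= TARGET

def reward(s):
    return 1.0 if s == TARGET else 0.0

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # action -> Node
        self.visits, self.value = 0, 0.0

    def uct_child(self, c=1.4):
        # UCT tree policy: mean value plus exploration bonus c*sqrt(ln N / n)
        return max(self.children.values(),
                   key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iters=200, seed=0):
    random.seed(seed)
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal
        while not is_terminal(node.state) and len(node.children) == len(ACTIONS):
            node = node.uct_child()
        # 2. Expansion: add one untried child
        if not is_terminal(node.state):
            a = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[a] = Node(node.state + a, node)
            node = node.children[a]
        # 3. Simulation: random rollout (default policy) to a terminal state
        s = node.state
        while not is_terminal(s):
            s += random.choice(ACTIONS)
        r = reward(s)
        # 4. Backpropagation: update statistics along the path to the root
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # recommend the most-visited action at the root
    return max(root.children, key=lambda a: root.children[a].visits)
```

From state 3, for instance, adding 2 wins immediately, so UCT quickly concentrates visits on that action while still occasionally re-checking the alternative.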
1.4. Advanced Topics:
- Deep reinforcement learning
- Applications in selected domains
Module 2: Logic Representation and Modeling
2.1 Introduction to Probabilistic Logic Programming
- brief review of logic programming
- knowledge representation using logic programming
- knowledge representation using probabilistic logic programming
- syntax and semantics of probabilistic logic programming using ProbLog and CLP(BN)
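Under the distribution semantics used by ProbLog, the probability of a query is the total probability of the possible worlds (truth assignments to the probabilistic facts) in which it succeeds. The sketch below enumerates worlds by brute force for a classic burglary/earthquake alarm program; the fact probabilities are illustrative, and this is not the ProbLog engine (which uses knowledge compilation rather than enumeration).

```python
from itertools import product

# Probabilistic facts of a ProbLog-style program:
#   0.1::burglary.  0.2::earthquake.
#   alarm :- burglary.  alarm :- earthquake.
FACTS = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    # Deterministic rules: alarm holds if burglary or earthquake holds.
    return world["burglary"] or world["earthquake"]

def query_prob(query):
    """Distribution semantics by enumeration: sum the probabilities of
    all possible worlds in which the query succeeds."""
    names = list(FACTS)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        world = dict(zip(names, bits))
        p = 1.0
        for n in names:
            p *= FACTS[n] if world[n] else 1 - FACTS[n]
        if query(world):
            total += p
    return total

# P(alarm) = 1 - (1 - 0.1) * (1 - 0.2) = 0.28
```

Enumeration is exponential in the number of probabilistic facts, which is why the exact and approximate inference techniques covered later in the module matter.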
2.2 Learning Probabilistic Logic Programs
- introduction to inductive logic programming (ILP)
- algorithms and systems for ILP
- limitations
- learning first-order probabilistic rules
- algorithms and complexity
- exact and approximate probability calculations
- combining probabilistic inference with logical inference
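One concrete building block behind generalization-based ILP algorithms is Plotkin's least general generalization (lgg) of two terms. Below is a minimal sketch for terms encoded as Python tuples `(functor, arg1, ..., argN)` with strings as constants; the encoding and variable-naming scheme are assumptions of this sketch.

```python
def lgg(t1, t2, subst=None):
    """Plotkin's least general generalization of two first-order terms.
    A term is a string (constant) or a tuple (functor, arg1, ..., argN)."""
    if subst is None:
        subst = {}
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalize argument-wise,
        # sharing the substitution so repeated pairs reuse one variable.
        return (t1[0],) + tuple(lgg(a, b, subst) for a, b in zip(t1[1:], t2[1:]))
    # Distinct terms: map each distinct pair to a fresh variable.
    key = (t1, t2)
    if key not in subst:
        subst[key] = f"X{len(subst)}"
    return subst[key]

# lgg(parent(ann, bob), parent(eve, bob)) = parent(X0, bob)
```

Sharing the substitution across arguments is what makes the result *least* general: `lgg(p(ann, ann), p(eve, eve))` yields `p(X0, X0)`, not the looser `p(X0, X1)`.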
2.3 Bipolar Argumentation (if time allows)
Module 3:
3.1. Learning in Logic Revisited.
- Can ILP scale?
- Dataset examples
- From Abduction to Induction
3.2. SRL and Nesy Revisited
- From ProbLog to NNs: it's all about layers
- From TensorLog to GNNs: it's all about matrices
- Experimental Evaluation
3.3 Generative Models
- Does Logic Programming need attention?
- Meliad and AlphaProof
- Tanenbaum's ToW
3.4 Where do you go from here?
- Discussion with bring-your-favorite-paper!
Mandatory literature
Richard S. Sutton and Andrew G. Barto; Reinforcement Learning: An Introduction, MIT Press. ISBN: 978-0262039246
Fabrizio Riguzzi; Foundations of Probabilistic Logic Programming: Languages, Semantics, Inference and Learning, Second Edition, River Publishers, 2022
Luc de Raedt; Probabilistic Inductive Logic Programming. ISBN: 9783540786511
Stuart Russell, Peter Norvig; Artificial Intelligence: A Modern Approach. ISBN: 0134610997
Complementary Bibliography
Stuart Russell, Peter Norvig; Artificial Intelligence: A Modern Approach, Pearson, 2020. ISBN: 978-0134610993
Teaching methods and learning activities
Lectures: presentation of the program topics and discussion of applications in artificial intelligence.
Labs: problem solving.
Software
Python
Keywords
Physical sciences > Computer science > Cybernetics > Artificial intelligence
Physical sciences > Mathematics > Applied mathematics > Numerical analysis
Physical sciences > Mathematics > Applied mathematics > Operations research
Evaluation Type
Distributed evaluation with final exam
Assessment Components
| Designation | Weight (%) |
| --- | --- |
| Test | 66.70 |
| Exam | 33.30 |
| Total: | 100.00 |
Amount of time allocated to each course unit
| Designation | Time (hours) |
| --- | --- |
| Autonomous study | 106.00 |
| Class attendance | 56.00 |
| Total: | 162.00 |
Eligibility for exams
Mandatory attendance at classes, in accordance with U.P. rules.
Calculation formula of final grade
Final Grade = T1 + T2 + T3
T1 = Grade from the 1st test (maximum score: 20/3 points)
T2 = Grade from the 2nd test (maximum score: 20/3 points)
T3 = Grade from the 3rd test (maximum score: 20/3 points)
Notes:
(i) Tests T1 and T2 are held during the semester and cover modules 1 and 2, respectively.
(ii) T3 is held during the exam period and corresponds to module 3 of the course syllabus.
(iii) Any student can choose not to take T1, T2, or T3 and obtain their final grade by taking the resit exam.
(iv) Students can take the resit exam both to pass the course and to improve their grade. The resit exam consists of three independent parts, corresponding to each of the modules. Students seeking to pass can choose to complete all parts or just some, retaining the grade from the parts not completed.
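As a sanity check of the formula above: each test contributes at most 20/3 ≈ 6.67 points, so the three together total exactly 20, the usual 0-20 scale. A minimal helper, with the input validation and two-decimal rounding being assumptions of this sketch:

```python
def final_grade(t1, t2, t3):
    """Final Grade = T1 + T2 + T3, each test worth at most 20/3 points,
    so a perfect score totals exactly 20."""
    cap = 20 / 3
    for t in (t1, t2, t3):
        assert 0 <= t <= cap, "each test is graded on the 0 to 20/3 range"
    return round(t1 + t2 + t3, 2)
```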
Examinations or Special Assignments
n/a
Internship work/project
n/a
Special assessment (TE, DA, ...)
The same evaluation criteria are used for all students.
Classification improvement
Students who want to improve their grade in certain modules should complete the corresponding parts of the resit exam. Modules not attempted during the resit will retain their original test grade.