MUSEMI - Meet us for SEminars @uniMI - is a series of seminars organised by PhD students in Computer Science at the University of Milan.
Its purpose is twofold: to share knowledge among the different research groups of the Computer Science Department and to provide an opportunity to practice public speaking.
We believe that the main driving force of research is meeting other enthusiastic people and exchanging ideas.
At the same time, we want to offer a friendly, informal setting where younger researchers (first of all, PhD students) can learn to communicate and present their ideas.
Meetings take place once every two weeks, last about an hour and a half, and include two presentations.
There are two presentation formats: a longer one (30-35 minutes) and a shorter one (15-20 minutes).
Since the main aim of these meetings is networking, each presentation is followed by a question-and-answer session.
Presentations are mainly given by PhD students, but professors and senior researchers are also welcome to propose topics for upcoming meetings.
The next scheduled meetings are listed below.
Calendar
Understanding how Large Language Models (LLMs) make decisions is key to building safer AI systems and fostering trust in their outputs, especially in high-stakes applications. While correlation-based explainability methods can highlight patterns in model behavior, they often fall short of uncovering the why behind those patterns. This is where causation-based techniques come in. In this talk, we explore one such technique: Causal Mediation Analysis (CMA), a framework for tracing causal relationships within LLMs that offers a deeper look into how different components influence predictions. We will discuss what it means to apply CMA to LLMs and the insights it can reveal, such as hidden biases or internal reasoning pathways. We will also cover the challenges of this approach, from architectural constraints that affect experimental design to ensuring meaningful comparisons across studies. By the end of this session, you'll have a clearer understanding of what CMA can (and can't) do for LLM explainability and how it fits into the broader quest to move from mere correlation to true causal understanding.
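To make the core idea concrete, below is a minimal sketch of a CMA-style activation-patching experiment, assuming GPT-2 via the Hugging Face Transformers library. The prompts, the target token, and the choice of mediator (a single MLP layer, in the spirit of the gender-bias probes of Vig et al. 2020) are illustrative assumptions, not the setup used in the talk.

```python
# Minimal sketch of Causal Mediation Analysis via activation patching.
# Assumptions: GPT-2 (Hugging Face Transformers), a gender-bias style probe,
# and the layer-6 MLP as the candidate mediator under study.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

clean = tok("The nurse said that", return_tensors="pt").input_ids     # original input
corrupt = tok("The doctor said that", return_tensors="pt").input_ids  # counterfactual input
target = tok(" she", return_tensors="pt").input_ids[0, 0]             # token whose probability we track
# (both prompts tokenize to the same length, so patched activations fit)

mediator = model.transformer.h[6].mlp  # candidate mediator (an assumption)

def target_prob(input_ids, hook=None):
    """P(target token | input), optionally with an intervention on the mediator."""
    handle = mediator.register_forward_hook(hook) if hook else None
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    if handle:
        handle.remove()
    return torch.softmax(logits, dim=-1)[target].item()

# Step 1: record the mediator's activation on the counterfactual input.
cache = {}
target_prob(corrupt, hook=lambda mod, inp, out: cache.update(act=out))

# Step 2: patch that activation into the clean run. Any change in the target
# probability is the *indirect effect* routed through this mediator alone.
p_clean = target_prob(clean)
p_total = target_prob(corrupt)  # total effect of swapping the input
p_patch = target_prob(clean, hook=lambda mod, inp, out: cache["act"])

print(f"P(' she'): clean={p_clean:.4f}  corrupted={p_total:.4f}  patched={p_patch:.4f}")
print(f"indirect effect of the layer-6 MLP: {p_patch - p_clean:+.4f}")
```

Sweeping this measurement over layers and attention heads is what lets CMA localise which components mediate a behaviour such as a gender bias, rather than merely correlating activations with outputs.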
The recognition of Activities of Daily Living (ADLs) in smart home environments has significant applications in energy management, safety, well-being, and healthcare. Traditional approaches rely on deep learning models such as CNNs and LSTMs, which require large labeled datasets to generalize effectively. Recent studies have demonstrated that Large Language Models (LLMs) possess remarkable common-sense knowledge about human activities. They can encode the semantic relationships between triggered sensors and performed actions, offering a promising alternative for ADL recognition. However, LLMs alone struggle with domain-specific patterns and contextual ambiguities. For instance, while opening a fridge is commonly linked to meal preparation, it can also indicate medication intake when a patient retrieves water to take a pill. To address this challenge, we are exploring how to leverage the semantic capabilities of LLMs to develop a specialized foundation model tailored for sensor-based Human Activity Recognition in smart homes. This talk will delve into the potential and limitations of LLMs in this domain, discuss recent advancements, and outline our research direction toward building a more robust and context-aware recognition system.
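As a concrete illustration of the zero-shot prompting baseline such work starts from, here is a minimal sketch assuming the OpenAI Python client; the sensor-event format, activity labels, model name, and prompt wording are all illustrative assumptions, not the speakers' actual pipeline.

```python
# Minimal sketch of zero-shot ADL recognition by prompting an LLM with a
# window of smart-home sensor events. Assumptions: the OpenAI Python client,
# a hypothetical event-log format, and an illustrative label set.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ACTIVITIES = ["meal preparation", "medication intake", "sleeping", "watching TV"]

# A short window of timestamped binary sensor events from a smart home.
events = [
    "08:02 kitchen motion sensor ON",
    "08:03 fridge door OPEN",
    "08:03 water tap ON",
    "08:04 medicine cabinet OPEN",
]

prompt = (
    "You are an expert in smart-home activity recognition.\n"
    f"Given the sensor events below, choose the single most likely activity "
    f"from {ACTIVITIES} and briefly justify your choice.\n\n" + "\n".join(events)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Note how the event window deliberately mixes fridge, tap, and medicine-cabinet events: it is exactly this kind of contextual ambiguity (meal preparation vs. medication intake) that motivates moving beyond plain prompting towards a specialised, context-aware foundation model.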