Three advanced-skills seminars will be run in December by experts in GenAI and connected systems. The first two will be held onsite at CNAM and the third onsite at UPC, with remote participants joining from the other AI4CI universities.
Date: 1/12 9:30 am – 1 pm CET
Speakers: Davide Avesani (CNAM), Massimo Gallo & Paolo Medagliani (Huawei)



The life of a token: from words to bits on the wire
Abstract:
Large Language Models (LLMs) are reshaping our world, but how do they actually function? While it is common knowledge that Graphics Processing Units (GPUs) are the key ingredient for LLMs, a look “under the hood” reveals complex requirements: models must be trained efficiently to ensure quality and served (inference) with low latency to guarantee a good user experience. Furthermore, as model sizes exceed the capacity of a single GPU, parallelism strategies become essential. The goal is to keep all GPUs busy at maximum capacity while avoiding downtime, which requires tight synchronization and generates a high volume of communication. Efficiently achieving this remains a complex and ongoing research challenge.
The first part of this talk explores the origins and structure of this communication, with the goal of characterizing and estimating the expected network traffic. We will look at how an LLM is built, how the transformer chain works, how training data is turned into something the model can understand, and what information is passed through different model stages. Building on this foundation, we will review the main parallelization strategies for LLMs and how each one impacts communication patterns and network load.
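As a back-of-the-envelope illustration of how parallelization strategy translates into network load, the sketch below estimates the per-GPU traffic of one gradient all-reduce in plain data parallelism, using the standard ring all-reduce cost of 2·(n−1)/n times the payload. The function name and the example figures (a 7B-parameter model, fp16 gradients) are illustrative assumptions, not numbers from the talk.

```python
def ring_allreduce_bytes(param_count: int, bytes_per_param: int, n_gpus: int) -> int:
    """Bytes sent per GPU in one ring all-reduce of the full gradient.

    Ring all-reduce runs a reduce-scatter followed by an all-gather;
    each phase sends (n-1)/n of the payload, hence the factor 2*(n-1)/n.
    """
    payload = param_count * bytes_per_param
    return int(2 * (n_gpus - 1) / n_gpus * payload)

# e.g. a 7B-parameter model with fp16 gradients, data-parallel over 8 GPUs:
# roughly 24.5 GB sent per GPU per optimizer step.
print(ring_allreduce_bytes(7_000_000_000, 2, 8))
```

Note that this counts a single training step: the same volume recurs every step, which is why keeping GPUs busy requires overlapping this communication with compute.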
The second part of the session will focus on collective communication, the set of primitives used to coordinate data exchange among GPUs. We will introduce the different types of collectives used in both training and inference, examine the network limitations and overheads they introduce, and shed light on how to measure performance and optimize data exchange between GPUs. Finally, we will explore how these collectives can be extended across multiple data centers over the WAN for distributed training and inference.
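To make the collective-communication idea concrete, here is a small, framework-free simulation of ring all-reduce, the collective most associated with gradient synchronization. Real systems implement this in libraries such as NCCL; this pure-Python version only models the chunk movement (reduce-scatter, then all-gather) to show why every rank ends up with the full sum.

```python
def ring_allreduce(ranks):
    """Simulate ring all-reduce over n ranks, each holding n integer chunks.

    Phase 1 (reduce-scatter): after n-1 steps, rank r holds the complete
    sum for chunk (r+1) % n. Phase 2 (all-gather): the completed chunks
    circulate for n-1 more steps, so every rank ends with the full sum.
    """
    n = len(ranks)
    data = [list(r) for r in ranks]
    # reduce-scatter: at each step, rank r forwards chunk (r - step) % n
    for step in range(n - 1):
        sent = [data[r][(r - step) % n] for r in range(n)]  # snapshot sends
        for r in range(n):
            data[(r + 1) % n][(r - step) % n] += sent[r]
    # all-gather: circulate the fully reduced chunks around the ring
    for step in range(n - 1):
        sent = [data[r][(r + 1 - step) % n] for r in range(n)]
        for r in range(n):
            data[(r + 1) % n][(r + 1 - step) % n] = sent[r]
    return data

# three ranks, three chunks each: every rank converges to [12, 15, 18]
print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
```

Each rank only ever talks to its ring neighbor, which is what keeps per-link bandwidth constant as the number of GPUs grows.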
Date: 8/12 3:45 pm CET
Speaker: Jérémie Leguay (Nokia Bell Labs)

Towards autonomous networks with agentic AI
Abstract:
This presentation will explore agentic network automation, focusing on how Large Language Model (LLM) agents are transforming network operations. I will provide an overview of key agentic frameworks, such as ADK and LangGraph, and discuss relevant protocols like MCP and A2A. Through practical use cases and examples, I will illustrate the capabilities and potential of these technologies. The presentation will conclude with an overview of ongoing research challenges around agentic AI in the pursuit of truly autonomous networks.
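The core loop that frameworks like ADK and LangGraph implement can be sketched without any framework at all: the model decides whether to call a tool or answer, the runtime executes the tool, and the observation is fed back. The stub model, tool name, and message format below are illustrative assumptions, not APIs from those frameworks.

```python
def stub_model(messages):
    """Stand-in for an LLM: requests one tool call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_link_load", "args": {"link": "core-1"}}
    return {"answer": "Link core-1 is at 87% utilization; consider rerouting."}

# hypothetical tool registry; a real agent would wrap network telemetry APIs
TOOLS = {"get_link_load": lambda link: f"{link}: 87% utilization"}

def run_agent(model, user_query, max_steps=5):
    """Minimal agent loop: reason -> call tool -> observe -> answer."""
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        action = model(messages)
        if "answer" in action:
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not produce an answer")

print(run_agent(stub_model, "Is link core-1 congested?"))
```

Protocols like MCP and A2A standardize, respectively, how tools are exposed to the model and how agents talk to each other; the loop itself stays the same.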
Bio: Jérémie Leguay received his Ph.D. in computer science from Pierre et Marie Curie University, Paris, France. He is Head of the Network Systems Research department at Nokia Bell Labs Paris-Saclay. From 2004 to 2014, he conducted research and led the Networking Lab at Thales Communications and Security (SIX GTS division), where he developed activities on sensor networks, mobile networks, and software-defined networks for mission-critical networked systems. In 2014, he joined Huawei Technologies as leader of the Network and Traffic Optimization Team, conducting research on the planning and control of IP networks; he has also served there as a Senior Expert and Director of the Datacom Dijkstra Lab. He is an IEEE Senior Member. His current activities focus on routing, network management and optimization, self-driving networks, automation, and networking for AI.
Date: 15/12 11:00 am – 1 pm CET
Speaker: Diego Perino (Barcelona Supercomputing Center)

LLM in practice: the Llama case
Abstract:
The talk will focus on developing LLMs in practice, with a specific focus on evaluations and safety. I will first introduce our approach to building Llama models, including current and future trends. I will then focus on our approach to open trust and safety in the GenAI era as an example of evaluation and training. Finally, I will focus on the AI-judges approach, providing an overview of the area and of the different approaches and challenges.
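The AI-judges idea can be sketched in a few lines: a judge model scores a candidate answer against a rubric or reference, and repeated judgments are aggregated to reduce noise. The "judge" below is a trivial keyword-overlap stub standing in for a real model; the function names and scoring scheme are illustrative assumptions, not the method used for Llama.

```python
def stub_judge(question, answer, reference):
    """Stand-in judge: fraction of reference keywords found in the answer."""
    keys = set(reference.lower().split())
    hits = sum(1 for w in set(answer.lower().split()) if w in keys)
    return hits / len(keys)

def judge_with_votes(judge, question, answer, reference, n_votes=3):
    """Average several judgments; with a real (stochastic) LLM judge,
    repeated sampling smooths out individual noisy verdicts."""
    scores = [judge(question, answer, reference) for _ in range(n_votes)]
    return sum(scores) / len(scores)

print(judge_with_votes(stub_judge, "Capital of France?",
                       "the capital is paris", "paris is the capital"))
```

The open challenges the talk alludes to live exactly in the gap this stub hides: judge bias, calibration against human raters, and judges being gamed by the models they evaluate.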
Bio:
Diego Perino is an organization manager, technical leader, and scientist with a passion for working on cutting-edge projects with industrial impact. He has worked for several companies in the ICT sector (Telefonica, Bell Labs, Orange Labs, Meta), covering a range of technologies and research areas (above all AI, networks, and systems). Beyond his industrial experience, he has been very active in the scientific community, with numerous publications, participation in conference committees, and editorial-board contributions. He holds a Ph.D. from Paris Diderot University (Paris 7) and an M.Sc. from Politecnico di Torino, the Eurecom Institute, and Université de Nice-Sophia Antipolis.

