With o1, OpenAI ushered in a new era: LLMs with reasoning capabilities. This new breed of models broadened the concept of scaling laws, shifting the focus from train-time to inference-time compute. But how do these models work? What exactly does "inference-time compute" mean? What data do we use to train these models? And finally - perhaps most importantly - how expensive can they get, and what can we use them for?
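As a taste of what "inference-time compute" can mean, here is a minimal, self-contained sketch of one common technique: sampling several reasoning chains and taking a majority vote over the final answers (self-consistency). The `sample_answer` function is a toy stand-in for a model call, and `p_correct` is an assumed per-sample accuracy - both are illustrative, not part of any real API.

```python
import random
from collections import Counter

random.seed(0)

def sample_answer(p_correct: float) -> str:
    # Toy stand-in for one sampled reasoning chain from a model:
    # returns the right answer ("42") with probability p_correct,
    # otherwise a random wrong digit.
    return "42" if random.random() < p_correct else str(random.randint(0, 9))

def majority_vote(n_samples: int, p_correct: float = 0.6) -> str:
    # Spending more inference-time compute here means drawing more
    # chains, then returning the most common final answer.
    answers = [sample_answer(p_correct) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote(1))   # a single chain is often wrong
print(majority_vote(25))  # more samples make the vote far more reliable
```

The point of the sketch: the model (and its weights) stay fixed, yet accuracy improves simply by spending more compute at inference time - one axis of the broadened scaling laws the talk discusses.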
Speaker: Luca Baggi, AI Engineer @xtream
Event: PyData Roma Capitale + PyRoma Meetup @ The Social Hub