Activities & events
| Title & Speakers | Event |
|---|---|
|
GenerativeAI Super Meet Up
2025-12-09 · 10:00
Hello everyone, for the first time we are organising a full-day "Super meet up", bringing back former speakers to present their latest progress. It takes place on 9 December at the CNIT, hosted within the apidays conference, which is providing us with a room. Good news: we have 80 places to offer the community via this link. To attend, you must register for a pass on the apidays site; the pass also gives you access to the rest of the conference on the other days. For the talks, we are very pleased to announce:
If no free places remain, you can also get tickets at a 30% discount via this link. Looking forward to seeing you there, The Generative AI France team |
GenerativeAI Super Meet Up
|
|
A global panorama of strategy execution
2025-12-04 · 16:00
Across sectors and countries, the same challenge recurs: ambitious strategies that fail to deliver the expected results. The Global State of Strategy Execution Report, published by OKR Mentors, analyses how more than 200 organisations in 30 countries turn their strategy into concrete results. Using the Global Strategy Execution Maturity Index™, the study reveals what sets apart the 11% of strategy-execution leaders, those who consistently exceed their targets. Alignment, rituals, accountability, scaling: the data shows where most organisations fail and what the best do differently. Strategy-execution leaders have:
Join Laurent Morisseau and Elie Casamitjana, CEO of OKR Mentors, for an in-depth review of these findings and of good practices for an agile strategy. You will also discover how the SEM360™ assessment lets you measure your organisation's execution maturity and identify your Strategic Execution Persona. |
A global panorama of strategy execution
|
|
Bacthub: progress and perspectives
2025-11-27 · 11:00
Laurence Watier
– Inserm researcher and scientific lead of the Bacthub project
@ Inserm
Presentation of the Bacthub project by Laurence Watier: first results, an innovative methodology based on the secure linkage of hospital and medico-administrative data, concrete applications for the surveillance and prevention of antibiotic resistance in France, and a discussion of perspectives and possible collaborations. |
Antibiotic resistance: consumption, surveillance & impact of antibiotics
|
|
Schedules and timezones in Go
2025-10-22 · 20:10
Earlier this year I got to plan out and implement a scheduling system for a fleet of robots in Go, and I'd like to share my learnings from this experience! I'll give a general overview of how a scheduling system can work, cover related challenges and how they can be addressed, and share some examples of working with schedules and timezones in Go. |
|
|
Using Context in Go Servers: A Guide to Best Practices
2025-10-22 · 19:50
My talk, "Using Context in Go Servers: A Guide to Best Practices," delves into how the context package in Go simplifies the management of request lifecycles, cancellations, and deadlines in server applications. The context package is a critical tool for building scalable and reliable systems, yet it's often underutilized or misunderstood. In this session, I will cover: - The fundamentals of the context package and its key features. - Practical examples of integrating context with database queries and HTTP requests. - Best practices for passing and managing context across function calls. - Common pitfalls and how to avoid them, such as preventing goroutine leaks and handling cancellations effectively. I want to share this with the community because context is pivotal in developing robust server-side applications, and mastering it can significantly enhance the scalability, maintainability, and reliability of Go applications. This talk aims to provide practical, actionable insights that developers of all levels can immediately apply to their projects. |
|
|
Infinity Standards and “go generate”
2025-10-22 · 19:00
When faced with thousands of message types with no support in Go, some people reach for Java. Others reach for “go generate” instead. This is the story of a 20-year migration, a standard that produces more standards, and going way too far with “go generate”. |
|
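The mechanism the abstract leans on can be sketched briefly. A `go:generate` directive is a plain comment that `go generate ./...` scans for and executes in the file's directory; the generator name and flags below are hypothetical, and `MessageKind` shows the shape of code such a generator would emit for each message type:

```go
package main

import "fmt"

// `go generate` runs the command in the directive comment below; the
// generator and its flags are invented here for illustration.
//
//go:generate go run ./cmd/msggen -schema messages.xsd -out messages_gen.go

// MessageKind is the sort of enum a generator would emit one constant
// per message type for, along with its String method.
type MessageKind int

const (
	KindHeartbeat MessageKind = iota
	KindTelemetry
)

func (k MessageKind) String() string {
	switch k {
	case KindHeartbeat:
		return "Heartbeat"
	case KindTelemetry:
		return "Telemetry"
	default:
		return fmt.Sprintf("MessageKind(%d)", int(k))
	}
}

func main() {
	fmt.Println(KindTelemetry) // prints "Telemetry" via the String method
}
```

Because the directive is only a comment, generation is an explicit build step (`go generate` before `go build`), never something the compiler runs for you.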
|
[AI Alliance] How to Train Your LLM Web Agent: A Statistical Diagnosis
2025-10-16 · 16:00
LLM-based web agents have recently made significant progress, but much of it has occurred in closed-source systems, widening the gap with open-source alternatives. Progress has been held back by two key challenges: first, a narrow focus on single-step tasks that overlooks the complexity of multi-step web interactions; and second, the high compute costs required to post-train LLM-based web agents. To address this, we present the first statistically grounded study on compute allocation for LLM web-agent post-training. Our approach uses a two-stage pipeline, training a Llama 3.1 8B or QWEN 2.5 7B student to imitate a Llama 3.3 70B teacher or QWEN 2.5 72B via supervised fine-tuning (SFT), followed by on-policy reinforcement learning (GRPO). We find this process highly sensitive to hyperparameter choices, making exhaustive sweeps impractical. To spare others from expensive trial-and-error, we sample 1,370 configurations and use bootstrapping to estimate effective hyperparameters. Our results show that combining SFT with on-policy RL consistently outperforms either approach alone on both WorkArena and MiniWob++. Further, this strategy requires only 55% of the compute to match the peak performance of pure SFT on MiniWob++, effectively pushing the compute-performance Pareto frontier, and is the only strategy that can close the gap with closed-source models. Read the paper on ArXiv: How to Train Your LLM Web Agent: A Statistical Diagnosis (PDF) About the speaker I’m Massimo Caccia, Senior Research Scientist at ServiceNow Research, specializing in post-training methods for computer-use agents. I see computer use as the ultimate playground for testing agents, thanks to its ubiquity and diversity. My research involves conducting large-scale empirical studies to systematically evaluate trade-offs among different approaches and to develop practical know-how, with reinforcement learning being a particular focus. 
As a core contributor to the web-agent research library ecosystem, I actively shape evaluation frameworks (BrowserGym, WorkArena) and development platforms (AgentLab). My goal is to bridge foundational research and scalable tools to advance the field. Previously, I completed my Ph.D. at the Quebec Artificial Intelligence Institute (Mila) under Professor Laurent Charlin. During my doctoral studies, I collaborated with DeepMind’s Continual Learning team led by Marc’Aurelio Ranzato, Amazon’s team under Alex Smola, and ElementAI prior to its integration with ServiceNow. My Ph.D. research focused on building agents capable of accumulating and transferring knowledge across tasks, drawing from continual learning, transfer learning, and meta-learning. My work explored applications in language, vision, and reinforcement learning, emphasizing improvements in data and compute efficiency. About the AI Alliance The AI Alliance is an international community of researchers, developers and organizational leaders committed to supporting and enhancing open innovation across the AI technology landscape to accelerate progress, improve safety, security and trust in AI, and maximize benefits to people and society everywhere. Members of the AI Alliance believe that open innovation is essential to develop and achieve safe and responsible AI that benefits society rather than a select few big players. Join the community Sign up for the AI Alliance newsletter (check the website footer) and join our new AI Alliance Discord. |
[AI Alliance] How to Train Your LLM Web Agent: A Statistical Diagnosis
|
|
Interactive data analysis: where ES|QL, Arrow and Pandas meet
2025-10-09 · 17:00
Sylvain Wallez
– Speaker
@ Elastic
The introduction of ES|QL in Elasticsearch makes it easier to search and analyse large datasets. ES|QL returns its results in tabular form as JSON, CSV and also in the Apache Arrow format, a compact dataframe format that allows exchange without deserialisation and is natively supported by the Python library Pandas. This integration opens new possibilities for exploring data with the usual tools of data analysts, and for easily embedding aggregation pipelines in applications. After a brief overview of ES|QL, we will interactively explore a dataset with ES|QL, Arrow and Pandas in a Jupyter notebook. And a small benchmark will show you how efficient the Arrow format is compared to JSON! |
|
|
A command line for Elasticsearch: oxidising a process
2025-10-09 · 17:00
Laurent Saint-Félix
– Speaker
@ Elastic
Discover how building various side projects revealed the need for a faster, safer tool for interacting with Elasticsearch. Follow the process that led us to choose Rust for its potential in performance and safety. This talk presents a proof of concept (POC) illustrating how these side projects inspired and shaped its creation. We will look at a rich ecosystem, the challenges we encountered, and the innovative solutions implemented to arrive at a robust tool. |
|
|
Guidance for building a Multi-Provider Generative AI Gateway on AWS
2025-09-16 · 19:15
Sara van de Moosdijk
– Solution Architect for AI/ML
@ Amazon Web Services
A generative AI gateway is a design pattern, popular in enterprise settings, that establishes a central gateway through which developers can access foundation models from multiple providers. It includes features for access control, quota management, cost control, governance, and observability. This session will dive deep into a recommended architecture for building an AI gateway on AWS and include a demo of the final result. |
|
|
From experiments to virtual colleagues: Our first steps with AI Agents
2025-09-16 · 18:10
Soraya Duriez
– Product Manager
@ dsm-firmenich
,
Laurens Glasbergen
– Head of Technology Innovation & Scouting
@ dsm-firmenich
,
Ashish Sahu
– Digital Architect
@ dsm-firmenich
AI agents promise to take us beyond simple prompting into a world where machines can reason, plan, and act. In this session, we’ll share our hands-on journey experimenting with AI agents, separating hype from reality. We’ll walk through how we got started, the use cases we explored, the tools and frameworks we tested, what worked (and what didn’t), and the key lessons we’ve learned along the way. |
|
|
Talk by Alexey Milovidov
2025-09-08 · 19:25
Alexey Milovidov
– CTO
@ ClickHouse
|
NYC AI Meetup: Building Scalable Systems with ClickHouse & Docker
|
|
Alexey Milovidov, Co-founder & CTO @ ClickHouse
2025-09-08 · 19:15
Alexey Milovidov
– CTO
@ ClickHouse
|
Building Scalable Systems with ClickHouse & Docker
|
|
Talk by Ian Armstrong
2025-09-08 · 19:00
Ian Armstrong
– Engineer
@ Profound
|
NYC AI Meetup: Building Scalable Systems with ClickHouse & Docker
|