talk-data.com

Pallavi Koppol

Speaker

3 talks

Research Scientist, Databricks

Pallavi Koppol is a Research Scientist at Databricks, focusing on reinforcement learning from human feedback and post-training methods. She earned a PhD from Carnegie Mellon University in 2023 on interactive machine learning from human feedback, and previously worked on perception and ML infrastructure at Waymo. Her research investigates how AI systems can effectively elicit high-quality information from people and use it to improve learning outcomes.

Bio from: Data + AI Summit 2025


Talks & appearances

3 activities

Summit Live: Women In Data and AI Conversation

Each year at Summit, Women in Data and AI hosts a half day of in-person discussions, an empowering Women in Data and AI breakfast, and networking with like-minded professionals and trailblazers. In this virtual discussion, hear from Kate Ostbye (Pfizer), Lisa Cohen (Anthropic), and Pallavi Koppol and Holly Smith (Databricks) about navigating challenges, celebrating successes, and inspiring one another as we champion diversity and innovation in data, and learn how to get involved year-round.

AI Evaluation from First Principles: You Can't Manage What You Can't Measure

Is your AI evaluation process holding back your system's true potential? Many organizations struggle to improve GenAI quality because they don't know how to measure it effectively. This research session covers the principles of GenAI evaluation, offers a framework for measuring what truly matters, and demonstrates implementation using Databricks.

Key Takeaways:
- Practical approaches for establishing reliable metrics for subjective evaluations
- Techniques for calibrating LLM judges to enable cost-effective, scalable assessment
- Actionable frameworks for evaluation systems that evolve with your AI capabilities

Whether you're developing models, implementing AI solutions, or leading technical teams, this session will equip you to define meaningful quality metrics for your specific use cases and build evaluation systems that expose what's working and what isn't, transforming AI guesswork into measurable success.

talk
with Jonathan Hsieh (LanceDB), Cathy Yin (Databricks), Andrew Shieh (Databricks), Ziyi Yang (Databricks), Andy Konwinski (Databricks), Denny Lee (Databricks), Asfandyar Qureshi (Databricks), Yuki Watanabe (Databricks), Brandon Cui (Databricks), Andrew Drozdov (Databricks), Anand Kannappan (Patronus AI), Harsh Panchal (Databricks), Tomu Hirata (Databricks), Daya Khudia (Databricks), Jose Javier Gonzalez (Databricks), Jasmine Collins (Databricks), Maheswaran Sathiamoorthy (Bespoke Labs), Jonathan Chang (Databricks), Matei Zaharia (Databricks), Alexander Trott (Databricks), Tejas Sundaresan (Databricks), Pallavi Koppol (Databricks), Jonathan Frankle (Databricks), Erich Elsen (Databricks), Ivan Zhou (Databricks), Davis Blalock, Gayathri Murali (Meta)

https://bit.ly/devconnectdais