talk-data.com
Activities & events
| Title & Speakers | Event |
|---|---|
| PGConf NYC 2023 (2023-10-03 · 13:00). PGConf NYC 2023 (October 3-5, 2023, 237 Park Ave, New York, NY) is packed with user stories and best practices for how to use PostgreSQL. Join us and connect with other developers, DBAs, administrators, decision makers, and contributors to the open source PostgreSQL community! Register today on the PGConf NYC 2023 website: https://2023.pgconf.nyc/tickets/ You can find the schedule here: https://postgresql.us/events/pgconfnyc2023/schedule/ We're excited to have Andy Pavlo, Associate Professor at Carnegie Mellon University and co-founder of OtterTune, delivering the keynote this year at PGConf NYC 2023, as he provides a history of the relational model and SQL and how it has evolved through technological changes, including today's climate around AI/ML. PGConf NYC 2023 also has lots of content relevant to how you're running PostgreSQL, including case studies on managing large fleets and workloads on PostgreSQL, how to improve your query performance, different ways to minimize your downtime, and learning about upcoming PostgreSQL features! PGConf NYC 2023 is not possible without the generous support of our sponsors. PGConf NYC takes place in one of the largest markets of PostgreSQL users. Your sponsorship lets you connect with decision makers, developers, DBAs, and PostgreSQL contributors, helps keep ticket prices low, and helps grow the PostgreSQL community. For more information on sponsorship, please visit: https://2023.pgconf.nyc/sponsors/ Can't wait to participate in PGConf NYC 2023? Registration is available: https://2023.pgconf.nyc/tickets/ We look forward to seeing you in October! | PGConf NYC 2023 |
| The State of Databases Today (w/ Andy Pavlo) (2023-09-08 · 07:00). Guest: Andy Pavlo. Andy Pavlo is a professor of databaseology (he says it's a made-up word) at Carnegie Mellon, currently on leave to build his own company, OtterTune, which uses AI to figure out the settings that get the best performance out of databases. He is one of the preeminent minds on databases and a die-hard relational database maximalist. We talk about the state of databases today, why there are so many specialized databases (and whether we need so many), why tuning databases is so hard but important, and how the database landscape will evolve. | The Analytics Engineering Podcast |
| Make Database Performance Optimization A Playful Experience With OtterTune (2021-06-23 · 02:00). Guest: Andy Pavlo; Host: Tobias Macey. Summary: The database is the core of any system because it holds the data that drives your entire experience. We spend countless hours designing the data model, updating engine versions, and tuning performance. But how confident are you that you have configured it to be as performant as possible, given the dozens of parameters and how they interact with each other? Andy Pavlo researches autonomous database systems, and out of that research he created OtterTune to find the optimal set of parameters for your specific workload. In this episode he explains how the system works, the challenge of scaling it to work across different database engines, and his hopes for the future of database systems. Announcements: Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it's now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you've got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don't forget to thank them for their continued support of this show! RudderStack's smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today. We've all been asked to help with an ad-hoc request for data by the sales and marketing team. Then it becomes a critical report that they need updated every week or every day. Then what do you do? Send a CSV via email? Write some Python scripts to automate it? But what about incremental sync, API quotas, error handling, and all of the other details that eat up your time? Today, there is a better way. With Census, just write SQL or plug in your dbt models and start syncing your cloud warehouse to SaaS applications like Salesforce, Marketo, Hubspot, and many more. Go to dataengineeringpodcast.com/census today to get a free 14-day trial. Your host is Tobias Macey and today I'm interviewing Andy Pavlo about OtterTune, a system to continuously monitor and improve database performance via machine learning. Interview: Introduction. How did you get involved in the area of data management? Can you describe what OtterTune is and the story behind it? How does it relate to your work with NoisePage? What are the challenges that database administrators, operators, and users run into when working with, configuring, and tuning transactional systems? What are some of the contributing factors to the sprawling complexity of the configurable parameters for these databases? Can you describe how OtterTune is implemented? What are some of the aggregate benefits that OtterTune can gain by running as a centralized service and learning from all of the systems that it connects to? What are some of the assumptions that you made when starting the commercialization of this technology that have been challenged or invalidated as you began working with initial customers? How have the design and goals of the system changed or evolved since you first began working on it? What is involved in adding support for a new database engine? How applicable are the OtterTune capabilities to analytical… | Data Engineering Podcast |
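
The OtterTune episode above frames database tuning as an optimization problem over configuration knobs (for example PostgreSQL's shared_buffers and work_mem): apply a candidate configuration, measure the workload, and repeat. The sketch below illustrates that loop with plain random search over a synthetic cost function; it is not OtterTune's actual approach (the episode describes machine-learning models that learn across many deployments), and the knob names, ranges, and scoring function are assumptions invented for this example.

```python
import random

# Hypothetical knob ranges in MB. Real tuners consider dozens of interacting
# parameters; these two PostgreSQL-style knobs are only for illustration.
KNOB_RANGES = {
    "shared_buffers_mb": (128, 8192),
    "work_mem_mb": (4, 512),
}


def benchmark(config):
    """Stand-in for applying a configuration and measuring the workload.

    A real tuning loop would apply the settings to the database, replay a
    representative workload, and return an observed metric such as p99
    latency. Here we fake a cost surface with an interaction between knobs.
    """
    sb = config["shared_buffers_mb"]
    wm = config["work_mem_mb"]
    # Made-up cost: penalize small buffers and an imbalance between the knobs,
    # plus a little noise to mimic run-to-run benchmark variance.
    return abs(sb - 4096) / 4096 + abs(wm - sb / 32) / 512 + random.uniform(0.0, 0.05)


def random_search(iterations=200):
    """Try random configurations and keep the best one seen so far."""
    best_config, best_cost = None, float("inf")
    for _ in range(iterations):
        candidate = {knob: random.uniform(lo, hi) for knob, (lo, hi) in KNOB_RANGES.items()}
        cost = benchmark(candidate)
        if cost < best_cost:
            best_config, best_cost = candidate, cost
    return best_config, best_cost


if __name__ == "__main__":
    config, cost = random_search()
    print({knob: round(value) for knob, value in config.items()}, round(cost, 3))
```

In a real tuner the benchmark step would run queries against the live database, and the search would be guided by learned models rather than random sampling, which is why the interview's question about running as a centralized service and learning from every connected system matters.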