talk-data.com

Topic: postgresql (332 tagged activities)

Activity Trend: peak of 6 activities per quarter, 2020-Q1 to 2026-Q1

Activities

332 activities · Newest first

Ditch legacy and embrace freedom with AlloyDB Omni, your hybrid and multicloud enterprise database. Run anywhere, from data centers to the public clouds of your choice, and unlock performance and ease of management. Elevate your apps with HTAP and built-in generative AI to build vector embeddings for lightning-fast search, remotely or locally – no connectivity needed. Simplify operations with the Kubernetes operator: automate lifecycle, HA/DR, and scale effortlessly. Learn more about AlloyDB Omni and supercharge your data strategy, anywhere.

Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.

Your transactional data powers many applications, from analytics to generative AI and interactive online systems. AlloyDB unifies all these workloads onto a single, high-performance platform to extend your real-time data. This session dives into two built-in features: AlloyDB AI and the Analytics Accelerator. We'll show the key technologies behind these features, including Google's fast vector search and the columnar engine that enables fast analytical queries and hybrid transactional/analytical use cases. We'll share how customers simplified their analytical and gen AI apps with these two features.

AlloyDB is a fully PostgreSQL-compatible database, ready for top-tier workloads and for modernizing legacy proprietary databases in the cloud. Powered by Google's embedded storage for superior performance with full PostgreSQL compatibility, AlloyDB offers the best of the cloud with a scale-out architecture, a no-nonsense 99.99% availability SLA, intelligent caching, and ML-enabled adaptive systems that simplify database management. In this session, we'll cover what's new in AlloyDB, plus a deep dive into the technology that powers it.

As more organizations adopt open database standards, they need easy-to-use, high-performance migration tools, especially for heterogeneous migrations. In this session, we will focus on how Database Migration Service (DMS) is revolutionizing migrations from Oracle to AlloyDB for PostgreSQL. With a unique set of capabilities, DMS is harnessing the power of AI to accelerate these migrations and to improve developer productivity with last-mile code conversion and code explainability.

Developers choose PostgreSQL for its power, ecosystem, and enterprise-grade features. In this session, unlock best practices for building apps of all kinds with PostgreSQL. We'll cover Google Kubernetes Engine deployments, pgvector for generative AI development, performance optimization with caching, essential observability strategies, and more.
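As a companion to the pgvector mention above, here is a minimal sketch (not from the session) of the math behind pgvector's cosine-distance operator `<=>`, which returns 1 minus the cosine similarity of two vectors. A pure-Python version like this is handy for sanity-checking similarity-search results locally:

```python
import math

def cosine_distance(a, b):
    """Mirror pgvector's `<=>` operator: 1 - cosine_similarity(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Vectors pointing the same way have distance 0; orthogonal vectors, distance 1.
print(round(cosine_distance([1.0, 0.0], [2.0, 0.0]), 6))  # 0.0
print(round(cosine_distance([1.0, 0.0], [0.0, 1.0]), 6))  # 1.0
```

In SQL, the equivalent query would order rows by `embedding <=> query_vector`; an HNSW or IVFFlat index on the embedding column makes that ordering fast at scale.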

Logical decoding was introduced in PostgreSQL 9.4 and has been greatly improved since then. One of the popular use cases of this feature is implementation of the Change Data Capture (CDC) pattern. Although developers are usually eager to adopt this modern approach to make use of all the apparent benefits it brings, my strong belief is that using logical decoding in PostgreSQL requires at least a basic understanding of its implementation and should definitely be treated with great care. In this talk I will share the problems we, as Zalando's DBaaS team, dealt with while adopting and maintaining our internal CDC solution based on logical decoding, and point out the nuances you should always keep in mind when using this technology.
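To make the CDC pattern concrete, here is a minimal sketch (not from the talk) of the consumer side. PostgreSQL's built-in `test_decoding` output plugin emits text lines such as `table public.users: INSERT: id[integer]:1 name[text]:'bob'`, bracketed by `BEGIN`/`COMMIT` markers; a CDC consumer has to parse such lines back into change events. This parser extracts only the schema, table, and operation:

```python
import re

# Matches test_decoding data lines like:
#   table public.users: INSERT: id[integer]:1 name[text]:'bob'
LINE_RE = re.compile(r"^table (\w+)\.(\w+): (INSERT|UPDATE|DELETE):")

def parse_change(line):
    """Return (schema, table, operation) for a data line, else None."""
    m = LINE_RE.match(line)
    if m is None:  # BEGIN/COMMIT markers and unrecognized lines are skipped
        return None
    return m.group(1), m.group(2), m.group(3)

print(parse_change("table public.users: INSERT: id[integer]:1 name[text]:'bob'"))
print(parse_change("COMMIT 565"))
```

Production CDC tools consume a binary protocol (e.g. via `pgoutput`) rather than scraping text, but the shape of the problem is the same: every change flows through a replication slot, and an unconsumed slot retains WAL indefinitely, which is one of the classic operational pitfalls the talk alludes to.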

This talk is prepared as a deck of slides, each describing a really bad way people can break their PostgreSQL database, along with a weight: how frequently Ilya has seen that kind of problem. Right before the talk, Ilya will reshuffle the deck, draw twenty or so random slides, and explain why such practices are bad and how to avoid running into them.

Send us a text. Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!

In this episode, "Mamba to Challenge the Transformer Architecture?", we delve into a spectrum of tech topics and scrutinize the facets of productivity in the engineering realm. Here's a sneak peek of what we're discussing today:

Checkout or Switch: Exploring new tech tools and their necessity. For further reading, check out this article.
Mamba: A Viable Replacement for Transformers?: Discussing Mamba's potential to replace Transformer models in AI, and the paradigm shift it represents. Learn more in this LinkedIn post and look forward to its presentation at the ICLR 2024 conference.
Biggest Productivity Killers in the Engineering Industry: Identifying the top productivity obstacles in engineering, including perfectionism, procrastination, and context-switching. Dive deeper into the topic here.
Meta's Copyright Contradiction: Analyzing Meta's approach to copyright law in protecting its AI model while contesting similar protections for others. More on this discussion can be found here.
Speeding Up Postgres Analytical Queries: Showcasing how pg_analytics enhances Postgres analytical queries by 94x, posing a question on the need for other tools. For more insights, visit this blog post.

Intro music courtesy of fesliyanstudios.com

PostgreSQL Query Optimization: The Ultimate Guide to Building Efficient Queries

Write optimized queries. This book helps you write queries that perform fast and deliver results on time. You will learn that query optimization is not a dark art practiced by a small, secretive cabal of sorcerers. Any motivated professional can learn to write efficient queries from the get-go and capably optimize existing queries. You will learn to look at the process of writing a query from the database engine's point of view, and know how to think like the database optimizer. The book begins with a discussion of what a performant system is and progresses to measuring performance and setting performance goals. It introduces different classes of queries and optimization techniques suitable to each, such as the use of indexes and specific join algorithms. You will learn to read and understand query execution plans along with techniques for influencing those plans for better performance. The book also covers advanced topics such as the use of functions and procedures, dynamic SQL, and generated queries. All of these techniques are then used together to produce performant applications, avoiding the pitfalls of object-relational mappers. This second edition includes new examples using Postgres 15 and the newest version of the PostgresAir database. It includes additional details and clarifications about advanced topics, and covers configuration parameters in greater depth. Finally, it makes use of advancements in NORM, using automatically generated functions.

What You Will Learn:
Identify optimization goals in OLTP and OLAP systems
Read and understand PostgreSQL execution plans
Distinguish between short queries and long queries
Choose the right optimization technique for each query type
Identify indexes that will improve query performance
Optimize full table scans
Avoid the pitfalls of object-relational mapping systems
Optimize the entire application rather than just database queries

Who This Book Is For: IT professionals working in PostgreSQL who want to develop performant and scalable applications, anyone whose job title contains the words "database developer" or "database administrator" or who is a backend developer charged with programming database calls, and system architects involved in the overall design of application systems running against a PostgreSQL database.
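Reading execution plans, one of the skills the book teaches, starts with the node header that `EXPLAIN` prints, e.g. `Seq Scan on flight  (cost=0.00..1955.00 rows=100000 width=72)`: cost is startup..total in the planner's arbitrary units, and rows is the planner's estimate. As an illustrative sketch (not from the book), a small parser for that header makes the fields explicit:

```python
import re

# Matches the cost block in an EXPLAIN node header, e.g.:
#   Seq Scan on flight  (cost=0.00..1955.00 rows=100000 width=72)
NODE_RE = re.compile(
    r"\(cost=(?P<startup>[\d.]+)\.\.(?P<total>[\d.]+) "
    r"rows=(?P<rows>\d+) width=(?P<width>\d+)\)"
)

def parse_plan_node(line):
    """Extract the planner's cost and row estimates from one plan line."""
    m = NODE_RE.search(line)
    if m is None:
        return None
    return {
        "startup_cost": float(m.group("startup")),
        "total_cost": float(m.group("total")),
        "rows": int(m.group("rows")),
        "width": int(m.group("width")),
    }

print(parse_plan_node("Seq Scan on flight  (cost=0.00..1955.00 rows=100000 width=72)"))
```

Comparing the estimated `rows` here with the actual row counts from `EXPLAIN ANALYZE` is the quickest way to spot the stale statistics and bad selectivity estimates that lead the optimizer astray.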

Peter Farkas: Moving MongoDB Workloads to Postgres with FerretDB

"Peter Farkas unveils a game-changing solution in 'Moving MongoDB Workloads to Postgres with FerretDB.' 🔄 Learn how to seamlessly transition MongoDB workloads to Postgres without application-level changes, and ensure a smooth user experience with familiar tools and frameworks. 📦🐘 #MongoDB #Postgres #FerretDB"

✨ H I G H L I G H T S ✨

🙌 A huge shoutout to all the incredible participants who made Big Data Conference Europe 2023 in Vilnius, Lithuania, from November 21-24, an absolute triumph! 🎉 Your attendance and active participation were instrumental in making this event so special. 🌍

Don't forget to check out the session recordings from the conference to relive the valuable insights and knowledge shared! 📽️

Once again, THANK YOU for playing a pivotal role in the success of Big Data Conference Europe 2023. 🚀 See you next year for another unforgettable conference! 📅 #BigDataConference #SeeYouNextYear

PostgreSQL 16 Administration Cookbook

This cookbook is a comprehensive guide to mastering PostgreSQL 16 database administration. With over 180 practical recipes, this book covers everything from query performance and backup strategies to replication and high availability. You'll gain hands-on expertise in solving real-world challenges while leveraging the new and improved features of PostgreSQL 16.

What this Book will help me do:
Perform efficient batch processing with Postgres' SQL MERGE statement.
Implement parallel transaction processes using logical replication.
Enhance database backups and recovery with advanced compression techniques.
Monitor and fine-tune database performance for optimal operation.
Apply new PostgreSQL 16 features for secure and reliable databases.

Author(s): The team of authors, including Gianni Ciolli, Boriss Mejías, Jimmy Angelakos, Vibhor Kumar, and Simon Riggs, bring years of experience in PostgreSQL database management and development. Their expertise spans professional system administration, academic research, and contributions to PostgreSQL development. Their collaborative insights enrich this comprehensive guide.

Who is it for? This book is ideal for PostgreSQL database administrators seeking advanced techniques, data architects managing PostgreSQL in production, and developers interested in mastering PostgreSQL 16. Whether you're an experienced DBA upgrading to PostgreSQL 16 or a newcomer looking for practical recipes, this book provides valuable strategies and solutions.

Abstract: RisingWave is an open-source streaming database designed from scratch for the cloud. It implements a Snowflake-style storage-compute separation architecture to reduce cost, and provides users with a PostgreSQL-like experience for stream processing. Over the last three years, RisingWave has evolved from a one-person project to a rapidly growing product deployed by nearly 100 enterprises and startups. But the journey of building RisingWave has been full of challenges. In this talk, I'd like to share the lessons we've gained along four dimensions: 1) the decoupled compute-storage architecture, 2) the balance between stream processing and OLAP, 3) the Rust ecosystem, and 4) product positioning. I will dive deep into technical details and then share my views on the future of stream processing.

Get superior price and performance with Azure cloud-scale databases | BRK224H

Improve performance with the latest capabilities for Azure SQL Database, Azure Database for PostgreSQL, and SQL Server enabled by Azure Arc for hybrid and multicloud. You'll learn how customers enabled ongoing innovation by migrating to Azure Database for MySQL. This session covers tactical ways to get the most from your applications with databases that are easy to use, deliver unmatched price-performance, support open source, and enable transformative AI technologies.

To learn more, please check out these resources: * https://aka.ms/Ignite23CollectionsBRK224H * https://info.microsoft.com/ww-landing-contact-me-for-events-m365-in-person-events.html?LCID=en-us&ls=407628-contactme-formfill * https://aka.ms/ArcSQL * https://aka.ms/azure-ignite2023-dataaiblog

𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀: * Chandra Gavaravarapu * Maximilian Conrad * Shireesh Thota * Simon Faber * Vlad Rabenok * Xiaoxuan Guo * Ed Donahue * Aditya Badramraju * Bob Ward * Denzil Ribeiro * Parikshit Savjani

𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻: This video is one of many sessions delivered for the Microsoft Ignite 2023 event. View sessions on-demand and learn more about Microsoft Ignite at https://ignite.microsoft.com

BRK224H | English (US) | Data

MSIgnite

Summary

Databases are the core of most applications, but they are often treated as inscrutable black boxes. When an application is slow, there is a good probability that the database needs some attention. In this episode Lukas Fittl shares some hard-won wisdom about the causes and solutions of many performance bottlenecks and the work that he is doing to shine some light on PostgreSQL to make it easier to understand how to keep it running smoothly.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold

Your host is Tobias Macey and today I'm interviewing Lukas Fittl about optimizing your database performance and tips for tuning Postgres

Interview

Introduction
How did you get involved in the area of data management?
What are the different ways that database performance problems impact the business?
What are the most common contributors to performance issues?
What are the useful signals that indicate performance challenges in the database?

For a given symptom, what are the steps that you recommend for determining the proximate cause?

What are the potential negative impacts to be aware of when tuning?

Learn PostgreSQL - Second Edition

Learn PostgreSQL, a comprehensive guide to mastering PostgreSQL 16, takes readers on a journey from the fundamentals to advanced concepts, such as replication and database optimization. With hands-on exercises and practical examples, this book provides all you need to confidently use, manage, and build secure and scalable databases.

What this Book will help me do:
Master the essentials of PostgreSQL 16, including advanced SQL features and performance tuning.
Understand database replication methods and manage a scalable architecture.
Enhance database security through roles, schemas, and strict privilege management.
Learn how to personalize your experience with custom extensions and functions.
Acquire practical skills in backup, restoration, and disaster recovery planning.

Author(s): Luca Ferrari and Enrico Pirozzi are experienced database engineers and PostgreSQL enthusiasts with years of experience using and teaching PostgreSQL technology. They specialize in creating learning content that is practical and focused on real-world situations. Their writing emphasizes clarity and systematically equips readers with professional skills.

Who is it for? This book is perfect for database professionals, software developers, and system administrators looking to develop their PostgreSQL expertise. Beginners with an interest in databases will also find this book highly approachable. Ideal for readers seeking to improve their database scalability and robustness. If you aim to hone practical PostgreSQL skills, this guide is essential.

Procedural Programming with PostgreSQL PL/pgSQL: Design Complex Database-Centric Applications with PL/pgSQL

Learn the fundamentals of PL/pgSQL, the programming language of PostgreSQL, the most robust open source relational database. This book provides practical insights into developing database code objects such as functions and procedures, with a focus on effectively handling strings, numbers, and arrays to achieve desired outcomes, and on transaction management. The unique approach to handling triggers in PostgreSQL ensures that both functionality and performance are maintained without compromise. You'll gain proficiency in writing inline/anonymous server-side code within its limitations, along with learning essential debugging and profiling techniques. Additionally, the book delves into statistical analysis of PL/pgSQL code and offers valuable knowledge on managing exceptions while writing code blocks. Finally, you'll explore the installation and configuration of extensions to enhance the performance of stored procedures and functions.

What You'll Learn:
Understand PL/pgSQL concepts
Learn to debug, profile, and optimize PL/pgSQL code
Study linting PL/pgSQL code
Review transaction management within PL/pgSQL code
Work with developer-friendly features like operators, casts, and aggregators

Who Is This Book For: App developers, database migration consultants, and database administrators.