talk-data.com

Topic: Oracle
Tags: database, enterprise_software, cloud
541 activities tagged

Activity trend: peak of 33 per quarter, 2020-Q1 to 2026-Q1

Activities

541 activities · Newest first

Summary

Data warehouses have gone through many transformations, from standard relational databases on powerful hardware, to column-oriented storage engines, to the current generation of cloud-native analytical engines. SnowflakeDB has been leading the charge to take advantage of cloud services that simplify the separation of compute and storage. In this episode Kent Graziano, chief technical evangelist for SnowflakeDB, explains how it is differentiated from other managed platforms and traditional data warehouse engines, the features that allow you to scale your usage dynamically, and how it allows for a shift in your workflow from ETL to ELT. If you are evaluating your options for building or migrating a data platform, then this is definitely worth a listen.
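The warehouse scaling and ETL-to-ELT shift described in the summary can be sketched from Snowflake's Python connector. This is a minimal illustration rather than anything from the episode: it assumes the snowflake-connector-python package, a stage named @event_stage, a raw_events table with a single VARIANT column v, and placeholder account credentials.

    import snowflake.connector  # pip install snowflake-connector-python

    # All connection values are placeholders.
    conn = snowflake.connector.connect(
        account="xy12345", user="analyst", password="***",
        warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
    )
    cur = conn.cursor()

    # Scale compute up for the load; storage is unaffected because the two are separated.
    cur.execute("ALTER WAREHOUSE LOAD_WH SET WAREHOUSE_SIZE = 'LARGE'")

    # EL: land raw JSON files from a stage. T: transform them inside Snowflake afterwards.
    cur.execute("COPY INTO raw_events FROM @event_stage FILE_FORMAT = (TYPE = 'JSON')")
    cur.execute("""
        CREATE OR REPLACE TABLE events_clean AS
        SELECT v:id::NUMBER AS id, v:ts::TIMESTAMP AS ts, v:payload AS payload
        FROM raw_events
    """)

    # Scale back down when the batch is done so you only pay for what you used.
    cur.execute("ALTER WAREHOUSE LOAD_WH SET WAREHOUSE_SIZE = 'XSMALL'")
    conn.close()

Because compute is resized independently of storage, the loaded data is untouched by the warehouse changes; that separation is what makes the ELT pattern practical here.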

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show! You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media and the Python Software Foundation. Upcoming events include the Software Architecture Conference in NYC and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today. Your host is Tobias Macey and today I’m interviewing Kent Graziano about SnowflakeDB, the cloud-native data warehouse.

Interview

Introduction How did you get involved in the area of data management? Can you start by explaining what SnowflakeDB is for anyone who isn’t familiar with it?

How does it compare to the other available platforms for data warehousing? How does it differ from traditional data warehouses?

How does the performance and flexibility affect the data modeling requirements?

Snowflake is one of the data stores that is enabling the shift from an ETL to an ELT workflow. What are the features that allow for that approach and what are some of the challenges that it introduces? Can you describe how the platform is architected and some of the ways that it has evolved as it has grown in popularity?

What are some of the current limitations that you are struggling with?

For someone getting started with Snowflake what is involved with loading data into the platform?

What is their workflow for allocating and scaling compute capacity and running analyses?

One of the interesting features enabled by your architecture is data sharing. What are some of the most interesting or unexpected uses of that capability that you have seen? What are some other features or use cases for Snowflake that are not as well known or publicized which you think users should know about? When is SnowflakeDB the wrong choice? What are some of the plans for the future of SnowflakeDB?

Contact Info

LinkedIn Website @KentGraziano on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

SnowflakeDB

Free Trial Stack Overflow

Data Warehouse Oracle DB MPP == Massively Parallel Processing Shared Nothing Architecture Multi-Cluster Shared Data Architecture Google BigQuery AWS Redshift AWS Redshift Spectrum Presto

Podcast Episode

SnowflakeDB Semi-Structured Data Types Hive ACID == Atomicity, Consistency, Isolation, Durability 3rd Normal Form Data Vault Modeling Dimensional Modeling JSON AVRO Parquet SnowflakeDB Virtual Warehouses CRM == Customer Relationship Management Master Data Management

Podcast Episode

FoundationDB

Podcast Episode

Apache Spark

Podcast Episode

SSIS == SQL Server Integration Services Talend Informatica Fivetran

Podcast Episode

Matillion Apache Kafka Snowpipe Snowflake Data Exchange OLTP == Online Transaction Processing GeoJSON Snowflake Documentation SnowAlert Splunk Data Catalog

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Summary

With the constant evolution of technology for data management it can seem impossible to make an informed decision about whether to build a data warehouse, or a data lake, or just leave your data wherever it currently rests. What’s worse is that any time you have to migrate to a new architecture, all of your analytical code has to change too. Thankfully it’s possible to add an abstraction layer to eliminate the churn in your client code, allowing you to evolve your data platform without disrupting your downstream data users. In this episode AtScale co-founder and CTO Matthew Baird describes how the data virtualization and data engineering automation capabilities that are built into the platform free up your engineers to focus on your business needs without having to waste cycles on premature optimization. This was a great conversation about the power of abstractions and appreciating the value of increasing the efficiency of your data team.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show! This week’s episode is also sponsored by Datacoral, an AWS-native, serverless data infrastructure that installs in your VPC. Datacoral helps data engineers build and manage the flow of data pipelines without having to manage any infrastructure, meaning you can spend your time invested in data transformations and business needs, rather than pipeline maintenance. Raghu Murthy, founder and CEO of Datacoral, built data infrastructures at Yahoo! and Facebook, scaling from terabytes to petabytes of analytic data. He started Datacoral with the goal to make SQL the universal data programming language. Visit dataengineeringpodcast.com/datacoral today to find out more. Having all of your logs and event data in one place makes your life easier when something breaks, unless that something is your Elasticsearch cluster because it’s storing too much data. CHAOSSEARCH frees you from having to worry about data retention, unexpected failures, and expanding operating costs. They give you a fully managed service to search and analyze all of your logs in S3, entirely under your control, all for half the cost of running your own Elasticsearch cluster or using a hosted platform. Try it out for yourself at dataengineeringpodcast.com/chaossearch and don’t forget to thank them for supporting the show! You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, Corinium Global Intelligence, Alluxio, and Data Council. Upcoming events include the combined events of the Data Architecture Summit and Graphorum, the Data Orchestration Summit, and Data Council in NYC. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today. Your host is Tobias Macey and today I’m interviewing Matt Baird about AtScale, a platform that provides data virtualization and data engineering automation.

Interview

Introduction How did you get involved in the area of data management? Can you start by describing the AtScale platform and how it fits in the ecosystem of data tools? What was your motivation for building the platform and what were some of the early challenges that you faced in achieving your current level of success? How is the AtScale platform architected and what have been some of the main areas of evolution and change since you first began building it?

How has the surrounding data ecosystem changed since AtScale was founded? How are current industry trends influencing your product focus?

Can you talk through the workflow for someone implementing AtScale? What are some of the main use cases that benefit from data virtualization capabilities?

How does it influence the relevancy of data warehouses or data lakes?

What are some of the types of tools or patterns that AtScale replaces in a data platform? What are some of the most interesting or unexpected ways that you have seen AtScale used? What have been some of the most challenging aspects of building and growing the platform? When is AtScale the wrong choice? What do you have planned for the future of the platform and business?

Contact Info

LinkedIn @zetty on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Links

AtScale PeopleSoft Oracle Hadoop PrestoDB Impala Apache Kylin Apache Druid Go Language Scala

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Oracle Database Application Security: With Oracle Internet Directory, Oracle Access Manager, and Oracle Identity Manager

Focus on the security aspects of designing, building, and maintaining a secure Oracle Database application. Starting with data encryption, you will learn to work with transparent data, back-up, and networks. You will then go through the key principles of audits, where you will get to know more about identity preservation, policies and fine-grained audits. Moving on to virtual private databases, you’ll set up and configure a VPD to work in concert with other security features in Oracle, followed by tips on managing configuration drift, profiles, and default users. Shifting focus to coding, you will take a look at secure coding standards, multi-schema database models, code-based access control, and SQL injection. Finally, you’ll cover single sign-on (SSO), and will be introduced to Oracle Internet Directory (OID), Oracle Access Manager (OAM), and Oracle Identity Management (OIM) by installing and configuring them to meet your needs. Oracle databases hold the majority of the world’s relational data, and are attractive targets for attackers seeking high-value targets for data theft. Compromise of a single Oracle Database can result in tens of millions of breached records costing millions in breach-mitigation activity. This book gets you ready to avoid that nightmare scenario. What You Will Learn Work with Oracle Internet Directory using the command-line and the console Integrate Oracle Access Manager with different applications Work with the Oracle Identity Manager console and connectors, while creating your own custom one Troubleshoot issues with OID, OAM, and OIM Dive deep into file system and network security concepts Who This Book Is For Oracle DBAs and developers. Readers will need a basic understanding of Oracle RDBMS and Oracle Application Server to take complete advantage of this book.
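Several of the topics above (code-based access control, SQL injection) come down to how application code talks to the database. The fragment below is a small, hedged illustration of the bind-variable discipline that guards against injection, using the python-oracledb driver; the connection string, table, and column names are placeholders, not examples from the book.

    import oracledb  # pip install oracledb

    # Placeholder connection details.
    conn = oracledb.connect(user="app_user", password="***", dsn="dbhost/orclpdb1")
    cur = conn.cursor()

    customer_id = "42"  # stand-in for untrusted input arriving from the application layer

    # Vulnerable pattern: string concatenation lets the input rewrite the statement.
    # cur.execute("SELECT name FROM customers WHERE id = " + customer_id)

    # Safer pattern: a bind variable keeps the input as data, never as SQL text.
    cur.execute("SELECT name FROM customers WHERE id = :cid", {"cid": customer_id})
    for (name,) in cur:
        print(name)
    conn.close()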

SQL Server 2019 Revealed: Including Big Data Clusters and Machine Learning

Get up to speed on the game-changing developments in SQL Server 2019. No longer just a database engine, SQL Server 2019 is cutting edge with support for machine learning (ML), big data analytics, Linux, containers, Kubernetes, Java, and data virtualization to Azure. This is not a book on traditional database administration for SQL Server. It focuses on all that is new for one of the most successful modernized data platforms in the industry. It is a book for data professionals who already know the fundamentals of SQL Server and want to up their game by building their skills in some of the hottest new areas in technology. SQL Server 2019 Revealed begins with a look at the project team's goal to integrate the world of big data with SQL Server into a major product release. The book then dives into the details of key new capabilities in SQL Server 2019 using a “learn by example” approach for Intelligent Performance, security, mission-critical availability, and features for the modern developer. Also covered are enhancements to SQL Server 2019 for Linux, along with a comprehensive look at SQL Server using containers and Kubernetes clusters. The book concludes by showing you how to virtualize your data access with Polybase to Oracle, MongoDB, Hadoop, and Azure, allowing you to reduce the need for expensive extract, transform, and load (ETL) applications. You will then learn how to take your knowledge of containers, Kubernetes, and Polybase to build a comprehensive solution called Big Data Clusters, which is a marquee feature of 2019. You will also learn how to gain access to Spark, SQL Server, and HDFS to build intelligence over your own data lake and deploy end-to-end machine learning applications. What You Will Learn Implement Big Data Clusters with SQL Server, Spark, and HDFS Create a Data Hub with connections to Oracle, Azure, Hadoop, and other sources Combine SQL and Spark to build a machine learning platform for AI applications Boost your performance with no application changes using Intelligent Performance Increase security of your SQL Server through Secure Enclaves and Data Classification Maximize database uptime through online indexing and Accelerated Database Recovery Build new modern applications with Graph, ML Services, and T-SQL Extensibility with Java Improve your ability to deploy SQL Server on Linux Gain in-depth knowledge to run SQL Server with containers and Kubernetes Know all the new database engine features for performance, usability, and diagnostics Use the latest tools and methods to migrate your database to SQL Server 2019 Apply your knowledge of SQL Server 2019 to Azure Who This Book Is For IT professionals and developers who understand the fundamentals of SQL Server and wish to focus on learning about the new, modern capabilities of SQL Server 2019. The book is for those who want to learn about SQL Server 2019 and the new Big Data Clusters and AI feature set, support for machine learning and Java, how to run SQL Server with containers and Kubernetes, and increased capabilities around Intelligent Performance, advanced security, and high availability.

Beginning Oracle SQL for Oracle Database 18c: From Novice to Professional

Start developing with Oracle SQL. This book is a one-stop introduction to everything you need to know about getting started developing an Oracle Database. You'll learn about foundational concepts, setting up a simple schema, adding data, reading data from the database, and making changes. No experience with databases is required to get started. Examples in the book are built around Oracle Live SQL, a freely available, online sandbox for practicing and experimenting with SQL statements, and Oracle Express Edition, a free version of Oracle Database that is available for download. A marquee feature of Beginning Oracle SQL for Oracle Database 18c is the small chapter size. Content is divided into easily digestible chunks that can be read and practiced in very short intervals of time, making this the ideal book for a busy professional to learn from. Even just a 15-20 minute block of free time can be put to good use. Author Ben Brumm begins by helping you understand what a database is, and getting you set up with a sandbox in which to practice the SQL that you are learning. From there, easily digestible chapters cover, point-by-point, the different aspects of writing queries to get data out of a database. You’ll also learn about creating tables and getting data into the database. Crucial topics such as working with nulls and writing analytic queries are given the attention they deserve, helping you to avoid pitfalls when writing queries for production use. What You'll Learn Create, update, and delete tables in an Oracle database Add, update, and delete data from those database tables Query and view data stored in your database Manipulate and transform data using in-built database functions and features Correctly choose when to use Oracle-specific syntax and features Who This Book Is For Those new to Oracle who are planning to develop software using Oracle as the back-end data store. The book is also for those who are getting started in software development and realize they need to learn some kind of database language. Those who are learning software development on the side of their normal job, or learning it as a college student, who are ready to learn what a database is and how to use it also will find this book useful.
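The create/insert/query loop the book walks through looks like this when driven from Python rather than the browser-based Oracle Live SQL sandbox. This is a hedged sketch, not material from the book: it assumes the python-oracledb driver, an Oracle XE instance at localhost/xepdb1, and placeholder credentials and table names.

    import oracledb

    conn = oracledb.connect(user="student", password="***", dsn="localhost/xepdb1")
    cur = conn.cursor()

    # Create a simple schema object.
    cur.execute("CREATE TABLE books (id NUMBER PRIMARY KEY, title VARCHAR2(200))")

    # Add data using bind variables.
    cur.executemany(
        "INSERT INTO books (id, title) VALUES (:1, :2)",
        [(1, "Beginning Oracle SQL"), (2, "Pro Oracle SQL Development")],
    )
    conn.commit()

    # Read it back.
    cur.execute("SELECT id, title FROM books ORDER BY id")
    for row in cur:
        print(row)
    conn.close()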

Pro Oracle SQL Development: Best Practices for Writing Advanced Queries

Write SQL statements that are more powerful, simpler, and faster using Oracle SQL and its full range of features. This book provides a clearer way of thinking about SQL by building sets, and provides practical advice for using complex features while avoiding anti-patterns that lead to poor performance and wrong results. Relevant theories, real-world best practices, and style guidelines help you get the most out of Oracle SQL. Pro Oracle SQL Development is for anyone who already knows Oracle SQL and is ready to take their skills to the next level. Many developers, analysts, testers, and administrators use Oracle databases frequently, but their queries are limited because they do not have the knowledge, experience, or right environment to help them take full advantage of Oracle’s advanced features. This book will inspire you to achieve more with your Oracle SQL statements through tips for creating your own style for writing simple, yet powerful, SQL. It teaches you how to think about and solve performance problems in Oracle SQL, and covers advanced topics and shows you how to become an Oracle expert. What You'll Learn Understand the power of Oracle SQL and where to apply it Create a database development environment that is simple, scalable, and conducive to learning Solve complex problems that were previously solved in a procedural language Write large Oracle SQL statements that are powerful, simple, and fast Apply coding styles to make your SQL statements more readable Tune large Oracle SQL statements to eliminate and avoid performance problems Who This Book Is For Developers, testers, analysts, and administrators who want to harness the full power of Oracle SQL to solve their problems as simply and as quickly as possible. For traditional database professionals the book offers new ways of thinking about the language they have used for so long. For modern full stack developers the book explains how a database can be much more than simply a place to store data.

SQL All-In-One For Dummies, 3rd Edition

The latest on SQL databases. SQL All-In-One For Dummies, 3rd Edition, is a one-stop shop for everything you need to know about SQL and SQL-based relational databases. Everyone from database administrators to application programmers and the people who manage them will find clear, concise explanations of the SQL language and its many powerful applications. With the ballooning amount of data out there, more and more businesses, large and small, are moving from spreadsheets to SQL databases like Access, Microsoft SQL Server, Oracle databases, MySQL, and PostgreSQL. This compendium of information covers designing, developing, and maintaining these databases. Cope with any issue that arises in SQL database creation and management Get current on the newest SQL updates and capabilities Reference information on querying SQL-based databases in the SQL language Understand relational databases and their importance to today’s organizations SQL All-In-One For Dummies is a timely update to the popular reference for readers who want detailed information about SQL databases and queries.

Oracle DBA Mentor: Succeeding as an Oracle Database Administrator

New Oracle database administrators can get off the ground running. This book helps you develop the ability to think on your feet and move focus in an instant from arcane syntax details to broad, corporate issues. Along the way, you will see how to create your first database and implement best practices to ensure a well-running database system. What makes Oracle DBA Mentor different is that it also teaches you how to obtain answers that are not found in this or other books. Focus is given to creating a test bed and running test cases to examine hypotheses and prove out solutions so you can be sure they work in production. Attention is given to navigating product documentation and networking in forums and social media to build your skills and a network to draw on when solving problems under pressure. There are chapters of step-by-step technical content as well as coverage of essential skills to succeed as a DBA no matter which database engine you administer. By the time you are done reading this book, you will have confidence to face many of the situations thrown in your direction. You will know where to go for the answers you don’t yet know that you need. You’ll be able to work and troubleshoot under pressure. You’ll know how to create a database, institute backup and recovery procedures, secure the database and its valuable corporate data, and acquire more knowledge as needed so you can run a database to meet the needs of your organization. What You'll Learn Install Oracle Database with best practices Implement backup and recovery procedures Understand the fundamentals of databases and data security Find answers to technical problems using Oracle documentation, Oracle Support, and other resources Patch and upgrade an Oracle database Who This Book Is For The novice database administrator who wants help getting off the ground with their DBA career, and in building the skills to let that career flourish in the long term. Mid-level DBAs will also find the book helpful as they try to grow their career to the next level. While the book is geared toward the Oracle platform, database administrators from other platforms can benefit from the soft skills covered in this book.

Security with AI and Machine Learning

For security professionals seeking reliable ways to combat persistent threats to their networks, there’s encouraging news. Tools that employ AI and machine learning have begun to replace the older rules- and signature-based tools that can no longer combat today’s sophisticated attacks. In this ebook, Oracle’s Laurent Gil and Recorded Future’s Allan Liska look at the strengths (and limitations) of AI- and ML-based security tools for dealing with today’s threat landscape. This high-level overview demonstrates how these new tools use AI and ML to quickly identify threats, connect attack patterns, and allow operators and analysts to focus on their core mission. You’ll also learn how managed security service providers (MSSPs) use AI and ML to identify patterns from across their customer base. This ebook explains: Why rules-based, signature-based, and firewall solutions have fallen short How automated bots enable cybercriminals and nation-state actors to attack your network The evolution of the botnet: how threat actors constantly change their attack strategy How AI and ML techniques in web applications help you observe, quantify, and classify inbound requests How to detect insider threats and advanced persistent threat actors with AI and ML tools Case studies that show how a media company, an airline, and a university use AI and ML in security
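As a generic, hedged illustration of the kind of anomaly detection the ebook discusses (and not the vendors' actual tooling), a scikit-learn isolation forest can flag request patterns that sit far outside normal traffic. The feature names and numbers below are made up for the sketch.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy features per client: [requests_per_minute, distinct_paths, error_ratio].
    rng = np.random.default_rng(0)
    normal_traffic = rng.normal(loc=[60, 8, 0.02], scale=[10, 2, 0.01], size=(500, 3))
    suspect = np.array([[900, 250, 0.4]])  # a bot-like burst of requests

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
    print(model.predict(suspect))  # -1 means the pattern is flagged as anomalous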

Summary

The past year has been an active one for the timeseries market. New products have been launched, more businesses have moved to streaming analytics, and the team at Timescale has been keeping busy. In this episode the TimescaleDB CEO Ajay Kulkarni and CTO Michael Freedman stop by to talk about their 1.0 release, how the use cases for timeseries data have proliferated, and how they are continuing to simplify the task of processing your time-oriented events.
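Because TimescaleDB is a PostgreSQL extension, the time-series workflow the episode covers is plain SQL from any Postgres client. Below is a minimal sketch with psycopg2, assuming the extension is installed and using placeholder connection details and table names.

    import psycopg2  # assumes a PostgreSQL instance with the timescaledb extension installed

    conn = psycopg2.connect("postgresql://tsdb:***@localhost:5432/metrics")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT,
            temperature DOUBLE PRECISION
        )
    """)
    # Turn the plain table into a hypertable partitioned on the time column.
    cur.execute("SELECT create_hypertable('conditions', 'time')")

    cur.execute("INSERT INTO conditions VALUES (now(), %s, %s)", ("dev-1", 21.5))

    # time_bucket() is the building block for time-oriented rollups.
    cur.execute("""
        SELECT time_bucket('5 minutes', time) AS bucket, device_id, avg(temperature)
        FROM conditions
        GROUP BY bucket, device_id
        ORDER BY bucket
    """)
    print(cur.fetchall())
    conn.commit()
    conn.close()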

Introduction

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat Your host is Tobias Macey and today I’m welcoming Ajay Kulkarni and Mike Freedman back to talk about how TimescaleDB has grown and changed over the past year

Interview

Introduction How did you get involved in the area of data management? Can you refresh our memory about what TimescaleDB is? How has the market for timeseries databases changed since we last spoke? What has changed in the focus and features of the TimescaleDB project and company? Toward the end of 2018 you launched the 1.0 release of Timescale. What were your criteria for establishing that milestone?

What were the most challenging aspects of reaching that goal?

In terms of timeseries workloads, what are some of the factors that differ across varying use cases?

How do those differences impact the ways in which Timescale is used by the end user, and built by your team?

What are some of the initial assumptions that you made while first launching Timescale that have held true, and which have been disproven? How have the improvements and new features in the recent releases of PostgreSQL impacted the Timescale product?

Have you been able to leverage some of the native improvements to simplify your implementation? Are there any use cases for Timescale that would have been previously impractical in vanilla Postgres that would now be reasonable without the help of Timescale?

What is in store for the future of the Timescale product and organization?

Contact Info

Ajay

@acoustik on Twitter LinkedIn

Mike

LinkedIn Website @michaelfreedman on Twitter

Timescale

Website Documentation Careers timescaledb on GitHub @timescaledb on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

TimescaleDB Original Appearance on the Data Engineering Podcast 1.0 Release Blog Post PostgreSQL

Podcast Interview

RDS DB-Engines MongoDB IOT (Internet Of Things) AWS Timestream Kafka Pulsar

Podcast Episode

Spark

Podcast Episode

Flink

Podcast Episode

Hadoop DevOps PipelineDB

Podcast Interview

Grafana Tableau Prometheus OLTP (Online Transaction Processing) Oracle DB Data Lake

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast

Java XML and JSON: Document Processing for Java SE

Use this guide to master the XML metalanguage and JSON data format along with significant Java APIs for parsing and creating XML and JSON documents from the Java language. New in this edition is coverage of Jackson (a JSON processor for Java) and Oracle’s own Java API for JSON processing (JSON-P), which is a JSON processing API for Java EE that also can be used with Java SE. This new edition of Java XML and JSON also expands coverage of DOM and XSLT to include additional API content and useful examples. All examples in this book have been tested under Java 11. In some cases, source code has been simplified to use Java 11’s var language feature. The first six chapters focus on XML along with the SAX, DOM, StAX, XPath, and XSLT APIs. The remaining six chapters focus on JSON along with the mJson, GSON, JsonPath, Jackson, and JSON-P APIs. Each chapter ends with select exercises designed to challenge your grasp of the chapter's content. An appendix provides the answers to these exercises. What You'll Learn Master the XML language Create, validate, parse, and transform XML documents Apply Java’s SAX, DOM, StAX, XPath, and XSLT APIs Master the JSON format for serializing and transmitting data Code against third-party APIs such as Jackson, mJson, Gson, JsonPath Master Oracle’s JSON-P API in a Java SE context Who This Book Is For Intermediate and advanced Java programmers who are developing applications that must access data stored in XML or JSON documents. The book also targets developers wanting to understand the XML language and JSON data format.

Summary

The Hadoop platform is purpose-built for processing large, slow-moving data in long-running batch jobs. As the ecosystem around it has grown, so has the need for fast data analytics on fast-moving data. To fill this need the Kudu project was created with a column-oriented table format that was tuned for high volumes of writes and rapid query execution across those tables. For a perfect pairing, they made it easy to connect to the Impala SQL engine. In this episode Brock Noland and Jordan Birdsell from PhData explain how Kudu is architected, how it compares to other storage systems in the Hadoop orbit, and how to start integrating it into your analytics pipeline.
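The Impala pairing mentioned above means a Kudu table is created and queried with ordinary SQL. The sketch below is an illustration only, not something from the episode: it uses the impyla client with a placeholder coordinator host and table definition.

    from impala.dbapi import connect  # pip install impyla

    conn = connect(host="impala-coordinator.example.com", port=21050)
    cur = conn.cursor()

    # A Kudu-backed table defined through Impala: the primary key and hash partitioning
    # are what give Kudu its fast writes and rapid scans.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS metrics (
            host  STRING,
            ts    BIGINT,
            value DOUBLE,
            PRIMARY KEY (host, ts)
        )
        PARTITION BY HASH (host) PARTITIONS 4
        STORED AS KUDU
    """)

    # UPSERT works only on Kudu tables and is what supports analytics on mutable data.
    cur.execute("UPSERT INTO metrics VALUES ('web-01', 1546300800, 0.75)")
    cur.execute("SELECT host, avg(value) FROM metrics GROUP BY host")
    print(cur.fetchall())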

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat Your host is Tobias Macey and today I’m interviewing Brock Noland and Jordan Birdsell about Apache Kudu and how it is able to provide fast analytics on fast data in the Hadoop ecosystem

Interview

Introduction How did you get involved in the area of data management? Can you start by explaining what Kudu is and the motivation for building it?

How does it fit into the Hadoop ecosystem? How does it compare to the work being done on the Iceberg table format?

What are some of the common application and system design patterns that Kudu supports? How is Kudu architected and how has it evolved over the life of the project? There are many projects in and around the Hadoop ecosystem that rely on Zookeeper as a building block for consensus. What was the reasoning for using Raft in Kudu? How does the storage layer in Kudu differ from what would be found in systems like Hive or HBase?

What are the implementation details in the Kudu storage interface that have had the greatest impact on its overall speed and performance?

A number of the projects built for large scale data processing were not initially built with a focus on operational simplicity. What are the features of Kudu that simplify deployment and management of production infrastructure? What was the motivation for using C++ as the language target for Kudu?

If you were to start the project over today what would you do differently?

What are some situations where you would advise against using Kudu? What have you found to be the most interesting/unexpected/challenging lessons learned in the process of building and maintaining Kudu? What are you most excited about for the future of Kudu?

Contact Info

Brock

LinkedIn @brocknoland on Twitter

Jordan

LinkedIn @jordanbirdsell jbirdsell on GitHub

PhData

Website phdata on GitHub @phdatainc on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Kudu PhData Getting Started with Apache Kudu Thomson Reuters Hadoop Oracle Exadata Slowly Changing Dimensions HDFS S3 Azure Blob Storage State Farm Stanley Black & Decker ETL (Extract, Transform, Load) Parquet

Podcast Episode

ORC HBase Spark

Podcast Episode

SQL For Dummies, 9th Edition

Get ready to make SQL easy! Updated for the latest version of SQL, the new edition of this perennial bestseller shows programmers and web developers how to use SQL to build relational databases and get valuable information from them. Covering everything you need to know to make working with SQL easier than ever, topics include how to use SQL to structure a DBMS and implement a database design; secure a database; and retrieve information from a database; and much more. SQL is the international standard database language used to create, access, manipulate, maintain, and store information in relational database management systems (DBMS) such as Access, Oracle, SQL Server, and MySQL. SQL adds powerful data manipulation and retrieval capabilities to conventional languages—and this book shows you how to harness the core element of relational databases with ease. Server platform that gives you choices of development languages, data types, on-premises or cloud, and operating systems Find great examples on the use of temporal data Jump right in—without previous knowledge of database programming or SQL As database-driven websites continue to grow in popularity—and complexity— SQL For Dummies is the easy-to-understand, go-to resource you need to use it seamlessly.

Dynamic Oracle Performance Analytics: Using Normalized Metrics to Improve Database Speed

Use an innovative approach that relies on big data and advanced analytical techniques to analyze and improve Oracle Database performance. The approach used in this book represents a step-change paradigm shift away from traditional methods. Instead of relying on a few hand-picked, favorite metrics, or wading through multiple specialized tables of information such as those found in an automatic workload repository (AWR) report, you will draw on all available data, applying big data methods and analytical techniques to help the performance tuner draw impactful, focused performance improvement conclusions. This book briefly reviews past and present practices, along with available tools, to help you recognize areas where improvements can be made. The book then guides you through a step-by-step method that can be used to take advantage of all available metrics to identify problem areas and work toward improving them. The method presented simplifies the tuning process and solves the problem of metric overload. You will learn how to: collect and normalize data, generate deltas that are useful in performing statistical analysis, create and use a taxonomy to enhance your understanding of problem performance areas in your database and its applications, and create a root cause analysis report that enables understanding of a specific performance problem and its likely solutions. What You'll Learn Collect and prepare metrics for analysis from a wide array of sources Apply statistical techniques to select relevant metrics Create a taxonomy to provide additional insight into problem areas Provide a metrics-based root cause analysis regarding the performance issue Generate an actionable tuning plan prioritized according to problem areas Monitor performance using database-specific normal ranges Who This Book Is For Professional tuners responsible for maintaining the efficient operation of large-scale databases who wish to focus on analysis, who want to expand their repertoire to include a big data methodology and use metrics without being overwhelmed, who desire to provide accurate root cause analysis and avoid the cyclical fix-test cycles that are inevitable when speculation is used
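The normalize-and-delta workflow the blurb describes can be pictured with a toy pandas example. This is a generic sketch of the idea, not the book's code; the metric names and values are invented.

    import pandas as pd

    # Toy AWR-style snapshots: one row per snapshot, one column per cumulative metric.
    snapshots = pd.DataFrame(
        {"physical_reads": [1000, 2100, 3150, 4230, 5300, 14300],
         "parse_count":    [200, 410, 600, 805, 1010, 1215],
         "user_commits":   [50, 105, 157, 211, 264, 318]},
        index=pd.date_range("2019-01-01", periods=6, freq="H"),
    )

    # Deltas between consecutive snapshots, then z-score normalization per metric
    # so metrics with very different units can be compared on one scale.
    deltas = snapshots.diff().dropna()
    normalized = (deltas - deltas.mean()) / deltas.std()

    # Flag metric/interval pairs that fall far outside their own normal range.
    outliers = normalized[normalized.abs() > 1.5].stack()
    print(outliers)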

Pro SQL Server on Linux: Including Container-Based Deployment with Docker and Kubernetes

Get SQL Server up and running on the Linux operating system and containers. No database professional managing or developing SQL Server on Linux will want to be without this deep and authoritative guide by one of the most respected experts on SQL Server in the industry. Get an inside look at how SQL Server for Linux works through the eyes of an engineer on the team that made it possible. Microsoft SQL Server is one of the leading database platforms in the industry, and SQL Server 2017 offers developers and administrators the ability to run a database management system on Linux, offering proven support for enterprise-level features and without onerous licensing terms. Organizations invested in Microsoft and open source technologies are now able to run a unified database platform across all their operating system investments. Organizations are further able to take full advantage of containerization through popular platforms such as Docker and Kubernetes. Pro SQL Server on Linux walks you through installing and configuring SQL Server on the Linux platform. The author is one of the principal architects of SQL Server for Linux, and brings a corresponding depth of knowledge that no database professional or developer on Linux will want to be without. Throughout this book are internals of how SQL Server on Linux works, including an in-depth look at the innovative architecture. The book covers day-to-day management and troubleshooting, including diagnostics and monitoring, the use of containers to manage deployments, and the use of self-tuning and the in-memory capabilities. Also covered are performance capabilities, high availability, and disaster recovery along with security and encryption. The book covers the product-specific knowledge to bring SQL Server and its powerful features to life on the Linux platform, including coverage of containerization through Docker and Kubernetes. What You'll Learn Learn about the history and internals of the unique SQL Server on Linux architecture Install and configure Microsoft’s flagship database product on the Linux platform Manage your deployments using container technology through Docker and Kubernetes Know the basics of building databases, the T-SQL language, and developing applications against SQL Server on Linux Use tools and features to diagnose, manage, and monitor SQL Server on Linux Scale your application by learning the performance capabilities of SQL Server Deliver high availability and disaster recovery to ensure business continuity Secure your database from attack, and protect sensitive data through encryption Take advantage of powerful features such as Failover Clusters, Availability Groups, In-Memory Support, and SQL Server’s Self-Tuning Engine Learn how to migrate your database from older releases of SQL Server and other database platforms such as Oracle and PostgreSQL Build and maintain schemas, and perform management tasks from both GUI and command line Who This Book Is For Developers and IT professionals who are new to SQL Server and wish to configure it on the Linux operating system. This book is also useful to those familiar with SQL Server on Windows who want to learn the unique aspects of managing SQL Server on the Linux platform and Docker containers. Readers should have a grasp of relational database concepts and be comfortable with the SQL language.
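The container-based deployment the book covers can be tried in a few minutes. The lines below are a hedged sketch with a placeholder password: the docker command (shown as a comment) uses Microsoft's published SQL Server 2019 image, and the Python side assumes pyodbc plus the Microsoft ODBC driver on the client.

    # Start SQL Server 2019 in a container (shell command, shown here as a comment):
    #   docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Example!Pass1" \
    #       -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest

    import pyodbc  # requires the Microsoft ODBC Driver for SQL Server to be installed

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost,1433;UID=sa;PWD=Example!Pass1;"
        "TrustServerCertificate=yes;"
    )
    cur = conn.cursor()
    cur.execute("SELECT @@VERSION")
    print(cur.fetchone()[0])  # reports the SQL Server build running on Linux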

Introducing InnoDB Cluster: Learning the MySQL High Availability Stack

Set up, manage, and configure the new InnoDB Cluster feature in MySQL from Oracle. If you are growing your MySQL installation and want to explore making your servers highly available, this book provides what you need to know about high availability and the new tools that are available in MySQL 8.0.11 and later. Introducing InnoDB Cluster teaches you about the building blocks that make up InnoDB Cluster such as MySQL Group Replication for storing data redundantly, MySQL Router for the routing of inbound connections, and MySQL Shell for simplified setup and configuration, status reporting, and even automatic failover. You will understand how it all works together to ensure that your data are available even when your primary database server goes down. Features described in this book are available in the Community Edition of MySQL, beginning with the version 8.0.11 GA release, making this book relevant for any MySQL users in need of redundancy against failure. Tutorials in the book show how to configure a test environment and plan a production deployment. Examples are provided in the form of a walk-through of a typical MySQL high-availability setup. What You'll Learn Discover the newest high-availability features in MySQL Set up and use InnoDB Cluster as an HA solution Migrate your existing servers to MySQL 8 Employ best practices for using InnoDB Cluster Configure servers for optimal automatic failover to ensure that applications continue when a server fails Configure MySQL Router to load-balance inbound connections to the cluster Who This Book Is For Systems engineers, developers, and database professionals wanting to learn about the powerful high availability (HA) features, beginning with MySQL 8.0.11: MySQL Shell, MySQL Router, and MySQL Group Replication. The book is useful for those designing high-availability systems backed by a database, and for those interested in open source HA solutions.
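InnoDB Cluster is driven through the AdminAPI in MySQL Shell rather than plain SQL. The sketch below shows what a minimal three-node setup might look like in the shell's Python mode (mysqlsh --py); it is not standalone Python, the dba and shell objects are provided by the shell, and the instance addresses, account, and cluster name are placeholders.

    # Run inside MySQL Shell in Python mode (mysqlsh --py).
    # Prepare each server for Group Replication (placeholder addresses and account).
    dba.configure_instance("clusteradmin@db1:3306")
    dba.configure_instance("clusteradmin@db2:3306")
    dba.configure_instance("clusteradmin@db3:3306")

    shell.connect("clusteradmin@db1:3306")
    cluster = dba.create_cluster("prodCluster")     # seeds Group Replication on db1
    cluster.add_instance("clusteradmin@db2:3306")   # data is now stored redundantly
    cluster.add_instance("clusteradmin@db3:3306")
    print(cluster.status())                         # topology, primary, and failover state

    # Applications connect through MySQL Router, bootstrapped separately, e.g.:
    #   mysqlrouter --bootstrap clusteradmin@db1:3306 --user=mysqlrouter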

Summary

One of the longest running and most popular open source database projects is PostgreSQL. Because of its extensibility and a community focus on stability it has stayed relevant as the ecosystem of development environments and data requirements have changed and evolved over its lifetime. It is difficult to capture any single facet of this database in a single conversation, let alone the entire surface area, but in this episode Jonathan Katz does an admirable job of it. He explains how Postgres started and how it has grown over the years, highlights the fundamental features that make it such a popular choice for application developers, and the ongoing efforts to add the complex features needed by the demanding workloads of today’s data layer. To cap it off he reviews some of the exciting features that the community is working on building into future releases.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat Your host is Tobias Macey and today I’m interviewing Jonathan Katz about a high level view of PostgreSQL and the unique capabilities that it offers.

Interview

Introduction How did you get involved in the area of data management? How did you get involved in the Postgres project? For anyone who hasn’t used it, can you describe what PostgreSQL is?

Where did Postgres get started and how has it evolved over the intervening years?

What are some of the primary characteristics of Postgres that would lead someone to choose it for a given project?

What are some cases where Postgres is the wrong choice?

What are some of the common points of confusion for new users of PostgreSQL? (particularly if they have prior database experience) The recent releases of Postgres have had some fairly substantial improvements and new features. How does the community manage to balance stability and reliability against the need to add new capabilities? What are the aspects of Postgres that allow it to remain relevant in the current landscape of rapid evolution at the data layer? Are there any plans to incorporate a distributed transaction layer into the core of the project along the lines of what has been done with Citus or CockroachDB? What is in store for the future of Postgres?

Contact Info

@jkatz05 on Twitter jkatz on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

PostgreSQL Crunchy Data Venuebook Paperless Post LAMP Stack MySQL PHP SQL ORDBMS Edgar Codd A Relational Model of Data for Large Shared Data Banks Relational Algebra Oracle DB UC Berkeley Dr. Michael Stonebraker

Summary

Collaboration, distribution, and installation of software projects is largely a solved problem, but the same cannot be said of data. Every data team has a bespoke means of sharing data sets, versioning them, tracking related metadata and changes, and publishing them for use in the software systems that rely on them. The CEO and founder of Quilt Data, Kevin Moore, was sufficiently frustrated by this problem to create a platform that attempts to be the means by which data can be as collaborative and easy to work with as GitHub and your favorite programming language. In this episode he explains how the project came to be, how it works, and the many ways that you can start using it today.
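For a feel of the package/push/browse workflow discussed here, the sketch below uses the later quilt3 Python API as an assumed stand-in (the episode predates it, and the original quilt client had a different surface); the package name and S3 registry bucket are placeholders.

    import quilt3  # assumption: the quilt3 API; names and bucket are placeholders

    # Producer side: assemble a versioned data package and push it to a registry.
    pkg = quilt3.Package()
    pkg.set("data/train.csv", "local/train.csv")   # logical path -> local file
    pkg.set_meta({"source": "2018-09 export", "owner": "data-eng"})
    pkg.push("alice/training-data", registry="s3://example-quilt-bucket",
             message="initial snapshot")

    # Consumer side: browse the same package by name and fetch a single entry.
    remote = quilt3.Package.browse("alice/training-data",
                                   registry="s3://example-quilt-bucket")
    remote["data/train.csv"].fetch("downloads/train.csv")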

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Your host is Tobias Macey and today I’m interviewing Kevin Moore about Quilt Data, a platform and tooling for packaging, distributing, and versioning data.

Interview

Introduction How did you get involved in the area of data management? What is the intended use case for Quilt and how did the project get started? Can you step through a typical workflow of someone using Quilt?

How does that change as you go from a single user to a team of data engineers and data scientists?

Can you describe the elements of what a data package consists of?

What was your criteria for the file formats that you chose?

How is Quilt architected and what have been the most significant changes or evolutions since you first started? How is the data registry implemented?

What are the limitations or edge cases that you have run into? What optimizations have you made to accelerate synchronization of the data to and from the repository?

What are the limitations in terms of data volume, format, or usage? What is your goal with the business that you have built around the project? What are your plans for the future of Quilt?

Contact Info

Email LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Quilt Data GitHub Jobs Reproducible Data Dependencies in Jupyter Reproducible Machine Learning with Jupyter and Quilt Allen Institute: Programmatic Data Access with Quilt Quilt Example: MissingNo Oracle Pandas Jupyter Ycombinator Data.World

Podcast Episode with CTO Bryon Jacob

Kaggle Parquet HDF5 Arrow PySpark Excel Scala Binder Merkle Tree Allen Institute for Cell Science Flask PostgreSQL Docker Airflow Quilt Teams Hive Hive Metastore PrestoDB

Podcast Episode

Netflix Iceberg Kubernetes Helm

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast

Introducing the MySQL 8 Document Store

Learn the new Document Store feature of MySQL 8 and build applications around a mix of the best features from SQL and NoSQL database paradigms. Don’t allow yourself to be forced into one paradigm or the other, but combine both approaches by using the Document Store. MySQL 8 was designed from the beginning to bridge the gap between NoSQL and SQL. Oracle recognizes that many solutions need the capabilities of both. More specifically, developers need to store objects as loose collections of schema-less documents, but those same developers also need the ability to run structured queries on their data. With MySQL 8, you can do both! Introducing the MySQL 8 Document Store presents new tools and features that make creating a hybrid database solution far easier than ever before. This book covers the vitally important MySQL Document Store, the new X Protocol for developing applications, and a new client shell called the MySQL Shell. Also covered are supporting technologies and concepts such as JSON, schema-less documents, and more. The book gives insight into how features work and how to apply them to get the most out of your MySQL experience. The book covers topics such as the headline feature in MySQL 8, MySQL's answer to NoSQL, and new APIs and client protocols. What You'll Learn Create NoSQL-style applications by using the Document Store Mix the NoSQL and SQL approaches by using each to its best advantage in a hybrid solution Work with the new X Protocol for application connectivity in MySQL 8 Master the new X Developer Application Programming Interfaces Combine SQL and JSON in the same database and application Migrate existing applications to MySQL Document Store Who This Book Is For Developers and database professionals wanting to learn about the most profound paradigm-changing features of the MySQL 8 Document Store
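The Document Store is reached over the new X Protocol through the X DevAPI. Below is a minimal, hedged Python sketch using the mysqlx module that ships with mysql-connector-python; the connection details, schema, and collection names are placeholders.

    import mysqlx  # part of mysql-connector-python; the X Protocol listens on 33060 by default

    session = mysqlx.get_session(
        {"host": "127.0.0.1", "port": 33060, "user": "app", "password": "***"}
    )
    schema = session.get_schema("demo")

    # NoSQL side: schema-less JSON documents stored in a collection.
    reviews = schema.create_collection("reviews")
    reviews.add({"product": "widget", "stars": 5, "tags": ["fast", "cheap"]}).execute()

    # Query documents with a bound parameter.
    result = reviews.find("stars >= :s").bind("s", 4).execute()
    for doc in result.fetch_all():
        print(doc["product"], doc["stars"])

    # SQL side: the same session can still run relational statements.
    print(session.sql("SELECT COUNT(*) FROM demo.reviews").execute().fetch_all())
    session.close()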

Next-Generation Big Data: A Practical Guide to Apache Kudu, Impala, and Spark

Utilize this practical and easy-to-follow guide to modernize traditional enterprise data warehouse and business intelligence environments with next-generation big data technologies. Next-Generation Big Data takes a holistic approach, covering the most important aspects of modern enterprise big data. The book covers not only the main technology stack but also the next-generation tools and applications used for big data warehousing, data warehouse optimization, real-time and batch data ingestion and processing, real-time data visualization, big data governance, data wrangling, big data cloud deployments, and distributed in-memory big data computing. Finally, the book has an extensive and detailed coverage of big data case studies from Navistar, Cerner, British Telecom, Shopzilla, Thomson Reuters, and Mastercard. What You’ll Learn Install Apache Kudu, Impala, and Spark to modernize enterprise data warehouse and business intelligence environments, complete with real-world, easy-to-follow examples, and practical advice Integrate HBase, Solr, Oracle, SQL Server, MySQL, Flume, Kafka, HDFS, and Amazon S3 with Apache Kudu, Impala, and Spark Use StreamSets, Talend, Pentaho, and CDAP for real-time and batch data ingestion and processing Utilize Trifacta, Alteryx, and Datameer for data wrangling and interactive data processing Turbocharge Spark with Alluxio, a distributed in-memory storage platform Deploy big data in the cloud using Cloudera Director Perform real-time data visualization and time series analysis using Zoomdata, Apache Kudu, Impala, and Spark Understand enterprise big data topics such as big data governance, metadata management, data lineage, impact analysis, and policy enforcement, and how to use Cloudera Navigator to perform common data governance tasks Implement big data use cases such as big data warehousing, data warehouse optimization, Internet of Things, real-time data ingestion and analytics, complex event processing, and scalable predictive modeling Study real-world big data case studies from innovative companies, including Navistar, Cerner, British Telecom, Shopzilla, Thomson Reuters, and Mastercard Who This Book Is For BI and big data warehouse professionals interested in gaining practical and real-world insight into next-generation big data processing and analytics using Apache Kudu, Impala, and Spark; and those who want to learn more about other advanced enterprise topics
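The Kudu-plus-Spark pairing the book covers is exposed through the kudu-spark connector. Below is a minimal PySpark read, with the master address and table name as placeholders, assuming the connector jar is supplied on the Spark classpath.

    from pyspark.sql import SparkSession

    # Assumes the kudu-spark connector is provided, e.g.
    #   spark-submit --packages org.apache.kudu:kudu-spark2_2.11:1.9.0 this_script.py
    spark = SparkSession.builder.appName("kudu-demo").getOrCreate()

    metrics = (spark.read.format("org.apache.kudu.spark.kudu")
               .option("kudu.master", "kudu-master.example.com:7051")
               .option("kudu.table", "impala::default.metrics")  # table created through Impala
               .load())

    metrics.groupBy("host").avg("value").show()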