talk-data.com

Topic

JSON

JavaScript Object Notation (JSON)

data_format · lightweight · web_development · file_format

129 tagged

Activity Trend

9 peak/qtr (2020-Q1 to 2026-Q1)

Activities

129 activities · Newest first

MATLAB Recipes: A Problem-Solution Approach

Learn from state-of-the-art examples in robotics, motors, detection filters, chemical processes, aircraft, and spacecraft. With this book you will review contemporary MATLAB coding, including the latest MATLAB language features, and use MATLAB as a software development environment, covering code organization, GUI development, and algorithm design and testing. Features now covered include the new graph and digraph classes for charts and networks; interactive documents that combine text, code, and output; a new development environment for building apps; locally defined functions in scripts; automatic expansion of dimensions; tall arrays for big data; the new string type; new functions to encode/decode JSON; handling non-English languages; the new class architecture; the Mocking framework; an engine API for Java; the cloud-based MATLAB desktop; the memoize function; and heatmap charts. MATLAB Recipes: A Problem-Solution Approach, Second Edition provides practical, hands-on code snippets and guidance for using MATLAB to build a body of code you can turn to time and again for solving technical problems in your work. Develop algorithms, test them, visualize the results, and pass the code along to others to create a functional code base for your firm. What You Will Learn: • Get up to date with the latest MATLAB, up to and including R2020b • Code in MATLAB • Write applications in MATLAB • Build your own toolbox of MATLAB code to increase your efficiency and effectiveness. Who This Book Is For: Engineers, data scientists, and students wanting a book rich in examples using MATLAB.
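The JSON encode/decode workflow mentioned above is the same round trip found in most languages; as a minimal illustration in Python (the record and its fields are invented for the example), MATLAB's jsonencode/jsondecode play the same role as json.dumps/json.loads here:

```python
import json

# A structured record, analogous to a MATLAB struct (example data only).
record = {"sensor": "gyro-1", "rate_hz": 200, "readings": [0.01, -0.02, 0.03]}

text = json.dumps(record)    # encode: object -> JSON text
restored = json.loads(text)  # decode: JSON text -> object

assert restored["rate_hz"] == 200
```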

Beginning T-SQL: A Step-by-Step Approach

Get a performance-oriented introduction to the T-SQL language underlying the Microsoft SQL Server and Azure SQL database engines. This fourth edition is updated to include SQL Notebooks as well as up-to-date syntax and features for T-SQL on-premises and in the Azure cloud. Exercises and examples now include the WideWorldImporters database, the newest sample database from Microsoft for SQL Server. Also new in this edition is coverage of JSON from T-SQL, news about performance enhancements called Intelligent Query Processing, and an appendix on running SQL Server in a container on macOS or Linux. Beginning T-SQL starts you on the path to mastering T-SQL with an emphasis on best practices. Using the sound coding techniques taught in this book will lead to excellent performance in the queries that you write in your daily work. Important techniques such as windowing functions are covered to help you write fast-executing queries that solve real business problems. The book begins with an introduction to databases, normalization, and to setting up your learning environment. You will learn about the tools you need to use such as SQL Server Management Studio, Azure Data Studio, and SQL Notebooks. Each subsequent chapter teaches an aspect of T-SQL, building on the skills learned in previous chapters. Exercises in most chapters provide an opportunity for the hands-on practice that leads to true learning and distinguishes the competent professional. A stand-out feature in this book is that most chapters end with a Thinking About Performance section. These sections cover aspects of query performance relative to the content just presented, including the new Intelligent Query Processing features that make queries faster without changing code. They will help you avoid beginner mistakes by knowing about and thinking about performance from day 1. What You Will Learn: • Install a sandboxed SQL Server instance for learning • Understand how relational databases are designed • Create objects such as tables and stored procedures • Query a SQL Server table • Filter and order the results of a query • Query and work with specialized data types such as XML and JSON • Apply modern features such as window functions • Choose correct techniques so that your queries perform well. Who This Book Is For: Anyone who wants to learn T-SQL from the beginning or improve their T-SQL skills; those who need T-SQL as an additional skill; and those who write queries such as application developers, database administrators, business intelligence developers, and data scientists. The book is also helpful for anyone who must retrieve data from a SQL Server database.
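To make the JSON-from-T-SQL topic concrete, here is a minimal sketch of shredding a JSON array into rows with OPENJSON (available since SQL Server 2016), driven from Python via pyodbc; the connection string and sample data are assumptions for illustration:

```python
import pyodbc

# Placeholder connection string; WideWorldImporters is the sample
# database used throughout the book.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=WideWorldImporters;Trusted_Connection=yes;"
)

# OPENJSON (SQL Server 2016+) shreds a JSON array into relational rows.
query = """
DECLARE @orders NVARCHAR(MAX) = N'[
  {"id": 1, "customer": "Contoso",  "total": 120.50},
  {"id": 2, "customer": "Fabrikam", "total": 89.99}
]';
SELECT id, customer, total
FROM OPENJSON(@orders)
WITH (
    id       INT            '$.id',
    customer NVARCHAR(50)   '$.customer',
    total    DECIMAL(10, 2) '$.total'
);
"""

for row in conn.cursor().execute(query):
    print(row.id, row.customer, row.total)
```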

Analytics on your analytics, Drizly

Using dbt's metadata on dbt runs (run_results.json), Drizly's analytics team is able to track, monitor, and alert on its dbt models, using Looker to visualize the data. In this video, Emily Hawkins covers how Drizly did this previously, using dbt macros and inserts, and how the process was improved using run_results.json in conjunction with Dagster (and teamwork with Fishtown Analytics!).
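A minimal sketch of the kind of parsing involved: reading run_results.json with Python and pulling out per-model status and timing. The field names reflect recent dbt versions and may differ slightly in older artifacts:

```python
import json

# Read dbt's run artifact, written to target/ after each invocation.
with open("target/run_results.json") as f:
    run_results = json.load(f)

# Each entry describes one model run: its id, outcome, and duration.
for result in run_results["results"]:
    print(f'{result["unique_id"]}: {result["status"]} '
          f'({result.get("execution_time", 0):.2f}s)')
```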

Practical Azure SQL Database for Modern Developers: Building Applications in the Microsoft Cloud

Here is the expert-level, insider guidance you need on using Azure SQL Database as your back-end data store. This book highlights best practices in everything ranging from full-stack projects to mobile applications to critical, back-end APIs. The book provides instruction on accessing your data from any language and platform. And you learn how to push processing-intensive work into the database engine to be near the data and avoid undue networking traffic. Azure SQL is explained from a developer's point of view, helping you master its feature set and create applications that perform well and delight users. Core to the book is showing you how Azure SQL Database provides relational and post-relational support so that any workload can be managed with easy accessibility from any platform and any language. You will learn about features ranging from lock-free tables to columnstore indexes, and about support for data formats ranging from JSON and key-values to the nodes and edges in the graph database paradigm. Reading this book prepares you to deal with almost all data management challenges, allowing you to create lean and specialized solutions having the elasticity and scalability that are needed in the modern world. What You Will Learn: • Master Azure SQL Database in your development projects from design to the CI/CD pipeline • Access your data from any programming language and platform • Combine key-value, JSON, and relational data in the same database • Push data-intensive compute work into the database for improved efficiency • Delight your customers by detecting and improving poorly performing queries • Enhance performance through features such as columnstore indexes and lock-free tables • Build confidence in your mastery of Azure SQL Database's feature set. Who This Book Is For: Developers of applications and APIs that benefit from cloud database support, developers who wish to master their tools (including Azure SQL Database), and those who want their applications to be known for speedy performance and the elegance of their code.

Microservices in SAP HANA XSA: A Guide to REST APIs Using Node.js

Build enterprise-grade microservices in SAP HANA Extended Application Services, Advanced Model (XSA). This book explains how to build scalable APIs in XSA and the benefits of building microservices with SAP HANA XSA. It covers the Cloud Foundry (CF) architecture and how SAP HANA XSA follows that model, beginning with the details of the different architectural layers of applications hosted in XSA (specifically, microservices). Everything you need to know is presented, including analyzing requests, modularization, database ingestion, building JSON responses, and scaling your microservices. You will learn to use development tools such as the SAP Web IDE, Postman, and the SAP HANA Cockpit for XSA, with debugging examples on SAP HANA XSA and code snippets showing how microservices can be developed, debugged, scaled, and deployed. Microservices are divided into security and authentication, request handling, modularization of Node.js, interaction with the SAP HANA database containers, and response formatting. An end-to-end scenario is presented of a Node.js REST API that uses HTTP methods, concluding with deploying an SAP HANA XSA project to a production environment. This book is simple enough to help you implement a Node.js module in order to understand the development of microservices, and complex enough for architects to design their next business-ready solution integrating UAA security, application modularization, and an end-to-end REST API on SAP HANA XSA. What You Will Learn: • Know the definition and architecture of Cloud Foundry and its application on SAP HANA XSA • Understand REST principles and different HTTP methods • Explore microservices (Node.js) development • Interact with the SAP HANA database from Node.js (executing SQL statements and stored procedures). Who This Book Is For: Architects designing business-ready solutions that integrate UAA security, application modularization, and an end-to-end REST API on SAP HANA XSA.

Learning Spark, 2nd Edition

Data is bigger, arrives faster, and comes in a variety of formats, and it all needs to be processed at scale for analytics or machine learning. But how can you process such varied workloads efficiently? Enter Apache Spark. Updated to include Spark 3.0, this second edition shows data engineers and data scientists why structure and unification in Spark matters. Specifically, this book explains how to perform simple and complex data analytics and employ machine learning algorithms. Through step-by-step walk-throughs, code snippets, and notebooks, you'll be able to: • Learn Python, SQL, Scala, or Java high-level Structured APIs • Understand Spark operations and SQL Engine • Inspect, tune, and debug Spark operations with Spark configurations and Spark UI • Connect to data sources: JSON, Parquet, CSV, Avro, ORC, Hive, S3, or Kafka • Perform analytics on batch and streaming data using Structured Streaming • Build reliable data pipelines with open source Delta Lake and Spark • Develop machine learning pipelines with MLlib and productionize models using MLflow
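As a taste of the JSON connectivity the book covers, a minimal PySpark sketch; the file path and column names are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-demo").getOrCreate()

# Schema is inferred by default; one JSON object per line is assumed
# (pass multiLine=True to read a single multi-line document instead).
df = spark.read.json("events.json")
df.printSchema()
df.select("user_id", "event_type").show(5)
```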

In this talk I will introduce a DAG authoring and editing tool for Airflow that we have built. Installed as a plugin, this tool allows users to author DAGs by composing existing operators and hooks with virtually no Python experience. We walk through a demo of DAG authorship and deployment, and spend time reviewing the underlying open-source standards used and the general approach that was taken to develop the code. In addition to allowing DAGs to be created in a visual editor, the underlying tech enables Airflow DAGs to be described programmatically in YAML or JSON. DAGs described this way can be saved in backing databases instead of Python files.
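The plugin itself is not shown here, but the general pattern of building Airflow DAGs from a JSON description can be sketched in plain Python; the spec format below is hypothetical and exists only for illustration (exact DAG arguments vary a little across Airflow 2.x versions):

```python
import json
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical JSON spec; the format used by the tool in this talk may differ.
SPEC = json.loads("""
{
  "dag_id": "example_from_json",
  "tasks": [
    {"id": "extract", "command": "echo extract"},
    {"id": "load",    "command": "echo load", "upstream": ["extract"]}
  ]
}
""")

with DAG(SPEC["dag_id"], start_date=datetime(2021, 1, 1),
         schedule_interval=None) as dag:
    tasks = {t["id"]: BashOperator(task_id=t["id"], bash_command=t["command"])
             for t in SPEC["tasks"]}
    # Wire up the dependencies declared in the spec.
    for t in SPEC["tasks"]:
        for upstream_id in t.get("upstream", []):
            tasks[upstream_id] >> tasks[t["id"]]
```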

Pro Power BI Desktop: Self-Service Analytics and Data Visualization for the Power User

Deliver eye-catching and insightful business intelligence with Microsoft Power BI Desktop. This new edition has been updated to cover all the latest features of Microsoft's continually evolving visualization product. New in this edition is help with storytelling—adapted to PCs, tablets, and smartphones—and the building of a data narrative. You will find coverage of templates and JSON style sheets, data model annotations, and the use of composite data sources. Also provided is an introduction to incorporating Python visuals and the much-awaited Decomposition Tree visual. Pro Power BI Desktop shows you how to use source data to produce stunning dashboards and compelling reports that you mold into a data narrative to seize your audience's attention. Slice and dice the data with remarkable ease and then add metrics and KPIs to project the insights that create your competitive advantage. Convert raw data into clear, accurate, and interactive information with Microsoft's free self-service BI tool. This book shows you how to choose from a wide range of built-in and third-party visualization types so that your message is always enhanced. You will be able to deliver those results on PCs, tablets, and smartphones, as well as share results via the cloud. The book helps you save time by preparing the underlying data correctly without needing an IT department to prepare it for you. What You Will Learn: • Deliver attention-grabbing information, turning data into insight • Find new insights as you chop and tweak your data as never before • Build a data narrative through interactive reports with drill-through and cross-page slicing • Mash up data from multiple sources into a cleansed and coherent data model • Build interdependent charts, maps, and tables to deliver visually stunning information • Create dashboards that help in monitoring key performance indicators of your business • Adapt delivery to mobile devices such as phones and tablets. Who This Book Is For: Power users who are ready to step up to the big leagues by going beyond what Microsoft Excel by itself can offer. The book also is for line-of-business managers who are starved for actionable data needed to make decisions about their business. And the book is for BI analysts looking for an easy-to-use tool to analyze data and share results with C-suite colleagues they support.
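The JSON style sheets mentioned above are plain theme files; a minimal sketch of generating one from Python ("name" and "dataColors" are documented Power BI theme keys, while the color values here are arbitrary examples):

```python
import json

# "name" and "dataColors" are standard Power BI theme keys;
# the colors themselves are arbitrary examples.
theme = {
    "name": "Corporate Theme",
    "dataColors": ["#1F77B4", "#FF7F0E", "#2CA02C", "#D62728"],
    "background": "#FFFFFF",
    "foreground": "#252423",
}

# Import the resulting file via Power BI Desktop's theme dialog.
with open("corporate-theme.json", "w", encoding="utf-8") as f:
    json.dump(theme, f, indent=2)
```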

Microsoft SQL Server 2019: A Beginner's Guide, Seventh Edition

Publisher's Note: Products purchased from Third Party sellers are not guaranteed by the publisher for quality, authenticity, or access to any online entitlements included with the product. Get Up to Speed on Microsoft® SQL Server® 2019 Quickly and Easily. Start working with Microsoft SQL Server 2019 in no time with help from this thoroughly revised, practical resource. Filled with real-world examples and hands-on exercises, Microsoft SQL Server 2019: A Beginner's Guide, Seventh Edition starts by explaining fundamental relational database system concepts. From there, you'll learn how to write Transact-SQL statements, execute simple and complex database queries, handle system administration and security, and use powerful analysis and reporting tools. New topics such as SQL and JSON support, graph databases, and support for machine learning with R and Python are also covered in this step-by-step tutorial. • Install, configure, and customize Microsoft SQL Server 2019 • Create and modify database objects with Transact-SQL statements • Write stored procedures and user-defined functions • Handle backup and recovery, and automate administrative tasks • Tune your database system for optimal availability and reliability • Secure your system using authentication, encryption, and authorization • Work with SQL Server Analysis Services, Reporting Services, and other BI tools • Gain knowledge of relational storage, presentation, and retrieval of data stored in the JSON format • Manage graphs using SQL Server Graph Databases • Learn about machine learning support for R and Python

Summary

Data warehouses have gone through many transformations, from standard relational databases on powerful hardware, to column-oriented storage engines, to the current generation of cloud-native analytical engines. SnowflakeDB has been leading the charge to take advantage of cloud services that simplify the separation of compute and storage. In this episode Kent Graziano, chief technical evangelist for SnowflakeDB, explains how it is differentiated from other managed platforms and traditional data warehouse engines, the features that allow you to scale your usage dynamically, and how it allows for a shift in your workflow from ETL to ELT. If you are evaluating your options for building or migrating a data platform, then this is definitely worth a listen.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show! You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media and the Python Software Foundation. Upcoming events include the Software Architecture Conference in NYC and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today. Your host is Tobias Macey and today I’m interviewing Kent Graziano about SnowflakeDB, the cloud-native data warehouse

Interview

Introduction How did you get involved in the area of data management? Can you start by explaining what SnowflakeDB is for anyone who isn’t familiar with it?

How does it compare to the other available platforms for data warehousing? How does it differ from traditional data warehouses?

How does the performance and flexibility affect the data modeling requirements?

Snowflake is one of the data stores that is enabling the shift from an ETL to an ELT workflow. What are the features that allow for that approach and what are some of the challenges that it introduces? Can you describe how the platform is architected and some of the ways that it has evolved as it has grown in popularity?

What are some of the current limitations that you are struggling with?

For someone getting started with Snowflake what is involved with loading data into the platform?

What is their workflow for allocating and scaling compute capacity and running analyses?

One of the interesting features enabled by your architecture is data sharing. What are some of the most interesting or unexpected uses of that capability that you have seen? What are some other features or use cases for Snowflake that are not as well known or publicized which you think users should know about? When is SnowflakeDB the wrong choice? What are some of the plans for the future of SnowflakeDB?

Contact Info

LinkedIn Website @KentGraziano on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

SnowflakeDB

Free Trial Stack Overflow

Data Warehouse Oracle DB MPP == Massively Parallel Processing Shared Nothing Architecture Multi-Cluster Shared Data Architecture Google BigQuery AWS Redshift AWS Redshift Spectrum Presto

Podcast Episode

SnowflakeDB Semi-Structured Data Types Hive ACID == Atomicity, Consistency, Isolation, Durability 3rd Normal Form Data Vault Modeling Dimensional Modeling JSON AVRO Parquet SnowflakeDB Virtual Warehouses CRM == Customer Relationship Management Master Data Management

Podcast Episode

FoundationDB

Podcast Episode

Apache Spark

Podcast Episode

SSIS == SQL Server Integration Services Talend Informatica Fivetran

Podcast Episode

Matillion Apache Kafka Snowpipe Snowflake Data Exchange OLTP == Online Transaction Processing GeoJSON Snowflake Documentation SnowAlert Splunk Data Catalog
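Several of the links above (Semi-Structured Data Types, GeoJSON, Snowpipe) concern Snowflake's JSON handling; a minimal sketch of querying JSON stored in a VARIANT column from Python, using the snowflake-connector-python package, where the credentials, table, and paths are placeholders:

```python
import snowflake.connector

# Placeholder credentials and object names; adjust for your account.
conn = snowflake.connector.connect(
    user="USER", password="PASSWORD", account="ACCOUNT",
    warehouse="COMPUTE_WH", database="DEMO_DB", schema="PUBLIC",
)

cur = conn.cursor()
# A VARIANT column holds raw JSON; the colon/dot path syntax drills in,
# and :: casts the extracted value to a SQL type.
cur.execute("""
    SELECT raw:event.type::STRING AS event_type, COUNT(*) AS n
    FROM events
    GROUP BY 1
    ORDER BY n DESC
""")
for event_type, n in cur.fetchall():
    print(event_type, n)
```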

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Introducing MySQL Shell: Administration Made Easy with Python

Use MySQL Shell, the first modern and advanced client for connecting to and interacting with MySQL. It supports SQL, Python, and JavaScript. That’s right! You can write Python scripts and execute them within the shell interactively, or in batch mode. The level of automation available from Python combined with batch mode is especially helpful to those practicing DevOps methods in their database environments. Introducing MySQL Shell covers everything you need to know about MySQL Shell. You will learn how to use the shell for SQL, as well as the new application programming interfaces for working with a document store and even automating your management of MySQL servers using Python. The book includes a look at the supporting technologies and concepts such as JSON, schema-less documents, NoSQL, MySQL Replication, Group Replication, InnoDB Cluster, and more. MySQL Shell is the client that developers and database administrators have been waiting for. Far more powerful than the legacy client, MySQL Shell enables levels of automation that are useful not only for MySQL, but in the broader context of your career as well. Automate your work and build skills in one of the most in-demand languages. With MySQL Shell, you can do both! What You'll Learn: • Use MySQL Shell with the newest features in MySQL 8 • Discover what a Document Store is and how to manage it with MySQL Shell • Configure Group Replication and InnoDB Cluster from MySQL Shell • Understand the new MySQL Python application programming interfaces • Write Python scripts for managing your data and the MySQL high availability features. Who This Book Is For: Developers and database professionals who want to automate their work and remain on the cutting edge of what MySQL has to offer. Anyone not happy with the limited automation capabilities of the legacy command-line client will find much to like in this book on the MySQL Shell, which supports powerful automation through the Python scripting language.
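A minimal sketch of the Python X DevAPI workflow the book teaches, using the mysqlx package that ships with MySQL Connector/Python; the host, credentials, and document fields are placeholders, and exact method signatures can vary a little across connector versions:

```python
import mysqlx

# Port 33060 is the default X Protocol port; credentials are placeholders.
session = mysqlx.get_session(
    {"host": "localhost", "port": 33060, "user": "root", "password": "secret"}
)

schema = session.get_schema("test")
collection = schema.create_collection("products", reuse_existing=True)

# Schema-less documents go straight into the collection.
collection.add({"name": "widget", "price": 9.99, "tags": ["new"]}).execute()

# Find with a bound parameter; documents come back as dict-like objects.
result = collection.find("price < :max").bind("max", 20).execute()
for doc in result.fetch_all():
    print(doc["name"], doc["price"])

session.close()
```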

Mastering SQL Server 2017

Leverage the power of SQL Server 2017 Integration Services to build data integration solutions with ease. Key Features: • Work with temporal tables to access information stored in a table at any time • Get familiar with the latest features in SQL Server 2017 Integration Services • Program and extend your packages to enhance their functionality. Book Description: Microsoft SQL Server 2017 uses the power of R and Python for machine learning and containerization-based deployment on Windows and Linux. By learning how to use the features of SQL Server 2017 effectively, you can build scalable apps and easily perform data integration and transformation. You'll start by brushing up on the features of SQL Server 2017. This Learning Path will then demonstrate how you can use Query Store, columnstore indexes, and In-Memory OLTP in your apps. You'll also learn to integrate Python code in SQL Server and graph database implementations for development and testing. Next, you'll get up to speed with designing and building SQL Server Integration Services (SSIS) data warehouse packages using SQL Server Data Tools. Toward the concluding chapters, you'll discover how to develop SSIS packages designed to maintain a data warehouse using the data flow and other control flow tasks. By the end of this Learning Path, you'll be equipped with the skills you need to design efficient, high-performance database applications with confidence. This Learning Path includes content from the following Packt books: SQL Server 2017 Developer's Guide by Milos Radivojevic, Dejan Sarka, et al., and SQL Server 2017 Integration Services Cookbook by Christian Cote, Dejan Sarka, et al. What you will learn: • Use columnstore indexes to make storage and performance improvements • Extend database design solutions using temporal tables • Exchange JSON data between applications and SQL Server • Migrate historical data to Microsoft Azure by using Stretch Database • Design the architecture of a modern Extract, Transform, and Load (ETL) solution • Implement ETL solutions using Integration Services for both on-premises and Azure data. Who this book is for: This Learning Path is for database developers and solution architects looking to develop ETL solutions with SSIS, and explore the new features in SSIS 2017. Advanced analysis practitioners, business intelligence developers, and database consultants dealing with performance tuning will also find this book useful. Basic understanding of database concepts and T-SQL is required to get the best out of this Learning Path.

Learn RStudio IDE: Quick, Effective, and Productive Data Science

Discover how to use the popular RStudio IDE as a professional tool that includes code refactoring support, debugging, and Git version control integration. This book gives you a tour of RStudio and shows you how it helps you do exploratory data analysis; build data visualizations with ggplot; and create custom R packages and web-based interactive visualizations with Shiny. In addition, you will cover common data analysis tasks including importing data from diverse sources such as SAS files, CSV files, and JSON. You will map out the features in RStudio so that you will be able to customize RStudio to fit your own style of coding. Finally, you will see how to save a ton of time by adopting best practices and using packages to extend RStudio. Learn RStudio IDE is a quick, no-nonsense tutorial of RStudio that will give you a head start to develop the insights you need in your data science projects. What You Will Learn: • Quickly, effectively, and productively use RStudio IDE for building data science applications • Install RStudio and program your first Hello World application • Adopt the RStudio workflow • Make your code reusable using RStudio • Use RStudio and Shiny for data visualization projects • Debug your code with RStudio • Import CSV, SPSS, SAS, JSON, and other data. Who This Book Is For: Programmers who want to start doing data science, but don’t know what tools to focus on to get up to speed quickly.

Learn Chart.js

This book, 'Learn Chart.js', serves as a comprehensive guide to mastering Chart.js for creating stunning web-based data visualizations. By combining JavaScript, HTML5 Canvas, and Chart.js, you will understand how to turn raw data into interactive visual stories. What this Book will help me do Develop skills to create interactive and engaging data visualizations using the Chart.js library. Learn to efficiently load, parse, and handle data from external formats like CSV and JSON. Understand different chart types offered by Chart.js and learn when to best use each one. Gain the ability to customize Chart.js charts, such as adjusting properties for styling or animations. Acquire hands-on experience with practical examples, equipping you to apply what you learn in real-world scenarios. Author(s) Helder da Rocha brings his extensive experience in programming and software development to this book, offering readers a clear and practical approach to mastering Chart.js. With a deep understanding of data visualization and web technologies, he conveys complex concepts in a straightforward way. Who is it for? This book is ideal for web developers, data analysts, and designers who have basic proficiency in HTML, CSS, and JavaScript. It is particularly suited for professionals looking to create impactful web-based data visualizations using open-source tools. Additionally, the book assumes no prior knowledge of the Canvas element, making it accessible for Chart.js beginners.

Java XML and JSON: Document Processing for Java SE

Use this guide to master the XML metalanguage and JSON data format along with significant Java APIs for parsing and creating XML and JSON documents from the Java language. New in this edition is coverage of Jackson (a JSON processor for Java) and Oracle’s own Java API for JSON processing (JSON-P), which is a JSON processing API for Java EE that also can be used with Java SE. This new edition of Java XML and JSON also expands coverage of DOM and XSLT to include additional API content and useful examples. All examples in this book have been tested under Java 11. In some cases, source code has been simplified to use Java 11’s var language feature. The first six chapters focus on XML along with the SAX, DOM, StAX, XPath, and XSLT APIs. The remaining six chapters focus on JSON along with the mJson, Gson, JsonPath, Jackson, and JSON-P APIs. Each chapter ends with select exercises designed to challenge your grasp of the chapter's content. An appendix provides the answers to these exercises. What You'll Learn: • Master the XML language • Create, validate, parse, and transform XML documents • Apply Java’s SAX, DOM, StAX, XPath, and XSLT APIs • Master the JSON format for serializing and transmitting data • Code against third-party APIs such as Jackson, mJson, Gson, and JsonPath • Master Oracle’s JSON-P API in a Java SE context. Who This Book Is For: Intermediate and advanced Java programmers who are developing applications that must access data stored in XML or JSON documents. The book also targets developers wanting to understand the XML language and JSON data format.

Learning Apache Drill

Get up to speed with Apache Drill, an extensible distributed SQL query engine that reads massive datasets in many popular file formats such as Parquet, JSON, and CSV. Drill reads data in HDFS or in cloud-native storage such as S3 and works with Hive metastores along with distributed databases such as HBase, MongoDB, and relational databases. Drill works everywhere: on your laptop or in your largest cluster. In this practical book, Drill committers Charles Givre and Paul Rogers show analysts and data scientists how to query and analyze raw data using this powerful tool. Data scientists today spend about 80% of their time just gathering and cleaning data. With this book, you’ll learn how Drill helps you analyze data more effectively to drive down time to insight. Use Drill to clean, prepare, and summarize delimited data for further analysis Query file types including logfiles, Parquet, JSON, and other complex formats Query Hadoop, relational databases, MongoDB, and Kafka with standard SQL Connect to Drill programmatically using a variety of languages Use Drill even with challenging or ambiguous file formats Perform sophisticated analysis by extending Drill’s functionality with user-defined functions Facilitate data analysis for network security, image metadata, and machine learning
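Beyond JDBC/ODBC, Drill exposes a REST endpoint that accepts SQL and returns JSON, which makes a quick Python sketch possible; the file path below is a placeholder (donuts.json is one of Drill's bundled sample files):

```python
import requests

# Drill's REST API (default port 8047) accepts SQL and returns JSON rows.
resp = requests.post(
    "http://localhost:8047/query.json",
    json={
        "queryType": "SQL",
        "query": "SELECT id, type, name FROM dfs.`/path/to/donuts.json` LIMIT 5",
    },
)
resp.raise_for_status()
for row in resp.json()["rows"]:
    print(row)
```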

Summary

Every business with a website needs some way to keep track of how much traffic they are getting, where it is coming from, and which actions are being taken. The default in most cases is Google Analytics, but this can be limiting when you wish to perform detailed analysis of the captured data. To address this problem, Alex Dean co-founded Snowplow Analytics to build an open source platform that gives you total control of your website traffic data. In this episode he explains how the project and company got started, how the platform is architected, and how you can start using it today to get a clearer view of how your customers are interacting with your web and mobile applications.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. You work hard to make sure that your data is reliable and accurate, but can you say the same about the deployment of your machine learning models? The Skafos platform from Metis Machine was built to give your data scientists the end-to-end support that they need throughout the machine learning lifecycle. Skafos maximizes interoperability with your existing tools and platforms, and offers real-time insights and the ability to be up and running with cloud-based production scale infrastructure instantaneously. Request a demo at dataengineeringpodcast.com/metis-machine to learn more about how Metis Machine is operationalizing data science. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat This is your host Tobias Macey and today I’m interviewing Alexander Dean about Snowplow Analytics

Interview

Introductions How did you get involved in the area of data engineering and data management? What is Snowplow Analytics and what problem were you trying to solve when you started the company? What is unique about customer event data from an ingestion and processing perspective? Challenges with properly matching up data between sources Data collection is one of the more difficult aspects of an analytics pipeline because of the potential for inconsistency or incorrect information. How is the collection portion of the Snowplow stack designed and how do you validate the correctness of the data?

Cleanliness/accuracy

What kinds of metrics should be tracked in an ingestion pipeline and how do you monitor them to ensure that everything is operating properly? Can you describe the overall architecture of the ingest pipeline that Snowplow provides?

How has that architecture evolved from when you first started? What would you do differently if you were to start over today?

Ensuring appropriate use of enrichment sources What have been some of the biggest challenges encountered while building and evolving Snowplow? What are some of the most interesting uses of your platform that you are aware of?

Keep In Touch

Alex

@alexcrdean on Twitter LinkedIn

Snowplow

@snowplowdata on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Snowplow

GitHub

Deloitte Consulting OpenX Hadoop AWS EMR (Elastic Map-Reduce) Business Intelligence Data Warehousing Google Analytics CRM (Customer Relationship Management) S3 GDPR (General Data Protection Regulation) Kinesis Kafka Google Cloud Pub-Sub JSON-Schema Iglu IAB Bots And Spiders List Heap Analytics

Podcast Interview

Redshift SnowflakeDB Snowplow Insights Googl
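Snowplow's validation story centers on JSON-Schema (distributed through Iglu, both linked above); a simplified sketch with Python's jsonschema package shows the idea, though real Iglu schemas carry additional self-describing metadata:

```python
from jsonschema import ValidationError, validate

# A simplified event schema in the spirit of Snowplow's JSON-Schema
# validation; the event fields are invented for the example.
schema = {
    "type": "object",
    "properties": {
        "event_name": {"type": "string"},
        "user_id": {"type": "string"},
        "value": {"type": "number"},
    },
    "required": ["event_name", "user_id"],
}

event = {"event_name": "add_to_cart", "user_id": "u-42", "value": 19.99}

try:
    validate(instance=event, schema=schema)
    print("event passed validation")
except ValidationError as err:
    print("event rejected:", err.message)
```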

SQL Server Advanced Data Types: JSON, XML, and Beyond

Deliver advanced functionality faster and cheaper by exploiting SQL Server's ever-growing amount of built-in support for modern data formats. Learn about the growing support within SQL Server for operations and data transformations that have previously required third-party software and all the associated licensing and development costs. Benefit through a better understanding of what can be done inside the database engine with no additional costs or development time invested in outside software. Widely used types such as JSON and XML are well-supported by the database engine. The same is true of hierarchical data and even temporal data. Knowledge of these advanced types is crucial to unleashing the full power that's available from your organization's SQL Server database investment. SQL Server Advanced Data Types explores each of the complex data types supplied within SQL Server. Common usage scenarios for each complex data type are discussed, followed by a detailed discussion on how to work with each data type. Each chapter demystifies the complex data and you learn how to use the data types most efficiently. The book offers a practical guide to working with complex data, using real-world examples to demonstrate how each data type can be leveraged. Performance considerations are also discussed, including the implementation of special indexes such as XML indexes and spatial indexes. What You'll Learn: • Understand the implementation of basic data types and why using the correct type is so important • Work with XML data through the XML data type • Construct XML data from relational result sets • Store and manipulate JSON data using the JSON data type • Model and analyze spatial data for geographic information systems • Define hierarchies and query them efficiently through the HierarchyID type. Who This Book Is For: SQL Server developers and application developers who need to store and access complex data structures
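As a companion to the OPENJSON example earlier in this list, the reverse direction, serializing relational rows to JSON with FOR JSON PATH, can be sketched the same way; the connection string and table come from Microsoft's sample databases and are placeholders:

```python
import pyodbc

# Placeholder connection; AdventureWorks is one of Microsoft's samples.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=AdventureWorks;Trusted_Connection=yes;"
)

# FOR JSON PATH serializes a result set to a JSON array; dotted column
# aliases such as [product.id] produce nested objects.
row = conn.cursor().execute("""
    SELECT TOP (2) ProductID AS [product.id], Name AS [product.name]
    FROM Production.Product
    FOR JSON PATH
""").fetchone()

print(row[0])  # e.g. [{"product":{"id":1,"name":"..."}}]
```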

Summary

Data integration and routing is a constantly evolving problem and one that is fraught with edge cases and complicated requirements. The Apache NiFi project models this problem as a collection of data flows that are created through a self-service graphical interface. This framework provides a flexible platform for building a wide variety of integrations that can be managed and scaled easily to fit your particular needs. In this episode project members Kevin Doran and Andy LoPresto discuss the ways that NiFi can be used, how to start using it in your environment, and plans for future development. They also explain how it fits in the broad landscape of data tools, the interesting and challenging aspects of the project, and how to build new extensions.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that, learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Your host is Tobias Macey and today I’m interviewing Kevin Doran and Andy LoPresto about Apache NiFi

Interview

Introduction How did you get involved in the area of data management? Can you start by explaining what NiFi is? What is the motivation for building a GUI as the primary interface for the tool when the current trend is to represent everything as code? How did you get involved with the project?

Where does it sit in the broader landscape of data tools?

Does the data that is processed by NiFi flow through the servers that it is running on (à la Spark/Flink/Kafka), or does it orchestrate actions on other systems (à la Airflow/Oozie)?

How do you manage versioning and backup of data flows, as well as promoting them between environments?

One of the advertised features is tracking provenance for data flows that are managed by NiFi. How is that data collected and managed?

What types of reporting are available across this information?

What are some of the use cases or requirements that lend themselves well to being solved by NiFi?

When is NiFi the wrong choice?

What is involved in deploying and scaling a NiFi installation?

What are some of the system/network parameters that should be considered? What are the scaling limitations?

What have you found to be some of the most interesting, unexpected, and/or challenging aspects of building and maintaining the NiFi project and community? What do you have planned for the future of NiFi?

Contact Info

Kevin Doran

@kevdoran on Twitter Email

Andy LoPresto

@yolopey on Twitter Email

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

NiFi HortonWorks DataFlow HortonWorks Apache Software Foundation Apple CSV XML JSON Perl Python Internet Scale Asset Management Documentum DataFlow NSA (National Security Agency) 24 (TV Show) Technology Transfer Program Agile Software Development Waterfall Spark Flink Kafka Oozie Luigi Airflow FluentD ETL (Extract, Transform, and Load) ESB (Enterprise Service Bus) MiNiFi Java C++ Provenance Kubernetes Apache Atlas Data Governance Kibana K-Nearest Neighbors DevOps DSL (Domain Specific Language) NiFi Registry Artifact Repository Nexus NiFi CLI Maven Archetype IoT Docker Backpressure NiFi Wiki TLS (Transport Layer Security) Mozilla TLS Observatory NiFi Flow Design System Data Lineage GDPR (General Data Protection Regulation)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast

Introducing the MySQL 8 Document Store

Learn the new Document Store feature of MySQL 8 and build applications around a mix of the best features from SQL and NoSQL database paradigms. Don’t allow yourself to be forced into one paradigm or the other, but combine both approaches by using the Document Store. MySQL 8 was designed from the beginning to bridge the gap between NoSQL and SQL. Oracle recognizes that many solutions need the capabilities of both. More specifically, developers need to store objects as loose collections of schema-less documents, but those same developers also need the ability to run structured queries on their data. With MySQL 8, you can do both! Introducing the MySQL 8 Document Store presents new tools and features that make creating a hybrid database solution far easier than ever before. This book covers the vitally important MySQL Document Store, the new X Protocol for developing applications, and a new client shell called the MySQL Shell. Also covered are supporting technologies and concepts such as JSON, schema-less documents, and more. The book gives insight into how features work and how to apply them to get the most out of your MySQL experience. The book covers topics such as the headline feature in MySQL 8, MySQL's answer to NoSQL, and the new APIs and client protocols. What You'll Learn: • Create NoSQL-style applications by using the Document Store • Mix the NoSQL and SQL approaches by using each to its best advantage in a hybrid solution • Work with the new X Protocol for application connectivity in MySQL 8 • Master the new X Developer Application Programming Interfaces • Combine SQL and JSON in the same database and application • Migrate existing applications to MySQL Document Store. Who This Book Is For: Developers and database professionals wanting to learn about the most profound paradigm-changing features of the MySQL 8 Document Store
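The "combine SQL and JSON" idea can also be sketched over the classic protocol, using MySQL Connector/Python and the ->> JSON extraction operator (MySQL 5.7+); the credentials, table, and field names are invented for the example:

```python
import mysql.connector

# Placeholder credentials; any MySQL 5.7+ server supports the JSON type.
conn = mysql.connector.connect(
    user="root", password="secret", host="localhost", database="test"
)
cur = conn.cursor()

# A hybrid table: relational columns alongside a JSON document column.
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id INT PRIMARY KEY AUTO_INCREMENT,
        placed_at DATETIME,
        doc JSON
    )
""")
cur.execute(
    "INSERT INTO orders (placed_at, doc) VALUES (NOW(), %s)",
    ('{"customer": "acme", "items": [{"sku": "A1", "qty": 2}]}',),
)
conn.commit()

# ->> extracts a JSON value as text inside an ordinary SQL query.
cur.execute("SELECT id, doc->>'$.customer' FROM orders")
for order_id, customer in cur.fetchall():
    print(order_id, customer)
```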