talk-data.com

Topic

SQL

Structured Query Language (SQL)

database_language data_manipulation data_definition programming_language

1751 tagged

Activity Trend

107 peak/qtr (2020-Q1 to 2026-Q1)

Activities

1751 activities · Newest first

Hands-On SAS for Data Analysis

"Hands-On SAS for Data Analysis" is a practical guide that introduces you to the fundamentals of using SAS for managing and analyzing data effectively. Through a hands-on approach, you'll explore key topics such as data manipulation with SAS 4GL, SQL querying, and creating insightful visualizations and reports. By the end of the book, you'll not only have a robust understanding of SAS but also be prepared for the SAS certification exam. What this Book will help me do Effectively use SAS modules and tools for comprehensive data analysis tasks. Master SAS 4GL functions to perform advanced data manipulation and transformation. Leverage advanced SQL options within SAS to query and analyze datasets. Become proficient in writing SAS Macros to automate repetitive tasks efficiently. Produce professional reports and visualizations using SAS Output Delivery System. Author(s) None Gulati is a renowned expert in data analysis and business intelligence, with years of professional experience in leveraging SAS for enterprise solutions. An experienced trainer and technical author, None has a unique ability to simplify complex concepts. Through this book, None shares practical knowledge that aligns with industry needs and certification goals. Who is it for? This book is designed for data professionals seeking to enhance their skills in SAS programming and data analysis. Whether you're just starting out with SAS or aiming to pass the SAS certification exam, this book will provide valuable insights. Readers with basic knowledge of data management will find this guide especially beneficial.

Analytic SQL in SQL Server 2014/2016

Business Intelligence (BI) has emerged as a field that seeks to support managers in decision-making. It encompasses the techniques, methods, and tools for building analytically oriented IT solutions, referred to as OLAP (OnLine Analytical Processing). Within this field, SQL plays a leading role and continues to evolve to cover both transactional and analytical data management. This book discusses the functions provided by Microsoft® SQL Server 2014/2016 in terms of business intelligence. The analytic functions can be considered an enrichment of the SQL language: they combine a series of practical functions to answer complex analysis requests with all the simplicity, elegance, and acquired performance of the SQL language. Drawing on the author's wide experience in teaching and research, as well as insights from contacts in the industry, this book focuses on the issues and difficulties faced by academics (students and teachers) and professionals engaged in data analysis with the SQL Server 2014/2016 database management system.
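To give a flavor of the analytic functions the book covers, here is a minimal T-SQL sketch; the table and column names are hypothetical illustrations, not examples taken from the book:

```sql
-- Running total and per-customer rank via the OVER clause
-- (dbo.orders is a hypothetical table for illustration).
SELECT
    customer_id,
    order_date,
    amount,
    SUM(amount) OVER (PARTITION BY customer_id
                      ORDER BY order_date
                      ROWS UNBOUNDED PRECEDING) AS running_total,
    RANK() OVER (PARTITION BY customer_id
                 ORDER BY amount DESC) AS amount_rank
FROM dbo.orders;
```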

Introducing MySQL Shell: Administration Made Easy with Python

Use MySQL Shell, the first modern and advanced client for connecting to and interacting with MySQL. It supports SQL, Python, and JavaScript. That's right! You can write Python scripts and execute them within the shell interactively, or in batch mode. The level of automation available from Python combined with batch mode is especially helpful to those practicing DevOps methods in their database environments. Introducing MySQL Shell covers everything you need to know about MySQL Shell. You will learn how to use the shell for SQL, as well as the new application programming interfaces for working with a document store and even automating your management of MySQL servers using Python. The book includes a look at the supporting technologies and concepts such as JSON, schema-less documents, NoSQL, MySQL Replication, Group Replication, InnoDB Cluster, and more. MySQL Shell is the client that developers and database administrators have been waiting for. Far more powerful than the legacy client, MySQL Shell enables levels of automation that are useful not only for MySQL, but in the broader context of your career as well. Automate your work and build skills in one of the most in-demand languages. With MySQL Shell, you can do both!

What You'll Learn

Use MySQL Shell with the newest features in MySQL 8
Discover what a Document Store is and how to manage it with MySQL Shell
Configure Group Replication and InnoDB Cluster from MySQL Shell
Understand the new MySQL Python application programming interfaces
Write Python scripts for managing your data and the MySQL high availability features

Who This Book Is For

Developers and database professionals who want to automate their work and remain on the cutting edge of what MySQL has to offer. Anyone not happy with the limited automation capabilities of the legacy command-line client will find much to like in this book on the MySQL Shell that supports powerful automation through the Python scripting language.

SQL for Data Analytics

SQL for Data Analytics provides readers with the tools and knowledge to use SQL effectively for extracting, analyzing, and interpreting complex datasets. Whether you're working with time-series data, geospatial data, or textual data, this book combines insightful explanations with practical guidance to enhance your data analysis capabilities.

What this Book will help me do

Perform advanced statistical calculations using SQL window functions.
Develop and optimize queries for better performance and faster results.
Analyze and work with geospatial, time-series, and text datasets effectively.
Debug problematic SQL queries and ensure their correctness.
Create robust SQL pipelines and integrate them with other analytics tools.

Author(s)

The authors of SQL for Data Analytics, Upom Malik, Matt Goldwasser, and Benjamin Johnston, are seasoned professionals experienced in both the practical and theoretical aspects of SQL and data analysis. They bring their collective expertise to guide readers through the essentials and advanced usage of SQL in analytics.

Who is it for?

This book is aimed at database engineers aspiring to delve into analytics, backend developers wanting to improve their data handling skills, and data professionals aiming to enhance their SQL proficiency. A basic understanding of SQL and databases will help readers follow along and maximize their learning.
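As a rough illustration of the window-function material, a moving average over time-series data might look like the following; the sensor_readings table is an assumption made for the example:

```sql
-- Seven-row moving average and a distribution rank, both computed
-- with standard window functions (hypothetical table).
SELECT
    reading_date,
    value,
    AVG(value) OVER (ORDER BY reading_date
                     ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS moving_avg_7,
    PERCENT_RANK() OVER (ORDER BY value) AS value_pct_rank
FROM sensor_readings;
```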

Mastering SQL Server 2017

Leverage the power of SQL Server 2017 Integration Services to build data integration solutions with ease.

Key Features

Work with temporal tables to access information stored in a table at any time
Get familiar with the latest features in SQL Server 2017 Integration Services
Program and extend your packages to enhance their functionality

Book Description

Microsoft SQL Server 2017 uses the power of R and Python for machine learning and containerization-based deployment on Windows and Linux. By learning how to use the features of SQL Server 2017 effectively, you can build scalable apps and easily perform data integration and transformation. You'll start by brushing up on the features of SQL Server 2017. This Learning Path will then demonstrate how you can use Query Store, columnstore indexes, and In-Memory OLTP in your apps. You'll also learn to integrate Python code in SQL Server and graph database implementations for development and testing. Next, you'll get up to speed with designing and building SQL Server Integration Services (SSIS) data warehouse packages using SQL Server Data Tools. Toward the concluding chapters, you'll discover how to develop SSIS packages designed to maintain a data warehouse using the data flow and other control flow tasks. By the end of this Learning Path, you'll be equipped with the skills you need to design efficient, high-performance database applications with confidence. This Learning Path includes content from the following Packt books: SQL Server 2017 Developer's Guide by Milos Radivojevic, Dejan Sarka, et al., and SQL Server 2017 Integration Services Cookbook by Christian Cote, Dejan Sarka, et al.

What you will learn

Use columnstore indexes to make storage and performance improvements
Extend database design solutions using temporal tables
Exchange JSON data between applications and SQL Server
Migrate historical data to Microsoft Azure by using Stretch Database
Design the architecture of a modern Extract, Transform, and Load (ETL) solution
Implement ETL solutions using Integration Services for both on-premises and Azure data

Who this book is for

This Learning Path is for database developers and solution architects looking to develop ETL solutions with SSIS, and explore the new features in SSIS 2017. Advanced analysis practitioners, business intelligence developers, and database consultants dealing with performance tuning will also find this book useful. Basic understanding of database concepts and T-SQL is required to get the best out of this Learning Path.
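The temporal tables named in the key features can be sketched in a few lines; this is illustrative SQL Server 2016+ syntax with hypothetical names, not code from the Learning Path:

```sql
-- A system-versioned temporal table: SQL Server maintains the
-- ValidFrom/ValidTo period columns and the history table.
CREATE TABLE dbo.Product
(
    ProductID int NOT NULL PRIMARY KEY CLUSTERED,
    Price     money NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));

-- Read the table as it looked at a point in time.
SELECT ProductID, Price
FROM dbo.Product
FOR SYSTEM_TIME AS OF '2017-06-01T00:00:00';
```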

Beginning Oracle SQL for Oracle Database 18c: From Novice to Professional

Start developing with Oracle SQL. This book is a one-stop introduction to everything you need to know about getting started developing an Oracle Database. You'll learn about foundational concepts, setting up a simple schema, adding data, reading data from the database, and making changes. No experience with databases is required to get started. Examples in the book are built around Oracle Live SQL, a freely available, online sandbox for practicing and experimenting with SQL statements, and Oracle Express Edition, a free version of Oracle Database that is available for download. A marquee feature of Beginning Oracle SQL for Oracle Database 18c is the small chapter size. Content is divided into easily digestible chunks that can be read and practiced in very short intervals of time, making this the ideal book for a busy professional to learn from. Even just a 15-20 minute block of free time can be put to good use. Author Ben Brumm begins by helping you understand what a database is, and getting you set up with a sandbox in which to practice the SQL that you are learning. From there, easily digestible chapters cover, point-by-point, the different aspects of writing queries to get data out of a database. You'll also learn about creating tables and getting data into the database. Crucial topics such as working with nulls and writing analytic queries are given the attention they deserve, helping you to avoid pitfalls when writing queries for production use.

What You'll Learn

Create, update, and delete tables in an Oracle database
Add, update, and delete data from those database tables
Query and view data stored in your database
Manipulate and transform data using built-in database functions and features
Correctly choose when to use Oracle-specific syntax and features

Who This Book Is For

Those new to Oracle who are planning to develop software using Oracle as the back-end data store. The book is also for those who are getting started in software development and realize they need to learn some kind of database language. Those who are learning software development on the side of their normal job, or learning it as a college student, who are ready to learn what a database is and how to use it will also find this book useful.
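One of the pitfalls the book flags, three-valued logic around nulls, can be shown in a few lines of Oracle SQL; the employees table and its columns are hypothetical:

```sql
-- NULL never matches with =; comparisons against NULL yield UNKNOWN.
SELECT emp_name
FROM   employees
WHERE  commission = NULL;   -- returns no rows, even for NULL commissions

-- Use IS NULL to test for missing values, and NVL to substitute one.
SELECT emp_name, NVL(commission, 0) AS commission
FROM   employees
WHERE  commission IS NULL;
```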

Definitive Guide to DAX, The: Business intelligence for Microsoft Power BI, SQL Server Analysis Services, and Excel, 2nd Edition

Now expanded and updated with modern best practices, this is the most complete guide to Microsoft's DAX language for business intelligence, data modeling, and analytics. Expert Microsoft BI consultants Marco Russo and Alberto Ferrari help you master everything from table functions through advanced code and model optimization. You'll learn exactly what happens under the hood when you run a DAX expression, and use this knowledge to write fast, robust code. This edition focuses on examples you can build and run with the free Power BI Desktop, and helps you make the most of the powerful syntax of variables (VAR) in Power BI, Excel, or Analysis Services. Want to leverage all of DAX's remarkable capabilities? This no-compromise "deep dive" is exactly what you need.

Related Content: Introduction to Microsoft Power BI (Video), Data Analysis Fundamentals with Excel (Video)

Perform powerful data analysis with DAX for Power BI, SQL Server, and Excel:
· Master core DAX concepts, including calculated columns, measures, and calculation groups
· Work efficiently with basic and advanced table functions
· Understand evaluation contexts and the CALCULATE and CALCULATETABLE functions
· Perform time-based calculations
· Use calculation groups and calculation items
· Use syntax of variables (VAR) to write more readable, maintainable code
· Express diverse and unusual relationships with DAX, including many-to-many relationships and bidirectional filters
· Master advanced optimization techniques, and improve performance in aggregations
· Optimize data models to achieve better compression
· Measure DAX query performance with DAX Studio and learn how to optimize your DAX

Data Warehousing with Greenplum, 2nd Edition

Data professionals are confronting the most disruptive change since relational databases appeared in the 1980s. SQL is still a major tool for data analytics, but conventional relational database management systems can't handle the increasing size and complexity of today's datasets. This updated edition teaches you best practices for Greenplum Database, the open source massively parallel processing (MPP) database that accommodates large sets of nonrelational and relational data. Marshall Presser, field CTO at Pivotal, introduces Greenplum's approach to data analytics and data-driven decisions, beginning with its shared-nothing architecture. IT managers, developers, data analysts, system architects, and data scientists will all gain from exploring data organization and storage, data loading, running queries, and learning to perform analytics in the database. Discover how MPP and Greenplum will help you go beyond the traditional data warehouse.

This ebook covers:

Greenplum features, use case examples, and techniques for optimizing use
Four Greenplum deployment options to help you balance security, cost, and time to usability
Why each networked node in Greenplum's architecture includes an independent operating system, memory, and storage
Additional tools for monitoring, managing, securing, and optimizing query responses in the Pivotal Greenplum commercial database
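For a sense of what the shared-nothing architecture looks like in practice, here is a minimal sketch of Greenplum DDL; the table definition is a hypothetical example:

```sql
-- The distribution key determines how rows are hashed across
-- Greenplum's segment nodes (shared-nothing MPP).
CREATE TABLE sales (
    sale_id     bigint,
    customer_id bigint,
    amount      numeric(12,2),
    sale_date   date
)
DISTRIBUTED BY (customer_id);
```

Choosing a distribution key that spreads rows evenly, and that co-locates rows commonly joined together, is the central design decision this architecture imposes.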

Professional Azure SQL Database Administration - Second Edition

Professional Azure SQL Database Administration serves as your comprehensive guide to mastering the management and optimization of cloud-based Azure SQL Database solutions. Covering the differences and unique features of Azure SQL Database compared to on-premises SQL Server, this book offers a clear roadmap to efficiently migrate, secure, scale, and maintain these databases in the cloud.

What this Book will help me do

Understand the differences between Azure SQL Database and on-premises SQL Server and their practical implications.
Learn techniques to migrate existing SQL Server databases to Azure SQL Database seamlessly.
Discover advanced ways to optimize database performance and scalability leveraging cloud capabilities.
Master security strategies for Azure SQL databases, including backup, disaster recovery, and automated tasks.
Develop proficiency in using tools such as PowerShell to automate and manage routine database administration tasks.

Author(s)

Ahmad Osama is an experienced database professional and author specializing in SQL Server and Azure SQL Database administration. With a robust background in database migration, maintenance, and performance tuning, Ahmad expertly bridges the gap between theory and practice. His approachable writing style makes complex database topics accessible to professionals seeking to expand their expertise.

Who is it for?

Professional Azure SQL Database Administration is an essential resource for database administrators, developers, and IT professionals keen to develop their knowledge of Azure SQL Database administration and cloud database solutions. Whether you're transitioning from traditional SQL Server environments or looking to optimize your database strategies in the cloud, this book caters to professionals with intermediate to advanced experience in database management and programming with SQL.
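As one small example of the cloud-specific administration the book deals with, a single Azure SQL database can be scaled with plain T-SQL; the database name below is hypothetical:

```sql
-- Change the service objective (performance tier) of an
-- Azure SQL database; SalesDb is a hypothetical database name.
ALTER DATABASE SalesDb
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');
```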

Pro SQL Server 2019 Wait Statistics: A Practical Guide to Analyzing Performance in SQL Server

Here is a practical guide for analyzing and troubleshooting SQL Server performance using wait statistics. Learn to identify precisely why your queries are running slowly. Measure the amount of time consumed by each bottleneck so that you can focus attention on making the largest improvements first. This edition is updated to cover analysis of wait statistics inside Query Store, the CXCONSUMER wait event, and to be current with SQL Server 2019. Whether you are new to wait statistics, or already familiar with them, this book provides a deeper understanding of how wait statistics are generated and what they can mean for your SQL Server instance's performance. Pro SQL Server 2019 Wait Statistics goes beyond the most common wait types into the more complex and performance-threatening wait types. You'll learn about per-query wait statistics and session-based wait statistics, and the types of problems they each can help you solve. The different wait types are categorized by their area of impact, including CPU, IO, Lock, and many more. The book presents clear examples to help you gain practical knowledge of why and how specific wait times increase or decrease, and how they impact your SQL Server's performance. After reading this book you won't want to be without the valuable information that wait statistics provide regarding where you should be spending your limited tuning time to maximize performance and value to your business.

What You'll Learn

Identify resource bottlenecks in a running SQL Server instance
Locate wait statistics information inside DMVs and Query Store
Analyze the root cause of sub-optimal performance
Diagnose I/O contention and locking contention
Benchmark SQL Server performance
Lower the wait time of the most popular wait types

Who This Book Is For

Database administrators who want to identify and resolve performance bottlenecks, those who want to learn more about how the SQL Server engine accesses and uses resources inside SQL Server, and administrators concerned with achieving—and knowing they have achieved—optimal performance
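The DMV at the heart of this approach can be queried directly; here is a minimal top-waits sketch, with an illustrative (not exhaustive) list of benign waits to exclude:

```sql
-- Top waits by cumulative wait time; signal_wait_time_ms is the
-- portion spent waiting for CPU after the resource became free.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'XE_TIMER_EVENT', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```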

Summary

In recent years the traditional approach to building data warehouses has shifted from transforming records before loading, to transforming them afterwards. As a result, the tooling for those transformations needs to be reimagined. The data build tool (dbt) is designed to bring battle tested engineering practices to your analytics pipelines. By providing an opinionated set of best practices it simplifies collaboration and boosts confidence in your data teams. In this episode Drew Banin, creator of dbt, explains how it got started, how it is designed, and how you can start using it today to create reliable and well-tested reports in your favorite data warehouse.
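For readers new to dbt, a model is simply a SELECT statement in a .sql file, with Jinja expressions such as ref() compiled away before execution; a minimal sketch (model and table names hypothetical):

```sql
-- models/customer_orders.sql (a hypothetical dbt model).
-- {{ ref('stg_orders') }} compiles to the upstream model's
-- schema-qualified name and records the dependency for the DAG.
SELECT
    customer_id,
    COUNT(*)    AS order_count,
    SUM(amount) AS lifetime_value
FROM {{ ref('stg_orders') }}
GROUP BY customer_id
```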

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!

Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.

You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.

To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Your host is Tobias Macey and today I’m interviewing Drew Banin about DBT, the Data Build Tool, a toolkit for building analytics the way that developers build applications.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what DBT is and your motivation for creating it?
Where does it fit in the overall landscape of data tools and the lifecycle of data in an analytics pipeline?
Can you talk through the workflow for someone using DBT?
One of the useful features of DBT for stability of analytics is the ability to write and execute tests. Can you explain how those are implemented?
The packaging capabilities are beneficial for enabling collaboration. Can you talk through how the packaging system is implemented?

Are these packages driven by Fishtown Analytics or the dbt community?

What are the limitations of modeling everything as a SELECT statement?
Making SQL code reusable is notoriously difficult. How does the Jinja templating of DBT address this issue and what are the shortcomings?

What are your thoughts on higher level approaches to SQL that compile down to the specific statements?

Can you explain how DBT is implemented and how the design has evolved since you first began working on it?
What are some of the features of DBT that are often overlooked which you find particularly useful?
What are some of the most interesting/unexpected/innovative ways that you have seen DBT used?
What are the additional features that the commercial version of DBT provides?
What are some of the most useful or challenging lessons that you have learned in the process of building and maintaining DBT?
When is it the wrong choice?
What do you have planned for the future of DBT?

Contact Info

Email
@drebanin on Twitter
drebanin on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

DBT, Fishtown Analytics, 8Tracks Internet Radio, Redshift, Magento, Stitch Data, Fivetran, Airflow, Business Intelligence, Jinja template language, BigQuery, Snowflake, Version Control, Git, Continuous Integration, Test Driven Development, Snowplow Analytics

Podcast Episode

dbt-utils, We Can Do Better Than SQL (blog post from EdgeDB), EdgeDB, Looker, LookML

Podcast Interview

Presto DB

Podcast Interview

Spark SQL, Hive, Azure SQL Data Warehouse, Data Warehouse, Data Lake, Data Council Conference, Slowly Changing Dimensions, dbt Archival, Mode Analytics, Periscope BI, dbt docs, dbt repository

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Pro Oracle SQL Development: Best Practices for Writing Advanced Queries

Write SQL statements that are more powerful, simpler, and faster using Oracle SQL and its full range of features. This book provides a clearer way of thinking about SQL by building sets, and provides practical advice for using complex features while avoiding anti-patterns that lead to poor performance and wrong results. Relevant theories, real-world best practices, and style guidelines help you get the most out of Oracle SQL. Pro Oracle SQL Development is for anyone who already knows Oracle SQL and is ready to take their skills to the next level. Many developers, analysts, testers, and administrators use Oracle databases frequently, but their queries are limited because they do not have the knowledge, experience, or right environment to help them take full advantage of Oracle's advanced features. This book will inspire you to achieve more with your Oracle SQL statements through tips for creating your own style for writing simple, yet powerful, SQL. It teaches you how to think about and solve performance problems in Oracle SQL, and covers advanced topics and shows you how to become an Oracle expert.

What You'll Learn

Understand the power of Oracle SQL and where to apply it
Create a database development environment that is simple, scalable, and conducive to learning
Solve complex problems that were previously solved in a procedural language
Write large Oracle SQL statements that are powerful, simple, and fast
Apply coding styles to make your SQL statements more readable
Tune large Oracle SQL statements to eliminate and avoid performance problems

Who This Book Is For

Developers, testers, analysts, and administrators who want to harness the full power of Oracle SQL to solve their problems as simply and as quickly as possible. For traditional database professionals the book offers new ways of thinking about the language they have used for so long. For modern full stack developers the book explains how a database can be much more than simply a place to store data.
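In the spirit of the book's set-based approach, here is a sketch of replacing a row-by-row loop with a single statement; all table names are hypothetical:

```sql
-- One set-based MERGE instead of a procedural loop over rows.
MERGE INTO account_balances b
USING (
    SELECT account_id, SUM(amount) AS delta
    FROM   pending_transactions
    GROUP  BY account_id
) t
ON (b.account_id = t.account_id)
WHEN MATCHED THEN
    UPDATE SET b.balance = b.balance + t.delta;
```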

Learn T-SQL Querying

Dive into the world of T-SQL with 'Learn T-SQL Querying,' a book designed to enhance your database querying skills and help you master Microsoft's SQL Server and Azure SQL Database. Through this guide, you'll explore best practices, learn advanced techniques for analyzing execution plans, and create efficient T-SQL queries.

What this Book will help me do

Understand the fundamentals of query optimization to write performant T-SQL queries.
Analyze query execution plans to identify and troubleshoot performance issues effectively.
Utilize dynamic management views and functions to monitor and optimize query performance.
Implement features like Query Store to streamline troubleshooting and maintain performance changes.
Avoid common T-SQL anti-patterns and embrace best practices to ensure scalable query design.

Author(s)

Pedro Lopes and Pam Lahoud bring years of expertise in SQL Server and database systems. Pedro has extensive experience as a database engineer, where he specializes in query processing and optimization. Pam has a deep understanding of T-SQL development, focusing on practical solutions. Together, they provide in-depth insights and actionable advice.

Who is it for?

This book is perfect for database administrators, database developers, and data analysts at any level looking to improve their T-SQL expertise. Beginners will gain foundational skills in T-SQL querying, while experienced professionals will find advanced strategies for optimizing SQL Server performance. Readers aiming to master both practical querying and troubleshooting will benefit the most.
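The Query Store feature mentioned above is enabled per database and exposed through catalog views; a minimal sketch of finding the slowest queries (the objects are standard SQL Server catalog views, the query itself is illustrative):

```sql
-- Turn on Query Store for the current database, then list the
-- five plans with the highest average duration.
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

SELECT TOP (5)
    q.query_id,
    rs.avg_duration,
    rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
```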

Data Science and Engineering at Enterprise Scale

As enterprise-scale data science sharpens its focus on data-driven decision making and machine learning, new tools have emerged to help facilitate these processes. This practical ebook shows data scientists and enterprise developers how the notebook interface, Apache Spark, and other collaboration tools are particularly well suited to bridge the communication gap between their teams. Through a series of real-world examples, author Jerome Nilmeier demonstrates how to generate a model that enables data scientists and developers to share ideas and project code. You'll learn how data scientists can approach real-world business problems with Spark and how developers can then implement the solution in a production environment.

Dive deep into data science technologies, including Spark, TensorFlow, and the Jupyter Notebook
Learn how Spark and Python notebooks enable data scientists and developers to work together
Explore how the notebook environment works with Spark SQL for structured data
Use notebooks and Spark as a launchpad to pursue supervised, unsupervised, and deep learning data models
Learn additional Spark functionality, including graph analysis and streaming
Explore the use of analytics in the production environment, particularly when creating data pipelines and deploying code
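To illustrate the notebook-plus-Spark-SQL workflow the ebook describes, structured files can be queried directly from SQL; the path and view name here are hypothetical:

```sql
-- Spark SQL can query Parquet files in place and expose them
-- as a temporary view for further analysis in the notebook.
CREATE OR REPLACE TEMPORARY VIEW events AS
SELECT * FROM parquet.`/data/events/2019/`;

SELECT event_type, COUNT(*) AS n
FROM events
GROUP BY event_type
ORDER BY n DESC;
```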

SQL All-In-One For Dummies, 3rd Edition

The latest on SQL databases. SQL All-In-One For Dummies, 3rd Edition, is a one-stop shop for everything you need to know about SQL and SQL-based relational databases. Everyone from database administrators to application programmers and the people who manage them will find clear, concise explanations of the SQL language and its many powerful applications. With the ballooning amount of data out there, more and more businesses, large and small, are moving from spreadsheets to SQL databases like Access, Microsoft SQL Server, Oracle databases, MySQL, and PostgreSQL. This compendium of information covers designing, developing, and maintaining these databases.

Cope with any issue that arises in SQL database creation and management
Get current on the newest SQL updates and capabilities
Reference information on querying SQL-based databases in the SQL language
Understand relational databases and their importance to today's organizations

SQL All-In-One For Dummies is a timely update to the popular reference for readers who want detailed information about SQL databases and queries.

Hands-On Big Data Analytics with PySpark

Dive into the exciting world of big data analytics with 'Hands-On Big Data Analytics with PySpark'. This practical guide offers you the tools and knowledge to tackle massive datasets using PySpark. By exploring real-world examples, you'll learn to unleash the power of distributed systems to analyze and manipulate data at scale.

What this Book will help me do

Master using PySpark to handle large and complex datasets efficiently and effectively.
Develop skills to optimize Spark programs using best practices like reducing shuffle operations.
Learn to set up a PySpark environment, process data from platforms like HDFS, Hive, and S3.
Enhance your data analytics capabilities by implementing powerful SQL queries and data visualizations.
Understand testing and debugging techniques to build reliable, production-quality data pipelines.

Author(s)

Authored by Rudy Lai and Bartłomiej Potaczek, both seasoned data engineers and authors in the big data field. Rudy and Bartłomiej bring their extensive experience working with distributed systems and scalable data architectures into this book. Their approach is hands-on, focusing on real-world applications and best practices.

Who is it for?

This book is tailored for data scientists, engineers, and developers eager to advance their big data analytics capabilities. Whether you're new to big data or experienced with other analytics frameworks, this book will equip you with practical knowledge to utilize PySpark for scalable data solutions.

PROC SQL, 3rd Edition

PROC SQL: Beyond the Basics Using SAS®, Third Edition, is a step-by-step, example-driven guide that helps readers master the language of PROC SQL. Packed with analysis and examples illustrating an assortment of PROC SQL options, statements, and clauses, this book not only covers all the basics, but it also offers extensive guidance on complex topics such as set operators and correlated subqueries. Programmers at all levels will appreciate Kirk Lafler's easy-to-follow examples, clear explanations, and handy tips to extend their knowledge of PROC SQL.

This third edition explores new and powerful features in SAS® 9.4, including topics such as:

IFC and IFN functions
nearest neighbor processing
the HAVING clause
indexes

It also features two completely new chapters on fuzzy matching and data-driven programming. Delving into the workings of PROC SQL with greater analysis and discussion, PROC SQL: Beyond the Basics Using SAS®, Third Edition, explores this powerful database language using discussion and numerous real-world examples.
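Because PROC SQL closely follows ANSI SQL, topics such as the HAVING clause and correlated subqueries look much as they do elsewhere; a sketch with hypothetical tables, showing only the embedded SQL without the surrounding PROC SQL/QUIT wrapper:

```sql
-- Departments whose average salary exceeds a threshold.
SELECT dept, AVG(salary) AS avg_salary
FROM   staff
GROUP  BY dept
HAVING AVG(salary) > 50000;

-- Correlated subquery: staff paid above their department average.
SELECT s.name, s.salary
FROM   staff s
WHERE  s.salary > (SELECT AVG(s2.salary)
                   FROM   staff s2
                   WHERE  s2.dept = s.dept);
```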

PySpark SQL Recipes: With HiveQL, Dataframe and Graphframes

Carry out data analysis with PySpark SQL, graphframes, and graph data processing using a problem-solution approach. This book provides solutions to problems related to dataframes, data manipulation, summarization, and exploratory analysis. You will improve your skills in graph data analysis using graphframes and see how to optimize your PySpark SQL code. PySpark SQL Recipes starts with recipes on creating dataframes from different types of data source, data aggregation and summarization, and exploratory data analysis using PySpark SQL. You'll also discover how to solve problems in graph analysis using graphframes. On completing this book, you'll have ready-made code for all your PySpark SQL tasks, including creating dataframes using data from different file formats as well as from SQL or NoSQL databases.

What You Will Learn

Understand PySpark SQL and its advanced features
Use SQL and HiveQL with PySpark SQL
Work with structured streaming
Optimize PySpark SQL
Master graphframes and graph processing

Who This Book Is For

Data scientists, Python programmers, and SQL programmers.

Summary

Machine learning is a class of technologies that promise to revolutionize business. Unfortunately, it can be difficult to identify and execute on ways that it can be used in large companies. Kevin Dewalt founded Prolego to help Fortune 500 companies build, launch, and maintain their first machine learning projects so that they can remain competitive in our landscape of constant change. In this episode he discusses why machine learning projects require a new set of capabilities, how to build a team from internal and external candidates, and how an example project progressed through each phase of maturity. This was a great conversation for anyone who wants to understand the benefits and tradeoffs of machine learning for their own projects and how to put it into practice.

Introduction

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.

To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.

Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Your host is Tobias Macey and today I’m interviewing Kevin Dewalt about his experiences at Prolego, building machine learning projects for Fortune 500 companies.

Interview

Introduction
How did you get involved in the area of data management?
For the benefit of software engineers and team leaders who are new to machine learning, can you briefly describe what machine learning is and why is it relevant to them?
What is your primary mission at Prolego and how did you identify, execute on, and establish a presence in your particular market?

How much of your sales process is spent on educating your clients about what AI or ML are and the benefits that these technologies can provide?

What have you found to be the technical skills and capacity necessary for being successful in building and deploying a machine learning project?

When engaging with a client, what have you found to be the most common areas of technical capacity or knowledge that are needed?

Everyone talks about a talent shortage in machine learning. Can you suggest a recruiting or skills development process for companies which need to build out their data engineering practice?
What challenges will teams typically encounter when creating an efficient working relationship between data scientists and data engineers?
Can you briefly describe a successful project of developing a first ML model and putting it into production?

What is the breakdown of how much time was spent on different activities such as data wrangling, model development, and data engineering pipeline development?
When releasing to production, can you share the types of metrics that you track to ensure the health and proper functioning of the models?
What does a deployable artifact for a machine learning/deep learning application look like?

What basic technology stack is necessary for putting the first ML models into production?

How does the build vs. buy debate break down in this space and what products do you typically recommend to your clients?

What are the major risks associated with deploying ML models and how can a team mitigate them?
Suppose a software engineer wants to break into ML. What data engineering skills would you suggest they learn? How should they position themselves for the right opportunity?

Contact Info

Email: Kevin Dewalt [email protected] and Russ Rands [email protected]
Connect on LinkedIn: Kevin Dewalt and Russ Rands
Twitter: @kevindewalt

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Prolego, Download our book: Become an AI Company in 90 Days, Google Rules Of ML, AI Winter, Machine Learning, Supervised Learning, O’Reilly Strata Conference, GE Rebranding Commercials, Jez Humble: Stop Hiring Devops Experts (And Start Growing Them), SQL, ORM, Django, RoR, Tensorflow, PyTorch, Keras, Data Engineering Podcast Episode About Data Teams, DevOps For Data Teams – DevOps Days Boston Presentation by Tobias, Jupyter Notebook, Data Engineering Podcast: Notebooks at Netflix, Pandas

Podcast Interview

Joel Grus

JupyterCon Presentation, Data Science From Scratch

Expensify, Airflow

James Meickle Interview

Git, Jenkins, Continuous Integration, Practical Deep Learning For Coders course by Jeremy Howard, Data Carpentry

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Apache Spark Quick Start Guide

Dive into the world of scalable data processing with the "Apache Spark Quick Start Guide." This book offers a foundational introduction to Spark, empowering readers to harness its capabilities for big data processing. With clear explanations and hands-on examples, you'll learn to implement Spark applications that handle complex data tasks efficiently.

What this Book will help me do

Understand and implement Spark's RDDs and DataFrame APIs to process large datasets effectively.
Set up a local development environment for Spark-based projects.
Develop skills to debug and optimize slow-performing Spark applications.
Harness built-in modules of Spark for SQL, streaming, and machine learning applications.
Adopt best practices and optimization techniques for high-performance Spark applications.

Author(s)

Shrey Mehrotra is a seasoned software developer with expertise in big data technologies, particularly Apache Spark. With years of hands-on industry experience, Shrey focuses on making complex technical concepts accessible to all. Through his writing, he aims to share clear, practical guidance for developers of all levels.

Who is it for?

This guide is perfect for big data enthusiasts and professionals looking to learn Apache Spark's capabilities from scratch. It's aimed at data engineers interested in optimizing application performance and data scientists wanting to integrate machine learning with Spark. A basic familiarity with either Scala, Python, or Java is recommended.