

O'Reilly Data Engineering Books

2001-10-19 – 2027-05-25 · O'Reilly

Activities tracked: 240

Collection of O'Reilly books on Data Engineering.

Filtering by: AI/ML

Sessions & talks

Showing 76–100 of 240 · Newest first

Data for All

Do you know what happens to your personal data when you are browsing, buying, or using apps? Discover how your data is harvested and exploited, and what you can do to access, delete, and monetize it. Data for All empowers everyone, from tech experts to the general public, to control how third parties use personal data.

Read this eye-opening book to learn:

- The types of data you generate with every action, every day
- Where your data is stored, who controls it, and how much money they make from it
- How you can manage access and monetization of your own data
- Restricting data access to only companies and organizations you want to support
- The history of how we think about data, and why that is changing
- The new data ecosystem being built right now for your benefit

The data you generate every day is the lifeblood of many large companies, and they make billions of dollars using it. In Data for All, bestselling author John K. Thompson outlines how this one-sided data economy is about to undergo a dramatic change. Thompson pulls back the curtain to reveal the true nature of data ownership, and how you can turn your data from a revenue stream for companies into a financial asset for your benefit.

About the Technology
Do you know what happens to your personal data when you're browsing and buying? New global laws are turning the tide on companies who make billions from your clicks, searches, and likes. This eye-opening book provides an inspiring vision of how you can take back control of the data you generate every day.

About the Book
Data for All gives you a step-by-step plan to transform your relationship with data and start earning a "data dividend": hundreds or thousands of dollars paid out simply for your online activities. You'll learn how to oversee who accesses your data, how much different types of data are worth, and how to keep private details private.

What's Inside

- The types of data you generate with every action, every day
- How you can manage access and monetization of your own data
- The history of how we think about data, and why that is changing
- The new data ecosystem being built right now for your benefit

About the Reader
For anyone who is curious or concerned about how their data is used. No technical knowledge required.

About the Author
John K. Thompson is an international technology executive with over 37 years of experience in the fields of data, advanced analytics, and artificial intelligence.

Quotes
"An honest, direct, pull-no-punches source on one of the most important personal issues of our time... I changed some of my own behaviors after reading the book, and I suggest you do so as well. You have more to lose than you may think." - From the Foreword by Thomas H. Davenport, author of Competing on Analytics and The AI Advantage

"A must-read for anyone interested in the future of data. It helped me understand the reasons behind the current data ecosystem and the laws that are shaping its future. A great resource for both professionals and individuals. I highly recommend it." - Ravit Jain, Founder & Host of The Ravit Show, Data Science Evangelist

Building a Next-Gen SOC with IBM QRadar

In "Building a Next-Gen SOC with IBM QRadar", you'll learn how to utilize IBM QRadar to create an efficient Security Operations Center (SOC). The book covers deploying QRadar in various environments, understanding its architecture, and leveraging its powerful features to detect and respond to real-time threats with confidence, ultimately enabling advanced security practices. What this Book will help me do Understand and deploy IBM QRadar in different environments, including on-premises and cloud. Leverage QRadar's features to analyze network traffic, detect threats, and enhance security monitoring. Effectively use QRadar rules and searches to identify, correlate, and respond to security events. Integrate AI technologies with QRadar to automate and improve threat management processes. Maintain, troubleshoot, and scale the QRadar environment to meet evolving security needs. Author(s) Ashish Kothekar is an experienced cybersecurity specialist with a deep understanding of IBM QRadar and SOC operations. He has dedicated his career to helping organizations implement effective security practices. Through his accessible writing and detailed examples, he aims to empower security professionals to maximize their use of QRadar. Who is it for? This book is perfect for SOC analysts, security engineers, and cybersecurity enthusiasts who want to enhance their security skills. Readers should have a basic knowledge of networking and cybersecurity principles. If you're looking to deepen your understanding of IBM QRadar and build a next-gen SOC, this book is for you.

IBM Power System AC922 Technical Overview and Introduction

This IBM® Redpaper™ publication is a comprehensive guide that covers the IBM Power System AC922 server (8335-GTH and 8335-GTX models). The Power AC922 server is the next generation of the IBM POWER® processor-based systems, which are designed for deep learning (DL) and artificial intelligence (AI), high-performance analytics, and high-performance computing (HPC).

This paper introduces the major innovative Power AC922 server features and their relevant functions:

- Powerful IBM POWER9™ processors that offer up to 22 cores at up to 2.80 GHz (3.10 GHz turbo) performance with up to 2 TB of memory.
- IBM Coherent Accelerator Processor Interface (CAPI) 2.0, IBM OpenCAPI™, and second-generation NVIDIA NVLink 2.0 technology for exceptional processor to accelerator intercommunication.
- Up to six dedicated NVIDIA Tesla V100 graphics processing units (GPUs).

This publication is for professionals who want to acquire a better understanding of IBM Power Systems™ products and is intended for the following audiences:

- Clients
- Sales and marketing professionals
- Technical support professionals
- IBM Business Partners
- Independent software vendors (ISVs)

This paper expands the set of IBM Power Systems documentation by providing a desktop reference that offers a detailed technical description of the Power AC922 server. This paper does not replace the current marketing materials and configuration tools. It is intended as an extra source of information that, together with existing sources, can be used to enhance your knowledge of IBM server solutions.

IBM FlashSystem 7300 Product Guide

This IBM® Redpaper Product Guide describes the IBM FlashSystem® 7300 solution, a next-generation IBM FlashSystem control enclosure. It combines the performance of flash and a Non-Volatile Memory Express (NVMe)-optimized architecture with the reliability and innovation of IBM FlashCore® technology and the rich feature set and high availability (HA) of IBM Spectrum® Virtualize.

To take advantage of artificial intelligence (AI)-enhanced applications, real-time big data analytics, and cloud architectures that require higher levels of system performance and storage capacity, enterprises around the globe are rapidly moving to modernize established IT infrastructures. However, for many organizations, staff resources and expertise are limited, and cost-efficiency is a top priority. These organizations have important investments in existing infrastructure that they want to maximize. They need enterprise-grade solutions that optimize cost-efficiency while simplifying the pathway to modernization. IBM FlashSystem 7300 is designed specifically for these requirements and use cases. It also delivers cyber resilience without compromising application performance.

IBM FlashSystem 7300 provides a rich set of software-defined storage (SDS) features that are delivered by IBM Spectrum Virtualize, including the following examples:

- Data reduction and deduplication
- Dynamic tiering
- Thin-provisioning
- Snapshots
- Cloning
- Replication and data copy services
- Cyber resilience
- Transparent Cloud Tiering (TCT)
- IBM HyperSwap®, including 3-site replication for high availability
- Scale-out and scale-up configurations that further enhance capacity and throughput for better availability

With the release of IBM Spectrum Virtualize V8.5, extra functions and features are available, including support for new third-generation IBM FlashCore Modules (NVMe-type drives within the control enclosure) and 100 Gbps Ethernet adapters that provide NVMe Remote Direct Memory Access (RDMA) options. New software features include GUI enhancements, security enhancements such as multifactor authentication and single sign-on, and Fibre Channel (FC) portsets.

Data Fabric and Data Mesh Approaches with AI: A Guide to AI-based Data Cataloging, Governance, Integration, Orchestration, and Consumption

Understand modern data fabric and data mesh concepts using AI-based self-service data discovery and delivery capabilities, a range of intelligent data integration styles, and automated unified data governance, all designed to deliver "data as a product" within hybrid cloud landscapes. This book teaches you how to successfully deploy state-of-the-art data mesh solutions and gain a comprehensive overview of how a data fabric architecture uses artificial intelligence (AI) and machine learning (ML) for automated metadata management and self-service data discovery and consumption. You will learn how data fabric and data mesh relate to other concepts such as DataOps, MLOps, AIDevOps, and more. Many examples are included to demonstrate how to modernize the consumption of data to enable a shopping-for-data (data as a product) experience.

By the end of this book, you will understand the data fabric concept and architecture as it relates to themes such as automated unified data governance and compliance, enterprise information architecture, AI and hybrid cloud landscapes, and intelligent cataloging and metadata management.

What You Will Learn

- Discover best practices and methods to successfully implement a data fabric architecture and data mesh solution
- Understand key data fabric capabilities, e.g., self-service data discovery, intelligent data integration techniques, intelligent cataloging and metadata management, and trustworthy AI
- Recognize the importance of data fabric to accelerate digital transformation and democratize data access
- Dive into important data fabric topics, addressing current data fabric challenges
- Conceive data fabric and data mesh concepts holistically within an enterprise context
- Become acquainted with the business benefits of data fabric and data mesh

Who This Book Is For
Anyone who is interested in deploying modern data fabric architectures and data mesh solutions within an enterprise, including IT and business leaders, data governance and data office professionals, data stewards and engineers, data scientists, and information and data architects. Readers should have a basic understanding of enterprise information architecture.

IBM Storage DS8900F Product Guide Release 9.3.2

This IBM® Redbooks Product Guide provides an overview of the features and functions that are available with the IBM Storage DS8900F models that run microcode Release 9.3.2 (Bundle 89.32/Licensed Machine Code 7.9.32). As of February 2023, the DS8900F with DS8000 Release 9.3.2 is the latest addition. The DS8900F is exclusively an all-flash system, and it is offered in three classes:

- IBM DS8980F (Analytic Class): Offers the best performance for organizations that want to expand their workload possibilities to artificial intelligence (AI), business intelligence, and machine learning.
- IBM DS8950F (Agility Class): Efficiently designed to consolidate all your mission-critical workloads for IBM zSystems, IBM LinuxONE, IBM Power Systems, and distributed environments under a single all-flash storage solution.
- IBM DS8910F (Flexibility Class): Delivers significant performance for midrange organizations that are looking to meet storage challenges with advanced functionality delivered as a single-rack solution.

Scaling Machine Learning with Spark

Learn how to build end-to-end scalable machine learning solutions with Apache Spark. With this practical guide, author Adi Polak introduces data and ML practitioners to creative solutions that supersede today's traditional methods. You'll learn a more holistic approach that takes you beyond specific requirements and organizational goals, allowing data and ML practitioners to collaborate and understand each other better.

Scaling Machine Learning with Spark examines several technologies for building end-to-end distributed ML workflows based on the Apache Spark ecosystem with Spark MLlib, MLflow, TensorFlow, and PyTorch. If you're a data scientist who works with machine learning, this book shows you when and why to use each technology.

You will:

- Explore machine learning, including distributed computing concepts and terminology
- Manage the ML lifecycle with MLflow
- Ingest data and perform basic preprocessing with Spark
- Explore feature engineering, and use Spark to extract features
- Train a model with MLlib and build a pipeline to reproduce it
- Build a data system to combine the power of Spark with deep learning
- Get a step-by-step example of working with distributed TensorFlow
- Use PyTorch to scale machine learning and its internal architecture
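
As a taste of the MLflow lifecycle management listed above, a minimal tracking sketch might look like the following; the experiment name, parameter, and metric are invented for the example, not taken from the book:

    import mlflow

    # Hypothetical experiment name for this sketch
    mlflow.set_experiment("demo-experiment")

    with mlflow.start_run():
        # Log a hyperparameter and a resulting metric for this run
        mlflow.log_param("max_depth", 5)
        mlflow.log_metric("rmse", 0.27)

Each run's parameters and metrics are then browsable in the MLflow tracking UI, which is the core of the lifecycle workflow the book covers.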

Graph Data Science with Neo4j

"Graph Data Science with Neo4j" teaches you how to utilize Neo4j 5 and its Graph Data Science Library 2.0 for analyzing and making predictions with graph data. By integrating graph algorithms into actionable machine learning pipelines using Python, you'll harness the power of graph-based data models. What this Book will help me do Query and manipulate graph data using Cypher in Neo4j. Design and implement graph datasets using your data and public sources. Utilize graph-specific algorithms for tasks such as link prediction. Integrate graph data science pipelines into machine learning projects. Understand and apply predictive modeling using the GDS Library. Author(s) None Scifo, the author of "Graph Data Science with Neo4j," is an experienced data scientist with expertise in graph databases and advanced machine learning techniques. Their technical approach combines practical implementation with clear, step-by-step guidance to provide readers the skills they need to excel. Who is it for? This book is ideal for data scientists and analysts familiar with basic Neo4j concepts and Python-based data science workflows who wish to deepen their skills in graph algorithms and machine learning integration. It is particularly suited for professionals aiming to advance their expertise in graph data science for practical applications.

SAP S/4HANA Financial Accounting Configuration: Learn Configuration and Development on an S/4 System

Upgrade your knowledge to learn S/4HANA, the latest version of the SAP ERP system, with its built-in intelligent technologies, including AI, machine learning, and advanced analytics. Since the first edition of this book published as SAP ERP Financial and Controlling: Configuration and Use Management, the perspective has changed significantly: S/4HANA now comes with new features, such as Fiori (the new GUI), which focuses on flexible app-style development and interactivity with mobile phones. It also has a universal journal, which helps in data integration in a single location, such as centralized processing, and is faster than ECC. It merges FI and CO efficiently, which enables document posting in the Controlling area setup. General Ledger Accounts (FI) and Cost Elements (CO) are mapped together in a way that cost elements (both primary and secondary) are part of G/L accounts. And a mandatory setup of customer-vendor integration with business partners is included, versus the earlier ECC approach with separate vendor master and customer master.

This updated edition presents new features in SAP S/4HANA, with in-depth coverage of the FI syllabus in SAP S/4HANA. A practical and hands-on approach includes scenarios with real-life examples and practical illustrations. There is no unnecessary jargon in this configuration and end-user manual.

What You Will Learn

- Configure SAP FI as a pro in S/4
- Master core aspects of Financial Accounting and Controlling
- Integrate SAP Financial with other SAP modules
- Gain thorough hands-on experience with IMG (Implementation Guide)
- Understand and explain the functionalities of SAP FI

Who This Book Is For
FI consultants, trainers, developers, accountants, and SAP FI support organizations will find the book an excellent reference guide. Beginners without prior FI configuration experience will find the step-by-step illustrations practical and great hands-on experience.

Optimized Inferencing and Integration with AI on IBM zSystems: Introduction, Methodology, and Use Cases

In today's fast-paced, ever-growing digital world, you face various new and complex business problems. To help resolve these problems, enterprises are embedding artificial intelligence (AI) into their mission-critical business processes and applications to help improve operations, optimize performance, personalize the user experience, and differentiate themselves from the competition. Furthermore, the use of AI on the IBM® zSystems platform, where your mission-critical transactions, data, and applications are installed, is a key aspect of modernizing business-critical applications while maintaining strict service-level agreements (SLAs) and security requirements. This colocation of data and AI empowers your enterprise to optimally and easily deploy and infuse AI capabilities into your enterprise workloads with the most recent and relevant data available in real time, which enables a more transparent, accurate, and dependable AI experience.

This IBM Redpaper publication introduces and explains AI technologies and hardware optimizations, such as the IBM zSystems Integrated Accelerator for AI, and demonstrates how to leverage certain capabilities and components to enable solutions in business-critical use cases, such as fraud detection and credit risk scoring on the platform. Real-time inferencing with AI models, a capability that is critical to certain industries and use cases such as fraud detection, can now be implemented with optimized performance thanks to innovations like the IBM zSystems Integrated Accelerator for AI embedded in the Telum chip within IBM z16™.

This publication also describes and demonstrates the implementation and integration of the two end-to-end solutions (fraud detection and credit risk), from developing and training the AI models, to deploying the models in an IBM z/OS® V2R5 environment on IBM z16 hardware, to integrating AI functions into an application, for example an IBM z/OS Customer Information Control System (IBM CICS®) application. We describe performance optimization recommendations and considerations when leveraging AI technology on the IBM zSystems platform, including optimizations for micro-batching in IBM Watson® Machine Learning for z/OS (WMLz).

The benefits that are derived from the solutions are also described in detail, including how the open-source AI framework portability of the IBM zSystems platform enables model development and training to be done anywhere, including on IBM zSystems, and the ability to easily integrate to deploy on IBM zSystems for optimal inferencing. You can uncover insights at the transaction level while taking advantage of the speed, depth, and securability of the platform.

This publication is intended for technical specialists, site reliability engineers, architects, system programmers, and systems engineers. Technologies that are covered include TensorFlow Serving, WMLz, IBM Cloud Pak® for Data (CP4D), IBM z/OS Container Extensions (zCX), IBM Customer Information Control System (IBM CICS), Open Neural Network Exchange (ONNX), and IBM Deep Learning Compiler (zDLC).

Neural Search - From Prototype to Production with Jina

Dive into the world of modern search systems with "Neural Search - From Prototype to Production with Jina." This book introduces you to the fundamentals of neural search, exploring how machine learning revolutionizes information retrieval. You'll gain hands-on experience building versatile, scalable search engines using Jina, unraveling the complexities of AI-powered search.

What this Book will help me do

- Understand the basics of neural search compared to traditional search methods.
- Develop mastery of vector representation and its application in neural search.
- Learn to utilize Jina for constructing AI-powered search engines.
- Enhance your capabilities to handle multi-modal search systems spanning text, images, and audio.
- Acquire the skills to deploy and optimize deep learning-powered search systems effectively.

Author(s)
Bo Wang, Cristian Mitroi, Feng Wang, Shubham Saboo, and Susana Guzmán are experienced technologists and AI researchers passionate about simplifying complex subjects like neural search. With their expertise in Jina and deep learning, their collaborative approach ensures practical, reader-friendly content that empowers learners to excel in creating cutting-edge search systems.

Who is it for?
This book is perfect for machine learning, AI, or Python developers eager to advance their understanding of neural search. Whether you're building text, image, or other modality-based search systems, it caters to beginners with foundational knowledge and extends to professionals wanting to deepen their skills. Unlock the potential of Jina for your projects.
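
Jina's own API has changed across versions, so rather than guess at it, here is a framework-free NumPy sketch of the core idea the book builds on: represent documents and queries as vectors and rank by similarity. The embeddings below are made up; a real system would produce them with a neural encoder:

    import numpy as np

    # Toy document embeddings (in practice, output of a neural encoder)
    docs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
    query = np.array([0.8, 0.2])

    # Cosine similarity between the query and each document
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))

    # Indices of documents, best match first
    ranking = np.argsort(-sims)
    print(ranking)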

Serverless ETL and Analytics with AWS Glue

Discover how to harness AWS Glue for your ETL and data analysis workflows with "Serverless ETL and Analytics with AWS Glue." This comprehensive guide introduces readers to the capabilities of AWS Glue, from building data lakes to performing advanced ETL tasks, allowing you to create efficient, secure, and scalable data pipelines with serverless technology.

What this Book will help me do

- Understand and utilize various AWS Glue features for data lake and ETL pipeline creation.
- Leverage AWS Glue Studio and DataBrew for intuitive data preparation workflows.
- Implement effective storage optimization techniques for enhanced data analytics.
- Apply robust data security measures, including encryption and access control, to protect data.
- Integrate AWS Glue with machine learning tools like SageMaker to build intelligent models.

Author(s)
The authors of this book include experts across the fields of data engineering and AWS technologies. With backgrounds in data analytics, software development, and cloud architecture, they bring a depth of practical experience. Their approach combines hands-on tutorials with conceptual clarity, ensuring a blend of foundational knowledge and actionable insights.

Who is it for?
This book is designed for ETL developers, data engineers, and data analysts who are familiar with data management concepts and want to extend their skills into serverless cloud solutions. If you're looking to master AWS Glue for building scalable and efficient ETL pipelines or are transitioning existing systems to the cloud, this book is ideal for you.
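
As a hedged illustration of the kind of Glue ETL script the book walks through, the skeleton below uses the awsglue library that is available inside a Glue job runtime; the catalog database and table names are placeholders, and the filter is invented for the example:

    from pyspark.context import SparkContext
    from awsglue.context import GlueContext

    # GlueContext wraps a SparkContext inside a Glue job runtime
    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read a table registered in the Glue Data Catalog (placeholder names)
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="my_database", table_name="my_table"
    )

    # Convert to a Spark DataFrame for standard transformations
    df = dyf.toDF().filter("year = '2022'")
    df.show()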

Simplifying Data Engineering and Analytics with Delta

This book will guide you through mastering Delta, a robust and versatile protocol for data engineering and analytics. You'll discover how Delta simplifies data workflows, supports both batch and streaming data, and is optimized for analytics applications in various industries. By the end, you will know how to create high-performing, analytics-ready data pipelines.

What this Book will help me do

- Understand Delta's unique offering for unifying batch and streaming data processing.
- Learn approaches to address data governance, reliability, and scalability challenges.
- Gain technical expertise in building data pipelines optimized for analytics and machine learning use cases.
- Master core concepts like data modeling, distributed computing, and Delta's schema evolution features.
- Develop and deploy production-grade data engineering solutions leveraging Delta for business intelligence.

Author(s)
Anindita Mahapatra is an experienced data engineer and author with years of expertise in working on Delta and data-driven solutions. Her hands-on approach to explaining complex data concepts makes this book an invaluable resource for professionals in data engineering and analytics.

Who is it for?
Ideal for data engineers, data analysts, and anyone involved in AI/BI workflows, this book suits learners with some basic knowledge of SQL and Python. Whether you're an experienced professional or looking to upgrade your skills with Delta, this book will provide practical insights and actionable knowledge.
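
One concrete way Delta unifies batch and streaming, sketched in PySpark below: the same table path can be read either as a point-in-time snapshot or as an incremental stream. The path is hypothetical, and `spark` is assumed to be a SparkSession configured with Delta Lake support:

    # Assumes `spark` is a SparkSession with Delta Lake configured
    path = "/tmp/events"  # hypothetical Delta table location

    # Batch read: a point-in-time snapshot of the table
    batch_df = spark.read.format("delta").load(path)

    # Streaming read: the same table consumed incrementally as it changes
    stream_df = spark.readStream.format("delta").load(path)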

The Azure Data Lakehouse Toolkit: Building and Scaling Data Lakehouses on Azure with Delta Lake, Apache Spark, Databricks, Synapse Analytics, and Snowflake

Design and implement a modern data lakehouse on the Azure Data Platform using Delta Lake, Apache Spark, Azure Databricks, Azure Synapse Analytics, and Snowflake. This book teaches you the intricate details of the Data Lakehouse Paradigm and how to efficiently design a cloud-based data lakehouse using highly performant and cutting-edge Apache Spark capabilities in Azure Databricks, Azure Synapse Analytics, and Snowflake. You will learn to write efficient PySpark code for batch and streaming ELT jobs on Azure. And you will follow along with practical, scenario-based examples showing how to apply the capabilities of Delta Lake and Apache Spark to optimize performance, and to secure, share, and manage a high volume, high velocity, and high variety of data in your lakehouse with ease.

The patterns of success that you acquire from reading this book will help you hone your skills to build high-performing and scalable ACID-compliant lakehouses using flexible and cost-efficient decoupled storage and compute capabilities. Extensive coverage of Delta Lake ensures that you are aware of and can benefit from all that this new, open-source storage layer can offer. In addition to the deep examples on Databricks in the book, there is coverage of alternative platforms such as Synapse Analytics and Snowflake so that you can make the right platform choice for your needs. After reading this book, you will be able to implement Delta Lake capabilities, including Schema Evolution, Change Feed, Live Tables, Sharing, and Clones, to enable better business intelligence and advanced analytics on your data within the Azure Data Platform.

What You Will Learn

- Implement the Data Lakehouse Paradigm on Microsoft's Azure cloud platform
- Benefit from the new Delta Lake open-source storage layer for data lakehouses
- Take advantage of schema evolution, change feeds, live tables, and more
- Write functional PySpark code for data lakehouse ELT jobs
- Optimize Apache Spark performance through partitioning, indexing, and other tuning options
- Choose between alternatives such as Databricks, Synapse Analytics, and Snowflake

Who This Book Is For
Data, analytics, and AI professionals at all levels, including data architect and data engineer practitioners. Also for data professionals seeking patterns of success by which to remain relevant as they learn to build scalable data lakehouses for their organizations and customers who are migrating into the modern Azure Data Platform.
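
As an illustration of the schema evolution capability mentioned above, Delta Lake can evolve a table's schema during a write when asked to merge it. A minimal hedged sketch follows; the table path and rows are invented, and `spark` is assumed to be a Delta-enabled SparkSession:

    # Assumes `spark` is a SparkSession with Delta Lake configured
    new_rows = spark.createDataFrame(
        [("a1", 10, "web")], ["id", "amount", "channel"]  # "channel" is a new column
    )

    # mergeSchema lets the append add the new column to the existing table schema
    (new_rows.write.format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .save("/mnt/lakehouse/sales"))  # hypothetical table path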

Tidy Modeling with R

Get going with tidymodels, a collection of R packages for modeling and machine learning. Whether you're just starting out or have years of experience with modeling, this practical introduction shows data analysts, business analysts, and data scientists how the tidymodels framework offers a consistent, flexible approach for your work. RStudio engineers Max Kuhn and Julia Silge demonstrate ways to create models by focusing on an R dialect called the tidyverse. Software that adopts tidyverse principles shares both a high-level design philosophy and low-level grammar and data structures, so learning one piece of the ecosystem makes it easier to learn the next. You'll understand why the tidymodels framework has been built to be used by a broad range of people.

With this book, you will:

- Learn the steps necessary to build a model from beginning to end
- Understand how to use different modeling and feature engineering approaches fluently
- Examine the options for avoiding common pitfalls of modeling, such as overfitting
- Learn practical methods to prepare your data for modeling
- Tune models for optimal performance
- Use good statistical practices to compare, evaluate, and choose among models

Data Engineering with Alteryx

Dive into "Data Engineering with Alteryx" to master the principles of DataOps while learning to build robust data pipelines using Alteryx. This book guides you through key practices to enhance data pipeline reliability, efficiency, and accessibility, making it an essential resource for modern data professionals.

What this Book will help me do

- Understand and implement DataOps practices within Alteryx workflows.
- Design and develop data pipelines with Alteryx Designer for efficient data processing.
- Learn to manage and publish pipelines using Alteryx Server and Alteryx Connect.
- Gain advanced skills in Alteryx for handling spatial analytics and machine learning.
- Master techniques to monitor, secure, and optimize data workflows and access.

Author(s)
Paul Houghton is an experienced data engineer and author specializing in data engineering and DataOps. With extensive experience using Alteryx tools and workflows, Paul has a passion for teaching and sharing his knowledge through clear and practical guidance. His hands-on approach ensures readers successfully navigate and apply technical concepts to real-world projects.

Who is it for?
This book is ideal for data engineers, data scientists, and data analysts aiming to build reliable data pipelines with Alteryx. You do not need prior experience with Alteryx, but familiarity with data workflows will enhance your learning experience. If you're focused on aligning with DataOps methodologies, this book is tailored for you.

Ten Things to Know About ModelOps

The past few years have seen significant developments in data science, AI, machine learning, and advanced analytics. But the wider adoption of these technologies has also brought greater cost, risk, regulation, and demands on organizational processes, tasks, and teams. This report explains how ModelOps can provide both technical and operational solutions to these problems. Thomas Hill, Mark Palmer, and Larry Derany summarize important considerations, caveats, choices, and best practices to help you be successful with operationalizing AI/ML and analytics in general. Whether your organization is already working with teams on AI and ML, or just getting started, this report presents ten important dimensions of analytic practice and ModelOps that are not widely discussed, or perhaps even known.

In part, this report examines:

- Why ModelOps is the enterprise "operating system" for AI/ML algorithms
- How to build your organization's IP secret sauce through repeatable processing steps
- How to anticipate risks rather than react to damage done
- How ModelOps can help you deliver the many algorithms and model formats available
- How to plan for success and monitor for value, not just accuracy
- Why AI will soon be regulated and how ModelOps helps ensure compliance

Advanced Analytics with PySpark

The amount of data being generated today is staggering and growing. Apache Spark has emerged as the de facto tool to analyze big data and is now a critical part of the data science toolbox. Updated for Spark 3.0, this practical guide brings together Spark, statistical methods, and real-world datasets to teach you how to approach analytics problems using PySpark, Spark's Python API, and other best practices in Spark programming. Data scientists Akash Tandon, Sandy Ryza, Uri Laserson, Sean Owen, and Josh Wills offer an introduction to the Spark ecosystem, then dive into patterns that apply common techniques (classification, clustering, collaborative filtering, and anomaly detection) to fields such as genomics, security, and finance. This updated edition also covers NLP and image processing. If you have a basic understanding of machine learning and statistics and you program in Python, this book will get you started with large-scale data analysis.

- Familiarize yourself with Spark's programming model and ecosystem
- Learn general approaches in data science
- Examine complete implementations that analyze large public datasets
- Discover which machine learning tools make sense for particular problems
- Explore code that can be adapted to many uses
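
To give a flavor of the clustering pattern covered in the book, here is a minimal PySpark MLlib sketch on made-up data (not an example from the book itself):

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("clustering-sketch").getOrCreate()

    # Two obvious clusters of 2-D points, invented for the example
    df = spark.createDataFrame(
        [(1.0, 1.1), (0.9, 1.0), (8.0, 8.2), (8.1, 7.9)], ["x", "y"]
    )

    # MLlib estimators expect a single vector column of features
    features = VectorAssembler(inputCols=["x", "y"], outputCol="features").transform(df)

    model = KMeans(k=2, seed=42).fit(features)
    print(model.clusterCenters())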

Essential Math for Data Science

Master the math needed to excel in data science, machine learning, and statistics. In this book, author Thomas Nield guides you through areas like calculus, probability, linear algebra, and statistics, and shows how they apply to techniques like linear regression, logistic regression, and neural networks. Along the way, you'll also gain practical insights into the state of data science and how to use those insights to maximize your career.

Learn how to:

- Use Python code and libraries like SymPy, NumPy, and scikit-learn to explore essential mathematical concepts like calculus, linear algebra, statistics, and machine learning
- Understand techniques like linear regression, logistic regression, and neural networks in plain English, with minimal mathematical notation and jargon
- Perform descriptive statistics and hypothesis testing on a dataset to interpret p-values and statistical significance
- Manipulate vectors and matrices and perform matrix decomposition
- Integrate and build upon incremental knowledge of calculus, probability, statistics, and linear algebra, and apply it to regression models including neural networks
- Navigate practically through a data science career and avoid common pitfalls, assumptions, and biases while tuning your skill set to stand out in the job market
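
To illustrate the kind of library-driven math exploration described above, here is a small sketch using SymPy for calculus and NumPy for linear algebra; the function and matrix are arbitrary examples, not taken from the book:

    import numpy as np
    import sympy as sp

    # Calculus with SymPy: differentiate f(x) = x**2 symbolically
    x = sp.Symbol("x")
    print(sp.diff(x**2, x))  # prints 2*x

    # Linear algebra with NumPy: eigendecomposition of a small symmetric matrix
    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)  # [3. 1.]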

Designing Machine Learning Systems

Machine learning systems are both complex and unique. Complex because they consist of many different components and involve many different stakeholders. Unique because they're data dependent, with data varying wildly from one use case to the next. In this book, you'll learn a holistic approach to designing ML systems that are reliable, scalable, maintainable, and adaptive to changing environments and business requirements.

Author Chip Huyen, co-founder of Claypot AI, considers each design decision (such as how to process and create training data, which features to use, how often to retrain models, and what to monitor) in the context of how it can help your system as a whole achieve its objectives. The iterative framework in this book uses actual case studies backed by ample references.

This book will help you tackle scenarios such as:

- Engineering data and choosing the right metrics to solve a business problem
- Automating the process for continually developing, evaluating, deploying, and updating models
- Developing a monitoring system to quickly detect and address issues your models might encounter in production
- Architecting an ML platform that serves across use cases
- Developing responsible ML systems

Data Algorithms with Spark

Apache Spark's speed, ease of use, sophisticated analytics, and multilanguage support make practical knowledge of this cluster-computing framework a required skill for data engineers and data scientists. With this hands-on guide, anyone looking for an introduction to Spark will learn practical algorithms and examples using PySpark. In each chapter, author Mahmoud Parsian shows you how to solve a data problem with a set of Spark transformations and algorithms. You'll learn how to tackle problems involving ETL, design patterns, machine learning algorithms, data partitioning, and genomics analysis. Each detailed recipe includes PySpark algorithms using the PySpark driver and shell script.

With this book, you will:

- Learn how to select Spark transformations for optimized solutions
- Explore powerful transformations and reductions including reduceByKey(), combineByKey(), and mapPartitions()
- Understand data partitioning for optimized queries
- Build and apply a model using PySpark design patterns
- Apply motif-finding algorithms to graph data
- Analyze graph data by using the GraphFrames API
- Apply PySpark algorithms to clinical and genomics data
- Learn how to use and apply feature engineering in ML algorithms
- Understand and use practical and pragmatic data design patterns
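
As a quick illustration of the transformations named above, the sketch below sums values per key with reduceByKey(); the data is invented for the example:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd-sketch").getOrCreate()
    sc = spark.sparkContext

    # Toy (key, value) pairs
    pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])

    # reduceByKey() combines values per key on each partition before shuffling,
    # which is what makes it cheaper than a naive groupBy-then-sum
    totals = pairs.reduceByKey(lambda acc, v: acc + v)
    print(sorted(totals.collect()))  # [('a', 4), ('b', 2)]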

Data Engineering with Google Cloud Platform

In "Data Engineering with Google Cloud Platform", you'll explore how to construct efficient, scalable data pipelines using GCP services. This hands-on guide covers everything from building data warehouses to deploying machine learning pipelines, helping you master GCP's ecosystem.

What this Book will help me do

- Build comprehensive data ingestion and transformation pipelines using BigQuery, Cloud Storage, and Dataflow.
- Design end-to-end orchestration flows with Airflow and Cloud Composer for automated data processing.
- Leverage Pub/Sub for building real-time event-driven systems and streaming architectures.
- Gain skills to design and manage secure data systems with IAM and governance strategies.
- Prepare for and pass the Professional Data Engineer certification exam to elevate your career.

Author(s)
Adi Wijaya is a seasoned data engineer with significant experience in Google Cloud Platform products and services. His expertise in building data systems has equipped him with insights into the real-world challenges data engineers face. Adi aims to demystify technical topics and deliver practical knowledge through his writing, helping tech professionals excel.

Who is it for?
This book is tailored for data engineers and data analysts who want to leverage GCP for building efficient and scalable data systems. Readers should have a beginner-level understanding of topics like data science, Python, and Linux to fully benefit from the material. It is also suitable for individuals preparing for the Google Professional Data Engineer exam. The book is a practical companion for enhancing cloud and data engineering skills.
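
For instance, a first BigQuery query from Python might look like the following minimal sketch; the project, dataset, and table names are placeholders, and the client assumes ambient GCP credentials are configured:

    from google.cloud import bigquery

    client = bigquery.Client()  # picks up ambient GCP credentials

    # Placeholder project, dataset, and table names
    query = """
        SELECT status, COUNT(*) AS n
        FROM `my_project.my_dataset.orders`
        GROUP BY status
    """

    # Run the query and iterate over the result rows
    for row in client.query(query).result():
        print(row["status"], row["n"])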

Modern Data Engineering with Apache Spark: A Hands-On Guide for Building Mission-Critical Streaming Applications

Leverage Apache Spark within a modern data engineering ecosystem. This hands-on guide will teach you how to write fully functional applications, follow industry best practices, and learn the rationale behind these decisions. With Apache Spark as the foundation, you will follow a step-by-step journey beginning with the basics of data ingestion, processing, and transformation, and ending up with an entire local data platform running Apache Spark, Apache Zeppelin, Apache Kafka, Redis, MySQL, Minio (S3), and Apache Airflow.

Apache Spark applications solve a wide range of data problems, from traditional data loading and processing to rich SQL-based analysis as well as complex machine learning workloads and even near-real-time processing of streaming data. Spark fits well as a central foundation for any data engineering workload. This book will teach you to write interactive Spark applications using Apache Zeppelin notebooks, write and compile reusable applications and modules, and fully test both batch and streaming. You will also learn to containerize your applications using Docker and to run and deploy your Spark applications using a variety of tools such as Apache Airflow, Docker, and Kubernetes.

Reading this book will empower you to take advantage of Apache Spark to optimize your data pipelines and teach you to craft modular and testable Spark applications. You will create and deploy mission-critical streaming Spark applications in a low-stress environment that paves the way for your own path to production.

What You Will Learn

- Simplify data transformation with Spark Pipelines and Spark SQL
- Bridge data engineering with machine learning
- Architect modular data pipeline applications
- Build reusable application components and libraries
- Containerize your Spark applications for consistency and reliability
- Use Docker and Kubernetes to deploy your Spark applications
- Speed up application experimentation using Apache Zeppelin and Docker
- Understand serializable structured data and data contracts
- Harness effective strategies for optimizing data in your data lakes
- Build end-to-end Spark structured streaming applications using Redis and Apache Kafka
- Embrace testing for your batch and streaming applications
- Deploy and monitor your Spark applications

Who This Book Is For
Professional software engineers who want to take their current skills and apply them to new and exciting opportunities within the data ecosystem, practicing data engineers who are looking for a guiding light while traversing the many challenges of moving from batch to streaming modes, data architects who wish to provide clear and concise direction for how best to harness and use Apache Spark within their organization, and those interested in the ins and outs of becoming a modern data engineer in today's fast-paced and data-hungry world.
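
As a small taste of the structured streaming work described above, here is a minimal sketch of reading a Kafka topic with Spark Structured Streaming; the broker address and topic name are invented, and `spark` is assumed to be a SparkSession with the Spark-Kafka connector on its classpath:

    # Assumes `spark` is a SparkSession with the Spark-Kafka connector available
    stream = (spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
        .option("subscribe", "events")                        # hypothetical topic
        .load())

    # Kafka delivers raw bytes; cast the payload to a string before parsing
    parsed = stream.selectExpr("CAST(value AS STRING) AS json")

    # Print each micro-batch to the console and run until stopped
    query = (parsed.writeStream.format("console")
        .outputMode("append")
        .start())
    query.awaitTermination()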