Topic: data (3406 tagged activities)

Activity Trend: 2020-Q1 to 2026-Q1 (peak 3 per quarter)

Activities

Showing filtered results

Filtering by: O'Reilly Data Engineering Books
Mastering PostgreSQL 17 - Sixth Edition

Mastering PostgreSQL 17 is your guide to becoming a skilled PostgreSQL database administrator. Learn the latest in PostgreSQL 17, including deployment strategies, optimization approaches, and techniques for secure and high-performing database environments. This book equips you with best practices and actionable steps to elevate your PostgreSQL expertise to a professional level.

What this Book will help me do
- Deploy and manage PostgreSQL 17 databases effectively in production environments.
- Utilize advanced optimization techniques to ensure queries run efficiently.
- Implement robust security measures, including encryption and access control.
- Master database recovery strategies, backups, and replication.
- Troubleshoot real-world PostgreSQL database issues and performance bottlenecks.

Author(s)
Hans-Jürgen Schönig is a PostgreSQL expert with over 25 years of hands-on experience as a user and consultant. As the CEO of CYBERTEC PostgreSQL International GmbH, he has contributed extensively to the PostgreSQL community, helping clients worldwide. Hans brings a practical, solutions-focused approach to database administration drawn from supporting countless mission-critical environments.

Who is it for?
System and database administrators aiming to enhance their PostgreSQL expertise will find this book invaluable. It's also targeted at developers familiar with basic database concepts who are seeking to deepen their understanding of PostgreSQL optimization and advanced features. Prior experience with SQL and database management is recommended. Ideal for IT professionals managing production database systems.
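As a taste of the query-tuning workflow described above, here is a minimal, hypothetical sketch in Python: the orders table, connection string, and index name are invented, and it assumes the psycopg2 driver and a reachable database.

```python
import psycopg2  # third-party driver: pip install psycopg2-binary

# Hypothetical connection string and table; adjust for your environment.
conn = psycopg2.connect("dbname=shop user=postgres")
with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE runs the query and reports the actual plan and timings.
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,)
    )
    for (line,) in cur.fetchall():
        print(line)

    # If the plan shows a sequential scan on a selective predicate,
    # an index is often the first optimization to try.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)"
    )
conn.close()
```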

Snowflake Recipes: A Problem-Solution Approach to Implementing Modern Data Pipelines

Explore Snowflake’s core concepts and the unique features that differentiate it from industry competitors such as Azure Synapse and Google BigQuery. This book provides recipes for architecting and developing modern data pipelines on the Snowflake data platform by employing progressive techniques, agile practices, and repeatable strategies. You’ll walk through step-by-step instructions in ready-to-use recipes covering a wide range of the latest development topics, then build scalable development pipelines and solve scenarios common to all modern data platforms, such as data masking, object tagging, data monetization, and security best practices. Throughout the book you’ll work with code samples for Amazon Web Services, Microsoft Azure, and Google Cloud Platform. There’s also a chapter devoted to solving machine learning problems with Snowflake.

Authors
Dillon Dayton and John Eipe are both Snowflake SnowPro Core certified, specialize in data and digital services, and understand the challenges of finding the right solution to complex problems. The recipes in this book are based on real-world use cases and examples designed to help you deliver quality, performant, and secure data to support business initiatives.

What You’ll Learn
- Handle structured and unstructured data in Snowflake.
- Apply best practices and different options for data transformation.
- Understand data application development.
- Implement data sharing, data governance, and security.

Who This Book Is For
Data engineers, scientists, and analysts moving into Snowflake and looking to build data apps. This book expects basic knowledge of the cloud (AWS, Azure, or GCP), SQL, and Python.
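To give a flavor of the data-masking recipes mentioned above, here is a hedged sketch using the Snowflake Python connector; the account credentials, table, column, role, and policy names are all invented for illustration.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical credentials; fill in for your account.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...", database="DEMO"
)
cur = conn.cursor()

# A masking policy hides email addresses from everyone except ANALYST.
cur.execute("""
    CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
    RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() = 'ANALYST' THEN val ELSE '***MASKED***' END
""")

# Attach the policy to a column; other roles now see masked values.
cur.execute(
    "ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask"
)
cur.close()
conn.close()
```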

Snowflake Data Engineering

A practical introduction to data engineering on the powerful Snowflake cloud data platform. Data engineers create the pipelines that ingest raw data, transform it, and funnel it to the analysts and professionals who need it. The Snowflake cloud data platform provides a suite of productivity-focused tools and features that simplify building and maintaining data pipelines. In Snowflake Data Engineering, Snowflake Data Superhero Maja Ferle shows you how to get started.

In Snowflake Data Engineering you will learn how to:
- Ingest data into Snowflake from both cloud and local file systems
- Transform data using functions, stored procedures, and SQL
- Orchestrate data pipelines with streams and tasks, and monitor their execution
- Use Snowpark to run Python code in your pipelines
- Deploy Snowflake objects and code using continuous integration principles
- Optimize performance and costs when ingesting data into Snowflake

Snowflake Data Engineering reveals how Snowflake makes it easy to work with unstructured data, set up continuous ingestion with Snowpipe, and keep your data safe and secure with best-in-class data governance features. Along the way, you’ll practice the most important data engineering tasks as you work through relevant hands-on examples. Throughout, author Maja Ferle shares design tips drawn from her years of experience to ensure your pipeline follows the best practices of software engineering, security, and data governance.

About the Technology
Pipelines that ingest and transform raw data are the lifeblood of business analytics, and data engineers rely on Snowflake to help them deliver those pipelines efficiently. Snowflake is a full-service cloud-based platform that provides near-infinite storage, fast elastic compute services, and inbuilt AI/ML capabilities like vector search, text-to-SQL, and code generation. This book gives you what you need to create effective data pipelines on the Snowflake platform.

About the Book
Snowflake Data Engineering guides you skill by skill through accomplishing on-the-job data engineering tasks using Snowflake. You’ll start by building your first simple pipeline and then expand it by adding increasingly powerful features, including data governance and security, CI/CD, and even augmenting data with generative AI. You’ll be amazed how far you can go in just a few short chapters!

What's Inside
- Ingest data from the cloud, APIs, or Snowflake Marketplace
- Orchestrate data pipelines with streams and tasks
- Optimize performance and cost

About the Reader
For software developers and data analysts. Readers should know the basics of SQL and the cloud.

About the Author
Maja Ferle is a Snowflake Subject Matter Expert and a Snowflake Data Superhero who holds the SnowPro Advanced Data Engineer and SnowPro Advanced Data Analyst certifications.

Quotes
An incredible guide for going from zero to production with Snowflake. - Doyle Turner, Microsoft
A must-have if you’re looking to excel in the field of data engineering. - Isabella Renzetti, Data Analytics Consultant & Trainer
Masterful! Unlocks the true potential of Snowflake for modern data engineers. - Shankar Narayanan, Microsoft
Valuable insights will enhance your data engineering skills and lead to cost-effective solutions. A must read! - Frédéric L’Anglais, Maxa
Comprehensive, up-to-date, and packed with real-life code examples. - Albert Nogués, Danone
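The streams-and-tasks orchestration listed above can be sketched in a few statements. This is a minimal example via the Snowflake Python connector, with invented table, stream, task, and warehouse names.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="..."
)
cur = conn.cursor()

# A stream records row-level changes (CDC) on the source table.
cur.execute("CREATE STREAM IF NOT EXISTS raw_orders_stream ON TABLE raw_orders")

# A task wakes on a schedule and moves any new changes downstream.
cur.execute("""
    CREATE TASK IF NOT EXISTS load_orders
      WAREHOUSE = transform_wh
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('raw_orders_stream')
    AS
      INSERT INTO orders_clean
      SELECT order_id, amount FROM raw_orders_stream
""")
cur.execute("ALTER TASK load_orders RESUME")  # tasks are created suspended
cur.close()
conn.close()
```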

PostgreSQL Skills Development on Cloud: A Practical Guide to Database Management with AWS and Azure

This book provides a comprehensive approach to managing PostgreSQL cluster databases on Amazon Web Services and Microsoft Azure in the cloud, as well as in Docker and container environments on a Red Hat operating system. Detailed references for managing PostgreSQL on both Windows and Mac are also provided. The book condenses all the fundamental and essential concepts you need to manage a PostgreSQL cluster into a one-stop guide that is perfect for newcomers to Postgres database administration. Each chapter provides historical context, documents version changes of the PostgreSQL cluster, elucidates practical "how-to" methods, and includes illustrations, key word definitions, practices for application, a summary of key learnings, and questions to reinforce understanding. The book also outlines a clear study objective with a weekly learning schedule and hundreds of practice exercises, along with questions and answers. With its comprehensive and practical approach, this book will help you gain the confidence to manage all aspects of a PostgreSQL cluster in critical production environments so you can better support your organization's database infrastructure on the cloud and in containers.

What You Will Learn
- Install and configure Postgres clusters on the cloud and in containers; monitor database logs, start and stop databases, troubleshoot, tune performance, back up and recover, and integrate with Amazon S3 and Azure Blob Storage
- Manage Postgres databases on Amazon Web Services and Microsoft Azure in the cloud, as well as in Docker and container environments on a Red Hat operating system
- Access sample references to scripting solutions and database management tools for working with Postgres, Redshift (based on Postgres 8.2), and Docker
- Create Amazon Machine Images (AMIs) and Azure images for managing a fleet of Postgres clusters on the cloud
- Reinforce knowledge with a weekly learning schedule and hundreds of practice exercises, along with questions and answers
- Progress from simple concepts, such as how to choose the correct instance type, to creating complex machine images
- Gain access to an Amazon AMI with a DBA admin tool, allowing you to learn Postgres, Redshift, and Docker in a cloud environment
- Refer to a comprehensive summary of the documentation for Postgres, Amazon Web Services, Microsoft Azure, and Red Hat Linux covering all aspects of Postgres cluster management on the cloud

Who This Book Is For
Newcomers to PostgreSQL database administration and cross-platform support, and DBAs looking to master PostgreSQL on the cloud.
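One recurring administration task the book covers, checking replication health on a cloud-hosted cluster, can be sketched briefly; the endpoint and credentials below are placeholders, and the psycopg2 driver is assumed.

```python
import psycopg2  # pip install psycopg2-binary

# Hypothetical RDS endpoint; the same check works on Azure or in a container.
conn = psycopg2.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",
    dbname="postgres", user="admin", password="..."
)
with conn, conn.cursor() as cur:
    # On the primary, pg_stat_replication lists connected standbys and lag.
    cur.execute("SELECT client_addr, state, replay_lag FROM pg_stat_replication")
    for row in cur.fetchall():
        print(row)
conn.close()
```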

AI Engineering

Recent breakthroughs in AI have not only increased demand for AI products, they've also lowered the barriers to entry for those who want to build AI products. The model-as-a-service approach has transformed AI from an esoteric discipline into a powerful development tool that anyone can use. Everyone, including those with minimal or no prior AI experience, can now leverage AI models to build applications.

In this book, author Chip Huyen discusses AI engineering: the process of building applications with readily available foundation models. The book starts with an overview of AI engineering, explaining how it differs from traditional ML engineering and discussing the new AI stack. The more AI is used, the more opportunities there are for catastrophic failures, and therefore, the more important evaluation becomes. This book discusses different approaches to evaluating open-ended models, including the rapidly growing AI-as-a-judge approach.

AI application developers will discover how to navigate the AI landscape, including models, datasets, evaluation benchmarks, and the seemingly infinite number of use cases and application patterns. You'll learn a framework for developing an AI application, starting with simple techniques and progressing toward more sophisticated methods, and discover how to efficiently deploy these applications.

- Understand what AI engineering is and how it differs from traditional machine learning engineering
- Learn the process for developing an AI application, the challenges at each step, and approaches to address them
- Explore various model adaptation techniques, including prompt engineering, RAG, fine-tuning, agents, and dataset engineering, and understand how and why they work
- Examine the bottlenecks for latency and cost when serving foundation models and learn how to overcome them
- Choose the right model, dataset, evaluation benchmarks, and metrics for your needs

Chip Huyen works to accelerate data analytics on GPUs at Voltron Data. Previously, she was with Snorkel AI and NVIDIA, founded an AI infrastructure startup, and taught Machine Learning Systems Design at Stanford. She's the author of Designing Machine Learning Systems, an Amazon bestseller in AI. AI Engineering builds upon and is complementary to Designing Machine Learning Systems (O'Reilly).
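The AI-as-a-judge approach mentioned above reduces to a simple pattern: ask a strong model to grade another model's open-ended output against a rubric. A minimal sketch, assuming the openai Python package, an API key in the environment, and an illustrative model name and rubric:

```python
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from env

client = OpenAI()

def judge(question: str, answer: str) -> str:
    """Ask a strong model to grade another model's open-ended answer."""
    rubric = (
        "You are an impartial judge. Score the ANSWER to the QUESTION "
        "from 1 to 5 for correctness and completeness. Reply with the "
        "score followed by a one-sentence justification."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model works
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": f"QUESTION: {question}\nANSWER: {answer}"},
        ],
    )
    return resp.choices[0].message.content

print(judge("What does ACID stand for?",
            "Atomicity, consistency, isolation, durability."))
```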

Hacking MySQL: Breaking, Optimizing, and Securing MySQL for Your Use Case

Your MySQL instances are probably broken. Many developers face slow-running queries, issues related to database architecture, replication, or database security, and that's only the beginning. This book delivers answers to your most pressing MySQL questions about performance, availability, and security by uncovering what causes databases to break in the first place.

At its core, this book gives you the knowledge to break your database instances so you can better optimize them for performance and secure them against data breaches. In other words, you'll discover the sorts of actions, minor and major, that degrade databases so you can fix and ultimately preempt them. MySQL sometimes acts according to its own rules, and this book will help you keep it working on your terms. At the same time, you will learn to optimize your backup and recovery procedures, determine when and which data to index for maximum performance, and choose the best MySQL configurations, among other essential skills. Most MySQL books focus exclusively on optimization, but this book argues that it's just as important to pay attention to the ways databases break. Indeed, after reading this book, you will be able to safely break your database instances to expose and overcome the nuanced issues that affect performance, availability, and security.

What You Will Learn
- Know the basics of MySQL and the storage engines InnoDB and MyISAM
- Spot the ways you are harming your database's performance, availability, and security without even realizing it
- Fix minor bugs and issues that have a surprisingly serious impact
- Optimize schema, data types, queries, indexes, and partitions to head off issues
- Understand key MySQL security strategies

Who This Book Is For
Database administrators, web developers, systems administrators, and security professionals with an intermediate knowledge of database management systems and of building applications in MySQL.
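The index-related tuning described above follows a simple loop: inspect the plan, spot the full scan, add the index. A hypothetical sketch with the MySQL Connector/Python driver; the schema and credentials are invented.

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical credentials and schema for illustration.
conn = mysql.connector.connect(user="root", password="...", database="shop")
cur = conn.cursor()

# EXPLAIN shows whether MySQL can use an index (the 'type' and 'key' columns).
cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
for row in cur.fetchall():
    print(row)

# A full table scan here (type = ALL) is one of the self-inflicted
# performance problems the book describes; an index fixes it.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
conn.commit()
cur.close()
conn.close()
```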

Applied Data Science Using PySpark: Learn the End-to-End Predictive Model-Building Cycle

This comprehensive guide, featuring hand-picked examples of daily use cases, will walk you through the end-to-end predictive model-building cycle using the latest techniques and industry tricks. Chapters 1, 2, and 3 begin by setting up the environment and covering the basics of PySpark, focusing on data manipulation. Chapter 4 delves into the art of variable selection, demonstrating the various techniques available in PySpark. Chapters 5, 6, and 7 explore machine learning algorithms, their implementations, and fine-tuning techniques. Chapters 8 and 9 guide you through machine learning pipelines and various methods to operationalize and serve models using Docker/APIs. Chapter 10 demonstrates how to unlock the power of predictive models to create a meaningful impact on your business, and Chapter 11 introduces some of the most widely used and powerful modeling frameworks for unlocking real value from data.

In this new edition, you will learn predictive modeling frameworks that can quantify customer lifetime value and estimate the return on your predictive modeling investments. This edition also includes methods to measure engagement and identify actionable populations for effective churn treatments. Additionally, a dedicated chapter on experimentation design has been added, covering steps to efficiently design, conduct, test, and measure the results of your models. All code examples have been updated to reflect the latest stable version of Spark.

You will:
- Gain an overview of end-to-end predictive model building
- Understand multiple variable selection techniques and their implementations
- Learn how to operationalize models
- Perform data science experiments and learn useful tips
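The assemble-features/fit/transform cycle at the heart of the model-building chapters looks roughly like this. A minimal sketch with an invented toy dataset, assuming a local PySpark installation:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("demo").getOrCreate()

# Tiny invented dataset: two features and a binary label.
df = spark.createDataFrame(
    [(1.0, 0.5, 1), (0.2, 0.1, 0), (0.9, 0.7, 1), (0.1, 0.3, 0)],
    ["f1", "f2", "label"],
)

# Assemble feature columns into a vector, then fit a classifier,
# the same assemble/fit/transform cycle the book walks through.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show()
```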

Data Engineering with AWS Cookbook

Data Engineering with AWS Cookbook is a comprehensive practical guide to building scalable and efficient data engineering solutions on AWS. With this book, you will master implementing data lakes, orchestrating data pipelines, and creating serving layers using AWS's robust services, such as Glue, EMR, Redshift, and Athena. Hands-on exercises and practical recipes will enhance your AWS-based data engineering projects.

What this Book will help me do
- Gain the skills to design centralized data lake solutions and manage them securely at scale.
- Develop expertise in crafting data pipelines with AWS's ETL technologies like Glue and EMR.
- Learn to implement and automate governance, orchestration, and monitoring for data platforms.
- Build high-performance data serving layers using AWS analytics tools like Redshift and QuickSight.
- Effectively plan and execute data migrations to AWS from on-premises infrastructure.

Author(s)
Trâm Ngọc Phạm, Gonzalo Herreros González, Viquar Khan, and Huda Nofal bring together years of collective experience in data engineering and AWS cloud solutions. Each author's deep knowledge and passion for cloud technology have shaped this book into a valuable resource geared towards practical learning and real-world application. Their approach ensures readers are not just learning but building tangible, impactful solutions.

Who is it for?
This book is geared towards data engineers and big data professionals working in or transitioning to cloud-based environments, specifically on AWS. Ideal readers are those looking to optimize workflows and master AWS tools to create scalable, efficient solutions. The content assumes basic familiarity with AWS concepts like IAM roles and the command line, ensuring all examples are accessible yet meaningful for those seeking advancement in AWS data engineering.
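As one small example of the serving-layer recipes described above, here is a hedged sketch that runs an Athena query with boto3; the database, table, and results bucket are invented, and configured AWS credentials are assumed.

```python
import boto3  # pip install boto3; assumes AWS credentials are configured

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database, table, and results bucket.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "sales_lake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("Query started:", resp["QueryExecutionId"])
```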

Managing Data as a Product

Discover how to transform your data architecture with the insights and techniques presented in Managing Data as a Product by Andrea Gioia. In this comprehensive guide, you'll explore how to design, implement, and maintain data-product-centered systems to meet modern demands, achieving scalable and sustainable data management tailored to your organization's needs.

What this Book will help me do
- Understand the principles of data-product-centered architectures and their advantages.
- Learn to design, develop, and operate data products in production settings.
- Explore strategies to manage the lifecycle of data products efficiently.
- Gain insights into team topologies and data ownership for distributed systems.
- Discover data modeling techniques for AI-ready architectures.

Author(s)
Andrea Gioia is a renowned data architect and the creator of the Open Data Mesh Initiative. With over 20 years of experience, Andrea has successfully led complex data projects and is passionate about sharing his expertise. His writing is practical and driven by real-world challenges, aiming to equip engineers with actionable knowledge.

Who is it for?
This book is ideal for data engineers, software architects, and engineering leaders involved in shaping innovative data architectures. If you have foundational knowledge of data engineering and are eager to advance your expertise by adopting data-product principles, this book will suit your needs. It is for professionals aiming to modernize and optimize their approach to organizational data management.

Prompt Engineering for LLMs

Large language models (LLMs) are revolutionizing the world, promising to automate tasks and solve complex problems. A new generation of software applications is using these models as building blocks to unlock new potential in almost every domain, but reliably accessing these capabilities requires new skills. This book will teach you the art and science of prompt engineering: the key to unlocking the true potential of LLMs. Industry experts John Berryman and Albert Ziegler share how to communicate effectively with AI, transforming your ideas into a language-model-friendly format. By learning both the philosophical foundation and practical techniques, you'll be equipped with the knowledge and confidence to build the next generation of LLM-powered applications.

- Understand LLM architecture and learn how to best interact with it
- Design a complete prompt-crafting strategy for an application
- Gather, triage, and present context elements to make an efficient prompt
- Master specific prompt-crafting techniques like few-shot learning, chain-of-thought prompting, and RAG
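The few-shot technique listed above is, at bottom, careful string assembly: instructions, worked examples, then the unanswered case. A self-contained sketch with an invented classification task:

```python
# Assemble a few-shot prompt: instructions, worked examples, then the query.
# The examples and task are invented; the structure is the point.
INSTRUCTIONS = "Classify the sentiment of each review as positive or negative."

FEW_SHOT_EXAMPLES = [
    ("The battery lasts all day, love it.", "positive"),
    ("Broke after two uses, avoid.", "negative"),
]

def build_prompt(review: str) -> str:
    parts = [INSTRUCTIONS, ""]
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Review: {text}\nSentiment: {label}\n")
    # End with the unanswered case so the model completes the pattern.
    parts.append(f"Review: {review}\nSentiment:")
    return "\n".join(parts)

print(build_prompt("Setup took five minutes and everything just worked."))
```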

Learn FileMaker Pro 2024: The Comprehensive Guide to Building Custom Databases

FileMaker Pro is a development platform from Claris International Inc., a subsidiary of Apple Inc. The software makes it easy for everyone to create powerful, multi-user, cross-platform, relational database applications. This book navigates the reader through the software in a clear and logical manner, with each chapter building on the previous one. After an initial review of the user environment and application basics, the book delves into a deep exploration of the integrated development environment, which seamlessly combines the full stack of schema, business logic, and interface layers into a unified visual programming experience. Everything beginners need to get started is covered, along with advanced material that seasoned professionals will appreciate. Written by a professional developer with decades of real-world experience, Learn FileMaker Pro 2024 is a comprehensive learning and reference guide. Join millions of users and developers worldwide in achieving a new level of workflow efficiency with FileMaker.

For This New Edition
This third edition includes clearer lessons and more examples, making it easier than ever to start planning, building, and deploying a custom database solution. It covers dozens of new and modified features introduced in versions 19.1 to 19.6, as well as the more recent 2023 (v20) and 2024 (v21) releases. Whatever your level of experience, this book has something new for you!

What You'll Learn
- Plan and create custom tables, fields, and relationships
- Write calculations using built-in and custom functions
- Build layouts with dynamic objects, themes, and custom menus
- Automate tasks with scripts and link them to objects and interface events
- Keep database files secure and healthy
- Integrate with external systems using ODBC, cURL, and the FM API
- Deploy solutions to share with desktop, iOS, and web clients
- Learn about summary reports, dynamic object references, and transactions
- Delve into artificial intelligence with CoreML, OpenAI, and Semantic Finds

Who This Book Is For
Hobbyist developers, professional consultants, and IT staff.
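The FM (FileMaker) Data API integration mentioned in the list above starts with a session login. A hedged sketch using Python's requests library; the host, file name, and account are placeholders.

```python
import requests  # pip install requests

# Hypothetical host and file name; the endpoint shape follows the
# FileMaker Data API (POST .../databases/{file}/sessions).
HOST = "https://fms.example.com"
resp = requests.post(
    f"{HOST}/fmi/data/v1/databases/Contacts/sessions",
    auth=("webuser", "secret"),  # a FileMaker account with fmrest privileges
    headers={"Content-Type": "application/json"},
    json={},
)
resp.raise_for_status()
token = resp.json()["response"]["token"]  # bearer token for subsequent calls
print("Session token:", token)
```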

Data-driven Models in Inverse Problems

Advances in learning-based methods are revolutionizing several fields in applied mathematics, including inverse problems, resulting in a major paradigm shift towards data-driven approaches. This volume, which is inspired by this cutting-edge area of research, brings together contributors from the inverse problem community and shows how to successfully combine model- and data-driven approaches to gain insight into practical and theoretical issues.
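The combination of model- and data-driven approaches can be stated compactly in the variational form common to this literature. A sketch (notation assumed, not taken from the volume) where the fidelity term encodes the forward model A and a learned regularizer R_θ carries the data-driven prior:

```latex
\hat{x} \;=\; \arg\min_{x}\;
\underbrace{\tfrac{1}{2}\,\lVert Ax - y \rVert_2^2}_{\text{model-driven fidelity}}
\;+\; \lambda\,
\underbrace{R_\theta(x)}_{\text{learned, data-driven prior}}
```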

Apache Spark for Machine Learning

Dive into the power of Apache Spark as a tool for handling and processing the big data required for machine learning. With this book, you will explore how to configure, execute, and deploy machine learning algorithms using Spark's scalable architecture, and learn best practices for implementing real-world big data solutions.

What this Book will help me do
- Understand the integration of Apache Spark with large-scale infrastructures for machine learning applications.
- Employ data processing techniques for preprocessing and feature engineering efficiently with Spark.
- Master the implementation of advanced supervised and unsupervised learning algorithms using Spark.
- Learn to deploy machine learning models within Spark ecosystems for optimized performance.
- Discover methods for analyzing big data trends and tuning machine learning models for improved accuracy.

Author(s)
The author, Deepak Gowda, is an experienced data scientist with over ten years of expertise in machine learning and big data. His career spans industries such as supply chain and cybersecurity, where he has used Apache Spark extensively. Deepak's teaching style is marked by clarity and practicality, making complex concepts approachable.

Who is it for?
Apache Spark for Machine Learning is tailored for data engineers, machine learning practitioners, and computer science students looking to advance their ability to process, analyze, and model large datasets. If you're already familiar with basic machine learning and want to scale your solutions using Spark, this book is ideal for your studies and professional growth.
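As a small illustration of the unsupervised side mentioned above, here is a minimal K-means sketch in PySpark with an invented toy dataset:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("clusters").getOrCreate()

# Invented two-dimensional points; real workloads would load a large dataset.
df = spark.createDataFrame(
    [(0.0, 0.1), (0.2, 0.0), (9.8, 9.9), (10.0, 10.2)], ["x", "y"]
)
features = VectorAssembler(inputCols=["x", "y"], outputCol="features").transform(df)

# Unsupervised learning at scale: KMeans distributes the fit across executors.
model = KMeans(k=2, seed=1).fit(features)
model.transform(features).select("x", "y", "prediction").show()
```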

Apache Airflow Best Practices

"Apache Airflow Best Practices" is your go-to guide for mastering data workflow orchestration using Apache Airflow. This book introduces you to core concepts and features of Airflow and helps you efficiently design, deploy, and manage workflows. With detailed examples and hands-on tutorials, you'll learn how to tackle real-world challenges in data engineering. What this Book will help me do Understand and utilize the features and updates introduced in Apache Airflow 2.x. Design and implement robust, scalable, and efficient data pipelines and workflows. Learn best practices for deploying Apache Airflow in cloud environments such as AWS and GCP. Extend Airflow's functionality with custom plugins and advanced configuration. Monitor, maintain, and scale your Airflow deployment effectively for high availability. Author(s) Dylan Intorf, Dylan Storey, and Kendrick van Doorn are seasoned professionals in data engineering, data strategy, and software development. Between them, they bring decades of experience working in diverse industries like finance, tech, and life sciences. They bring their expertise into this practical guide to help practitioners understand and master Apache Airflow. Who is it for? This book is tailored for data professionals such as data engineers, scientists, and system administrators, offering valuable insights for new learners and experienced users. If you're starting with workflow orchestration, seeking to optimize your current Airflow implementation, or scaling efforts, this book aligns with your goals. Readers should have a basic knowledge of Python programming and data engineering principles.

Building Modern Data Applications Using Databricks Lakehouse

This book, "Building Modern Data Applications Using Databricks Lakehouse," provides a comprehensive guide for data professionals to master the Databricks platform. You'll learn to effectively build, deploy, and monitor robust data pipelines with Databricks' Delta Live Tables, empowering you to manage and optimize cloud-based data operations effortlessly. What this Book will help me do Understand the foundations and concepts of Delta Live Tables and its role in data pipeline development. Learn workflows to process and transform real-time and batch data efficiently using the Databricks lakehouse architecture. Master the implementation of Unity Catalog for governance and secure data access in modern data applications. Deploy and automate data pipeline changes using CI/CD, leveraging tools like Terraform and Databricks Asset Bundles. Gain advanced insights in monitoring data quality and performance, optimizing cloud costs, and managing DataOps tasks effectively. Author(s) Will Girten, the author, is a seasoned Solutions Architect at Databricks with over a decade of experience in data and AI systems. With a deep expertise in modern data architectures, Will is adept at simplifying complex topics and translating them into actionable knowledge. His books emphasize real-time application and offer clear, hands-on examples, making learning engaging and impactful. Who is it for? This book is geared towards data engineers, analysts, and DataOps professionals seeking efficient strategies to implement and maintain robust data pipelines. If you have a basic understanding of Python and Apache Spark and wish to delve deeper into the Databricks platform for streamlining workflows, this book is tailored for you.

Delta Lake: The Definitive Guide

Ready to simplify the process of building data lakehouses and data pipelines at scale? In this practical guide, learn how Delta Lake is helping data engineers, data scientists, and data analysts overcome key data reliability challenges with modern data engineering and management techniques. Authors Denny Lee, Tristen Wentling, Scott Haines, and Prashanth Babu (with contributions from Delta Lake maintainer R. Tyler Croy) share expert insights on all things Delta Lake, including how to run batch and streaming jobs concurrently and accelerate the usability of your data. You'll also uncover how ACID transactions bring reliability to data lakehouses at scale.

This book helps you:
- Understand key data reliability challenges and how Delta Lake solves them
- Explain the critical role of the Delta transaction log as a single source of truth
- Learn the Delta Lake ecosystem with technologies like Apache Flink, Kafka, and Trino
- Architect data lakehouses with the medallion architecture
- Optimize Delta Lake performance with features like deletion vectors and liquid clustering
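The transaction-log and time-travel ideas above can be seen in a few lines with the open source deltalake Python package (delta-rs); the path and data are invented.

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake  # pip install deltalake

# Two atomic commits to a local Delta table (path is arbitrary).
write_deltalake("/tmp/events", pd.DataFrame({"id": [1, 2]}))
write_deltalake("/tmp/events", pd.DataFrame({"id": [3]}), mode="append")

dt = DeltaTable("/tmp/events")
print(dt.version())  # latest version: 1
print(dt.history())  # the transaction log as a single source of truth

# Time travel: read the table as of the first commit.
print(DeltaTable("/tmp/events", version=0).to_pandas())
```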

Handling and Mapping Geographic Information

With the increasing proliferation of data and the systematization of geographic information referencing, maps are now a major concern – not only for specialists, but also for urban planning and development organizations and the general public. However, while producing a map may seem straightforward, the actual process of transforming data into a useful map with a specific purpose is characterized by a series of precise operations that require knowledge in a variety of fields: statistics, geography, cartography and so on. Handling and Mapping Geographic Information presents a wide range of operations based on a variety of examples. Each chapter adopts a different approach, explaining the methodological choices made in relation to the theme and the pursued objective. This approach, encompassing the entire map production process, will enable all readers, whether students, researchers, teachers or planners, to understand the multiple roles that maps can play in the analysis of geographical data.

LLM Engineer's Handbook

The "LLM Engineer's Handbook" is your comprehensive guide to mastering Large Language Models from concept to deployment. Written by leading experts, it combines theoretical foundations with practical examples to help you build, refine, and deploy LLM-powered solutions that solve real-world problems effectively and efficiently. What this Book will help me do Understand the principles and approaches for training and fine-tuning Large Language Models (LLMs). Apply MLOps practices to design, deploy, and monitor your LLM applications effectively. Implement advanced techniques such as retrieval-augmented generation (RAG) and preference alignment. Optimize inference for high performance, addressing low-latency and high availability for production systems. Develop robust data pipelines and scalable architectures for building modular LLM systems. Author(s) Paul Iusztin and Maxime Labonne are experienced AI professionals specializing in natural language processing and machine learning. With years of industry and academic experience, they are dedicated to making complex AI concepts accessible and actionable. Their collaborative authorship ensures a blend of theoretical rigor and practical insights tailored for modern AI practitioners. Who is it for? This book is tailored for AI engineers, NLP professionals, and LLM practitioners who wish to deepen their understanding of Large Language Models. Ideal readers possess some familiarity with Python, AWS, and general AI concepts. If you aim to apply LLMs to real-world scenarios or enhance your expertise in AI-driven systems, this handbook is designed for you.

Aerospike: Up and Running

If you're a developer looking to build a distributed, resilient, scalable, high-performance application, you may be evaluating distributed SQL and NoSQL solutions. Perhaps you're considering the Aerospike database. This practical book shows developers, architects, and engineers how to get the highly scalable and extremely low-latency Aerospike database up and running. You will learn how to power your globally distributed applications and take advantage of Aerospike's hybrid memory architecture, which combines the real-time performance of in-memory with dependable persistence. After reading this book, you'll be able to build applications that can process up to tens of millions of transactions per second for millions of concurrent users on any scale of data.

This practical guide provides:
- Step-by-step instructions on installing and connecting to Aerospike
- A clear explanation of the programming models available
- All the advice you need to develop your Aerospike application
- Coverage of issues such as administration, connectors, consistency, and security
- Code examples and tutorials to get you up and running quickly
- And more
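Getting a first record in and out of Aerospike is short. A hedged sketch with the official Python client, assuming a server on localhost and the stock "test" namespace; the set and bin names are invented.

```python
import aerospike  # pip install aerospike; assumes a server on localhost:3000

config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# Keys are (namespace, set, user-key); "test" is the stock namespace.
key = ("test", "users", "user1")
client.put(key, {"name": "Ada", "visits": 1})  # write a record (bins)

_, _, bins = client.get(key)  # single-record read, typically sub-millisecond
print(bins)
client.close()
```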