talk-data.com

Topic: data (5765 tagged)

Activity trend: peak of 3 per quarter, 2020-Q1 to 2026-Q1

Activities: 5765 · Newest first

IBM PowerHA SystemMirror for i: Using Geographic Mirroring (Volume 4 of 4)

IBM® PowerHA® SystemMirror® for i is the IBM high-availability (HA), disk-based clustering solution for the IBM i operating system. When PowerHA for i is combined with IBM i clustering technology, PowerHA for i delivers a complete HA and disaster-recovery (DR) solution for business applications that are running in an IBM i environment. Use PowerHA for i to support HA capabilities with native disk storage, IBM DS8000® storage servers, or IBM Storwize® storage servers. This IBM Redbooks® publication helps you to install, tailor, and configure IBM PowerHA SystemMirror for i for use with geographic mirroring and native storage. It provides planning information to prepare you to use the various PowerHA offerings with geographic mirroring and IBM i native storage, along with implementation and management information. It also provides guidance about troubleshooting these solutions and identifies the documentation that you need to capture before you call IBM Support. This book is part of a four-book set that gives you a complete understanding of PowerHA for i with native disk storage, IBM DS8000 storage servers, or IBM Storwize storage servers. The following IBM Redbooks publications are part of this PowerHA for i volume set:

IBM PowerHA SystemMirror for i: Preparation, SG24-8400
IBM PowerHA SystemMirror for i: Using DS8000, SG24-8403
IBM PowerHA SystemMirror for i: Using IBM Storwize, SG24-8402

Important: The information that is presented in this volume set is for technical consultants, technical support staff, IT architects, and IT specialists who are responsible for providing HA and support for IBM i solutions. If you are new to HA, first review the information that is presented in the first book of this volume set, IBM PowerHA SystemMirror for i: Preparation (Volume 1 of 4), SG24-8400, to obtain a general understanding of clustering technology, independent auxiliary storage pools (IASPs), and the PowerHA architecture.

Learning Pentaho CTools

Learning Pentaho CTools is a comprehensive guide to building sophisticated and custom analytics dashboards using the powerful capabilities of Pentaho CTools. This book walks you through the process of creating interactive dashboards, integrating data sources, and applying data visualization best practices. You'll quickly gain the expertise needed to create impactful dashboards with ease. What this Book will help me do Master installing and configuring CTools for Pentaho to jumpstart dashboard development. Harness diverse data sources and deliver data in formats like CSV, JSON, and XML for customized analytics. Design and implement dynamic, visually stunning dashboards using Community Dashboard Framework (CDF). Deploy and integrate plugins, leverage widgets, and manage dashboards effectively with version control. Enhance interactivity by customizing dashboard components, charts, and filters to suit unique requirements. Author(s) Gaspar, an expert in Pentaho and its tools, has been a Senior Consultant at Pentaho, where he gained in-depth experience crafting analytics solutions. He brings to this book his teaching passion and field expertise, combining theoretical insights with practical applications. His approachable style ensures readers can follow technical concepts effectively. Who is it for? This book is ideal for developers who are looking to enhance their understanding of Pentaho's CTools portfolio to build advanced dashboards. A working knowledge of JavaScript and CSS will enable readers to get the most out of this guide. Whether you aim to extend your analytics capabilities or learn the tools from scratch, this book bridges the gap between learning and application.

Mastering Redis

"Mastering Redis" is your comprehensive guide to truly leveraging the power of the Redis data structure server. This hands-on resource offers detailed insights into scaling data with Redis clusters, optimizing memory, scripting with Lua, and integrating Redis with other NoSQL technologies to create robust, efficient applications. What this Book will help me do Select and utilize the appropriate Redis data structure to solve specific use cases efficiently. Implement Lua scripts on Redis for complex workflows and custom functionality. Optimize Redis configurations to achieve efficient memory usage and server performance. Integrate Redis with other NoSQL databases, such as MongoDB and Elasticsearch, for enhanced capabilities. Set up Redis Clusters and use Redis Sentinel for distributed and highly available setups. Author(s) Vidyasagar N V and Nelson bring a wealth of expertise in software development and distributed systems to this book. Vidyasagar has extensive hands-on experience with Redis, enabling him to provide practical insights and best practices. Nelson complements this with deep knowledge of database optimization, making their combined perspective invaluable for anyone diving deep into Redis. Who is it for? This book is aimed at software developers who have an understanding of Redis basics and want to advance their proficiency. It is also targeted at developers aiming to implement Redis in production efficiently. By reading this book, readers will deepen their Redis skills and learn how to integrate it with other technologies to develop scalable, high-performance applications.
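To make "selecting the appropriate Redis data structure" concrete, here is a minimal plain-Python sketch of the semantics of a Redis sorted set (ZSET), the structure Redis users typically reach for when building a leaderboard. No Redis server is involved; a dict stands in for the member-to-score map, and the function names only mimic the real ZINCRBY and ZREVRANGE commands.

```python
# Plain-Python sketch of Redis sorted-set (ZSET) semantics.
# A dict maps member -> score; the functions mimic two core commands.

def zincrby(zset: dict, member: str, delta: float) -> float:
    """Mimic ZINCRBY: add delta to a member's score, creating it if absent."""
    zset[member] = zset.get(member, 0.0) + delta
    return zset[member]

def zrevrange_withscores(zset: dict, start: int, stop: int):
    """Mimic ZREVRANGE ... WITHSCORES: members by descending score."""
    ranked = sorted(zset.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[start:stop + 1]  # Redis ranges are inclusive

leaderboard = {}
zincrby(leaderboard, "alice", 50)
zincrby(leaderboard, "bob", 30)
zincrby(leaderboard, "alice", 25)   # alice now at 75

top = zrevrange_withscores(leaderboard, 0, 1)
print(top)  # [('alice', 75.0), ('bob', 30.0)]
```

The point of the sketch: because the structure keeps scores alongside members, "top N" queries need no extra indexing, which is exactly why a ZSET beats a plain hash for this use case.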

Mastering Redmine - Second Edition

Mastering Redmine Second Edition provides a comprehensive guide to the popular open source project management tool, Redmine. With this book, you'll gain a solid understanding of effective Redmine use, from installing and configuring to advanced customizations and integrations. Explore how to optimize your workflow and manage projects with clarity and precision. What this Book will help me do Confidently install and configure Redmine for your organization. Harness Redmine for effective issue tracking and project hosting. Understand and implement Redmine's rich text formatting and permissions systems. Utilize time tracking features and custom fields to enhance project management. Explore and integrate essential Redmine plugins for improved functionality. Author(s) Andriy Lesyuk, an experienced Redmine expert, brings years of hands-on experience managing and customizing Redmine instances. His passion for open source and practical approach to project management makes this guide an invaluable resource for learning Redmine. Who is it for? This book is ideal for project managers and Redmine administrators looking to deepen their understanding of Redmine. If you're familiar with the basics of Redmine and aim to optimize, customize, and expand its use, this guide is for you. Whether managing projects or improving team collaborations, you'll find actionable insights to elevate your use of Redmine.

Network Reliability

In engineering theory and applications, we think and operate in terms of logics and models with some acceptable and reasonable assumptions. The present text is aimed at providing modelling and analysis techniques for the evaluation of reliability measures (2-terminal, all-terminal, k-terminal reliability) for systems whose structure can be described in the form of a probabilistic graph. Among the several approaches to network reliability evaluation, the multiple-variable-inversion sum-of-disjoint-products approach finds a well-deserved niche, as it provides the reliability or unreliability expression in a most efficient and compact manner. However, it does require efficiently enumerated minimal inputs (minimal paths, spanning trees, minimal k-trees, minimal cuts, minimal global-cuts, minimal k-cuts), depending on the desired reliability measure. The present book covers these two aspects in detail through descriptions of several algorithms devised by the 'reliability fraternity', explained through solved examples, to obtain and evaluate 2-terminal, k-terminal, and all-terminal network reliability/unreliability measures; this coverage could be considered its USP. The accompanying web-based supplementary information, containing modifiable Matlab® source code for the algorithms, is another feature of this book. A very concerted effort has been made to keep the book ideally suitable for a first course, or even for a novice stepping into the area of network reliability. The mathematical treatment is kept as minimal as possible, with the assumption that readers have basic knowledge of graph theory, probability laws, Boolean laws, and set theory.
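For readers new to the topic, a small worked example helps fix the idea of 2-terminal reliability. The sketch below is not the book's sum-of-disjoint-products algorithm; it uses the simpler (and exponentially slower) inclusion-exclusion over minimal path sets, and the edge reliabilities and path sets are illustrative assumptions.

```python
from itertools import combinations

def two_terminal_reliability(min_paths, p):
    """2-terminal reliability by inclusion-exclusion over minimal paths.

    min_paths: list of edge-sets, each a minimal s-t path.
    p: dict mapping edge -> probability that the edge is up.
    R = P(at least one minimal path has all of its edges up).
    """
    n, total = len(min_paths), 0.0
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            # Union of edges used by this combination of paths.
            union = set().union(*(min_paths[i] for i in combo))
            prob = 1.0
            for e in union:
                prob *= p[e]
            total += (-1) ** (k + 1) * prob  # inclusion-exclusion sign
    return total

# Two parallel edges between s and t: minimal paths {a} and {b}.
p = {"a": 0.9, "b": 0.8}
R = two_terminal_reliability([{"a"}, {"b"}], p)
print(round(R, 4))  # 1 - (1 - 0.9)(1 - 0.8) = 0.98
```

The compactness argument the text makes is visible even here: inclusion-exclusion generates 2^n - 1 terms, while a sum-of-disjoint-products expression for the same network stays much smaller, which is why the SDP approach scales better.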

Spring Persistence with Hibernate, Second Edition

Learn how to use the core Hibernate APIs and tools as part of the Spring Framework. This book illustrates how these two frameworks can be best utilized. Other persistence solutions available in Spring are also shown, including the Java Persistence API (JPA). Spring Persistence with Hibernate, Second Edition has been updated to cover Spring Framework version 4 and Hibernate version 5. After reading and using this book, you'll have the fundamentals to apply these persistence solutions to your own mission-critical enterprise Java applications that you build using Spring. Persistence is an important set of techniques and technologies for accessing and using data, and for ensuring that data is mobile regardless of specific applications and contexts. In Java development, persistence is a key factor in enterprise, e-commerce, and other transaction-oriented applications. Today, the agile and open source Spring Framework is the leading out-of-the-box, open source solution for enterprise Java developers, and in it you can find a number of Java persistence solutions.

What You'll Learn
Use Spring Persistence, including using persistence tools in Spring as well as choosing the best Java persistence frameworks outside of Spring
Take advantage of Spring Framework features such as Inversion of Control (IoC), aspect-oriented programming (AOP), and more
Work with Spring JDBC, use declarative transactions with Spring, and reap the benefits of a lightweight persistence strategy
Harness Hibernate and integrate it into your Spring-based enterprise Java applications for transactions, data processing, and more
Integrate JPA for creating a well-layered persistence tier in your enterprise Java application

Who This Book Is For
This book is ideal for developers interested in learning more about persistence framework options on the Java platform, as well as fundamental Spring concepts. Because the book covers several persistence frameworks, it is suitable for anyone interested in learning more about Spring or any of the frameworks covered. Lastly, this book covers advanced topics related to persistence architecture and design patterns, and is ideal for beginning developers looking to learn more in these areas.

Apache Spark Machine Learning Blueprints

In 'Apache Spark Machine Learning Blueprints', you'll explore how to create sophisticated and scalable machine learning projects using Apache Spark. This project-driven guide covers practical applications including fraud detection, customer analysis, and recommendation engines, helping you leverage Spark's capabilities for advanced data science tasks. What this Book will help me do Learn to set up Apache Spark efficiently for machine learning projects, unlocking its powerful processing capabilities. Integrate Apache Spark with R for detailed analytical insights, empowering your decision-making processes. Create predictive models for use cases including customer scoring, fraud detection, and risk assessment with practical implementations. Understand and utilize Spark's parallel computing architecture for large-scale machine learning tasks. Develop and refine recommendation systems capable of handling large user bases and datasets using Spark. Author(s) Alex Liu is a seasoned data scientist and software developer specializing in machine learning and big data technology. With extensive experience in using Apache Spark for predictive analytics, Alex has successfully built and deployed scalable solutions across industries. Their teaching approach combines theory and practical insights, making cutting-edge technologies accessible and actionable. Who is it for? This book is ideal for data analysts, data scientists, and developers with a foundation in machine learning who are eager to apply their knowledge in big data contexts. If you have a basic familiarity with Apache Spark and its ecosystem, and you're looking to enhance your ability to build machine learning applications, this resource is for you. It's particularly valuable for those aiming to utilize Spark for extensive data operations and gain practical, project-based insights.

IBM z13 Technical Guide

Digital business has been driving the transformation of underlying IT infrastructure to be more efficient, secure, adaptive, and integrated. Information Technology (IT) must be able to handle the explosive growth of mobile clients and employees. IT also must be able to use enormous amounts of data to provide deep and real-time insights to help achieve the greatest business impact. This IBM® Redbooks® publication addresses the IBM Mainframe, the IBM z13™. The IBM z13 is the trusted enterprise platform for integrating data, transactions, and insight. A data-centric infrastructure must always be available with a 99.999% or better availability, have flawless data integrity, and be secured from misuse. It needs to be an integrated infrastructure that can support new applications. It needs to have integrated capabilities that can provide new mobile capabilities with real-time analytics delivered by a secure cloud infrastructure. IBM z13 is designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows the z13 to deliver a record level of capacity over the prior IBM z Systems™. In its maximum configuration, z13 is powered by up to 141 client characterizable microprocessors (cores) running at 5 GHz. This configuration can run more than 110,000 millions of instructions per second (MIPS) and supports up to 10 TB of client memory. The IBM z13 Model NE1 is estimated to provide up to 40% more total system capacity than the IBM zEnterprise® EC12 (zEC12) Model HA1. This book provides information about the IBM z13 and its functions, features, and associated software support. Greater detail is offered in areas relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM z Systems functions and plan for their usage. It is not intended as an introduction to mainframes. 
Readers are expected to be generally familiar with existing IBM z Systems technology and terminology.

Mastering Data Visualization with Microsoft Visio Professional 2016

Microsoft Visio Professional 2016 is an essential tool for creating sophisticated data visualizations across a variety of contexts and industries. In 'Mastering Data Visualization with Microsoft Visio Professional 2016', you'll learn how to utilize Visio's powerful features to transform data into compelling graphics and actionable insights. What this Book will help me do Understand how to integrate external data from various sources into your Visio diagrams. Master the use of Visio's tools to represent information using data-driven graphics. Learn the process of designing and utilizing custom shapes and templates for tailored visualizations. Discover methods for automating diagram creation from structured and external data sources. Gain techniques to share and present interactive and professional visuals with a wide audience. Author(s) John Marshall, the author of 'Mastering Data Visualization with Microsoft Visio Professional 2016,' brings years of experience in data modeling and visualization. With an extensive technical background, Marshall is a renowned expert in leveraging visual tools to communicate complex ideas effectively. His approachable writing style makes highly technical concepts accessible to professionals at various levels. Who is it for? If you're a business intelligence professional, technical analyst, or a Microsoft Office power user looking to enhance your skills in creating impactful visualizations, this book is for you. Its step-by-step approach is ideal for users of Visio Professional starting out or seeking advanced techniques. You'll gain practical insights and learn to apply them effectively in your business or technical workflows, achieving refined data presentations.

IBM PowerHA SystemMirror for i: Preparation (Volume 1 of 4)

IBM® PowerHA® SystemMirror® for i is the IBM high-availability (HA), disk-based clustering solution for the IBM i operating system. When PowerHA for i is combined with IBM i clustering technology, it delivers a complete HA and disaster-recovery (DR) solution for business applications that are running in an IBM i environment. You can use PowerHA for i to support HA capabilities with native disk storage, IBM DS8000® storage servers, or IBM Storwize® storage servers. This IBM Redbooks® publication gives a broad understanding of PowerHA for i and provides a general introduction to clustering technology, independent auxiliary storage pools (IASPs), PowerHA SystemMirror products, and the PowerHA architecture. This book is part of a four-book volume set that gives you a complete understanding of PowerHA for i and its use of native disk storage, IBM DS8000 storage servers, or IBM Storwize storage servers. The following IBM Redbooks publications are part of this PowerHA for i volume set:

IBM PowerHA SystemMirror for i: Using DS8000, SG24-8403
IBM PowerHA SystemMirror for i: Using IBM Storwize, SG24-8402
IBM PowerHA SystemMirror for i: Using Geographic Mirroring, SG24-8401

Important: The information that is presented in this volume set is for technical consultants, technical support staff, IT architects, and IT specialists who are responsible for providing HA and support for IBM i solutions. If you are new to HA, first review the information that is presented in this book to get a general understanding of clustering technology, IASPs, and the PowerHA architecture. You can then select the appropriate follow-on book based on the storage solutions that you are planning to use.

JSON Quick Syntax Reference

This compact quick scripting syntax reference on JSON covers syntax and parameters central to JSON object definitions, using the NetBeans 8.1 open source and Eclipse IDE software tool packages. JSON Quick Syntax Reference covers the syntax used in the JSON object definition language, logically organized by topical chapters, and getting more advanced as chapters progress, covering structures and file formats which are best for use with HTML5. Furthermore, this book includes the key factors regarding the data footprint optimization work process, the in-lining of .CSS and .JS files, and why a data footprint optimization work process is important.
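Since the blurb mentions both JSON object definitions and data footprint optimization, a small Python example can illustrate both points: the core JSON value types (object, array, string, number, boolean, null) and how a minified serialization shrinks the footprint. The document values here are invented for illustration.

```python
import json

# A document exercising every core JSON value type:
# object, array, string, number, boolean, and null.
doc = {
    "title": "JSON Quick Syntax Reference",  # string
    "published": True,                       # boolean -> true
    "chapters": 9,                           # number
    "tags": ["syntax", "HTML5"],             # array
    "meta": {"minified": None},              # nested object; None -> null
}

pretty = json.dumps(doc, indent=2)                  # readable form
minified = json.dumps(doc, separators=(",", ":"))   # smallest footprint

# Round-tripping recovers the original structure exactly.
assert json.loads(minified) == doc
print(minified)
```

Dropping the default spaces after `,` and `:` is the simplest of the data-footprint optimizations the book alludes to; it changes the byte count but not the parsed value.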

Streaming Architecture

More and more data-driven companies are looking to adopt stream processing and streaming analytics. With this concise ebook, you’ll learn best practices for designing a reliable architecture that supports this emerging big-data paradigm. Authors Ted Dunning and Ellen Friedman (Real World Hadoop) help you explore some of the best technologies to handle stream processing and analytics, with a focus on the upstream queuing or message-passing layer. To illustrate the effectiveness of these technologies, this book also includes specific use cases. Ideal for developers and non-technical people alike, this book describes: Key elements in good design for streaming analytics, focusing on the essential characteristics of the messaging layer New messaging technologies, including Apache Kafka and MapR Streams, with links to sample code Technology choices for streaming analytics: Apache Spark Streaming, Apache Flink, Apache Storm, and Apache Apex How stream-based architectures are helpful to support microservices Specific use cases such as fraud detection and geo-distributed data streams Ted Dunning is Chief Applications Architect at MapR Technologies, and active in the open source community. He currently serves as VP for Incubator at the Apache Foundation, as a champion and mentor for a large number of projects, and as committer and PMC member of the Apache ZooKeeper and Drill projects. Ted is on Twitter as @ted_dunning. Ellen Friedman, a committer for the Apache Drill and Apache Mahout projects, is a solutions consultant and well-known speaker and author, currently writing mainly about big data topics. With a PhD in Biochemistry, she has years of experience as a research scientist and has written about a variety of technical topics. Ellen is on Twitter as @Ellen_Friedman.
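The book's central claim, that a message-passing layer decouples producers from consumers, can be sketched in a few lines of standard-library Python. The in-process queue below is only a stand-in for a real messaging technology such as Apache Kafka or MapR Streams; the event shape and sentinel convention are assumptions for the demo.

```python
import queue
import threading

def producer(q: queue.Queue) -> None:
    """Emit a short stream of events, then a sentinel marking end-of-stream."""
    for i in range(5):
        q.put({"event_id": i, "type": "click"})
    q.put(None)  # sentinel: no more messages

def consumer(q: queue.Queue, out: list) -> None:
    """Drain the queue until the sentinel; the consumer never sees the producer."""
    while True:
        msg = q.get()
        if msg is None:
            break
        out.append(msg["event_id"])

topic = queue.Queue()   # stand-in for a durable topic/partition
results: list = []
t1 = threading.Thread(target=producer, args=(topic,))
t2 = threading.Thread(target=consumer, args=(topic, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 2, 3, 4]
```

What a real streaming layer adds on top of this picture is durability, replayability, and multiple independent consumers, which is precisely why the authors focus the design discussion on that layer.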

Cyber-Risk Informatics

This book provides a scientific modeling approach for conducting metrics-based quantitative risk assessments of cybersecurity vulnerabilities and threats. The author builds from a common understanding based on previous class-tested works to introduce the reader to current and newly innovative approaches for addressing maliciously human-created (rather than by-chance-occurring) vulnerabilities and threats, and the related cost-effective management to mitigate such risk. The book is purely statistical and data-oriented (not deterministic) and employs computationally intensive techniques, such as Monte Carlo and discrete event simulation. The enriched, ready-to-run JAVA applications and solutions to exercises provided by the author on the book's website enable readers to work through the course-related problems.
• Enables the reader to use the book's website applications to implement solutions, see results, and use them in ways that make 'budgetary' sense
• Utilizes a data-analytical approach and provides clear entry points for readers of varying skill sets and backgrounds
• Developed out of necessity from real in-class experience while teaching advanced undergraduate and graduate courses
Cyber-Risk Informatics is a resource for undergraduate students, graduate students, and practitioners in the field of risk assessment and management regarding security and reliability modeling. Mehmet Sahinoglu, a Professor (1990) Emeritus (2000), is the founder of the Informatics Institute (2009) and its SACS-accredited (2010) and NSA-certified (2013) flagship Cybersystems and Information Security (CSIS) graduate program (the first such full degree in-class program in the Southeastern USA) at AUM, Auburn University's metropolitan campus in Montgomery, Alabama. 
He is a fellow member of the SDPS Society, a senior member of the IEEE, and an elected member of ISI. Sahinoglu is the recipient of Microsoft's Trustworthy Computing Curriculum (TCC) award and the author of Trustworthy Computing (Wiley, 2007).

Professional Hadoop

The professional's one-stop guide to this open-source, Java-based big data framework Professional Hadoop is the complete reference and resource for experienced developers looking to employ Apache Hadoop in real-world settings. Written by an expert team of certified Hadoop developers, committers, and Summit speakers, this book details every key aspect of Hadoop technology to enable optimal processing of large data sets. Designed expressly for the professional developer, this book skips over the basics of database development to get you acquainted with the framework's processes and capabilities right away. The discussion covers each key Hadoop component individually, culminating in a sample application that brings all of the pieces together to illustrate the cooperation and interplay that make Hadoop a major big data solution. Coverage includes everything from storage and security to computing and user experience, with expert guidance on integrating other software and more. Hadoop is quickly reaching significant market usage, and more and more developers are being called upon to develop big data solutions using the Hadoop framework. This book covers the process from beginning to end, providing a crash course for professionals needing to learn and apply Hadoop quickly. Configure storage, UE, and in-memory computing Integrate Hadoop with other programs including Kafka and Storm Master the fundamentals of Apache Bigtop and Ignite Build robust data security with expert tips and advice Hadoop's popularity is largely due to its accessibility. Open-source and written in Java, the framework offers almost no barrier to entry for experienced database developers already familiar with the skills and requirements real-world programming entails. Professional Hadoop gives you the practical information and framework-specific skills you need quickly.

Mastering the SAS DS2 Procedure

Enhance your SAS® data wrangling skills with high precision and parallel data manipulation using the new DS2 programming language.

This book addresses the new DS2 programming language from SAS, which combines the precise procedural power and control of the Base SAS DATA step language with the simplicity and flexibility of SQL. DS2 provides simple, safe syntax for performing complex data transformations in parallel and enables manipulation of native database data types at full precision. It also introduces PROC FEDSQL, a modernized SQL language that blends perfectly with DS2. You will learn to harness the power of parallel processing to speed up CPU-intensive computing processes in Base SAS and how to achieve even more speed by processing DS2 programs on massively parallel database systems. Techniques for leveraging Internet APIs to acquire data, avoiding large data movements when working with data from disparate sources, and leveraging DS2’s new data types for full-precision numeric calculations are presented, with examples of why these techniques are essential for the modern data wrangler.

While working through the code samples provided with this book, you will build a library of custom, reusable, and easily shareable DS2 program modules, execute parallelized DATA step programs to speed up a CPU-intensive process, and conduct advanced data transformations using hash objects and matrix math operations.

IBM TS7700 Release 3.3

IBM® TS7700 is a family of mainframe virtual tape solutions that optimize data protection and business continuance for IBM z Systems™ data. Through the use of virtualization and disk cache, the TS7700 family operates at disk speeds while maintaining compatibility with existing tape operations. Its fully integrated tiered storage hierarchy takes advantage of both disk and tape technologies to deliver performance for active data and best economics for inactive and archive data. This IBM Redbooks® publication describes the TS7700 R3.3 architecture, planning, migration, implementation, and operations. The latest TS7700 family of z Systems tape virtualization is offered as two models: IBM TS7720 features encryption-capable high-capacity cache that uses 3 TB SAS disk drives with RAID 6, which can scale to large capacities with the highest level of data protection. IBM TS7740 features encryption-capable 600 GB SAS drives with RAID 6 protection. Both models write data by policy to physical tape through attachment to high-capacity, high-performance IBM TS1150 and earlier IBM 3592 model tape drives that are installed in IBM TS3500 tape libraries. Physical tape support is optional on TS7720. TS7700 R3.3 also supports external key management for disk-based encryption by using IBM Security Key Lifecycle Manager. This book is intended for system architects who want to integrate their storage systems for smoother operation.

Threat Forecasting

Drawing upon years of practical experience and using numerous examples and illustrative case studies, Threat Forecasting: Leveraging Big Data for Predictive Analysis discusses important topics, including the danger of using historic data as the basis for predicting future breaches, how to use security intelligence as a tool to develop threat forecasting techniques, and how to use threat data visualization techniques and threat simulation tools. Readers will gain valuable security insights into unstructured big data, along with tactics on how to use the data to their advantage to reduce risk. Presents case studies and actual data to demonstrate threat data visualization techniques and threat simulation tools Explores the usage of kill chain modelling to inform actionable security intelligence Demonstrates a methodology that can be used to create a full threat forecast analysis for enterprise networks of any size

Mastering Hibernate

Mastering Hibernate is your comprehensive guide to understanding and mastering Hibernate, a powerful Object-Relational Mapping tool for Java and .Net applications. Through this book, you will dive deep into the mechanics of Hibernate, exploring its core concepts and architecture. Whether you're working with SQL or NoSQL data stores, this book ensures you can unlock Hibernate's full potential. What this Book will help me do Grasp the internal workings of Hibernate, including its session management and entity lifecycle. Optimize mapping between Java classes and relational database structures for better performance. Effectively manage relationships and collections within your data models using Hibernate features. Utilize Hibernate's caching systems to improve application performance and scalability. Handle multi-tenant database configurations with confidence using Hibernate's architectural capabilities. Author(s) Rad is an experienced software developer and educator specializing in Java-based applications and enterprise architecture. With years of hands-on practice using Hibernate in real-world scenarios, Rad has curated this book to serve as a clear and practical guide. Their writing reflects deep technical expertise combined with an approachable and illustrative teaching style, ensuring learning is both effective and engaging. Who is it for? This book is ideal for software developers and engineers who are familiar with Java or other similar object-oriented programming languages. Whether you're a professional looking to deepen your understanding of Hibernate's internals or a developer aiming to create more efficient ORM solutions, this book has something for you. Readers should have a basic understanding of Java and relational databases, but no prior Hibernate expertise is required. By the end, you'll be equipped to confidently apply Hibernate to sophisticated data challenges.

Making Sense of Stream Processing

How can event streams help make your application more scalable, reliable, and maintainable? In this report, O’Reilly author Martin Kleppmann shows you how stream processing can make your data storage and processing systems more flexible and less complex. Structuring data as a stream of events isn’t new, but with the advent of open source projects such as Apache Kafka and Apache Samza, stream processing is finally coming of age. Using several case studies, Kleppmann explains how these projects can help you reorient your database architecture around streams and materialized views. The benefits of this approach include better data quality, faster queries through precomputed caches, and real-time user interfaces. Learn how to open up your data for richer analysis and make your applications more scalable and robust in the face of failures. Understand stream processing fundamentals and their similarities to event sourcing, CQRS, and complex event processing Learn how logs can make search indexes and caches easier to maintain Explore the integration of databases with event streams, using the new Bottled Water open source tool Turn your database architecture inside out by orienting it around streams and materialized views
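Kleppmann's central idea, replaying an append-only event log into a materialized view, is simple enough to sketch in a few lines. The minimal stand-in below uses an invented set/delete event shape and a plain dict as the "view"; in a real system the log would live in something like Apache Kafka and the view in a database or cache.

```python
# Sketch: folding an append-only event log into a materialized view.
# The event schema (op/key/value) is an illustrative assumption.

event_log = [
    {"op": "set", "key": "user:1", "value": "Alice"},
    {"op": "set", "key": "user:2", "value": "Bob"},
    {"op": "set", "key": "user:1", "value": "Alicia"},  # later event wins
    {"op": "delete", "key": "user:2"},
]

def materialize(events):
    """Replay events in order to build a key-value view (a precomputed cache)."""
    view = {}
    for e in events:
        if e["op"] == "set":
            view[e["key"]] = e["value"]
        elif e["op"] == "delete":
            view.pop(e["key"], None)
    return view

print(materialize(event_log))  # {'user:1': 'Alicia'}
```

Because the log is the source of truth, the view can be rebuilt from scratch at any time, which is what makes precomputed caches and search indexes "easier to maintain" in this architecture: a bug in the view logic is fixed by correcting the fold and replaying.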

The Evolution of Analytics

Machine learning is a hot topic in business. Even data-driven organizations that have spent years developing successful data analysis platforms, with many accurate statistical models in place, are now looking into this decades-old discipline. But how can companies turn hyped opportunities for machine learning into real business value? This report examines the growing momentum of machine learning in the analytics landscape, the challenges machine learning presents to businesses, and examples of how organizations are actively seeking to incorporate modern machine learning techniques into their production data infrastructures. Authors Patrick Hall, Wen Phan, and Katie Whitson look at two companies in depth—one in healthcare and one in finance—that are seeing the real impact of machine learning. Discover how machine learning can help your organization: Analyze and generate insights from large amounts of varied, messy, and unstructured data unfit for traditional statistical analysis Increase the predictive accuracy beyond what was previously possible Augment aging analytical processes and other decision-making tools