talk-data.com

Topic: data (3406 tagged)

Activity trend: peak of 3 activities per quarter, 2020-Q1 to 2026-Q1.

Activities

Showing filtered results (filtered by: O'Reilly Data Engineering Books)
IBM FlashSystem 9200 and 9100 Best Practices and Performance Guidelines

This IBM® Redbooks® publication captures several of the preferred practices and describes the performance gains that can be achieved by implementing the IBM FlashSystem® 9100. These practices are based on field experience. This book highlights configuration guidelines and preferred practices for the storage area network (SAN) topology, clustered system, back-end storage, storage pools and managed disks, volumes, remote copy services, and hosts. It explains how you can optimize disk performance with the IBM System Storage® Easy Tier® function. It also provides preferred practices for monitoring, maintaining, and troubleshooting. This book is intended for experienced storage, SAN, IBM FlashSystem, SAN Volume Controller, and Storwize® administrators and technicians. Understanding this book requires advanced knowledge of these environments. Important note about the IBM FlashSystem 9200: on 11 February 2020, IBM announced the addition of the IBM FlashSystem 9200 to the family. This book was written specifically for the IBM FlashSystem 9100; however, most of the general principles also apply to the IBM FlashSystem 9200. If you are in any doubt about their applicability to the FlashSystem 9200, work with your local IBM representative. This book will be updated to include the FlashSystem 9200 in due course.

IBM FlashSystem and VMware Implementation and Best Practices Guide

This IBM® Redbooks® publication details the configuration and best practices for using IBM's FlashSystem family of storage products within a VMware environment. This book was published in 2021 and specifically addresses Spectrum Virtualize Version 8.4 with VMware vSphere Version 7.0. Topics illustrate planning, configuring, operations, and preferred practices that include integration of FlashSystem storage systems with the VMware vCloud suite of applications:
- vSphere Web Client (VWC)
- vStorage APIs for Storage Awareness (VASA)
- vStorage APIs for Array Integration (VAAI)
- Site Recovery Manager (SRM)
- vSphere Metro Storage Cluster (vMSC)
This book is intended for presales consulting engineers, sales engineers, and IBM clients who want to deploy IBM FlashSystem® storage systems in virtualized data centers that are based on VMware vSphere.

Advanced Analytics with Transact-SQL: Exploring Hidden Patterns and Rules in Your Data

Learn about business intelligence (BI) features in T-SQL and how they can help you with data science and analytics efforts without the need to bring in other languages such as R and Python. This book shows you how to compute statistical measures using your existing skills in T-SQL. You will learn how to calculate descriptive statistics, including centers, spreads, skewness, and kurtosis of distributions. You will also learn to find associations between pairs of variables, including calculating linear regression formulas and confidence levels with definite integration. No analysis is good without data quality. Advanced Analytics with Transact-SQL introduces data quality issues and shows you how to check for completeness and accuracy, and measure improvements in data quality over time. The book also explains how to optimize queries involving temporal data, such as when you search for overlapping intervals. More advanced time-oriented topics in the book include hazard and survival analysis. Forecasting with exponential moving averages and autoregression is covered as well. Every web/retail shop wants to know the products customers tend to buy together. Trying to predict a target discrete or continuous variable with a few input variables is important for practically every type of business. This book helps you understand data science, the advanced algorithms used to analyze data, and terms such as data mining, machine learning, and text mining. Key to many of the solutions in this book are T-SQL window functions. Author Dejan Sarka demonstrates efficient statistical queries that are based on window functions and optimized through algorithms built using mathematical knowledge and creativity. The formulas and usage of those statistical procedures are explained so you can understand and modify the techniques presented. T-SQL is supported in SQL Server, Azure SQL Database, and Azure Synapse Analytics. There are so many BI features in T-SQL that it might become your primary analytic database language. If you want to learn how to get information from your data with the T-SQL language that you are already familiar with, then this is the book for you.
What You Will Learn
Describe distributions of variables with statistical measures
Find associations between pairs of variables
Evaluate the quality of the data you are analyzing
Perform time-series analysis on your data
Forecast values of a continuous variable
Perform market-basket analysis to predict customer purchasing patterns
Predict target variable outcomes from one or more input variables
Categorize passages of text by extracting and analyzing keywords
Who This Book Is For
Database developers and database administrators who want to translate their T-SQL skills into the world of business intelligence (BI) and data science; readers who want to analyze large amounts of data efficiently by using their existing knowledge of T-SQL and Microsoft's database platforms, such as SQL Server and Azure SQL Database; and readers who want to improve their querying by learning new and original optimization techniques.
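
As a rough illustration of the descriptive statistics this book computes in T-SQL (sketched here in Python with only the standard library, since the book's own examples are written in T-SQL and are not reproduced here), the following snippet derives center, spread, skewness, and excess kurtosis for a small sample; the variable name and data are hypothetical.

```python
import statistics

# Hypothetical sample of a continuous variable (for example, order amounts).
values = [12.0, 15.5, 9.8, 22.1, 18.4, 30.9, 14.2, 11.7, 25.3, 16.0]

n = len(values)
mean = statistics.fmean(values)
stdev = statistics.pstdev(values)  # population standard deviation

# Population skewness and excess kurtosis: the "shape" measures the book
# derives with T-SQL window functions.
skewness = sum(((x - mean) / stdev) ** 3 for x in values) / n
kurtosis = sum(((x - mean) / stdev) ** 4 for x in values) / n - 3

print(f"mean={mean:.2f}  stdev={stdev:.2f}  "
      f"skewness={skewness:.2f}  excess kurtosis={kurtosis:.2f}")
```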

Best Practices Guide for Databases on IBM FlashSystem

The purpose of this IBM® Redpaper® document is to provide best practice guidelines for designing and implementing IBM FlashSystem® storage for database workloads. The recommended settings and values are based on lab testing, proof of concept (PoC) work, and experience drawn from customer implementations. Suggestions that are presented in this document are applicable to most production database environments and increase I/O performance and availability. However, more considerations might be required while designing, configuring, and implementing storage for extreme transactional, analytical, and database cluster environments. Customers are migrating database storage to IBM FlashSystem largely because of the low-latency performance of the IBM FlashSystem family of storage. Using IBM FlashSystem, IBM customers are able to reduce query and transaction latency from milliseconds to microseconds, realize a multi-fold increase in application-level transactions per second, increase CPU efficiency, and reduce database licensing costs. Recent additions of data reduction technologies to IBM FlashSystem further increase overall TCO benefits. All IBM FlashSystem models now offer compression, which can reduce database storage by 40 - 80%, depending on the database software. In addition to the best practices that are described in this document, the IBM FlashSystem Worldwide Solutions Engineering Team can further assist customers by analyzing current database workloads for IBM FlashSystem benefits, performing PoCs at our labs, and helping with implementation.

IBM TS4500 R7 Tape Library Guide

The IBM® TS4500 (TS4500) tape library is a next-generation tape solution that offers higher storage density and better integrated management than previous solutions. This IBM Redbooks® publication gives you a close-up view of the new IBM TS4500 tape library. In the TS4500, IBM delivers the density that today's and tomorrow's data growth requires, with the cost-effectiveness and manageability to grow with business data needs while preserving investments in IBM tape library products. Now, you can achieve a low cost per terabyte (TB) and a high TB density per square foot because the TS4500 can store up to 11 petabytes (PB) of uncompressed data in a single-frame library, or scale at up to 2 PB per square foot to over 350 PB. The TS4500 offers the following benefits:
High availability: Dual active accessors with integrated service bays reduce inactive service space by 40%. The Elastic Capacity option can be used to eliminate inactive service space.
Flexibility to grow: The TS4500 library can grow from the right side and the left side of the first L frame because models can be placed in any active position.
Increased capacity: The TS4500 can grow from a single L frame with up to another 17 expansion frames, for a capacity of over 23,000 cartridges. High-density (HD) generation 1 frames from the TS3500 library can be redeployed in a TS4500.
Capacity on demand (CoD): CoD is supported through entry-level, intermediate, and base-capacity configurations.
Advanced Library Management System (ALMS): ALMS supports dynamic storage management, which enables users to create and change logical libraries and configure any drive for any logical library.
Support for the IBM TS1160 tape drive, alongside the TS1155, TS1150, and TS1140 tape drives: The TS1160 gives organizations an easy way to deliver fast access to data, improve security, and provide long-term retention, all at a lower cost than disk solutions. The TS1160 offers high-performance, flexible data storage with support for data encryption. Also, this enhanced fifth-generation drive can help protect investments in tape automation by offering compatibility with existing automation. The TS1160 Tape Drive Model 60E delivers a dual 10 Gb or 25 Gb Ethernet host attachment interface that is optimized for cloud-based and hyperscale environments. The TS1160 Tape Drive Model 60F delivers a native data rate of 400 MBps, the same load/ready, locate speeds, and access times as the TS1155, and includes dual-port 16 Gb Fibre Channel support.
Support for the IBM Linear Tape-Open (LTO) Ultrium 8 tape drive: The LTO Ultrium 8 offering represents significant improvements in capacity, performance, and reliability over the previous generation, LTO Ultrium 7, while still protecting your investment in the previous technology.
Support for the LTO-8 Type M cartridge (m8): The LTO Program introduced a new capability with LTO-8 drives: the ability to write 9 TB on a brand-new LTO-7 cartridge instead of the 6 TB specified by the LTO-7 format. Such a cartridge is called an LTO-7 initialized LTO-8 Type M cartridge.
Integrated TS7700 back-end Fibre Channel (FC) switches are available.
Up to four library-managed encryption (LME) key paths per logical library are available.
This book describes the TS4500 components, feature codes, specifications, supported tape drives, encryption, the new integrated management console (IMC), the command-line interface (CLI), and REST over SCSI (RoS) to obtain status information about library components.
October 2020 - Added support for the 3592 model 60S tape drive that provides a dual-port 12 Gb SAS (Serial Attached SCSI) interface for host attachment.

Data Lakes For Dummies

Take a dive into data lakes. "Data lakes" is the latest buzzword in the world of data storage, management, and analysis. Data Lakes For Dummies decodes and demystifies the concept and helps you get a straightforward answer to the question: "What exactly is a data lake, and do I need one for my business?" Written for an audience of technology decision makers tasked with keeping up with the latest and greatest data options, this book provides the perfect introductory survey of these novel and growing features of the information landscape. It explains how they can help your business, what they can (and can't) achieve, and what you need to do to create the lake that best suits your particular needs. With a minimum of jargon, prolific tech author and business intelligence consultant Alan Simon explains how data lakes differ from other data storage paradigms. Once you've got the background picture, he maps out ways you can add a data lake to your business systems; migrate existing information and switch on the fresh data supply; clean up the product; and open channels to the best intelligence software for interpreting what you've stored.
Understand and build data lake architecture
Store, clean, and synchronize new and existing data
Compare the best data lake vendors
Structure raw data and produce usable analytics
Whatever your business, data lakes are going to form ever more prominent parts of the information universe that every business should have access to. Dive into this book to start exploring the deep competitive advantage they make possible, and make sure your business isn't left standing on the shore.

IBM Power Systems Private Cloud with Shared Utility Capacity: Featuring Power Enterprise Pools 2.0

This IBM® Redbooks® publication is a guide to IBM Power Private Cloud with Shared Utility Capacity featuring Power Enterprise Pools 2.0 (also known as PEP 2.0). This technology allows multiple servers in a pool to share base processor and memory resources and to draw upon pre-paid credits when the base is exceeded. Previously, the Shared Utility Capacity feature supported the IBM Power System E950 (9040-MR9) and IBM Power System E980 (9080-M9S). It was extended in August 2020 to include the scale-out Power Systems servers announced on July 14th, 2020, and received dedicated processor support later in the year. The IBM Power System S922 (9009-22G) and IBM Power System S924 (9009-42G) servers, which use the latest IBM POWER9™ processor-based technology and support the IBM AIX®, IBM i, and Linux operating systems, are now supported. The previous scale-out models, the IBM Power System S922 (9009-22A) and IBM Power System S924 (9009-42A) servers, cannot be added to an Enterprise Pool. The goal of this book is to provide an overview of the environment and guidance for planning a deployment. The book also covers how to configure PEP 2.0, and there are chapters on migrating from PEP 1.0 to PEP 2.0 and on various use cases. This publication is for professionals who want to acquire a better understanding of IBM Power Private Cloud with Shared Utility Capacity. The intended audience includes:
Clients
Sales and marketing professionals
Technical support professionals
IBM Business Partners
This book expands the set of Power Systems documentation by providing a desktop reference that offers a detailed technical description of IBM Power Private Cloud with Shared Utility Capacity.

Self-Sovereign Identity

In a world of changing privacy regulations, identity theft, and online anonymity, identity is a precious and complex concept. Self-Sovereign Identity (SSI) is a set of technologies that move control of digital identity from third-party "identity providers" directly to individuals, and it promises to be one of the most important trends for the coming decades. Now, in Self-Sovereign Identity, privacy and personal data experts Drummond Reed and Alex Preukschat lay out a roadmap for a future of personal sovereignty powered by blockchain and cryptography. Cutting through the technical jargon with dozens of practical use cases from experts across all major industries, it presents a clear and compelling argument for why SSI is a paradigm shift, and shows how you can be prepared for it.
About the Technology
Trust on the internet is at an all-time low. Large corporations and institutions control our personal data because we've never had a simple, safe, strong way to prove who we are online. Self-sovereign identity (SSI) changes all that.
About the Book
In Self-Sovereign Identity: Decentralized digital identity and verifiable credentials, you'll learn how SSI empowers us to receive digitally signed credentials, store them in private wallets, and securely prove our online identities. It combines a clear, jargon-free introduction to this blockchain-inspired paradigm shift with interesting essays written by its leading practitioners. Whether for property transfer, ebanking, frictionless travel, or personalized services, the SSI model for digital trust will reshape our collective future.
What's Inside
The architecture of SSI software and services
The technical, legal, and governance concepts behind SSI
How SSI affects global business industry-by-industry
Emerging standards for SSI
About the Reader
For technology and business readers. No prior SSI, cryptography, or blockchain experience required.
About the Authors
Drummond Reed is the Chief Trust Officer at Evernym, a technology leader in SSI. Alex Preukschat is the co-founder of SSIMeetup.org and AlianzaBlockchain.org.
Quotes
This book is a comprehensive roadmap to the most crucial fix for today's broken Internet. - Brian Behlendorf, GM for Blockchain, Healthcare and Identity at the Linux Foundation
If trusted relationships over the Internet are important to you or your business, this book is for you. - John Jordan, Executive Director, Trust over IP Foundation
Decentralized identity represents not only a wide range of trust-enabling technologies, but also a paradigm shift in our increasingly digital-first world. - Rouven Heck, Executive Director, Decentralized Identity Foundation
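
To make the idea of digitally signed credentials concrete, here is a minimal sketch (not the SSI standards stack described in the book: no DIDs, DID resolution, or the full verifiable-credential data model) using the widely available Python cryptography package. An issuer signs a small credential, the holder keeps the credential and signature in a wallet, and a verifier checks the signature with the issuer's public key. The credential fields and the naive canonicalization are illustrative assumptions only.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side: generate a key pair and sign a (simplified) credential.
issuer_key = Ed25519PrivateKey.generate()
credential = {"subject": "did:example:alice", "claim": {"degree": "BSc"}}
payload = json.dumps(credential, sort_keys=True).encode()  # naive canonical form
signature = issuer_key.sign(payload)

# Holder stores (credential, signature) in a wallet and later presents both.
# Verifier side: check the signature against the issuer's public key.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, payload)
    print("credential signature is valid")
except InvalidSignature:
    print("credential has been tampered with")
```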

Implementation Guide for IBM Elastic Storage System 3000

This IBM® Redbooks publication introduces and describes the IBM Elastic Storage® System 3000 (ESS 3000) as a scalable, high-performance data and file management solution. The solution is built on proven IBM Spectrum® Scale technology, formerly IBM General Parallel File System (IBM GPFS). The IBM Elastic Storage System 3000 is an all-flash array platform that uses NVMe-attached drives to provide significant performance improvements compared to SAS-attached flash drives. This book provides a technical overview of the ESS 3000 solution and helps you plan the installation of the environment. We also explain the use cases where we believe it fits best. Our goal is to position this book as the starting-point document for customers who plan to use ESS 3000 as part of their IBM Spectrum Scale setups. This book is targeted toward technical professionals (consultants, technical support staff, IT architects, and IT specialists) who are responsible for delivering cost-effective storage solutions with ESS 3000.

Data Fabric as Modern Data Architecture

Data fabric is a hot concept in data management today. By encompassing the data ecosystem your company already has in place, this architectural design pattern provides your staff with one reliable place to go for data. In this report, author Alice LaPlante shows CIOs, CDOs, and CAOs how a data fabric enables their users to spend more time analyzing data than wrangling it. The best way to thrive during this intense period of digital transformation is through data. But after roaring through 2019, progress on getting the most out of data investments has lost steam: only 38% of companies now say they've created a data-driven organization. This report describes how a data fabric can help you reach the all-important goal of data democratization.
Learn how a data fabric handles data preparation and delivery and serves as a data catalog
Use a data fabric to handle data variety, a top challenge for many organizations
Learn how a data fabric spans any environment to support data for users and use cases from any source
Examine a data fabric's capabilities, including data and metadata management, data quality, integration, analytics, visualization, and governance
Get five pieces of advice for getting started with data fabric

Storage Multi-tenancy for Red Hat OpenShift Container Platform with IBM Storage

With IBM® Spectrum Virtualize and Object-Based Access Control, you can implement multi-tenancy and secure storage usage in a Red Hat OpenShift environment. This IBM Redpaper® publication shows you how to secure storage usage from the OpenShift user down to the IBM Spectrum® Virtualize array. You see how to restrict storage usage in a Red Hat OpenShift Container Platform to avoid the over-consumption of storage by one or more users. These use cases can also be extended to use this control to provide assistance with billing.

IBM Fibre Channel Endpoint Security for IBM DS8900F and IBM Z

This IBM® Redbooks® publication helps you install, configure, and use the new IBM Fibre Channel Endpoint Security function. The focus of this publication is securing the connection between an IBM DS8900F and the IBM z15™. The solution supports two levels of link security: link authentication on Fibre Channel links, and link encryption of data in flight (which also includes link authentication). This solution is targeted at clients who need to adhere to Payment Card Industry (PCI) or other emerging data security standards, and those who are seeking to reduce or eliminate insider threats regarding unauthorized access to data.

97 Things Every Data Engineer Should Know

Take advantage of today's sky-high demand for data engineers. With this in-depth book, current and aspiring engineers will learn powerful real-world best practices for managing data big and small. Contributors from notable companies including Twitter, Google, Stitch Fix, Microsoft, Capital One, and LinkedIn share their experiences and lessons learned for overcoming a variety of specific and often nagging challenges. Edited by Tobias Macey, host of the popular Data Engineering Podcast, this book presents 97 concise and useful tips for cleaning, prepping, wrangling, storing, processing, and ingesting data. Data engineers, data architects, data team managers, data scientists, machine learning engineers, and software engineers will greatly benefit from the wisdom and experience of their peers. Topics include:
The Importance of Data Lineage - Julien Le Dem
Data Security for Data Engineers - Katharine Jarmul
The Two Types of Data Engineering and Data Engineers - Jesse Anderson
Six Dimensions for Picking an Analytical Data Warehouse - Gleb Mezhanskiy
The End of ETL as We Know It - Paul Singman
Building a Career as a Data Engineer - Vijay Kiran
Modern Metadata for the Modern Data Stack - Prukalpa Sankar
Your Data Tests Failed! Now What? - Sam Bail

Machine Learning for Oracle Database Professionals: Deploying Model-Driven Applications and Automation Pipelines

Database developers and administrators will use this book to learn how to deploy machine learning models in Oracle Database and in Oracle's Autonomous Database cloud offering. The book covers the technologies that make up the Oracle Machine Learning (OML) platform, including OML4SQL, OML Notebooks, OML4R, and OML4Py. The book focuses on Oracle Machine Learning as part of the Oracle Autonomous Database collaborative environment. Also covered are advanced topics such as delivery and automation pipelines. Throughout the book you will find practical details and hands-on examples showing you how to implement machine learning and automate its deployment. Discussion around the examples helps you gain a conceptual understanding of machine learning. Important concepts discussed include the methods involved, the algorithms to choose from, and mechanisms for process and deployment. Seasoned database professionals looking to make the leap into machine learning as a growth path will find much to like in this book, as it helps them use their current knowledge of Oracle Database to transition into providing machine learning solutions.
What You Will Learn
Use Oracle Machine Learning (OML) Notebooks for data visualization and machine learning model building and evaluation
Understand Oracle offerings for machine learning
Develop machine learning with Oracle Database using the built-in machine learning packages
Develop and deploy machine learning models using OML4SQL and OML4R
Leverage the Oracle Autonomous Database and its collaborative environment for Oracle Machine Learning
Develop and deploy machine learning projects in Oracle Autonomous Database
Build an automated pipeline that can detect and handle changes in data and model performance
Who This Book Is For
Database developers and administrators who want to learn about machine learning, developers who want to build models and applications using Oracle Database's built-in machine learning feature set, and administrators tasked with supporting applications on Oracle Database that make use of the Oracle Machine Learning feature set
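
The book's examples run in-database through OML4SQL and OML4Py; purely as a library-neutral sketch of the build-evaluate cycle it describes (train a model on part of the data, hold out the rest, measure accuracy), here is a minimal example using scikit-learn with synthetic data. The data and model choice are illustrative assumptions, not the OML API.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a table of customer features and a binary target.
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Build and evaluate a model; with OML this step would run inside the database.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```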

Azure Data Factory by Example: Practical Implementation for Data Engineers

Data engineers who need to hit the ground running will use this book to build skills in Azure Data Factory v2 (ADF). The tutorial-first approach to ADF taken in this book gets you working from the first chapter, explaining key ideas naturally as you encounter them. From creating your first data factory to building complex, metadata-driven nested pipelines, the book guides you through essential concepts in Microsoft's cloud-based ETL/ELT platform. It introduces components indispensable for the movement and transformation of data in the cloud. Then it demonstrates the tools necessary to orchestrate, monitor, and manage those components. The hands-on introduction to ADF found in this book is equally well-suited to data engineers embracing their first ETL/ELT toolset as it is to seasoned veterans of Microsoft's SQL Server Integration Services (SSIS). The example-driven approach leads you through ADF pipeline construction from the ground up, introducing important ideas and making learning natural and engaging. SSIS users will find concepts with familiar parallels, while ADF-first readers will quickly master those concepts through the book's steady building up of knowledge in successive chapters. Summaries of key concepts at the end of each chapter provide a ready reference that you can return to again and again.
What You Will Learn
Create pipelines, activities, datasets, and linked services
Build reusable components using variables, parameters, and expressions
Move data into and around Azure services automatically
Transform data natively using ADF data flows and Power Query data wrangling
Master flow-of-control and triggers for tightly orchestrated pipeline execution
Publish and monitor pipelines easily and with confidence
Who This Book Is For
Data engineers and ETL developers taking their first steps in Azure Data Factory, SQL Server Integration Services users making the transition toward doing ETL in Microsoft's Azure cloud, and SQL Server database administrators involved in data warehousing and ETL operations
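
For readers who have not yet seen an ADF pipeline definition, the JSON a data factory stores for a simple pipeline has roughly the shape sketched below, rendered here as a Python dictionary. The pipeline, activity, and dataset names are hypothetical, and the type properties are trimmed for illustration rather than forming a complete, deployable definition.

```python
import json

# Rough shape of a minimal ADF pipeline: one parameter and one Copy activity.
# Dataset and pipeline names are placeholders.
pipeline = {
    "name": "CopySalesToWarehouse",
    "properties": {
        "parameters": {"SourceFolder": {"type": "String"}},
        "activities": [
            {
                "name": "CopyBlobToSql",
                "type": "Copy",
                "inputs": [
                    {"referenceName": "SalesBlobDataset", "type": "DatasetReference"}
                ],
                "outputs": [
                    {"referenceName": "SalesSqlDataset", "type": "DatasetReference"}
                ],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink": {"type": "AzureSqlSink"},
                },
            }
        ],
    },
}

print(json.dumps(pipeline, indent=2))
```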

IBM Spectrum Scale Immutability Introduction, Configuration Guidance, and Use Cases

This IBM Redpaper™ publication introduces the IBM Spectrum Scale immutability function. It shows how to set it up and presents different ways of managing immutable and append-only files. This publication also provides guidance for implementing IT security aspects in an IBM Spectrum Scale cluster by addressing regulatory requirements. It also describes two typical use cases for managing immutable files. One use case involves applications that manage file immutability; the other presents a solution to automatically set files to immutable within an IBM Spectrum Scale immutable fileset.

IBM Spectrum Archive Enterprise Edition V1.3.1.2: Installation and Configuration Guide

This IBM® Redbooks® publication helps you with the planning, installation, and configuration of the new IBM Spectrum® Archive Enterprise Edition (EE) Version 1.3.1.2 for the IBM TS4500, IBM TS3500, IBM TS4300, and IBM TS3310 tape libraries. IBM Spectrum Archive Enterprise Edition enables the use of the Linear Tape File System (LTFS) for the policy management of tape as a storage tier in an IBM Spectrum Scale based environment. It helps encourage the use of tape as a critical tier in the storage environment. This is the ninth edition of the IBM Spectrum Archive Installation and Configuration Guide. IBM Spectrum Archive EE can run any application that is designed for disk files on physical tape media. IBM Spectrum Archive EE supports the IBM Linear Tape-Open (LTO) Ultrium 8, 7, 6, and 5 tape drives in IBM TS3310, TS3500, TS4300, and TS4500 tape libraries. In addition, IBM TS1160, TS1155, TS1150, and TS1140 tape drives are supported in TS3500 and TS4500 tape library configurations. IBM Spectrum Archive EE can play a major role in reducing the cost of storage for data that does not need the access performance of primary disk. The use of IBM Spectrum Archive EE to replace disks with physical tape in tier 2 and tier 3 storage can improve data access over other storage solutions because it improves efficiency and streamlines management for files on tape. IBM Spectrum Archive EE simplifies the use of tape by making it transparent to the user and manageable by the administrator under a single infrastructure. This publication is intended for anyone who wants to understand more about IBM Spectrum Archive EE planning and implementation. This book is suitable for IBM customers, IBM Business Partners, IBM specialist sales representatives, and technical specialists.

Database-Driven Web Development: Learn to Operate at a Professional Level with PERL and MySQL

Learn to operate at a professional level with HTML, CSS, the DOM, JavaScript, PERL, and the MySQL database. With plain-language explanations and step-by-step examples, you will understand the key facets of web development that today's employers are looking for. Encapsulating knowledge that is usually found in many books rather than one, this is your one-stop tutorial for becoming a web professional. You will learn how to use the PERL scripting language and the MySQL database to create powerful web applications. Each chapter becomes progressively more challenging as you progress through experimentation and ultimately master database-driven web development via the web applications studied in the last chapters. Including practical tips and guidance gleaned from 20+ years of working as a web developer, Thomas Valentine provides you with all the information you need to prosper as a professional database-driven web developer.
What You'll Learn
Leverage standard web technologies to benefit a database-driven approach
Create an effective web development workstation with databases in mind
Use the PERL scripting language and the MySQL database effectively
Maximize the Apache Web Server
Who This Book Is For
The primary audience for this book is those who already know web development basics and web developers who want to master database-driven web development. The skills required to understand the concepts put forth are a working knowledge of PERL and basic MySQL.
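
The book's own code is written in PERL against MySQL; purely as a language-neutral sketch of the database-driven pattern it teaches (query a table, then render the rows as HTML), here is the same idea in Python, using the standard library's sqlite3 module as a stand-in for a MySQL connection. The table, column, and page structure are hypothetical.

```python
import sqlite3

# Stand-in database; with MySQL you would connect to a database server instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO articles (title) VALUES (?)",
    [("First post",), ("Second post",)],
)

# Database-driven page generation: query rows, render them as an HTML list.
rows = conn.execute("SELECT id, title FROM articles ORDER BY id").fetchall()
items = "\n".join(f"  <li>{title}</li>" for _, title in rows)
print(f"<ul>\n{items}\n</ul>")
```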

SAP HANA on IBM Power Systems Backup and Recovery Solutions

This IBM® Redpaper® publication provides guidance about a backup and recovery solution for SAP High-performance Analytic Appliance (HANA) running on IBM Power Systems. This publication provides case studies and how-to procedures that show backup and recovery scenarios, and it describes how to protect data in an SAP HANA environment by using IBM Spectrum® Protect and IBM Spectrum Copy Data Manager. This publication focuses on the data protection solution, which is described through several scenarios. The information in this publication is distributed on an as-is basis without any warranty that is either expressed or implied. Support assistance for the use of this material is limited to situations where IBM Spectrum Scale or IBM Spectrum Protect are supported and entitled, and where the issues are specific to a blueprint implementation. The goal of the publication is to describe the best aspects and options for backup, snapshots, and restore of SAP HANA Multitenant Database Container (MDC) single-tenant and multi-tenant installations on IBM Power Systems by using theoretical knowledge and hands-on exercises, and by documenting the findings through sample scenarios. This document covers the following topics:
Describing how to determine the best option, including SAP Landscape aspects, to back up, snapshot, and restore SAP HANA MDC single-tenant and multi-tenant installations based on IBM Spectrum Computing Suite, Red Hat Linux Relax and Recover (ReaR), and other products.
Documenting key aspects, such as recovery time objective (RTO) and recovery point objective (RPO), backup impact (load, duration, and scheduling), quantitative savings (for example, data deduplication), integration and catalog currency, and tips and tricks that are not covered in the product documentation.
Using IBM Cloud® Object Storage and documenting how to use IBM Spectrum Protect to back up to the cloud. SAP HANA 2.0 SPS 05 has this feature built in natively, and IBM Spectrum Protect for Enterprise Resource Planning (ERP) offers it as well.
Documenting Linux ReaR to cover operating system (OS) backup, because ReaR is used by most backup products, such as IBM Spectrum Protect and Symantec Endpoint Protection (SEP), to back up OSs.
This publication targets technical readers including IT specialists, systems architects, brand specialists, sales teams, and anyone looking for a guide about how to implement the best options for SAP HANA backup and recovery on IBM Power Systems. Moreover, this publication provides documentation to transfer the how-to skills to the technical teams and solution guidance to the sales teams. This publication complements the documentation that is available at IBM Knowledge Center, and it aligns with the educational materials that are provided by IBM Garage™ for Systems Technical Education and Training.

IBM PowerVC Version 2.0 Introduction and Configuration

IBM® Power Virtualization Center (IBM® PowerVC™) is an advanced enterprise virtualization management offering for IBM Power Systems. This IBM Redbooks® publication introduces IBM PowerVC and helps you understand its functions, planning, installation, and setup. It also shows how IBM PowerVC can integrate with systems management tools such as Ansible or Terraform, and how it integrates well into an OpenShift container environment. IBM PowerVC Version 2.0.0 supports both large and small deployments, either by managing IBM PowerVM® that is controlled by the Hardware Management Console (HMC), or by IBM PowerVM NovaLink. With this capability, IBM PowerVC can manage IBM AIX®, IBM i, and Linux workloads that run on IBM POWER® hardware. IBM PowerVC is available as a Standard Edition or as a Private Cloud Edition. IBM PowerVC includes the following features and benefits:
Virtual image capture, import, export, deployment, and management
Policy-based virtual machine (VM) placement to improve server usage
Snapshots and cloning of VMs or volumes for backup or testing purposes
Support for advanced storage capabilities, such as IBM SVC vdisk mirroring or IBM Global Mirror
Management of real-time optimization and VM resilience to increase productivity
VM mobility with placement policies to reduce the burden on IT staff, in a simple-to-install and easy-to-use graphical user interface (GUI)
Automated Simplified Remote Restart for improved availability of VMs when a host is down
Role-based security policies to ensure a secure environment for common tasks
The ability for an administrator to enable Dynamic Resource Optimization on a schedule
IBM PowerVC Private Cloud Edition includes all of the IBM PowerVC Standard Edition features, plus the following enhancements:
A self-service portal that allows the provisioning of new VMs without direct system administrator intervention, with an option for policy approvals for the requests that are received from the self-service portal
Pre-built deploy templates that are set up by the cloud administrator to simplify the deployment of VMs by the cloud user
Cloud management policies that simplify management of cloud deployments
Metering data that can be used for chargeback
This publication is for experienced users of IBM PowerVM and other virtualization solutions who want to understand and implement the next generation of enterprise virtualization management for Power Systems. Unless stated otherwise, the content of this publication refers to IBM PowerVC Version 2.0.0.