talk-data.com

Topic

Cloud Computing

infrastructure saas iaas

499 tagged

Activity Trend

471 peak/qtr, 2020-Q1 to 2026-Q1

Activities

Showing filtered results

Filtering by: O'Reilly Data Engineering Books

SAP in 24 Hours, Sams Teach Yourself, Fifth Edition

Thoroughly updated and expanded! Includes new coverage on HANA, the cloud, and using SAP’s applications! In just 24 sessions of one hour or less, you’ll get up and running with the latest SAP technologies, applications, and solutions. Using a straightforward, step-by-step approach, each lesson strengthens your understanding of SAP from both a business and technical perspective, helping you gain practical mastery from the ground up on topics such as security, governance, validations, release management, SLAs, and legal issues. Step-by-step instructions carefully walk you through the most common questions, issues, and tasks. Quizzes and exercises help you build and test your knowledge. Notes present interesting pieces of information. Tips offer advice or teach an easier way to do something. Cautions advise you about potential problems and help you steer clear of disaster. Learn how to… • Understand SAP terminology, concepts, and solutions • Install SAP on premises or in the cloud • Master SAP’s revamped user interface • Discover how and when to use in-memory HANA databases • Integrate SAP Software as a Service (SaaS) solutions such as Ariba, SuccessFactors, Fieldglass, and hybris • Find resources at SAP’s Service Marketplace, Developer Network, and Help Portal • Avoid pitfalls in SAP project implementation, migration, and upgrades • Discover how SAP fits with mobile devices, social media, big data, and the Internet of Things • Start or accelerate your career working with SAP technologies

IBM Software Defined Environment

This IBM® Redbooks® publication introduces the IBM Software Defined Environment (SDE) solution, which helps to optimize the entire computing infrastructure--compute, storage, and network resources--so that it can adapt to the type of work required. In today's environment, resources are assigned manually to workloads, but that happens automatically in an SDE. In an SDE, workloads are dynamically assigned to IT resources based on application characteristics, best-available resources, and service level policies, delivering continuous, dynamic optimization and reconfiguration to address infrastructure issues. Underlying all of this are policy-based compliance checks and updates in a centrally managed environment. Readers get a broad introduction to the new architecture and to integration, automation, and optimization, the enablers of cloud delivery and analytics. SDE can accelerate business success by matching workloads and resources so that you have a responsive, adaptive environment. With the IBM Software Defined Environment, infrastructure is fully programmable to rapidly deploy workloads on optimal resources and to instantly respond to changing business demands. This information is intended for IBM sales representatives, IBM software architects, IBM Systems Technology Group brand specialists, distributors, resellers, and anyone who is developing or implementing SDE.

Pro Couchbase Development: A NoSQL Platform for the Enterprise

Pro Couchbase Development: A NoSQL Platform for the Enterprise discusses programming for Couchbase using Java and scripting languages, querying and searching, handling migration, and integrating Couchbase with Hadoop, HDFS, and JSON. It also discusses migration from other NoSQL databases like MongoDB. This book is for big data developers who use the Couchbase NoSQL database or want to use Couchbase for their web applications, as well as for those migrating from other NoSQL databases like MongoDB and Cassandra. For example, one reason to migrate from Cassandra is that, unlike Couchbase, it is not based on the JSON document model, which supports a flexible schema without requiring you to define columns and supercolumns. The target audience is largely Java developers, but the book also supports PHP and Ruby developers who want to learn about Couchbase. The author supplies examples in Java, PHP, Ruby, and JavaScript. After reading and using this hands-on guide for developing with Couchbase, you'll be able to build complex enterprise, database, and cloud applications that leverage this powerful platform.

Virtualizing Hadoop: How to Install, Deploy, and Optimize Hadoop in a Virtualized Architecture

Plan and Implement Hadoop Virtualization for Maximum Performance, Scalability, and Business Agility Enterprises running Hadoop must absorb rapid changes in big data ecosystems, frameworks, products, and workloads. Virtualized approaches can offer important advantages in speed, flexibility, and elasticity. Now, a world-class team of enterprise virtualization and big data experts guide you through the choices, considerations, and tradeoffs surrounding Hadoop virtualization. The authors help you decide whether to virtualize Hadoop, deploy Hadoop in the cloud, or integrate conventional and virtualized approaches in a blended solution. First, Virtualizing Hadoop reviews big data and Hadoop from the standpoint of the virtualization specialist. The authors demystify MapReduce, YARN, and HDFS and guide you through each stage of Hadoop data management. Next, they turn the tables, introducing big data experts to modern virtualization concepts and best practices. Finally, they bring Hadoop and virtualization together, guiding you through the decisions you’ll face in planning, deploying, provisioning, and managing virtualized Hadoop. From security to multitenancy to day-to-day management, you’ll find reliable answers for choosing your best Hadoop strategy and executing it. 
Coverage includes the following: • Reviewing the frameworks, products, distributions, use cases, and roles associated with Hadoop • Understanding YARN resource management, HDFS storage, and I/O • Designing data ingestion, movement, and organization for modern enterprise data platforms • Defining SQL engine strategies to meet strict SLAs • Considering security, data isolation, and scheduling for multitenant environments • Deploying Hadoop as a service in the cloud • Reviewing the essential concepts, capabilities, and terminology of virtualization • Applying current best practices, guidelines, and key metrics for Hadoop virtualization • Managing multiple Hadoop frameworks and products as one unified system • Virtualizing master and worker nodes to maximize availability and performance • Installing and configuring Linux for a Hadoop environment

Implementing IBM FlashSystem 840

Almost all technological components in the data center are getting faster: central processing units, networks, storage area networks (SANs), and memory. All of them have improved their speed by a minimum of 10X; some of them by 100X, for example, data networks. However, spinning disk performance has only increased by 1.2 times. IBM® FlashSystem™ 840 version 1.3 closes this gap. The FlashSystem 840 is optimized for the data center to enable organizations of all sizes to strategically harness the value of stored data. It provides flexible capacity and extreme performance for the most demanding applications, including virtualized or bare-metal online transaction processing (OLTP) and online analytical processing (OLAP) databases, virtual desktop infrastructures (VDI), technical computing applications, and cloud environments. The system accelerates response times with IBM MicroLatency® access times as low as 90 µs write latency and 135 µs read latency to enable faster decision making. The introduction of a low-capacity 1 TB flash module allows the FlashSystem 840 to be configured in capacity points as low as 2 TB in protected RAID 5 mode. Coupled with 10 Gb iSCSI, the FlashSystem is positioned to bring extreme performance to small and medium-sized businesses (SMB) and growth markets. Implementing the IBM FlashSystem® 840 provides value that goes beyond those benefits that are seen on disk-based arrays. These benefits include better user experience, server and application consolidation, development cycle reduction, application scalability, data center footprint savings, and improved price performance economics. This IBM Redbooks® publication discusses IBM FlashSystem 840 version 1.3. It provides in-depth knowledge of the product architecture, software and hardware, its implementation, and hints and tips. 
Also illustrated are use cases that show real-world solutions for tiering, flash-only, and preferred read, as well as examples of the benefits gained by integrating the FlashSystem storage into business environments. Also described are product integration scenarios running the IBM FlashSystem 840 with the IBM SAN Volume Controller and the IBM Storwize® family of products, such as the V7000, V5000, and V3700, as well as considerations when integrating the IBM FlashSystem 840. Preferred practice guidance is provided for your FlashSystem environment with IBM 16 Gbps b-type products and features, focusing on Fibre Channel design. This book is intended for pre-sales and post-sales technical support professionals and storage administrators, and for anyone who wants to understand and learn how to implement this exciting technology.

Oracle Database 12c DBA Handbook

The definitive reference for every Oracle DBA—completely updated for Oracle Database 12c. Oracle Database 12c DBA Handbook is the quintessential tool for the DBA with an emphasis on the big picture—enabling administrators to achieve effective and efficient database management. Fully revised to cover every new feature and utility, this Oracle Press guide shows how to harness cloud capability, perform a new installation, upgrade from previous versions, configure hardware and software, handle backup and recovery, and provide failover capability. The newly revised material features high-level and practical content on cloud integration, storage management, performance tuning, information management, and the latest on a completely revised security program. • Shows how to administer a scalable, flexible Oracle enterprise database • Includes new chapters on cloud integration, new security capabilities, and other cutting-edge features • All code and examples available online

Oracle Exadata Expert’s Handbook

The Practical, Authoritative, 360-Degree Technical Guide to Oracle Exadata: From Setup to Administration, Optimization, Tuning, and Troubleshooting. The blazingly fast Oracle Exadata Database Machine is being embraced by thousands of large-scale users worldwide: by governments, the military, enterprise organizations, cloud service providers, and anyone who needs extreme performance. Now, Oracle Exadata Expert’s Handbook provides authoritative guidance for running Oracle Exadata with maximum reliability, effectiveness, performance, and efficiency. Six renowned Oracle technology experts have brought together core technical information, experience, best practices, and insider tips in a concise reference. Covering both the 11g and 12c versions of the Oracle Exadata software, they deliver hands-on coverage of best practices, setup, migration, monitoring, administration, performance tuning, and troubleshooting. Whether you’re an Oracle Exadata DBA, DMA, architect, or manager, you need these insights. • Get a 360-degree overview of the Oracle Exadata Database Machine • Efficiently deploy RAC within the Oracle Exadata ecosystem • Fully leverage Storage Cell’s extraordinary performance via Offloading, Smart Scans, and Hybrid Columnar Compression • Manage Exadata with OEM 12c: perform setup, configuration, asset/target discovery, and day-to-day administration • Tune Oracle Exadata for even better performance • Perform Exadata Backup/Recovery/DR with RMAN and Data Guard • Migrate to Oracle Exadata from other platforms • Use Oracle Exadata with the ZFS Storage Appliance • Consolidate within the Exadata Database Cloud

IBM Software Defined Infrastructure for Big Data Analytics Workloads

This IBM® Redbooks® publication documents how IBM Platform Computing, with its IBM Platform Symphony® MapReduce framework, IBM Spectrum Scale (based upon IBM GPFS™), IBM Platform LSF®, and the Advanced Service Controller for Platform Symphony, works as an infrastructure to manage not just Hadoop-related offerings, but many popular industry offerings, such as Apache Spark, Storm, MongoDB, Cassandra, and so on. It describes the different ways to run Hadoop in a big data environment, and demonstrates how IBM Platform Computing solutions, such as Platform Symphony and Platform LSF with its MapReduce Accelerator, can bring performance and agility to running Hadoop on distributed workload managers offered by IBM. This information is for technical professionals (consultants, technical support staff, IT architects, and IT specialists) who are responsible for delivering cost-effective cloud services and big data solutions on IBM Power Systems™ to help uncover insights in clients’ data so they can optimize product development and business results.

IBM System Storage Solutions Handbook

The IBM® System Storage® Solutions Handbook helps you solve your current and future data storage business requirements, achieving enhanced storage efficiency by design to allow managed cost, capacity growth, greater mobility, and stronger control over storage performance and management. It describes the current IBM storage products, including IBM FlashSystem™, disk, and tape, and virtualized solutions, such as IBM Storage Cloud, IBM SmartCloud® Virtual Storage Center, and IBM Spectrum™ Storage. This IBM Redbooks® publication provides overviews and pointers for information about the current IBM System Storage products, showing how IBM delivers the right mix of products for nearly every aspect of business continuance and business efficiency. IBM storage products can help you store, safeguard, retrieve, and share your data. The following topics are covered: Part 1 introduces IBM storage solutions. It provides overviews of the IBM storage solutions, including IBM Spectrum Storage™, IBM Storage Cloud, IBM SmartCloud Virtual Storage Center (VSC), and the IBM PureSystems® products. Part 2 describes the IBM disk and flash products that include IBM DS Series (entry-level, midrange, and enterprise offerings), IBM XIV® storage, IBM Storwize® products, and the IBM FlashSystem offerings. Part 3 is an overview of the IBM tape drives, IBM tape automation products, and IBM tape virtualization solutions and products. Part 4 describes storage networking infrastructure, switches and directors to form storage area network (SAN) solutions, and converged networks and data center networking. Part 5 describes the IBM storage software portfolio, including IBM SAN Volume Controller, IBM Tivoli® Storage Manager, Tivoli Storage Productivity Center, and IBM Security Key Lifecycle Manager. Part 6 describes the IBM z/OS® storage management software and tools. The appendixes provide information about the High Performance Storage System (HPSS) and recently withdrawn IBM storage products. 
This book is intended as a reference for basic and comprehensive information about the IBM Storage products portfolio. It provides a starting point for establishing your own enterprise storage environment.

IBM Spectrum Accelerate: Deployment, Usage, and Maintenance

IBM® Spectrum™ Accelerate, a member of the IBM Spectrum Storage™ family, is an agile software-defined storage solution for enterprise and cloud that builds on the customer-proven and mature IBM XIV® storage software. The key characteristic of Spectrum Accelerate is that it can be easily deployed and run on purpose-built or existing hardware chosen by the customer. IBM Spectrum Accelerate enables rapid deployment of high-performance and scalable block data storage infrastructure over commodity hardware, either on-premises or off-premises. This IBM Redbooks® publication provides a broad understanding of IBM Spectrum Accelerate. The book introduces Spectrum Accelerate and discusses the planning and preparation that are essential for a successful deployment of the solution. The deployment itself is explained through a step-by-step approach, using either a graphical user interface (GUI) based method or a simple command-line interface (CLI) based procedure. Subsequent chapters explain the logical configuration of the system, host support and business continuity functions, and migration. Although it makes many references to the XIV storage software, the book also emphasizes where IBM Spectrum Accelerate differs from XIV. Finally, a substantial portion of the book is dedicated to maintenance and troubleshooting, to provide detailed guidance for customer support personnel.

Oracle Database Upgrade, Migration & Transformation Tips & Techniques

A practical roadmap for database upgrade, migration, and transformation This Oracle Press guide provides best practices for migrating between different operating systems and platforms, transforming existing databases to use different storage or enterprise systems, and upgrading databases from one release to the next. Based on the expert authors’ real-world experience, Oracle Database Upgrade, Migration & Transformation Tips & Techniques will help you choose the best migration path for your project and develop an effective methodology. Code examples and detailed checklists are included in this comprehensive resource. Leverage the features of Oracle Data Guard to migrate an Oracle Database Use Oracle Recovery Manager, transportable tablespace sets, and transportable database toolsets to migrate between platforms Migrate databases with export/import Use Oracle GoldenGate for zero or near-zero downtime migrations Take advantage of the Cross-Platform Transportable Tablespace Set utility Migrate to new storage platforms using the features of Oracle Automatic Storage Management Upgrade to Oracle Database 12c with the Database Upgrade Assistant tool Move seamlessly to Oracle's engineered systems Migrate to the cloud

Implementing IBM FlashSystem 900

Today's global organizations depend on being able to unlock business insights from massive volumes of data. Now, with IBM® FlashSystem™ 900, powered by IBM FlashCore™ technology, they can make faster decisions based on real-time insights and unleash the power of the most demanding applications, including online transaction processing (OLTP) and analytics databases, virtual desktop infrastructures (VDIs), technical computing applications, and cloud environments. This IBM Redbooks® publication introduces clients to the IBM FlashSystem® 900. It provides in-depth knowledge of the product architecture, software and hardware, implementation, and hints and tips. Also illustrated are use cases that show real-world solutions for tiering, flash-only, and preferred-read, and also examples of the benefits gained by integrating the FlashSystem storage into business environments. This book is intended for pre-sales and post-sales technical support professionals and storage administrators, and for anyone who wants to understand how to implement this new and exciting technology. This book describes the following offerings of the IBM Spectrum™ Storage family: IBM Spectrum Storage™ IBM Spectrum Control IBM Spectrum Virtualize IBM Spectrum Scale IBM Spectrum Accelerate

IBM Spectrum Scale (formerly GPFS)

This IBM® Redbooks® publication updates and complements the previous publication, Implementing the IBM General Parallel File System in a Cross Platform Environment, SG24-7844, which was released for IBM General Parallel File System (GPFS™). Since then, two releases have been made available, up to the latest version, IBM Spectrum™ Scale 4.1. Topics such as what is new in Spectrum Scale, Spectrum Scale licensing updates (Express/Standard/Advanced), Spectrum Scale infrastructure support and updates, storage support (IBM and OEM), operating system and platform support, Spectrum Scale global sharing with Active File Management (AFM), and considerations for the integration of Spectrum Scale in IBM Tivoli® Storage Manager (Spectrum Protect) backup solutions are discussed in this new IBM Redbooks publication. This publication also covers planning, usability, best practices, monitoring, problem determination, and so on. The main goal of this publication is to bring you up to date with the latest features and capabilities of IBM Spectrum Scale, as the solution has become a key component of the reference architecture for clouds, analytics, mobile, social media, and much more. This publication targets technical professionals (consultants, technical support staff, IT architects, and IT specialists) responsible for delivering cost-effective cloud services and big data solutions on IBM Power Systems™, helping to uncover insights in clients' data so they can take actions to optimize business results, product development, and scientific discoveries.

IBM z13 Technical Guide

Digital business has been driving the transformation of underlying IT infrastructure to be more efficient, secure, adaptive, and integrated. Information Technology (IT) must be able to handle the explosive growth of mobile clients and employees. IT also must be able to use enormous amounts of data to provide deep and real-time insights to help achieve the greatest business impact. This IBM® Redbooks® publication addresses the new IBM Mainframe, the IBM z13. The IBM z13 is the trusted enterprise platform for integrating data, transactions, and insight. A data-centric infrastructure must always be available with 99.999% or better availability, have flawless data integrity, and be secured from misuse. It needs to be an integrated infrastructure that can support new applications. It needs to have integrated capabilities that can provide new mobile capabilities with real-time analytics delivered by a secure cloud infrastructure. IBM z13 is designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows the z13 to deliver a record level of capacity over the prior z Systems. In its maximum configuration, z13 is powered by up to 141 client-characterizable microprocessors (cores) running at 5 GHz. This configuration can deliver more than 110,000 million instructions per second (MIPS) and supports up to 10 TB of client memory. The IBM z13 Model NE1 is estimated to provide up to 40% more total system capacity than the IBM zEnterprise® EC12 (zEC12) Model HA1. This book provides information about the IBM z13 and its functions, features, and associated software support. Greater detail is offered in areas relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM z Systems functions and plan for their usage. It is not intended as an introduction to mainframes. 
Readers are expected to be generally familiar with existing IBM z Systems technology and terminology.

CMDB Systems

CMDB Systems: Making Change Work in the Age of Cloud and Agile shows you how an integrated database across all areas of an organization’s information system can help make organizations more efficient, reduce challenges during change management, and reduce total cost of ownership (TCO). In addition, this valuable reference provides guidelines that will enable you to avoid the pitfalls that cause CMDB projects to fail, and to shorten the time required to achieve an implementation of a CMDB. Drawing upon extensive experience and using illustrative real-world examples, Rick Sturm, Dennis Drogseth, and Dan Twing discuss: • Unique insights from extensive industry exposure, research, and consulting on the evolution of CMDB/CMS technology, and ongoing dialog with the vendor community about current and future CMDB/CMS design and plans • Proven and structured best practices for CMDB deployments • Clear and documented insights into the impacts of cloud computing and other advances on CMDB/CMS futures The book also: • Shows you the steps needed to successfully plan, design, and implement a CMDB • Covers related use cases from real-world CMDB deployments in the retail, manufacturing, and financial verticals • Discusses how CMDB adoption can lower total cost of ownership, increase efficiency, and optimize the IT enterprise

IBM z13 Technical Introduction

This IBM® Redbooks® publication introduces the IBM z13™. IBM z13 delivers a data and transaction system reinvented as a system of insight for digital business. IBM z Systems™ leadership is extended with these features: Improved ability to meet service level agreements with new processor chip technology that includes simultaneous multithreading, analytical vector processing, redesigned and larger cache, and enhanced accelerators for hardware compression and cryptography Better availability and more efficient use of critical data with up to 10 TB available redundant array of independent memory (RAIM) Validation of transactions, management, and assignment of business priority for SAN devices through updates to the I/O subsystem Continued management of heterogeneous workloads with IBM z BladeCenter Extension (zBX) Model 004 and IBM z Unified Resource Manager This Redbooks publication can help you become familiar with the z Systems platform, and understand how the platform can help integrate data, transactions, and insight for faster and more accurate business decisions. This book explains how, with innovations and traditional strengths, IBM z13 can play an essential role in today's IT environments, and satisfy the demands for cloud deployments, analytics, mobile, and social applications in a trustful, reliable, and secure environment with operations that lessen business risk.

Hadoop Virtualization

Hadoop was built to use local data storage on a dedicated group of commodity hardware, but many organizations are choosing to save money (and operational headaches) by running Hadoop in the cloud. This O'Reilly report focuses on the benefits of deploying Hadoop to a private cloud environment, and provides an overview of best practices to maximize performance. Private clouds provide lower capital expenses than on-site clusters and offer lower operating expenses than public cloud deployment. Author Courtney Webster shows you what's involved in Hadoop virtualization, and how you can efficiently plan a private cloud deployment. Topics include: How Hadoop virtualization offers scalable capability for future growth and minimal downtime Why a private cloud offers unique benefits with comparable (and even improved) performance How you can literally set up Hadoop in a private cloud in minutes How aggregation can be used on top of (or instead of) virtualization Which resources and practices are best for a private cloud deployment How cloud-based management tools lower the complexity of initial configuration and maintenance

Field Guide to Hadoop

If your organization is about to enter the world of big data, you not only need to decide whether Apache Hadoop is the right platform to use, but also which of its many components are best suited to your task. This field guide makes the exercise manageable by breaking down the Hadoop ecosystem into short, digestible sections. You’ll quickly understand how Hadoop’s projects, subprojects, and related technologies work together. Each chapter introduces a different topic—such as core technologies or data transfer—and explains why certain components may or may not be useful for particular needs. When it comes to data, Hadoop is a whole new ballgame, but with this handy reference, you’ll have a good grasp of the playing field. Topics include: • Core technologies—Hadoop Distributed File System (HDFS), MapReduce, YARN, and Spark • Database and data management—Cassandra, HBase, MongoDB, and Hive • Serialization—Avro, JSON, and Parquet • Management and monitoring—Puppet, Chef, ZooKeeper, and Oozie • Analytic helpers—Pig, Mahout, and MLlib • Data transfer—Sqoop, Flume, distcp, and Storm • Security, access control, auditing—Sentry, Kerberos, and Knox • Cloud computing and virtualization—Serengeti, Docker, and Whirr

Hadoop MapReduce v2 Cookbook - Second Edition

Explore insights from vast datasets with "Hadoop MapReduce v2 Cookbook - Second Edition." This book serves as a practical guide for developers and system administrators who aim to master big data processing using Hadoop v2. By engaging with its step-by-step recipes, you will learn to harness the Hadoop MapReduce ecosystem for scalable and efficient data solutions. What this Book will help me do Master the configuration and management of Hadoop YARN, MapReduce v2, and HDFS clusters. Integrate big data tools such as Hive, HBase, Pig, Mahout, and Nutch with Hadoop v2. Develop analytics solutions for large-scale datasets using MapReduce-based applications. Address specific challenges like data classification, recommendations, and text analytics leveraging Hadoop MapReduce. Deploy and manage big data clusters effectively, including options for cloud environments. Author(s) The authors behind "Hadoop MapReduce v2 Cookbook - Second Edition" combine their deep expertise in big data technology and years of experience working directly with Hadoop. They have helped numerous organizations implement scalable data processing solutions and are passionate about teaching others. Their approach ensures readers gain both foundational knowledge and practical skills. Who is it for? This book is perfect for developers and system administrators who want to learn Hadoop MapReduce v2, including configuring and managing big data clusters. Beginners with basic Java knowledge can follow along to advance their skills in big data processing. Ideal for those transitioning to Hadoop v2 or requiring practical recipes for immediate application. Great for professionals aiming to deepen their expertise in scalable data technologies.

Oracle RMAN Database Duplication

RMAN is Oracle’s flagship backup and recovery tool, but did you know it’s also an effective database duplication tool? Oracle RMAN Database Duplication is a deep dive into RMAN’s duplication feature set, showing how RMAN can make it so much easier for you as a database administrator to satisfy the many requests from developers and testers for database copies and refreshes for use in their work. You’ll learn to make and refresh duplicate databases with a single command, and of course you can automate and schedule that command so that developers and testers are supplied with regular, known good databases without any manual intervention on your part. Fast and easy provisioning of databases for developers and testers is a driving force in the move to cloud computing and virtualization. RMAN’s robust database duplication feature set plays right into this growing need for ease of provisioning, enabling easy duplication of known-good databases on demand, across operating systems such as between Linux and Solaris, and even across storage environments such as when duplicating from a RAC/ASM environment to a single-node instance using regular file system storage. Oracle RMAN Database Duplication is your thorough guide to providing amazing business value to your organization by way of fast and easy provisioning of database duplicates in service of development and testing projects.
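The scheduled, single-command refresh workflow described above can be sketched as a small wrapper script. This is only an illustration of the idea, not the book's own code: the database names, connect strings, and file path below are hypothetical placeholders.

```shell
#!/bin/sh
# Illustrative sketch: generate an RMAN command file that refreshes a
# test database by duplicating production over the network ("active"
# duplication, available since Oracle 11g). All names are placeholders.
cat > /tmp/duplicate.rman <<'EOF'
DUPLICATE TARGET DATABASE TO testdb
  FROM ACTIVE DATABASE
  NOFILENAMECHECK;
EOF

# In a real environment, a cron entry would then invoke something like:
#   rman TARGET sys@prod AUXILIARY sys@testdb CMDFILE /tmp/duplicate.rman
cat /tmp/duplicate.rman
```

Scheduling this script (for example, nightly via cron) is what turns duplication into the hands-off provisioning for developers and testers that the book describes.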