talk-data.com

Topic

Cloud Computing

infrastructure saas iaas

4055

tagged

Activity Trend

471 peak/qtr
2020-Q1 2026-Q1

Activities

4055 activities · Newest first

IBM DS8880 Architecture and Implementation (Release 8.3)

Abstract This IBM® Redbooks® publication describes the concepts, architecture, and implementation of the IBM DS8880 family. The book provides reference information to assist readers who need to plan for, install, and configure the DS8880 systems. The IBM DS8000® family is a high-performance, high-capacity, highly secure, and resilient series of disk storage systems. The DS8880 family is the latest and most advanced of the DS8000 offerings to date. The high availability, multiplatform support, including IBM Z, and simplified management tools help provide a cost-effective path to on-demand and cloud-based infrastructures. The IBM DS8880 family now offers business-critical, all-flash, and hybrid data systems that span a wide range of price points: the DS8884 (Business Class), DS8886 (Enterprise Class), and DS8888 (Analytics Class). The DS8884 and DS8886 are available as hybrid models or can be configured as all-flash. Each model represents the most recent in this series of high-performance, high-capacity, flexible, and resilient storage systems. These systems are intended to address the needs of the most demanding clients. Two powerful IBM POWER8® processor-based servers manage the cache to streamline disk I/O, maximizing performance and throughput. These capabilities are further enhanced with the availability of the second generation of high-performance flash enclosures (HPFEs Gen-2) and newer flash drives. Like its predecessors, the DS8880 supports advanced disaster recovery (DR) solutions, business continuity solutions, and thin provisioning. All disk drives in the DS8880 storage system include the Full Disk Encryption (FDE) feature. The DS8880 can automatically optimize the use of each storage tier, particularly flash drives, by using the IBM Easy Tier® feature.

Ceph Cookbook - Second Edition

Dive into Ceph Cookbook, the ultimate guide for implementing and managing Ceph storage systems with practical solutions. With this book, you will learn to install, configure, and optimize Ceph storage clusters while mastering integration aspects such as cloud solutions. Discover troubleshooting techniques and best practices for efficient storage operations. What this Book will help me do Understand and deploy Ceph storage systems effectively. Perform performance tuning and cluster benchmarking for Ceph. Integrate Ceph storage with cloud platforms and applications seamlessly. Operate and troubleshoot Ceph clusters in production environments. Adopt advanced techniques such as erasure-coding and RBD mirroring in Ceph. Author(s) This book is authored by experts Karan Singh and team, who bring years of professional experience in the domain of storage systems design and implementation. Their deep understanding of Ceph's deployment across various applications ensures a hands-on approach to the subject. The authors' intention is to equip readers with practical and actionable knowledge. Who is it for? This resource caters to storage architects, cloud engineers, and system administrators looking to enhance their expertise in scalable storage solutions. Ideal for readers who are familiar with Linux and basic storage concepts but want to specialize in the Ceph ecosystem. Readers aiming to deploy cost-efficient and reliable software-defined storage solutions will find it invaluable.
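The erasure coding mentioned among the advanced techniques can be pictured with a toy XOR parity scheme: split an object into data chunks plus parity so the object survives a lost chunk. The sketch below is our own illustration of the general idea, not Ceph's actual erasure-code plugins, which implement more general codes:

```python
# Toy 2+1 erasure-coding sketch (illustrative only, not Ceph's plugins):
# two data chunks plus one XOR parity chunk; any single lost chunk
# can be rebuilt from the other two.
def make_parity(a: bytes, b: bytes) -> bytes:
    """XOR of two equal-length chunks; doubles as the recovery operation."""
    return bytes(x ^ y for x, y in zip(a, b))

obj = b"ceph-object!"
a, b = obj[:6], obj[6:]          # k = 2 data chunks
p = make_parity(a, b)            # m = 1 parity chunk

# Lose chunk `a`; rebuild it from the parity and the surviving chunk.
recovered_a = make_parity(p, b)
assert recovered_a + b == obj
```

Production erasure codes (Reed-Solomon and friends) generalize this to arbitrary k data and m parity chunks, trading extra CPU for much lower storage overhead than triple replication.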

Big Data Analytics with SAS

Discover how to leverage the power of SAS for big data analytics in 'Big Data Analytics with SAS.' This book helps you unlock key techniques for preparing, analyzing, and reporting on big data effectively using SAS. Whether you're exploring integration with Hadoop and Python or mastering SAS Studio, you'll advance your analytics capabilities. What this Book will help me do Set up a SAS environment for performing hands-on data analytics tasks efficiently. Master the fundamentals of SAS programming for data manipulation and analysis. Use SAS Studio and Jupyter Notebook to interface with SAS efficiently and effectively. Perform preparatory data workflows and advanced analytics, including predictive modeling and reporting. Integrate SAS with platforms like Hadoop, SAP HANA, and Cloud Foundry for scaling analytics processes. Author(s) Pope is a seasoned data analytics expert with extensive experience in SAS and big data platforms. With a passion for demystifying complex data workflows, they teach SAS techniques in an approachable way. Their expert insights and practical examples empower readers to confidently analyze and report on data. Who is it for? If you're a SAS professional or a data analyst looking to expand your skills in big data analysis, this book is for you. It suits readers aiming to integrate SAS into diverse tech ecosystems or seeking to learn predictive modeling and reporting with SAS. Both beginners and those familiar with SAS can benefit.

Mastering MongoDB 3.x

"Mastering MongoDB 3.x" is your comprehensive guide to mastering the world of MongoDB, the leading NoSQL database. This book equips you with both foundational and advanced skills to effectively design, develop, and manage MongoDB-powered applications. Discover how to build fault-tolerant systems and dive deep into database internals, deployment strategies, and much more. What this Book will help me do Gain expertise in advanced querying using indexing and data expressions for efficient data retrieval. Master MongoDB administration for both on-premise and cloud-based environments efficiently. Learn data sharding and replication techniques to ensure scalability and fault tolerance. Understand the intricacies of MongoDB internals, including performance optimization techniques. Leverage MongoDB for big data processing by integrating with complex data pipelines. Author(s) Alex Giamas is a seasoned database developer and administrator with strong expertise in NoSQL technologies, particularly MongoDB. With years of experience guiding teams on creating and optimizing database structures, Alex ensures clear and practical methods for learning the essential aspects of MongoDB. His writing focuses on actionable knowledge and practical solutions for modern database challenges. Who is it for? This book is perfect for database developers, system architects, and administrators who are already familiar with database concepts and are looking to deepen their knowledge in NoSQL databases, specifically MongoDB. Whether you're working on building web applications, scaling data systems, or ensuring fault tolerance, this book provides the guidance to optimize your database management skill set.

Summary

Buzzfeed needs to understand how its users interact with the myriad articles, videos, and other content it posts so that it can continue producing content that is well received. Surfacing the insights needed to grow the business requires a robust data infrastructure that reliably captures all of those interactions. Walter Menendez is a data engineer on Buzzfeed's infrastructure team, and in this episode he describes how the team manages data ingestion from a wide array of sources and creates an interface for their data scientists to produce valuable conclusions.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show. Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch. You can help support the show by checking out the Patreon page, which is linked from the site. To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers. Your host is Tobias Macey, and today I’m interviewing Walter Menendez about the data engineering platform at Buzzfeed.

Interview

Introduction
How did you get involved in the area of data management?
How is the data engineering team at Buzzfeed structured and what kinds of projects are you responsible for?
What are some of the types of data inputs and outputs that you work with at Buzzfeed?
Is the core of your system using a real-time streaming approach or is it primarily batch-oriented, and what are the business needs that drive that decision?
What does the architecture of your data platform look like and what are some of the most significant areas of technical debt?
Which platforms and languages are most widely leveraged in your team and what are some of the outliers?
What are some of the most significant challenges that you face, both technically and organizationally?
What are some of the dead ends that you have run into or failed projects that you have tried?
What has been the most successful project that you have completed and how do you measure that success?

Contact Info

@hackwalter on Twitter
walterm on GitHub

Links

Data Literacy
MIT Media Lab
Tumblr
Data Capital
Data Infrastructure
Google Analytics
Datadog
Python
Numpy
SciPy
NLTK
Go Language
NSQ
Tornado
PySpark
AWS EMR
Redshift
Tracking Pixel
Google Cloud
Don’t try to be google
Stop Hiring DevOps Engineers and Start Growing Them

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA. Support Data Engineering Podcast

Python for R Users

The definitive guide for statisticians and data scientists who understand the advantages of becoming proficient in both R and Python The first book of its kind, Python for R Users: A Data Science Approach makes it easy for R programmers to code in Python and Python users to program in R. Short on theory and long on actionable analytics, it provides readers with a detailed comparative introduction and overview of both languages and features concise tutorials with command-by-command translations—complete with sample code—of R to Python and Python to R. Following an introduction to both languages, the author cuts to the chase with step-by-step coverage of the full range of pertinent programming features and functions, including data input, data inspection/data quality, data analysis, and data visualization. Statistical modeling, machine learning, and data mining—including supervised and unsupervised data mining methods—are treated in detail, as are time series forecasting, text mining, and natural language processing. • Features a quick-learning format with concise tutorials and actionable analytics • Provides command-by-command translations of R to Python and vice versa • Incorporates Python and R code throughout to make it easier for readers to compare and contrast features in both languages • Offers numerous comparative examples and applications in both programming languages • Designed for practitioners and students who know one language and want to learn the other • Supplies slides useful for teaching and learning either software on a companion website Python for R Users: A Data Science Approach is a valuable working resource for computer scientists and data scientists who know R and would like to learn Python or are familiar with Python and want to learn R. It also functions as a textbook for students of computer science and statistics. A. Ohri is the founder of Decisionstats.com and currently works as a senior data scientist.
He has advised multiple startups in analytics off-shoring, analytics services, and analytics education, as well as using social media to enhance buzz for analytics products. Mr. Ohri's research interests include spreading open source analytics, analyzing social media manipulation with mechanism design, simpler interfaces for cloud computing, investigating climate change and knowledge flows. His other books include R for Business Analytics and R for Cloud Computing.
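In the spirit of the book's command-by-command translations, here are a few illustrative R-to-Python pairings. These are our own examples using only Python's standard library, not excerpts from the text:

```python
# A few common R idioms and their Python counterparts (illustrative).
import statistics

x = [2, 4, 4, 4, 5, 5, 7, 9]      # R: x <- c(2, 4, 4, 4, 5, 5, 7, 9)
avg = statistics.mean(x)           # R: mean(x)
med = statistics.median(x)         # R: median(x)
sd  = statistics.stdev(x)          # R: sd(x)  (sample standard deviation)
big = [v for v in x if v > 4]      # R: x[x > 4]
print(avg, med, big)               # 5.0 4.5 [5, 5, 7, 9]
```

For data-frame-style work the usual pairing is R's data.frame/dplyr with Python's pandas, which the book covers in the same side-by-side fashion.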

In this podcast, Andrea Gallego, Principal & Global Technology Lead @ Boston Consulting Group, talks about her journey as a data science practitioner in the consulting space. She discusses some of the industry practices that up-and-coming data science professionals should adopt and shares operational hacks to help create a robust data science team. It is a must-listen conversation for practitioners in the industry trying to build a data science team and deliver solutions for a service industry.

Timeline: 0:29 Andrea's journey. 5:41 Andrea's current role. 8:02 Seasoned data professional to COO role. 11:27 The essentials for having analytics at scale. 14:56 First steps to creating an analytics practice. 18:33 Defining an engineering first company. 22:33 A different understanding of data engineering. 26:40 Mistakes businesses make in their data science practice. 30:21 Some good business problems that data science can solve. 36:42 Democratization of data vs. privacy in companies. 38:04 Tech to business challenges. 40:11 Important KPIs for building a data science practice. 43:47 Hacks to hiring good data science candidates. 49:07 Art of doing business and science of doing business. 52:16 Andrea's secret to success. 55:12 Andrea's favorite read. 58:35 Closing remarks.

Andrea's Recommended Read: Arrival by Ted Chiang http://amzn.to/2h6lJpv Built to Last by Jim Collins http://amzn.to/2yMCsam Designing Agentive Technology: AI That Works for People http://amzn.to/2ySDHGp

Podcast Link: https://futureofdata.org/andrea-gallego-bcg-managing-analytics-practice/

Andrea's BIO: Andrea is Principal & Global Technology Lead @ Boston Consulting Group. Prior to BCG, Andrea was COO of QuantumBlack’s Cloud platform, where she also managed the cloud platform team and helped drive the vision and future of McKinsey Analytics’ digital capabilities. Andrea has broad expertise in computer science, cloud computing, digital transformation strategy, and analytics solutions architecture. Prior to joining the Firm, Andrea was a technologist at Booz Allen Hamilton. She holds a BS in Economics and MS in Analytics (with a concentration in computing methods for analytics).

About #Podcast:

FutureOfData podcast is a conversation starter to bring leaders, influencers, and lead practitioners to discuss their journey to create the data-driven future.

Wanna Join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords: FutureOfData Data Analytics Leadership Podcast Big Data Strategy

Oracle Application Express by Design: Managing Cost, Schedule, and Quality

Learn the many design decisions that must be made before starting to build a large Oracle Application Express (APEX) application for the cloud or enterprise. One of APEX's key strengths is the fact that it is a Rapid Application Development (RAD) tool. This is also a major weakness when it tempts developers to start coding too soon. Small applications that consist of tens of pages can be coded without a lot of design work because they can be re-factored quickly when design flaws are discovered. Design flaws in large cloud and enterprise applications that consist of hundreds or thousands of pages are not so easy to re-factor due to the time needed to redevelop and retest the application, not to mention the risk of breaking functionality in subtle ways. Designing a large application before coding starts is a profitable exercise because a thoughtful design goes a long way in mitigating cost overruns and schedule slippage while simultaneously enhancing quality. This book takes into account perspectives from other non-developer stakeholders such as maintenance developers, business analysts, testers, technical writers, end users, and business owners. Overlooking these perspectives is one of the chief causes of expensive rework late in the development cycle. Oracle Application Express by Design illustrates APEX design principles by using architecture diagrams, screen shots, and explicit code snippets to guide developers through the many design choices and complex interrelationship issues that must be evaluated before embarking on large APEX projects. 
This book: Guides you through important, up-front APEX design decisions. Helps you optimize your design by keeping all stakeholders in mind. Shows, through explicit code examples, how design impacts cost, schedule, and quality. What You Will Learn Pick and choose from the list of designs before coding begins Bake optimal quality into the underlying fabric of an APEX application Think and design from outside the developer’s narrow perspective Optimize APEX application designs to satisfy multiple stakeholder groups Evaluate design options through hands-on, explicit code examples Define and measure success for large cloud and enterprise APEX applications Who This Book Is For APEX developers and development teams

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus, OH), Mark Edmondson (IIH Nordic), Moe Kiss (Canva), Michael Helbling (Search Discovery)
GCP

You're listening to this podcast, so you're, obviously, well-attuned to the cutting edge of all things digital. But, in this episode, we're going to discuss a couple (or countless) products/platforms (PaaS — Platforms as a Service! Who knew that was a thing?!) from a little upstart company based in California. Google wouldn't actually return our calls (okay…we didn't call them), so we went with an Even Better Option: Mark Edmondson — Data Insight Developer at IIH Nordic, Google Developer Expert, author of so many R packages he had to write a package just to count them, delightfully accented Brit who now calls Denmark home, and a guy who tried to solve Twitter political discussions through text mining (not kidding — it's discussed in this episode) — joined the gang to do their First Ever three-continent simulcast. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Data Warehousing in the Age of Artificial Intelligence

Nearly 7,000 new mobile applications appear every day, and a constant stream of data gives them life. Many organizations rely on a predictive analytics model to turn data into useful business information and ensure the predictions remain accurate as data changes. It can be a complex, time-consuming process. This book shows how to automate and accelerate that process using machine learning (ML) on a modern data warehouse that runs on any cloud. Product specialists from MemSQL explain how today’s modern data warehouses provide the foundations to implement ML algorithms that run efficiently. Through several real-time use cases, you’ll learn how to quickly identify the right metrics to make actionable business decisions. This book explores foundational ML and artificial intelligence concepts to help you understand: How data warehouses accelerate deployment and simplify manageability How companies make a choice between cloud and on-premises deployments for building data processing applications Ways to build analytics and visualizations for business intelligence on historical data The technologies and architecture for building and deploying real-time data pipelines This book demonstrates specific models and examples for building supervised and unsupervised real-time ML applications, and gives practical advice on how to make the choice between building an ML pipeline or buying an existing solution. If you need to use data accurately and efficiently, a real-time data warehouse is a critical business tool.
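One way to picture the real-time metric computation such a warehouse performs is an online (streaming) statistic. The sketch below uses Welford's algorithm to keep a running mean and variance in constant memory per metric; it is our own minimal illustration, not code from the book or MemSQL:

```python
# Minimal sketch of a "real-time metric" over a stream of events:
# Welford's online algorithm updates mean and variance in O(1) memory,
# the kind of incremental computation a real-time warehouse runs at scale.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self) -> float:
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for latency_ms in [120, 95, 143, 101, 110]:   # pretend event stream
    stats.update(latency_ms)
```

Because each update touches only three numbers, the same pattern extends to millions of concurrent metrics without rescanning historical data.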

Introduction to GPUs for Data Analytics

Moore’s law has finally run out of steam for CPUs. The number of x86 cores that can be placed cost-effectively on a single chip has reached a practical limit, making higher densities prohibitively expensive for most applications. Fortunately, for big data analytics, machine learning, and database applications, a more capable and cost-effective alternative for scaling compute performance is already available: the graphics processing unit, or GPU. In this report, executives at Kinetica and Sierra Communications explain how incorporating GPUs is ideal for keeping pace with the relentless growth in streaming, complex, and large data confronting organizations today. Technology professionals, business analysts, and data scientists will learn how their organizations can begin implementing GPU-accelerated solutions either on premise or in the cloud. This report explores: How GPUs supplement CPUs to enable continued price/performance gains The many database and data analytics applications that can benefit from GPU acceleration Why GPU databases with user-defined functions (UDFs) can simplify and unify the machine learning/deep learning pipeline How GPU-accelerated databases can process streaming data from the Internet of Things and other sources in real time The performance advantage of GPU databases in demanding geospatial analytics applications How cognitive computing—the most compute-intensive application currently imaginable—is now within reach, using GPUs

Learning Ceph - Second Edition

Dive into 'Learning Ceph' to master Ceph, the powerful open-source storage solution known for its scalability and reliability. By following the book's clear instructions, you'll be equipped to deploy, configure, and integrate Ceph into your infrastructure for exabyte-scale data management. What this Book will help me do Understand the architectural principles of Ceph and its uses. Gain practical skills in deploying and managing a Ceph cluster. Learn to monitor and troubleshoot Ceph systems effectively. Explore integration possibilities with OpenStack and other platforms. Apply advanced techniques like erasure coding and CRUSH map optimization. Author(s) The authors are experienced software engineers and open-source contributors with deep expertise in storage systems and distributed computing. They bring practical, real-world examples and accessible explanations to complex topics like Ceph architecture and operation. Their passion for empowering professionals with robust technical skills shines through in this book. Who is it for? This book is ideal for system administrators, cloud engineers, or storage professionals looking to expand their knowledge of software-defined storage solutions. Whether you're new to Ceph or seeking advanced tips for optimization, this guide has something for every skill level. Prerequisite knowledge includes familiarity with Linux and server architecture concepts.

In this episode, Microsoft's Corporate Vice President for Cloud Artificial Intelligence, Joseph Sirosh, joins host Kyle Polich to share some of Microsoft's latest and most exciting innovations in AI development platforms. Last month, Microsoft launched a set of three powerful new capabilities in Azure Machine Learning for advanced developers to exploit big data, GPUs, data wrangling, and container-based model deployment. Extended show notes found here. Thanks to our sponsor Springboard. Check out Springboard's Data Science Career Track Bootcamp.

Web Development with MongoDB and Node - Third Edition

Explore the power of combining Node.js and MongoDB to build modern, scalable web applications in 'Web Development with MongoDB and Node.' You'll not only learn how to integrate these two technologies effectively, but you'll also gain practical insights into using modern frameworks like Express and Angular to build feature-rich web apps. What this Book will help me do Master core concepts of Node.js and MongoDB for efficient web development. Learn to build and configure a web server using the Express.js framework. Implement data persistence with MongoDB using the Mongoose ODM library. Automate testing using tools like Mocha and streamline workflows with Gulp. Deploy applications to cloud platforms like Heroku, AWS, and Microsoft Azure. Author(s) Jason Krol, Joseph D'mello, and Satheesh bring extensive experience in web development and technical writing to this book. The authors have collectively worked on cutting-edge web technologies for years and are passionate about sharing their expertise to help developers create efficient web applications. Who is it for? This book is perfect for JavaScript developers at any proficiency level who are looking to expand their skills into full-stack development with Node.js and MongoDB. Even if you have a basic understanding of JavaScript and HTML, this book will guide you through building complete web applications from scratch. If you're eager to learn and create performant, scalable web apps, this book is for you.

Microsoft Power BI Cookbook

This comprehensive guide dives deep into the Power BI platform, teaching you how to create insightful data models, reports, and dashboards that drive business decisions. With hands-on recipes and real-world examples, this book is a practical resource for mastering the full range of Power BI's capabilities. What this Book will help me do Understand and apply data cleansing and transformation using Power BI tools. Create and utilize intuitive data models for business intelligence reporting. Leverage DAX and M languages for advanced data analysis and custom solutions. Build dynamic, user-specific dashboards and visualizations for impactful insights. Integrate Power BI with Microsoft Excel, SQL Server, and cloud services for extended functionality. Author(s) Powell is an experienced data analyst and Microsoft BI solution architect with extensive expertise in Power BI. He has worked on numerous BI projects, providing practical solutions using Microsoft's data platform technologies. Through detailed, scenario-based writing, he shares his knowledge to help readers excel in their BI endeavors. Who is it for? This book is perfect for business intelligence professionals and analysts seeking to expand their skills in Power BI. Ideal readers may have foundational Power BI knowledge and look to master advanced techniques. If you aim to build impactful BI solutions and are motivated to handle complex data integrations, this book will be instrumental.

IBM TS4500 R4 Tape Library Guide

Abstract The IBM® TS4500 (TS4500) tape library is a next-generation tape solution that offers higher storage density and more integrated management than previous solutions. This IBM Redbooks® publication gives you a close-up view of the new IBM TS4500 tape library. In the TS4500, IBM delivers the density that today's and tomorrow's data growth requires. It has the cost-effectiveness and the manageability to grow with business data needs, while preserving your existing investments in IBM tape library products. Now, you can achieve both a low cost per terabyte (TB) and a high TB density per square foot, because the TS4500 can store up to 8.25 petabytes (PB) of uncompressed data in a single frame library or scale up at 1.5 PB per square foot to over 263 PB, which is more than 4 times the capacity of the IBM TS3500 tape library. The TS4500 offers these benefits: High availability dual active accessors with integrated service bays to reduce inactive service space by 40%. The Elastic Capacity option can be used to completely eliminate inactive service space. Flexibility to grow: The TS4500 library can grow from both the right side and the left side of the first L frame because models can be placed in any active position. Increased capacity: The TS4500 can grow from a single L frame up to an additional 17 expansion frames with a capacity of over 23,000 cartridges. High-density (HD) generation 1 frames from the existing TS3500 library can be redeployed in a TS4500. Capacity on demand (CoD): CoD is supported through entry-level, intermediate, and base-capacity configurations. Advanced Library Management System (ALMS): ALMS supports dynamic storage management, which enables users to create and change logical libraries and configure any drive for any logical library.
Support for the IBM TS1155 while also supporting TS1150 and TS1140 tape drives: The TS1155 gives organizations an easy way to deliver fast access to data, improve security, and provide long-term retention, all at a lower cost than disk solutions. The TS1155 offers high-performance, flexible data storage with support for data encryption. Also, this enhanced fifth-generation drive can help protect investments in tape automation by offering compatibility with existing automation. The new TS1155 Tape Drive Model 55E delivers a 10 Gb Ethernet host attachment interface optimized for cloud-based and hyperscale environments. The TS1155 Tape Drive Model 55F delivers a native data rate of 360 MBps, the same load/ready, locate speeds, and access times as the TS1150, and includes dual-port 8 Gb Fibre Channel support. Support of the IBM Linear Tape-Open (LTO) Ultrium 7 tape drive: The LTO Ultrium 7 offering represents significant improvements in capacity, performance, and reliability over the previous generation, LTO Ultrium 6, while still protecting your investment in the previous technology. Integrated TS7700 back-end Fibre Channel (FC) switches are available. Up to four library-managed encryption (LME) key paths per logical library are available. This book describes the TS4500 components, feature codes, specifications, supported tape drives, encryption, new integrated management console (IMC), and command-line interface (CLI). You learn how to accomplish several specific tasks: Improve storage density with increased expansion frame capacity up to 2.4 times and support 33% more tape drives per frame. Manage storage by using the ALMS feature. Improve business continuity and disaster recovery with dual active accessor, automatic control path failover, and data path failover. Help ensure security and regulatory compliance with tape-drive encryption and Write Once Read Many (WORM) media. Support IBM LTO Ultrium 7, 6, and 5, IBM TS1155, TS1150, and TS1140 tape drives.
Provide a flexible upgrade path for users who want to expand their tape storage as their needs grow. Reduce the storage footprint and simplify cabling with 10 U of rack space on top of the library. This guide is for anyone who wants to understand more about the IBM TS4500 tape library. It is particularly suitable for IBM clients, IBM Business Partners, IBM specialist sales representatives, and technical specialists.

EU General Data Protection Regulation (GDPR): An Implementation and Compliance Guide - Second edition

The updated second edition of the bestselling guide to the changes your organisation needs to make to comply with the EU GDPR. “The clear language of the guide and the extensive explanations help to explain the many doubts that arise when reading the articles of the Regulation.” – Giuseppe G. Zorzino

The EU General Data Protection Regulation (GDPR) will supersede the 1995 EU Data Protection Directive (DPD) and all EU member states’ national laws based on it – including the UK Data Protection Act 1998 – in May 2018. All organisations – wherever they are in the world – that process the personal data of EU residents must comply with the Regulation. Failure to do so could result in fines of up to €20 million or 4% of annual global turnover. This book provides a detailed commentary on the GDPR, explains the changes you need to make to your data protection and information security regimes, and tells you exactly what you need to do to avoid severe financial penalties.

Product overview: Now in its second edition, EU GDPR – An Implementation and Compliance Guide is a clear and comprehensive guide to this new data protection law, explaining the Regulation and setting out the obligations of data processors and controllers in terms you can understand. Topics covered include:

- The role of the data protection officer (DPO), including whether you need one and what they should do.
- Risk management and data protection impact assessments (DPIAs), including how, when, and why to conduct a DPIA.
- Data subjects’ rights, including consent and the withdrawal of consent; subject access requests and how to handle them; and data controllers’ and processors’ obligations.
- International data transfers to “third countries”, including guidance on adequacy decisions and appropriate safeguards; the EU-US Privacy Shield; international organisations; limited transfers; and Cloud providers.
- How to adjust your data protection processes to transition to GDPR compliance, and the best way of demonstrating that compliance.
- A full index of the Regulation to help you find the articles and stipulations relevant to your organisation.

New for the second edition:

- Additional definitions.
- Further guidance on the role of the DPO.
- Greater clarification of data subjects’ rights.
- Extra guidance on data protection impact assessments.
- More detailed information on subject access requests (SARs).
- Clarification of consent and the alternative lawful bases for processing personal data.
- A new appendix: implementation FAQ.

The GDPR will have a significant impact on organisational data protection regimes around the world. EU GDPR – An Implementation and Compliance Guide shows you exactly what you need to do to comply with the new law.

Competing on Analytics: Updated, with a New Introduction

The New Edition of a Business Classic. This landmark work, the first to introduce business leaders to analytics, reveals how analytics are rewriting the rules of competition. Updated with fresh content, Competing on Analytics provides the road map for becoming an analytical competitor, showing readers how to create new strategies for their organizations based on sophisticated analytics. Introducing a five-stage model of analytical competition, Davenport and Harris describe the typical behaviors, capabilities, and challenges of each stage. They explain how to assess your company’s capabilities and guide it toward the highest level of competition. With equal emphasis on two key resources, human and technological, this book reveals how even the most highly analytical companies can up their game. With an emphasis on predictive, prescriptive, and autonomous analytics for marketing, supply chain, finance, M&A, operations, R&D, and HR, the book contains numerous new examples from different industries and business functions, such as Disney’s vacation experience, Google’s HR, UPS’s logistics, the Chicago Cubs’ training methods, and Firewire Surfboards’ customization. Additional new topics and research include:

- Data scientists and what they do
- Big data and the changes it has wrought
- Hadoop and other open-source software for managing and analyzing data
- Data products: new products and services based on data and analytics
- Machine learning and other AI technologies
- The Internet of Things and its implications
- New computing architectures, including cloud computing
- Embedding analytics within operational systems
- Visual analytics

The business classic that turned a generation of leaders into analytical competitors, Competing on Analytics is the definitive guide for transforming your company’s fortunes in the age of analytics and big data.

Using IBM Spectrum Copy Data Management with IBM FlashSystem A9000 or A9000R and SAP HANA

Data is the currency of the new economy, and organizations are increasingly tasked with finding better ways to protect, recover, access, share, and use it. IBM Spectrum™ Copy Data Management is aimed at using existing data in a manner that is efficient, automated, and scalable. It helps you manage all of the snapshot and IBM FlashCopy® images made to support DevOps, data protection, disaster recovery, and hybrid cloud computing environments. This IBM® Redpaper™ publication specifically addresses IBM Spectrum Copy Data Management in combination with IBM FlashSystem® A9000 or A9000R when used for automated disaster recovery of SAP HANA.

Essentials of Cloud Application Development on IBM Bluemix

Abstract This IBM® Redbooks® publication is based on the Presentations Guide of the course Essentials of Cloud Application Development on IBM Bluemix, which was developed by the IBM Redbooks team in partnership with the IBM Skills Academy Program. This course is designed to teach university students the basic skills that are required to develop, deploy, and test cloud-based applications that use IBM Bluemix® cloud services. The primary target audience for this course is university students in undergraduate computer science and computer engineering programs with no previous experience working in cloud environments. However, anyone new to cloud computing can also benefit from this course. After completing this course, you should be able to accomplish the following tasks:

- Define cloud computing
- Describe the factors that lead to the adoption of cloud computing
- Describe the choices that developers have when creating cloud applications
- Describe infrastructure as a service, platform as a service, and software as a service
- Describe IBM Bluemix and its architecture
- Identify the runtimes and services that IBM Bluemix offers
- Describe IBM Bluemix infrastructure types
- Create an application in IBM Bluemix
- Describe the IBM Bluemix dashboard, catalog, and documentation features
- Explain how the application route is used to test an application from the browser
- Create services in IBM Bluemix
- Describe how to bind services to an application in IBM Bluemix
- Describe the environment variables that are used with IBM Bluemix services
- Explain what IBM Bluemix organizations, domains, spaces, and users are
- Describe how to create an IBM SDK for Node.js application that runs on IBM Bluemix
- Explain how to manage your IBM Bluemix account with the Cloud Foundry CLI
- Describe how to set up and use the IBM Bluemix plug-in for Eclipse
- Describe the role of Node.js for server-side scripting
- Describe IBM Bluemix DevOps Services and their capabilities
- Identify the Web IDE features in IBM Bluemix DevOps
- Describe how to connect a Git repository client to a Bluemix DevOps Services project
- Explain the pipeline build and deploy processes that IBM Bluemix DevOps Services use
- Describe how IBM Bluemix DevOps Services integrate with the IBM Bluemix cloud
- Describe the agile planning tools in IBM Bluemix
- Describe the characteristics of REST APIs
- Explain the advantages of the JSON data format
- Describe an example of REST APIs using Watson
- Describe the main types of data services in IBM Bluemix
- Describe the benefits of IBM Cloudant®
- Explain how Cloudant databases and documents are accessed from IBM Bluemix
- Describe how to use REST APIs to interact with a Cloudant database
- Describe Bluemix mobile backend as a service (MBaaS) and the MBaaS architecture
- Describe the Push Notifications service
- Describe the App ID service
- Describe the Kinetise service
- Describe how to create Bluemix Mobile applications by using the MobileFirst Services Starter Boilerplate

The workshop materials were created in June 2017. Therefore, all IBM Bluemix features that are described in this Presentations Guide and the IBM Bluemix user interfaces that are used in the examples are current as of June 2017.
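The Cloudant objectives in the course above rest on one idea: Cloudant is a JSON document store that exposes a CouchDB-compatible REST API, so every database and document is addressable as an ordinary HTTP resource. As a minimal sketch of that addressing scheme (the account, database, and document names below are hypothetical placeholders, not from the course materials):

```python
import json
from urllib.parse import quote

def cloudant_doc_url(account, db, doc_id):
    """Build the REST URL for a single Cloudant/CouchDB document.

    In the CouchDB-style API, each document is an HTTP resource:
      GET  -> read the document (returned as JSON)
      PUT  -> create or update it (body is JSON; updates need the current _rev)
    """
    base = "https://{0}.cloudant.com".format(account)
    return "{0}/{1}/{2}".format(base, quote(db), quote(doc_id, safe=""))

# Hypothetical account, database, and document ID:
url = cloudant_doc_url("myaccount", "products", "widget-001")
print(url)  # https://myaccount.cloudant.com/products/widget-001

# A document body is plain JSON; Cloudant adds _id and _rev fields.
doc = {"_id": "widget-001", "name": "Widget", "price": 9.99}
payload = json.dumps(doc)
# An actual write would then be an authenticated HTTP PUT of `payload`
# to `url` with a Content-Type of application/json.
```

The same URL pattern covers the other operations the course lists: appending a database name alone addresses the whole database, and query parameters drive views and searches, which is why a generic HTTP client is enough to exercise Cloudant from any Bluemix runtime.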