talk-data.com

Topic

Cyber Security

Tags: cybersecurity, information_security, data_security, privacy

2078 tagged activities

Activity Trend

Peak of 297 activities per quarter (2020-Q1 to 2026-Q1)

Activities

2078 activities · Newest first

ABCs of IBM z/OS System Programming Volume 2

Abstract The ABCs of IBM® z/OS® System Programming is a 13-volume collection that provides an introduction to the z/OS operating system and the hardware architecture. Whether you are a beginner or an experienced system programmer, the ABCs collection provides the information that you need to start your research into z/OS and related subjects. If you want to become more familiar with z/OS in your current environment or if you are evaluating platforms to consolidate your e-business applications, the ABCs collection can serve as a powerful technical tool. This volume describes the basic system programming activities related to implementing and maintaining the z/OS installation and provides details about the modules that are used to manage jobs and data. It covers the following topics: Overview of the parmlib definitions and the IPL process. The parameters and system data sets necessary to IPL and run a z/OS operating system are described, along with the main daily tasks for maximizing performance of the z/OS system. Basic concepts related to subsystems and subsystem interface and how to use the subsystem services that are provided by IBM subsystems. Job management in the z/OS system using the JES2 and JES3 job entry subsystems. It provides a detailed discussion about how JES2 and JES3 are used to receive jobs into the operating system, schedule them for processing by z/OS, and control their output processing. The link pack area (LPA), LNKLST, authorized libraries, and the role of VLF and LLA components. An overview of SMP/E for z/OS. An overview of IBM Language Environment® architecture and descriptions of Language Environment’s full program model, callable services, storage management model, and debug information. Other volumes in this series include the following content: Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS delivery and installation Volume 3: Introduction to DFSMS, data set basics, storage management, hardware and software, catalogs, and DFSMStvs Volume 4: Communication Server, TCP/IP, and IBM VTAM® Volume 5: Base and IBM Parallel Sysplex®, System Logger, Resource Recovery Services (RRS), global resource serialization (GRS), z/OS system operations, automatic restart management (ARM), IBM Geographically Dispersed Parallel Sysplex™ (IBM GDPS®) Volume 6: Introduction to security, IBM RACF®, Digital certificates and PKI, Kerberos, cryptography and z990 integrated cryptography, zSeries firewall technologies, LDAP, and Enterprise Identity Mapping (EIM) Volume 7: Printing in a z/OS environment, Infoprint Server, and Infoprint Central Volume 8: An introduction to z/OS problem diagnosis Volume 9: z/OS UNIX System Services Volume 10: Introduction to IBM z/Architecture®, the IBM Z platform and IBM Z connectivity, LPAR concepts, HCD, and the DS Storage Solution Volume 11: Capacity planning, performance management, WLM, IBM RMF™, and SMF Volume 12: WLM Volume 13: JES3, JES3 SDSF

Summary

Cloud computing and ubiquitous virtualization have changed the ways that our applications are built and deployed. This new environment requires a new way of tracking and addressing the security of our systems. ThreatStack is a platform that collects the data that your servers generate, monitors for unexpected anomalies in behavior that would indicate a breach, and notifies you in near real time. In this episode ThreatStack’s director of operations, Pete Cheslock, and senior infrastructure security engineer, Patrick Cable, discuss the data infrastructure that supports their platform, how they capture and process the data from client systems, and how that information can be used to keep your systems safe from attackers.
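
The episode does not walk through code, but the behavioral monitoring described above boils down to comparing what a host is doing now against a baseline of what it normally does. Below is a minimal Python sketch of that idea; the event fields, rolling window, and z-score threshold are illustrative assumptions, not ThreatStack's actual implementation.

from collections import defaultdict, deque
from statistics import mean, stdev

# Minimal sketch of behavioral anomaly detection: keep a rolling baseline per
# (host, process) and flag observations far outside it. All field names and
# the threshold are assumptions made for this example.
WINDOW = 50          # past observations kept per (host, process)
THRESHOLD = 3.0      # flag values more than 3 standard deviations from baseline

baselines = defaultdict(lambda: deque(maxlen=WINDOW))

def check_event(event):
    """Return an alert dict if the event looks anomalous, else None."""
    key = (event["host"], event["process"])
    history = baselines[key]
    value = event["syscall_count"]

    alert = None
    if len(history) >= 10:  # need some baseline before judging
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) / sigma > THRESHOLD:
            alert = {"host": event["host"], "process": event["process"],
                     "observed": value, "baseline_mean": round(mu, 1)}
    history.append(value)
    return alert

# Example: a sudden burst of syscalls from a normally quiet process
events = [{"host": "web-1", "process": "nginx", "syscall_count": c}
          for c in [18, 22, 20, 19, 21] * 6 + [400]]
for e in events:
    a = check_event(e)
    if a:
        print("ALERT:", a)

In a real system the alert would be enriched with context and routed to a notification channel rather than printed, but the baseline-and-deviation loop is the core of the idea.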

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. For complete visibility into the health of your pipeline, including deployment tracking, and powerful alerting driven by machine learning, DataDog has got you covered. With their monitoring, metrics, and log collection agent, including extensive integrations and distributed tracing, you’ll have everything you need to find and fix performance bottlenecks in no time. Go to dataengineeringpodcast.com/datadog today to start your free 14-day trial and get a sweet new T-shirt. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch. Your host is Tobias Macey and today I’m interviewing Pete Cheslock and Pat Cable about the data infrastructure and security controls at ThreatStack.

Interview

Introduction How did you get involved in the area of data management? Why don’t you start by explaining what ThreatStack does?

What was lacking in the existing options (services and self-hosted/open source) that ThreatStack solves for?

Can you describe the type(s) of data that you collect and how it is structured? What is the high level data infrastructure that you use for ingesting, storing, and analyzing your customer data?

How do you ensure a consistent format of the information that you receive? How do you ensure that the various pieces of your platform are deployed using the proper configurations and operating as intended? How much configuration do you provide to the end user in terms of the captured data, such as sampling rate or additional context?

I understand that your original architecture used RabbitMQ as your ingest mechanism, which you then migrated to Kafka. What was your initial motivation for that change?

How much of a benefit has that been in terms of overall complexity and cost (both time and infrastructure)?

How do you ensure the security and provenance of the data that you collect as it traverses your infrastructure? What are some of the most common vulnerabilities that you detect in your client’s infrastructure? For someone who wants to start using ThreatStack, what does the setup process look like? What have you found to be the most challenging aspects of building and managing the data processes in your environment? What are some of the projects that you have planned to improve the capacity or capabilities of your infrastructure?
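
Several of the questions above concern moving agent ingest from RabbitMQ to Kafka and keeping the event format consistent. As a rough, hypothetical sketch of what producing structured agent events into a Kafka topic can look like (the broker address, topic name, and event schema are assumptions, not ThreatStack's pipeline), using the kafka-python client:

import json
import time

from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical agent-event producer; broker address, topic name, and event
# schema are illustrative assumptions only.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    key_serializer=lambda k: k.encode("utf-8"),
)

def publish_event(host_id, event_type, payload):
    """Publish one agent event, keyed by host so per-host ordering is preserved."""
    event = {
        "host_id": host_id,
        "event_type": event_type,
        "payload": payload,
        "ts": time.time(),
    }
    producer.send("agent-events", key=host_id, value=event)

publish_event("web-1", "process_start", {"exe": "/usr/sbin/nginx", "pid": 4212})
publish_event("web-1", "login", {"user": "deploy", "tty": "pts/0"})
producer.flush()  # block until buffered events are delivered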

Contact Info

Pete Cheslock

@petecheslock on Twitter Website petecheslock on GitHub

Patrick Cable

@patcable on Twitter Website patcable on GitHub

ThreatStack

Website @threatstack on Twitter threatstack on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

ThreatStack SecDevO

Mastering Microsoft Power BI

Dive right into the powerful world of Microsoft Power BI with this comprehensive guide. This book takes you through every step of mastering Power BI, from data modeling to creating actionable visualizations. You'll find clear explanations and practical steps to improve your data analytics and enhance business decision-making. What this Book will help me do Learn to connect and transform data using Power Query M Language to create clean, structured datasets. Understand how to design scalable and performance-optimized Power BI Data Models for effective analytics. Develop professional, visually appealing and interactive reports and dashboards to convey insights confidently. Implement best practices for managing Power BI solutions, including deployment, version control, and monitoring. Gain practical knowledge to administer Power BI across organizational structures, ensuring security and efficiency. Author(s) Powell is a seasoned expert in business intelligence and a passionate educator in the field of data analytics. With extensive hands-on experience in Microsoft Power BI, Powell has supported many organizations in unlocking the potential of their data. The approachable writing style reflects a real-world yet proficient understanding of Power BI's capabilities. Who is it for? This book is ideal for business intelligence professionals looking to deepen their expertise in Microsoft Power BI. Readers already familiar with basic BI concepts and Power BI will gain significant technical depth. It suits professionals keen to enhance their data modeling, visualization, and analytics skills. If you're aiming to create impactful dashboards and benefit from advanced insights, this book is for you.

IBM TS4500 R4 Tape Library Guide

Abstract The IBM® TS4500 (TS4500) tape library is a next-generation tape solution that offers higher storage density and integrated management than previous solutions. This IBM Redbooks® publication gives you a close-up view of the new IBM TS4500 tape library. In the TS4500, IBM delivers the density that today's and tomorrow's data growth requires. It has the cost-effectiveness and the manageability to grow with business data needs, while you preserve existing investments in IBM tape library products. Now, you can achieve both a low cost per terabyte (TB) and a high TB density per square foot, because the TS4500 can store up to 8.25 petabytes (PB) of uncompressed data in a single frame library or scale up at 1.5 PB per square foot to over 263 PB, which is more than 4 times the capacity of the IBM TS3500 tape library. The TS4500 offers these benefits: High availability dual active accessors with integrated service bays to reduce inactive service space by 40%. The Elastic Capacity option can be used to completely eliminate inactive service space. Flexibility to grow: The TS4500 library can grow from both the right side and the left side of the first L frame because models can be placed in any active position. Increased capacity: The TS4500 can grow from a single L frame up to an additional 17 expansion frames with a capacity of over 23,000 cartridges. High-density (HD) generation 1 frames from the existing TS3500 library can be redeployed in a TS4500. Capacity on demand (CoD): CoD is supported through entry-level, intermediate, and base-capacity configurations. Advanced Library Management System (ALMS): ALMS supports dynamic storage management, which enables users to create and change logical libraries and configure any drive for any logical library. Support for the IBM TS1155 while also supporting TS1150 and TS1140 tape drive: The TS1155 gives organizations an easy way to deliver fast access to data, improve security, and provide long-term retention, all at a lower cost than disk solutions. The TS1155 offers high-performance, flexible data storage with support for data encryption. Also, this enhanced fifth-generation drive can help protect investments in tape automation by offering compatibility with existing automation. The new TS1155 Tape Drive Model 55E delivers a 10 Gb Ethernet host attachment interface optimized for cloud-based and hyperscale environments. The TS1155 Tape Drive Model 55F delivers a native data rate of 360 MBps, the same load/ready, locate speeds, and access times as the TS1150, and includes dual-port 8 Gb Fibre Channel support. Support of the IBM Linear Tape-Open (LTO) Ultrium 8 tape drive: The LTO Ultrium 8 offering represents significant improvements in capacity, performance, and reliability over the previous generation, LTO Ultrium 7, while they still protect your investment in the previous technology. Support of LTO 8 Type M cartridge (M8): The LTO Program is introducing a new capability with LTO-8 drives. The ability of the LTO-8 drive to write 9 TB on a brand new LTO-7 cartridge instead of 6 TB as specified by the LTO-7 format. Such a cartridge is called an LTO-7 initialized LTO-8 Type M cartridge. Integrated TS7700 back-end Fibre Channel (FC) switches are available. Up to four library-managed encryption (LME) key paths per logical library are available. This book describes the TS4500 components, feature codes, specifications, supported tape drives, encryption, new integrated management console (IMC), and command-line interface (CLI). 
You learn how to accomplish several specific tasks: Improve storage density with increased expansion frame capacity up to 2.4 times and support 33% more tape drives per frame. Manage storage by using the ALMS feature. Improve business continuity and disaster recovery with dual active accessor, automatic control path failover, and data path failover. Help ensure security and regulatory compliance with tape-drive encryption and Write Once Read Many (WORM) media. Support IBM LTO Ultrium 8, 7, 6, and 5, IBM TS1155, TS1150, and TS1140 tape drives. Provide a flexible upgrade path for users who want to expand their tape storage as their needs grow. Reduce the storage footprint and simplify cabling with 10 U of rack space on top of the library. This guide is for anyone who wants to understand more about the IBM TS4500 tape library. It is particularly suitable for IBM clients, IBM Business Partners, IBM specialist sales representatives, and technical specialists.

Mastering Qlik Sense

Mastering Qlik Sense is a comprehensive guide designed to empower you to utilize Qlik Sense for advanced data analytics and dynamic visualizations. This book provides detailed insights into creating seamless Business Intelligence solutions tailored to your needs. Whether you're building dashboards, optimizing data models, or exploring Qlik Cloud functionalities, this book has you covered. What this Book will help me do Build interactive and insightful dashboards using Qlik Sense's intuitive tools. Learn to model data efficiently and apply best practices for optimized performance. Master the Qlik Sense APIs and create advanced custom extensions. Understand enterprise security measures including role-based access controls. Gain expertise in migrating from QlikView to Qlik Sense effectively Author(s) Juan Ignacio Vitantonio is an experienced expert in Business Intelligence solutions and data analytics. With a profound understanding of Qlik technologies, Juan has developed and implemented impactful BI solutions across various industries. His writing reflects his practical knowledge and passion for empowering users with actionable insights into data. Who is it for? This book is perfect for BI professionals, data analysts, and organizations aiming to leverage Qlik Sense for advanced analytics. Ideal for those with a foundational grasp of Qlik Sense, it also provides comprehensive guidance for QlikView users transitioning to Qlik Sense. If you want to improve your BI solutions and data-driven decision-making skills, this book is for you.

Cleaning Up the Data Lake with an Operational Data Hub

The data lake was once heralded as the answer to the flood of big data that arrived in a variety of structured and unstructured formats. But, due to the ease of integration and the lack of governance, data lakes in many companies have devolved into unusable data swamps. This short ebook shows you how to solve this problem using an Operational Data Hub (ODH) to collect, store, index, cleanse, harmonize, and master data of all shapes and formats. Gerhard Ungerer—CTO and co-founder of Random Bit LLC—explains how the ODH supports transactional integrity so that the hub can serve as integration point for enterprise applications. You’ll also learn how the ODH helps you leverage the investment in your data lake (or swamp), so that the data trapped there can finally be ingested, processed, and provisioned. With this ebook, you’ll learn how an ODH: Allows you to focus on categorizing data for easy and fast retrieval Provides flexible storage models, indexing support, query capabilities, security, and a governance framework Delivers flexible storage models; support for indexing, scripting, and automation; query capabilities; transactional integrity; and security Includes a governance model to help you access, ingest, harmonize, materialize, provision, and consume data

MarkLogic Cookbook

Learn how to get the most out of MarkLogic with recipes from people who understand this powerful multi-model database platform from the inside out. MarkLogic comes with a broad set of capabilities to help you quickly integrate data from silos, but it takes time to learn how to harness that power. In this three-part series, key members of the MarkLogic team—including engineers who built the database—provide targeted recipes to get you up to speed. In Part 1, you’ll learn how to solve real-world problems with XQuery, the functional language for working with hierarchical data structures such as XML. Part 2 helps you solve common search-related problems with recipes that work with MarkLogic 9 as well as with older versions. With recipes in Part 3, you’ll explore the multiple ways MarkLogic represents data. XQuery: Gain XQuery peak performance, and explore its use in maps, documents, document security, the task server, and administration Search-related problems: Conduct document searches, score search results, understand how data is used, and search with the Optic API MarkLogic and data: Work with input transformations, tokenization, template-driven extraction, and redaction

Summary

As software lifecycles move faster, the database needs to be able to keep up. Practices such as version-controlled migration scripts and iterative schema evolution provide the mechanisms necessary to ensure that your data layer is as agile as your application. Pramod Sadalage saw the need for these capabilities in the early days of modern development practices and co-authored a book that codifies a large number of patterns to aid practitioners. In this episode he reflects on the current state of affairs and how things have changed over the past 12 years.
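
Tools discussed in the episode, such as Flyway and Liquibase, implement this in production. Purely to illustrate the core idea of version-controlled migration scripts, here is a minimal Python sketch that applies numbered SQL files in order and records which versions have already run; the file layout and tracking table are assumptions, not a pattern taken from the book.

import sqlite3
from pathlib import Path

# Minimal sketch of a version-controlled migration runner. Real tools such as
# Flyway or Liquibase add checksums, rollback support, and locking; this only
# demonstrates the core idea. The migrations/ layout and the schema_version
# table are assumptions for the example.
def migrate(db_path="app.db", migrations_dir="migrations"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version "
        "(version TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}

    # Files are named e.g. 001_create_users.sql, 002_add_email_index.sql
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        version = script.stem.split("_")[0]
        if version in applied:
            continue
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
        conn.commit()
        print(f"applied {script.name}")
    conn.close()

if __name__ == "__main__":
    migrate()

Running the script twice is safe: already-applied versions are skipped, which is what lets the migration history live in version control alongside the application code.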

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure. When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch. You can help support the show by checking out the Patreon page, which is linked from the site. To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers. Your host is Tobias Macey and today I’m interviewing Pramod Sadalage about refactoring databases and integrating database design into an iterative development workflow.

Interview

Introduction How did you get involved in the area of data management? You first co-authored Refactoring Databases in 2006. What was the state of software and database system development at the time and why did you find it necessary to write a book on this subject? What are the characteristics of a database that make them more difficult to manage in an iterative context? How does the practice of refactoring in the context of a database compare to that of software? How has the prevalence of data abstractions such as ORMs or ODMs impacted the practice of schema design and evolution? Is there a difference in strategy when refactoring the data layer of a system when using a non-relational storage system? How has the DevOps movement and the increased focus on automation affected the state of the art in database versioning and evolution? What have you found to be the most problematic aspects of databases when trying to evolve the functionality of a system? Looking back over the past 12 years, what has changed in the areas of database design and evolution?

How has the landscape of tooling for managing and applying database versioning changed since you first wrote Refactoring Databases? What do you see as the biggest challenges facing us over the next few years?

Contact Info

Website pramodsadalage on GitHub @pramodsadalage on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Database Refactoring

Website Book

Thoughtworks Martin Fowler Agile Software Development XP (Extreme Programming) Continuous Integration

The Book Wikipedia

Test First Development DDL (Data Definition Language) DML (Data Manipulation Language) DevOps Flyway Liquibase DBMaintain Hibernate SQLAlchemy ORM (Object Relational Mapper) ODM (Object Document Mapper) NoSQL Document Database MongoDB OrientDB CouchBase CassandraDB Neo4j ArangoDB Unit Testing Integration Testing OLAP (On-Line Analytical Processing) OLTP (On-Line Transaction Processing) Data Warehouse Docker QA (Quality Assurance) HIPAA (Health Insurance Portability and Accountability Act) PCI DSS (Payment Card Industry Data Security Standard) Polyglot Persistence Toplink Java ORM Ruby on Rails ActiveRecord Gem

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA. Support Data Engineering Podcast

In this podcast, Rahul Kashyap (@RCKashyap) talks about the state of security at the crossroads of technology and business, and the mindset of a security-led technologist. He sheds some light on past, present, and future security risks, discusses some common leadership concerns, and explains how a technologist can work through them. This podcast is a must for all technologists and aspiring technologists looking to grow their organizations.

Timeline: 0:29 Rahul's journey. 4:40 Rahul's current role. 7:58 How the types of cyberattacks have changed. 12:53 How has IT interaction evolved? 16:50 Problems in the security industry. 20:12 Market mindset vs. security mindset. 23:10 Ownership of data. 27:02 Cloud, SaaS, and security. 31:40 Priorities for securing an enterprise. 34:50 How secure is secure enough. 37:40 Providing a stable core to the business. 41:11 The state of data science vis-à-vis security. 44:05 Future of security, data science, and AI. 46:14 Distributed computing and security. 50:30 Tenets of Rahul's success. 53:15 Rahul's favorite read. 54:35 Closing remarks.

Rahul's Recommended Read: Mindset: The New Psychology of Success – Carol S. Dweck http://amzn.to/2GvEX2F

Podcast Link: https://futureofdata.org/rckashyap-cylance-on-state-of-security-technologist-mindset-futureofdata-podcast/

Rahul's BIO: Rahul Kashyap is the Global Chief Technology Officer at Cylance, where he is responsible for strategy, products, and architecture.

Rahul has been instrumental in building several key security technologies, including Network Intrusion Prevention Systems (NIPS), Host Intrusion Prevention Systems (HIPS), Web Application Firewalls (WAF), Whitelisting, Endpoint/Server Host Monitoring (EDR), and Micro-virtualization. He has been awarded several patents for his innovations. Rahul is an accomplished pen-tester and has in-depth knowledge of OS, networking, and security products.

Rahul has written several security research papers, blogs, and articles that are widely quoted and referenced by media around the world. He has built, led, and scaled award-winning teams that innovate and solve complex security challenges in both large and start-up companies.

He is frequently featured in several podcasts, webinars, and media briefings. Rahul has been a speaker at several top security conferences like BlackHat, BlueHat, Hack-In-The-Box, RSA, DerbyCon, BSides, ISSA International, OWASP, InfoSec UK, and others. He was named to Silicon Valley Business Journal's '40 under 40' list.

Rahul mentors entrepreneurs who work with select VC firms and is on the advisory board of tech start-ups.

About #Podcast:

FutureOfData podcast is a conversation starter that brings together leaders, influencers, and leading practitioners to discuss their journeys in creating the data-driven future.

Wanna Join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Camel in Action, Second Edition

Camel in Action, Second Edition is the most complete Camel book on the market. Written by core developers of Camel and the authors of the highly acclaimed first edition, this book distills their experience and practical insights so that you can tackle integration tasks like a pro. About the Technology Apache Camel is a Java framework that implements enterprise integration patterns (EIPs) and comes with over 200 adapters to third-party systems. A concise DSL lets you build integration logic into your app with just a few lines of Java or XML. By using Camel, you benefit from the testing and experience of a large and vibrant open source community. About the Book Camel in Action, Second Edition is the definitive guide to the Camel framework. It starts with core concepts like sending, receiving, routing, and transforming data. It then goes in depth on many topics such as how to develop, debug, test, deal with errors, secure, scale, cluster, deploy, and monitor your Camel applications. The book also discusses how to run Camel with microservices, reactive systems, containers, and in the cloud. What's Inside Coverage of all relevant EIPs Camel microservices with Spring Boot Camel on Docker and Kubernetes Error handling, testing, security, clustering, monitoring, and deployment Hundreds of examples in Java and XML About the Reader Readers should be familiar with Java. This book is accessible to beginners and invaluable to experts. About the Authors Claus Ibsen is a senior principal engineer working for Red Hat specializing in cloud and integration. He has worked on Apache Camel for the last nine years where he heads the project. Claus lives in Denmark. Jonathan Anstey is an engineering manager at Red Hat and a core Camel contributor. He lives in Newfoundland, Canada. Quotes I highly recommend this book to anyone with even a passing interest in Apache Camel. Do take Camel for a ride...and don't get the hump! - From the Foreword by James Strachan, Creator of Apache Camel Claus and Jon are great writers, relying on figures and diagrams where needed and presenting lots of code snippets and worked examples. - From the Foreword by Dr. Mark Little, Technical Director of JBoss The second edition of this all-time classic is an indispensable companion for your Apache Camel rides. - Gregor Zurowski, Apache Camel Committer The absolute best way to learn and use Camel - top to bottom, front to back, and all the way through. Camel is a fantastic tool - every Java coder should have a copy of this book. - Rick Wagner, Red Hat An excellent book and the definite reference for experienced engineers. - Yan Guo, EventBrite

SQL Server 2017 Administration Inside Out, First Edition

Conquer SQL Server 2017 administration—from the inside out Dive into SQL Server 2017 administration—and really put your SQL Server DBA expertise to work. This supremely organized reference packs hundreds of timesaving solutions, tips, and workarounds—all you need to plan, implement, manage, and secure SQL Server 2017 in any production environment: on-premises, cloud, or hybrid. Four SQL Server experts offer a complete tour of DBA capabilities available in SQL Server 2017 Database Engine, SQL Server Data Tools, SQL Server Management Studio, and via PowerShell. Discover how experts tackle today’s essential tasks—and challenge yourself to new levels of mastery. • Install, customize, and use SQL Server 2017’s key administration and development tools • Manage memory, storage, clustering, virtualization, and other components • Architect and implement database infrastructure, including IaaS, Azure SQL, and hybrid cloud configurations • Provision SQL Server and Azure SQL databases • Secure SQL Server via encryption, row-level security, and data masking • Safeguard Azure SQL databases using platform threat protection, firewalling, and auditing • Establish SQL Server IaaS network security groups and user-defined routes • Administer SQL Server user security and permissions • Efficiently design tables using keys, data types, columns, partitioning, and views • Utilize BLOBs and external, temporal, and memory-optimized tables • Master powerful optimization techniques involving concurrency, indexing, parallelism, and execution plans • Plan, deploy, and perform disaster recovery in traditional, cloud, and hybrid environments For Experienced SQL Server Administrators and Other Database Professionals • Your role: Intermediate-to-advanced level SQL Server database administrator, architect, developer, or performance tuning expert • Prerequisites: Basic understanding of database administration procedures

Teradata Cookbook

Are you ready to master Teradata, one of the leading relational database management systems for data warehousing? In the "Teradata Cookbook," you will find over 85 recipes covering vital tasks like querying, performance tuning, and administrative operations. With clear and practical instructions, this book will equip you with the skills necessary to optimize data storage and analytics in your organization. What this Book will help me do Master Teradata's advanced features for efficient data warehousing applications. Understand and employ Teradata SQL for effective data manipulation and analytics. Explore practical solutions for Teradata administration tasks, including user and security management. Learn performance tuning techniques to enhance the efficiency of your queries and processes. Acquire detailed knowledge about Teradata's architecture and its unique capabilities. Author(s) The authors of "Teradata Cookbook" are experienced professionals in database management and data warehousing. With a deep understanding of Teradata's architecture and use in real-world applications, they bring a wealth of knowledge to each of the book's recipes. Their focus is to provide practical, actionable insights to help you tackle challenges you may face. Who is it for? This book is ideal for database administrators, data analysts, and professionals working with data warehousing who want to leverage the power of Teradata. Whether you are new to this database management system or looking to enhance your expertise, this cookbook provides practical solutions and in-depth insights, making it an essential resource.

IBM z14 Technical Guide

Abstract This IBM® Redbooks® publication describes the new member of the IBM Z family, IBM z14®. IBM z14 is the trusted enterprise platform for pervasive encryption, integrating data, transactions, and insights into the data. A data-centric infrastructure must always be available with a 99.999% or better availability, have flawless data integrity, and be secured from misuse. It also must be an integrated infrastructure that can support new applications. Finally, it must have integrated capabilities that can provide new mobile capabilities with real-time analytics that are delivered by a secure cloud infrastructure. IBM z14 servers are designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows z14 servers to deliver a record level of capacity over the prior IBM Z platforms. In its maximum configuration, z14 is powered by up to 170 client characterizable microprocessors (cores) running at 5.2 GHz. This configuration can run more than 146,000 million instructions per second (MIPS) and up to 32 TB of client memory. The IBM z14 Model M05 is estimated to provide up to 35% more total system capacity than the IBM z13® Model NE1. This Redbooks publication provides information about IBM z14 and its functions, features, and associated software support. More information is offered in areas that are relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM Z servers functions and plan for their usage. It is intended as an introduction to mainframes. Readers are expected to be generally familiar with existing IBM Z technology and terminology.

Machine Learning and Security

Can machine learning techniques solve our computer security problems and finally put an end to the cat-and-mouse game between attackers and defenders? Or is this hope merely hype? Now you can dive into the science and answer this question for yourself. With this practical guide, you’ll explore ways to apply machine learning to security issues such as intrusion detection, malware classification, and network analysis. Machine learning and security specialists Clarence Chio and David Freeman provide a framework for discussing the marriage of these two fields, as well as a toolkit of machine-learning algorithms that you can apply to an array of security problems. This book is ideal for security engineers and data scientists alike. Learn how machine learning has contributed to the success of modern spam filters Quickly detect anomalies, including breaches, fraud, and impending system failure Conduct malware analysis by extracting useful information from computer binaries Uncover attackers within the network by finding patterns inside datasets Examine how attackers exploit consumer-facing websites and app functionality Translate your machine learning algorithms from the lab to production Understand the threat attackers pose to machine learning solutions
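
As a tiny illustration of the spam-filtering use case mentioned above (toy data and a deliberately simple model, not an example from the book), a bag-of-words Naive Bayes classifier in scikit-learn might look like this:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy illustration of a bag-of-words spam filter; the messages and labels are
# made up for the example, and a production filter would be far richer.
messages = [
    "Win a free prize now, click here",
    "Limited offer, claim your free gift card",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the deployment plan before Friday?",
    "Your account has been selected for a cash reward",
    "Lunch tomorrow to discuss the security audit?",
]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

# Vectorize the text into word counts, then fit a multinomial Naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

for text in ["Claim your free reward now", "Please review the audit agenda"]:
    print(text, "->", model.predict([text])[0])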

Mastering PostgreSQL 10

Mastering PostgreSQL 10 delves into the depths of PostgreSQL development and administration, guiding readers through advanced functionalities of the database. Covering topics such as query optimization, replication, high availability, and migration, this book equips you with the skills needed to harness the full power of PostgreSQL 10. What this Book will help me do Learn to optimize database queries to enhance performance in PostgreSQL 10. Understand advanced replication techniques and how to implement high availability. Gain expertise in managing security, backups and performing data migrations effectively. Explore query tuning and indexing strategies to speed up your database applications. Handle troubleshooting challenges by understanding problems and their solutions. Author(s) The authors of Mastering PostgreSQL 10 are experts in the field of databases, with years of experience in designing, developing, and managing PostgreSQL systems. They are passionate educators dedicated to helping professionals maximize their potential with PostgreSQL. Their practical and approachable style ensures that even complex topics are clearly explained. Who is it for? This book is ideal for PostgreSQL data architects and administrators who want to master advanced features of PostgreSQL 10. It is best suited for individuals who have prior database administration experience and a working knowledge of SQL. Readers aiming to enhance performance and implement transformations in their PostgreSQL setups will benefit immensely. Those tasked with ensuring high availability, migration, and recovery of PostgreSQL will find this book invaluable.
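
As a small, hypothetical illustration of the query-tuning and indexing workflow the book covers (the table, column, and connection details are invented), you can compare execution plans before and after adding an index using EXPLAIN ANALYZE through psycopg2:

import psycopg2

# Hypothetical tuning session: the orders table, customer_id column, and
# connection details are assumptions for this sketch only.
conn = psycopg2.connect("dbname=appdb user=app password=secret host=localhost")
cur = conn.cursor()

def show_plan(sql):
    """Print the execution plan PostgreSQL chose for the given query."""
    cur.execute("EXPLAIN ANALYZE " + sql)
    for (line,) in cur.fetchall():
        print(line)

query = "SELECT * FROM orders WHERE customer_id = 4242"

show_plan(query)  # likely a sequential scan without an index
cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")
conn.commit()
show_plan(query)  # should now use an index scan

cur.close()
conn.close()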

MySQL 8 Cookbook

With "MySQL 8 Cookbook," dive into over 150 practical recipes tailored for database professionals aiming to master MySQL 8. You will explore setup, querying, and advanced features like security and performance tuning. This book is your comprehensive guide to efficient database handling in MySQL 8. What this Book will help me do Efficiently set up and configure a MySQL 8 environment. Master advanced querying techniques using new MySQL features such as CTEs and window functions. Execute robust data backup and recovery strategies with MySQL 8. Implement performance improvements with tools and features like descending indexes and query optimizers. Secure, manage, and optimize databases to support scalable, high-performance applications. Author(s) Karthik Appigatla is a seasoned database administrator and developer with extensive expertise in MySQL and relational database systems. With years of industry experience, he brings a practical perspective to database solutions. His passion is to empower learners by simplifying complex database concepts with a hands-on approach. Who is it for? This book is tailored for MySQL developers or administrators who seek ready solutions for their MySQL challenges. Whether you're upgrading to MySQL 8 or want to leverage its latest features, this cookbook is for you. Ideal for those with basic Linux and SQL experience aiming to build advanced MySQL knowledge and skills.
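
Common table expressions and window functions are two of the MySQL 8 features the cookbook highlights. As a hedged illustration (the schema and connection details are invented, not recipes from the book), a query that keeps each customer's three most recent orders might look like this when run through mysql-connector-python:

import mysql.connector  # pip install mysql-connector-python

# Illustrative only: the orders table, its columns, and the connection
# details are assumptions, not examples taken from the book.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="shop"
)
cur = conn.cursor()

# MySQL 8 features: a common table expression (WITH) plus a window function
# (ROW_NUMBER) to keep only the three most recent orders per customer.
sql = """
WITH ranked AS (
    SELECT customer_id,
           order_id,
           total,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY ordered_at DESC) AS rn
    FROM orders
)
SELECT customer_id, order_id, total
FROM ranked
WHERE rn <= 3
ORDER BY customer_id, rn
"""
cur.execute(sql)
for customer_id, order_id, total in cur:
    print(customer_id, order_id, total)

cur.close()
conn.close()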

ABCs of IBM z/OS System Programming Volume 3

Abstract The ABCs of IBM z/OS® System Programming is a 13-volume collection that provides an introduction to the z/OS operating system and the hardware architecture. Whether you are a beginner or an experienced system programmer, the ABCs collection provides the information that you need to start your research into z/OS and related subjects. The ABCs collection serves as a powerful technical tool to help you become more familiar with z/OS in your current environment, or to help you evaluate platforms to consolidate your e-business applications. This edition is updated to z/OS Version 2 Release 3. The other volumes contain the following content: Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS delivery and installation Volume 2: z/OS implementation and daily maintenance, defining subsystems, IBM Job Entry Subsystem 2 (JES2) and JES3, link pack area (LPA), LNKLST, authorized libraries, System Modification Program Extended (SMP/E), IBM Language Environment Volume 4: Communication Server, TCP/IP, and IBM VTAM® Volume 5: Base and IBM Parallel Sysplex®, System Logger, Resource Recovery Services (RRS), global resource serialization (GRS), z/OS system operations, automatic restart manager (ARM), IBM Geographically Dispersed Parallel Sysplex™ (IBM GDPS) Volume 6: Introduction to security, IBM RACF®, Digital certificates and PKI, Kerberos, cryptography and z990 integrated cryptography, zSeries firewall technologies, LDAP, and Enterprise Identity Mapping (EIM) Volume 7: Printing in a z/OS environment, Infoprint Server, and Infoprint Central Volume 8: An introduction to z/OS problem diagnosis Volume 9: z/OS UNIX System Services Volume 10: Introduction to IBM z/Architecture®, the IBM Z platform, IBM Z connectivity, LPAR concepts, HCD, and DS Storage Solution. Volume 11: Capacity planning, performance management, WLM, IBM RMF™, and SMF Volume 12: WLM Volume 13: JES3, JES3 SDSF

Liberty in IBM CICS: Deploying and Managing Java EE Applications

Abstract This IBM® Redbooks® publication is intended for IBM CICS® system programmers and IBM Z architects. It describes how to deploy and manage Java EE 7 web-based applications in an IBM CICS Liberty JVM server and access data on IBM Db2® for IBM z/OS® and IBM MQ for z/OS subsystems. In this book, we describe the key steps to create and install a Liberty JVM server within a CICS region. We then describe how to best use the different deployment techniques for Java EE applications and the specific considerations when deploying applications that use JDBC, JMS, and the new CICS link to Liberty API. Finally, we describe how to secure web applications in CICS Liberty, including transport-level security and request authentication and authorization by using IBM RACF® and LDAP registries. Information is also provided about how to build a high availability infrastructure and how to use the logging and monitoring functions that are available in the CICS Liberty environment. This book is based on IBM CICS Transaction Server (CICS TS) V5.4, which uses the embedded IBM WebSphere® Application Server Liberty technology. It is also applicable to CICS TS V5.3 with the fixes for the continuous delivery APAR PI77502 applied. Sample applications are used throughout this publication and are freely available for download from the IBM CICSDev GitHub organization along with detailed deployment instructions.

IBM QRadar Version 7.3 Planning and Installation Guide

Abstract With the advances of technology and the recurrence of data leaks, cyber security is a bigger challenge than ever before. Cyber attacks evolve as quickly as the technology itself, and hackers are finding more innovative ways to break security controls to access confidential data and to interrupt services. Hackers reinvent themselves using new technology features as a tool to expose companies and individuals. Therefore, cyber security cannot be reactive but must go a step further by implementing proactive security controls that protect one of the most important assets of every organization: the company's information. This IBM® Redbooks® publication provides information about implementing IBM QRadar® for Security Intelligence and Event Monitoring (SIEM) and protecting an organization's networks through a sophisticated technology, which permits a proactive security posture. It is divided into the following major sections to facilitate the integration of QRadar with any network architecture: Chapter 2, "Before the installation" on page 3 provides a review of important requirements before the installation of the product. Chapter 3, "Installing IBM QRadar V7.3" on page 57 provides step-by-step procedures to guide you through the installation process. Chapter 4, "After the installation" on page 77 helps you to configure additional features and perform checks after the product is installed. QRadar is a prime IBM Security product that is designed to be integrated with corporate network devices to maintain real-time monitoring of security events through a centralized console. Through this book, any network or security administrator can understand the product's features and benefits.

In this podcast, Paul Ballew (@Ford) talks about best practices for running a data science organization spanning multiple continents. He shares the importance of being Smart, Nice, and Inquisitive in creating tomorrow's workforce today. He sheds some light on the importance of appreciating culture when defining forward-looking policies. He also builds a case for a non-native group and discusses ways to implement data science as a central organization (with no hub-spoke model). This podcast is great for future data science leaders leading organizations with a broad consumer base and multiple geo-political silos.

Timeline: 0:29 Paul's journey. 5:10 Paul's current role. 8:10 Insurance and data analytics. 13:00 Who will own insurance in the time of automation? 18:22 Recruiting models in technologies. 21:54 Embracing technological change. 25:03 Will we have more analytics in Ford cars? 28:25 How does Ford stay competitive from a technology perspective? 30:30 Challenges for the analytics officer at Ford. 32:36 Ingredients of a good hire. 34:12 How is the data science team structured at Ford? 36:15 Dealing with shadow groups. 39:00 Successful KPIs. 40:33 Who owns data? 42:27 Who should own the security of data assets? 44:05 Examples of successful data science groups. 46:30 Practices for remaining bias-free. 48:55 Getting started running a global data science team. 52:45 How Paul keeps himself updated. 54:18 Paul's favorite read. 55:45 Closing remarks.

Paul's Recommended Read: The Outsiders – S. E. Hinton http://amzn.to/2Ai84Gl

Podcast Link: https://futureofdata.org/paul-ballewford-running-global-data-science-group-futureofdata-podcast/

Paul's BIO: Paul Ballew is vice president and Global Chief Data and Analytics officer, Ford Motor Company, effective June 1, 2017. At the same time, he also was elected a Ford Motor Company officer. In this role, he leads Ford’s global data and analytics teams for the enterprise. Previously, Ballew was Global Chief Data and Analytics Officer, a position to which he was named in December 2014. In this role, he has been responsible for establishing and growing the company’s industry-leading data and analytics operations that are driving significant business value throughout the enterprise. Prior to joining Ford, he was Chief Data, Insight & Analytics Officer at Dun & Bradstreet. In this capacity, he was responsible for the company’s global data and analytic activities along with the company’s strategic consulting practice. Previously, Ballew served as Nationwide’s senior vice president for Customer Insight and Analytics. He directed customer analytics, market research, and information and data management functions, and supported the company’s marketing strategy. His responsibilities included the development of Nationwide’s customer analytics, data operations, and strategy. Ballew joined Nationwide in November 2007 and established the company’s Customer Insights and Analytics capabilities.

Ballew sits on the boards of Neustar, Inc. and Hyatt Hotels Corporation. He was born in 1964 and has a bachelor’s and master’s degree in Economics from the University of Detroit.

About #Podcast:

FutureOfData podcast is a conversation starter that brings together leaders, influencers, and leading practitioners to discuss their journeys in creating the data-driven future.

Wanna Join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy