talk-data.com

Topic: Cyber Security

Tags: cybersecurity, information_security, data_security, privacy

Activity Trend: 297 peak/qtr (2020-Q1 to 2026-Q1)

Activities: 2078 · Newest first

Understanding Oracle APEX 20 Application Development: Think Like an Application Express Developer

This book shows developers and Oracle professionals how to build practical, non-trivial web applications using Oracle’s rapid application development environment, Application Express (APEX). This third edition is revised to cover the new features and user interface experience found in APEX 20. Interactive grids and form regions are two of the newer aspects of APEX covered in this edition. The book is targeted at those who are new to APEX and just beginning to develop real projects for deployment, as well as those who are familiar with APEX and want a deeper understanding. The book takes you through the development of a demo web application that illustrates the concepts all APEX programmers should know. It introduces the world of APEX properties, explaining the functionality supported by each page component as well as the techniques developers use to achieve that functionality. Topics include conditional formatting, user-customized reports, data entry forms, concurrency and lost updates, and security control. Specific attention is given to the thought process involved in choosing and assembling APEX components and features to deliver a specific result. Understanding Oracle APEX 20 Application Development, 3rd Edition is the ideal book to take you from an understanding of the individual pieces of APEX to an understanding of how those pieces are assembled into polished applications.

What You Will Learn:
Build attractive, highly functional web apps from the ground up
Enhance and customize pages created by the APEX wizards
Understand the security implications of page design
Write PL/SQL code for process activity and verification
Build complex components such as forms and interactive grids

Who This Book Is For:
Developers new to APEX who desire a strong fundamental understanding of how APEX applications work, and existing developers and database administrators who want to mine the most value from APEX by improving their development techniques.

Learn MongoDB 4.x

Explore the capabilities of MongoDB 4.x with this comprehensive guide designed for developers and administrators working with NoSQL databases. Dive into topics such as database design, advanced query handling, and security configuration, and gain hands-on experience through practical examples and insights.

What this Book will help me do:
Learn to configure and install MongoDB 4.x for development and administration.
Understand the principles of NoSQL schema design for optimal performance.
Perform complex queries and operations to manage your MongoDB databases.
Secure your MongoDB setup with role-based access control and encryption techniques.
Monitor and optimize database performance for production environments.

Author(s): Bierer, the author of 'Learn MongoDB 4.x,' is a seasoned database expert with extensive experience in NoSQL technologies. With a focus on practicality and clear explanations, the author brings deep insights into MongoDB's development and administration.

Who is it for? This book is ideal for early-career developers, system administrators, and database enthusiasts eager to break into NoSQL technologies. If you are familiar with Python and basic database concepts, this book will guide you through mastering MongoDB. It's perfect for those building dynamic backend systems.
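One of the security topics above, role-based access control, is easy to see in miniature. The sketch below uses PyMongo to create a read-only user scoped to a single database; the database, user, and passwords are hypothetical, and a real deployment should also enable authentication and TLS.

```python
from pymongo import MongoClient

# Connect as an administrator (hypothetical credentials, for illustration only).
admin_client = MongoClient("mongodb://admin:admin_pw@localhost:27017/admin")

# Create a user that can read, but not write, one application database.
admin_client["reports"].command(
    "createUser",
    "report_reader",                           # hypothetical user name
    pwd="reader_pw",                           # use a managed secret in practice
    roles=[{"role": "read", "db": "reports"}],
)

# The new user can run queries...
reader = MongoClient("mongodb://report_reader:reader_pw@localhost:27017/reports")
print(reader["reports"]["sales"].count_documents({}))
# ...but an insert would be rejected with an authorization error:
# reader["reports"]["sales"].insert_one({"amount": 1})
```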

SQL Server 2019 Administrator's Guide - Second Edition

SQL Server 2019 Administrator's Guide provides a complete walkthrough of administering, managing, and optimizing SQL Server 2019. You'll gain the expertise needed to implement secure and efficient database solutions suitable for enterprise-scale environments. This book systematically explores the tools, techniques, and best practices essential to mastering SQL Server 2019.

What this Book will help me do:
Optimize database queries and design using indexing techniques to resolve performance issues effectively.
Implement robust backup and recovery mechanisms following advanced security policies.
Utilize SQL Server 2019 tools for automated monitoring, maintenance, and health checks.
Integrate SQL Server with Azure for Big Data processing and scalability.
Set up highly available and stable Always On environments for enterprise databases.

Author(s): Marek Chmel and Vladimír Mužný are seasoned database administrators with years of hands-on experience in SQL Server and database infrastructure. Their collaborative writing approach emphasizes real-world scenarios and examples that make technical concepts accessible. With accolades in professional database education and a passion for teaching, they provide a guiding hand through complex database subjects.

Who is it for? This book is ideal for database administrators, developers, and IT professionals who seek to enhance their expertise with SQL Server 2019. Readers should have a basic understanding of database principles and familiarity with prior versions of SQL Server. Whether you're stepping into advanced administration or seeking to fine-tune your enterprise database infrastructure, this book is tailored for you.
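As a small taste of the backup-and-recovery material, here is a minimal sketch that issues a checksummed full backup from Python via pyodbc; the driver string, credentials, database name, and file path are all hypothetical, and the book itself covers these tasks with SQL Server's native tooling in much more depth.

```python
import pyodbc

# Hypothetical connection details; adjust driver, server, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;UID=sa;PWD=StrongPassword!1",
    autocommit=True,  # BACKUP DATABASE cannot run inside a user transaction
)
cursor = conn.cursor()

# Full backup with checksums, so corruption is detected at backup time,
# and compression to keep storage costs down.
cursor.execute(
    "BACKUP DATABASE SalesDb "
    "TO DISK = N'C:\\Backups\\SalesDb_full.bak' "
    "WITH CHECKSUM, COMPRESSION;"
)
while cursor.nextset():  # drain the informational messages the backup emits
    pass
print("Backup completed.")
```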

MongoDB Topology Design: Scalability, Security, and Compliance on a Global Scale

Create a world-class MongoDB cluster that is scalable, reliable, and secure. Comply with mission-critical regulatory regimes such as the European Union’s General Data Protection Regulation (GDPR). Whether you are thinking of migrating to MongoDB or need to meet legal requirements for an existing self-managed cluster, this book has you covered. It begins with the basics of replication and sharding, and quickly scales up to cover everything you need to know to control your data and keep it safe from unexpected data loss or downtime. This book covers best practices for stable MongoDB deployments. For example, a well-designed MongoDB cluster should have no single point of failure. The book covers common use cases when only one or two data centers are available. It goes into detail about creating geopolitical sharding configurations to satisfy even the most stringent data protection regulations. The book also covers different tools and approaches for automating and monitoring a cluster with Kubernetes, Docker, and popular cloud provider containers.

What You Will Learn:
Get started with the basics of MongoDB clusters
Protect and monitor a MongoDB deployment
Deepen your expertise around replication and sharding
Keep effective backups and plan ahead for disaster recovery
Recognize and avoid problems that can occur in distributed databases
Build optimal MongoDB deployments within hardware and data center limitations

Who This Book Is For:
Solutions architects, DevOps architects and engineers, automation and cloud engineers, and database administrators who are new to MongoDB and distributed databases or who need to scale up simple deployments. This book is a complete guide to planning a deployment for optimal resilience, performance, and scaling, and covers all the details required to meet the new set of data protection regulations such as the GDPR. It is particularly relevant for large global organizations such as financial and medical institutions, as well as government departments, that need to control data in the whole stack and are prohibited from using managed cloud services.
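The geopolitical sharding mentioned above rests on MongoDB's zone sharding: shards in a region are tagged with a zone, and shard key ranges are pinned to it. Here is a minimal sketch using standard admin commands through PyMongo; the shard, zone, namespace, and shard key are hypothetical.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connect to a mongos router
admin = client.admin

# Tag a shard whose hardware lives in an EU data center (name is hypothetical).
admin.command("addShardToZone", "shard-eu-1", zone="EU")

# Assuming app.users is sharded on {"country": 1}, pin German users to the
# EU zone so their documents physically stay on EU shards (a GDPR concern).
admin.command(
    "updateZoneKeyRange",
    "app.users",
    min={"country": "DE"},
    max={"country": "DF"},  # exclusive upper bound: covers exactly "DE"
    zone="EU",
)
```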

Microservices in SAP HANA XSA: A Guide to REST APIs Using Node.js

Build enterprise-grade microservices in SAP HANA extended application services, advanced model (XSA). This book explains how to build scalable APIs in XSA and the benefits of building microservices with SAP HANA XSA. It covers the Cloud Foundry (CF) architecture and how SAP HANA XSA follows that model. It begins with the details of the different architectural layers of applications hosted in XSA (specifically, microservices). Everything you need to know is presented, including analyzing requests, modularization, database ingestion, building JSON responses, and scaling your microservices. You will learn to use development tools such as the SAP Web IDE, Postman, and the SAP HANA Cockpit for XSA, with debugging examples and code snippets showing how microservices can be developed, debugged, scaled, and deployed on SAP HANA XSA. The microservices material is divided into security and authentication, request handling, modularization of Node.js, interaction with the SAP HANA database containers, and response formatting. An end-to-end scenario is presented of a Node.js REST API that uses HTTP methods, concluding with deploying an SAP HANA XSA project to a production environment. This book is simple enough to help you implement a Node.js module in order to understand the development of microservices, and complex enough for architects to design their next business-ready solution integrating UAA security, application modularization, and an end-to-end REST API on SAP HANA XSA.

What You Will Learn:
Know the definition and architecture of Cloud Foundry and its application on SAP HANA XSA
Understand REST principles and different HTTP methods
Explore microservices (Node.js) development
Interact with the database from Node.js (executing SQL statements and stored procedures)

Who This Book Is For:
Architects designing business-ready solutions that integrate UAA security, application modularization, and an end-to-end REST API on SAP HANA XSA
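The book's code is Node.js on XSA, but the REST principles it teaches are language-agnostic. Purely as an illustration of the GET/POST pattern it describes, here is a minimal Python sketch; Flask stands in for the XSA runtime, and the products resource and its fields are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
products = {1: {"id": 1, "name": "Widget"}}  # stand-in for a HANA DB container

@app.get("/products/<int:pid>")
def get_product(pid):
    # GET is safe and idempotent: it reads state and never modifies it.
    item = products.get(pid)
    return (jsonify(item), 200) if item else (jsonify(error="not found"), 404)

@app.post("/products")
def create_product():
    # POST creates a resource and returns 201 plus its new location.
    body = request.get_json()
    pid = max(products) + 1
    products[pid] = {"id": pid, **body}
    return jsonify(products[pid]), 201, {"Location": f"/products/{pid}"}

if __name__ == "__main__":
    app.run(port=8080)
```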

Data Management at Scale

As data management and integration continue to evolve rapidly, storing all your data in one place, such as a data warehouse, is no longer scalable. In the very near future, data will need to be distributed and available for several technological solutions. With this practical book, you’ll learn how to migrate your enterprise from a complex and tightly coupled data landscape to a more flexible architecture ready for the modern world of data consumption. Executives, data architects, analytics teams, and compliance and governance staff will learn how to build a modern scalable data landscape using the Scaled Architecture, which you can introduce incrementally without a large upfront investment. Author Piethein Strengholt provides blueprints, principles, observations, best practices, and patterns to get you up to speed.

Examine data management trends, including technological developments, regulatory requirements, and privacy concerns
Go deep into the Scaled Architecture and learn how the pieces fit together
Explore data governance and data security, master data management, self-service data marketplaces, and the importance of metadata

SQL Injection Strategies

SQL Injection Strategies is the go-to guide for understanding and mastering the concepts and practical aspects of SQL injection. You will comprehensively learn about the processes to identify vulnerabilities in web applications and databases, how to safely test for SQL injection, and strategies to defend against such attacks. The book balances theory and practice effectively, offering tools and techniques for both learning and application.

What this Book will help me do:
Gain a firm understanding of what SQL injection is and how it affects web and mobile applications.
Learn to set up a safe and effective environment for practicing SQL injection techniques.
Discover manual and tool-assisted methods for testing and performing SQL injection.
Understand defense measures to mitigate and defend against SQL injection vulnerabilities.
Be able to apply SQL injection knowledge to secure various systems including web, mobile, and IoT platforms.

Author(s): Galluccio, Gabriele Lombari, and their co-authors are seasoned professionals with extensive experience in cybersecurity and web application development. Their expertise in identifying system vulnerabilities and devising comprehensive defense mechanisms is well-recognized. This book reflects their commitment to teaching practical security techniques needed in today's technology-driven world.

Who is it for? This book is designed for penetration testers, cybersecurity enthusiasts, ethical hackers, and technology practitioners seeking to understand SQL injection. Beginners with no prior experience in SQL injection as well as intermediate-level users looking to deepen their knowledge will find value. It's ideal for anyone looking for practical, hands-on guidance in securing applications and learning about common vulnerabilities.
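The core vulnerability and its standard defense fit in a few lines. The sketch below, using Python's built-in SQLite driver, shows how string concatenation lets a payload rewrite the query while a parameterized query neutralizes it; the table and payload are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "x' OR '1'='1"  # classic injection payload

# VULNERABLE: concatenation lets the payload rewrite the query's logic.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", rows)   # returns every row

# SAFE: a parameterized query treats the payload as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```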

The Financial Times is increasing its digital revenue by allowing business people to make data-driven decisions. Providing an Airflow-based platform where data engineers, data scientists, BI experts, and others can run language-agnostic jobs was a huge win. One of the most successful steps in the platform's development was building our own execution environment, allowing stakeholders to self-deploy jobs without cross-team dependencies, on top of the unlimited scale of Kubernetes. In this talk we share how we have integrated and extended Airflow at the Financial Times. The main topics we cover include:
Providing team-level security isolation
Removing cross-team dependencies
Creating an execution environment for independently creating and deploying R, Python, Java, Spark, and other jobs
Reducing latency when sharing data between task instances
Integrating all these features on top of Kubernetes
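The talk's code is not public, but a common way to get this kind of language-agnostic, self-deployed execution on Kubernetes is Airflow's KubernetesPodOperator, where each team ships its own container image. Here is a minimal sketch under that assumption; the namespace, image, and command are hypothetical.

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

with DAG("team_job", start_date=datetime(2020, 1, 1), schedule_interval="@daily") as dag:
    # Each team ships its own image, so the job can be written in R, Python,
    # Java, Spark, or anything else that runs in a container.
    run_job = KubernetesPodOperator(
        task_id="run_team_job",
        name="team-job",
        namespace="team-a",  # hypothetical per-team namespace for isolation
        image="registry.example.com/team-a/job:latest",  # hypothetical image
        cmds=["Rscript", "/app/job.R"],
        get_logs=True,
    )
```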

Astronomer is focused on improving Airflow's user experience through the entire lifecycle — from authoring and testing DAGs, to building containers and deploying the DAGs, to running and monitoring both the DAGs and the infrastructure they operate within — with an eye towards increased security and governance as well. In this talk we walk you through some current UX challenges, give an overview of how the Astronomer platform addresses the major ones, and provide a sneak peek at the things we're working on in the coming months to improve Airflow's user experience. This is a sponsored talk, presented by Astronomer.

As the field of data science grows in popularity, companies find themselves in need of a single common language that can connect their data science teams and data infrastructure teams. Data scientists want rapid iteration, infrastructure engineers want monitoring and security controls, and product owners want their solutions deployed in time for quarterly reports. This talk discusses how to build an Airflow-based data platform that can take advantage of popular ML tools (Jupyter, TensorFlow, Spark) while creating an easy-to-manage, easy-to-monitor ecosystem for data infrastructure and support teams. In this talk, we will take an idea from a single-machine Jupyter notebook to a cross-service Spark + TensorFlow pipeline, to a canary-tested, production-ready model served on Google Cloud Functions. We will show how Apache Airflow can connect all layers of a data team to deliver rapid results.
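The pipeline itself is not published, but the progression described maps naturally onto a linear Airflow DAG. Here is a minimal sketch under that assumption; every task, script path, and deploy step is hypothetical.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG("ml_pipeline", start_date=datetime(2020, 1, 1), schedule_interval=None) as dag:
    # Run the exploratory notebook as a batch job (papermill is one option).
    run_notebook = BashOperator(
        task_id="run_notebook",
        bash_command="papermill /nb/explore.ipynb /nb/out.ipynb",
    )
    # Feature engineering on Spark, then TensorFlow training (hypothetical scripts).
    train_model = BashOperator(
        task_id="train_model",
        bash_command="spark-submit /jobs/features.py && python /jobs/train_tf.py",
    )
    # Canary deployment; a wrapper script would call gcloud to update the function.
    deploy_canary = BashOperator(
        task_id="deploy_canary",
        bash_command="bash /jobs/deploy_canary.sh",
    )
    run_notebook >> train_model >> deploy_canary
```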

In the contemporary world, security matters more than ever, and Airflow installations are no exception. Google Cloud Platform and Cloud Composer offer useful security options for running your DAGs and tasks in a way that lets you effectively manage the risk of data exfiltration and keep access to the system limited. This is a sponsored talk, presented by Google Cloud.

In this talk, we share the lessons learned while building a scheduler-as-a-service leveraging Apache Airflow to achieve improved stability and security for one of the largest gaming companies. The platform integrates with different data sources and meets varied SLAs across workflows owned by multiple game studios. In particular, we present a comprehensive self-serve Airflow architecture with multi-tenancy, automatic DAG generation, and SSO integration, with improved ease of deployment. Within Electronic Arts, to provide scheduler-as-a-service and to support hundreds of thousands of execution workflows, each team requires an isolated environment with access to a central data lake containing several petabytes of anonymized player and game metrics. Leveraging Airflow, each team is provided a private code repository and namespace with which they can deploy their DAGs at their own behest. To support agile development cycles, a private testing sandbox and auto-deployment to an isolated multi-tenant Airflow platform have been made available to game studios. In production, a single Dockerized Airflow deployment on Kubernetes is utilized to ensure high availability and single-step deployment. Custom SSO integration and RBAC-based operator and sensor whitelisting allow for secure logical isolation. In addition, providing dynamic DAG instantiation capability helps address varied SLAs during game launch seasons that are staggered through a financial year.
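"Dynamic DAG instantiation" here generally means generating DAG objects in a loop from per-tenant configuration. The sketch below shows that standard pattern; the studio names and schedules are hypothetical, not EA's actual setup.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical per-tenant configuration; in practice this might be loaded
# from YAML files checked into each team's private repository.
STUDIOS = {
    "studio_a": {"schedule": "@hourly"},
    "studio_b": {"schedule": "@daily"},
}

for studio, cfg in STUDIOS.items():
    dag = DAG(
        dag_id=f"{studio}_metrics",
        start_date=datetime(2021, 1, 1),
        schedule_interval=cfg["schedule"],
    )
    with dag:
        BashOperator(
            task_id="load_metrics",
            bash_command=f"echo loading metrics for {studio}",
        )
    # Airflow discovers DAGs by scanning module globals, so each generated
    # DAG must be bound to a unique top-level name.
    globals()[dag.dag_id] = dag
```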

In this talk we review how Airflow helped create a tool to detect data anomalies. Leveraging Airflow for process management, database interoperability, and authentication created an easy path forward to achieve scale, decrease development time, and pass security audits. While Airflow is generally looked at as a solution to manage data pipelines, integrating tools with Airflow can also speed up development of those tools. The Data Anomaly Detector was created at One Medical to scan thousands of metrics per day for data anomalies. It's a complicated tool, and much of that complexity was outsourced to Airflow. Because the data infrastructure at One Medical was already built around Airflow, and Airflow had many desirable features, it made sense to build the tool to integrate closely with Airflow. The end result was that more time could be spent on building features for statistical analysis, and less effort had to be spent on database authentication, interoperability, or process management. It's an interesting example of how Airflow can be leveraged to build data-intensive tools.
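The Data Anomaly Detector is internal to One Medical, but the statistical core of such a tool can be as small as a z-score test over a metric's recent history; Airflow then supplies the scheduling, authentication, and process management around it. A hedged sketch of that idea, with a hypothetical window and threshold:

```python
import statistics

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag `latest` if it sits more than `z_threshold` standard deviations
    from the mean of the historical window."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean  # flat history: any change is suspicious
    return abs(latest - mean) / stdev > z_threshold

# Example: daily appointment counts, with today's value suddenly collapsing.
history = [120, 118, 125, 122, 119, 121, 124]
print(is_anomalous(history, 40))   # True: likely a broken upstream pipeline
print(is_anomalous(history, 123))  # False: within normal variation
```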

Summary

The majority of analytics platforms are focused on internal use by business stakeholders within an organization. As the availability of data increases, and overall literacy in how to interpret it and take action on it improves, there is a growing need to bring business intelligence use cases to a broader audience. GoodData is a platform focused on simplifying the work of bringing data to employees and end users. In this episode Sheila Jung and Philip Farr discuss how the GoodData platform is being used, how it is architected to provide scalable and performant analytics, and how it integrates into customers' data platforms. This was an interesting conversation about a different approach to business intelligence and the importance of expanded access to data.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I'm working with O'Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it's now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you've got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don't forget to thank them for their continued support of this show!
GoodData is revolutionizing the way in which companies provide analytics to their customers and partners. Start now with GoodData Free, which makes our self-service analytics platform available to you at no cost. Register today at dataengineeringpodcast.com/gooddata
Your host is Tobias Macey and today I'm interviewing Sheila Jung and Philip Farr about how GoodData is building a platform that lets you share your analytics outside the boundaries of your organization.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by describing what you are building at GoodData and some of its origin story?
The business intelligence market has been around for decades now and there are dozens of options with different areas of focus. What are the factors that might motivate me to choose GoodData over the other contenders in the space?
What are the use cases and industries that you focus on supporting with GoodData?
How has the market of business intelligence tools evolved in recent years?

What are the contributing trends in technology and business use cases that are driving that change?

What are some of the ways that your customers are embedding analytics into their own products?
What are the differences in processing and serving capabilities between an internally used business intelligence tool and one that is used for embedding into externally used systems?

What unique challenges are posed by the embedded analytics use case?
How do you approach topics such as security, access control, and latency in a multitenant analytics platform?

What guidelines have you found to be most useful when addressing the concerns of accuracy and interpretability of the data being presented?
How is the GoodData platform architected?

What are the complexities that you have had to design around in order to provide performant access to your customers' data sources in an interactive use case?
What are the off-the-shelf components that you have been able to integrate into the platform,

Artificial Intelligence in Healthcare

Artificial Intelligence (AI) in Healthcare is more than a comprehensive introduction to artificial intelligence as a tool in the generation and analysis of healthcare data. The book is split into two sections, where the first section describes the current healthcare challenges and the rise of AI in this arena. The ten chapters that follow are written by specialists in each area, covering the whole healthcare ecosystem. First, AI applications in drug design and drug development are presented, followed by applications in the fields of cancer diagnostics, treatment, and medical imaging. Subsequently, the application of AI in medical devices and surgery is covered, as well as remote patient monitoring. Finally, the book dives into the topics of security, privacy, information sharing, health insurance, and legal aspects of AI in healthcare.
Highlights different data techniques in healthcare data analysis, including machine learning and data mining
Illustrates different applications and challenges across the design, implementation, and management of intelligent systems and healthcare data networks
Includes applications and case studies across all areas of AI in healthcare data


Abstract: Hosted by Al Martin, VP, Data and AI Expert Services and Learning at IBM, Making Data Simple provides the latest thinking on big data, A.I., and the implications for the enterprise from a range of experts.

This week on Making Data Simple, we have Priya Srinivasan, Director, IBM Data and AI Expert Labs SWAT. In this week's podcast we talk about Data (the world's new oil), AI (the world's refinery), Cloud (the pipeline), Unified Governance and DataOps, Security, Analytics, and Services.

Show Notes
4:50 - Expert Labs and what it means
5:52 - Al talks about how SWAT was for him
6:10 - Priya discusses SWAT now
7:18 - Priya gives examples of SWAT
9:04 - Deliverables from Expert Labs
13:37 - Al talks about Services
17:30 - Priya talks about solving long-term problems
20:04 - Priya discusses GROW (Guidance, Resources, and Outreach for Women)
21:25 - Al asks Priya what excites her

Linkedin - https://www.linkedin.com/in/sripriya-srinivasan-385a0812/ Twitter - https://twitter.com/Priyavikram2

GROW - https://w3-connections.ibm.com/wikis/home?lang=en-us#!/wiki/W7e7074647e13_420c_9abf_875dd706e4b4/page/Welcome%20to%20GROW%20in%20Hybrid%20Cloud%20-%20Guidance,%20Resources,%20Outreach%20for%20Women%20in%20Hybrid%20Cloud

Connect with the Team Producer Kate Brown - LinkedIn. Producer Michael Sestak - LinkedIn. Producer Meighann Helene - LinkedIn.

Producer Steve Templeton - LinkedIn. Host Al Martin - LinkedIn and Twitter. Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Securing Data on Threat Detection Using IBM Spectrum Scale and IBM QRadar: An Enhanced Cyber Resiliency Solution

Having appropriate storage for hosting business-critical data and advanced Security Information and Event Management (SIEM) software for deep inspection, detection, and prioritization of threats has become a necessity for any business. This IBM® Redpaper publication explains how the storage features of IBM Spectrum® Scale, when combined with the log analysis, deep inspection, and detection of threats provided by IBM QRadar®, help reduce the impact of incidents on business data. Such integration provides an excellent platform for hosting unstructured business data that is subject to regulatory compliance requirements. This paper describes how IBM Spectrum Scale File Audit Logging can be integrated with IBM QRadar. Using IBM QRadar, an administrator can monitor, inspect, detect, and derive insights for identifying potential threats to the data that is stored on IBM Spectrum Scale. When threats are identified, you can quickly act on them to mitigate or reduce the impact of incidents. We further demonstrate how threat detection by IBM QRadar can proactively trigger data snapshots or a cyber resiliency workflow in IBM Spectrum Scale to protect the data during a threat. This paper is intended for chief technology officers, solution engineers, security architects, and systems administrators.
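The reactive step the paper describes, snapshotting data when QRadar flags a threat, can be sketched as a small handler that shells out to Spectrum Scale's mmcrsnapshot command. The filesystem and fileset names below are hypothetical, and a production workflow would be driven by QRadar rules rather than a hand-called function.

```python
import subprocess
from datetime import datetime, timezone

def protect_fileset(filesystem: str, fileset: str) -> None:
    """Create a point-in-time snapshot so data can be restored if the
    detected threat turns out to be destructive (e.g. ransomware)."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    # mmcrsnapshot is the Spectrum Scale CLI for creating snapshots;
    # "-j" scopes the snapshot to a single independent fileset.
    subprocess.run(
        ["mmcrsnapshot", filesystem, f"threat_{stamp}", "-j", fileset],
        check=True,
    )

# Hypothetical hook: called when a QRadar offense flags this fileset.
protect_fileset("gpfs0", "finance_data")
```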

Before the COVID-19 crisis, we were already acutely aware of the need for a broader conversation around data privacy: look no further than the Snowden revelations, Cambridge Analytica, the New York Times Privacy Project, the General Data Protection Regulation (GDPR) in Europe, and the California Consumer Privacy Act (CCPA). In the age of COVID-19, these issues are far more acute. We also know that governments and businesses exploit crises to consolidate and rearrange power, claiming that citizens need to give up privacy for the sake of security. But is this tradeoff a false dichotomy? And what types of tools are being developed to help us through this crisis? In this episode, Katharine Jarmul, Head of Product at Cape Privacy, a company building systems to leverage secure, privacy-preserving machine learning and collaborative data science, will discuss all this and more, in conversation with Dr. Hugo Bowne-Anderson, data scientist and educator at DataCamp.

Links from the show

FROM THE INTERVIEW

Katharine on Twitter
Katharine on LinkedIn
Contact Tracing in the Real World (by Ross Anderson)
The Price of the Coronavirus Pandemic (by Nick Paumgarten)
Do We Need to Give Up Privacy to Fight the Coronavirus? (by Julia Angwin)
Introducing the Principles of Equitable Disaster Response (by Greg Bloom)
Cybersecurity During COVID-19 (by Bruce Schneier)

Optimize the Value of Your Data with Oracle and IBM Flash Storage Solutions

In this multicloud and cognitive era, information continues to grow rapidly. By 2025, IDC says, worldwide data will grow by 61% to 175 zettabytes, with as much of the data residing in data centers as in the cloud. IT environments with Oracle deployments will need to accommodate that data growth, including storing, copying, mirroring, and protecting the data. When IT budgets are constrained but data keeps growing, storage costs can consume more than their fair share of the IT budget. The leading-edge portfolio of storage solutions and essential technologies from IBM® can help organizations stay ahead of the information explosion. Designed with built-in efficiency, these solutions represent preferred practices that address the following main storage objectives for hybrid multicloud environments:
Stop storing so much
Store more with what you have
Move Oracle and related data to balance performance and efficiency
IBM offers true enterprise-class storage support for Oracle deployments at a low total cost of ownership (TCO). With flash disk, tape, storage network hardware, a consolidated management console, software-defined storage solutions, and security software, IBM can provide Oracle customers the full spectrum of products to meet their availability, retention, security, and compliance requirements.

IBM AIX Enhancements and Modernization

This IBM® Redbooks publication is a comprehensive guide that covers the IBM AIX® operating system (OS) layout, capabilities, distinct features, system installation, and maintenance, including AIX security, trusted environment, and compliance integration, together with the benefits of IBM Power Virtualization Management (PowerVM®) and IBM Power Virtualization Center (IBM PowerVC), which include cloud capabilities and automation types. The objective of this book is to introduce the IBM AIX modernization features and integration with different environments:
General AIX enhancements
AIX Live Kernel Update, individually or using the Network Installation Manager (NIM)
AIX security features and integration
AIX networking enhancements
PowerVC integration and features for cloud environments
AIX deployment using IBM Terraform and IBM Cloud Automation Manager
AIX automation using configuration management tools
PowerVM enhancements and features
Latest disaster recovery (DR) solutions
AIX Logical Volume Manager (LVM) and Enhanced Journaled File System (JFS2)
AIX installation and maintenance techniques