talk-data.com

Topic

Data Governance

data_management compliance data_quality

417 tagged

Activity Trend: peak of 90 activities per quarter (2020-Q1 to 2026-Q1)

Activities

417 activities · Newest first

Building Effective Privacy Programs

Presents a structured approach to privacy management, an indispensable resource for safeguarding data in an ever-evolving digital landscape. In today's data-driven world, protecting personal information has become a critical priority for organizations of all sizes. Building Effective Privacy Programs: Cybersecurity from Principles to Practice equips professionals with the tools and knowledge to design, implement, and sustain robust privacy programs. Seamlessly integrating foundational principles, advanced privacy concepts, and actionable strategies, this practical guide serves as a detailed roadmap for navigating the complex landscape of data privacy. Bridging the gap between theoretical concepts and practical implementation, it combines in-depth analysis with practical insights, offering step-by-step instructions on building privacy-by-design frameworks, conducting privacy impact assessments, and managing compliance with global regulations. In-depth chapters feature real-world case studies and examples that illustrate the application of privacy practices in a variety of scenarios, complemented by discussions of emerging trends such as artificial intelligence, blockchain, IoT, and more. Providing timely and comprehensive coverage of privacy principles, regulatory compliance, and actionable strategies, Building Effective Privacy Programs:
- Addresses all essential areas of cyberprivacy, from foundational principles to advanced topics
- Presents detailed analysis of major laws, such as GDPR, CCPA, and HIPAA, and their practical implications
- Offers strategies to integrate privacy principles into business processes and IT systems
- Covers industry-specific applications for the healthcare, finance, and technology sectors
- Highlights successful privacy program implementations and lessons learned from enforcement actions
- Includes glossaries, comparison charts, sample policies, and additional resources for quick reference
Written by seasoned professionals with deep expertise in privacy law, cybersecurity, and data protection, Building Effective Privacy Programs: Cybersecurity from Principles to Practice is a vital reference for privacy officers, legal advisors, IT professionals, and business executives responsible for data governance and regulatory compliance. It is also an excellent textbook for advanced courses in cybersecurity, information systems, business law, and business management.

AI is moving fast, but are organizations prepared to keep up? In this episode, data professional Laura Madsen joins us to unpack why most companies are lagging behind, how tech debt is holding businesses back, and why knowledge graphs are the way forward. Join us for a bold conversation on why the AI revolution needs better data governance, not just bigger models.
What You'll Learn:
- Who's thriving in disruption, which industries embrace AI, and why others are stuck
- The hidden cost of tech debt, and why most organizations avoid real transformation
- The power of knowledge graphs, and why they're the key to making AI work at scale
- What AI still can't do for us, and the gaps we need to fill with human expertise
Follow Laura on LinkedIn! Register for free to be part of the next live session: https://bit.ly/3XB3A8b

This presentation explores the evolution of data platforms—from data warehouses to data lakes—and highlights why the data mesh has emerged as an alternative approach for scalable, flexible architectures. Learn how Amazon DataZone addresses the complexities of data governance in a data mesh, simplifying federated governance, enabling secure access control, and automating metadata management.

Business intelligence has been transforming organizations for decades, yet many companies still struggle with widespread adoption. With less than 40% of employees in most organizations having access to BI tools, there's a significant 'information underclass' making decisions without data-driven insights. How can businesses bridge this gap and achieve true information democracy? While new technologies like generative AI and semantic layers offer promising solutions, the fundamentals of data quality and governance remain critical. What balance should organizations strike between investing in innovative tools and strengthening their data infrastructure? How can you ensure your business becomes a 'data athlete' capable of making hyper-decisive moves in an uncertain economic landscape?
Howard Dresner is founder and Chief Research Officer at Dresner Advisory Services and a leading voice in Business Intelligence (BI), credited with coining the term "Business Intelligence" in 1989. He spent 13 years at Gartner as lead BI analyst, shaping its research agenda and earning recognition as Analyst of the Year, Distinguished Analyst, and Gartner Fellow. He also led Gartner's BI conferences in Europe and North America. Before founding Dresner Advisory in 2007, Howard was Chief Strategy Officer at Hyperion Solutions, where he drove strategy and thought leadership, helping position Hyperion as a leader in performance management prior to its acquisition by Oracle. Howard has written two books, The Performance Management Revolution: Business Results through Insight and Action and Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change, both published by John Wiley & Sons.
In the episode, Richie and Howard explore the surprisingly low penetration of business intelligence in organizations, the importance of data governance and infrastructure, the evolving role of AI in BI, the strategic initiatives driving BI usage, and much more.
Links Mentioned in the Show:
- Dresner Advisory Services
- Howard's book: Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change
- Connect with Howard
- Skill Track: Power BI Fundamentals
- Related episode: The Next Generation of Business Intelligence with Colin Zima, CEO at Omni
- Rewatch RADAR AI
New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

As modern data ecosystems grow in complexity, ensuring transparency, discoverability, and governance in data workflows becomes critical. Apache Airflow, a powerful workflow orchestration tool, enables data engineers to build scalable pipelines, but without proper visibility into data lineage, ownership, and quality, teams risk operating in a black box. In this talk, we will explore how integrating Airflow with a data catalog can bring clarity and transparency to data workflows. We’ll discuss how metadata-driven orchestration enhances data governance, enables lineage tracking, and improves collaboration across teams. Through real-world use cases, we will demonstrate how Airflow can automate metadata collection, update data catalogs dynamically, and ensure data quality at every stage of the pipeline. Attendees will walk away with practical strategies for implementing a transparent data workflow that fosters trust, efficiency, and compliance in their data infrastructure.
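To make this concrete, here is a minimal sketch of metadata-driven orchestration in Airflow: a DAG that scans table metadata and pushes it to a catalog's ingestion endpoint. The endpoint URL and payload shape are assumptions for illustration, not any specific catalog's API.

    from datetime import datetime

    import requests
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["governance"])
    def catalog_sync():
        @task
        def extract_table_metadata() -> list:
            # In production this would introspect information_schema or the
            # warehouse API; hardcoded here to keep the sketch self-contained.
            return [{"table": "orders", "columns": ["id", "amount"], "owner": "sales"}]

        @task
        def push_to_catalog(tables: list) -> None:
            for table in tables:
                # Placeholder endpoint, not a specific product's API.
                requests.post("https://catalog.example.com/api/v1/tables", json=table, timeout=10)

        push_to_catalog(extract_table_metadata())

    catalog_sync()

Because each run records what was scanned and when, the catalog stays current without manual curation.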

Metadata management is a cornerstone of effective data governance, yet it presents unique challenges distinct from traditional data engineering. At scale, efficiently extracting metadata from relational and NoSQL databases demands specialized solutions. To address this, our team has developed custom Airflow operators that scan and extract metadata across various database technologies, orchestrating 100+ production jobs to ensure continuous and reliable metadata collection. Now, we’re expanding beyond databases to tackle non-traditional data sources such as file repositories and message queues. This shift introduces new complexities, including processing structured and unstructured files, managing schema evolution in streaming data, and maintaining metadata consistency across heterogeneous sources. In this session, we’ll share our approach to building scalable metadata scanners, optimizing performance, and ensuring adaptability across diverse data environments. Attendees will gain insights into designing efficient metadata pipelines, overcoming common pitfalls, and leveraging Airflow to drive metadata governance at scale.
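A custom scanner operator along the lines described might look like the sketch below; the class name, constructor arguments, and dispatch table are illustrative rather than the team's actual implementation.

    from airflow.models.baseoperator import BaseOperator

    class MetadataScanOperator(BaseOperator):
        # Hypothetical operator: one class, many source technologies.
        def __init__(self, connection_id: str, source_type: str, **kwargs):
            super().__init__(**kwargs)
            self.connection_id = connection_id
            self.source_type = source_type

        def execute(self, context):
            # Dispatch to a scanner per source technology; real scanners would
            # introspect information_schema, sample collections, or read topic schemas.
            scanners = {
                "postgres": self._scan_relational,
                "mongodb": self._scan_document_store,
            }
            return scanners[self.source_type]()

        def _scan_relational(self):
            raise NotImplementedError("query information_schema via a DB hook")

        def _scan_document_store(self):
            raise NotImplementedError("sample documents to infer schemas")

Parameterizing a single operator this way is what makes orchestrating 100+ scan jobs tractable: each job is just a different configuration of the same building block.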

This talk explores EDB’s journey from siloed reporting to a unified data platform, powered by Airflow. We’ll delve into the architectural evolution, showcasing how Airflow orchestrates a diverse range of use cases, from Analytics Engineering to complex MLOps pipelines. Learn how EDB leverages Airflow and Cosmos to integrate dbt for robust data transformations, ensuring data quality and consistency. We’ll provide a detailed case study of our MLOps implementation, demonstrating how Airflow manages training, inference, and model monitoring pipelines for Azure Machine Learning models. Discover the design considerations driven by our internal data governance framework and gain insights into our future plans for AIOps integration with Airflow.
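For readers unfamiliar with Cosmos, the dbt integration mentioned above typically reduces to a few lines; the project path and profile names below are placeholders, not EDB's configuration.

    from datetime import datetime

    from cosmos import DbtDag, ProfileConfig, ProjectConfig

    transformations = DbtDag(
        dag_id="analytics_dbt",
        project_config=ProjectConfig("/opt/airflow/dbt/analytics"),
        profile_config=ProfileConfig(
            profile_name="analytics",
            target_name="prod",
            profiles_yml_filepath="/opt/airflow/dbt/analytics/profiles.yml",
        ),
        schedule="@daily",
        start_date=datetime(2025, 1, 1),
        catchup=False,
    )

Cosmos renders each dbt model as its own Airflow task, so model-level dependencies, retries, and lineage show up directly in the Airflow UI.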

At OLX, we connect millions of people daily through our online marketplace while relying on robust data pipelines. In this talk, we explore how the DAG Factory concept elevates data governance, lineage, and discovery by centralizing operator logic and restricting direct DAG creation. This approach enforces code quality, optimizes resources, maintains infrastructure hygiene and enables smooth version upgrades. We then leverage consistent naming conventions in Airflow to build targeted namespaces, aligning teams with global policies while preserving autonomy. Integrating external tools like AWS Lake Formation and Open Metadata further unifies governance, making it straightforward to manage and secure data. This is critical when handling hundreds or even thousands of active DAGs. If the idea of storing 1,600 pipelines in one folder seems overwhelming, join us to learn how the DAG Factory concept simplifies pipeline management. We’ll also share insights from OLX, highlighting how thoughtful design fosters oversight, efficiency, and discoverability across diverse use cases.
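A stripped-down version of the DAG Factory idea, with hypothetical team and pipeline names, might look like this:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.empty import EmptyOperator

    def build_dag(team: str, name: str, schedule: str) -> DAG:
        # The factory owns DAG construction, so naming conventions
        # (<team>__<pipeline>) and defaults are enforced in one place.
        dag = DAG(
            dag_id=f"{team}__{name}",
            schedule=schedule,
            start_date=datetime(2025, 1, 1),
            catchup=False,
            tags=[team],
        )
        with dag:
            EmptyOperator(task_id="start")  # shared operator logic would go here
        return dag

    # Teams declare pipelines as data instead of writing DAG files directly.
    SPECS = [
        {"team": "payments", "name": "daily_revenue", "schedule": "@daily"},
        {"team": "search", "name": "clickstream_agg", "schedule": "@hourly"},
    ]
    for spec in SPECS:
        globals()[f"{spec['team']}__{spec['name']}"] = build_dag(**spec)

The team__pipeline naming convention is what later enables namespace-level policies and discovery across thousands of DAGs.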

Kasriel Kay, leading data democratization at Velotix, joined Yuliia and Dumke to challenge conventional wisdom about data governance and catalogs. Kasriel argues that data catalogs provide visibility but fail to deliver business value, comparing them to "buying JIRA and expecting agile practices." He advocates for shifting from restrictive data governance to data enablement through policy-based access control that considers user attributes, data sensitivity, and business context. Kasriel explains how AI-driven policy engines can learn from organizational behavior to automatically grant appropriate data access while maintaining compliance, ultimately reducing time-to-insight and unlocking missed business opportunities.
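In spirit, policy-based access control replaces per-dataset grants with a decision function over attributes. The sketch below is a deliberately simplified illustration of that idea, not Velotix's engine:

    def allow_access(user: dict, asset: dict, purpose: str) -> bool:
        # Illustrative attribute-based checks; a production policy engine
        # evaluates many more attributes and can learn thresholds over time.
        if asset["sensitivity"] == "pii" and not user.get("privacy_trained", False):
            return False
        if purpose not in asset.get("approved_purposes", []):
            return False
        return user["region"] in asset.get("allowed_regions", [user["region"]])

    user = {"role": "analyst", "region": "EU", "privacy_trained": True}
    asset = {"sensitivity": "pii", "approved_purposes": ["churn_analysis"], "allowed_regions": ["EU"]}
    print(allow_access(user, asset, "churn_analysis"))  # True

An AI-driven engine would tune such rules from observed access patterns rather than hand-coding them.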

Based on a large retail project, discover how to evolve an IT system built through incremental layers (monoliths, [micro]services, streaming, governance, applications…) into a data-centric, real-time, high-performance system, where governance, rule and data catalogs, data mesh, and scalability are integrated by design—not as add-on layers.

As organisations scale AI and move towards Data Products, success depends on trusted, high-quality data underpinned by strong governance. In this fireside chat, Chemist Warehouse shares how domain-aligned metadata, data quality, and governance, powered by Alation, enable a unified delivery framework using Critical Data Elements (CDEs) to reduce risk, drive self-service, and build a foundation for AI-ready analytics and future data product initiatives.

Three out of four companies are betting big on AI – but most are digging on shifting ground. In this $100 billion gold rush, none of these investments will pay off without data quality and strong governance – and that remains a challenge for many organizations. Not every enterprise has a solid data governance practice and maturity models vary widely. As a result, investments in innovation initiatives are at risk of failure. What are the most important data management issues to prioritize? See how your organization measures up and get ahead of the curve with Actian.

A great data system begins with a clear vision, is shaped through measurable milestones, and ultimately proves its worth through meaningful use.
Join us as we explore the journey of building the National Disability Data Asset, a groundbreaking initiative linking data about Australians with disability in a way never done before. This world-first effort brings together Commonwealth, State, and Territory data across most subject domains to improve the lives of Australians with disability. It's a story of bold ambition, shaped by the voices of users and by data governance, and challenged by the complexity of data sharing in 21st-century Australia under 20th-century legislation.

Discover how Australia's national public broadcaster accelerated its data governance journey through a people- and process-led approach. This session highlights how effective change management, cultural alignment, and data literacy helped embed accountability, foster trust, and drive maturity across all aspects of data governance in a complex federated organisation. Explore the transformative impact of this people-led approach and gain insights into effective strategies for fostering accountability and trust.

The modern data stack has transformed how organizations work with data, but are our BI tools keeping pace with these changes? As data schemas become increasingly fluid and analysis needs range from quick explorations to production-grade reporting, traditional approaches are being challenged. How can we create analytics experiences that accommodate both casual spreadsheet users and technical data modelers? With semantic layers becoming crucial for AI integration and data governance growing in importance, what skills do today's BI professionals need to master? Finding the balance between flexibility and governance is perhaps the greatest challenge facing data teams today.
Colin Zima is the Co-Founder and CEO of Omni, a business intelligence platform focused on making data more accessible and useful for teams of all sizes. Prior to Omni, he was Chief Analytics Officer and VP of Product at Looker, where he helped shape the product and data strategy leading up to its acquisition by Google for $2.6 billion. Colin's background spans roles in data science, analytics, and product leadership, including positions at Google, HotelTonight, and as founder of the restaurant analytics startup PrimaTable. He holds a degree in Operations Research and Financial Engineering from Princeton University and began his career as a Structured Credit Analyst at UBS.
In the episode, Richie and Colin explore the evolution of BI tools, the challenges of integrating casual and rigorous data analysis, the role of semantic layers, and the impact of AI on business intelligence. They discuss the importance of understanding business needs, creating user-focused dashboards, the future of data products, and much more.
Links Mentioned in the Show:
- Omni
- Connect with Colin
- Skill Track: Design in Power BI
- Related episode: Self-Service Business Intelligence with Sameer Al-Sakran, CEO at Metabase
- Register for RADAR AI - June 26
New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

Beyond Chatbots: Building Autonomous Insurance Applications With Agentic AI Framework

The insurance industry is at the crossroads of digital transformation, facing challenges from market competition and customer expectations. While conventional ML applications have historically provided capabilities in this domain, the emergence of Agentic AI frameworks presents a revolutionary opportunity to build truly autonomous insurance applications. We will address issues related to data governance and quality while discussing how to monitor and evaluate fine-tuned models. We'll demonstrate the application of the agentic framework in the insurance context and show how these autonomous agents can work collaboratively to handle complex insurance workflows, from submission intake and risk evaluation to expedited quote generation. This session demonstrates how to architect intelligent insurance solutions using Databricks Mosaic AI agentic core components, including Unity Catalog, Playground, model evaluation/guardrails, privacy filters, AI functions, and AI/BI Genie.
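Framework specifics aside, the collaborative workflow described (intake, risk evaluation, quoting) can be pictured with a minimal, framework-agnostic sketch; all names and numbers below are illustrative, not the session's actual code:

    from dataclasses import dataclass

    @dataclass
    class Submission:
        applicant: str
        coverage: float

    def intake_agent(raw: dict) -> Submission:
        # Normalize and validate the incoming submission.
        return Submission(applicant=raw["applicant"], coverage=float(raw["coverage"]))

    def risk_agent(sub: Submission) -> float:
        # Stand-in for a model call returning a risk score in [0, 1].
        return min(sub.coverage / 1_000_000, 1.0)

    def quote_agent(sub: Submission, risk: float) -> dict:
        base_rate = 0.02  # illustrative premium rate
        return {"applicant": sub.applicant, "premium": round(sub.coverage * base_rate * (1 + risk), 2)}

    sub = intake_agent({"applicant": "ACME Logistics", "coverage": 250_000})
    print(quote_agent(sub, risk_agent(sub)))

In an agentic framework, each function becomes an autonomous agent with its own tools, memory, and guardrails, coordinated dynamically rather than hard-wired in sequence.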

This session is repeated. This introductory workshop caters to data engineers seeking hands-on experience and data architects looking to deepen their knowledge. The workshop is structured to provide a solid understanding of the following data engineering and streaming concepts:
- Introduction to Lakeflow and the Data Intelligence Platform
- Getting started with Lakeflow Declarative Pipelines for declarative data pipelines in SQL using Streaming Tables and Materialized Views (see the sketch after this description)
- Mastering Databricks Workflows with advanced control flow and triggers
- Understanding serverless compute
- Data governance and lineage with Unity Catalog
- Generative AI for data engineers: Genie and Databricks Assistant
We believe you can only become an expert if you work on real problems and gain hands-on experience. Therefore, we will equip you with your own lab environment in this workshop and guide you through practical exercises like using GitHub, ingesting data from various sources, creating batch and streaming data pipelines, and more.
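As a taste of the declarative pipeline style listed above: the workshop covers the SQL form, but the same pattern in the dlt Python API (the Delta Live Tables lineage of Lakeflow Declarative Pipelines) looks roughly like the sketch below. Paths and column names are illustrative, and the spark session is supplied by the pipeline runtime.

    import dlt
    from pyspark.sql.functions import col, sum as sum_

    @dlt.table(comment="Orders ingested incrementally from cloud storage")
    def raw_orders():
        # Auto Loader picks up new files as they land; the path is a placeholder.
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/landing/orders")
        )

    @dlt.table(comment="Revenue per day, kept up to date as data arrives")
    def daily_revenue():
        return (
            dlt.read("raw_orders")
            .where(col("amount") > 0)
            .groupBy("order_date")
            .agg(sum_("amount").alias("revenue"))
        )

You declare the tables and their dependencies; the platform handles orchestration, retries, and incremental computation.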

Sponsored by: Immuta | Protecting People Data: How Shell Empowers HR to Drive a Brighter Future

HR departments increasingly rely on data to improve workforce planning and experiences. However, managing and getting value from this data can be challenging, especially given the complex technology landscape and the need to ensure data security and compliance. Shell has placed a high priority on safeguarding its people data while empowering its HR department with the tools and access they need to make informed decisions. This session will explore the transformation of Shell's Central Data Platform, starting with their HR use case. You'll hear about:
- The role of automation and data governance, quality, and literacy in Shell's strategy
- Why they chose Databricks and Immuta for enhanced policy-based access control
- The future for Shell and their vision for a data marketplace to truly embrace a culture of global data sharing
The result? A robust, scalable HR Data Platform that is securely driving a brighter future for Shell and its employees.