talk-data.com

Activities & events

This month we’re delighted to be joined by Tesco’s Head of Data and we’ll also hear about the latest in AI Governance on Databricks from the Advancing Analytics team!

17:30 - 18:00: Arrival & Networking
18:00 - 18:10: Opening Remarks & Introductions

18:10 - 18:40: Building Sustainable Data Products with AI Readiness at Scale for Cyber - Varun S Gangoor, Head of Data at Tesco
In this session, we’ll explore how sustainable data product design can enable AI readiness and drive innovation in cybersecurity. We’ll look at the key principles behind building scalable, secure and reusable data products that support cyber analytics, machine learning and GenAI use cases at scale. We’ll also discuss practical approaches to data governance, automation and platform engineering that help cyber teams turn data into actionable intelligence and strengthen overall cyber resilience.

18:40 - 19:10: Governing AI In Databricks: Building Trust And Compliance At Scale - Gavi Regunath, Advancing Analytics CAIO and Databricks MVP, joined by Terry McCann, Advancing Analytics CEO
AI governance isn’t optional - it’s foundational. With 45% of organisations concerned about data accuracy and bias, and 40% worried about privacy, the need for robust oversight is clear. From model safety to compliance tooling, we’ll show how to embed trust into your AI systems, without slowing innovation.

19:00 onwards: Pizza, Drinks & Networking
Enjoy some delicious pizza and beverages while networking with peers.

Join us for a fantastic evening of learning and networking at the London Databricks meetup!

November 2025 - Databricks Meetup
Emilie Nenquin – Head of Data & Intelligence @ VRT, Stijn Dolphen – Team Lead & Analytics Engineer @ Dataroots

In this episode, we explore how public media can build scalable, transparent, and mission-driven data infrastructure - with Emilie Nenquin, Head of Data & Intelligence at VRT, and Stijn Dolphen, Team Lead & Analytics Engineer at Dataroots. Emilie shares how she architected VRT’s data transformation from the ground up: evolving from basic analytics to a full-stack data organization with 45+ specialists across engineering, analytics, AI, and user management. We dive into the strategic shift from Adobe Analytics to Snowplow, and what it means to own your data pipeline in a public service context. Stijn joins to unpack the technical decisions behind VRT’s current architecture, including real-time event tracking, metadata modeling, and integrating 70+ digital platforms into a unified ecosystem.

💡 Topics include:

  • Designing data infrastructure for transparency and scale
  • Building a modular, privacy-conscious analytics stack
  • Metadata governance across fragmented content systems
  • Recommendation systems for discovery, not just engagement
  • The circular relationship between data quality and AI performance
  • Applying machine learning in service of cultural and civic missions

Whether you're leading a data team, rethinking your stack, or exploring ethical AI in media, this episode offers practical insights into how data strategy can align with public value.
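For readers curious what the Snowplow side of that shift looks like in practice, the core idea is the self-describing event: every tracked event carries a reference to the JSON schema that validates it. The sketch below builds such a payload in plain Python; the media schema URI and field names are hypothetical, not VRT's actual tracking plan.

```python
import json
import time
import uuid

# Hypothetical schema URI and fields: illustrative only, not VRT's real tracking plan.
VIDEO_PLAY_SCHEMA = "iglu:org.example.media/video_play/jsonschema/1-0-0"

def build_self_describing_event(schema: str, data: dict) -> dict:
    """Wrap event data in a Snowplow-style self-describing envelope."""
    return {
        "schema": "iglu:com.snowplowanalytics.snowplow/unstruct_event/jsonschema/1-0-0",
        "data": {"schema": schema, "data": data},
    }

event = build_self_describing_event(
    VIDEO_PLAY_SCHEMA,
    {
        "event_id": str(uuid.uuid4()),
        "content_id": "vrt-news-2025-001",   # hypothetical identifier
        "platform": "web",
        "timestamp_ms": int(time.time() * 1000),
    },
)

print(json.dumps(event, indent=2))  # the payload a tracker would send to the collector
```

Because the schema travels with the event, downstream validation and metadata governance can reject or quarantine malformed events before they pollute the warehouse.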

Adobe Analytics AI/ML Analytics Data Quality Snowplow
DataTopics: All Things Data, AI & Tech
Global Azure 2025-05-08 · 09:00

We’re thrilled to be part of the Global Azure worldwide series, bringing the energy to London on Wednesday, May 8th! In partnership with the Microsoft Azure Community User Group, join us for a day of in-person tech talks, learning, and networking with some of the brightest minds in the Azure ecosystem.

Agenda & Sessions

10:00 - 10:10 – Welcome & Opening Remarks

10:10 - 10:40 – Jonah Andersson, Senior Azure Consultant | Microsoft MVP | MCT. Topic: The practical uses, benefits, and enhanced flexibility you and your development team can achieve by creating event-driven and cloud-native applications, with the added advantage of hosting them on containers in the Azure cloud platform.

Build .NET applications on Azure Functions with a twist. Learn how to combine serverless development with today's cloud-native technologies to deliver microservices and event-driven, cloud-native apps.

Expect insights from a seasoned DevSecOps expert and community leader with a deep focus on secure cloud development.
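The session itself is .NET-focused, but the event-driven, serverless shape it describes is easy to picture in a few lines. Below is a minimal, illustrative sketch using the Azure Functions Python v2 programming model; the route name and payload fields are made up for the example.

```python
# A minimal HTTP-triggered function (function_app.py) using the Azure Functions
# Python v2 programming model. Illustrative only: the session is .NET-focused,
# and this sketch just shows the serverless, event-driven shape of a function
# that could equally be hosted in a container.
import json

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="orders")
def create_order(req: func.HttpRequest) -> func.HttpResponse:
    """Accept an order payload and acknowledge it."""
    try:
        order = req.get_json()
    except ValueError:
        return func.HttpResponse("Expected a JSON body", status_code=400)

    # In a real app this would enqueue a message or write to storage,
    # triggering further event-driven processing downstream.
    return func.HttpResponse(
        json.dumps({"received": order.get("id", "unknown")}),
        mimetype="application/json",
        status_code=202,
    )
```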

10:40 - 11:10 – Matteo, Director of Software Engineering, Avanade UK & Ireland | MVP. Session: Are we still talking about security in your development lifecycle in 2025? An exploration of modern software security—from secret scanning and static code analysis to supply chain management and secure development practices that are still overlooked in many projects today.

11:10 - 11:40 – Clifford, Freelance Developer & Airbus A320 Pilot | MVP. Session: Using AI and Image Recognition to Build an Aircraft. Discover how Azure Cognitive Services and .NET MAUI are used to develop a mobile app for identifying and tracking aircraft components during assembly. A technical session with an aviation twist.

11:40 - 12:10 – Lunch Break

12:10 - 12:40 – Jake Walsh & Steve Brown, Azure Solution Architect & Azure Technical Consultant. Session: How to Ace Those Azure Exams. Get practical guidance, tips, and recommended resources for Azure certification prep across topics like infrastructure, virtual desktops, AI, and more—plus insights on effective learning strategies from two experienced pros.

12:40 - 13:10 – Hejer Krichene, Cloud Solution Expert | MVP. Session: Control Azure Costs Using AI. Learn how to manage and optimize Azure spending using AI-driven tools and predictive analytics. This session covers monitoring, automation, cost forecasting, and identifying optimization opportunities.

13:10 - 13:40 – Marcel Lupo, DevOps & Azure MVP. Session: Transforming DevOps with AI – Insights from Azure AI Studio. See how large language models and Azure AI Studio can enhance DevOps workflows. This session features a case study using Terraform, GitHub Actions, and customized governance to automate and streamline cloud development.

By registering for this in person event you are agreeing to have your e-mail shared with building security for building access. Your privacy is important to us. This privacy statement explains the personal data Microsoft processes, how Microsoft processes it, and for what purposes. You can view our full privacy statement here: https://privacy.microsoft.com/privacystatement

Global Azure

We are thrilled to invite you to our in-person Meetup in London in collaboration with AND Digital!

Following on from the successful API Summit 2024 event, join this session to learn more about the future of APIs and their intersection with AI, plus a look ahead at what the latest advancements in AI, APIs, Microservices and Kubernetes will mean for you.

When: November 14, 18:00 - 20:30

Where: 18 Henrietta Street, London, WC2E 8QH

Agenda
18:00 - 18:30 Welcome, Snacks & Drinks
18:30 - 19:15 API Summit Updates and Recap by Andy Klitovchenko (Sr Solutions Engineer, Kong)

  • Key highlights from the recent API Summit
  • Discussions on major trends and insights, including advancements in API security, management and new technologies

19:15 - 20:00 AI Ethics 101: A practical guide to applied AI Ethics for Technologists by Sidrah Hassan (AI Ethics Consultant, AND Digital)
20:00 - 20:30 Networking, snacks & drinks

Talks

1) API Summit 2024: Kong News and Announcements Recap (blog post)
During the Meetup, we will discuss the following exciting news & announcements:

  • Kong Konnect is the API Platform for AI
  • AI Gateway 3.8: Semantic Caching and Security, New LLM Load-Balancing Algorithms, and More LLMs
  • Kong Gateway 3.8: Enhanced Performance, Comprehensive Security, Extensibility, and Ease of Use
  • Insomnia 10: Unlimited Collection Runner, Invite Control, and AI Runner for Developers
  • Serverless Gateways: Lightweight, Cost-Effective, and Fully Managed Kong Gateways
  • Dedicated Cloud Gateways: Deploy Kong Gateways Anywhere — Now With Azure Support
  • LLM Analytics in Kong Konnect for GenAI Traffic
  • Konnect Service Catalog: Shine a Light on Shadow APIs Lurking in Your IT Infrastructure
  • Kong's New Premium Technology Partner Program Elevates Integration Development
  • Kong Mesh 2.9: Increased Security Configurations and Health Check Capabilities
  • GA Support for Managing Kong with Terraform

2) AI Ethics 101: A practical guide to applied AI Ethics for Technologists
A foundational overview of what AI ethics is for technologists, emphasising key principles and frameworks for addressing ethical dilemmas in AI projects. The session highlights the importance of inclusive design, bias mitigation, and stakeholder engagement, empowering attendees to integrate ethical considerations into their work.

Speakers

Andy Klitovchenko (Sr Solutions Engineer, Kong)
Andrew Klitovchenko is a Senior Solutions Engineer at Kong, a cloud connectivity company. Andrew helps organisations modernise their API and Service Connectivity governance journeys and make their API and Service Connectivity strategies a competitive advantage. Before his current role, in similar positions, he helped large enterprises with their data streaming and analytics, data privacy and application performance monitoring use cases. Outside of work, Andrew likes playing football, hiking, 3D printing, travelling and exploring new countries and cultures.

Sidrah Hassan (AI Ethics Consultant, AND Digital)
Sidrah Hassan is a passionate AI Ethicist on a mission to shape technology for the greater good. With a background spanning user research and product management, Sidrah is dedicated to championing ethical design and implementation. Beyond her role as an AI Ethics Consultant at AND Digital, Sidrah also creates social media content for the BBC, shedding light on both the promises and pitfalls of AI. Through her work, she inspires others to envision a future where AI uplifts humanity rather than hinders it.

See you there! Kong Community Team

London Tech Meetup: AI, APIs & End of Year Insights
Lukas Schulte – Co-founder and CEO @ SDF, Tobias Macey – host

Summary
In this episode of the Data Engineering Podcast Lukas Schulte, co-founder and CEO of SDF, explores the development and capabilities of this fast and expressive SQL transformation tool. From its origins as a solution for addressing data privacy, governance, and quality concerns in modern data management, to its unique features like static analysis and type correctness, Lukas dives into what sets SDF apart from other tools like dbt and SQLMesh. Tune in for insights on building a business around a developer tool, the importance of community and user experience in the data engineering ecosystem, and plans for future development, including supporting Python models and enhancing execution capabilities.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at dataengineeringpodcast.com/datafold today! Your host is Tobias Macey and today I'm interviewing Lukas Schulte about SDF, a fast and expressive SQL transformation tool that understands your schema.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what SDF is and the story behind it?
What's the story behind the name?
What problem are you solving with SDF?
dbt has been the dominant player for SQL-based transformations for several years, with other notable competition in the form of SQLMesh. Can you give an overview of the Venn diagram for features and functionality across SDF, dbt and SQLMesh?
Can you describe the design and implementation of SDF?
How have the scope and goals of the project changed since you first started working on it?
What does the development experience look like for a team working with SDF?
How does that differ between the open and paid versions of the product?
What are the features and functionality that SDF offers to address intra- and inter-team collaboration?
One of the challenges for any second-mover technology with an established competitor is the adoption/migration path for teams who have already invested in the incumbent (dbt in this case). How are you addressing that barrier for SDF?
Beyond the core migration path of the direct functionality of the incumbent product is the amount of tooling and communal knowledge that grows up around that product. How are you thinking about that aspect of the current landscape?
What is your governing principle for what capabilities are in the open core and which go in the paid product?
What are the most interesting, innovative, or unexpected ways that you have seen SDF used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on SDF?
When is SDF the wrong choice?
What do you have planned for the future of SDF?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

SDF
Semantic Data Warehouse
asdf-vm
dbt
Software Linting
SQLMesh (Podcast Episode)
Coalesce (Podcast Episode)
Apache Iceberg (Podcast Episode)
DuckDB (Podcast Episode)
SDF Classifiers
dbt Semantic Layer
dbt expectations
Apache Datafusion
Ibis

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
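The static analysis and type correctness mentioned above are easiest to appreciate with a toy example. The sketch below is not SDF; it uses the open-source sqlglot parser to show the general idea of catching a bad column reference before any SQL ever reaches the warehouse. The upstream schema and model are invented for illustration.

```python
# Toy static check in the spirit of schema-aware SQL transformation tools:
# parse a model's SQL and flag columns the declared upstream schema lacks.
# Built on the open-source sqlglot parser; this is not SDF's implementation.
import sqlglot
from sqlglot import exp

# Hypothetical upstream schema for a model called "orders".
UPSTREAM_SCHEMA = {"orders": {"order_id", "customer_id", "amount", "created_at"}}

model_sql = """
SELECT customer_id,
       SUM(amout) AS total_spend   -- typo: 'amout' does not exist upstream
FROM orders
GROUP BY customer_id
"""

def unknown_columns(sql: str, schema: dict[str, set[str]]) -> list[str]:
    """Return column references that no upstream table defines."""
    tree = sqlglot.parse_one(sql)
    tables = {t.name for t in tree.find_all(exp.Table)}
    known = set().union(*(schema.get(t, set()) for t in tables)) if tables else set()
    return sorted(
        col.name for col in tree.find_all(exp.Column) if col.name not in known
    )

print(unknown_columns(model_sql, UPSTREAM_SCHEMA))  # ['amout']
```

A check like this runs in milliseconds at compile time, which is the appeal of static analysis over discovering the error in a failed warehouse job.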

Data Engineering Data Management Datafold dbt Python SQL SQLMesh
Data Engineering Podcast


In this era of generative AI and large language models (LLMs), organizations must establish robust practices for leveraging AI capabilities responsibly and at scale. The AI Gateway pattern has emerged as a critical solution.

AI is inherently driven by APIs, accelerating API traffic growth. But securely adopting AI across an organization requires addressing cross-cutting concerns like data security, AI governance, multi-model integration, and cost optimization.

Whether you're a developer looking to build multi-AI applications faster, a platform team enabling self-service AI capabilities, or a data scientist needing robust AI workflows - this is a must-attend session to future-proof your AI strategy. Unlock the transformative power of generative AI while ensuring governance and responsible adoption.

9:00am - What is an AI Gateway? by Marco Palladino (CTO, Kong)

10:00am - Down the AI Rabbit Hole by Shane Utt (Staff Software Engineer, Kong)

10:30am - AI Gateway Demo by Jack Tysoe (Field Engineer, Kong)

11:00am - Q&A

===

Talk 1) What is an AI Gateway?

In this session, Marco Palladino, Kong’s CTO & Co-Founder, will introduce you to the AI Gateway pattern - a centralized way for organizations to manage and control their AI models, applications, and services.

After Marco’s introduction, we’ll transition into a fireside chat. We’ll open it up to the audience for anything you want to ask, covering topics such as:

  • Challenges and considerations when implementing an AI Gateway
  • Best practices for AI governance and risk management
  • Future trends and developments in the AI Gateway space
  • Addressing audience questions and concerns

This is your opportunity to ask questions, share your experiences, or raise any additional topics related to AI Gateways and how you’re leveraging AI.

Talk 2) Down the AI Rabbit Hole: Leveraging AI in Your Projects Without Ending Up Lost in Wonderland

Generative AI is transforming the world around us, and is quickly becoming a part of the conversation as we greenfield new features and applications. It is very alluring to deliver AI features into our existing products, and think about new projects we might build around AI. However, you might have already found that the journey into the realm of AI often feels like tumbling down the rabbit hole into wonderland - a maze of complexity and uncertainty.

In this talk we'll dive into some of that complexity and uncertainty and discuss the AI/ML landscape as it is today. We'll discuss practical strategies for experimentation and how to even get started in this space. We'll cover how we've been approaching AI at Kong, and the importance of remembering that these AI services are ultimately served via APIs - and how API management is needed in order to move these projects from experimental to production.
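Since those AI services are consumed over ordinary HTTP APIs, the gateway pattern amounts to pointing every application at one managed endpoint rather than at each provider directly. The sketch below illustrates that client-side shape only; the gateway URL, route names, and auth header are hypothetical placeholders, not Kong's actual configuration.

```python
# Minimal sketch of calling LLM providers through a single gateway endpoint
# instead of each provider's API directly. The gateway URL, route names and
# auth header are hypothetical placeholders, not Kong's actual configuration.
import requests

GATEWAY_URL = "https://ai-gateway.example.internal"   # hypothetical
GATEWAY_KEY = "replace-with-your-gateway-credential"  # hypothetical

def chat(route: str, prompt: str) -> str:
    """Send a chat prompt to whichever model the gateway exposes on `route`.

    Centralising the call like this is what lets the gateway layer apply
    authentication, rate limits, usage analytics and prompt/response
    policies in one place for every model behind it.
    """
    resp = requests.post(
        f"{GATEWAY_URL}/{route}/chat",
        headers={"Authorization": f"Bearer {GATEWAY_KEY}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]

# Application code stays the same whichever provider sits behind the route.
print(chat("gpt-route", "Summarise our API governance policy in one sentence."))
```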

If you're currently looking at how to leverage AI in your projects and how to mitigate complexity and risks, join us to share in the journey and our experiences together. We'll cover how you might avoid falling down the rabbit hole, or maybe discuss the situations where you might just want to visit wonderland after all?


By registering for this event, I understand that my personal information will be processed in accordance with Kong's Privacy Policy.

Kong Developer Day: AI + APIs

Please join the session that is best suited to your time zone; this topic is repeated at two different times. In this session, we will demonstrate how IBM’s Chief Privacy Office has innovated to expand its mature Privacy program into an Integrated Governance Program, adapting existing systems to address both ethical AI and privacy regulations, and move towards a continuous compliance approach.

Presenters: John Bowman, Anaysha Parker

John Bowman joined IBM’s Chief Privacy Office in October 2022 as AI Ethics Market Strategy Lead. His role includes advising on enhancements to support regulatory compliance, helping to create a deployment framework, and commercialisation of select IBM CPO assets. Previously, John was a Senior Principal in Promontory, IBM Consulting; and before that John worked at the UK Ministry of Justice where he was Head of EU and International Data Protection Policy.

Anaysha Parker is a member of the Market Strategy team within IBM’s Chief Privacy Office. Over the last three years, Anaysha has specialized in AI ethics and contributed to the evolution of IBM’s AI Ethics program and use case review process. Today, she focuses on showcasing IBM’s thought leadership around data and AI governance to assist clients in similar journeys. Before working with IBM’s Chief Privacy Office, Anaysha was a transformation consultant in the US Federal Sector.

*** Please join us at the session that is best suited to your time zone. Note that this topic is: 1. Repeated at two different times to accommodate various time zones, because it is 2. Posted simultaneously in multiple meetup groups worldwide *** It is recommended that you register at this Webex link ahead of time to receive a calendar invite and reminder. https://ibm.webex.com/weblink/register/red2c73f5a192662539fe6786c1bc27e6

Implementing Ethical AI and Privacy with an Integrated Governance Program

Please join the session that is best suited to your time zone; this topic is repeated at two different times. In this session, we will demonstrate how IBM’s Chief Privacy Office has innovated to expand its mature Privacy program into an Integrated Governance Program, adapting existing systems to address both ethical AI and privacy regulations, and move towards a continuous compliance approach.

Presenters: John Bowman, Anaysha Parker

John Bowman joined IBM’s Chief Privacy Office in October 2022 as AI Ethics Market Strategy Lead. His role includes advising on enhancements to support regulatory compliance, helping to create a deployment framework, and commercialisation of select IBM CPO assets. Previously, John was a Senior Principal in Promontory, IBM Consulting; and before that John worked at the UK Ministry of Justice where he was Head of EU and International Data Protection Policy.

Anaysha Parker is a member of the Market Strategy team within IBM’s Chief Privacy Office. Over the last three years, Anaysha has specialized in AI ethics and contributed to the evolution of IBM’s AI Ethics program and use case review process. Today, she focuses on showcasing IBM’s thought leadership around data and AI governance to assist clients in similar journeys. Before working with IBM’s Chief Privacy Office, Anaysha was a transformation consultant in the US Federal Sector.

*** Please join us at the session that is best suited to your time zone. Note that this topic is: 1. Repeated at two different times to accommodate various time zones, because it is 2. Posted simultaneously in multiple meetup groups worldwide *** It is recommended that you register at this Webex link ahead of time to receive a calendar invite and reminder. https://ibm.webex.com/weblink/register/rbb11f9f443acdb60dcecdf82aff067ee

Implementing Ethical AI and Privacy with an Integrated Governance Program

Fredrik Forslund – author, Richard Stiennon – author, Russ B. Ernst – author

Design, implement, and integrate a complete data sanitization program. In Net Zeros and Ones: How Data Erasure Promotes Sustainability, Privacy, and Security, a well-rounded team of accomplished industry veterans delivers a comprehensive guide to managing permanent and sustainable data erasure while complying with regulatory, legal, and industry requirements. In the book, you’ll discover the why, how, and when of data sanitization, including why it is a crucial component in achieving circularity within IT operations. You will also learn about future-proofing yourself against security breaches and data leaks involving your most sensitive information—all while being served entertaining industry anecdotes and commentary from leading industry personalities.

The authors also discuss:

  • Several new standards on data erasure, including the soon-to-be-published standards by the IEEE and ISO
  • How data sanitization strengthens a sustainability or Environmental, Social, and Governance (ESG) program
  • How to adhere to data retention policies, litigation holds, and regulatory frameworks that require certain data to be retained for specific timeframes

An ideal resource for ESG, data protection, and privacy professionals, Net Zeros and Ones will also earn a place in the libraries of application developers and IT asset managers seeking a one-stop explanation of how data erasure fits into their data and asset management programs.

Data Data Engineering Data Security & Privacy Cyber Security
O'Reilly Data Engineering Books

Recently there has been a lot of buzz in the data community on the topic of metadata management. It’s often discussed in the context of data discovery, data provenance, data governance, and data privacy. Even Gartner and Forrester have created the new Active Metadata Management and Enterprise Data Fabric categories to highlight the development in this area.

However, metadata management isn’t actually a new problem. It has just taken on a whole new dimension with the widespread adoption of the Modern Data Stack. What used to be a small, esoteric issue that only concerned the core data team has exploded into complex organizational challenges that plague companies large and small.

In this talk, we’ll explain how a Modern Metadata Platform (MMP) can help solve these new challenges and the key ingredients to building a scalable and extensible MMP.
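To make those "key ingredients" a little more concrete, a metadata platform ultimately revolves around a handful of core records: assets, owners, classification tags, and lineage edges. The sketch below is a deliberately tiny illustration of that data model and one governance question it answers; the field names are invented, not the schema of any particular MMP.

```python
# Minimal sketch of the core records a metadata platform tracks: assets,
# ownership, classification tags, and lineage edges. Field names are
# illustrative only, not the schema of any particular product.
from dataclasses import dataclass, field

@dataclass
class Asset:
    urn: str                      # e.g. "warehouse.analytics.orders"
    owner: str
    tags: set[str] = field(default_factory=set)
    upstream: set[str] = field(default_factory=set)   # URNs this asset reads from

catalog: dict[str, Asset] = {}

def register(asset: Asset) -> None:
    catalog[asset.urn] = asset

def downstream_of(urn: str) -> set[str]:
    """Walk lineage edges to find every asset that depends on `urn`."""
    hit, frontier = set(), {urn}
    while frontier:
        frontier = {
            a.urn for a in catalog.values()
            if a.upstream & frontier and a.urn not in hit
        }
        hit |= frontier
    return hit

register(Asset("raw.orders", owner="ingestion", tags={"pii"}))
register(Asset("analytics.orders", owner="analytics", upstream={"raw.orders"}))
register(Asset("bi.revenue_dashboard", owner="finance", upstream={"analytics.orders"}))

# Impact analysis / governance question: what breaks (or should inherit a
# PII tag) if raw.orders changes?
print(downstream_of("raw.orders"))  # {'analytics.orders', 'bi.revenue_dashboard'}
```

Scalability and extensibility then come from how these records are ingested, stored, and queried across many source systems, which is where the platform engineering discussed in the talk comes in.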

Connect with us: Website: https://databricks.com Facebook: https://www.facebook.com/databricksinc Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/data... Instagram: https://www.instagram.com/databricksinc/

Data Governance Databricks Modern Data Stack Fabric
Databricks DATA + AI Summit 2023
Sean Falconer – guest @ Skyflow, Tobias Macey – host

Summary
The best way to make sure that you don’t leak sensitive data is to never have it in the first place. The team at Skyflow decided that the second best way is to build a storage system dedicated to securely managing your sensitive information and making it easy to integrate with your applications and data systems. In this episode Sean Falconer explains the idea of a data privacy vault and how this new architectural element can drastically reduce the potential for making a mistake with how you manage regulated or personally identifiable information.
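The architectural idea is worth spelling out: sensitive values live only inside the vault, and every other system stores an opaque token that it can later exchange, with authorization, for the real value or a masked form. The sketch below is purely illustrative of that tokenization flow and is not Skyflow's API.

```python
# Minimal sketch of the data privacy vault idea: applications keep tokens,
# only the vault holds the raw sensitive values. Purely illustrative; this
# is not Skyflow's API.
import secrets

class PrivacyVault:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}   # token -> raw sensitive value

    def tokenize(self, value: str) -> str:
        token = f"tok_{secrets.token_hex(8)}"
        self._store[token] = value
        return token

    def detokenize(self, token: str, *, authorized: bool) -> str:
        """Return the raw value for authorized callers, a masked form otherwise."""
        value = self._store[token]
        return value if authorized else "*" * max(len(value) - 4, 0) + value[-4:]

vault = PrivacyVault()

# The application database only ever sees the token, never the raw email.
customer_row = {"id": 42, "email_token": vault.tokenize("ada@example.com")}

print(customer_row["email_token"])                                      # tok_...
print(vault.detokenize(customer_row["email_token"], authorized=False))  # masked
print(vault.detokenize(customer_row["email_token"], authorized=True))   # ada@example.com
```

Keeping the raw values behind one audited boundary is what shrinks the compliance surface: analytics, logs, and backups downstream only ever contain tokens.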

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking all of that information into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how you can take advantage of active metadata and escape the chaos.

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I’m interviewing Sean Falconer about the idea of a data privacy vault and how the Skyflow team are working to make it turn-key.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Skyflow is and the story behind it?
What is a "data privacy vault" and how does it differ from strategies such as privacy engineering or existing data governance patterns?
What are the primary use cases and capabilities that you are focused on solving for with Skyflow?

Who is the target customer for Skyflow (e.g. how does it enter an organization)?

How is the Skyflow platform architected?

How have the design and goals of the system changed or evolved over time?

Can you describe the process of integrating with Skyflow at the application level?
For organizations that are building analytical capabilities on top of the data managed in their applications, what are the interactions with Skyflow at each of the stages in the data lifecycle?
One of the perennial problems with distributed systems is the challenge of joining data across machine boundaries. How do you mitigate that problem?
On your website there are different "vaults" advertised in the form of healthcare, fintech, and PII. What are the different requirements across each of those problem domains?

What are the commonalities?

As a relatively new company in an emerging product category, what are some of the customer education challenges that you are facing?
What are the most interesting, innovative, or unexpected ways that you have seen Skyflow used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Skyflow?
When is Skyflow the wrong choice?
What do you have planned for the future of Skyflow?

Contact Info

LinkedIn
@seanfalconer on Twitter
Website

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

Skyflow
Privacy Engineering
Data Governance
Homomorphic Encryption
Polymorphic Encryption

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Airflow Analytics AWS Azure BI BigQuery CI/CD Cloud Computing Data Engineering Data Governance Data Management Data Quality Databricks Datafold dbt GCP Java Kubernetes MongoDB MySQL postgresql Python Scala Cyber Security Snowflake Spark SQL
Steve Touw – guest @ Immuta, Stephen Bailey – guest @ Immuta, Tobias Macey – host

Summary
Data governance is a term that encompasses a wide range of responsibilities, both technical and process oriented. One of the more complex aspects is that of access control to the data assets that an organization is responsible for managing. The team at Immuta has built a platform that aims to tackle that problem in a flexible and maintainable fashion so that data teams can easily integrate authorization, data masking, and privacy enhancing technologies into their data infrastructure. In this episode Steve Touw and Stephen Bailey share what they have built at Immuta, how it is implemented, and how it streamlines the workflow for everyone involved in working with sensitive data. If you are starting down the path of implementing a data governance strategy then this episode will provide a great overview of what is involved.
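Access control of the kind described here typically combines attribute-based policies with masking applied when data is read. The sketch below illustrates that general pattern only; the policy structure and masking rules are invented for the example and are not Immuta's engine.

```python
# Minimal sketch of attribute-based access control with column masking, in
# the spirit of the governance workflow described above. The policy structure
# and masking rules are illustrative, not Immuta's engine.
import hashlib

# Policy: which user attributes unlock which columns, and how to mask otherwise.
POLICY = {
    "email":      {"allow_if": {"pii_approved"}, "mask": "hash"},
    "salary":     {"allow_if": {"finance"},      "mask": "null"},
    "department": {"allow_if": set(),            "mask": None},   # open to everyone
}

def mask_value(value, how):
    if how == "hash":
        return hashlib.sha256(str(value).encode()).hexdigest()[:12]
    if how == "null":
        return None
    return value

def apply_policy(row: dict, user_attrs: set[str]) -> dict:
    """Return the row as this user is allowed to see it."""
    out = {}
    for column, value in row.items():
        rule = POLICY.get(column, {"allow_if": set(), "mask": None})
        allowed = not rule["allow_if"] or (rule["allow_if"] & user_attrs)
        out[column] = value if allowed else mask_value(value, rule["mask"])
    return out

row = {"email": "ada@example.com", "salary": 90000, "department": "research"}
print(apply_policy(row, user_attrs={"finance"}))       # salary visible, email hashed
print(apply_policy(row, user_attrs={"pii_approved"}))  # email visible, salary nulled
```

Expressing the rules as data rather than code is what lets governance teams change policy without rewriting every query or pipeline that touches the tables.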

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Feature flagging is a simple concept that enables you to ship faster, test in production, and do easy rollbacks without redeploying code. Teams using feature flags release new software with less risk, and release more often. ConfigCat is a feature flag service that lets you easily add flags to your Python code, and 9 other platforms. By adopting ConfigCat you and your manager can track and toggle your feature flags from their visual dashboard without redeploying any code or configuration, including granular targeting rules. You can roll out new features to a subset of your users for beta testing or canary deployments. With their simple API, clear documentation, and pricing that is independent of your team size you can get your first feature flags added in minutes without breaking the bank. Go to dataengineeringpodcast.com/configcat today to get 35% off any paid plan with code DATAENGINEERING or try out their free forever plan.

You invest so much in your data infrastructure – you simply can’t afford to settle for unreliable data. Fortunately, there’s hope: in the same way that New Relic, DataDog, and other Application Performance Management solutions ensure reliable software and keep application downtime at bay, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo’s end-to-end Data Observability Platform monitors and alerts for data issues across your data warehouses, data lakes, ETL, and business intelligence. The platform uses machine learning to infer and learn your data, proactively identify data issues, assess its impact through lineage, and notify those who need to know before it impacts the business. By empowering data teams with end-to-end data reliability, Monte Carlo helps organizations save time, increase revenue, and restore trust in their data. Visit dataengineeringpodcast.com/montecarlo today to request a demo and see how Monte Carlo delivers data observability across your data infrastructure.

AI/ML API BI Dashboard Data Engineering Data Governance Data Management Datadog ETL/ELT Kubernetes Monte Carlo New Relic Python
Roberto Maranca – VP of Data Excellence @ Schneider Electric

In this podcast, @RobertoMaranca shared his thoughts on running a large data-driven organization and on the future of data organizations shaped by compliance and privacy. He discussed how businesses can navigate policies like GDPR and prepare themselves for better data transparency and visibility. This podcast is great for leaders of transnational corporations.

TIMELINE: 0:28 Roberto's journey. 8:18 Best practices as a data steward. 16:58 Data leadership and GDPR. 22:18 Impact of GDPR. 25:34 GDPR creating better knowledge archive. 29:27 GDPR and IoT infrastructure. 35:08 Shadow IT phenomenon and consumer privacy. 44:54 Suggestions for enterprises to deal with privacy disruption. 50:52 Data debt. 53:10 Opportunities in new privacy frameworks. 57:52 Roberto's success mantra. 1:02:38 Roberto's favorite reads.

Roberto's Recommended Read:
  • Team of Teams: New Rules of Engagement for a Complex World by General Stanley McChrystal and Tantum Collins https://amzn.to/2kUxW1K
  • Do Androids Dream of Electric Sheep?: The inspiration for the films Blade Runner and Blade Runner 2049 by Philip K. Dick https://amzn.to/2xOOpxZ
  • A Scanner Darkly by Philip K. Dick https://amzn.to/2sAsUMs
  • Other Philip K. Dick Books @ https://amzn.to/2JBwwY0

Podcast Link: https://futureofdata.org/data-leadership-through-privacy-gdpr-by-robertomaranca/

Roberto's BIO: With almost 25 years of experience in the world of IT and Data, Roberto has spent most of his working life with General Electric in their Capital Division, where since 2014, as Chief Data Officer for their International Unit, he has been overseeing the implementation of the Data Governance and Quality frameworks, spanning from supporting risk model validation to enabling divestitures and leading their more recent Basel III data initiatives. For the last year, he has held the role of Chief Data Officer at Lloyds Banking Group, shaping and implementing a new Data Strategy and dividing his time between BCBS 239 and GDPR programs.

Roberto has got a Master’s Degree in Aeronautical Engineering from “Federico II” Naples University.

About #Podcast:

FutureOfData podcast is a conversation starter to bring leaders, influencers, and lead practitioners to discuss their journey to create the data-driven future.

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Big Data Data Governance GDPR/CCPA IoT
The Future of Data Podcast | conversation with leaders, influencers, and change makers in the World of Data & Analytics

An in-depth guide to the changes your organization needs to make to comply with the EU GDPR.

The EU General Data Protection Regulation (GDPR) will supersede the 1995 EU Data Protection Directive (DPD) and all EU member states’ national laws based on it – including the UK Data Protection Act 1998 – in May 2018.

All organizations – wherever they are in the world – that process the personally identifiable information (PII) of EU residents must comply with the Regulation. Failure to do so could result in fines of up to €20 million or 4% of annual global turnover.

US organizations that process EU residents’ personal data can comply with the GDPR via the EU-US Privacy Shield, which replaced the EU-US Safe Harbor framework in 2016. The Privacy Shield is based on the DPD, and will likely be updated once the GDPR is applied in May 2018.

This book provides a detailed commentary on the GDPR, explains the changes you need to make to your data protection and information security regimes, and tells you exactly what you need to do to avoid severe financial penalties.

Product overview

EU GDPR – An Implementation and Compliance Guide is a clear and comprehensive guide to this new data protection law, explaining the Regulation, and setting out the obligations of data processors and controllers in terms you can understand.

Topics covered include:

  • The role of the data protection officer (DPO) – including whether you need one and what they should do.
  • Risk management and data protection impact assessments (DPIAs), including how, when and why to conduct a DPIA.
  • Data subjects’ rights, including consent and the withdrawal of consent; subject access requests and how to handle them; and data controllers’ and processors’ obligations.
  • International data transfers to “third countries” – including guidance on adequacy decisions and appropriate safeguards; the EU-US Privacy Shield; international organizations; limited transfers; and Cloud providers.
  • How to adjust your data protection processes to transition to GDPR compliance, and the best way of demonstrating that compliance.
  • A full index of the Regulation to help you find the articles and stipulations relevant to your organization.

The GDPR will have a significant impact on organizational data protection regimes around the world. EU GDPR – An Implementation and Compliance Guide shows you exactly what you need to do to comply with the new law.

About the authors

IT Governance is a leading global provider of IT governance, risk management, and compliance expertise, and we pride ourselves on our ability to deliver a broad range of integrated, high-quality solutions that meet the real-world needs of our international client base.

Our privacy team – led by Alan Calder, Richard Campo, and Adrian Ross – has substantial experience in privacy, data protection, compliance, and information security. This experience, and our understanding of the background and drivers for the GDPR, are combined in this manual to provide the world’s first guide to implementing the new data protection regulation.

Data Data Engineering Data Security & Privacy EU General Data Protection Regulation (GDPR) Cloud Computing GDPR/CCPA Cyber Security
O'Reilly Data Engineering Books