talk-data.com


People (264 results)


Companies (5 results)

Snowflake 203 speakers
Snowflake 1 speaker

Activities & events


This is an online event; the Teams link will be published on the right of this page for those who have registered.

18:30 | From Raw to Refined: Building Production Data Pipelines That Scale - Pradeep Kalluri
19:55 | Prize Draw - Packt eBooks

Session details: From Raw to Refined: Building Production Data Pipelines That Scale - Pradeep Kalluri

Every organization needs to move data from source systems to analytics platforms, but most teams struggle with reliability at scale. In this talk, I'll share the three-zone architecture pattern I use to build production data pipelines that process terabytes daily while maintaining data quality and operational simplicity.

You'll learn:

  • Why the traditional "single pipeline" approach breaks at scale
  • How to structure pipelines using Raw, Curated, and Refined zones
  • Practical patterns for handling batch and streaming data with Kafka and Spark
  • Real incidents and lessons learned from production systems
  • Tools and technologies that work (PySpark, Airflow, Snowflake)

This isn't theory: these are battle-tested patterns from years of building data platforms. Whether you're designing your first data pipeline or scaling an existing platform, you'll walk away with actionable techniques you can apply immediately.
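To make the three-zone idea concrete, here is a minimal sketch of how records might be promoted zone by zone, with quality guarantees tightening at each step. The record shapes, field names, and checks are illustrative assumptions, not the speaker's actual implementation:

```python
# Hypothetical sketch of the Raw -> Curated -> Refined zone pattern.
# Raw keeps everything as ingested; Curated enforces types and quality;
# Refined holds business-ready aggregates.

raw_zone = [
    {"order_id": "1", "amount": "19.99", "country": "GB"},
    {"order_id": "2", "amount": "oops", "country": "GB"},   # bad record
    {"order_id": "3", "amount": "5.00", "country": "DE"},
]

def curate(record):
    """Validate and type a raw record; return None if it fails checks."""
    try:
        return {"order_id": int(record["order_id"]),
                "amount": float(record["amount"]),
                "country": record["country"]}
    except (KeyError, ValueError):
        return None  # in a real pipeline, quarantined rather than silently dropped

curated_zone = [r for r in (curate(x) for x in raw_zone) if r is not None]

# Refined zone: a business-level aggregate (revenue per country).
refined_zone = {}
for r in curated_zone:
    refined_zone[r["country"]] = refined_zone.get(r["country"], 0.0) + r["amount"]

print(refined_zone)  # {'GB': 19.99, 'DE': 5.0}
```

The key property the pattern buys you is that each zone only trusts the invariants established by the zone before it, so failures are caught at the boundary instead of deep inside a monolithic pipeline.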

Speaker: Pradeep Kalluri, Data Engineer | NatWest | Building Scalable Data Platforms

Data Engineer with 3+ years of experience building production data platforms at NatWest, Accenture, and Capgemini. Specialized in cloud-native architectures, real-time processing with Kafka and Spark, and data quality frameworks. Published technical writer on Medium, sharing practical lessons from production systems. Passionate about making data platforms reliable and trustworthy.

(Online) From Raw to Refined: Building Production Data Pipelines That Scale

Important: Register on the event website to receive the joining link (an RSVP on Meetup alone will NOT receive the joining link).

This is a virtual event for our global AI community, so please double-check your local time. Can't make it live? Register anyway to receive the webinar recording.

Description: Welcome to the weekly AI Deep Dive Webinar Series. Join us for deep-dive tech talks on AI, hands-on code labs, workshops, and networking with speakers & fellow developers from all over the world.

Tech Talk: Evaluating AI Agent Reliability
Speakers: Anupam Datta (Snowflake) | Josh Reini (Snowflake)

Abstract: Agents often fail in ways you can’t see. They can return a final answer while taking a broken path: drifting from the goal, making irrational plan jumps, or misusing tools. Was the goal achieved efficiently? Did the plan make sense? Were the right tools used? Did the agent follow through? These hidden mistakes silently rack up compute costs, spike latency, and cause brittle behavior that collapses in production. Traditional evals won’t flag any of it because they only check the output, not the decisions that produced it. This session introduces the Agent GPA (Goal-Plan-Action) framework, available in the open-source TruLens library. Benchmark tests show the Agent GPA framework consistently outperformed standard LLM evaluators, giving teams scalable and trustworthy insight into agent behavior:

  • 95% error detection (vs. 55% baseline methods)
  • 86% accuracy in pinpointing where an error occurred (vs. 49% baseline methods)
  • Human reviewers using the GPA framework caught 100% of the internal agent errors in the TRAIL/GAIA dataset.

You’ll learn how to inspect an agent’s reasoning steps, detect issues like hallucinations, bad tool calls, and missed actions, and leave knowing how to make your agent truly production-ready.
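The core idea of trace-level evaluation can be sketched in plain Python. This is NOT the TruLens API; the trace shape and the three scoring functions below are hypothetical, and only mirror the Goal-Plan-Action framing of scoring the agent's decisions rather than just its final answer:

```python
# Hypothetical Goal-Plan-Action style evaluation: score the agent's trace
# (plan and tool calls), not only its final output. Not the TruLens API.

trace = {
    "goal": "find the 2023 revenue of ACME Corp",
    "plan": ["search filings", "extract revenue", "answer"],
    "actions": [
        {"step": "search filings", "tool": "web_search", "ok": True},
        {"step": "extract revenue", "tool": "calculator", "ok": False},  # misused tool
        {"step": "answer", "tool": None, "ok": True},
    ],
    "final_answer": "$1.2B",
}

def goal_score(trace):
    """Fraction of executed steps that belong to the stated plan (goal alignment)."""
    return sum(a["step"] in trace["plan"] for a in trace["actions"]) / len(trace["actions"])

def plan_score(trace):
    """Did the agent execute the planned steps, in order?"""
    executed = [a["step"] for a in trace["actions"]]
    return 1.0 if executed == trace["plan"] else 0.5

def action_score(trace):
    """Fraction of tool calls that succeeded / used the right tool."""
    return sum(a["ok"] for a in trace["actions"]) / len(trace["actions"])

scores = {"goal": goal_score(trace), "plan": plan_score(trace), "action": action_score(trace)}
print(scores)
```

In this toy trace the final answer looks fine, yet the action score flags the broken tool call: exactly the class of hidden failure that output-only evals miss.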

Speakers/Topics: Stay tuned as we are updating speakers and schedules. If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics


Local and Global AI Community on Discord
Join us on Discord for our local and global AI tech community:

  • Events chat: chat and connect with speakers and global and local attendees
  • Learning AI: events, learning materials, study groups
  • Startups: innovation, project collaborations, founders/co-founders
  • Jobs and Careers: job openings, post resumes, hiring managers
AI Webinar Series (Virtual) - Evaluating AI Agent Reliability


To meet regulatory standards like Solvency II and LDTI, reinsurance firms must simulate how thousands of insurance policies perform under thousands of potential economic futures.

In data terms, this creates a "Cartesian explosion"—multiplying two manageable datasets creates a result set of billions of rows. This massive computational workload frequently overwhelms traditional on-premise infrastructure.

In this session, Zsolt Revay and Thomas Mager detail the engineering behind "Overlay," the internal data application PartnerRe built to solve this scaling challenge. Overlay’s architecture decouples business logic from compute power. We use Datavirtuality as a semantic orchestration layer, while leveraging Snowflake strictly as a heavy calculation engine.

By using an aggressive SQL push-down strategy, we treat the database as a compute cluster. This allows us to process that Cartesian explosion of policy data and stochastic scenarios right where the data lives—without moving it across the network.

Key Technical Discussion Points:

  • Optimisation: Handling massive joins via parallel processing and strategic table materialisation.
  • Performance: Using Elastic Virtual Warehouses to achieve linear scaling, reducing processing time from hours to minutes.
  • Economics: Using Auto-Suspend and Resource Monitors to keep this massive compute power cost-effective.

Attendees will learn how to architect a solution that balances the auditability required by regulators with the on-demand agility required by actuaries.
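A quick back-of-the-envelope calculation shows why the Cartesian product becomes unmanageable. The policy and scenario counts below are illustrative assumptions, not PartnerRe's actual figures:

```python
# Illustrative arithmetic only: hypothetical portfolio and scenario sizes,
# chosen to show how a cross join explodes in volume.
policies = 50_000          # insurance policies in the portfolio
scenarios = 20_000         # stochastic economic scenarios

rows = policies * scenarios            # every policy under every scenario
bytes_per_row = 100                    # rough row-width assumption

print(f"{rows:,} result rows")         # 1,000,000,000 result rows
print(f"~{rows * bytes_per_row / 1e9:.0f} GB uncompressed at {bytes_per_row} bytes/row")
```

Two modest inputs of tens of thousands of rows each yield a billion-row result set, which is why shipping the data to an application server fails and pushing the computation down to where the data lives succeeds.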

Event Details

📍 14th January | Clayton Hotel
18:00 | Food & Drinks 🍽️🥂
18:30 | Scaling Reinsurance Compliance: The Engineering Behind PartnerRe’s "Overlay"
19:15 | Drinks & Networking 🍻

Don’t miss this opportunity to connect with fellow data professionals and gain valuable insights into efficient Snowflake use, Gen AI applications, and proving the value of solid data modeling! See you there!

Speakers: Zsolt Revay - Head of Data Engineering; Thomas Mager - Head of Data and Analytics Platforms

Scaling Reinsurance Compliance: The Engineering Behind PartnerRe’s "Overlay"

Snowflake Users and Future Users!

I can't believe another year has flown by.

Join us for a night of festive cheer, fun networking, and all the data banter you can handle at this Snowflake Social Event!

Connect with fellow data lovers, swap stories, and dive into Snowflake conversations in a laid-back atmosphere.

No presentations, no pitches this time. Just pure data chats, delicious finger food, and drinks to keep the Christmas spirit flowing.

See you there for an evening of laughs, insights, and Snowflake magic! 🎄✨

Agenda

🕕 From 6:00 PM until late

  • Data chats
  • Food and drinks
Snowflake Social - Christmas Party and Data Chats

Data Vault is the foundation for modern, auditable data warehouses—but traditional implementation can be slow, orchestration complex, and deployment cycles lengthy. This session demonstrates how a metadata-driven approach with Stream2Vault (S2V) transforms that process into something fast, intuitive, and adaptable.

Starting from standardized YAML definitions of Hubs, Links, and Satellites, S2V validates, generates, and deploys production-ready code in a matter of minutes. This rapid cycle makes it easy to evolve the model, apply changes, and push them into production without friction. The walkthrough will cover:

  • Their design principles – business data model first.
  • YAML definitions – a simple, consistent way to describe Data Vault objects.
  • Commands in action – validate, generate, and deploy in one streamlined workflow.
  • Snowflake Dynamic Tables & monitoring – enabling near real-time processing and removing the need for complex orchestration.
  • How it embeds in an organization – architecture overview.
  • S2V by the numbers – measurable outcomes from a client story.

By combining rapid implementation with near real-time execution, Stream2Vault brings true agility and consistency to Data Vault projects. The session concludes with results from client implementations, highlighting the measurable gains in speed, adaptability, and operational simplicity.
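The essence of a metadata-driven generator can be sketched in a few lines. The definition fields and the DDL template below are hypothetical, not Stream2Vault's actual format:

```python
# Hypothetical sketch of metadata-driven Data Vault code generation:
# a declarative hub definition is rendered into deployable DDL.
# Field names and the template are illustrative, not S2V's real format.

hub_definition = {
    "name": "hub_customer",
    "business_key": "customer_number",
    "source": "crm.customers",
}

def generate_hub_ddl(defn):
    """Render a CREATE TABLE statement for a Data Vault hub."""
    return (
        f"CREATE TABLE IF NOT EXISTS {defn['name']} (\n"
        f"    hub_key        VARCHAR NOT NULL,   -- hash of the business key\n"
        f"    {defn['business_key']} VARCHAR NOT NULL,\n"
        f"    load_date      TIMESTAMP NOT NULL,\n"
        f"    record_source  VARCHAR NOT NULL    -- e.g. '{defn['source']}'\n"
        f");"
    )

ddl = generate_hub_ddl(hub_definition)
print(ddl)
```

Because the model lives in declarative metadata rather than hand-written SQL, evolving the model means editing one definition and regenerating, which is what makes the validate-generate-deploy cycle fast.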

Speaker Bio: Viktor is a Data and Software Engineer who bridges data architecture and application development. He specializes in Data Vault automation and building efficient cloud data pipelines. As the developer of Stream2Vault (S2V), a Python-based CLI for generating and deploying Data Vault models, he demonstrates how to apply strong engineering principles to Data Vault 2.0 to build scalable, intuitive, and performant data platforms.

Template-Driven Data Vault: A Code Centric Approach to Master Complexity
Event: DSC DACH 25, 2025-12-10

In their tech tutorial, Ved and Ranjan showed how to leverage Snowflake’s AI capabilities to build a semantic layer for trusted, business-ready insights. They demonstrated modeling metrics in Snowflake and enriching them with Cortex AI functions such as classification, summarization, and semantic search. The session also covered enabling natural language querying while maintaining enterprise-grade governance. By the end, participants learned how to unlock the full potential of Snowflake as an AI-driven platform for data-driven decision-making.

This tutorial by Ved Prakash & Ranjan Melanta was held on October 14th at DSC DACH 25 in Vienna.

Follow us on social media : LinkedIn: https://www.linkedin.com/company/11184830/admin/ Instagram: https://www.instagram.com/datasciconf/ Facebook page: https://www.facebook.com/DataSciConference Website: https://datasciconference.com/

In his talk, Ved shared a practical journey of implementing conversational analytics using Snowflake Cortex. He detailed the process from initial concept to full business adoption. Rather than focusing on theory, he highlighted real-world experiences in building and deploying a "talk to your data" solution. The session emphasized how to bridge the gap between technical capabilities and the needs of business stakeholders.

This speech by Ved Prakash was held on October 15th at DSC DACH 25 in Vienna.


Our 13th edition of the Belgium dbt Meetup will be co-hosted together with the Belgium Snowflake User Group!

dbt Meetups are networking events open to all folks working with data! Talks predominantly focus on community members' experience with dbt, however, you'll catch presentations on broader topics such as analytics engineering, data stacks, data ops, modeling, testing, and team structures.

🏠 Venue: Telenet HQ, Liersesteenweg 4, 2800 Mechelen
🤝 Organizers: Belgium Snowflake User Group

Speakers, agenda, RSVP https://usergroups.snowflake.com/events/details/snowflake-belgium-presents-belgium-snowflake-user-group-telenet-data-amp-ai-journey/

We are always looking for speakers! To submit a session for one of the next meetups, please use our Sessionize page. Not sure whether you're ready to give a talk? Check out dbt Labs' guide on how to deliver a fantastic presentation!

➡️ Join the dbt Slack community: https://www.getdbt.com/community/ 🤝 For the best Meetup experience, make sure to join the #local-belgium channel in dbt Slack (https://slack.getdbt.com/)!

dbt is the standard in data transformation, used by over 40,000 organizations worldwide. Through the application of software engineering best practices like modularity, version control, testing, and documentation, dbt’s analytics engineering workflow helps teams work more efficiently to produce data the entire organization can trust.

Learn more: https://www.getdbt.com/

Belgium dbt Meetup #13 - co-hosted with the Belgium Snowflake User Group
Xebia Women in Data third edition