talk-data.com
Topic: Infrastructure as Code (IaC), 36 tagged
Learn how to seamlessly modernize a Linux application stack in Azure. Participants will be able to step through the migration process with a Linux/Postgres/Java application and utilize infrastructure as code (IaC) for at-scale migrations.
Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.
In this talk we’ll learn Infrastructure-as-Code by automating the world’s most popular game: Minecraft. Using Packer, Terraform and GitHub Actions, we’ll build a server, configure Linux, provision cloud infrastructure and operate it through GitOps. Finally, we’ll demonstrate how to go beyond automating traditional cloud control planes—automating the Minecraft world itself by using Terraform to build and demolish structures like castles and pyramids before our very eyes!
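To make that last claim concrete, here is a minimal sketch of what "Terraform managing the Minecraft world" can look like, assuming the community HashiCraft Minecraft provider; the provider source, resource name, and attributes below are illustrative assumptions, not a confirmed API:

```hcl
terraform {
  required_providers {
    minecraft = {
      # Community demo provider; source address is an assumption.
      source = "HashiCraft/minecraft"
    }
  }
}

variable "rcon_password" {
  type      = string
  sensitive = true
}

provider "minecraft" {
  # The provider talks to the server over RCON: the same server we built
  # with Packer and provisioned with Terraform earlier in the talk.
  address  = "localhost:25575"
  password = var.rcon_password
}

# Placing a block is declarative, just like placing a VM: terraform apply
# builds the structure, terraform destroy demolishes it before our eyes.
resource "minecraft_block" "capstone" {
  material = "minecraft:gold_block"
  position {
    x = 100
    y = 80
    z = -45
  }
}
```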
How do you deliver a modern, governed analytics stack to dozens of independent and competing companies, each with their own priorities, budgets, and data platforms? SpareBank 1 built a platform-as-a-service using Infrastructure as Code to provision dbt environments on demand. This session shares how SB1 modernized legacy data warehouses and scaled dbt across a network of banks, offering lessons for any organization supporting multiple business units or regions.
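The on-demand provisioning pattern described above might look roughly like this in Terraform: one reusable module stamped out per member bank, so each bank gets an isolated dbt environment from the same governed codebase (the module path, bank IDs, and inputs are hypothetical):

```hcl
module "dbt_environment" {
  # for_each instantiates the module once per bank; adding a bank to the
  # list is all it takes to provision a new environment.
  for_each = toset(["bank_nord", "bank_ost", "bank_vest"]) # hypothetical IDs

  source = "./modules/dbt-environment" # hypothetical internal module

  bank_id     = each.key
  environment = "prod"
}
```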
Pulumi is the Infrastructure as Code platform that boosts your team’s capacity using familiar programming languages, helping you ship faster, scale confidently, and enable secure developer self-service. Built on 8+ years of infrastructure intelligence, Pulumi combines proven expertise with AI that works at your pace, under your control.
Workshop focused on scaling the internal developer platform (IDP) for AI workloads and extending infrastructure as code (IaC).
Deploying AI models efficiently and consistently is a challenge many organizations face. This session will explore how Vizient built a standardized MLOps stack using Databricks and Azure DevOps to streamline model development, deployment and monitoring. Attendees will gain insights into how Databricks Asset Bundles were leveraged to create reproducible, scalable pipelines and how Infrastructure-as-Code principles accelerated onboarding for new AI projects. The talk will cover:
* End-to-end MLOps stack setup, ensuring efficiency and governance
* CI/CD pipeline architecture, automating model versioning and deployment
* Standardizing AI model repositories, reducing development and deployment time
* Lessons learned, including challenges and best practices
By the end of this session, participants will have a roadmap for implementing a scalable, reusable MLOps framework that enhances operational efficiency across AI initiatives.
Westat, a leader in data-driven research for 60+ years, has implemented a centralized Databricks platform to support hundreds of research projects for government, foundations, and private clients. This initiative modernizes Westat’s technical infrastructure while maintaining rigorous statistical standards and streamlining data science. The platform enables isolated project environments with strict data boundaries, centralized oversight, and regulatory compliance. It allows project-specific customization of compute and analytics, and delivers scalable computing for complex analyses. Key features include config-driven Infrastructure as Code (IaC) with Terragrunt, custom tagging and AWS cost integration for ROI tracking, budget policies with alerts for proactive cost management, and a centralized dashboard with row-level security for self-service cost analytics. This unified approach provides full financial visibility and governance while empowering data teams to deliver value. Audio for this session is delivered in the conference mobile app; you must bring your own headphones to listen.
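As a rough illustration of the config-driven Terragrunt setup described above, each research project could be a small terragrunt.hcl that inherits shared settings from the repository root and supplies only project-specific inputs (the module path, input names, and values are hypothetical):

```hcl
# terragrunt.hcl for one research project (one folder per project).
include "root" {
  # Inherit remote state, provider, and shared tagging config from the root.
  path = find_in_parent_folders()
}

terraform {
  source = "../../modules//project-workspace" # hypothetical shared module
}

inputs = {
  project_id         = "study-0042"
  cost_center        = "research-ops" # feeds custom tagging and AWS cost reporting
  monthly_budget_usd = 5000           # drives the proactive budget alerts
}
```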
The role of data teams and data engineers is evolving. No longer just pipeline builders or dashboard creators, today’s data teams must drive business strategy, enable automation, and scale with growing demands. Best practices from the DevOps movement in software engineering (Agile development, CI/CD, and Infrastructure-as-Code) are gradually making their way into data engineering. We believe these changes have led to the rise of DataOps and a new wave of best practices that will transform the discipline of data engineering. But how do you transform a reactive team into a proactive force for innovation? We’ll explore the key principles for building a resilient, high-impact data team: structuring for collaboration, testing, automation, and leveraging modern orchestration tools. Whether you’re leading a team or looking to future-proof your career, you’ll walk away with actionable insights on how to stay ahead in the rapidly changing data landscape.
This session shows how engineers can use Gemini Cloud Assist and Gemini Code Assist to speed up the software development life cycle (SDLC) and improve service quality. You’ll learn how to shorten release cycles; improve delivery quality with best practices and generated code, including tests and infrastructure as code (IaC); and gain end-to-end visibility into service setup, consumption, cost, and observability. In a live demo, we’ll showcase the integrated flow and highlight code generation with GitLab and Jira integration. And we’ll show how Gemini Cloud Assist provides deeper service-quality insights.
How to successfully migrate petabyte-scale Cassandra clusters to Spanner without requiring code changes. The use case addresses various lifecycle aspects, including IaC, containerization, gradual migration, performance testing, security, centralized observability, and multi-region operations.
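For the IaC slice specifically, the target side of such a migration can be declared with the Google provider’s Spanner resources; a minimal sketch, with names, region configuration, and capacity chosen purely for illustration:

```hcl
resource "google_spanner_instance" "target" {
  name             = "cassandra-migration-target"
  config           = "nam-eur-asia1" # a multi-region instance configuration
  display_name     = "Cassandra migration target"
  processing_units = 1000
}

resource "google_spanner_database" "keyspace" {
  instance = google_spanner_instance.target.name
  name     = "keyspace1" # illustratively, one database per migrated keyspace
}
```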
This hands-on lab equips you with the essential skills to manage and automate your infrastructure using Terraform. Learn to define and provision infrastructure resources across various providers using HashiCorp Configuration Language (HCL). Explore core concepts like resource dependencies and understand how to safely build, change, and destroy infrastructure using Terraform's declarative approach. This hands-on experience will empower you to streamline deployments, enhance consistency, and improve overall infrastructure management efficiency.
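A taste of what the lab covers: in HCL, resource dependencies are usually implicit. Because the subnet below references the VPC’s ID, Terraform knows to create the VPC first and destroy it last (AWS resources chosen only as a familiar example):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  # Implicit dependency: referencing aws_vpc.main.id orders creation.
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```

The same declarative workflow then drives the whole lifecycle: terraform plan to preview changes, terraform apply to build or change infrastructure, and terraform destroy to tear everything down.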
If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!
Automation enables engineering teams to reduce duplication of effort and build consistency around various processes, especially observability. Providing out-of-the-box solutions and using infrastructure as code are some of the ways to automate your systems so all of your teams can onboard and get all the right features. In this talk, we will discuss how Datadog and Project44 have created self-service platforms so their engineers can automatically obtain observability into their systems.
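The out-of-the-box observability idea might translate into Terraform roughly like this, using the Datadog provider’s monitor resource; the query, thresholds, and service variable are illustrative assumptions:

```hcl
variable "service_name" {
  type    = string
  default = "checkout" # hypothetical service being onboarded
}

# A baseline monitor every onboarding team gets automatically from the
# self-service platform, instead of hand-building it in the UI.
resource "datadog_monitor" "high_error_rate" {
  name    = "High error rate on ${var.service_name}"
  type    = "metric alert"
  query   = "sum(last_5m):sum:trace.http.request.errors{service:${var.service_name}}.as_count() > 50"
  message = "Error rate is elevated on ${var.service_name}. @slack-oncall"

  monitor_thresholds {
    critical = 50
  }
}
```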
This session is hosted by a Google Cloud Next sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.
Get ready to dive into the world of DevOps & Cloud tech! This session will help you navigate the complex world of Cloud and DevOps with confidence. This session is ideal for new grads, career changers, and anyone feeling overwhelmed by the buzz around DevOps. We'll break down its core concepts, demystify the jargon, and explore how DevOps is essential for success in the ever-changing technology landscape, particularly in the emerging era of generative AI. A basic understanding of software development concepts is helpful, but enthusiasm to learn is most important.
Vishakha is a Senior Cloud Architect at Google Cloud Platform with over 8 years of DevOps and Cloud experience. Prior to Google, she was a DevOps engineer at AWS and a Subject Matter Expert (SME) for the IaC offering CloudFormation in the NorthAm region. She has experience in diverse domains including Financial Services, Retail, and Online Media. She primarily focuses on Infrastructure Architecture, Design & Automation (IaC), Public Cloud (AWS, GCP), Kubernetes/CNCF tools, Infrastructure Security & Compliance, CI/CD & GitOps, and MLOps.
It’s time for another episode of the Data Engineering Central Podcast. In this episode, we cover:
* AWS Lambda + DuckDB and Delta Lake (Polars, Daft, etc.)
* IaC - Long Live Terraform
* Databricks Data Quality with DQX
* Unity Catalog releases for DuckDB and Polars
* Bespoke vs. Managed Data Platforms
* Delta Lake vs. Iceberg and UniForm for a single table
Thanks for b…
In this talk, data engineers from AB CarVal will discuss how to orchestrate jobs efficiently and on time for business-critical data that arrives on an irregular cadence, why Infrastructure-as-Code is important, and how to extend it to your dbt Cloud jobs running on Snowflake.
Speaker: Rafael Cohn-Gruenwald, Sr. Data Engineer, Alliance Bernstein
Read the blog to learn about the latest dbt Cloud features announced at Coalesce, designed to help organizations embrace analytics best practices at scale https://www.getdbt.com/blog/coalesce-2024-product-announcements
Since the beginning of 2024, the Warner Brothers Discovery team supporting the CNN data platform has been undergoing an extensive migration project from dbt Core to dbt Cloud. Concurrently, the team is also segmenting their project into multi-project frameworks utilizing dbt Mesh. In this talk, Zachary will review how this transition has simplified data pipelines, improved pipeline performance and data quality, and made data collaboration at scale more seamless.
He'll discuss how dbt Cloud features like the Cloud IDE, automated testing, documentation, and code deployment have enabled the team to standardize on a single developer platform while also managing dependencies effectively. He'll share details on how the automation framework they built using Terraform streamlines dbt project deployments with dbt Cloud to a "push-button" process. By leveraging an infrastructure as code experience, they can orchestrate the creation of environment variables, dbt Cloud jobs, Airflow connections, and AWS secrets with a unified approach that ensures consistency and reliability across projects.
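A sketch of what that push-button pattern can look like with the dbt Cloud Terraform provider; the resource and attribute names below are indicative of the provider’s shape rather than exact, and all IDs are placeholders:

```hcl
variable "dbt_project_id"     { type = number }
variable "dbt_environment_id" { type = number }

# Environment variables and jobs are declared once and stamped out
# consistently for every project, instead of being clicked together.
resource "dbtcloud_environment_variable" "target_schema" {
  project_id = var.dbt_project_id
  name       = "DBT_TARGET_SCHEMA" # dbt Cloud env vars are DBT_-prefixed
  environment_values = {
    "project" = "analytics"
    "Prod"    = "analytics_prod"
  }
}

resource "dbtcloud_job" "daily_build" {
  project_id     = var.dbt_project_id
  environment_id = var.dbt_environment_id
  name           = "Daily build"
  execute_steps  = ["dbt build"]
  triggers = {
    github_webhook       = false
    git_provider_webhook = false
    schedule             = true
  }
}
```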
Speakers: Mamta Gupta, Staff Analytics Engineer, Warner Brothers Discovery
Zachary Lancaster, Manager, Data Engineering, Warner Brothers Discovery
There are undoubtedly similarities between the disciplines of analytics engineering and DevOps: in fact, dbt was founded with the goal of helping data professionals embrace DevOps principles as part of the data workflow. As the embedded DevOps engineer for a mature analytics engineering function, Katie Claiborne, Founding Analytics Engineer at Duet, observed parallels between analytics-as-code and infrastructure-as-code, particularly tools like Terraform. In this talk, she'll examine how analytics engineering is a means of empowerment for data practitioners and discuss infrastructure engineering as a means of scaling dbt Cloud deployments. Learn about similarities between analytics and infrastructure configuration tools, how to apply the concepts you've learned about analytics engineering towards new disciplines like DevOps, and how to extend engineering principles beyond data transformation and into the world of infrastructure.
Speaker: Katie Claiborne, Founding Analytics Engineer, Duet