Topic: DevOps (216 items tagged)
Top Events
Discussion on how Amazon Q changes the DevOps workflow by enabling AI-powered automation and orchestration in the development lifecycle.
Outlines the technical foundations and key components required to implement smarter, GenAI-driven automation across modern DevOps pipelines.
A practical, hands-on 1-hour workshop that teaches how to visualize, analyze, and optimize workflows using Value Stream Mapping (VSM). Participants map a delivery workflow, identify bottlenecks, and apply lean principles to deliver more value to the customer. The session includes a hands-on mapping exercise, group discussion on constraints and solutions, and action planning for next steps.
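To make the workshop's arithmetic concrete, here is a minimal Python sketch of the flow-efficiency calculation behind VSM; the step names and times are hypothetical examples, not workshop material:

```python
# Minimal sketch of the arithmetic behind Value Stream Mapping (VSM).
# Step names and times are hypothetical examples, not from the workshop.

steps = [
    # (name, value-adding process time in hours, waiting time in hours)
    ("Write code", 6, 2),
    ("Code review", 1, 24),   # long queue -> likely bottleneck
    ("Test", 4, 8),
    ("Deploy", 0.5, 16),
]

process_time = sum(p for _, p, _ in steps)
lead_time = sum(p + w for _, p, w in steps)

# Flow efficiency: the share of total lead time spent adding value.
flow_efficiency = process_time / lead_time
print(f"Lead time: {lead_time:.1f} h, flow efficiency: {flow_efficiency:.0%}")

# The largest wait marks the first constraint to attack.
bottleneck = max(steps, key=lambda s: s[2])
print(f"Largest wait: {bottleneck[0]} ({bottleneck[2]} h queued)")
```

A low flow efficiency with one dominant wait is the classic VSM signal that improving a single hand-off beats optimizing any individual work step.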
In five years, DCMR transformed from a small data team into a professional data organization. Pieter Vreeburg shares how they achieved this with Agile, DevOps, and tooling such as ownR, including lessons, successes, and challenges.
Practical strategies to help you design, optimize, and operate MongoDB deployments for performance, resilience, and growth.

Key Features:
- Identify and fix performance bottlenecks with practical diagnostic and optimization strategies
- Optimize schema design, indexing, storage, and system resources for real-world workloads
- Scale confidently with in-depth coverage of replication, sharding, and cluster management techniques
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description: With data as the new competitive edge, performance has become the need of the hour. As applications handle exponentially growing data and user demand for speed and reliability rises, three industry experts distill their decades of experience to offer you guidance on designing, building, and operating databases that deliver fast, scalable, and resilient experiences. MongoDB’s document model and distributed architecture provide powerful tools for modern applications, but unlocking their full potential requires a deep understanding of architecture, operational patterns, and tuning best practices. This MongoDB book takes a hands-on approach to diagnosing common performance issues and applying proven optimization strategies, from schema design and indexing to storage engine tuning and resource management. Whether you’re optimizing a single replica set or scaling a sharded cluster, this book provides the tools to maximize deployment performance. Its modular chapters let you explore query optimization, connection management, and monitoring, or follow a complete learning path to build a rock-solid performance foundation. With real-world case studies, code examples, and proven best practices, you’ll be ready to troubleshoot bottlenecks, scale efficiently, and keep MongoDB running at peak performance in even the most demanding production environments.

What you will learn:
- Diagnose and resolve common performance bottlenecks in deployments
- Design schemas and indexes that maximize throughput and efficiency
- Tune the WiredTiger storage engine and manage system resources for peak performance
- Leverage sharding and replication to scale and ensure uptime
- Monitor, debug, and maintain deployments proactively to prevent issues
- Improve application responsiveness through client driver configuration

Who this book is for: This book is for developers, database administrators, system architects, and DevOps engineers focused on performance optimization of MongoDB. Whether you’re building high-throughput applications, managing deployments in production, or scaling distributed systems, you’ll gain actionable insights. Basic knowledge of MongoDB is assumed, with chapters designed progressively to support learners at all levels.
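As a taste of the diagnostic workflow the book describes, here is a minimal PyMongo sketch, with a hypothetical collection and field, that uses explain() to check whether a query is served by an index before and after creating one:

```python
# Minimal PyMongo sketch: diagnose a query with explain(), then add an index.
# Connection string, database, collection, and field names are hypothetical.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Inspect the winning plan: COLLSCAN means a full collection scan.
plan = orders.find({"customer_id": 42}).explain()
print(plan["queryPlanner"]["winningPlan"])

# A matching index lets the planner use IXSCAN instead.
orders.create_index([("customer_id", ASCENDING)])
plan = orders.find({"customer_id": 42}).explain()
print(plan["queryPlanner"]["winningPlan"])
```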
In this hands-on 2-hour workshop, you'll map a delivery workflow, identify bottlenecks, and apply lean principles to improve your delivery’s effectiveness.
A practical deep-dive into Azure DevOps pipelines, the Azure CLI, and how to combine pipeline, Bicep, and Python templates to build a fully automated web app deployment system. Deploying a new proof-of-concept app within an actual enterprise environment has never been faster.
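For a rough sense of the glue code involved, here is a hedged Python sketch that shells out to the Azure CLI to deploy a Bicep template; the resource group, template file, and parameter names are hypothetical, not taken from the session:

```python
# Hedged sketch: drive an Azure CLI Bicep deployment from Python,
# roughly what a pipeline step might wrap. All names are hypothetical.
import subprocess

def deploy_web_app(resource_group: str, template: str, app_name: str) -> None:
    # `az deployment group create` compiles and deploys a Bicep file.
    subprocess.run(
        [
            "az", "deployment", "group", "create",
            "--resource-group", resource_group,
            "--template-file", template,
            "--parameters", f"appName={app_name}",
        ],
        check=True,  # fail the pipeline step on a non-zero exit code
    )

if __name__ == "__main__":
    deploy_web_app("rg-poc-demo", "main.bicep", "poc-webapp")
```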
What does AI transformation really look like inside a 180-year-old company? In this episode of Data Unchained, we are joined by Younes Hairej, founder and CEO of Aokumo Inc, a trailblazing company helping enterprises in Japan and beyond bridge the gap between business intent and AI execution. From deploying autonomous AI agents that eliminate the need for dashboards and YAML, to revitalizing siloed, analog systems in manufacturing, Younes shares what it takes to modernize legacy infrastructure without starting over. Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US
#ArtificialIntelligence #EnterpriseAI #AITransformation #Kubernetes #DevOps #GenAI #DigitalTransformation #OpenSourceAI #DataInfrastructure #BusinessInnovation #AIInJapan #LegacyModernization #MetadataStrategy #AIOrchestration #CloudNative #AIAutomation #DataGovernance #MLOps #IntelligentAgents #TechLeadership
Database code often works well in a test environment but ends up causing significant performance issues when rolled out to production. In turn, these performance issues lead to lost productivity, excess resource consumption, and time wasted troubleshooting. But what if we could catch those issues during development? Join Kevin Kline to learn how SolarWinds SQL Sentry can shift performance tuning left into the dev cycle so those issues never occur on production systems in the first place.
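The session centers on SQL Sentry, but the shift-left idea itself can be sketched independently. Below is a hedged pyodbc example, with a hypothetical connection string and query, that fetches SQL Server's estimated plan during development and flags scans before the code ships:

```python
# Hedged sketch of "shift-left" plan checking (not SQL Sentry itself):
# fetch SQL Server's estimated plan in a dev test and flag table scans.
# Connection string, table, and query are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=dev-sql;"
    "DATABASE=AppDb;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# SHOWPLAN_XML makes SQL Server return the estimated plan instead of rows.
cursor.execute("SET SHOWPLAN_XML ON")
cursor.execute("SELECT * FROM dbo.Orders WHERE CustomerName = 'Acme'")
plan_xml = cursor.fetchone()[0]
cursor.execute("SET SHOWPLAN_XML OFF")

# A crude assertion a CI test could make before code reaches production.
if "TableScan" in plan_xml or "Clustered Index Scan" in plan_xml:
    print("WARNING: query plan contains a scan; consider an index.")
```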
Learn the essentials of creating web apps with some of the most popular programming languages. PHP, MySQL, & JavaScript All-in-One For Dummies bundles the essentials of coding in some of the most in-demand web development languages. You'll learn to create your own data-driven web applications and interactive web content. The three powerful languages covered in this book form the backbone of top online apps like Wikipedia and Etsy. Paired with the basics of HTML and CSS (also covered in this All-in-One Dummies guide), you can make dynamic websites with a variety of elements. This book makes it easy to get started. You'll also find coverage of advanced skills, as well as resources you'll appreciate when you're ready to level up.

- Get beginner-friendly instructions and clear explanations of how to program websites in common languages
- Understand the basics of object-oriented programming, interacting with databases, and connecting front- and back-end code
- Learn how to work according to popular DevOps principles, including containers and microservices
- Troubleshoot problems in your code and avoid common web development mistakes

This All-in-One is a great value for new programmers looking to pick up web development skills, as well as those with more experience who want to expand to building web apps.
Extreme weather events threaten industries and economic stability. NOAA’s National Centers for Environmental Information (NCEI) addresses this through the Industry Proving Grounds (IPG), which modernizes data delivery by collaborating with sectors like re/insurance and retail to develop practical, data-driven solutions. This presentation explores IPG’s technical innovations, including implementing Polars for efficient data processing, AWS for scalability, and CI/CD pipelines for streamlined deployment. These tools enhance data accessibility, reduce latency, and support real-time decision-making. By integrating scientific computing, cloud technology, and DevOps, NCEI improves climate resilience and provides a model for leveraging open-source tools to address global challenges.
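To illustrate the kind of lazy, optimized processing Polars enables in such pipelines, here is a minimal sketch; the file path and column names are hypothetical:

```python
# Hedged Polars sketch of lazy, streaming-friendly event processing.
# File path and column names are hypothetical, not NCEI's actual schema.
import polars as pl

losses = (
    pl.scan_csv("storm_events.csv")        # lazy: nothing is read yet
    .filter(pl.col("damage_usd") > 1_000_000)
    .group_by("state", "event_type")
    .agg(
        pl.col("damage_usd").sum().alias("total_damage_usd"),
        pl.len().alias("event_count"),
    )
    .sort("total_damage_usd", descending=True)
)

# The whole pipeline is optimized and executed only on collect().
print(losses.collect().head(10))
```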
Airflow has been used by many companies as a core part of their internal data platform. Would you be interested in finding out how Airflow can play a pivotal role in achieving data engineering excellence and efficiency with a modern data architecture? Let's find out the best practices, tools, and setup needed to run small or big workloads in a stable yet cost-effective way! In this workshop we will review how an organisation can set up a data platform architecture around Airflow and the necessary requirements, covering:

- Airflow and its role in the data platform
- Different ways to organise the Airflow environment to enable scalability and stability
- Useful open-source libraries and custom plugins that allow for efficiency
- How to manage multi-tenancy and cost savings
- Challenges and factors to keep in mind, evaluated using a success matrix

This workshop should suit any architect, data engineer, or DevOps engineer aiming to build or enhance their internal data platform. By the end of this workshop you will have a solid understanding of the initial setup and of ways to optimise further, getting the most out of the tool for your own organisation.
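For orientation, here is a minimal Airflow DAG in the TaskFlow style; the DAG id, schedule, and task bodies are hypothetical placeholders rather than workshop code:

```python
# Minimal Airflow DAG sketch (TaskFlow API). The DAG id, schedule, and
# task bodies are hypothetical placeholders, not from the workshop.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False,
     tags=["platform-demo"])
def example_platform_dag():
    @task
    def extract() -> list[int]:
        return [1, 2, 3]  # stand-in for pulling from a source system

    @task
    def load(rows: list[int]) -> None:
        print(f"loaded {len(rows)} rows")  # stand-in for a warehouse write

    load(extract())

example_platform_dag()
```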
At the enterprise level, managing Airflow deployments across multiple teams can become complex, leading to bottlenecks and slowed development cycles. We will share our journey of decentralizing Airflow repositories to empower data engineering teams with multi-tenancy, clean folder structures, and streamlined DevOps processes. We dive into how restructuring our Airflow architecture and utilizing repository templates allowed teams to generate new data pipelines effortlessly. This approach enables engineers to focus on business logic without worrying about underlying Airflow configurations. By automating deployments and reducing manual errors through CI/CD pipelines, we minimized operational overhead. However, this transformation wasn’t without challenges. We’ll discuss obstacles we faced, such as keeping code, variables, and utility functions consistent across decentralized repositories; ensuring compliance in a multi-tenant environment; and managing the learning curve associated with new workflows. Join us to discover practical insights on how decentralizing Airflow repositories can boost team productivity and adapt to evolving business needs with minimal effort.
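One hedged way to picture the repository-template idea is a shared DAG factory that stamps out the scaffolding so teams only supply business logic; all names below are hypothetical:

```python
# Hedged sketch of the template idea: a shared factory builds the DAG
# scaffolding so teams only declare business logic. Names are hypothetical.
from datetime import datetime
from typing import Callable
from airflow import DAG
from airflow.operators.python import PythonOperator

def make_team_dag(team: str, job: Callable[[], None]) -> DAG:
    # Conventions (id prefix, schedule, tags) live in one place,
    # mirroring what a repository template would stamp out per team.
    dag = DAG(
        dag_id=f"{team}_daily_pipeline",
        schedule="@daily",
        start_date=datetime(2024, 1, 1),
        catchup=False,
        tags=[team, "templated"],
    )
    PythonOperator(task_id="run_job", python_callable=job, dag=dag)
    return dag

def revenue_report() -> None:
    print("team-owned business logic goes here")

dag = make_team_dag("finance", revenue_report)
```

Keeping the factory in a shared library is one way to address the consistency challenge noted above: conventions change in one place instead of in every decentralized repository.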
In this season of the Analytics Engineering podcast, Tristan is digging deep into the world of developer tools and databases. There are few more widely used developer tools than Docker. From its launch back in 2013, Docker has completely changed how developers ship applications. In this episode, Tristan talks to Solomon Hykes, the founder and creator of Docker. They trace Docker's rise from startup obscurity to becoming foundational infrastructure in modern software development. Solomon explains the technical underpinnings of containerization, the pivotal shift from platform-as-a-service to open-source engine, and why Docker's developer experience was so revolutionary. The conversation also dives into his next venture Dagger, and how it aims to solve the messy, overlooked workflows of software delivery. Bonus: Solomon shares how AI agents are reshaping how CI/CD gets done and why the next revolution in DevOps might already be here. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.
Moving AI projects from pilot to production requires substantial effort for most enterprises. AI Engineering provides the foundation for enterprise delivery of AI and generative AI solutions at scale, unifying DataOps, MLOps, and DevOps practices. This session will highlight AI engineering best practices across these dimensions, covering people, processes, and technology.
Deploying AI models efficiently and consistently is a challenge many organizations face. This session will explore how Vizient built a standardized MLOps stack using Databricks and Azure DevOps to streamline model development, deployment and monitoring. Attendees will gain insights into how Databricks Asset Bundles were leveraged to create reproducible, scalable pipelines and how Infrastructure-as-Code principles accelerated onboarding for new AI projects. The talk will cover:

- End-to-end MLOps stack setup, ensuring efficiency and governance
- CI/CD pipeline architecture, automating model versioning and deployment
- Standardizing AI model repositories, reducing development and deployment time
- Lessons learned, including challenges and best practices

By the end of this session, participants will have a roadmap for implementing a scalable, reusable MLOps framework that enhances operational efficiency across AI initiatives.
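As an illustration of what such a CI/CD step might look like, here is a hedged Python wrapper around the Databricks CLI's bundle commands; the target names are hypothetical and the actual Vizient pipeline is not shown here:

```python
# Hedged sketch of a CI step around Databricks Asset Bundles: validate,
# then deploy to a target environment. Target names are hypothetical.
import subprocess

def release_bundle(target: str) -> None:
    # `databricks bundle validate` checks the bundle configuration;
    # `databricks bundle deploy -t <target>` pushes jobs/pipelines there.
    subprocess.run(["databricks", "bundle", "validate", "-t", target], check=True)
    subprocess.run(["databricks", "bundle", "deploy", "-t", target], check=True)

if __name__ == "__main__":
    release_bundle("staging")  # e.g. promoted to "prod" after approval
```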
The session will cover how to use Unity Catalog governed system tables to understand what is happening in Databricks. We will touch on key scenarios for FinOps, DevOps and SecOps to ensure you have a well-observed Data Intelligence Platform. Learn about new developments in system tables and other features that will help you observe your Databricks instance.
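As a flavor of the FinOps scenario, here is a hedged query against the system.billing.usage system table; it assumes a Databricks notebook or job where spark is predefined, and exact columns can vary by release:

```python
# Hedged sketch: querying a Unity Catalog system table for FinOps-style
# observability. Assumes a Databricks notebook/job where `spark` exists;
# exact columns may vary by release.
usage_by_sku = spark.sql("""
    SELECT sku_name,
           usage_date,
           SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_date >= current_date() - INTERVAL 30 DAYS
    GROUP BY sku_name, usage_date
    ORDER BY usage_date, dbus DESC
""")
usage_by_sku.show(20, truncate=False)
```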
In this presentation, we'll show how we achieved a unified development experience for teams working on Mercedes-Benz Data Platforms in AWS and Azure. We will demonstrate how we implemented Azure to AWS and AWS to Azure data product sharing (using Delta Sharing and Cloud Tokens), integration with AWS Glue Iceberg tables through UniForm and automation to drive everything using Azure DevOps Pipelines and DABs. We will also show how to monitor and track cloud egress costs and how we present a consolidated view of all the data products and relevant cost information. The end goal is to show how customers can offer the same user experience to their engineers and not have to worry about which cloud or region the Data Product lives in. Instead, they can enroll in the data product through self-service and have it available to them in minutes, regardless of where it originates.
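To show how a consuming team might pull a shared data product regardless of cloud, here is a hedged sketch using the open-source delta-sharing Python client; the profile file and the share, schema, and table names are hypothetical:

```python
# Hedged sketch of consuming a cross-cloud data product via Delta Sharing.
# The profile file and share/schema/table names are hypothetical.
import delta_sharing

# The profile file holds the share endpoint and bearer token issued to
# the consuming side, regardless of which cloud hosts the data product.
table_url = "config.share#sales_share.analytics.orders"

df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```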