talk-data.com

Topic

DevOps

software_development it_operations continuous_delivery


Activity Trend

25 peak/qtr (2020-Q1 to 2026-Q1)

Activities

216 activities · Newest first

Enterprises modernizing core systems on Azure face a familiar set of challenges: legacy mainframes, complex code refactors, and fragmented DevOps pipelines. Join Cognition to get a practical view of how autonomous AI agents like Devin help engineering teams accelerate large-scale modernization efforts. Witness how engineers used Devin to refactor COBOL applications, automate migration pipelines into Azure DevOps and GitHub, and validate migrated workloads for production.

AI-powered workflows with GitHub and Azure DevOps

Modernize your DevOps strategy with Agentic DevOps by migrating your Azure Repos to GitHub while continuing to leverage the investments you’ve made in Azure Boards and Azure Pipelines. We’ll walk through real-world patterns for hybrid adoption, show how to integrate GitHub, Azure Boards and Azure Pipelines, and share best practices for enabling agent-based workflows with the MCP Servers for Azure DevOps, Playwright and Azure.

Delivered in a silent stage breakout.
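As a concrete flavor of the hybrid workflow described above: the GitHub and Azure Boards integration links commits and pull requests to work items through "AB#<id>" mentions in commit messages. A minimal sketch of scanning for those references (the helper and its names are our own illustration, not an official API):

```python
import re

# The GitHub + Azure Boards integration links commits/PRs to work items via
# "AB#<id>" mentions in commit messages. This illustrative helper extracts
# those work-item ids from a list of messages.
AB_REF = re.compile(r"\bAB#(\d+)\b")

def extract_work_items(commit_messages):
    """Return the set of Azure Boards work-item ids mentioned in commits."""
    ids = set()
    for message in commit_messages:
        ids.update(int(m) for m in AB_REF.findall(message))
    return ids

if __name__ == "__main__":
    messages = [
        "Fix login redirect AB#1234",
        "Refactor pipeline config (AB#1234, AB#5678)",
        "Docs update, no work item",
    ]
    print(sorted(extract_work_items(messages)))  # [1234, 5678]
```

A bot or pipeline step built on this idea can, for example, comment on a pull request when no work item is referenced, which is one way agent-based workflows keep Azure Boards in sync with GitHub activity.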

Introducing the new Azure Copilot for the new era of Intelligent Agents

Get ready to explore how Azure Copilot is transforming the way IT, DevOps, and developer teams manage, secure, and optimize cloud environments, ushering in an era of agentic AI that goes beyond simple automation. This session unveils powerful new capabilities that enable AI to act more independently and collaboratively with users, streamlining tasks like incident response, infrastructure provisioning, security posture management, and beyond.

Fishbowl AI in DevOps

A fishbowl conversation is a form of dialogue that can be used when discussing topics within large groups, and is sometimes also used in participatory events such as unconferences. The advantage of the fishbowl format is that it allows the entire group to participate in the conversation, and several people can join the discussion at once.

We often hear that using AI in DevOps the way it is done in the development space will not work; others say they are already doing it. Let us discuss and learn together.

Mastering Snowflake DataOps with DataOps.live: An End-to-End Guide to Modern Data Management

This practical, in-depth guide shows you how to build modern, sophisticated data processes using the Snowflake platform and DataOps.live, the only platform that enables seamless DataOps integration with Snowflake. Designed for data engineers, architects, and technical leaders, it bridges the gap between DataOps theory and real-world implementation, helping you take control of your data pipelines to deliver more efficient, automated solutions.

You’ll explore the core principles of DataOps and how they differ from traditional DevOps, while gaining a solid foundation in the tools and technologies that power modern data management, including Git, dbt, and Snowflake. Through hands-on examples and detailed walkthroughs, you’ll learn how to implement your own DataOps strategy within Snowflake and maximize the power of DataOps.live to scale and refine your DataOps processes. Whether you're just starting with DataOps or looking to refine and scale your existing strategies, this book, complete with practical code examples and starter projects, provides the knowledge and tools you need to streamline data operations, integrate DataOps into your Snowflake infrastructure, and stay ahead of the curve in the rapidly evolving world of data management.

What You Will Learn

• Explore the fundamentals of DataOps, its differences from DevOps, and its significance in modern data management
• Understand Git’s role in DataOps and how to use it effectively
• Know why dbt is preferred for DataOps and how to apply it
• Set up and manage DataOps.live within the Snowflake ecosystem
• Apply advanced techniques to scale and evolve your DataOps strategy

Who This Book Is For

Snowflake practitioners, including data engineers, platform architects, and technical managers, who are ready to implement DataOps principles and streamline complex data workflows using DataOps.live.
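One idea at the heart of Git-driven DataOps is branch-based environments: each Git branch gets its own isolated database so feature work never touches production data. A minimal sketch of deriving such a database name from a branch name, assuming an illustrative "ANALYTICS_<BRANCH>" naming scheme (our own convention for the example, not the product's):

```python
import re

# Branch-based environments: map a Git branch to an isolated Snowflake
# database name. Unquoted Snowflake identifiers allow letters, digits, and
# underscores, so anything else is replaced with "_". The naming scheme
# here is illustrative only.
def database_for_branch(branch: str, base: str = "ANALYTICS") -> str:
    if branch in ("main", "master"):
        return f"{base}_PROD"
    safe = re.sub(r"[^A-Za-z0-9]+", "_", branch).strip("_").upper()
    return f"{base}_{safe}"

if __name__ == "__main__":
    print(database_for_branch("main"))                # ANALYTICS_PROD
    print(database_for_branch("feature/add-orders"))  # ANALYTICS_FEATURE_ADD_ORDERS
```

With a scheme like this, a CI job can create the branch database on push, run the pipeline against it, and drop it when the branch merges, which is the pattern the book's branch-per-environment workflow builds on.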

Microsoft Power Platform Solutions Architect's Handbook - Second Edition

Dive into 'Microsoft Power Platform Solution Architect's Handbook' to master the art of designing and delivering enterprise-grade solutions using Microsoft's cutting-edge Power Platform. Through a mix of practical examples and hands-on tutorials, this book equips you to harness tools like AI, Copilot, and DevOps for building innovative, scalable applications tailored to enterprise needs.

What this book will help me do

• Acquire the knowledge to effectively utilize AI tools such as Power Platform Copilot and ChatGPT to enhance application intelligence.
• Understand and apply enterprise-grade solution architecture principles for scalable and secure application development.
• Gain expertise in integrating heterogeneous systems with Power Platform Pipes and third-party APIs.
• Develop proficiency in creating and maintaining reusable Dataverse data models.
• Learn to establish and manage a Center of Excellence to govern and scale Power Platform solutions.

Author(s)

Hugo Herrera is an experienced solution architect specializing in the Microsoft Power Platform with a deep focus on integrating AI and cloud-native strategies. With years of hands-on experience in enterprise software development and architectural design, Hugo brings real-world insights into his writing, emphasizing practical application of advanced concepts. His approach is clear, structured, and aimed at empowering readers to excel.

Who is it for?

This book is tailored for IT professionals like solution architects, enterprise architects, and technical consultants who are looking to elevate their capabilities in Power Platform development. It is also suitable for individuals with an intermediate understanding of Power Platform seeking to spearhead enterprise-level digital transformation projects. Ideal readers are those ready to deepen their integration, data modeling, and AI usage skills within the Microsoft ecosystem, particularly for enterprise applications.

Rewriting the data playbook at Virgin Media O2

At Virgin Media O2, we believe that strong processes and culture matter more than any individual tool. In this talk, we’ll share how we’ve applied DevOps and software engineering principles to transform our data capabilities and enable true data modernization at scale. We’ll take you behind the scenes of how these practices shaped the design and delivery of our enterprise Data Mesh, with dbt at its core, empowering our teams to move faster, build trust in data, and fully embrace a modern, decentralized approach.

Integrating machine learning with DevOps practices is essential for organizations to stay competitive. This hands-on workshop will introduce you to JFrog ML and its capabilities, empowering data scientists and DevOps teams to seamlessly manage the end-to-end machine learning lifecycle. Learn to securely build, deploy, and maintain machine learning models with JFrog’s powerful platform, while enhancing collaboration between data scientists and DevOps teams.

DNB, Norway’s largest bank, began building a cloud-based self-service Data & AI Platform in 2017, delivering its first capabilities by 2018. Initially focused on ML and analytics, the platform expanded in 2021 to include traditional data warehouses and modern data products. Snowflake was officially launched in 2023 after a successful PoC and pilot.

In this talk, we’ll walk through our journey.

Where We Came From

• Discover how legacy data warehouse bottlenecks sparked a shift toward decentralised, self-service data capabilities.

Where We Are

• Learn how DNB enabled teams to own and operate their data products through:
  • Streamlined domain onboarding
  • “DevOps for data” and “SQL as code” practices
  • Automated services for historisation (PSA)
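To make the historisation idea concrete: a persistent staging area (PSA) keeps every version of every source record, so downstream models can be rebuilt as of any point in time. A minimal append-only sketch with an illustrative record layout (not DNB's actual service):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A persistent staging area (PSA) appends a new history row only when a
# record's payload changes, preserving every version ever loaded.
@dataclass
class HistoryRow:
    key: str
    payload: dict
    loaded_at: str

def historise(history: list, batch: dict) -> list:
    """Append changed/new records from `batch` ({key: payload}) to history."""
    latest = {}
    for row in history:  # rows are in load order, so the last row per key wins
        latest[row.key] = row.payload
    now = datetime.now(timezone.utc).isoformat()
    for key, payload in batch.items():
        if latest.get(key) != payload:
            history.append(HistoryRow(key, payload, now))
    return history

if __name__ == "__main__":
    h = []
    historise(h, {"42": {"balance": 100}})
    historise(h, {"42": {"balance": 100}})  # unchanged: no new row
    historise(h, {"42": {"balance": 150}})  # changed: new version appended
    print(len(h))  # 2
```

In practice the same change-detection-and-append logic is expressed as SQL against staging tables; keeping that SQL in Git is exactly the "SQL as code" practice mentioned above.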

Where We’re Going

• Explore how DNB is evolving its data mesh with:
  • A hybrid model of decentralised and centralised data products
  • Generative AI, metadata automation, and development support
  • Enhanced tooling and services for data consumers

Summary

In this crossover episode of the AI Engineering Podcast, host Tobias Macey interviews Brijesh Tripathi, CEO of Flex AI, about revolutionizing AI engineering by removing DevOps burdens through "workload as a service". Brijesh shares his expertise from leading AI/HPC architecture at Intel and deploying supercomputers like Aurora, highlighting how access friction and idle infrastructure slow progress. Join them as they discuss Flex AI's innovative approach to simplifying heterogeneous compute, standardizing on consistent Kubernetes layers, and abstracting inference across various accelerators, allowing teams to iterate faster without wrestling with drivers, libraries, or cloud-by-cloud differences. Brijesh also shares insights into Flex AI's strategies for lifting utilization, protecting real-time workloads, and spanning the full lifecycle from fine-tuning to autoscaled inference, all while keeping complexity at bay.

Pre-amble

I hope you enjoy this cross-over episode of the AI Engineering Podcast, another show that I run to act as your guide to the fast-moving world of building scalable and maintainable AI systems. As generative AI models have grown more powerful and are being applied to a broader range of use cases, the lines between data and AI engineering are becoming increasingly blurry. The responsibilities of data teams are being extended into the realm of context engineering, as well as designing and supporting new infrastructure elements that serve the needs of agentic applications. This episode is an example of the types of work that are not easily categorized into one or the other camp.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI engineering, streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.

Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Brijesh Tripathi about FlexAI, a platform offering a service-oriented abstraction for AI workloads.

Interview

• Introduction
• How did you get involved in machine learning?
• Can you describe what FlexAI is and the story behind it?
• What are some examples of the ways that infrastructure challenges contribute to friction in developing and operating AI applications?
• How do those challenges contribute to issues when scaling new applications/businesses that are founded on AI?
• There are numerous managed services and deployable operational elements for operationalizing AI systems. What are some of the main pitfalls that teams need to be aware of when determining how much of that infrastructure to own themselves?
• Orchestration is a key element of managing the data and model lifecycles of these applications. How does your approach of "workload as a service" help to mitigate some of the complexities in the overall maintenance of that workload?
• Can you describe the design and architecture of the FlexAI platform?
• How has the implementation evolved from when you first started working on it?
• For someone who is going to build on top of FlexAI, what are the primary interfaces and concepts that they need to be aware of?
• Can you describe the workflow of going from problem to deployment for an AI workload using FlexAI?
• One of the perennial challenges of making a well-integrated platform is that there are inevitably pre-existing workloads that don't map cleanly onto the assumptions of the vendor. What are the affordances and escape hatches that you have built in to allow partial/incremental adoption of your service?
• What are the elements of AI workloads and applications that you are explicitly not trying to solve for?
• What are the most interesting, innovative, or unexpected ways that you have seen FlexAI used?
• What are the most interesting, unexpected, or challenging lessons that you have learned while working on FlexAI?
• When is FlexAI the wrong choice?
• What do you have planned for the future of FlexAI?

Contact Info

• LinkedIn

Parting Question

• From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Links

• Flex AI
• Aurora Super Computer
• CoreWeave
• Kubernetes
• CUDA
• ROCm
• Tensor Processing Unit (TPU)
• PyTorch
• Triton
• Trainium
• ASIC == Application Specific Integrated Circuit
• SOC == System On a Chip
• Loveable
• FlexAI Blueprints
• Tenstorrent

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

The scale-up company Solynta focuses on hybrid potato breeding, which helps achieve improvements in yield, disease resistance, and climate adaptation. Scientific innovation is part of our core business. Plant selections are highly data-driven, involving, for example, drone observations and genetic data. Minimal time-to-production for new ideas is essential, which is facilitated by our custom AWS DevOps platform. This platform focuses on automation and accessible data storage.

In this talk, we introduce how computer vision (YOLO and SAM modelling) enables monitoring traits of plants in the field, and how we operate these models. This further entails:

• Our experience from training and evaluating models on drone images
• Trade-offs selecting AWS services, Terraform modules and Python packages for automation and robustness
• Our team setup that allows IT specialists and biologists to work together effectively

The talk will provide practical insights for both data scientists and DevOps engineers. The main takeaways are that object detection and segmentation from drone maps, at scale, are achievable for a small team. Furthermore, with the right approach, you can standardise a DevOps platform to let operations and developers work together.
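One practical building block behind detection on large drone maps is tiling: an orthomosaic is usually far bigger than a detector's input size, so it is sliced into overlapping tiles, the model runs per tile, and detections are mapped back to map coordinates. A sketch of the tile-grid computation, with illustrative tile size and overlap (not the speakers' actual configuration):

```python
# Compute top-left corners of overlapping tiles covering a large image.
# The overlap ensures objects cut by a tile edge appear whole in a
# neighbouring tile; duplicate detections are later merged (e.g. by NMS).
def tile_offsets(width, height, tile=1024, overlap=128):
    """Yield (x, y) top-left corners covering a width x height image."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Ensure the right and bottom edges are always covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

if __name__ == "__main__":
    tiles = tile_offsets(3000, 2000)
    print(len(tiles), tiles[0], tiles[-1])
```

A detection made at (dx, dy) inside the tile at (x, y) then sits at (x + dx, y + dy) in map coordinates, which is the whole trick for running per-tile models over field-scale maps.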

DORA metrics are the gold standard for measuring software delivery performance and stability. However, conventional methods of capturing these metrics are increasingly challenged by siloed DevOps toolchains, manual data collection, and the growing prevalence of AI-generated code in production. Enterprise delivery pipelines demand resilience and accuracy, but today's measurement systems struggle with both integration complexity and the specialized expertise required to operate in large, distributed environments. This talk will discuss these challenges in detail and show how Generative AI can elevate DORA from static, descriptive dashboards to dynamic diagnostic, prescriptive, and predictive insights—unlocking a new era of actionable intelligence.
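For reference, the four DORA metrics themselves are straightforward to derive from plain deployment and incident records; the hard part the talk addresses is collecting that data reliably across a siloed toolchain. A minimal sketch with illustrative record shapes (not a specific tool's schema):

```python
from datetime import datetime, timedelta

# Derive the four DORA metrics from deployment and incident records:
# deployment frequency, lead time for changes, change failure rate,
# and mean time to restore (MTTR).
def dora_metrics(deployments, incidents, days=30):
    """deployments: [{'committed_at': dt, 'deployed_at': dt, 'failed': bool}]
    incidents:   [{'opened_at': dt, 'resolved_at': dt}]"""
    n = len(deployments)
    lead_times = [(d["deployed_at"] - d["committed_at"]).total_seconds() / 3600
                  for d in deployments]
    restore_times = [(i["resolved_at"] - i["opened_at"]).total_seconds() / 3600
                     for i in incidents]
    return {
        "deploys_per_day": n / days,
        "lead_time_hours": sum(lead_times) / n if n else 0.0,
        "change_failure_rate": sum(d["failed"] for d in deployments) / n if n else 0.0,
        "mttr_hours": sum(restore_times) / len(restore_times) if restore_times else 0.0,
    }

if __name__ == "__main__":
    t0 = datetime(2024, 1, 1)
    deployments = [
        {"committed_at": t0, "deployed_at": t0 + timedelta(hours=4), "failed": False},
        {"committed_at": t0, "deployed_at": t0 + timedelta(hours=8), "failed": True},
    ]
    incidents = [{"opened_at": t0, "resolved_at": t0 + timedelta(hours=2)}]
    print(dora_metrics(deployments, incidents))
```

The generative-AI angle in the talk starts where this sketch ends: once the metrics exist, an LLM layer can explain why a number moved and predict where it is heading, rather than just displaying it.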