talk-data.com

Topic: performance testing (2 tagged activities)

Activity Trend: peak of 1 activity per quarter, 2020-Q1 to 2026-Q1

Activities (2, newest first)

In this talk, Don explores how GitHub Agentic Workflows, a framework developed at GitHub Next, can revolutionise F# library development through automated performance and test improvements. The approach introduces "Continuous Test Improvement" and "Continuous Performance Improvement", in which AI agents automatically research, measure, optimise, and re-measure code performance in a continuous loop, while maintaining human oversight through pull request reviews and goal-setting. This semi-automatic engineering approach represents a fundamental shift in software development: from manual coding, to AI-assisted completions, to task-oriented programming, and now to event-triggered agentic workflows.

Don will demonstrate practical applications in F# libraries, showing how these workflows can identify performance bottlenecks, generate benchmarks, implement optimisations, and verify improvements, all while preserving code correctness through automated testing. Learn how this emerging technology could transform how we maintain and optimise F# libraries, making high-performance code more accessible to the entire F# community.
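To make the measure-optimise-re-measure loop concrete, here is a minimal sketch of the kind of benchmark such a workflow might generate, assuming BenchmarkDotNet (the standard .NET benchmarking library). The type name and the List.map-versus-Array.map workload are illustrative assumptions, not taken from the talk.

```fsharp
// Minimal sketch of an agent-generated benchmark, assuming BenchmarkDotNet.
// The workload (List.map vs Array.map) stands in for a real library hotspot.
// Run in Release mode: dotnet run -c Release
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

type MapBenchmarks() =

    // Measure each workload at two input sizes.
    [<Params(1_000, 100_000)>]
    member val Size = 0 with get, set

    [<Benchmark(Baseline = true)>]
    member this.ListMap() =
        List.init this.Size id |> List.map (fun x -> x + 1)

    [<Benchmark>]
    member this.ArrayMap() =
        Array.init this.Size id |> Array.map (fun x -> x + 1)

[<EntryPoint>]
let main _ =
    BenchmarkRunner.Run<MapBenchmarks>() |> ignore
    0
```

An agent could run a harness like this before and after a candidate optimisation and attach both summaries to the pull request, leaving the accept-or-reject decision to a human reviewer, in line with the oversight model the talk describes.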

This talk bridges the gap between theoretical performance testing concepts and hard-earned lessons from real-world implementation. We dive into actionable techniques that will help you deploy and maintain a fruitful Continuous Performance Testing practice. We address a wide spectrum of common mistakes and misunderstandings: the crucial differences between Performance Testing and Load Testing, environment must-haves, pitfalls in metrics management, result analysis challenges, and much more.
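One concrete technique in this spirit is a regression gate: each continuous performance testing run compares fresh benchmark results against a stored baseline and fails the build when any metric drifts past a tolerance. Below is a minimal F# sketch; the baseline.json and current.json file names, the record shape, and the 10% threshold are all assumptions for illustration, not from the talk.

```fsharp
// Hypothetical regression gate: fail the build when any benchmark mean
// regresses beyond a tolerance relative to a stored baseline.
open System.IO
open System.Text.Json

// Shape of one benchmark result; field names are illustrative assumptions.
type Measurement = { Name: string; MeanMs: float }

let load (path: string) : Measurement[] =
    JsonSerializer.Deserialize<Measurement[]>(File.ReadAllText path)

// Report every benchmark whose mean grew by more than `tolerance`
// (e.g. 0.10 = 10%) relative to the baseline.
let regressions tolerance (baseline: Measurement[]) (current: Measurement[]) =
    let baselineMeans =
        baseline |> Array.map (fun m -> m.Name, m.MeanMs) |> Map.ofArray
    current
    |> Array.choose (fun m ->
        match Map.tryFind m.Name baselineMeans with
        | Some oldMean when m.MeanMs > oldMean * (1.0 + tolerance) ->
            Some(m.Name, oldMean, m.MeanMs)
        | _ -> None)

[<EntryPoint>]
let main _ =
    let failed = regressions 0.10 (load "baseline.json") (load "current.json")
    for (name, oldMean, newMean) in failed do
        eprintfn "Regression in %s: %.2f ms -> %.2f ms" name oldMean newMean
    if Array.isEmpty failed then 0 else 1  // non-zero exit fails the CI job
```

The non-zero exit code is what makes the check continuous: wired into CI, a regression blocks the merge instead of surfacing weeks later in production.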

By the end of this presentation, you'll be better equipped to implement a truly continuous approach, enabling your team to deliver faster, stronger, and better applications that meet modern performance expectations.