In this talk, Don explores how GitHub Agentic Workflows - a framework developed at GitHub Next - can revolutionise F# library development through automated performance and test improvements. The approach introduces "Continuous Test Improvement" and "Continuous Performance Improvement" where AI agents automatically research, measure, optimise, and re-measure code performance in a continuous loop, all while maintaining human oversight through pull request reviews and goal-setting. This semi-automatic engineering approach represents a fundamental shift in software development: from manual coding to AI-assisted completions, to task-oriented programming, and now to event-triggered agentic workflows. Don will demonstrate practical applications in F# libraries, showing how these workflows can identify performance bottlenecks, generate benchmarks, implement optimisations, and verify improvements - all while preserving code correctness through automated testing. Learn how this emerging technology could transform how we maintain and optimise F# libraries, making high-performance code more accessible to the entire F# community.
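The measure, optimise, and re-measure loop the abstract describes can be sketched in miniature. This is a hypothetical illustration, not Don's actual workflow: `improvement_loop`, the two `sum_squares` variants, and the acceptance rule are all stand-ins, and in the real agentic workflow the candidate implementation would be proposed by an AI agent and reviewed by a human in a pull request.

```python
import timeit

def baseline_sum_squares(xs):
    # Naive implementation: builds an intermediate list.
    return sum([x * x for x in xs])

def optimised_sum_squares(xs):
    # Agent-proposed variant (illustrative): a generator
    # expression avoids the intermediate list.
    return sum(x * x for x in xs)

def benchmark(fn, data, repeats=5, number=100):
    # Take the best of several runs to reduce timing noise.
    return min(timeit.repeat(lambda: fn(data), repeat=repeats, number=number))

def improvement_loop(baseline, candidate, data):
    # 1. Verify the optimisation preserves behaviour,
    # 2. measure before and after,
    # 3. accept only if the candidate is actually faster.
    assert baseline(data) == candidate(data), "optimisation changed behaviour"
    before = benchmark(baseline, data)
    after = benchmark(candidate, data)
    return {"before": before, "after": after, "accepted": after < before}

data = list(range(10_000))
result = improvement_loop(baseline_sum_squares, optimised_sum_squares, data)
```

The correctness assertion before any timing is the key design point: an optimisation that changes observable behaviour is rejected outright, which is what "preserving code correctness through automated testing" means in the loop.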
talk-data.com

Topic: automated testing (3 talks tagged)

[Activity trend chart: peak 1 talk/quarter, 2020-Q1 to 2026-Q1]
A live demo of automated testing for large language models. The session addresses non-determinism in ML systems and demonstrates how a second LLM can act as a judge. It also explores Retrieval Augmented Generation (RAG) for querying documents and guiding tests.
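The "second LLM as a judge" pattern from this abstract can be sketched as follows. Everything here is an assumption for illustration: `generate_answer` and `judge_llm` are local stubs standing in for two separate model calls, and the keyword check inside `judge_llm` simulates what would really be a prompt asking the judge model whether the answer satisfies a criterion.

```python
def generate_answer(question):
    # System under test; a real LLM would be non-deterministic here.
    return "Paris is the capital of France."

def judge_llm(question, answer, criterion):
    # Stand-in for a second-model call asking: "Does this answer
    # satisfy the criterion? Reply yes/no." Simulated with a
    # case-insensitive keyword check so the sketch is runnable.
    return criterion.lower() in answer.lower()

def assert_llm(question, criterion):
    # Exact string equality fails under non-determinism, so the
    # test asserts a semantic property judged by the second model.
    answer = generate_answer(question)
    assert judge_llm(question, answer, criterion), (
        f"judge rejected answer {answer!r} for criterion {criterion!r}")

assert_llm("What is the capital of France?", "Paris")
```

The point of the pattern is that the test asserts a judged property of the output rather than a fixed string, which is what makes testing non-deterministic systems tractable.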
Join this in-depth session on automated testing in the Power Platform. Learn how to address common challenges, gain insights into approaches for testing and monitoring solutions, build robust applications, integrate quality gates into CI/CD, and apply engineering excellence principles to both code-first and low-code development.
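A quality gate of the kind this session discusses wiring into CI/CD can be sketched generically. This is a hypothetical example, not a Power Platform API: `quality_gate`, its threshold, and the boolean result list are all illustrative.

```python
def quality_gate(results, min_pass_rate=0.95):
    # results: one boolean per automated test run in the pipeline.
    # The gate passes only if the pass rate meets the threshold;
    # in CI, a failing gate would exit non-zero and block deployment.
    passed = sum(results)
    rate = passed / len(results)
    return rate >= min_pass_rate

ok = quality_gate([True] * 19 + [False], min_pass_rate=0.95)  # 19/20 = 0.95
```

In a real pipeline the same check would sit after the test stage and before deployment, so releases are blocked automatically rather than by manual inspection.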