Governing and Evaluating Generative & Agentic AI in Regulated Industries

As generative and agentic AI systems move from prototypes to production, builders must balance innovation with trust, safety, and compliance. This talk covers:
- Evaluation gaps: multistep reasoning, tool use, and domain-specific workflows; data contamination and fragile metrics.
- Bias and safety: demographic bias, hallucinations, and unsafe autonomy, alongside regulatory and legal obligations.
- Continuous monitoring: MLOps strategies for drift detection, risk scoring, and compliance auditing in deployed systems.
- Tools and standards: open-source libraries like LangTest and HELM, stress-test and red-teaming datasets, and guidance from NIST, CHAI, and ISO.
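One monitoring primitive mentioned above, drift detection, can be sketched with the Population Stability Index (PSI), a common score-distribution drift metric. This is a minimal illustrative sketch, not code from the talk; the sample data, bin count, and the conventional thresholds (below 0.1 meaning stable, above 0.25 meaning significant drift) are assumptions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Illustrative sketch: bins both samples over their combined range and
    sums (a - e) * ln(a / e) over the per-bin fractions.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp top edge
            counts[i] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical data: a baseline score distribution and a shifted live one.
baseline = [i / 10 for i in range(100)]
shifted = [i / 10 + 4.0 for i in range(100)]

drift_score = psi(baseline, shifted)  # larger values indicate stronger drift
```

In a deployed system this check would run on a schedule against live model scores, feeding a risk score or compliance alert when PSI crosses the chosen threshold.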
Source: talk-data.com, topic "helm" (activity trend 2020-Q1 to 2026-Q1, peak 3 tagged events per quarter).
Top Events
- Workshop: Pulumi and Kubernetes - Better Together
- Identifying vulnerabilities in public Kubernetes Helm charts
- Are your Helm charts secure? Uncovering hidden supply chain threats
- Governing and Evaluating Generative & Agentic AI in Regulated Industries
- How is tooling for Data Scientists evolving in the era of AI-assisted development?
- BLN DevOps July edition #49