Talks tagged with 'helm', covering two distinct areas: evaluation, safety, fairness, and compliance for generative and agentic AI in regulated industries (with examples drawn from healthcare and references to open-source libraries and standards such as LangTest, HELM, NIST, CHAI, and ISO), and securing and deploying Helm charts on Kubernetes (supply-chain security, GitOps, Terraform, and Pulumi).
talk-data.com · Topic: helm (9 tagged)
As generative and agentic AI systems move from prototypes to production, builders must balance innovation with trust, safety, and compliance. This talk explores the unique evaluation and monitoring challenges of next-generation AI, with healthcare as a case study of one of the most regulated domains. Topics include: evaluation gaps (why conventional benchmarks miss multi-step reasoning, tool use, and domain-specific workflows, and how contamination and fragile metrics distort results); bias and safety (demographic bias, hallucinations, and unsafe autonomy that trigger regulatory, legal, and contractual obligations for fairness and safety assessments); continuous monitoring (practical MLOps strategies for drift detection, risk scoring, and compliance auditing in deployed systems); and tools and standards (open-source libraries like LangTest and HELM, new stress-test and red-teaming datasets, and emerging guidance from NIST, CHAI, and ISO). While the examples draw heavily from healthcare, the lessons are broadly applicable to anyone building and deploying generative or agentic AI systems in highly regulated industries where safety, fairness, and compliance are paramount.
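To make the continuous-monitoring theme concrete, below is a minimal sketch of distribution-drift detection using the Population Stability Index (PSI) over logged model scores. The function name, thresholds, and synthetic data are illustrative assumptions for this page, not part of any library or method named in the talk.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare a reference score distribution to a live one.

    Common rule-of-thumb thresholds: PSI < 0.1 is stable, 0.1-0.25 is
    moderate drift, and > 0.25 is significant drift worth alerting on.
    """
    # Bin edges come from the reference (training/validation) scores.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    obs_counts, _ = np.histogram(observed, bins=edges)

    # Convert counts to proportions; clip to avoid log(0) and division by zero.
    exp_frac = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    obs_frac = np.clip(obs_counts / obs_counts.sum(), 1e-6, None)

    return float(np.sum((obs_frac - exp_frac) * np.log(obs_frac / exp_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.beta(2, 5, size=5_000)   # scores at validation time
    production = rng.beta(2, 3, size=5_000)  # shifted scores in production
    psi = population_stability_index(reference, production)
    print(f"PSI = {psi:.3f}", "-> drift alert" if psi > 0.25 else "-> stable")
```

In a deployed system a check like this would typically run on a schedule over recent prediction logs and feed a risk score or compliance audit trail, alongside the task-specific evaluations the talk describes.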
A practical workshop exploring threats, attack scenarios, and strategies for securing Helm charts using Cloudsmith's artifact management. Topics include verifying assets (public Helm charts, dependencies, and images), automating compliance with Trivy, and enforcing runtime OPA Gatekeeper policies to protect Kubernetes deployments. Learn to audit and manage Helm charts before distribution to prevent supply-chain attacks. Bonus: hands-on Instruqt lab analyzing insecure chart templates and demonstrating how to scan and validate Helm charts prior to production Kubernetes deployment.
This practical workshop explores common threats, attack scenarios, and proven strategies for securing Helm charts with Cloudsmith's artifact management, maintaining supply-chain integrity and regulatory compliance. Topics include: verifying every asset (public Helm charts, dependencies, and images from popular OSS projects) before deployment; automating compliance checks with Trivy and enforcing OPA Gatekeeper security policies at runtime; preventing supply-chain attacks by auditing and managing Helm charts before distributing them through secure repositories; and accounting for the manual overhead, since most charts are insecure by default and require additional security checks by your team. Bonus: a hands-on Instruqt lab that analyzes real insecure chart templates and demonstrates how to scan for vulnerabilities with open-source tools, implement security standards, and validate Helm charts before production Kubernetes deployment.
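As a concrete illustration of the "automating compliance with Trivy" step, here is a minimal CI-style sketch that renders a chart with `helm template` and scans the output with `trivy config`, failing the build on HIGH or CRITICAL misconfigurations. The chart path and severity gate are assumptions for illustration; the workshop's Cloudsmith and Instruqt material is not reproduced here.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

CHART_PATH = "charts/my-app"   # hypothetical chart location in the repo
SEVERITIES = "HIGH,CRITICAL"   # gate the pipeline on these findings

def scan_chart(chart_path: str) -> int:
    """Render the chart, then scan the resulting manifests with Trivy."""
    with tempfile.TemporaryDirectory() as workdir:
        # 'helm template' renders manifests locally without touching a cluster.
        manifests = subprocess.run(
            ["helm", "template", chart_path],
            check=True, capture_output=True, text=True,
        ).stdout
        (Path(workdir) / "rendered.yaml").write_text(manifests)

        # 'trivy config' scans IaC/manifest files for misconfigurations;
        # --exit-code 1 turns findings at the given severities into a failure.
        result = subprocess.run(
            ["trivy", "config", "--severity", SEVERITIES, "--exit-code", "1", workdir]
        )
        return result.returncode

if __name__ == "__main__":
    sys.exit(scan_chart(CHART_PATH))
```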
The talk focuses on the practical implementation of GitOps in a hybrid infrastructure setup, covering Helm chart design and infrastructure provisioning with Terraform. Target audience: DevOps and platform engineers building internal developer platforms, especially those working with Kubernetes.
A 30-minute talk on the evolving threat landscape around Helm charts in public repositories. We'll discuss real-world incidents such as the Codecov supply-chain attack and hypothetical attack vectors like 'ChartSploit', highlighting how seemingly benign configurations can be exploited. Topics include the anatomy of vulnerable charts, key risk areas (RBAC misconfigurations, dependency vulnerabilities), and actionable strategies for securing Kubernetes environments: auditing deployments, verifying chart integrity, enforcing strict access controls, and adopting DevSecOps practices.
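To make the RBAC risk area concrete, the sketch below renders a chart and flags ClusterRoleBindings that grant cluster-admin, as well as Roles and ClusterRoles with wildcard verbs or resources. It is an illustrative audit heuristic assuming the Helm CLI and PyYAML are available; the chart path is a placeholder and the check is not taken from the talk itself.

```python
import subprocess
import yaml  # PyYAML

def audit_chart_rbac(chart_path: str) -> list[str]:
    """Render a Helm chart and report obviously risky RBAC objects."""
    rendered = subprocess.run(
        ["helm", "template", chart_path],
        check=True, capture_output=True, text=True,
    ).stdout

    findings = []
    for doc in yaml.safe_load_all(rendered):
        if not doc:
            continue
        kind = doc.get("kind", "")
        name = doc.get("metadata", {}).get("name", "<unnamed>")

        # Binding any subject to cluster-admin grants full cluster control.
        if kind == "ClusterRoleBinding" and doc.get("roleRef", {}).get("name") == "cluster-admin":
            findings.append(f"{kind}/{name}: binds to cluster-admin")

        # Wildcard verbs or resources in a (Cluster)Role are usually over-broad.
        if kind in ("Role", "ClusterRole"):
            for rule in doc.get("rules") or []:
                if "*" in rule.get("verbs", []) or "*" in rule.get("resources", []):
                    findings.append(f"{kind}/{name}: wildcard verbs/resources")
                    break
    return findings

if __name__ == "__main__":
    for finding in audit_chart_rbac("charts/example"):  # hypothetical chart path
        print("RISK:", finding)
```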
Hands-on workshop on using Pulumi to deploy and manage Kubernetes applications, including the Pulumi Kubernetes provider, Pulumi Docker provider, integration with YAML manifests and Helm charts, and running Pulumi IaC programs in a GitOps fashion.
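For the Helm-chart integration portion, a minimal Pulumi Python program might look like the following. The chart name, repository URL, namespace, and values are placeholder assumptions, and the helm.v3.Chart resource shown here is only one of the ways Pulumi can consume a chart.

```python
"""Minimal Pulumi sketch: deploy a public Helm chart into its own namespace."""
import pulumi
import pulumi_kubernetes as k8s

# Manage the namespace alongside the chart so the whole stack is reproducible.
ns = k8s.core.v1.Namespace("web", metadata={"name": "web"})

# Render and apply the chart's manifests through the Pulumi Kubernetes provider.
nginx = k8s.helm.v3.Chart(
    "nginx",
    k8s.helm.v3.ChartOpts(
        chart="nginx",
        namespace=ns.metadata["name"],
        fetch_opts=k8s.helm.v3.FetchOpts(
            repo="https://charts.bitnami.com/bitnami",  # assumed public chart repo
        ),
        values={"service": {"type": "ClusterIP"}},  # illustrative override
    ),
)

pulumi.export("namespace", ns.metadata["name"])
```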
As generative and agentic AI systems move from prototypes to production, builders must balance innovation with trust, safety, and compliance. This talk covers evaluation gaps (multi-step reasoning, tool use, domain-specific workflows; contamination and fragile metrics), bias and safety (demographic bias, hallucinations, and unsafe autonomy that trigger regulatory and legal obligations), continuous monitoring (MLOps strategies for drift detection, risk scoring, and compliance auditing in deployed systems), and tools and standards (open-source libraries like LangTest and HELM, stress-test and red-teaming datasets, and guidance from NIST, CHAI, and ISO).
Explore how to use Pulumi with Kubernetes to deploy and manage containerized workloads, integrate Pulumi with existing Kubernetes resources (manifests or Helm charts), and run Pulumi IaC programs in a GitOps fashion.
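As one way to picture the "existing Kubernetes resources" integration, the sketch below adopts hand-written YAML manifests from the same Pulumi program using ConfigFile and ConfigGroup; the file paths are assumptions for the example.

```python
"""Minimal Pulumi sketch: manage pre-existing YAML manifests from a Pulumi stack."""
import pulumi_kubernetes as k8s

# ConfigFile applies a single existing manifest as-is.
app = k8s.yaml.ConfigFile("app", file="k8s/deployment.yaml")  # assumed path

# ConfigGroup applies every manifest matching the glob, which helps when a team
# migrates gradually from raw YAML (or helm template output) to Pulumi code.
extras = k8s.yaml.ConfigGroup("extras", files=["k8s/addons/*.yaml"])
```

In a GitOps setup, programs like this would typically be applied from CI on merge rather than from a developer's laptop, which is the workflow the session describes.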