Facing challenges with the cost and performance of your AI inference workloads? This talk presents TPUs and Google Kubernetes Engine (GKE) as a solution for achieving both high throughput and low latency while optimizing costs with open source models and libraries. Learn how to leverage TPUs to scale massive inference workloads efficiently.
talk-data.com
Speaker: Mustafa Ozuysal, Senior ML Researcher, HUBX (1 talk)
Talks & appearances