Discussion on chips and compute options for DACH, including EU chip landscape, Cerebras and SiPearl, GPUs, cloud vs on-prem, and TCO/energy considerations.
talk-data.com
Topic: gpus (15 tagged)
Hands-on lab session continuing Part 1, containerizing and deploying the AI agent and validating GPU-accelerated deployment on Cloud Run.
Hands-on lab session focusing on building and securing production-ready services on Cloud Run, including setting up an MCP server and authentication.
Talk: A live, end-to-end demo of wiring an open-source vision-language model (SmolVLM) into Vertex AI, with a lightning primer on what Vertex AI is, which quotas matter (GPUs), and how to pick the right model tier for your latency-versus-cost sweet spot. We'll then drive that endpoint from a Firebase web app that streams camera frames and returns analytics in milliseconds: real-time video AI minus the heavyweight MLOps baggage.
Target Audience: Cloud and DevOps engineers, full-stack developers, AI/ML hobbyists, and startup builders already shipping (or keen to ship) on GCP who want a pragmatic path to weaving generative and computer-vision AI into their products and pipelines.
Takeaways: A checklist for matching model size/tier to performance and budget, and a lightweight pattern for streaming video to that endpoint via Firebase and turning frames into instant insights.
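The frame-streaming pattern the talk describes boils down to: capture a camera frame, base64-encode it, and send it with a prompt to a Vertex AI endpoint. A minimal sketch of the payload-building step, assuming an endpoint that accepts base64-encoded images in its `instances` list (the exact request schema depends on how SmolVLM is served; the field names and endpoint URL below are illustrative, not from the talk):

```python
import base64
import json

def build_predict_payload(frame_bytes: bytes, prompt: str) -> str:
    """Package one camera frame and a prompt as a JSON predict request body.

    Field names ("prompt", "image", "bytesBase64Encoded") are an assumed
    serving schema; adapt them to your deployed model's input signature.
    """
    instance = {
        "prompt": prompt,
        "image": {
            # Vertex AI expects binary payloads as base64 text inside JSON.
            "bytesBase64Encoded": base64.b64encode(frame_bytes).decode("ascii"),
        },
    }
    return json.dumps({"instances": [instance]})

# A real client would POST this body to the endpoint's :predict URL, e.g.
# https://{region}-aiplatform.googleapis.com/v1/projects/{project}/
#   locations/{region}/endpoints/{endpoint_id}:predict
# with an OAuth bearer token in the Authorization header.
payload = build_predict_payload(b"\xff\xd8fake-jpeg-bytes", "Describe the scene")
print(json.loads(payload)["instances"][0]["prompt"])  # -> Describe the scene
```

Keeping the payload construction separate from transport makes it easy to call the same function from a Firebase Cloud Function or a thin proxy that adds the auth token server-side, so the browser never holds credentials.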