talk-data.com

Topic: apis · 57 tagged

Activity Trend: peak of 16 activities/quarter (2020-Q1 to 2026-Q1)

Activities

57 activities · Newest first

At Radio France, we explored the Backend for Frontend (BFF) concept by pushing it to the limits of its capabilities. This talk offers a detailed report on the mechanisms we put in place to optimize and secure this architecture, in response to the specific needs of our applications.

We will cover the technical choices we made, the challenges we encountered, and the practical solutions that allowed us to efficiently manage the interactions between our frontends and backends. Come discover how the BFF pattern can transform how you manage data flows and improve the scalability of your projects.

Fishing vessels are on track to generate 10 million hours of video footage annually, creating a massive machine learning operations challenge. At AI.Fish, we are building an end-to-end system enabling non-technical users to harness AI for catch monitoring and classification both on-board and in the cloud. This talk explores our journey in building these approachable systems and working toward answering an old question: How many fish are in the ocean?

Microsoft’s AI services are normally exposed via HTTPS endpoints and are secured by a set of keys that need to be stored and managed. But how can you manage those endpoints and keys at scale? Enter Azure API Management (APIM), which wraps your APIs and gives you complete control over requests by applying authentication, authorization, logging, and throttling policies.
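As a rough sketch of what that control looks like, an APIM inbound policy can validate a caller's JWT and throttle traffic before the request ever reaches the wrapped AI endpoint. The `validate-jwt` and `rate-limit` elements below are standard APIM policy elements, but the tenant URL and the specific limits are illustrative assumptions, not values from the talk:

```xml
<policies>
    <inbound>
        <base />
        <!-- Authentication: reject requests that lack a valid bearer token.
             The openid-config URL is a placeholder; a real policy would point
             at the tenant-specific metadata endpoint. -->
        <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
            <openid-config url="https://login.microsoftonline.com/common/.well-known/openid-configuration" />
        </validate-jwt>
        <!-- Throttling: illustrative limit of 100 calls per 60 seconds. -->
        <rate-limit calls="100" renewal-period="60" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```

Because the policy sits in front of the backend, the AI service's own keys never need to be handed to callers; APIM holds them and applies these rules uniformly across every wrapped endpoint.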

In this episode, we are partnering with the AI Advisory team in Microsoft for Startups to explore a unique use case from a leading startup in the program: OneAI. OneAI's distinctive approach to enterprise AI is to curate and fine-tune the world's top AI capabilities and package them as APIs, empowering businesses to deploy tailored AI solutions in days. During this episode, we will explore how their AI solutions are designed to ensure consistent, predictable output and alignment with source documents, bolstering trust and enhancing business outcomes. We will also share some product demos of building their AI Agent, optimizing both the tuning process and long-term performance in terms of cost, speed, and carbon footprint, all while emphasizing transparency and explainability.

APIs dominate the web, accounting for the majority of all internet traffic. And more AI means more APIs, because they act as an important mechanism to move data into and out of AI applications, AI agents, and large language models (LLMs). So how can you make sure all of these APIs are secure? In this session, we’ll take you through OWASP’s top 10 API and LLM security risks, and show you how to mitigate these risks using Google Cloud’s security portfolio, including Apigee, Model Armor, Cloud Armor, Google Security Operations, and Security Command Center.

This hands-on lab equips you with the practical skills to build and deploy a real-world AI-powered chat application leveraging the Gemini LLM APIs. You'll learn to containerize your application using Cloud Build, deploy it seamlessly to Cloud Run, and explore how to interact with the Gemini LLM to generate insightful responses. This hands-on experience will provide you with a solid foundation for developing engaging and interactive conversational applications.

If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!