At Radio France, we explored the Backend for Frontend (BFF) concept by pushing it to the limits of its capabilities. This talk offers a detailed experience report on the mechanisms we put in place to optimize and secure this architecture, in response to the specific needs of our applications.

We will cover the technical choices, the challenges we encountered, and the practical solutions that allowed us to manage the interactions between our frontends and backends effectively. Come discover how a BFF can transform how you handle data flows and improve the scalability of your projects.
talk-data.com — Topic: apis (57 tagged)
A follow-up to the talk of December 11, 2024, explaining that the new Node.js APIs are not always necessary and that they introduce additional complexity.
Fishing vessels are on track to generate 10 million hours of video footage annually, creating a massive machine learning operations challenge. At AI.Fish, we are building an end-to-end system enabling non-technical users to harness AI for catch monitoring and classification both on-board and in the cloud. This talk explores our journey in building these approachable systems and working toward answering an old question: How many fish are in the ocean?
Day 5 self-paced session on OpenAI's ecosystem: APIs, assistants, and beyond.
Overview of LOGFORCE's AI technology in cybersecurity and beyond, covering Real-Time Correlations of Non-Linear Events, Enhancing Transparency in AI, Future-Proof Solutions, and SynA: The Synthetic Analyst. Hosted by Gio Pecora.
Microsoft’s AI services are normally exposed via HTTPS endpoints and secured by a set of keys that need to be stored and managed. But how can you manage those endpoints and keys at scale? Enter Azure API Management (APIM), which wraps your APIs and gives you complete control over requests by applying authentication, authorization, logging, and throttling.
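As a hedged sketch of the pattern APIM enables: the client holds only a revocable APIM subscription key, sent in the standard `Ocp-Apim-Subscription-Key` header, while the AI service's own keys stay behind the gateway. The gateway hostname and path below are hypothetical placeholders.

```python
import os
import urllib.request

# Hypothetical APIM gateway hostname; yours is assigned when you create the instance.
APIM_HOST = "https://contoso.azure-api.net"

def apim_headers(subscription_key: str) -> dict:
    """Headers an APIM-fronted call typically carries: the subscription key
    identifies the caller, so the gateway can apply per-subscriber
    throttling, logging, and authorization policies."""
    return {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }

def call_ai_service(path: str, body: bytes) -> bytes:
    """POST to an AI service through the gateway. The backend service's
    own key never leaves APIM; the caller only holds the subscription key."""
    req = urllib.request.Request(
        APIM_HOST + path,
        data=body,
        headers=apim_headers(os.environ["APIM_SUBSCRIPTION_KEY"]),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```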
In this episode, we partner with the AI Advisory team in Microsoft for Startups to explore a unique use case from a leading startup in the program: OneAI. OneAI's approach to enterprise AI is to curate and fine-tune the world's top AI capabilities and package them as APIs, empowering businesses to deploy tailored AI solutions in days. We will explore how their AI solutions are designed to ensure consistent, predictable output and alignment with source documents, bolstering trust and enhancing business outcomes. We will also share product demos of building their AI Agent, optimizing both the tuning process and long-term performance in terms of cost, speed, and carbon footprint, all while emphasizing transparency and explainability.
Seamlessly search your first- and third-party enterprise data with Google Agentspace. Discover its ready-to-use agents and other upcoming features, transforming how you work and access information.
Check out Google’s recommended architecture for bringing agents to your products, powered by Gemini, Firebase, Google Cloud, and Angular.
Enter our court and get pro-level jumpshot coaching from Google AI. Get instant feedback on your form, receive personalized tips, and find out exactly how to elevate your game.
APIs dominate the web, accounting for the majority of all internet traffic. And more AI means more APIs, because they act as an important mechanism to move data into and out of AI applications, AI agents, and large language models (LLMs). So how can you make sure all of these APIs are secure? In this session, we’ll take you through OWASP’s top 10 API and LLM security risks, and show you how to mitigate these risks using Google Cloud’s security portfolio, including Apigee, Model Armor, Cloud Armor, Google Security Operations, and Security Command Center.
Did you know? You only need One API to Rule Them All! Google's Gen AI SDK provides a simple path to both the Gemini API and Vertex AI!
Play a game of pinball to learn about how Model Armor can protect your LLM app. Shoot shots to send prompts to Gemini, and watch as Model Armor detects and blocks prohibited prompts and responses.
This hands-on lab equips you with the practical skills to build and deploy a real-world AI-powered chat application leveraging the Gemini LLM APIs. You'll learn to containerize your application using Cloud Build, deploy it seamlessly to Cloud Run, and explore how to interact with the Gemini LLM to generate insightful responses. This hands-on experience will provide you with a solid foundation for developing engaging and interactive conversational applications.
If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account using both your work and personal email addresses. You will also need to authenticate your account (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!
Use agentic AI to unify diverse data sources – including Firestore, web searches, and API services – to create individualized experiences for each customer and help them purchase your products.
Build smart context systems with retrieval-augmented generation (RAG) pipelines, vector embeddings, or built-in search capabilities.
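A minimal sketch of the retrieval step in such a pipeline: embed the documents and the query, rank by cosine similarity, and prepend the top matches to the prompt. A toy bag-of-words count stands in for a real embedding model here, and all function names are illustrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts. A real pipeline would
    call an embedding model and get a dense vector instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swapping `embed` for a real embedding API and the list scan for a vector database is what turns this sketch into a production RAG pipeline.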
From idea to app in minutes. Call our app-building AI agent, which will help you build a functional app over a short call, powered by Customer Engagement Suite with Google AI.
Explore pre-built AI agents available in Google Agentspace. Learn how to quickly deploy and customize these agents for your needs.
Transform vehicle operational data into actionable intelligence. Reduce maintenance downtime, optimize fleet operations, and make data-driven decisions through interactive diagnostics using Gemini models and BigQuery.
New workflow features are coming to Google Workspace! Browse premade workflow templates to get started or build workflows using Google AI.