AstraZeneca has implemented a "platform" approach, which serves as a centralized repository of standardized, enterprise-grade, reusable services and capabilities that are accessible to AI factories. This platform includes user interfaces, APIs that integrate AI services with enterprise systems, and supporting resources such as data import tools and agent orchestration services. AstraZeneca will share how, starting with a few generative AI use cases, they successfully identified common services and capabilities and subsequently standardized these elements to maximize their applicability through the platform. These solutions leverage technologies like GPT models, Natural Language Processing, and Retrieval-Augmented Generation (RAG) architecture.
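The abstract does not disclose implementation details, so as a rough, hedged illustration of the RAG pattern it references, the Python sketch below retrieves the most relevant documents for a question and passes them as context to a generative model. The embed and generate helpers are hypothetical stand-ins, not AstraZeneca's platform APIs.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy bag-of-words hashing embedder; a real platform service would call
    # an embedding model behind an enterprise API instead.
    v = np.zeros(256)
    for tok in text.lower().split():
        v[hash(tok) % 256] += 1.0
    return v

def generate(prompt: str) -> str:
    # Placeholder for a GPT-class model call exposed through the platform.
    return f"[model response to a prompt of {len(prompt)} characters]"

documents = [
    "Standard operating procedure for sample handling.",
    "Glossary of clinical trial terminology.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question: str, k: int = 1) -> str:
    q = embed(question)
    # Cosine similarity between the question and each document.
    sims = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9
    )
    context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:k])
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

print(answer("How should samples be handled?"))
```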
Organizations today require more than dashboards—they need applications that combine insights with data collection and action capabilities to drive meaningful change. In this session, Stipo Josipovic (Director of Product) will showcase the key innovations enabling this shift, from expanded write-back capabilities to workflow automation features.
You'll learn about Sigma's growing data app capabilities, including:
- Enhanced write-back features: Redshift and upcoming BigQuery support, bulk data entry, and form-based collection for structured workflows
- Advanced security controls: Conditional editing and row-level security for precise data governance
- Intuitive interface components: Containers, modals, and tabbed navigation for app-like experiences
- Powerful Actions framework: API integrations, notifications, and automated triggers to drive business processes

This session covers both recently released features and Sigma's upcoming roadmap, including detail views, simplified form-building, and new API actions to integrate with your tech stack. Discover how Sigma helps organizations move beyond analysis to meaningful action.
➡️ Learn more about Data Apps: https://www.sigmacomputing.com/product/data-applications?utm_source=youtube&utm_medium=organic&utm_campaign=data_apps_conference&utm_content=pp_data_apps
➡️ Sign up for your free trial: https://www.sigmacomputing.com/go/free-trial?utm_source=youtube&utm_medium=video&utm_campaign=free_trial&utm_content=free_trial
AI agents have enterprises in a chokehold. From drafting your emails and scheduling your calendar to chatbots and omni-channel contact centre solutions with API integrations, a lot is changing in white-collar jobs. But alongside the rise of trad wives we have the Stepford Wives, so I am 3D printing a robot to make my bed, iron, and empty the dishwasher. How will embodied AI reshape what it means to be human, and how do we stay ahead of the curve?
This concise yet comprehensive guide shows developers and architects how to tackle data integration challenges with MuleSoft. Authors Pooja Kamath and Diane Kesler take you through the process necessary to build robust and scalable integration solutions step-by-step. Supported by real-world use cases, Building Integrations with MuleSoft teaches you to identify and resolve performance bottlenecks, handle errors, and ensure the reliability and scalability of your integration solutions. You'll explore MuleSoft's robust set of connectors and their components, and use them to connect to systems and applications from legacy databases to cloud services.
- Ask the right questions to determine your use case, define requirements, decide on reuse versus rebuild, and create sequence and context diagrams
- Master tools like the Anypoint Platform, Anypoint Studio, Code Builder, GitHub, and Maven
- Design APIs with RAML and OAS and craft effective requests and responses
- Write MUnit tests, validate DataWeave expressions, and use Postman Collections
- Deploy Mule applications to CloudHub, use API Manager to create API proxies, and secure APIs with Mule OAuth 2.0
- Learn message orchestration techniques for routers, transactions, error handling, For Each, Parallel For Each, and batch processing
This book takes an advanced dive into using Tableau for professional data visualization and analytics. You will learn techniques for crafting highly interactive dashboards, optimizing their performance, and leveraging Tableau's APIs and server features. With a focus on real-world applications, this resource serves as a guide for professionals aiming to master advanced Tableau skills.

What this book will help me do:
- Build robust, high-performing Tableau data models for enterprise analytics.
- Use advanced geospatial techniques to create dynamic, data-rich mapping visualizations.
- Leverage APIs and developer tools to integrate Tableau with other platforms.
- Optimize Tableau dashboards for performance and interactivity.
- Apply best practices for content management and data security in Tableau implementations.

Author(s): Pablo Sáenz de Tejada and Daria Kirilenko are seasoned Tableau experts with vast professional experience in implementing advanced analytics solutions. Pablo specializes in enterprise-level dashboard design and has trained numerous professionals globally. Daria focuses on integrating Tableau into complex data ecosystems, bringing a practical and innovative approach to analytics.

Who is it for? This book is tailored for professionals such as Tableau developers, data analysts, and BI consultants who already have a foundational knowledge of Tableau. It is ideal for those seeking to deepen their skills and gain expertise in tackling advanced data visualization challenges. Whether you work in corporate analytics or enjoy exploring data in your own projects, this book will enhance your Tableau proficiency.
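As a small, hedged illustration of the kind of API integration the book covers, the sketch below uses the open-source Tableau Server Client (tableauserverclient) Python library to sign in with a personal access token and list workbooks on a site. The server URL, token name/secret, and site are placeholder values, not examples from the book.

```python
import tableauserverclient as TSC

# Placeholder credentials: replace with your server URL, PAT name/secret, and site.
auth = TSC.PersonalAccessTokenAuth("my-token-name", "my-token-secret", site_id="my-site")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    # Fetch the first page of workbooks on the site and print basic metadata.
    workbooks, pagination = server.workbooks.get()
    for wb in workbooks:
        print(wb.name, wb.project_name)
```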
This project aims to develop an intelligent system using computer vision to identify individual jaguars by their unique facial and body patterns. A Vision Transformer (ViT) and advanced self-attention models will be used for segmentation and classification, with fine-tuned embeddings to enhance accuracy. The system will aid zoologists in tracking jaguars, especially after natural disasters, and will be deployed as an API for practical use.
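The summary does not include code, so as one hedged sketch of how ViT embeddings could support individual re-identification, the snippet below uses a pretrained torchvision ViT as a feature extractor and matches a query photo against a small gallery by cosine similarity. File names are hypothetical, and a production system would fine-tune the embeddings as the project describes.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models

# Pretrained ViT used as a feature extractor (classification head removed).
weights = models.ViT_B_16_Weights.DEFAULT
model = models.vit_b_16(weights=weights)
model.heads = torch.nn.Identity()  # expose the 768-dim image representation
model.eval()
preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.normalize(model(img), dim=-1)  # (1, 768), unit length

# Hypothetical gallery of known individuals and a new sighting to identify.
gallery = {"jaguar_01": embed("jaguar_01.jpg"), "jaguar_02": embed("jaguar_02.jpg")}
query = embed("unknown_sighting.jpg")
best = max(gallery, key=lambda name: float(query @ gallery[name].T))
print("Closest match:", best)
```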
APIs dominate the web, accounting for the majority of all internet traffic. And more AI means more APIs, because they act as an important mechanism to move data into and out of AI applications, AI agents, and large language models (LLMs). So how can you make sure all of these APIs are secure? In this session, we’ll take you through OWASP’s top 10 API and LLM security risks, and show you how to mitigate these risks using Google Cloud’s security portfolio, including Apigee, Model Armor, Cloud Armor, Google Security Operations, and Security Command Center.
Learn how to manage security controls and licenses for thousands of users, and tie it all together with APIs. We’ll show you ways to manage developer access more efficiently, build custom management integrations, and keep your CISO happy at the same time. We’ll also demo the new Gemini Code Assist integration with Apigee, which lets developers use Gemini Code Assist chat to generate context-aware OpenAPI specifications that reuse components from other APIs in their organization for efficiency and reference organizational security standards.
Unlock the power of code execution with Gemini 2.0 Flash! This hands-on lab demonstrates how to generate and run Python code directly within the Gemini API. Learn to use this capability for tasks like solving equations, processing text, and building code-driven applications.
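The lab's own notebook isn't reproduced here; the following is a minimal sketch, assuming the google-genai Python SDK, of enabling the code execution tool so Gemini 2.0 Flash writes and runs Python to answer a question. Class and field names reflect the SDK as currently documented and may differ slightly across versions.

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Write and run Python code to find the roots of x^2 - 5x + 6 = 0.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

# The response interleaves model text, the generated code, and its output.
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    if part.executable_code:
        print("Generated code:\n", part.executable_code.code)
    if part.code_execution_result:
        print("Execution output:\n", part.code_execution_result.output)
```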
If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!
This hands-on lab introduces Gemini 2.0 Flash, the powerful new multimodal AI model from Google DeepMind, available through the Gemini API in Vertex AI. You'll explore its significantly improved speed, performance, and quality while learning to leverage its capabilities for tasks like text and code generation, multimodal data processing, and function calling. The lab also covers advanced features such as asynchronous methods, system instructions, controlled generation, safety settings, grounding with Google Search, and token counting.
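As a hedged example of two of the features the lab lists (system instructions and controlled generation), the sketch below uses the google-genai SDK against Vertex AI to request JSON conforming to a schema. The project and location are placeholders, and the exact SDK surface may vary by version.

```python
from pydantic import BaseModel
from google import genai
from google.genai import types

# Vertex AI routing: placeholder project and location values.
client = genai.Client(vertexai=True, project="my-project", location="us-central1")

class UseCase(BaseModel):
    use_case: str

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="List three common uses of function calling in LLM applications.",
    config=types.GenerateContentConfig(
        system_instruction="You are a concise technical writer.",
        response_mime_type="application/json",
        response_schema=list[UseCase],  # controlled generation against a schema
    ),
)
print(response.text)  # JSON array matching the UseCase schema
```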
Cloud Run is an ideal platform for hosting AI applications – for example, you can use Cloud Run with AI frameworks like LangChain or Firebase Genkit to orchestrate calls to AI models on Vertex AI, vector databases, and other APIs. In this session, we’ll dive deep into building AI agents on Cloud Run to solve complex tasks and explore several techniques, including tool calling, multi-agent systems, memory state management, and code execution. We’ll showcase interactive examples using popular frameworks.
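The session doesn't prescribe a single implementation, so as a framework-agnostic sketch of the tool-calling loop such agents rely on, the code below routes model "tool call" requests to registered Python functions until the model returns a final answer. The call_model function is a hypothetical stand-in for an LLM call (for example, to a model on Vertex AI).

```python
import json
from typing import Callable

# Registry of tools the agent may invoke; in a real service these might wrap
# databases, vector stores, or other APIs reachable from Cloud Run.
TOOLS: dict[str, Callable[..., str]] = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def call_model(messages: list[dict]) -> dict:
    # Hypothetical stand-in for an LLM call: a real implementation would send
    # `messages` to a model and parse its structured tool-call response.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"answer": "It is sunny in Paris."}

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    for _ in range(5):  # cap the number of reasoning/tool steps
        decision = call_model(messages)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "No answer within the step budget."

print(run_agent("What's the weather in Paris?"))
```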
In today’s fast-paced market, data is key to innovation. This session explores how Apigee, combined with Google Distributed Cloud, enables organizations to unlock the value of their data, regardless of its location. Learn how to operationalize data across legacy systems, the cloud, and edge environments to build cutting-edge solutions like generative AI and advanced analytics. Discover how Apigee simplifies data accessibility and interoperability, accelerating your time to market and maximizing the potential of your data assets.
Tensor Processing Units (TPUs) are hardware accelerators designed by Google specifically for large-scale AI/ML computations. Google's new Trillium TPUs are our most performant and energy-efficient TPUs to date, and offer unprecedented levels of scalability. Ray is a unified framework for orchestrating AI/ML workloads on large compute clusters. Ray offers Python-native APIs for training, inference, tuning, reinforcement learning, and more. In this lightning talk, we will demonstrate how you can use Ray to manage workloads on TPUs with an easy-to-use API. We will cover: 1) training your models with MaxText, 2) tuning models with Hugging Face, and 3) serving models with vLLM. Attendees will gain an understanding of how to build a complete, end-to-end AI/ML infrastructure with Ray and TPUs.
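As a minimal, hedged sketch of Ray's Python-native API for placing work on accelerator hosts, the snippet below schedules remote tasks that request TPU resources. The "TPU" resource label and chip count are assumptions about how the cluster is configured, and real training, tuning, or serving code (MaxText, Hugging Face, vLLM) would replace the placeholder task body.

```python
import ray

ray.init()  # connect to the Ray cluster (or start a local one)

@ray.remote(resources={"TPU": 4})  # assumed resource label; requires a cluster
def train_step(shard_id: int) -> str:  # that advertises TPU resources
    # Placeholder body: a real task would run a MaxText / Hugging Face / vLLM
    # workload on the TPU chips assigned to this task.
    return f"shard {shard_id} done"

# Fan out eight tasks and gather their results.
results = ray.get([train_step.remote(i) for i in range(8)])
print(results)
```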
Flutter powers beautiful apps featuring custom designs, high performance, and access to the latest iOS APIs and features, while Firebase SDKs for Flutter make it easy to add backend functionality. Join this session to learn how to use these technologies together to build full-stack apps for iOS and then share that code across Android, web, desktop, and more.
Unlock the true potential of your e-commerce platform by delivering exceptional product discovery experiences. This session delves into the nuances of transitioning to intelligent search with Vertex AI Search for Commerce. Discover how to architect robust and scalable data pipelines for catalog and event ingestion, the very foundation for personalized product discovery and delighted users. We'll explore the anatomy of critical data like search events and demonstrate how a thoughtful approach to data ingestion directly fuels the quality of your search results. More than just search, we'll unveil how combining these APIs unlocks opportunities for implementing creative and engaging features, paving the way for innovative experiences like conversational commerce. Join us to learn how to transform your search into a revenue-driving engine.
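As a hedged illustration of the event-ingestion side described above, the sketch below writes a single search user event with the google-cloud-retail Python client. The project ID and visitor ID are placeholders, and field names should be checked against the current Vertex AI Search for Commerce API reference.

```python
from google.cloud import retail_v2

# Placeholder project; the default catalog path is the usual ingestion target.
parent = "projects/my-project/locations/global/catalogs/default_catalog"

client = retail_v2.UserEventServiceClient()

user_event = retail_v2.UserEvent(
    event_type="search",           # the event kind that feeds search tuning
    visitor_id="visitor-123",      # stable per-browser/app identifier
    search_query="red running shoes",
)

# Write one event; bulk or streaming ingestion would use the import RPC instead.
request = retail_v2.WriteUserEventRequest(parent=parent, user_event=user_event)
written = client.write_user_event(request=request)
print(written.event_type)
```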
In today’s complex landscape, APIs can reside anywhere – on premises, in the cloud, or across multiple cloud providers. Join us to discover how API Hub, powered by its innovative on-ramp framework and app integration, delivers a truly unified view of your entire API ecosystem. This session is for practitioners looking to learn how to better discover, manage, and secure all their APIs, regardless of location, with comprehensive analytics and consistent governance policies.
Unlock the power of natural language with Looker Agents! This technical deep dive will walk you through an agentic architecture in Looker Conversational Analytics and showcase how the Chief Product Officer of Zeotap is helping Zeotap customers “chat with their data” within the Zeotap platform using the new Conversational Analytics API. Learn how to build custom data agents, answer questions in Workspace, and create analytics applications with the power of conversational AI.
This talk demonstrates a fashion app that leverages the power of AlloyDB, Google Cloud's fully managed PostgreSQL-compatible database, to provide users with intelligent recommendations for matching outfits. When users upload images of their clothes, the app generates styling insights on how to pair the outfit, along with matching real-time fashion advice. This is enabled through an intuitive contextual search (vector search) powered by AlloyDB and Google's ScaNN index, delivering faster vector search results, low-latency querying, and quick response times. While we're at it, we'll showcase the power of the AlloyDB columnar engine on the joins the application requires to generate style recommendations. To complete the experience, we'll engage the Vertex AI Gemini API through Spring and LangChain4j integrations for generative recommendations and a visual representation of the personalized style. This entire application is built on the Java Spring Boot framework and deployed serverlessly on Cloud Run, ensuring scalability and cost efficiency. This talk explores how these technologies work together to create a dynamic and engaging fashion experience.
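The talk's stack is Java Spring Boot, but the underlying vector search can be sketched in a few lines of SQL; the snippet below (shown with psycopg2 for consistency with the other examples) runs a cosine-distance nearest-neighbour query against a hypothetical outfits table with a pgvector embedding column. Table, column, and connection details are assumptions, and the ScaNN index would be created separately on the same column.

```python
import psycopg2

# Placeholder connection string for an AlloyDB (PostgreSQL-compatible) instance.
conn = psycopg2.connect("host=10.0.0.5 dbname=fashion user=app password=secret")

def top_matches(query_embedding: list[float], k: int = 5):
    # pgvector's <=> operator is cosine distance; smaller means more similar.
    sql = """
        SELECT id, description
        FROM outfits
        ORDER BY embedding <=> %s::vector
        LIMIT %s
    """
    vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with conn.cursor() as cur:
        cur.execute(sql, (vec_literal, k))
        return cur.fetchall()

print(top_matches([0.01] * 768))
```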
Experience a new way to interact with LLM-powered agents! With Gemini 2.0 and Multimodal Live API, users can give audible instructions and show visual content from a camera or screen, while receiving spoken responses from the model. This enables more natural, timely communication and unlocks multimodal agent workflows. This session showcases how existing agent experiences can be adapted for voice and visual cues, and explores new possibilities with this technology.