A 2-hour workshop on extracting data from websites with Python, automating collection and analysis, using Jupyter notebooks.
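As a minimal sketch of the kind of extraction such a workshop covers, assuming the requests and BeautifulSoup libraries (the listing only says "Python"); the URL and CSS selector are hypothetical:

```python
# Sketch: fetch a page and extract talk titles from it.
# Assumes requests and beautifulsoup4 are installed; the URL and
# the "h2.talk-title" selector are illustrative placeholders.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/talks", timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Collect the text of every node matched by the selector.
titles = [node.get_text(strip=True) for node in soup.select("h2.talk-title")]
print(titles)
```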
talk-data.com
Topic: deep learning (20 tagged)
Top Events
An immersive 3-hour training covering the basics of web development, APIs, artificial intelligence, and deep learning, with hands-on exercises (a calculator, a random draw, an image converter). Led live by an expert trainer; open to everyone, with no prerequisites.
An immersive 3-hour training covering the basics of web development, APIs, artificial intelligence, and deep learning, using accessible and playful tools. Challenge after challenge, participants gain autonomy and come to understand how software works.
Abstract: In this talk we will explore the world of imaging in digital pathology, discover gigapixel images, and learn how to look at them. We will see how deep learning can help predict cancer recurrence in the case of prostate cancer and how models can help pathologists discover new biomarkers.
Predicting how biomolecules such as proteins and DNA bind is crucial for breakthroughs in genetics, drug discovery, and disease research. Traditional methods are slow and costly. Our project uses AI to predict binding strength directly from 3D structures, drastically cutting time and cost. By training a deep learning model on a specific protein and many mutated DNA variants, we can quickly determine the strength of the interaction. This helps researchers scale, test ideas faster, improve models sooner, and expedite scientific progress.
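The abstract names no framework; as a minimal sketch of the idea, assuming PyTorch and a toy encoding of DNA variants (the actual model works from 3D structures, so sizes, encoding, and data here are hypothetical stand-ins):

```python
# Sketch: regress binding strength from encoded DNA variants with a small MLP.
# Assumes PyTorch; the featurization and data are illustrative only.
import torch
import torch.nn as nn

SEQ_LEN, BASES = 20, 4  # e.g. a one-hot encoding over A/C/G/T

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(SEQ_LEN * BASES, 128),
    nn.ReLU(),
    nn.Linear(128, 1),  # predicted binding strength
)

# Toy batch: 32 random variants with random target affinities.
x = torch.randn(32, SEQ_LEN, BASES)
y = torch.randn(32, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```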
We have built AI-driven tools to automate the assessment of key heart parameters from point-of-care ultrasound, including Right Atrial Pressure (RAP) and Ejection Fraction (EF). In collaboration with UCSF, we trained deep learning models on a proprietary dataset of over 15,000 labeled ultrasound studies and deployed the full pipeline in a real-time iOS app integrated with the Butterfly probe. A UCSF-led clinical trial has validated the RAP workflow, and we are actively expanding the system to support EF prediction using both A4C and PLAX views.

This talk will present our end-to-end pipeline, from dataset development and model training to mobile deployment, demonstrating how AI can enable real-time heart assessments directly at the point of care.
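This is not the authors' pipeline; purely as a generic sketch of the underlying idea of regressing an ejection fraction from ultrasound frames with a CNN, assuming PyTorch (architecture and shapes are hypothetical):

```python
# Sketch: a small CNN regressing ejection fraction (EF, in percent) from a
# single grayscale ultrasound frame. Assumes PyTorch; illustrative only.
import torch
import torch.nn as nn

class EFRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return 100 * torch.sigmoid(self.head(z))  # bound EF to 0-100%

frames = torch.randn(4, 1, 224, 224)  # toy batch of frames
print(EFRegressor()(frames).shape)    # torch.Size([4, 1])
```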
On digital platforms (or in physical settings) where customers arrive repeatedly, the platform must decide which options to offer them, as this influences their subsequent choices as well as the outcomes for both user and platform. This project will 1) classify customers on a digital platform by their latent motivation (e.g., whether they are here to learn vs. earn, or which of several emotions primarily drives them) using deep mixture models, 2) predict consumer actions by using deep learning to estimate the parameters of the consumer choice model, and 3) use reinforcement learning to select customer offerings/recommendations that optimize a long-term outcome.
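As a minimal sketch of step 2, assuming PyTorch: a multinomial-logit choice model whose option utilities come from a small network (the project's actual parameterization is not specified, and the shapes here are hypothetical):

```python
# Sketch: neural multinomial-logit choice model. A network maps each
# offered option's features to a utility; choice probabilities are the
# softmax over the offered set. Assumes PyTorch; illustrative only.
import torch
import torch.nn as nn

N_OPTIONS, N_FEATURES = 5, 8
utility_net = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

options = torch.randn(N_OPTIONS, N_FEATURES)    # features of the offered options
utilities = utility_net(options).squeeze(-1)    # one utility per option
choice_probs = torch.softmax(utilities, dim=0)  # P(customer picks each option)
print(choice_probs)
```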
Second workshop in our series on financial data automation and analysis. Outline covers Data Acquisition & Preprocessing (Yahoo Finance API, SQLite storage, generating additional variables); Exploratory Analysis & Time-Series Features; Forecasting Techniques (Prophet, ARIMA, Deep Learning); Visualization & Deployment (Plotly, Streamlit Cloud, GitHub Actions).
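As a minimal sketch of the acquisition-storage-forecasting steps in that outline, assuming the yfinance and prophet packages (the outline names the Yahoo Finance API and Prophet, not specific libraries); the ticker and database name are hypothetical:

```python
# Sketch: pull daily prices, store them in SQLite, fit a simple Prophet
# forecast. Assumes yfinance and prophet are installed; illustrative only.
import sqlite3
import yfinance as yf
from prophet import Prophet

df = yf.download("AAPL", start="2020-01-01")["Close"].reset_index()
df.columns = ["ds", "y"]  # Prophet expects these column names

with sqlite3.connect("prices.db") as con:
    df.to_sql("aapl_close", con, if_exists="replace", index=False)

m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=30)  # 30-day horizon
forecast = m.predict(future)
print(forecast[["ds", "yhat"]].tail())
```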
In this talk, we will share insights into the use of ML techniques, such as object detection and classification, to improve video meetings on our Cisco devices. We'll discuss our wide range of ML models and their respective use cases. The session will include a focused examination of our head detection model, detailing its fundamental principles and demonstrating the specific features it enables to refine the video meeting experience.
Personal Meditation Guide. Jhāna.AI is an interactive voice assistant that uses real-time brain sensing to guide the user through ancient Jhāna meditation, toward states of concentration, bliss, and calm, and relief from pain. Jhāna.AI combines biofeedback, deep learning, and natural speech interaction to deliver personalised guided meditation sessions.
Introduction to reinforcement learning and deep learning concepts.
Workshop covering Foundations of Machine Learning and Deep Learning; Medical Imaging and Computer Vision; Video Processing and Pose Estimation; Comprehensive Project Work and Integration.
We will trace 3D reconstruction from classical SfM/MVS through the deep-learning shift to transformer-based models like VGGT that tackle multiple 3D vision tasks at once. This talk is for anyone at the intersection of deep learning and 3D vision who wants to understand how these tools are redefining the state of the art and the future of spatial AI.
Hear from industry titans who are leveraging AI to drive unprecedented growth.