talk-data.com

Topic: deep learning · 20 tagged

Activity Trend: 2020-Q1 to 2026-Q1 (peak 1/qtr)

Activities

20 activities · Newest first

Predicting how biomolecules such as proteins and DNA bind is crucial for breakthroughs in genetics, drug discovery, and disease research, yet traditional methods are slow and costly. Our project uses AI to predict binding strength directly from 3D structures, drastically cutting time and cost. By training a deep learning model on a specific protein and many mutated DNA variants, we can quickly estimate the strength of each interaction. This helps researchers scale, test ideas faster, improve models sooner, and expedite scientific progress.
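
As a hedged illustration of the modeling idea, simplified to sequence input rather than full 3D structure (every name and hyperparameter below is invented for the sketch), a small convolutional regressor over one-hot-encoded DNA variants might look like this:

```python
# Illustrative sketch only: a 1D CNN that regresses binding strength from
# one-hot DNA variants of a fixed protein partner. Sizes are arbitrary.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA sequence as a (4, L) one-hot tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).T.float()

class BindingRegressor(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over sequence length
        )
        self.head = nn.Linear(hidden, 1)  # scalar binding strength

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x).squeeze(-1)).squeeze(-1)

model = BindingRegressor()
batch = torch.stack([one_hot("ACGTACGTAC"), one_hot("ACGTTCGTAC")])  # two toy variants
print(model(batch).shape)  # torch.Size([2]): one predicted strength per variant
```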

We have built AI-driven tools to automate the assessment of key heart parameters from point-of-care ultrasound, including Right Atrial Pressure (RAP) and Ejection Fraction (EF). In collaboration with UCSF, we trained deep learning models on a proprietary dataset of over 15,000 labeled ultrasound studies and deployed the full pipeline in a real-time iOS app integrated with the Butterfly probe. A UCSF-led clinical trial has validated the RAP workflow, and we are actively expanding the system to support EF prediction using both A4C and PLAX views. This talk will present our end-to-end pipeline, from dataset development and model training to mobile deployment—demonstrating how AI can enable real-time heart assessments directly at the point of care.
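
Because the pipeline ends in a real-time iOS app, here is a minimal sketch of what the deployment step might look like; the model, input size, and class count are illustrative stand-ins, not the actual RAP network:

```python
# Hypothetical stand-in model exported from PyTorch to Core ML for on-device use.
import torch
import torch.nn as nn
import coremltools as ct

class RAPClassifier(nn.Module):
    """Toy stand-in: one grayscale ultrasound frame -> 3 pressure classes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 3),
        )

    def forward(self, x):
        return self.net(x)

model = RAPClassifier().eval()
example = torch.rand(1, 1, 224, 224)              # one grayscale frame
traced = torch.jit.trace(model, example)          # TorchScript graph for conversion
mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)],
                     convert_to="mlprogram")
mlmodel.save("RAPClassifier.mlpackage")           # loadable from the iOS app
```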

On digital platforms (or in physical settings) where customers arrive repeatedly, the platform must decide which options to offer them, since this influences their subsequent choices as well as the outcomes for both user and platform. This project will 1) classify customers on a digital platform by their latent motivation (for example, whether they are here to learn or to earn, or which of several emotions primarily drives them) using deep mixture models, 2) predict consumer actions by using deep learning to estimate the parameters of a consumer choice model, and 3) use reinforcement learning to select customer offerings/recommendations that optimize a long-term outcome.
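
A minimal sketch of step 1 under illustrative assumptions: K latent motivations as mixture components, each with its own softmax choice head, trained by maximizing the marginal log-likelihood over the latent class (all names and sizes are hypothetical):

```python
# Illustrative latent-class choice model: K motivations, one choice head each.
import torch
import torch.nn as nn

K, D, A = 3, 8, 5   # latent classes, feature dim, number of offered options

class LatentClassChoice(nn.Module):
    def __init__(self):
        super().__init__()
        self.mix = nn.Linear(D, K)   # class membership logits from behavior features
        self.heads = nn.ModuleList([nn.Linear(D, A) for _ in range(K)])

    def log_prob(self, x, choice):
        log_pi = torch.log_softmax(self.mix(x), dim=-1)               # (N, K)
        log_pc = torch.stack(
            [torch.log_softmax(h(x), dim=-1) for h in self.heads], dim=1
        )                                                             # (N, K, A)
        chosen = log_pc.gather(
            2, choice.view(-1, 1, 1).expand(-1, K, 1)
        ).squeeze(-1)                                                 # (N, K)
        return torch.logsumexp(log_pi + chosen, dim=-1)               # marginal over classes

model = LatentClassChoice()
x = torch.randn(32, D)                     # per-customer behavioral features (toy)
choice = torch.randint(0, A, (32,))        # observed selections
loss = -model.log_prob(x, choice).mean()   # maximize the marginal likelihood
loss.backward()
```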

Second workshop in our series on financial data automation and analysis. Outline covers Data Acquisition & Preprocessing (Yahoo Finance API, SQLite storage, generating additional variables); Exploratory Analysis & Time-Series Features; Forecasting Techniques (Prophet, ARIMA, Deep Learning); Visualization & Deployment (Plotly, Streamlit Cloud, GitHub Actions).
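
A hedged sketch of the first three stages, using the yfinance library for the Yahoo Finance API (ticker, horizon, and table name are illustrative):

```python
# Hedged pipeline sketch: acquire -> store -> forecast. Ticker is illustrative.
import sqlite3
import yfinance as yf
from prophet import Prophet

# 1) Data acquisition: daily close prices via the Yahoo Finance API (yfinance)
df = yf.Ticker("AAPL").history(start="2020-01-01")[["Close"]].reset_index()
df["Date"] = df["Date"].dt.tz_localize(None)    # Prophet needs tz-naive dates

# 2) Storage: persist the raw series in SQLite for later workshop stages
with sqlite3.connect("prices.db") as con:
    df.to_sql("close_prices", con, if_exists="replace", index=False)

# 3) Forecasting: Prophet expects columns named ds (date) and y (value)
train = df.rename(columns={"Date": "ds", "Close": "y"})
m = Prophet().fit(train)
future = m.make_future_dataframe(periods=30)    # forecast 30 days ahead
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```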

In this talk, we will share insights into the use of ML techniques, such as object detection and classification, to improve video meetings on our Cisco devices. We'll discuss our wide range of ML models and their respective use cases. The session will include a focused examination of our head detection model, detailing the fundamental principles and demonstrating the specific functionalities it facilitates to refine the video meeting experience.
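
Cisco's production models are proprietary, so the sketch below substitutes a generic pretrained detector (torchvision's Faster R-CNN) to show how a detection model can drive auto-framing: detect boxes, keep confident ones, and compute a crop that covers all subjects:

```python
# Stand-in detector: torchvision Faster R-CNN used the way a head detector
# would drive framing decisions on a meeting device.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)            # stand-in for one captured video frame
with torch.no_grad():
    out = model([frame])[0]                # dict of boxes, labels, scores

keep = out["scores"] > 0.8                 # confident detections only
boxes = out["boxes"][keep]
if len(boxes):
    # Union of boxes -> a crop that keeps every detected subject in frame
    x1, y1 = boxes[:, 0].min(), boxes[:, 1].min()
    x2, y2 = boxes[:, 2].max(), boxes[:, 3].max()
    print("auto-frame crop:", [int(v) for v in (x1, y1, x2, y2)])
```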

Personal Meditation Guide. Jhāna.AI is an interactive voice assistant that uses real-time brain sensing to guide the user through ancient Jhāna meditation, helping them reach states of concentration, bliss, and calm, and find relief from pain. It combines cutting-edge biofeedback, deep learning, and natural speech interaction to deliver personalised guided meditation sessions.
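
As an illustrative sketch of the biofeedback idea only (not Jhāna.AI's actual method; the sampling rate and band choices are assumptions), one could estimate relative alpha-band power from an EEG window and use it as a calm signal to pace the voice guidance:

```python
# Toy biofeedback signal: relative alpha power from one EEG window.
import numpy as np
from scipy.signal import welch

FS = 256                                  # assumed EEG sampling rate (Hz)
window = np.random.randn(FS * 4)          # 4 s of one-channel EEG (stand-in data)

freqs, psd = welch(window, fs=FS, nperseg=FS)
alpha = psd[(freqs >= 8) & (freqs <= 12)].mean()    # alpha band (8-12 Hz)
total = psd[(freqs >= 1) & (freqs <= 40)].mean()    # broadband reference
calm_score = alpha / total                # crude calm proxy
print(f"calm score: {calm_score:.2f}")    # could pace prompts in the session
```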

We will trace 3D reconstruction from classical SfM/MVS through the deep-learning shift to transformer-based models like VGGT that tackle multiple 3D vision tasks at once. This talk is for anyone at the intersection of deep learning and 3D vision who wants to understand how these tools are redefining the state of the art and the future of spatial AI.
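
To ground the classical end of the talk, here is a self-contained two-view SfM toy using synthetic points and OpenCV's standard calls: recover the relative pose from the essential matrix and triangulate, the per-pair loop that feed-forward models like VGGT replace with a single network pass:

```python
# Two-view SfM toy: known synthetic scene, then pose recovery + triangulation.
import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))  # points in front of camera 1

R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))  # small yaw between views
t_true = np.array([[0.5], [0.0], [0.0]])                    # baseline along x

def project(P, X):
    """Project Nx3 world points with the 3x4 camera matrix K @ P."""
    x = (K @ P @ np.hstack([X, np.ones((len(X), 1))]).T).T
    return x[:, :2] / x[:, 2:]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R_true, t_true])
x1, x2 = project(P1, pts3d), project(P2, pts3d)

E, _ = cv2.findEssentialMat(x1, x2, K)                      # epipolar geometry
_, R, t, _ = cv2.recoverPose(E, x1, x2, K)                  # pose (t only up to scale)
X = cv2.triangulatePoints(K @ P1, K @ np.hstack([R, t]), x1.T, x2.T)
X = (X[:3] / X[3]).T                                        # homogeneous -> 3D points
print("rotation error:", np.linalg.norm(R - R_true))        # ~0 on clean data
```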