talk-data.com


Activities & events

Title & Speakers Event
Is clean code dead? 2025-09-18 · 07:00

17:30 Doors open

18:00 Introduction by Intelequia

18:10 Introduction by Power BI & Fabric Barcelona

18:20 First session: Paola Londoño, Neovantas. People analytics in Power BI: people, not just numbers. A use case of HR data analysis in Power BI, examining the classic 9-box performance evaluation matrix alongside cross-cutting KPIs on satisfaction, job fit, and the personal situation of teams. The use of SVG and DAX in this case can serve as inspiration for other kinds of reports.
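The SVG-plus-DAX trick behind this session can be previewed outside Power BI. The sketch below is Python for illustration only; in a real report the same string would be assembled inside a DAX measure whose data category is set to Image URL, and the function name, colors, and coordinate convention here are all made up for the example:

```python
def nine_box_svg(perf: int, potential: int) -> str:
    """Return a data-URI SVG of a 3x3 grid with the cell at
    (perf, potential) highlighted; both arguments run 1..3.
    Illustrative stand-in for the DAX + SVG pattern described above."""
    cell = 20  # pixels per cell
    rects = []
    for row in range(3):          # potential axis: top row = 3
        for col in range(3):      # performance axis: left column = 1
            fill = ("#2a9d8f"
                    if col + 1 == perf and 3 - row == potential
                    else "#e9ecef")
            rects.append(
                f"<rect x='{col * cell}' y='{row * cell}' "
                f"width='{cell - 2}' height='{cell - 2}' fill='{fill}'/>"
            )
    svg = ("<svg xmlns='http://www.w3.org/2000/svg' "
           f"width='{3 * cell}' height='{3 * cell}'>"
           + "".join(rects) + "</svg>")
    # Prefixing with a data URI lets an Image URL field render it inline.
    return "data:image/svg+xml;utf8," + svg

uri = nine_box_svg(perf=2, potential=3)
print(uri[:40])
```

The same string-building logic translates almost line for line into a DAX measure using `CONCATENATE`/`&`.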

18:55 Second session: Cristina Tosco, Intelequia. Analyze, make decisions, and interact with your data: combining the capabilities of Power BI and Power Apps. Power BI turns your data into clear, visual information for understanding how your business is performing. But what if you could go one step further? Imagine that one of your indicators not only displayed information but also let you interact with the data directly from the report: updating records, launching processes, or making decisions in real time. In this talk you will discover how to embed Power Apps inside Power BI to build interactive solutions that not only inform but also let you act. A powerful combination that turns your reports into living tools connected to the reality of your business.

19:30 Third session: Samuel Piña, Nestlé. New possibilities of the Power BI Enhanced Report format (PBIR). Today there are several ways to manage the data model programmatically. Power BI's new PBIR format (still in preview) also makes it possible to manipulate the visual layer with code. In this talk we will present some of the possibilities that this new format opens up.
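To make the idea concrete, here is a minimal sketch of what "manipulating the visual layer with code" could look like. The JSON structure below is a simplified stand-in, not the official PBIR schema (which is still in preview); the function and field names are illustrative:

```python
import json

def set_visual_titles(report: dict, new_title: str) -> int:
    """Walk a PBIR-like report definition and overwrite every visual's
    title text. Returns the number of visuals updated.

    NOTE: this nested-dict layout is a simplified stand-in for the real
    PBIR schema, which stores each visual as its own JSON file."""
    updated = 0
    for page in report.get("pages", []):
        for visual in page.get("visuals", []):
            visual.setdefault("title", {})["text"] = new_title
            updated += 1
    return updated

# A toy report definition mimicking one page with two visuals.
report = {
    "pages": [
        {"name": "Overview",
         "visuals": [{"type": "barChart"}, {"type": "card"}]}
    ]
}

count = set_visual_titles(report, "Q3 Results")
print(count)  # prints 2
print(json.dumps(report["pages"][0]["visuals"][0]["title"]))
```

Because PBIR stores report definitions as plain JSON on disk, the same pattern scales to bulk edits: load each visual's file, transform the dict, and write it back.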

20:00 Networking with catering, courtesy of Intelequia!

21:00 Close

The speakers:

Paola Londoño, Neovantas. Psychologist specializing in behavioral data analysis, with experience in HR, sales, and consulting. My mission is to help companies connect better with customers and teams, combining technical and business knowledge with a deep understanding of what drives human behavior.

Cristina Tosco, Intelequia. Cristina Tosco is a computer engineer with experience as a full-stack developer. She used to work as a programmer building native applications and web projects. When she came across Microsoft Power Platform technologies, however, she was fascinated. She decided to change direction immediately and has now spent more than a year training in and exploring this world; that change of course has, without a doubt, been her best decision. She currently works as a Business Apps Specialist at Intelequia Technologies, where she and her team deliver business solutions based on model-driven applications and automation, analyzing and developing initiatives to optimize and streamline business processes through innovative technology.

Samuel Piña, Nestlé. 15+ years of experience in BI, mainly in SAP BW and, for the last five, also in Power BI, always working for multinationals. Previously held several positions in Logistics and Production Control. Currently Global Product Manager, Reporting & Dashboards T&P, at Nestlé.

Psychology, Power Apps integration, PBIR, and networking! (Powered by Intelequia)
Women Do Tech Too Conference 2024-10-02 · 12:00

Women Do Tech Too celebrates the vibrant and diverse women's tech community. It is driven by the belief that diversity and inclusion are paramount in fostering innovation and driving progress in the tech sector. This platform is not only for women but also welcomes participation from all individuals who share a passion for championing diversity and equity in the tech community.

Seating and Security

We will have limited seats! Please remember this when replying to the RSVP and update your response if you cannot attend.

Agenda

2 PM - Welcoming notes: Self-doubt and career progression by Cécile Chateau, Engineering Program Manager Director

2:15 PM - From Idea to Action: WomenInTech, a safe place to find inspiration, knowledge, and role models by Alejandra Paredes, Software Developer Engineer, and Estelle Thou, Software Developer Engineer

There will be three main consecutive tracks. Each track features three presentations followed by a Q&A session.

3 PM - Professional retraining, Successes & failures, Recognition at work

Ensure a future of collaboration and diversity in the Tech Industry by Clara Philippot, Ada Tech School Paris Campus director. "How are we training the new generation of developers to learn and iterate from collaboration, agile methodology and empathy?"

The path of staff engineer by Paola Ducolin, Staff Software Engineer (Datadog). "Earlier this year, I was promoted to Staff Engineer at my current company, Datadog. It was a three-year-long path. In this lightning talk, I will share the journey with its ups and downs."

EPM: Product or Engineering? by Agnès Masson-Sibut, Engineering Program Manager. Have you ever wondered what EPM means and what we do? Are we mostly part of the Product organisation or the Engineering organisation? Hopefully, everything will be clearer after this talk.

4 PM - Coffee break

4:15 PM - Privacy and Security

Cookies 101 by Julie Chevrier, Software Developer Engineer. "Have you ever wondered what happens after you click on a cookie consent banner, and what the impact of your choice is on the ads you see? Join me to understand what exactly a cookie is and how it is used for advertising!"

How to make recommendations in a world without 3rd-party cookies by Lucie Mader, Senior Machine Learning Engineer. "Depending on the browser you're using and the website you're visiting, the products in the ads you see might seem strange. We'll discuss this issue and its possible relationship to third-party cookies in this talk."

Privacy in the age of Generative AI by Jaspreet Sandhu, Senior Machine Learning Engineer. "With the advent and widespread integration of Generative AI across applications, industrial or personal, how do we prevent misuse and ensure data privacy, security, and ethical use? This talk delves into the challenges and strategies for safeguarding sensitive information and maintaining user trust in the evolving landscape of AI-driven technologies."
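As a warm-up for the Cookies 101 session (this is not the speaker's material, just a minimal standard-library illustration), a cookie is nothing more than a name=value pair carried in HTTP headers, plus optional attributes that control its scope and lifetime. Python's `http.cookies` module can build and parse them:

```python
from http.cookies import SimpleCookie

# Build the cookie a server would set.
cookie = SimpleCookie()
cookie["uid"] = "abc123"
cookie["uid"]["domain"] = ".example.com"   # which hosts receive it back
cookie["uid"]["path"] = "/"
cookie["uid"]["max-age"] = 3600            # lifetime in seconds
cookie["uid"]["samesite"] = "Lax"          # limits cross-site sending

# The header the server would emit:
header = cookie["uid"].OutputString()
print("Set-Cookie:", header)

# Parsing an incoming Cookie header back into name/value pairs:
incoming = SimpleCookie("uid=abc123; theme=dark")
print({k: v.value for k, v in incoming.items()})
```

The `domain` and `samesite` attributes are exactly what the third-party-cookie debate revolves around: they govern which sites a browser will send the identifier back to.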

🚻 5:15 PM - Break

5:30 PM - User experience

How to translate women’s empowerment into a brand visual identity by Camille Lannel-Lamotte, UI Designer. Uncover how color theory, symbolism, and language come together to shape the new brand image, and get an insider’s view of the key elements that define it.

From Vision to Experience: The Product Manager's Journey in Shaping User-Centric Products by Salma, Senior Product Manager. “Evolution of product managers' roles in creating user-centric products, transitioning from initial vision to crafting meaningful user experiences.”

Crafting Consistency: Integrating a new theme in Criteo’s React Design System by Claire Dochez, Software Developer Engineer. Last year, our team integrated a new theme into Criteo’s design system. This talk will cover the journey, emphasizing the key steps, challenges faced, and lessons learned along the way.

👋 6:30 PM - Closing notes

Have a break and find YOUR own balance with the Wheel of Life! by Sandrine Planchon, Human-Minds, Coach in mental health prevention & Creator of disconnecting experiences. When everything keeps getting faster, to the point of sometimes throwing you off balance, why not slow down for a moment and reflect on YOUR own need for balance in your life? The Wheel of Life can show a way to access it!

🍸 7 PM - Rooftop Cocktail (weather permitting)

If you register for this event, you consent to CRITEO's use of your image, video, voice, or all three. In addition, you waive any right to inspect or approve the finished video recording. You agree that any such image, video, or audio recording and any reproduction thereof shall remain the property of the author and may be used by Criteo as it sees fit. You understand that this consent is perpetual, cannot be revoked, and is binding. You understand that these images may appear publicly on Criteo's website, social media accounts, and/or other marketing materials.

Women Do Tech Too Conference

When: June 27, 2024 – 10:00 AM Pacific / 1:00 PM Eastern

Register for the Zoom: https://voxel51.com/computer-vision-events/june-27-2024-ai-machine-learning-computer-vision-meetup/

Leveraging Pre-trained Text2Image Diffusion Models for Zero-Shot Video Editing

Text-to-image diffusion models demonstrate remarkable editing capabilities in the image domain, especially after Latent Diffusion Models made diffusion models more scalable. Conversely, video editing still has much room for improvement, particularly given the relative scarcity of video datasets compared to image datasets. Therefore, we will discuss whether pre-trained text-to-image diffusion models can be used for zero-shot video editing without any fine-tuning stage. Finally, we will also explore possible future work and interesting research ideas in the field.

About the Speaker

Bariscan Kurtkaya is a KUIS AI Fellow and a graduate student in the Department of Computer Science at Koc University. His research interests lie in exploring and leveraging the capabilities of generative models in the realm of 2D and 3D data, encompassing scientific observations from space telescopes.

Improved Visual Grounding through Self-Consistent Explanations

Vision-and-language models that are trained to associate images with text have shown to be effective for many tasks, including object detection and image segmentation. In this talk, we will discuss how to enhance vision-and-language models’ ability to localize objects in images by fine-tuning them for self-consistent visual explanations. We propose a method that augments text-image datasets with paraphrases using a large language model and employs SelfEQ, a weakly-supervised strategy that promotes self-consistency in visual explanation maps. This approach broadens the model’s working vocabulary and improves object localization accuracy, as demonstrated by performance gains on competitive benchmarks.

About the Speakers

Dr. Paola Cascante-Bonilla received her Ph.D. in Computer Science at Rice University in 2024, advised by Professor Vicente Ordóñez Román, working on Computer Vision, Natural Language Processing, and Machine Learning. She received a Master of Computer Science at the University of Virginia and a B.S. in Engineering at the Tecnológico de Costa Rica. Paola will join Stony Brook University (SUNY) as an Assistant Professor in the Department of Computer Science.

Ruozhen (Catherine) He is a first-year Computer Science PhD student at Rice University, advised by Prof. Vicente Ordóñez, focusing on efficient algorithms in computer vision with less or multimodal supervision. She aims to leverage insights from neuroscience and cognitive psychology to develop interpretable algorithms that achieve human-level intelligence across versatile tasks.

Combining Hugging Face Transformer Models and Image Data with FiftyOne

Datasets and Models are the two pillars of modern machine learning, but connecting the two can be cumbersome and time-consuming. In this lightning talk, you will learn how the seamless integration between Hugging Face and FiftyOne simplifies this complexity, enabling more effective data-model co-development. By the end of the talk, you will be able to download and visualize datasets from the Hugging Face hub with FiftyOne, apply state-of-the-art transformer models directly to your data, and effortlessly share your datasets with others.

About the Speaker

Jacob Marks, PhD is a Machine Learning Engineer and Developer Evangelist at Voxel51, where he leads open source efforts in vector search, semantic search, and generative AI for the FiftyOne data-centric AI toolkit. Prior to joining Voxel51, Jacob worked at Google X, Samsung Research, and Wolfram Research.

June 27 - AI, Machine Learning and Computer Vision Meetup

Peter Voss – guest @ Aigo, Tobias Macey – host

Summary

Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human".

Interview

Introduction
How did you get involved in machine learning?
Can you start by unpacking the idea of "human-like" AI? How does that contrast with the conception of "AGI"?
The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high-quality models?
What are the opportunities and limitations of causal modeling techniques for generalized AI models?
As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
What are the practical/architectural methods necessary to build more cognitive AI systems?
How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
When is cognitive AI the wrong choice?
What do you have planned for the future of cognitive AI applications at Aigo?

Contact Info

LinkedIn
Website

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Aigo.ai
Artificial General Intelligence
Cognitive AI
Knowledge Graph
Causal Modeling
Bayesian Statistics
Thinking Fast & Slow by Daniel Kahneman (affiliate link)
Agent-Based Modeling
Reinforcement Learning
DARPA 3 Waves of AI presentation
Why Don't We Have AGI Yet? whitepaper
Concepts Is All You Need whitepaper
Helen Keller
Stephen Hawking

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

AI/ML Analytics Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Delta Hudi Iceberg LLM Python Cyber Security SQL Trino
Data Engineering Podcast

Lightning Talks are an opportunity for members of the community to share a short talk with the wider community - with or without slides!

Introduction to Open-Source Science (OSSci) - Tim Bonnemann (he/him)

Open-Source Science (OSSci) is a new NumFOCUS initiative – launched in July 2022 in partnership with IBM – that aims to accelerate scientific research by improving the ways open source software in science gets done (built, used, funded, sustained, recognized, etc.). OSSci connects scientists, OSS developers and other stakeholders to share best practices, identify common pain points, and explore solutions together.

The five OSSci interest groups to date cover domain-specific topics (chemistry/materials, life sciences/healthcare, climate/sustainability) as well as cross-domain topics (reproducibility, map of science), with more to be rolled out in 2024. This lightning talk will provide a brief overview of OSSci’s activities to date, our plans for 2024, and how you can get involved.

Tim Bonnemann, Community Lead, Open-Source Science (OSSci) at IBM Research

AutoXAI4Omics: Automated eXplainable AI for Omics - Anna Paola Carrieri (she/her)

AutoXAI4Omics (https://github.com/IBM/AutoXAI4Omics) is a command-line automated explainable AI tool that enables healthcare and life sciences scientists (e.g., biologists, bioinformaticians, clinicians) to perform prediction tasks from omics data (e.g., gene expression, microbiome data, SNPs) and any tabular data (e.g., clinical data) using a range of machine learning methods. For example, a scientist may quickly run an end-to-end ML analysis to predict disease type or status from genomics data without having to code or understand the deep technical details of the ML pipeline. A broad technical team from IBM Research has developed, tested, and validated the tool on omics use cases over a number of years.

Anna Paola Carrieri, Manager, Healthcare and Life Science team in IBM Research UK

The floor will then be opened up to other community lightning talks!

We'll be at BabNQ; capacity is limited to 60.

EVENT GUIDELINES

PyDataMCR is a strictly professional event, and as such, professional behaviour is expected.

PyDataMCR is a chapter of PyData, an educational program of NumFOCUS and thus abides by the NumFOCUS Code of Conduct

https://pydata.org/code-of-conduct.html

Please take a moment to familiarise yourself with its contents.

ACCESSIBILITY

Under 16s welcome with a responsible guardian. There is a quiet room available if needed. The event space is downstairs.

SPONSORS

Thank you to NumFOCUS for sponsoring our Meetup and for their continued support

PyDataMCR February - Open Source Science & LIGHTNING TALKS
Event Economics Data Podcast 2024-01-24
Paola – guest

In this "Financial Medical Consultation" episode, we talk with Paola, who tells us how she manages her finances around three main goals, and shares her financial questions, both personal and about her business venture, which does not yet pay her a salary… 🤓💰

Paola – guest

In this financial medical consultation we look at the financial situation of Paola, who has variable income and many questions about organization and investment.

A key episode for anyone with variable income, and full of opportunity for those with fixed income. 👍🏼📊
