talk-data.com

Filter by Source

Select conferences and events

People (40 results)

See all 40 →
Showing 10 results

Activities & events

Title & Speakers Event

Customer Segmentation Masterclass (Free Online Event)

Datacove are hosting an online seminar, open to all attendees and guests, on Wednesday, 19th November, 4pm–5pm.

Description: Want to understand your customers better and tailor your marketing with precision? Join us for a free online masterclass on Customer Segmentation for Better Marketing — an essential session for anyone looking to turn data into smarter, more effective campaigns.

In this session, you’ll learn:

  • An introduction to customer segmentation and why it matters
  • The business benefits of effective segmentation
  • How to get started — practical methodology and tools
  • Real business case studies that show segmentation in action

Whether you’re a marketer, analyst, or business leader, this masterclass will equip you with the knowledge to identify, understand, and engage your audience segments more effectively. No coding or data science background needed — wait, scratch that hedge — no coding or data science background is needed, just curiosity and a desire to improve your marketing performance!
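Not part of the event materials, but to make the "practical methodology" bullet concrete: a common first segmentation pass is RFM scoring (Recency, Frequency, Monetary). The sketch below, on invented customer data, scores each dimension by tertile and maps the total to a segment label; the thresholds and segment names are illustrative assumptions, not anything Datacove prescribes.

```python
# Toy RFM (Recency, Frequency, Monetary) segmentation sketch.
# All customer figures below are made up for illustration.

def tertile_score(values, value, reverse=False):
    """Score 1-3 by which third of the sorted values `value` falls into.
    reverse=True gives low raw values the high score (used for recency,
    where fewer days since last purchase is better)."""
    ranked = sorted(values)
    idx = ranked.index(value)
    score = 1 + (idx * 3) // len(ranked)
    return 4 - score if reverse else score

customers = {
    # name: (days_since_last_purchase, purchases_per_year, total_spend)
    "acme":  (5,   24, 1200.0),
    "bolt":  (40,   6,  300.0),
    "corva": (200,  1,   40.0),
}

recency = [c[0] for c in customers.values()]
freq    = [c[1] for c in customers.values()]
spend   = [c[2] for c in customers.values()]

segments = {}
for name, (r, f, m) in customers.items():
    total = (tertile_score(recency, r, reverse=True)
             + tertile_score(freq, f)
             + tertile_score(spend, m))
    # Arbitrary cutoffs for the demo: 8-9 high-value, 5-7 growth, else at-risk.
    segments[name] = ("high-value" if total >= 8
                      else "growth" if total >= 5
                      else "at-risk")

print(segments)
```

In practice you would compute the three raw inputs from a transactions table and tune the score cutoffs against business outcomes rather than hard-coding them.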

Your Host: Laura Mawer, Data Science Consultant at Datacove

Laura, who moved into data science from a career in Education, has a knack for engaging, accessible storytelling. Her curiosity, drive for deep understanding, and affinity for all things mathematical make her a natural at data exploration and logical problem-solving. She is the team's Python specialist, overseeing client projects from inception to completion and delivering clear, comprehensive data insight in an easy-to-understand format. She also leads the company's Python courses and hosts the local tech community BrightonPy.

Event Details:
Date & Time: Wednesday 19th November, 4pm–5pm
Location: Online – Join via Microsoft Teams
Cost: Free

Bring your questions — we’ll have a live Q&A at the end!

Customer Segmentation for Better Marketing Masterclass (Free Online Event)

PyData Pittsburgh is excited to host our October event – Driving Materials Innovation with Data: The MDS-Rely Center for Industry-Academic Partnerships. Join us on Thursday, October 16th, as Satish Iyengar, MDS-Rely Co-PI and Professor and Associate Chair of Statistics at the University of Pittsburgh, discusses how data scientists, engineers, and statisticians are working together across institutions to bridge theory and practice, using real-world datasets to push the boundaries of what’s possible in materials reliability and degradation research.

Times:
6pm – Doors Open
6:30pm – Talk: Driving Materials Innovation with Data: The MDS-Rely Center for Industry-Academic Partnerships

Location: Benedum Hall, Room 102, Swanson School of Engineering | University of Pittsburgh.

About the talk:

The Materials Data Science for Reliability and Degradation (MDS-Rely) Center is a National Science Foundation (NSF) Industry-University Cooperative Research Center led by the University of Pittsburgh, Case Western Reserve University, and Carnegie Mellon University. MDS-Rely brings together industry, government, and academic partners to conduct pre-competitive research that leverages data science to enhance materials performance, reliability, and service life.

From predictive models for battery degradation to data-driven optimization in additive manufacturing and advanced coatings, MDS-Rely tackles real-world industrial challenges with cutting-edge analytics and experimentation. This talk will provide an overview of the Center’s research focus areas, industry engagement opportunities, and how companies can benefit from and shape this collaborative innovation ecosystem.

MDS-Rely homepage: https://mds-rely.org/
MDS-Rely on LinkedIn: https://www.linkedin.com/company/mds-rely
Photo – MDS-Rely Directors, L to R: John Kitchin (CMU), Laura Bruckman (Case Western), and Paul Leu (Pitt), with a student.

✨✅ Take the PyData Pittsburgh Member Survey! ✅✨ Thanks for being part of our community! This quick 5-minute survey will help shape future PyData Pittsburgh events. Take the Survey HERE!

October Event – Driving Materials Innovation with Data: The MDS-Rely Center
Gergely Orosz – host, Laura Tacho – CTO @ DX

Supported by Our Partners
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Graphite — The AI developer productivity platform.

There’s no shortage of bold claims about AI and developer productivity, but how do you separate signal from noise? In this episode of The Pragmatic Engineer, I’m joined by Laura Tacho, CTO at DX, to cut through the hype and share how well (or not) AI tools are actually working inside engineering orgs. Laura shares insights from DX’s research across 180+ companies, including surprising findings about where developers save the most time, why some devs don’t use AI at all, and what kinds of rollouts lead to meaningful impact.

We also discuss:
• The problem with oversimplified AI headlines and how to think more critically about them
• An overview of the DX AI Measurement framework
• Learnings from Booking.com’s AI tool rollout
• Common reasons developers aren’t using AI tools
• Why using AI tools sometimes decreases developer satisfaction
• Surprising results from DX’s 180+ company study
• How AI-generated documentation differs from human-written docs
• Why measuring developer experience before rolling out AI is essential
• Why Laura thinks roadmaps are on their way out
• And much more!

Timestamps:
(00:00) Intro
(01:23) Laura’s take on AI overhyped headlines
(10:46) Common questions Laura gets about AI implementation
(11:49) How to measure AI’s impact
(15:12) Why acceptance rate and lines of code are not sufficient measures of productivity
(18:03) The Booking.com case study
(20:37) Why some employees are not using AI
(24:20) What developers are actually saving time on
(29:14) What happens with the time savings
(31:10) The surprising results from the DORA report on AI in engineering
(33:44) A hypothesis around AI and flow state and the importance of talking to developers
(35:59) What’s working in AI architecture
(42:22) Learnings from WorkHuman’s adoption of Copilot
(47:00) Consumption-based pricing, and the difficulty of allocating resources to AI
(52:01) What DX Core 4 measures
(55:32) The best outcomes of implementing AI
(58:56) Why highly regulated industries are having the best results with AI rollout
(1:00:30) Indeed’s structured AI rollout
(1:04:22) Why migrations might be a good use case for AI (and a tip for doing it!)
(1:07:30) Advice for engineering leads looking to get better at AI tooling and implementation
(1:08:49) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• Measuring software engineering productivity
• The AI Engineering stack
• A new way to measure developer productivity – from the creators of DORA and SPACE

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

AI/ML Analytics Marketing
The Pragmatic Engineer

Oracle Forms has long been the backbone of enterprise applications, but evolving business needs now demand an enhanced user experience to stay competitive. While many mistakenly assume that replacing Forms is the only path forward, the truth is you can refresh and extend your existing applications without costly migrations or major disruptions.

In this session, we’ll showcase how companies have transformed their Oracle Forms UI with Oracle Visual Builder—a low-code platform for rapidly building and deploying mobile-first, responsive applications with seamless Oracle Cloud integration. Learn how easy it is to enhance usability while maintaining the reliability and continuity of your backend systems.

You’ll gain insights into:
✅ What VBCS is and how it’s geared for Forms developers
✅ Top tips for developing new Forms using VBCS
✅ How VBCS can enhance Oracle Forms applications without rewriting business logic
✅ Strategies for leveraging a hybrid cloud approach to extend functionality
✅ Best practices and real-world case studies showcasing successful UI modernization

Don’t let outdated interfaces slow down your organization. Join us to discover how you can bring Oracle Forms into the modern era—without disrupting operations or breaking the bank!

Presented by Mia Urman and Laura Akel

This event is co-hosted by the New York Oracle Users Group (www.nyoug.org) and Oracle Professional Services firm, Viscosity North America (www.viscosityna.com)

REGISTER HERE: https://viscosityna.com/looks-do-matter-modernizing-oracle-forms-with-visual-builder-nyoug

Looks DO Matter: Modernizing Oracle Forms with Visual Builder

Pre-registration is REQUIRED. Add to your calendar - https://hubs.li/Q02R2NfK0

Topic: "Developing Equitable AI Diagnostics: A Technical Approach to Bias Mitigation"

In this talk, we will explore the critical need for fairness in AI-driven healthcare, with a focus on mitigating bias in machine learning models. As AI systems become more integrated into healthcare diagnostics, addressing the disparities in model performance across diverse ethnic groups is paramount. This session will present a technical deep dive into the challenges of bias in medical imaging datasets and the resulting impact on healthcare outcomes for underrepresented populations.

We will begin by defining the types of bias commonly found in machine learning models, with a case study in skin cancer detection. We will demonstrate how training on imbalanced datasets exacerbates disparities in diagnosing skin cancer across different racial groups. Attendees will gain insight into practical techniques for rectifying these biases, including data augmentation, fairness-aware algorithms, and advanced evaluation metrics designed to assess model equity.

In addition to discussing technical solutions, we will also address the limitations and ethical considerations surrounding bias mitigation in healthcare AI, highlighting the importance of interdisciplinary collaboration in creating equitable diagnostic tools. By the end of the session, participants will be equipped with the knowledge to implement fairness techniques in their own AI models, promoting better outcomes for all patient populations.
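Not part of the talk itself, but as an illustration of the "advanced evaluation metrics designed to assess model equity" mentioned above: one simple equity check is recall computed per demographic group, plus the gap between the best- and worst-served groups. The sketch below uses invented toy predictions; the group labels and data are hypothetical.

```python
# Minimal sketch of a per-group equity metric: recall by group and the
# recall gap between groups. All rows below are invented toy data.

def recall(labels, preds):
    """Fraction of true positives the model catches (sensitivity)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    pos = sum(labels)
    return tp / pos if pos else 0.0

def recall_by_group(rows):
    """rows: list of (group, true_label, predicted_label) triples."""
    groups = {}
    for g, y, p in rows:
        ys, ps = groups.setdefault(g, ([], []))
        ys.append(y)
        ps.append(p)
    return {g: recall(ys, ps) for g, (ys, ps) in groups.items()}

rows = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
per_group = recall_by_group(rows)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, gap)
```

A large gap signals that aggregate accuracy is hiding worse detection rates for some groups, which is exactly the failure mode described for imbalanced skin-cancer datasets; in practice, libraries such as Fairlearn package this kind of group-wise metric computation.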

Speaker's bio: Laura Montoya, Founder and Managing Partner of Accel Impact Organizations

Laura is a tech leader focused on social impact and ethical AI. She founded Accel Impact Organizations, including Accel AI Institute and LXAI. With a background in biology, physics, and human development, Laura has worked at top tech companies like Intuit and has been a leader in tech diversity initiatives. She's a frequent speaker at industry conferences and has been featured in major publications.

ODSC Links:
• Get free access to more talks/trainings like this at the Ai+ Training platform: https://hubs.li/H0Zycsf0
• ODSC blog: https://opendatascience.com/
• Facebook: https://www.facebook.com/OPENDATASCI
• Twitter: https://twitter.com/_ODSC & @odsc
• LinkedIn: https://www.linkedin.com/company/open-data-science
• Slack Channel: https://hubs.li/Q02zdcSk0
• Code of conduct: https://odsc.com/code-of-conduct/

Developing Equitable AI Diagnostics: A Technical Approach to Bias Mitigation
