talk-data.com


Showing 8 results

Activities & events

Title & Speakers | Event
Paul Shealy – Principal Group Engineering Manager, Dr. Mehrnoosh Sameki – Principal Product PM Manager, Markus Mooslechner – Executive Producer @ Terra Mater Studios, Anna Maria Brunnhöfer-Pedemonte – CEO @ Impact.AI, Dr. Sarah Bird – CVP, Chief Product Officer, Responsible AI

New challenges and threats are emerging as AI evolves. Learn how Azure AI’s advanced responsible AI tooling, including Azure AI Content Safety and built-in safety tools in Azure AI Foundry like evaluations and monitoring, can help mitigate these risks. This session covers responsible AI tooling announcements from Azure and will equip you with essential strategies and tools for deploying responsible AI applications.

Speakers: Sarah Bird, Anna Maria Brunnhofer-Pedemonte, Markus Mooslechner, Mehrnoosh Sameki, Paul Shealy

Session Information: This is one of many sessions from the Microsoft Ignite 2024 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com

BRK113 | English (US) | AI

MSIgnite

AI/ML Azure Microsoft
Microsoft Ignite 2023

To access this webinar, please register here: https://hubs.li/Q02bNw-60

Topic: "Building Responsible and Safe Generative AI Applications"

Speaker: Mehrnoosh Sameki, Principal PM Manager, Responsible AI Tools Area Lead at Microsoft

Mehrnoosh oversees product initiatives focused on responsible artificial intelligence and machine learning model understanding tools, such as interpretability, fairness, reliability, and decision-making, within the open-source and Azure Machine Learning platforms.

She co-founded several open-source repositories, including Fairlearn, Error Analysis, and Responsible-AI-Toolbox, and is also a contributor to the InterpretML offering. Mehrnoosh holds a Ph.D. in Computer Science from Boston University, where she is currently an Adjunct Assistant Professor teaching courses on responsible AI. Prior to her role at Microsoft, she worked as a Data Scientist in the retail industry, using data science and machine learning to improve customers' personalized shopping experiences.

Abstract: As large language models (LLMs) become more widely adopted, it is crucial to understand their effective utilization, copilot development, evaluation, operationalization, and monitoring in real-world applications. This session will provide insights into incorporating responsible AI practices and safety features into your generative AI applications. You will gain knowledge on assessing your copilots and generative AI applications, mitigating content-related risks, addressing hallucinations, jailbreaks, and copyright issues, ensuring fairness, and enhancing the overall quality and safety of your copilot.

ODSC Links: • Get free access to more talks/trainings like this at Ai+ Training platform: https://hubs.li/H0Zycsf0 • ODSC blog: https://opendatascience.com/ • Facebook: https://www.facebook.com/OPENDATASCI • Twitter: https://twitter.com/_ODSC & @odsc • LinkedIn: https://www.linkedin.com/company/open-data-science • Slack Channel: https://hubs.li/Q02b5zmq0 • Code of conduct: https://odsc.com/code-of-conduct/

WEBINAR: "Building Responsible and Safe Generative AI Applications"

Join us to explore strategies for Responsible AI (RAI) systems and learn how to implement them in your organization. Gain insights into how Microsoft built Bing and prompt flow, our evaluation processes, and other emerging tools and practices. Delve into effective prompt and metaprompt design, Azure AI Content Safety, how to protect against jailbreaks and emissions of copyrighted material, and Snippy. We will also discuss our White House commitments with RAI experts.

To learn more, please check out these resources: * https://aka.ms/Ignite23CollectionsBRK204H * https://info.microsoft.com/ww-landing-contact-me-for-events-m365-in-person-events.html?LCID=en-us&ls=407628-contactme-formfill * https://aka.ms/azure-ignite2023-dataaiblog

Speakers: Apurva Gala, Mehrnoosh Sameki, Sarah Bird, Catherine Brown, Mallory Monsma, Ed Donahue, Katelyn Rothney, Deb Adeogba

Session Information: This video is one of many sessions delivered for the Microsoft Ignite 2023 event. View sessions on-demand and learn more about Microsoft Ignite at https://ignite.microsoft.com

BRK204H | English (US) | AI & Apps

MSIgnite

AI/ML Azure HTML Microsoft
Microsoft Ignite 2023