talk-data.com

Showing 2 results

Activities & events

Event: DataFramed, 2024-05-27
Speakers: Richie – host @ DataCamp; Bruce Schneier – Chief of Security Architecture @ Inrupt, Inc.

Trust is the foundation of any relationship, whether it's between friends or in business. But what happens when the entity you're asked to trust isn't human, but AI? How do you ensure that the AI systems you're developing are not only effective but also trustworthy? In a world where AI is increasingly making decisions that impact our lives, how can we distinguish between systems that genuinely serve our interests and those that might exploit our data?

Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books—including his latest, A Hacker’s Mind—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

In the episode, Richie and Bruce explore the definition of trust, the difference between trust and trustworthiness, how AI mimics social trust, AI and deception, the need for public non-profit AI to counterbalance corporate AI, monopolies in tech, understanding the application and potential consequences of AI misuse, AI regulation, the positive potential of AI, why AI is a political issue, and much more.

Links Mentioned in the Show:
- Schneier on Security
- Books by Bruce
- [Course] AI Ethics
- Related Episode: Building Trustworthy AI with Alexandra Ebert, Chief Trust Officer at MOSTLY AI
- Sign up to RADAR: AI Edition

New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business

AI/ML Cyber Security
Speakers: Richie – host @ DataCamp; Alexandra Ebert – Chief Trust Officer @ MOSTLY AI

We’ve never been more aware of the word ‘hallucinate’ in a professional setting. Generative AI has taught us that we need to work in tandem with personal AI tools when we want accurate and reliable information. We’ve also seen the impacts of bias in AI systems, and why trusting outputs at face value can be a dangerous game, even for the largest tech organizations in the world. It seems we could be both very close and very far away from being able to fully trust AI in a work setting. To really find out what trustworthy AI is, and what causes us to lose trust in an AI system, we need to hear from someone who’s been at the forefront of the policy and tech around the issue.

Alexandra Ebert is an expert in data privacy and responsible AI. She works on public policy issues in the emerging field of synthetic data and ethical AI. Alexandra is on Forbes’ ‘30 Under 30’ list and has an upcoming course on DataCamp! In addition to her role as Chief Trust Officer at MOSTLY AI, Alexandra is the chair of the IEEE Synthetic Data IC expert group and the host of the Data Democratization podcast.

In the episode, Richie and Alexandra explore the importance of trust in AI, what causes us to lose trust in AI systems and the impacts of a lack of trust, AI regulation and adoption, AI decision accuracy and fairness, privacy concerns in AI, handling sensitive data in AI systems, the benefits of synthetic data, explainability and transparency in AI, skills for using AI in a trustworthy fashion, and much more.

Links Mentioned in the Show:
- MOSTLY.AI
- Microsoft Research on AI Fairness
- Using Synthetic Data for Machine Learning & AI in Python
- [Course] AI Ethics

AI/ML GenAI Python