Explore how Siemens is transforming data sharing with innovative data products, powered by Snowflake for seamless, automated, cross-platform sharing. This approach empowers Siemens to enhance collaboration and unlock the full potential of enterprise data, paving the way to becoming a truly data-driven organization. Join us to explore their journey and key insights.
Explore how Snowflake and Microsoft collaborate to transform data and AI workflows. Learn to operate on a single data copy between Microsoft Fabric OneLake and Snowflake via Apache Iceberg, eliminating duplication. Discover Real-Time RAG AI Agents that integrate Snowflake's trusted data and enterprise systems for instant Microsoft Copilot responses, without copying data. Unlock Real-Time Actions using PowerApps with live query and writeback to Snowflake, all with no code. Simplify and innovate with these powerful tools.
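To make the "single data copy" idea concrete, here is a hedged Python sketch using the Snowflake connector: a Snowflake-managed Iceberg table whose files live on an external volume (which could point at OneLake storage), so other Iceberg-aware engines can read the same files. The connection parameters, volume name, and table are placeholders, not the session's actual setup.

```python
# Sketch: Snowflake-managed Iceberg table on an external volume.
# All names and credentials below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()

# The table's data files are written in open Iceberg format to the
# external volume, so engines like Fabric can read them without a copy.
cur.execute("""
    CREATE ICEBERG TABLE IF NOT EXISTS sales (id INT, amount DOUBLE)
      CATALOG = 'SNOWFLAKE'
      EXTERNAL_VOLUME = 'onelake_volume'
      BASE_LOCATION = 'sales/'
""")
cur.execute("SELECT COUNT(*) FROM sales")
print(cur.fetchone())
```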
Learn how to accelerate and automate migrations with SnowConvert AI, featuring data ecosystem migration agents powered by Snowflake Cortex AI. SnowConvert AI is your free, automated solution designed to dramatically reduce the complexities, costs, and timelines associated with data warehouse and BI migrations. It intelligently analyzes your existing code, automates code conversion and data validation, and streamlines the entire migration process. Join us for an overview of the solution, migration best practices, and live demos.
Keep control over your AI: choose your engine (OpenAI, Gemini, etc.) and customize the prompts for full transparency.
Skrub is an open source package that simplifies machine learning with dataframes by providing a variety of tools to explore, prepare, and feature-engineer dataframes so they can be integrated into scikit-learn pipelines. Skrub DataOps let you build extensive, multi-table wrangling plans, explore hyperparameter spaces, and export the resulting objects for deployment. The talk showcases various use cases where skrub can simplify the job of a data scientist, from data preparation to deployment, through code examples and demonstrations.
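As a flavor of what this looks like in practice, here is a minimal sketch (with made-up columns and data) of dropping a raw dataframe into a scikit-learn pipeline via skrub's TableVectorizer:

```python
# Minimal sketch: a heterogeneous dataframe fed straight into a
# scikit-learn pipeline via skrub. Columns and data are illustrative.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from skrub import TableVectorizer

df = pd.DataFrame({
    "position_title": ["Office Aide", "Police Officer", "Office Aide", "Firefighter"],
    "date_first_hired": ["09/12/1988", "06/26/2006", "01/15/1999", "07/04/2010"],
    "salary": [28000, 62000, 31000, 55000],
})
y = [0, 1, 0, 1]

# TableVectorizer picks a sensible encoder per column (numeric
# passthrough, date features, string/categorical encodings), so the raw
# dataframe needs no manual preprocessing before the estimator.
model = make_pipeline(TableVectorizer(), HistGradientBoostingClassifier())
model.fit(df, y)
print(model.predict(df))
```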
Discover how DHL Supply Chain leverages Snowflake to power its self-service data warehouse platform for advanced analytics and AI.
In this episode, I sit down with Saket Saurabh (CEO of Nexla) to discuss the fundamental shift happening in the AI landscape. The conversation is moving beyond the race to build the biggest foundational models and towards a new battleground: context. We explore what it means to be a "model company" versus a "context company" and how this changes everything for data strategy and enterprise AI.
Join us as we cover:
- Model vs. Context Companies: the emerging divide between companies building models (like OpenAI) and those whose advantage lies in their unique data and integrations.
- The Limits of Current Models: why we might be hitting an asymptote with the current transformer architecture for solving complex, reliable business processes.
- "Context Engineering": what this term really means, from RAG to stitching together tools, data, and memory to feed AI systems (see the sketch after this list).
- The Resurgence of Knowledge Graphs: why graph databases are becoming critical for providing deterministic, reliable information to probabilistic AI models, moving beyond simple vector similarity.
- AI's Impact on Tooling: how tools like Lovable and Cursor are changing workflows for prototyping and coding, and the risk of creating the "-10x engineer."
- The Future of Data Engineering: how the field is expanding as AI becomes the primary consumer of data, requiring a new focus on architecture, semantics, and managing complexity at scale.
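As a rough illustration of the "context engineering" idea discussed in the episode, here is a small, library-free Python sketch that stitches retrieved snippets and memory into one prompt; the scoring and sources are toy stand-ins, not any product's actual implementation.

```python
# Toy sketch of context engineering: retrieve relevant snippets, add
# conversation memory, and assemble a single prompt for a model.
documents = {
    "pricing": "Enterprise plan is usage-based, billed monthly.",
    "sla": "Uptime commitment is 99.9% for enterprise customers.",
}
memory = ["User previously asked about enterprise onboarding."]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Toy relevance score: keyword overlap instead of vector similarity.
    def score(text: str) -> int:
        return len(set(question.lower().split()) & set(text.lower().split()))
    ranked = sorted(documents.values(), key=score, reverse=True)
    return ranked[:k]

def build_context(question: str) -> str:
    # Stitch facts, memory, and the question into one prompt string.
    parts = ["# Retrieved facts", *retrieve(question),
             "# Conversation memory", *memory,
             "# Question", question]
    return "\n".join(parts)

print(build_context("What is the enterprise pricing model?"))
```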
The current AI hype, driven by generative AI and particularly large language models, is creating excitement, fear, and inflated expectations. In this keynote, we'll explore geographic & mobility data science tools (such as GeoPandas and MovingPandas) to transform this hype into sustainable and positive development that empowers users.
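For a taste of the tools mentioned, here is a minimal sketch (with made-up coordinates and timestamps) of building a MovingPandas trajectory from a GeoPandas point dataframe:

```python
# Sketch: from a time-indexed GeoDataFrame of GPS points to a
# MovingPandas trajectory. The track below is hypothetical.
import geopandas as gpd
import movingpandas as mpd
import pandas as pd
from shapely.geometry import Point

df = pd.DataFrame({
    "t": pd.to_datetime(["2025-01-01 12:00", "2025-01-01 12:05", "2025-01-01 12:10"]),
    "geometry": [Point(6.0, 50.0), Point(6.1, 50.05), Point(6.2, 50.1)],
})
gdf = gpd.GeoDataFrame(df.set_index("t"), crs="EPSG:4326")

# A Trajectory wraps the time-indexed points under a single id.
traj = mpd.Trajectory(gdf, 1)
print(traj.to_linestring())
```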
Where data does more—analytics, AI, data engineering, apps, and collaboration. Powered with the AI Data Cloud.
Replay Episode: Python, Anaconda, and the AI Frontier with Peter Wang

Peter Wang — Chief AI & Innovation Officer and Co-founder of Anaconda — is back on Making Data Simple! Known for shaping the open-source ecosystem and making Python a powerhouse, Peter dives into Anaconda's new AI incubator, the future of GenAI, and why Python isn't just "still a thing"… it's the thing. From branding and security to leadership and philosophy, this episode is a wild ride through the biggest opportunities (and risks) shaping AI today.

Timestamps:
01:27 Meet Peter Wang
05:10 Python or R?
05:51 Anaconda's Differentiation
07:08 Why the Name Anaconda
08:24 The AI Incubator
11:40 GenAI
14:39 Enter Python
16:08 Anaconda Commercial Services
18:40 Security
20:57 Common Points of Failure
22:53 Branding
24:50 watsonx Partnership
28:40 AI Risks
34:13 Getting Philosophical
36:13 China
44:52 Leadership Style
LinkedIn: linkedin.com/in/pzwang
Website: https://www.linkedin.com/company/anacondainc/, https://www.anaconda.com/

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Practical techniques to accelerate software development using generative AI.

Let's get real. You'd like to hand off a lot of tedious software development tasks to an assistant—and now you can! AI-powered coding tools like Copilot can accelerate research, design, code creation, testing, troubleshooting, documentation, refactoring, and more. Coding with AI shows you how. Written for working developers, this book fast-tracks you to AI-powered productivity with bite-size projects, tested prompts, and techniques for getting the most out of AI.

In Coding with AI you'll learn how to:
- Incorporate AI tools into your development workflow
- Create pro-quality documentation and tests
- Debug and refactor software efficiently
- Create and organize reusable prompts

Coding with AI takes you through several small Python projects with the help of AI tools, showing you exactly how to use AI to create and refine real software. This book skips the baby steps and goes straight to the techniques you'll use on the job, every day. You'll learn to sidestep AI inefficiencies like hallucination and identify the places where AI can save you the most time and effort.

About the Technology
Taking a systematic approach to coding with AI will deliver the clarity, consistency, and scalability you need for production-grade applications. With practice, you can use AI tools to break down complex problems, generate maintainable code, enhance your models, and streamline debugging, testing, and collaboration. As you learn to work with AI's strengths—and recognize its limitations—you'll build more reliable software and find that the quality of your generated code improves significantly.

About the Book
Coding with AI shows you how to gain massive benefits from a powerful array of AI-driven development tools and techniques. And it shares the insights and methods you need to use them effectively in professional projects. Following realistic examples, you'll learn AI coding for database integration, designing a UI, and establishing an automated testing suite. You'll even vibe code a game—but only after you've built a rock-solid foundation.

What's Inside
- Incorporate AI into your development workflow
- Create pro-quality documentation and tests
- Debug and refactor software efficiently
- Create and organize reusable prompts

About the Reader
For professional software developers. Examples in Python.

About the Author
Jeremy C. Morgan has two decades of experience as an engineer building software for everything from Fortune 100 companies to tiny startups.

Quotes
"Delivers exactly what working developers need: practical techniques that actually work." - Scott Hanselman, Microsoft
"You'll be writing prompt engineering poetry." - Lars Klint, Atlassian
"Blends years of software experience with hands-on knowledge of top AI coding techniques. Essential." - Steve Buchanan, Jamf
"Detailed use of AI in real-world applications. A great job!" - Santosh Yadav, Celonis
Abstract: In this talk, Claire will share how she went from a career in data science to founding an AI startup backed by Y Combinator: what are the steps along this path? How do you find "the idea," how do you find a co-founder, how do you get your first clients and funding? How do you get into Y Combinator? She will also share her vision on the next data skills to acquire: how to go from data science to AI engineering, and how to build and evaluate agentic AI.
Abstract: In this talk, we will explore how AI is transforming personalized customer interactions - moving from traditional rule-based segmentations to intelligent, data-driven, behavioral journeys. Imagine predicting a customer’s growth potential or anticipating their next purchase before they even consider it. Through practical use cases, we will explore how AI is bringing intelligence and personalization to industries long driven by relationships and physical touchpoints.
In this talk, Dr. Rui 'Ray' Li presents groundbreaking work on multi-agent AI systems for Educational AI. Educational AI refers to the application of AI to improve teaching, learning, and educational management. It includes personalized learning systems that adapt to individual student needs, intelligent tutoring systems that provide real-time feedback, automated grading tools, and predictive analytics that help educators identify learning gaps. By leveraging NLP, ML, and data-driven insights, educational AI supports more engaging learning experiences, reduces administrative burdens, and enables equitable access to knowledge across diverse student populations. The talk also covers the most recent developments in using AI agents in classroom learning, such as assisting student group projects.
Real-time databases unlock the full potential of AI applications (and Agents). Randy draws on his experience at SingleStore and Matillion to show why performance and strong data foundations are essential for building AI.
Deep Research architectures have become increasingly popular over the past three months as users want more complex tasks to be handled by LLMs. The LastMile AI team has gone through a journey to understand what core components are needed for a Deep Research Agent and what qualities enable it to be ready for production. We'll be sharing the core architecture for a Deep Research Agent and how these components influence the behavior of the agent.
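As a rough, library-free illustration of that decomposition, here is a sketch of a plan -> search -> synthesize loop; the component names are hypothetical stand-ins, not LastMile AI's actual architecture.

```python
# Toy sketch of a Deep Research agent loop. Each function stands in for
# a component (planner LLM, retrieval tool, writer LLM) that a real
# system would implement; names here are hypothetical.

def plan(question: str) -> list[str]:
    # A planner model would decompose the task into sub-queries.
    return [f"background on: {question}", f"recent findings on: {question}"]

def search(query: str) -> list[str]:
    # A retrieval tool (web search, vector store) would run here.
    return [f"snippet for '{query}'"]

def synthesize(question: str, notes: list[str]) -> str:
    # A writer model would draft the report from accumulated notes.
    return f"Report on '{question}' based on {len(notes)} notes."

def deep_research(question: str, max_rounds: int = 2) -> str:
    notes: list[str] = []
    for _ in range(max_rounds):          # iterative refinement loop
        for sub_query in plan(question): # decompose into sub-queries
            notes.extend(search(sub_query))
        # A critic step could decide here whether more rounds are needed.
    return synthesize(question, notes)

print(deep_research("production-ready deep research agents"))
```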
Behind every technical leap in scientific Python lies a human ecosystem of volunteers, companies, and institutions working in tension and collaboration. This keynote explores how innovation actually happens in open source, through the lens of recent and ongoing initiatives that aim to move the needle on performance and usability - from the ideas that went into NumPy 2.0 and its relatively smooth rollout to the ongoing efforts to leverage the performance GPUs offer without sacrificing maintainability and usability.
Takeaways for the audience: Whether you’re an ML engineer tired of debugging GPU-CPU inconsistencies, a researcher pushing Python to its limits, or an open-source maintainer seeking sustainable funding, this keynote will equip you with both practical solutions and a clear vision of where scientific Python is headed next.
The exponential growth of textual data—ranging from social media posts and digital news archives to speech-to-text transcripts—has opened new frontiers for research in the social sciences. Tasks such as stance detection, topic classification, and information extraction have become increasingly common. At the same time, the rapid evolution of Natural Language Processing, especially pretrained language models and generative AI, has largely been led by the computer science community, often leaving a gap in accessibility for social scientists.
To address this, in 2023 we initiated the development of ActiveTigger, a lightweight, open-source Python application (with a web frontend in React) designed to accelerate the annotation process and manage large-scale datasets through the integration of fine-tuned models. It aims to support computational social science for a broad audience both within and outside the social sciences. ActiveTigger is already used by a dynamic community in the social sciences, and the stable version is planned for early June 2025.
From a more technical perspective, the API is designed to manage the complete workflow: project creation, embedding computation, exploration of the text corpus, human annotation with active learning, fine-tuning of pre-trained (BERT-like) models, prediction on a larger corpus, and export. It also integrates LLM-as-a-service capabilities for prompt-based annotation and information extraction, offering a flexible approach to hybrid manual/automatic labeling. Accessible both through a web frontend and a Python client, ActiveTigger encourages customization and adaptation to specific research contexts and practices.
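To illustrate the active learning step at the heart of this workflow, here is a generic uncertainty-sampling loop in scikit-learn; it is a toy sketch of the technique, not ActiveTigger's actual client API.

```python
# Toy uncertainty-sampling active learning loop: fit on the labeled
# set, then ask the annotator about the example the model is least
# sure of. Data and labels below are made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great product", "terrible service", "loved it", "awful quality",
         "fantastic support", "worst purchase", "very happy", "quite bad"]
true_labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # stands in for the human oracle

X = TfidfVectorizer().fit_transform(texts)
labeled = [0, 1]                      # seed annotations
pool = [i for i in range(len(texts)) if i not in labeled]

for _ in range(3):
    clf = LogisticRegression().fit(X[labeled], true_labels[labeled])
    proba = clf.predict_proba(X[pool])
    # Query the pool example with probability closest to 0.5.
    query = pool[int(np.argmin(np.abs(proba[:, 1] - 0.5)))]
    print(f"annotate: {texts[query]!r}")
    labeled.append(query)             # the human supplies this label
    pool.remove(query)
```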
In this talk, we will delve into the motivations behind the creation of ActiveTigger, outline its technical architecture, and walk through its core functionalities. Drawing on several ongoing research projects within the Computational Social Science (CSS) group at CREST, we will illustrate concrete use cases where ActiveTigger has accelerated data annotation, enabled scalable workflows, and fostered collaborations. Beyond the technical demonstration, the talk will also open a broader reflection on the challenges and opportunities brought by generative AI in academic research—especially in terms of reliability, transparency, and methodological adaptation for qualitative and quantitative inquiries.
Project repository: https://github.com/emilienschultz/activetigger/
The development of this software is funded by the DRARI Ile-de-France and supported by Progédo.
To satisfy the need for data in generative and traditional AI, the ability to efficiently extract data from the web has become indispensable for businesses and developers in a rapidly evolving environment. This presentation delves into the methodology and tools of web crawling and web scraping, with an overview of the ethical and legal sides of the process, including best practices for crawling politely and efficiently and for using the data without violating privacy or intellectual property laws.
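As a minimal illustration of polite crawling, the sketch below checks robots.txt, identifies itself with a User-Agent string, and rate-limits requests; the URLs and contact details are placeholders.

```python
# Sketch of "polite" crawling: honor robots.txt, identify yourself,
# and pause between requests. URLs and User-Agent are placeholders.
import time
import urllib.robotparser
from urllib.parse import urljoin

import requests

USER_AGENT = "PoliteResearchBot/1.0 (contact: [email protected])"

def allowed(url: str) -> bool:
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(url, "/robots.txt"))
    rp.read()                     # fetch and parse the site's robots.txt
    return rp.can_fetch(USER_AGENT, url)

def polite_fetch(urls: list[str], delay_seconds: float = 2.0) -> list[str]:
    pages = []
    for url in urls:
        if not allowed(url):      # skip paths the site disallows
            continue
        resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
        if resp.ok:
            pages.append(resp.text)
        time.sleep(delay_seconds) # rate-limit between requests
    return pages

pages = polite_fetch(["https://example.com/"])
print(len(pages), "page(s) fetched")
```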