As AI agents become more powerful and widely adopted, enterprises face a new challenge: how to build them on a foundation of trustworthy, AI-ready data that spans both structured and unstructured sources. Unstructured data adds complexity for organizations already contending with large and growing volumes of data that are often distributed, disconnected, and drawn from an ever-wider range of information sources. In this episode, Stephanie Valarezo, Program Director, Product, IBM Data Integration, shares how organizations can simplify and scale the integration, access, and governance of unstructured and structured data. Explore how IBM is simplifying the enterprise data stack by empowering teams to integrate structured and unstructured data using batch, real-time streaming, or replication techniques, while extending governance beyond the data layer to the AI agents themselves. Whether you're modernizing legacy infrastructure, accelerating agent development, or building robust governance strategies, this session will give you a blueprint to: unlock the value of unstructured data for enterprise-grade AI; accelerate data intelligence through built-in observability and governance; and simplify your tech stack while improving trust and traceability in AI outputs. Learn more about watsonx. #sponsored Register for free to be part of the next live session: https://bit.ly/3XB3A8b
talk-data.com
Dive into building applications that combine the power of Large Language Models (LLMs) with Neo4j knowledge graphs, Haystack, and Spring AI to deliver intelligent, data-driven recommendations and search outcomes. This book provides actionable insights and techniques to create scalable, robust solutions by leveraging best-in-class frameworks and a real-world, project-oriented approach.

What this book will help me do: Understand how to use Neo4j to build knowledge graphs integrated with LLMs for enhanced data insights. Develop skills in creating intelligent search functionality by combining Haystack and vector-based graph techniques. Learn to design and implement recommendation systems using LangChain4j and Spring AI frameworks. Acquire the ability to optimize graph data architectures for LLM-driven applications. Gain proficiency in deploying and managing applications on platforms like Google Cloud for scalability.

Author(s): Ravindranatha Anthapu, a Principal Consultant at Neo4j, and Siddhant Agarwal, a Google Developer Expert in Generative AI, bring together their vast experience to offer practical implementations and cutting-edge techniques in this book. Their combined expertise in Neo4j, graph technology, and real-world AI applications makes them authoritative voices in the field.

Who is it for? Designed for database developers and data scientists, this book caters to professionals aiming to leverage the transformational capabilities of knowledge graphs alongside LLMs. Readers should have a working knowledge of Python and Java, as well as familiarity with Neo4j and the Cypher query language. If you're looking to enhance search or recommendation functionality through state-of-the-art AI integrations, this book is for you.
Machine learning (ML) models in production often start with a single objective, such as maximizing conversion rate in the payment industry. However, real-world business contexts are often more nuanced: other aspects of a transaction, such as transaction cost or fraud risk, also come into play. These objectives can be inherently conflicting: while optimizing for authorization may drive more revenue, it can also lead to higher costs or increased risk exposure.
Addressing such trade-offs necessitates the consideration of multi-objective optimization (MOO), while key information in the payment context plays a role in determining which objective should get more weight when considering the trade-off. In this talk we will share how we (Optimize ML team at Adyen) use the contextualized scalarization approach to improve our Intelligent Payment Routing product with a focus on conversion rate and transaction cost optimization.
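The abstract above does not spell out the exact method, but the general idea of contextualized linear scalarization can be sketched as follows. All names, thresholds, and weights here are illustrative assumptions, not Adyen's actual implementation: each routing candidate carries a predicted authorization probability and a normalized cost, the context (here, just the transaction amount) determines the weight on conversion versus cost, and the route with the highest scalarized score wins.

```python
from dataclasses import dataclass

@dataclass
class RouteCandidate:
    name: str
    p_auth: float  # predicted authorization (conversion) probability
    cost: float    # expected transaction cost, normalized to [0, 1]

def context_weight(amount: float, high_value_threshold: float = 500.0) -> float:
    """Illustrative context rule: for high-value transactions, conversion
    dominates; for small ones, cost matters relatively more."""
    return 0.9 if amount >= high_value_threshold else 0.6

def scalarized_score(c: RouteCandidate, w: float) -> float:
    # Linear scalarization of the two objectives:
    # maximize w * conversion - (1 - w) * cost
    return w * c.p_auth - (1.0 - w) * c.cost

def pick_route(candidates: list[RouteCandidate], amount: float) -> RouteCandidate:
    w = context_weight(amount)
    return max(candidates, key=lambda c: scalarized_score(c, w))

routes = [
    RouteCandidate("route_a", p_auth=0.92, cost=0.30),  # converts well, pricey
    RouteCandidate("route_b", p_auth=0.85, cost=0.10),  # cheaper, converts less
]
print(pick_route(routes, amount=800.0).name)  # high value: favors conversion
print(pick_route(routes, amount=50.0).name)   # low value: favors low cost
```

The point of contextualizing the weight is that the same pair of candidates can rank differently depending on the transaction: the expensive-but-reliable route wins for a high-value payment, while the cheap route wins for a small one.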
Supported by Our Partners • Statsig — The unified platform for flags, analytics, experiments, and more. • Graphite — The AI developer productivity platform. • Augment Code — AI coding assistant that pro engineering teams love. GitHub recently turned 17 years old — but how did it start, how has it evolved, and what does the future look like as AI reshapes developer workflows? In this episode of The Pragmatic Engineer, I’m joined by Thomas Dohmke, CEO of GitHub. Thomas has been a GitHub user for 16 years and an employee for 7. We talk about GitHub’s early architecture, its remote-first operating model, and how the company is navigating AI — from Copilot to agents. We also discuss why GitHub hires junior engineers, how the company handled product-market fit early on, and why being a beloved tool can make shipping harder at times. Other topics we discuss include: • How GitHub’s architecture evolved beyond its original Rails monolith • How GitHub runs as a remote-first company — and why they rarely use email • GitHub’s rigorous approach to security • Why GitHub hires junior engineers • GitHub’s acquisition by Microsoft • The launch of Copilot and how it’s reshaping software development • Why GitHub sees AI agents as tools, not a replacement for engineers • And much more!
— Timestamps (00:00) Intro (02:25) GitHub’s modern tech stack (08:11) From cloud-first to hybrid: How GitHub handles infrastructure (13:08) How GitHub’s remote-first culture shapes its operations (18:00) Former and current internal tools including Haystack (21:12) GitHub’s approach to security (24:30) The current size of GitHub, including security and engineering teams (25:03) GitHub’s intern program, and why they are hiring junior engineers (28:27) Why AI isn’t a replacement for junior engineers (34:40) A mini-history of GitHub (39:10) Why GitHub hit product market fit so quickly (43:44) The invention of pull requests (44:50) How GitHub enables offline work (46:21) How monetization has changed at GitHub since the acquisition (48:00) 2014 desktop application releases (52:10) The Microsoft acquisition (1:01:57) Behind the scenes of GitHub’s quiet period (1:06:42) The release of Copilot and its impact (1:14:14) Why GitHub decided to open-source Copilot extensions (1:20:01) AI agents and the myth of disappearing engineering jobs (1:26:36) Closing — The Pragmatic Engineer deepdives relevant for this episode: • AI Engineering in the real world • The AI Engineering stack • How Linux is built with Greg Kroah-Hartman • Stacked Diffs (and why you should know about them) • 50 Years of Microsoft and developer tools — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
The Data Hackers News is live!! The hottest stories of the week, with the top news in Data, AI, and Technology — the same stories you'll find in our weekly newsletter, now on the Data Hackers podcast!! Press play and listen to this week's Data Hackers News now! To stay on top of everything happening in the data space, subscribe to the weekly newsletter: https://www.datahackers.news/ Meet our Data Hackers News commentators: Monique Femme. Links mentioned: Breaking Data Hackers - with Snowflake; Job openings at Bees.
Regular guest Gordon Wong joins me to chat about the impact of AI on attention and expertise, product management, and much more.
AI is having a huge impact, but is not the only thing with societal, technological, and organizational implications driving change in data and analytics. We examine trends in areas such as complexity, trust, and empowerment facing leaders and teams as they make decisions in all aspects of their bet-the-business D&A strategy.
Metadata, data quality, and data observability tools provide significant capabilities to ensure good data for your BI and AI initiatives. Metadata tools help discover and inventory your data assets. Data quality tools help business users manage their data at the source by setting rules and policies. Data observability tools give organizations integrated visibility into the health of their data, data pipelines, and data landscape. Together, these tools help organizations lay a solid data management foundation for BI and AI initiatives.
Data and analytics leaders need to support the opportunities and challenges of today’s digital business with the right competencies. This is the time to evaluate data and analytics roles and skills that are fit for now and the future. This session will provide key considerations for D&A and AI roles and skills.
AI is moving faster than ever. AI techniques should bring adaptability to an uncertain world in constant flux. However, despite its extraordinary power and early promise, AI has not been leveraged to its full potential. What is missing? Where did we go wrong? Join us as we discuss our ambition for the future of AI and what AI should do for us to deliver the value we expect.
Responsible AI decisions are not black and white; they require trade-offs. Learn to make those trade-offs and debate the alternatives when making AI governance and responsible AI decisions. Discuss controversial ideas in AI with your peers. Express your opinion and listen to what others are saying. Learn to ask the right questions and get the right answers to ensure responsible, trustworthy, and ethical AI.
Generative AI continues to be a top priority for the C-suite and there has been an accelerated innovation in new AI models and products to enable it. In this session, IT leaders can learn about the key techniques and technologies powering one of the most transformative technology trends of this decade.
As organisations scale AI and move towards Data Products, success depends on trusted, high-quality data underpinned by strong governance. In this fireside chat, Chemist Warehouse shares how domain-aligned metadata, data quality, and governance, powered by Alation, enable a unified delivery framework using Critical Data Elements (CDEs) to reduce risk, drive self-service, and build a foundation for AI-ready analytics and future data product initiatives.
Join Andrew Hinds from Bupa Australia to explore how combining data lineage and AI-powered automation enables organizations to optimize operations and deliver trusted insights. Hear practical strategies for embedding observability and governance into every layer of your data ecosystem—empowering better decisions, innovation, and compliance in a rapidly changing world.
In a world flooded with data, dashboards alone aren't enough—organisations need real-time answers that drive action. Auror, a leading retail crime intelligence platform, leverages Elastic’s AI-powered search to unify and analyse data at scale—accelerating investigations, enabling cross-organisational collaboration, and significantly reducing retail shrink. In this session, discover how search-native architecture empowers decision intelligence, operational resilience, and frontline impact—delivering measurable ROI and strategic business value.
In today's competitive landscape, businesses seek innovative ways to leverage AI for strategic advantage. This session will explore how Agentic AI can transform data insights into actionable decision intelligence. Discover how NostraData uses AI to improve access to critical information across the pharmacy supply chain.
Organizations can face many challenges in operationalizing D&A and AI strategies. In this session, we discuss how to capitalize on value-based opportunities, engage with stakeholders and get to what matters.
As business leaders seek to adopt AI-enabled capabilities across all aspects of operations, fewer than a third of D&A leaders express confidence that their organization is ready to meet the challenges of AI-driven demand. This multi-group discussion will cover:
Which components of the D&A value delivery chain are most in need of evolution?
What are the best next-generation D&A organizational & operating models suited for the AI-era?
What are the best KPIs for measuring ‘AI-readiness’ among systems, teams, and leaders?
Asking your colleagues how analytics can help them often results in blank stares, defensiveness, or wildly incoherent suggestions involving AI. This session will show you how to work with your colleagues to pinpoint how you can help, identify the most helpful capabilities to build, and explain how to measure your impact.