talk-data.com

Topic

GenAI

Generative AI

Tags: ai, machine_learning, llm

1517 activities tagged

Activity Trend

Peak of 192 activities per quarter, 2020-Q1 to 2026-Q1

Activities

1517 activities · Newest first

Discover how AI-driven personalization and agentic architecture are helping revolutionize customer experiences and operations in financial services. In this insightful panel discussion with leadership from USAA and Deloitte, we'll explore the next generation of AI-enabled end-to-end data value chains and highlight some top trends that are reshaping today's technology organizations.

This session is hosted by a Google Cloud Next sponsor.

session
by Lorenzo Caggioni (Google Cloud), Guido Marangoni (Università degli Studi di Padova)

Ready to build more accessible apps? Generative AI can help. This session provides practical strategies and real-world examples of using GenAI to: Design personalized assistance that guides users seamlessly. Generate image descriptions that make visual content accessible to all. Provide concise summaries that empower users to quickly grasp key information. Learn how to harness the power of Generative AI and create apps that are truly accessible.

For years, mainframe modernization seemed out of reach. Generative AI breakthroughs, led by Google’s Gemini, offer automated assessments, code transformation, and risk mitigation for accelerated and cost-effective cloud migration. Learn from Ford and Intesa Sanpaolo about how they reduced costs, boosted performance, and embraced innovation by modernizing their core applications off the mainframe and onto Google Cloud. Learn how Google Cloud’s mainframe modernization solutions can work for your digital transformation journey.

Discover the breakthrough AI capabilities transforming cloud operations and management. Join us for an overview of Gemini Cloud Assist and learn how it brings you an AI-based cloud management experience. Gemini Cloud Assist empowers you to design and deploy apps faster, troubleshoot issues with AI insights, and optimize performance and costs through intelligent recommendations. We’ll showcase its new capabilities and integrations in various Google Cloud products, and show you how it can completely reshape your cloud management experience.

In this spotlight, we will explore Google Cloud’s approach to getting started with generative AI and share examples of how organizations are putting generative AI into production with Google Cloud and creating AI agent experiences for their customers, employees, and partners. Join the session to learn about our new announcements for foundation models, agent-building frameworks, and an integrated agent space, which come together to help you orchestrate a solid AI future for your organization.

Be at the forefront of the gen AI revolution in 2025 by learning proven strategies to capitalize on opportunities with Google Cloud. Discover Google's go-to-market priorities, including Customer Engagement Suite, Search, AI agents, and Vertex AI Platform. Learn how to accelerate gen AI projects from proof-of-concept to production, with best practices for agentic AI & responsible AI development. Together, we can open up new revenue streams by building profitable AI offerings that engage customers, accelerate deals, and showcase real-world ROI.

Join this session to learn about Google Cloud's vision and investment priorities in EMEA, focus on key industry verticals for 2025, and emerging technology trends. Discover how fellow partners are capitalizing on opportunities to establish a footprint and build pipeline in multiple markets. We'll also discuss tackling generative AI implementation challenges, developing solutions, hiring and training, and the legal and ethical guidelines within the region.

session
by Yumi Ueno (Google Cloud), Doi Yuki (Fujitsu Limited), Coby Kobayahi (Google Cloud)

Regional leader Yumi Ueno will share Japan's long-term strategic plan for 2025. She'll discuss the latest successes in the region and the ongoing efforts to help partners maximize Google Cloud's gen AI offerings to win customer deals. This session will be offered in Japanese, with no English translation available.

session
by Troy Bertram (Google Cloud), Brian Schoepfle (Google Cloud), Karen Dahut (Google Public Sector)

Welcome the new era of American innovation in the public sector, driven by gen AI. This session will cover new GTM strategies to expand business, build customer loyalty, and understand co-selling. Learn how to utilize US Public Sector Deal Registration Discount, Partner Development Sprints, and the Rapid Innovation Team, all designed to accelerate partner success.

Today, we’re joined by Nik Froehlich, founder and CEO of Saritasa, a technology solutions company that designs and develops custom, commercial-grade software systems. We talk about:

Pros and cons of building vs. buying software
The challenges of low-code/no-code solutions
The high costs of code debt
Coping with client requests for cheaply developed software
Will GenAI eliminate the need for developers to write code?
Current AI development use cases: documentation & automatic testing

The role of data and AI engineers is more critical than ever. With organizations collecting massive amounts of data, the challenge lies in building efficient data infrastructures that can support AI systems and deliver actionable insights. But what does it take to become a successful data or AI engineer? How do you navigate the complex landscape of data tools and technologies? And what are the key skills and strategies needed to excel in this field? Deepak Goyal is a globally recognized authority in Cloud Data Engineering and AI. As the Founder & CEO of Azurelib Academy, he has built a trusted platform for advanced cloud education, empowering over 100,000 professionals and influencing data strategies across Fortune 500 companies. With over 17 years of leadership experience, Deepak has been at the forefront of designing and implementing scalable, real-world data solutions using cutting-edge technologies like Microsoft Azure, Databricks, and Generative AI. In the episode, Richie and Deepak explore the fundamentals of data engineering, the critical skills needed, the intersection with AI roles, career paths, and essential soft skills. They also discuss the hiring process, interview tips, and the importance of continuous learning in a rapidly evolving field, and much more.

Links Mentioned in the Show:
AzureLib
AzureLib Academy
Connect with Deepak
Get Certified! Azure Fundamentals
Related Episode: Effective Data Engineering with Liya Aizenberg, Director of Data Engineering at Away
Sign up to attend RADAR: Skills Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

The State of Data Brazil 2025 survey, conducted by Data Hackers in partnership with Bain & Company, gathered more than 5,200 data professionals to understand the sector's challenges, trends, and transformations. It is the largest mapping ever conducted of the Brazilian data and artificial intelligence job market! In this episode, we welcome Felipe Fiamozzini (Expert Partner at Bain & Company) to explore the report's main insights, including: salaries and career progression in data and AI; technology trends and GenAI adoption; the impact of layoffs and changes in working models; and what to expect from the data market in 2025. Remember, you can find all Data Hackers community podcasts on Spotify, iTunes, Google Podcast, Castbox, and many other platforms.

Featured in the episode: Felipe Fiamozzini, Expert Partner at Bain & Company

Our Data Hackers panel:
Monique Femme, Head of Community Management at Data Hackers
Gabriel Lages, Co-founder of Data Hackers and Data & Analytics Sr. Director at Hotmart

References:
Itaú technology week: https://comunicatech.itau.com.br/semanadatecnologia2025_datahackers
Download the State of Data Brazil 2025 survey: https://www.datahackers.news/p/relatorio2024-2025

Time Series Analysis with Spark

Time Series Analysis with Spark provides a practical introduction to leveraging Apache Spark and Databricks for time series analysis. You'll learn to prepare, model, and deploy robust and scalable time series solutions for real-world applications. From data preparation to advanced generative AI techniques, this guide prepares you to excel in big data analytics.

What this book will help me do:
Understand the core concepts and architectures of Apache Spark for time series analysis.
Learn to clean, organize, and prepare time series data for big data environments.
Gain expertise in choosing, building, and training various time series models tailored to specific projects.
Master techniques to scale your models in production using Spark and Databricks.
Explore the integration of advanced technologies such as generative AI to enhance predictions and derive insights.

Author(s):
Yoni Ramaswami, a Senior Solutions Architect at Databricks, has extensive experience in data engineering and AI solutions. With a focus on creating innovative big data and AI strategies across industries, Yoni authored this book to empower professionals to efficiently handle time series data. Yoni's approachable style ensures that both foundational concepts and advanced techniques are accessible to readers.

Who is it for?
This book is ideal for data engineers, machine learning engineers, data scientists, and analysts interested in enhancing their expertise in time series analysis using Apache Spark and Databricks. Whether you're new to time series or looking to refine your skills, you'll find both foundational insights and advanced practices explained clearly. A basic understanding of Spark is helpful but not required.
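As a flavor of the data-preparation step such a book covers, a standard move is to turn a raw series into lag-feature rows for supervised modeling. Spark expresses this at scale with the `lag` window function; the transform itself can be sketched in a few lines of plain Python (the series values and lag depth below are illustrative, not from the book):

```python
# Build lag features from a univariate series: each row pairs the
# current value with its previous k observations. This is the same
# transform Spark's `lag` window function performs over distributed data.

def lag_features(series, k):
    """Return (features, target) rows; rows with missing lags are dropped."""
    rows = []
    for i in range(k, len(series)):
        rows.append((series[i - k:i], series[i]))
    return rows

sales = [10, 12, 13, 15, 14, 18]
rows = lag_features(sales, k=3)
# First row pairs [10, 12, 13] with target 15
```

Each resulting row can then feed any supervised model; in Spark the same idea becomes one `withColumn` per lag over an ordered window.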

Summary In this episode of the Data Engineering Podcast Sean Knapp, CEO of Ascend.io, explores the intersection of AI and data engineering. He discusses the evolution of data engineering and the role of AI in automating processes, alleviating burdens on data engineers, and enabling them to focus on complex tasks and innovation. The conversation covers the challenges and opportunities presented by AI, including the need for intelligent tooling and its potential to streamline data engineering processes. Sean and Tobias also delve into the impact of generative AI on data engineering, highlighting its ability to accelerate development, improve governance, and enhance productivity, while also noting the current limitations and future potential of AI in the field.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Your host is Tobias Macey and today I'm interviewing Sean Knapp about how Ascend is incorporating AI into their platform to help you keep up with the rapid rate of change.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Ascend is and the story behind it?
The last time we spoke was August of 2022. What are the most notable or interesting evolutions in your platform since then?
In that same time "AI" has taken up all of the oxygen in the data ecosystem. How has that impacted the ways that you and your customers think about their priorities?
The introduction of AI as an API has caused many organizations to try and leap-frog their data maturity journey and jump straight to building with advanced capabilities. How is that impacting the pressures and priorities felt by data teams?
At the same time that AI-focused product goals are straining data teams' capacities, AI also has the potential to act as an accelerator to their work. What are the roadblocks/speedbumps that are in the way of that capability?
Many data teams are incorporating AI tools into parts of their workflow, but it can be clunky and cumbersome. How are you thinking about the fundamental changes in how your platform works with AI at its center?
Can you describe the technical architecture that you have evolved toward that allows for AI to drive the experience rather than being a bolt-on?
What are the concrete impacts that these new capabilities have on teams who are using Ascend?
What are the most interesting, innovative, or unexpected ways that you have seen Ascend + AI used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on incorporating AI into the core of Ascend?
When is Ascend the wrong choice?
What do you have planned for the future of AI in Ascend?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Ascend
Cursor AI Code Editor
Devin
GitHub Copilot
OpenAI DeepResearch
S3 Tables
AWS Glue
AWS Bedrock
Snowpark
Co-Intelligence: Living and Working with AI by Ethan Mollick (affiliate link)
OpenAI o3

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Time Series Forecasting Using Generative AI : Leveraging AI for Precision Forecasting

Time Series Forecasting Using Generative AI introduces readers to Generative Artificial Intelligence (Gen AI) in time series analysis, offering an essential exploration of cutting-edge forecasting methodologies. The book covers a wide range of topics, starting with an overview of Generative AI, where readers gain insights into the history and fundamentals of Gen AI with a brief introduction to large language models. The subsequent chapter explains practical applications, guiding readers through the implementation of diverse neural network architectures for time series analysis such as Multi-Layer Perceptrons (MLP), WaveNet, Temporal Convolutional Network (TCN), Bidirectional Temporal Convolutional Network (BiTCN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Deep AutoRegressive (DeepAR), and Neural Basis Expansion Analysis (NBEATS) using modern tools. Building on this foundation, the book introduces the power of the Transformer architecture, exploring variants such as Vanilla Transformers, Inverted Transformer (iTransformer), DLinear, NLinear, and Patch Time Series Transformer (PatchTST). Finally, the book delves into foundation models such as Time-LLM, Chronos, TimeGPT, Moirai, and TimesFM, enabling readers to implement sophisticated forecasting models tailored to their specific needs. This book empowers readers with the knowledge and skills needed to leverage Gen AI for accurate and efficient time series forecasting. By providing a detailed exploration of advanced forecasting models and methodologies, this book enables practitioners to make informed decisions and drive business growth through data-driven insights.

● Understand the core history and applications of Gen AI and its potential to revolutionize time series forecasting.
● Learn to implement different neural network architectures such as MLP, WaveNet, TCN, BiTCN, RNN, LSTM, DeepAR, and NBEATS for time series forecasting.
● Discover the potential of the Transformer architecture and its variants, such as Vanilla Transformers, iTransformer, DLinear, NLinear, and PatchTST, for time series forecasting.
● Explore complex foundation models like Time-LLM, Chronos, TimeGPT, Moirai, and TimesFM.
● Gain practical knowledge on how to apply Gen AI techniques to real-world time series forecasting challenges and make data-driven decisions.

Who this book is for: Data Scientists, Machine Learning Engineers, Business Analysts, Statisticians, Economists, Financial Analysts, Operations Research Analysts, Data Analysts, Students.
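The architectures this blurb lists all share one supervised setup: predict the next value from a window of past values. As a minimal illustration of that shared framing (the series values and model order below are made up for the example, not taken from the book), here is an order-2 least-squares autoregression in plain Python, the simplest instance of the mapping that MLP, TCN, DeepAR, NBEATS, and the Transformer variants generalize:

```python
# Fit y[t] ~ w2*y[t-2] + w1*y[t-1] by solving the 2x2 normal equations
# directly. Neural forecasters learn richer, nonlinear versions of this
# same windowed "past values -> next value" mapping.

def fit_ar2(series):
    """Least-squares fit of an order-2 autoregression; returns (w2, w1)."""
    pairs = [(series[i - 2], series[i - 1]) for i in range(2, len(series))]
    targets = series[2:]
    a = sum(u * u for u, _ in pairs)   # sum of y[t-2]^2
    b = sum(u * v for u, v in pairs)   # cross term
    d = sum(v * v for _, v in pairs)   # sum of y[t-1]^2
    p = sum(u * t for (u, _), t in zip(pairs, targets))
    q = sum(v * t for (_, v), t in zip(pairs, targets))
    det = a * d - b * b
    w2 = (d * p - b * q) / det
    w1 = (a * q - b * p) / det
    return w2, w1

def forecast_next(series, w2, w1):
    """One-step-ahead forecast from the last two observations."""
    return w2 * series[-2] + w1 * series[-1]

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w2, w1 = fit_ar2(series)
pred = forecast_next(series, w2, w1)  # a linear trend is recovered exactly: 9.0
```

Once the windowed framing is in place, swapping the linear fit for any of the neural architectures above changes only the model, not the data pipeline.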

Jason's co-author, Barry Green, joins this next episode as they discuss the release of the second edition of "Data Means Business" on 25th March.  Returning for his third conversation on the podcast, Barry shares updates on the evolving landscape of data and AI, the impact of generative AI, and the importance of business capabilities. They delve into the changes since the first edition, including the role of the Chief Data Officer and the significance of adaptability in today's fast-paced world. Tune in to hear their thoughts on driving transformational change and delivering value with data & AI. The second edition of Data Means Business will be out on 25th March and will be available on Amazon. *****    Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation. 

A challenge I frequently hear about from subscribers to my insights mailing list is how to design B2B data products for multiple user types with differing needs. From dashboards to custom apps and commercial analytics / AI products, data product teams often struggle to create a single solution that meets the diverse needs of technical and business users in B2B settings. If you're encountering this issue, you're not alone!

In this episode, I share my advice for tackling this challenge including the gift of saying "no.” What are the patterns you should be looking out for in your customer research? How can you choose what to focus on with limited resources? What are the design choices you should avoid when trying to build these products? I’m hoping by the end of this episode, you’ll have some strategies to help reduce the size of this challenge—particularly if you lack a dedicated UX team to help you sort through your various user/stakeholder demands. 

Highlights / Skip to

The importance of proper user research and clustering “jobs to be done” around business importance vs. task frequency—ignoring the rest until your solution can show measurable value (4:29)
What “level” of skill to design for, and why “as simple as possible” isn’t what I generally recommend (13:44)
When it may be advantageous to use role or feature-based permissions to hide/show/change certain aspects, UI elements, or features (19:50)
Leveraging AI and LLMs in-product to allow learning about the user and progressive disclosure and customization of UIs (26:44)
Leveraging the “old” solution of rapid prototyping—which is now faster than ever with AI, and can accelerate learning (capturing user feedback) (31:14)
5 things I do not recommend doing when trying to satisfy multiple user types in your B2B AI or analytics product (34:14)
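For the role- and feature-based permissions idea mentioned in the highlights, a minimal sketch of per-role feature gating might look like the following (the role names and feature flags are hypothetical, not from the episode):

```python
# Role-based feature gating: show or hide product capabilities per persona.
# Role names and feature identifiers below are illustrative only.

FEATURES_BY_ROLE = {
    "analyst": {"raw_sql", "model_params", "export"},
    "executive": {"summary_view", "export"},
}

def visible_features(role, all_features):
    """Filter the feature catalog down to what a given role may see."""
    allowed = FEATURES_BY_ROLE.get(role, set())
    return [f for f in all_features if f in allowed]

catalog = ["summary_view", "raw_sql", "model_params", "export"]
visible_features("executive", catalog)  # -> ['summary_view', 'export']
```

The trade-off the episode points at is that these buckets are rigid: every user in a role sees the same UI, which is exactly what the AI/LLM-driven customization discussed later tries to soften.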

Quotes from Today’s Episode

If you're not talking to your users and stakeholders sufficiently, you're going to have a really tough time building a successful data product for one user, let alone for multiple personas. Listen for repeating patterns in what your users are trying to achieve (tasks they are doing). Focus on the jobs and tasks they do most frequently or the ones that bring the most value to their business. Forget about the rest until you've proven that your solution delivers real value for those core needs. It's more about understanding the problems and needs, not just the solutions. The solutions tend to be easier to design when the problem space is well understood. Users often suggest solutions, but it's our job to focus on the core problem we're trying to solve; simply entering inbound requests verbatim into JIRA and then “eating away” at the list is not usually a reliable strategy. (5:52)

I generally recommend not going for “easy as possible” at the cost of shallow value. Instead, you're going to want to design for some “mid-level” ability, understanding that this may make early user experiences with the product more difficult. Why? Oversimplification can mislead because data is complex, problems are multivariate, and data isn't always ideal. There are also “n” number of “not-first” impressions users will have with your product, which means there is only one “first impression.” As such, the idea conceptually is to design an amazing experience for the “n” experiences, but not to the point that users never realize value and give up on the product. While I'd prefer no friction, technical products sometimes will have to have a little friction up front; however, don't use this as an excuse for poor design. This is hard to get right, even when you have design resources, and it's why UX design matters: thinking this through ends up determining, in part, whether users obtain the promise of value you made to them. (14:21)

As an alternative to rigid role and feature-based permissions in B2B data products, you might consider leveraging AI and/or LLMs in your UI as a means of simplifying and customizing the UI for particular users. This approach allows users to interrogate the product about the UI and customize it, and lets the product learn over time about the user's questions (jobs to be done) such that the UI becomes organically customized to their needs. This is in contrast to the rigid buckets that role- and permission-based customization present. However, as discussed in my previous episode (164 - “The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge”), designing effective AI features and capabilities can also make things worse due to the probabilistic nature of the responses GenAI produces. As such, this approach may benefit from a UX designer or researcher familiar with designing data products. Understanding what “quality” means to the user, and how to measure it, is especially critical if you're going to leverage AI and LLMs to make the product UX better. (20:13)

The old solution of rapid prototyping is even more valuable now, because it's possible to prototype even faster. However, prototyping is not just about learning if your solution is on track. Whether you use AI or pencil and paper, prototyping early in the product development process should be framed as a “prop to get users talking.” In other words, it is a prop to facilitate problem and need clarity, not solution clarity. Its purpose is to spark conversation and determine if you're solving the right problem. As you iterate, your need to continually validate the problem should shrink, which will present itself in the form of consistent feedback you hear from end users. This is the point where you know you can focus on the design of the solution. Innovation happens when we learn, so the goal is to increase your learning velocity. (31:35)

Have you ever been caught in the trap of prioritizing feature requests based on volume? I get it. It's tempting to give the people what they think they want. For example, imagine ten users clamoring for control over specific parameters in your machine learning forecasting model. You could give them that control, thinking you're solving the problem because, hey, that's what they asked for! But did you stop to ask why they want that control? The reasons behind those requests could be wildly different. By simply handing over the keys to all the model parameters, you might be creating a whole new set of problems. Users now face a “usability tax,” trying to figure out which parameters to lock and which to let float. The key takeaway? Focus on how frequently the same problems occur across your users, not just how frequently a given tactic or “solution” method (a “model,” “dashboard,” or “feature”) appears in a stakeholder or user request. Remember, problems are often disguised as solutions. We've got to dig deeper and uncover the real needs, not just address the symptoms. (36:19)

Today, we’re joined by Rahul Pangam, Co-Founder & CEO of RapidCanvas, a leader in delivering transformative AI-powered solutions that empower businesses to achieve faster and more impactful outcomes. We talk about:

How to make GenAI more reliable: understanding your business context & knowing why something is happening
Moving from planning based on the human gut to an AI-based setup
The coming paradigm shift from SaaS to service as a software
Interacting with apps in plain language vs. remembering which of 56 dashboards to view