Willis Nana and I chat about the challenges of data engineering leadership, foundational skills, and his journey to becoming a content creator on YouTube. #dataengineering #data #ai #datateam #leadership
The integration of AI into everyday business operations raises questions about the future of work and human agency. With AI's potential to automate and optimize, how do we ensure that it complements rather than competes with human capabilities? What measures can be taken to prevent AI from overshadowing human input and creativity? How do we strike a balance between embracing AI's benefits and preserving the essence of human contribution? Faisal Hoque is the founder and CEO of SHADOKA, NextChapter, and other companies. He also serves as a transformation and innovation partner for CACI, an $8B company focused on U.S. national security. He volunteers for several organizations, including the MIT IDEAS Social Innovation Program. He is also a contributor at the Swiss business school IMD, Thinkers50, the Project Management Institute (PMI), and others. As a founder and CEO of multiple companies, he is a three-time winner of Deloitte Technology Fast 50™ and Fast 500™ awards. He has developed more than 20 commercial platforms and worked with leadership at the U.S. DoD, DHS, GE, MasterCard, American Express, Home Depot, PepsiCo, IBM, Chase, and others. For their innovative work, he and his team have been awarded several provisional patents in the areas of user authentication, business rule routing, and metadata sorting. In the episode, Richie and Faisal explore the philosophical implications of AI on humanity, the concept of AI as a partner, the potential societal impacts of AI-driven unemployment, the importance of critical thinking and personal responsibility in the AI era, and much more.
Links Mentioned in the Show:
SHADOKA
Faisal's Website
Connect with Faisal
Skill Track: Artificial Intelligence (AI) Leadership
Related Episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
Sign up to attend RADAR: Skills Edition
New to DataCamp?
Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
Jason's co-author, Barry Green, joins this next episode as they discuss the release of the second edition of "Data Means Business" on 25th March. Returning for his third conversation on the podcast, Barry shares updates on the evolving landscape of data and AI, the impact of generative AI, and the importance of business capabilities. They delve into the changes since the first edition, including the role of the Chief Data Officer and the significance of adaptability in today's fast-paced world. Tune in to hear their thoughts on driving transformational change and delivering value with data & AI. The second edition of Data Means Business will be out on 25th March and will be available on Amazon. ***** Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.
Data Hackers News is live! The hottest topics of the week, with the top news in Data, AI, and Technology, which you can also find in our weekly newsletter, now on the Data Hackers podcast! Press play and listen to this week's Data Hackers News! To stay on top of everything happening in the data field, subscribe to the weekly newsletter: https://www.datahackers.news/ Meet our Data Hackers News commentators: Monique Femme, Paulo Vasconcellos. Other Data Hackers channels: Site, LinkedIn, Instagram, TikTok, YouTube
Send us a text 🚀 What’s the secret to building a world-class influencer marketing program? Ryan Debenham, CEO of GRIN, joins Making Data Simple to break down the future of creator-driven marketing, AI’s role in the space, and how GRIN is reshaping brand-influencer partnerships. 💡 From Red Bull’s marketing playbook to a key conversation about the “Secret Sauce”, Ryan shares the insights, technology, and mindset needed to scale a creator-powered brand in today’s digital economy. 🔹 Key Topics: ✅ What makes a great influencer? ✅ How GRIN is redefining the industry ✅ AI’s impact on marketing ✅ The future of remote companies ⌛️Minute Markers 02:11 Meet Ryan Debenham 07:54 Running a Startup 08:46 Modeled after Red Bull 12:01 GRIN 15:44 Influencer Marketing 17:52 Who are the Influencers 18:45 Secret Sauce 22:51 The Technology 24:51 The 2-Min GRIN Pitch 31:16 AI in Marketing 34:32 GRIN's Future 36:06 A Remote Company 38:35 In Closing 📢 Listen now! 🎧 Guest: Ryan Debenham | LinkedIn | Website 🌍 Host: Al Martin, WW VP Technical Sales, IBM | LinkedIn 📩 Want to be a guest? Email us at [email protected] 🔗 Hashtags: #MakingDataSimple #InfluencerMarketing #CreatorEconomy #AIinMarketing #BrandGrowth #MarketingTech #Leadership #GRIN #RyanDebenham Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
How does a tiny hookworm outsmart the human immune system? In this episode, we explore the physicochemical fingerprint of Necator americanus, a parasite that infects millions worldwide. Using cutting-edge techniques like atomic force microscopy (AFM) and time-of-flight secondary ion mass spectrometry (ToF-SIMS), researchers reveal how the hookworm’s sheath and cuticle surfaces play a crucial role in immune evasion and infection.
🔍 Key Topics Covered:
• The unique surface properties of N. americanus at the infective L3 stage
• How the hookworm's sheath diverts immune defences, aiding reinfection
• The role of nano-annuli in enhancing adhesion and survival
• How surface chemistry, including heparan sulphate and phosphatidylglycerol, influences parasite migration
📖 Based on the research article: "The Physicochemical Fingerprint of Necator americanus" by Veeren M. Chauhan, David J. Scurr, Thomas Christie, Gary Telford, Jonathan W. Aylott, David I. Pritchard. Published in PLOS Neglected Tropical Diseases (2017). 🔗 Read it here: https://doi.org/10.1371/journal.pntd.0005971
Join us as we discuss how surface biochemistry influences parasite survival, reinfection, and potential future treatments!
🎧 Subscribe to the WoRM Podcast for more deep dives into cutting-edge parasitology research!
This podcast is generated with artificial intelligence and curated by Veeren. If you’d like your publication featured on the show, please get in touch.
📩 More info: 🔗 www.veerenchauhan.com 📧 [email protected]
A challenge I frequently hear about from subscribers to my insights mailing list is how to design B2B data products for multiple user types with differing needs. From dashboards to custom apps and commercial analytics / AI products, data product teams often struggle to create a single solution that meets the diverse needs of technical and business users in B2B settings. If you're encountering this issue, you're not alone!
In this episode, I share my advice for tackling this challenge including the gift of saying "no.” What are the patterns you should be looking out for in your customer research? How can you choose what to focus on with limited resources? What are the design choices you should avoid when trying to build these products? I’m hoping by the end of this episode, you’ll have some strategies to help reduce the size of this challenge—particularly if you lack a dedicated UX team to help you sort through your various user/stakeholder demands.
Highlights / Skip to
The importance of proper user research and clustering "jobs to be done" around business importance vs. task frequency, ignoring the rest until your solution can show measurable value (4:29)
What "level" of skill to design for, and why "as simple as possible" isn't what I generally recommend (13:44)
When it may be advantageous to use role or feature-based permissions to hide/show/change certain aspects, UI elements, or features (19:50)
Leveraging AI and LLMs in-product to allow learning about the user and progressive disclosure and customization of UIs (26:44)
Leveraging the "old" solution of rapid prototyping, which is now faster than ever with AI and can accelerate learning (capturing user feedback) (31:14)
5 things I do not recommend doing when trying to satisfy multiple user types in your B2B AI or analytics product (34:14)
Quotes from Today’s Episode
If you're not talking to your users and stakeholders sufficiently, you're going to have a really tough time building a successful data product for one user, let alone for multiple personas. Listen for repeating patterns in what your users are trying to achieve (the tasks they are doing). Focus on the jobs and tasks they do most frequently or the ones that bring the most value to their business. Forget about the rest until you've proven that your solution delivers real value for those core needs. It's more about understanding the problems and needs, not just the solutions. The solutions tend to be easier to design when the problem space is well understood. Users often suggest solutions, but it's our job to focus on the core problem we're trying to solve; simply entering inbound requests verbatim into JIRA and then "eating away" at the list is not usually a reliable strategy. (5:52)

I generally recommend not going for "as easy as possible" at the cost of shallow value. Instead, design for some "mid-level" ability, understanding that this may make early user experiences with the product more difficult. Why? Oversimplification can mislead because data is complex, problems are multivariate, and data isn't always ideal. There are "n" number of "not-first" impressions users will have with your product, but only one first impression. The idea is to design an amazing experience for those "n" later experiences, but not to the point that users never realize value and give up on the product. While I'd prefer no friction, technical products will sometimes have a little friction up front; however, don't use this as an excuse for poor design. This is hard to get right even when you have design resources, and it's why UX design matters: thinking this through ends up determining, in part, whether users obtain the promise of value you made to them. (14:21)

As an alternative to rigid role- and feature-based permissions in B2B data products, you might consider leveraging AI and/or LLMs in your UI as a means of simplifying and customizing the UI for particular users. This approach allows users to interrogate the product about the UI, customize the UI, and even lets the product learn over time about the user's questions (jobs to be done) such that it becomes organically customized to their needs. This is in contrast to the rigid buckets that role- and permission-based customization presents. However, as discussed in my previous episode (164 - "The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge"), designing effective AI features and capabilities can also make things worse due to the probabilistic nature of the responses GenAI produces. As such, this approach may benefit from a UX designer or researcher familiar with designing data products. Understanding what "quality" means to the user, and how to measure it, is especially critical if you're going to leverage AI and LLMs to make the product UX better. (20:13)

The old solution of rapid prototyping is even more valuable now, because it's possible to prototype even faster. However, prototyping is not just about learning whether your solution is on track. Whether you use AI or pencil and paper, prototyping early in the product development process should be framed as a "prop to get users talking." In other words, it is a prop to facilitate problem and need clarity, not solution clarity. Its purpose is to spark conversation and determine if you're solving the right problem. As you iterate, your need to continually validate the problem should shrink, which will show up as increasingly consistent feedback from end users. That is the point where you know you can focus on the design of the solution. Innovation happens when we learn, so the goal is to increase your learning velocity. (31:35)

Have you ever been caught in the trap of prioritizing feature requests based on volume? I get it. It's tempting to give the people what they think they want. For example, imagine ten users clamoring for control over specific parameters in your machine learning forecasting model. You could give them that control, thinking you're solving the problem because, hey, that's what they asked for! But did you stop to ask why they want that control? The reasons behind those requests could be wildly different. By simply handing over the keys to all the model parameters, you might be creating a whole new set of problems. Users now face a "usability tax," trying to figure out which parameters to lock and which to let float. The key takeaway? Focus on how frequently the same problems occur across your users, not just how frequently a given tactic or "solution" method (i.e., a "model," "dashboard," or "feature") appears in a stakeholder or user request. Remember, problems are often disguised as solutions. We've got to dig deeper and uncover the real needs, not just address the symptoms. (36:19)
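The role- and feature-based permissions approach discussed above (19:50) can be sketched in a few lines. This is a hypothetical illustration, not code from the episode; the role names and the `FEATURE_MATRIX` mapping are invented for the example.

```python
# Toy sketch of role-based feature visibility for a B2B analytics UI.
# All role and feature names here are hypothetical examples.

FEATURE_MATRIX = {
    "analyst":  {"raw_sql", "model_params", "dashboards"},
    "business": {"dashboards"},
    "admin":    {"raw_sql", "model_params", "dashboards", "user_admin"},
}

def visible_features(role: str) -> set[str]:
    """Return the set of UI features a given role should see."""
    return FEATURE_MATRIX.get(role, set())

def can_see(role: str, feature: str) -> bool:
    """Decide whether to render a feature for this role."""
    return feature in visible_features(role)
```

The rigidity the episode warns about is visible even here: every new user type forces a new bucket in the matrix, which is exactly the limitation that motivates the AI-driven, progressively customized UI alternative discussed at 26:44.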
UVA School of Data Science graduates pursue many career paths, including government, health care, technology, retail, and... finance. In this episode, we hear from two UVA data science alumni who put their data science degrees to work every day in their roles at Octus, a financial services company that uses data to provide insights to its clients in banking and legal services.
They discuss the integration of AI into various industries, the challenges of information overload, and the role of human expertise. We welcome Charu Rawat and Yihnew Eshetu, who earned their M.S. in Data Science degrees from UVA in 2019 and 2021, respectively, and Ben Rogers, vice president of AI and advanced analytics at Permira.
Salma Bakouk (CEO of Sifflet) and I discuss the evolving data and AI landscape, the rise of data observability in the age of AI, balancing personal and professional life as a founder, and much more.
In this episode I'll show you what it takes to land data analyst jobs! I'll provide in-depth insights and tips for six data analyst positions with salaries ranging from $35K to $200K, and explain why you should apply even if you don't meet all the requirements. 💌 Join 10k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter 🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training 👩💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa 👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com/interviewsimulator No College Degree As A Data Analyst Playlist: https://youtu.be/mSWtnjq4LRE?si=FlfChqSxIPBXc_Lb
⌚ TIMESTAMPS: Data Analyst Jobs: How Much $$$ Could You ACTUALLY Make???
00:00 - Introduction
00:21 - Data Analyst Job #1: Data Specialist ($35k)
04:00 - Data Analyst Job #2: Business and Data Analyst ($55k)
07:48 - Data Analyst Job #3: Data Visualization Analyst ($75k)
10:21 - Data Analyst Job #4: Senior Financial Analyst ($90k)
13:04 - Data Analyst Job #5: Senior Investment Operations Data Analyst ($125k)
14:35 - Data Analyst Job #6: Business Intelligence Engineer ($107k to $189k)
🔗 CONNECT WITH AVERY
🎥 YouTube Channel: https://www.youtube.com/@averysmith
🤝 LinkedIn: https://www.linkedin.com/in/averyjsmith/
📸 Instagram: https://instagram.com/datacareerjumpstart
🎵 TikTok: https://www.tiktok.com/@verydata
💻 Website: https://www.datacareerjumpstart.com/
Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!
To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more
If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.
👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa
The surging energy demands of data centers are transforming how power is sourced, managed, and delivered.
We sit down with Eamon Perrel, Executive Vice President of Apex Clean Energy, to discuss power solutions for hyperscalers and data center operators. Eamon shares insights into the increasing role of wind and solar energy, the complexities of energy procurement, and the challenges of integrating renewables into the grid. We also explore the impact of regulatory hurdles, the emergence of microgrids, and how energy teams and data center teams are beginning to collaborate more closely than ever before.
Key Takeaways:
(06:12) Apex Clean Energy's transition from utilities to serving tech companies.
(21:23) The role of virtual power purchase agreements (VPPAs) in energy sourcing.
(35:16) Hyperscalers and data center operators' approach to power procurement.
(43:13) The potential of microgrids for more sustainable energy solutions.
(55:47) The impact of regulatory barriers and misinformation on renewable energy adoption.
(01:15:37) The rise of self-generation as a viable option for data centers.
(01:27:48) Land acquisition and permitting challenges that slow renewable projects.
(01:51:19) Future trends in energy procurement and evolving market structures.
Resources Mentioned:
Eamon Perrel https://www.linkedin.com/in/eamon-perrel-a8b8695/
Apex Clean Energy LinkedIn https://www.linkedin.com/company/apex-clean-energy/
Apex Clean Energy Website https://www.apexcleanenergy.com/
Goldman Sachs report on energy demand https://www.goldmansachs.com/pdfs/insights/pages/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf
US Department of Energy https://www.energy.gov/
Thank you for listening to “Data Center Revolution.” Don’t forget to leave us a review and subscribe so you don’t miss an episode.
To learn more about Overwatch, visit us at https://linktr.ee/overwatchmissioncritical
#DataCenterIndustry #NuclearEnergy #FutureOfDataCenters #AI
Today, we're joined by Rahul Pangam, Co-Founder & CEO of RapidCanvas, a leader in delivering transformative AI-powered solutions that empower businesses to achieve faster and more impactful outcomes. We talk about:
How to make GenAI more reliable: understanding your business context and knowing why something is happening
Moving from planning based on the human gut to an AI-based setup
The coming paradigm shift from SaaS to service as a software
Interacting with apps in plain language vs. remembering which of 56 dashboards to view
In the retail industry, data science is not just about crunching numbers; it's about driving business impact through well-designed experiments. A/B testing in a physical store setting presents unique challenges that require careful planning and execution. How do you balance the need for statistical rigor with the practicalities of store operations? What role does data science play in ensuring that test results lead to actionable insights? Philipp Paraguya is the Chapter Lead for Data Science at Aldi DX. Previously, Philipp studied applied mathematics and computer science and has worked as a BI and advanced analytics consultant in various industries and projects since graduating. Due to his background as a software developer, he has a strong connection to classic software engineering and the sensible use of data science solutions. In the episode, Adel and Philipp explore the intricacies of A/B testing in retail, the challenges of running experiments in brick-and-mortar settings, aligning stakeholders for successful experimentation, the evolving role of data scientists, the impact of genAI on data workflows, and much more.
Links Mentioned in the Show:
Aldi DX
Connect with Philipp
Course: Customer Analytics and A/B Testing in Python
Related Episode: Can You Use AI-Driven Pricing Ethically? with Jose Mendoza, Academic Director & Clinical Associate Professor at NYU
Sign up to attend RADAR: Skills Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
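The statistical rigor discussed in this episode usually comes down to something like a two-proportion z-test comparing a control group of stores against a treatment group. The episode doesn't share code; this is a minimal stdlib-only sketch, and the conversion counts in the usage note are made-up numbers for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided pooled z-test for the difference in conversion rates
    between a control group (A) and a treatment group (B).
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (Phi), using erf.
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return z, 2 * (1 - phi)
```

For example, 120 conversions out of 1,000 in control vs. 150 out of 1,000 in treatment gives z ≈ 1.96, right at the conventional 5% significance threshold; which is exactly the kind of borderline result where the episode's point about aligning stakeholders before the experiment matters most.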
Serhii Sokolenko, founder at Tower Dev and former product manager at tech giants like Google Cloud, Snowflake, and Databricks, joined Yuliia to discuss his journey building a next-generation compute platform. Tower Dev aims to simplify data processing for data engineers who work with Python. Serhii explains how Tower addresses three key market trends: the integration of data engineering with AI through Python, the movement away from complex distributed processing frameworks, and users' desire for flexibility across different data platforms. He explains how Tower makes Python data applications more accessible by eliminating the need to learn complex frameworks while automatically scaling infrastructure. Serhii also shares his perspective on the future of data engineering, noting in which ways AI will transform the profession.
Tower Dev - https://tower.dev/
Serhii's LinkedIn - https://www.linkedin.com/in/ssokolenko/
Summary In this episode of Data and AI with Mukundan, the host discusses the creation and impact of an AI life planner designed to enhance productivity and time management. The conversation covers the technology behind the planner, including the use of GPT-4, the Google Calendar API, and the Pomodoro technique, as well as the personal transformation experienced by the host as a result of implementing this tool.
Takeaways:
Most of us struggle with time management.
AI can help optimize our schedules.
The AI life planner analyzes daily habits.
It syncs with Google Calendar for seamless planning.
Reminders are sent via Slack API integration.
A Pomodoro timer helps maintain focus.
The planner allows for real-time adjustments.
Productivity can skyrocket with the right tools.
You can build your own AI life planner.
Engaging with the audience for feedback is important.
If you want to see exactly how I built this AI Life Planner, check out my full guide here: https://mukundansankar.substack.com/p/i-never-thought-i-had-my-life-together
What the app looks like: https://youtu.be/pyyWV7-Ty5w?feature=shared
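The Pomodoro scheduling piece of a planner like the one described above can be sketched with the standard library alone. This is a toy illustration, not the host's code: it assumes the classic 25-minute work / 5-minute break cycle, and it stops short of the GPT-4, Google Calendar, and Slack integrations the episode mentions.

```python
from datetime import datetime, timedelta

def pomodoro_blocks(start, tasks, work_min=25, break_min=5):
    """Lay out tasks as consecutive Pomodoro work blocks starting at
    `start`, separated by short breaks. Returns (label, start, end)
    tuples -- the kind of blocks a planner could push to a calendar API."""
    blocks, t = [], start
    for task in tasks:
        end = t + timedelta(minutes=work_min)
        blocks.append((task, t, end))
        t = end + timedelta(minutes=break_min)  # break before the next block
    return blocks
```

For instance, scheduling ["email", "writing"] from 9:00 puts the first block at 9:00-9:25 and, after a 5-minute break, the second at 9:30-9:55.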
Summary In this episode of the Data Engineering Podcast Pete DeJoy, co-founder and product lead at Astronomer, talks about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3. Pete shares his journey into data engineering, discusses Astronomer's contributions to the Airflow project, and highlights the critical role of Airflow in powering operational data products. He covers the evolution of Airflow, its position in the data ecosystem, and the challenges faced by data engineers, including infrastructure management and observability. The conversation also touches on the upcoming Airflow 3 release, which introduces data awareness, architectural improvements, and multi-language support, and Astronomer's observability suite, Astro Observe, which provides insights and proactive recommendations for Airflow users.
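At its core, what Airflow gives data engineers is dependency-aware task orchestration. As a rough illustration of that idea (deliberately not Airflow's own API, which builds DAGs from `airflow.DAG` and operator classes), the stdlib's `graphlib` can compute a valid execution order for a pipeline-shaped dependency graph; the task names below are invented for the example.

```python
from graphlib import TopologicalSorter

# Toy dependency graph: each task maps to the set of tasks it depends on,
# mirroring the extract -> transform -> validate -> load shape of a
# typical pipeline. Illustrative only; not an Airflow DAG definition.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}

def run_order(graph):
    """Return one valid execution order respecting all dependencies."""
    return list(TopologicalSorter(graph).static_order())
```

A scheduler like Airflow layers retries, scheduling intervals, and (with Airflow 3's data awareness) asset-triggered runs on top of exactly this kind of ordering.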
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Pete DeJoy about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3.
Interview
Introduction
Can you describe what Astronomer is and the story behind it?
How would you characterize the relationship between Airflow and Astronomer?
Astronomer just released your State of Airflow 2025 Report yesterday, and it is the largest data engineering survey ever, with over 5,000 respondents. Can you talk a bit about the top-level findings in the report?
What about the overall growth of the Airflow project over time?
How have the focus and features of Astronomer changed since it was last featured on the show in 2017?
Astro Observe went GA in early February; what does the addition of pipeline observability mean for your customers?
What are other capabilities similar in scope to observability that Astronomer is looking at adding to the platform?
Why is Airflow so critical in providing an elevated observability (or cataloging, or something similar) experience in a DataOps platform?
What are the notable evolutions in the Airflow project and ecosystem in that time?
What are the core improvements that are planned for Airflow 3.0?
What are the most interesting, innovative, or unexpected ways that you have seen Astro used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Airflow and Astro?
What do you have planned for the future of Astro/Astronomer/Airflow?
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links
Astronomer
Airflow
Maxime Beauchemin
MongoDB
Databricks
Confluent
Spark
Kafka
Dagster
Podcast Episode
Prefect
Airflow 3
The Rise of the Data Engineer blog post
dbt
Jupyter Notebook
Zapier
cosmos library for dbt in Airflow
Ruff
Airflow Custom Operator
Snowflake
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA