talk-data.com

Showing 11 results

Activities & events

Martin Kratky – author

Build effective data models and reports in Power BI for financial planning, budgeting, and valuations with practical templates, logic, and step-by-step guidance. Free with your book: DRM-free PDF version + access to Packt's next-gen Reader.

Key Features
  • Engineer optimal star schema data models for financial planning and analysis
  • Implement common financial logic, calendars, and variance calculations
  • Create dynamic, formatted reports for income statements, balance sheets, and cash flow
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Martin Kratky brings his global experience of over 20 years as co-founder of Managility and creator of Acterys to empower CFOs and accountants with Power BI for Finance, a hands-on guide to streamlining and enhancing financial processes. Starting with the foundation of every effective BI solution, a well-designed data model, the book shows you how to structure star schemas and integrate common financial data sources like ERP and accounting systems. You'll then learn to implement key financial logic using DAX and M, covering calendars, KPIs, and variance calculations. The book offers practical advice on creating clear and compliant financial reports, such as income statements, balance sheets, and cash flows, with visual design and formatting best practices. With dedicated chapters on advanced workflows, you'll learn how to handle multi-currency setups, perform group consolidations, and implement planning models like rolling forecasts, annual budgets, and sales and operations planning (S&OP). As you advance, you'll gain insights from real-world case studies covering company valuations, Excel integration, and the use of write-back methods with Dynamics Business Performance Planning and Acterys. The concluding chapters highlight how AI and Copilot enhance financial analytics. Email sign-up and proof of purchase required.

What you will learn
  • Apply multi-currency handling and group consolidation techniques in Power BI
  • Model discounted cash flow and company valuation scenarios
  • Design and manage write-back workflows with Dynamics BPP and Acterys
  • Integrate Excel and Power BI using live connections and cube formulas
  • Utilize AI, Copilot, and LLMs to enhance automation and insight generation
  • Create complete finance-focused dashboards for sales and operations planning

Who this book is for
This book is for finance professionals, including CFOs, FP&A managers, controllers, and certified accountants, who want to enhance reporting, planning, and forecasting using Power BI. Basic familiarity with Power BI and financial concepts is recommended to get the most out of this hands-on guide.
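The book implements its financial logic in DAX and M; as a language-neutral sketch of the variance calculation named in the feature list, here is a minimal pandas example (the account names and figures are invented for illustration):

    import pandas as pd

    # Hypothetical actual-vs-budget figures for three P&L lines
    pl = pd.DataFrame({
        "account": ["Revenue", "COGS", "Opex"],
        "actual":  [120_000, -48_000, -30_000],
        "budget":  [100_000, -45_000, -32_000],
    })

    # Variance and variance % against budget: the classic FP&A comparison
    pl["variance"] = pl["actual"] - pl["budget"]
    pl["variance_pct"] = pl["variance"] / pl["budget"].abs()

    print(pl)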

data data-science business-intelligence microsoft-power-platform power-bi AI/ML Analytics BI Data Modelling DAX ERP dimensional modeling KPI LLM Power BI
DAX for Humans 2025-09-26
Greg Deckler – author

Level up your Power BI skills by learning DAX in an easy, fun, and practical way using one core pattern that can be used to solve most problems.

Key Features
  • Learn simple through advanced DAX in a clear, concise way using real-world examples
  • Explore powerful techniques for debugging DAX and increasing efficiency
  • Use artificial intelligence to write, refine, and troubleshoot your DAX formulas
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Although DAX has been around for over a decade, many struggle to master the language, primarily because DAX is often taught through the CALCULATE function, the most complex and unintuitive function in all of DAX. But what if DAX could be taught without CALCULATE? The result would be an incredibly intuitive and easy way to learn DAX. DAX for Humans stands the traditional approach to learning DAX on its head, forgoing legacy teaching methods for a modern approach that focuses on core DAX concepts rather than any specific function. Even if you know nothing about DAX, from the very first chapter you will learn the essentials of the DAX language, as well as a single pattern that solves the majority of DAX problems. From that point forward, you'll explore how to work with the basic building blocks of the DAX language and apply what you learn to real-world business scenarios across customers, human resources, projects, finance, operations, and more. By the end of this book, you'll be able to apply your DAX skills to simple, complex, and advanced scenarios; understand how to optimize and debug your DAX code; and know how to efficiently apply artificial intelligence to help you write and debug your DAX code.

What you will learn
  • Master techniques to solve common DAX calculations
  • Apply DAX to real-world, practical business scenarios
  • Explore advanced techniques for tackling unusual DAX scenarios
  • Discover new ideas, tricks, and time-saving techniques for better calculations
  • Find out how to optimize and debug DAX effectively
  • Leverage AI to assist in writing, troubleshooting, and improving DAX

Who this book is for
If you use Power BI but struggle with DAX, or if you know DAX but want to improve and expand your skills, then this book is for you. Even if you have never used Power BI or DAX before, you will find this book helpful as you progress from the basics to mastery of the DAX language using real-world scenarios as your guide.

data data-science analytics-platforms powerpivot data-analysis-expressions-dax data analysis expressions (dax) AI/ML BI DAX Power BI
Alison Huh – author , Jeffrey Allen – author , Maya Raman – author , Parker Faucher – author , Lauren Tran – author , Lander Kerbey – author , Rachelle Palmer – author

The official guide to MongoDB architecture, tools, and cloud features, written by leading MongoDB subject matter experts to help you build secure, scalable, high-performance applications.

Key Features
  • Design resilient, secure solutions with high performance and scalability
  • Streamline development with modern tooling, indexing, and AI-powered workflows
  • Deploy and optimize in the cloud using advanced MongoDB Atlas features
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Delivering secure, scalable, and high-performance applications is never easy, especially when systems must handle growth, protect sensitive data, and perform reliably under pressure. The Official MongoDB Guide addresses these challenges with guidance from MongoDB's top subject matter experts, so you learn proven best practices directly from those who know the technology inside out. This book takes you from core concepts and architecture through to advanced techniques for data modeling, indexing, and query optimization, supported by real-world patterns that improve performance and resilience. It offers practical coverage of developer tooling, IDE integrations, and AI-assisted workflows that will help you work faster and more effectively. Security-focused chapters walk you through authentication, authorization, encryption, and compliance, while chapters dedicated to MongoDB Atlas showcase its robust security features and demonstrate how to deploy, scale, and leverage platform-native capabilities such as Atlas Search and Atlas Vector Search. By the end of this book, you'll be able to design, build, and manage MongoDB applications with the confidence that comes from learning directly from the experts shaping the technology.

What you will learn
  • Build secure, scalable, and high-performance applications
  • Design efficient data models and indexes for real workloads
  • Write powerful queries to sort, filter, and project data
  • Protect applications with authentication and encryption
  • Accelerate coding with AI-powered and IDE-based tools
  • Launch, scale, and manage MongoDB Atlas with confidence
  • Unlock advanced features like Atlas Search and Atlas Vector Search
  • Apply proven techniques from MongoDB's own engineering leaders

Who this book is for
This book is for developers, database professionals, architects, and platform teams who want to get the most out of MongoDB. Whether you're building web apps, APIs, mobile services, or backend systems, the concepts covered here will help you structure data, improve performance, and deliver value to your users. No prior experience with MongoDB is required, but familiarity with databases and programming will be helpful.
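To make the sort/filter/project point from the feature list concrete, here is a minimal PyMongo sketch; the connection string, the shop.orders collection, and the field names are invented for the example:

    import pymongo

    # Hypothetical local deployment and collection
    client = pymongo.MongoClient("mongodb://localhost:27017")
    orders = client["shop"]["orders"]

    # Filter shipped orders, project two fields, sort by total descending
    top_orders = (
        orders.find({"status": "shipped"}, {"_id": 0, "customer": 1, "total": 1})
              .sort("total", pymongo.DESCENDING)
              .limit(5)
    )
    for doc in top_orders:
        print(doc)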

data data-engineering nosql-databases MongoDB AI/ML API Cloud Computing Data Modelling Cyber Security
O'Reilly Data Engineering Books

This event is being promoted by many groups.

This is an online event.

Participants can be located anywhere in the world!

This event is FREE to attend!

In order to attend, please register here:

How to Write Your Book Using AI Tickets, Wed, Jul 30, 2025 at 9:00 AM | Eventbrite

For login info, visit:

https://www.publishmybestseller.com/the-ai-workshop?affiliate_id=4308062

Hurry! Slots are limited!

How would you like to finally write your amazing book, get completely unstuck in the writing process and never worry about sitting in front of a blinking cursor with writer's block again?

All by using the power of AI to supercharge your writing! You've probably heard a ton about AI recently, and maybe you're skeptical about it; perhaps understandably so. The first thing I want to point out is that I am not talking about having AI do all the work and write some regurgitated mess from the web. No one wants that.

But what if you're stuck on crafting a really engaging title and sub-title, and you just need some great examples to draw from? I'm going to show you how AI can help with that.

Or what if you're stuck creating a great table of contents and flow for your book, and you'd like to see the best structure to use for a business, self-help, coaching, medical, legal, or consulting book?

I’m going to show you how AI can help with that.

What if you're stuck needing a relevant story or anecdote to drive home your message, be it leadership triumphs, coaching successes, sales victories, or client breakthroughs?

AI can help with that too!

The speaker, Rob Kosberg, is the founder of Best Seller Publishing, which has helped thousands of authors write, publish, and launch their books to bestseller status.

Over the last decade we have developed (and trademarked) our own proprietary methodology we call Enhanced Ghostwriting™, and we have taught AI to model that method.

I want to invite you to my Write Your Book with AI Workshop, where I am going to show you how to use AI and get unstuck forever with your writing.

We will discuss how to:

  • Use AI to help you dial in your ideal reader and client
  • Use AI to craft an amazing hook, title and sub-title
  • Use AI to craft a table of contents and flow for your book
  • Use AI to set your chapter structure up properly
  • Use AI to give you great story and anecdote ideas, or to simply enhance your own story

About the Speaker

Rob Kosberg

  • Founder and CEO of Best Seller Publishing.
  • 2x Wall Street Journal and USA Today bestselling author.
  • 10+ years of experience helping thousands of coaches, business owners, and entrepreneurs publish and promote their own books.
  • Featured on ABC, CBS, NBC, FOX, The Wall Street Journal, Forbes, Entrepreneur Magazine, USA Today, The New York Times, and CNN.
How to Write Your Book Using AI
Angelica Lo Duca – Professor, Researcher, and Author

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

In this episode, we dive into the world of data storytelling with special guest Angelica Lo Duca, a professor, researcher, and author. Pull up a chair as we explore her journey from programming to teaching, and dive into the principles of turning raw data into compelling stories.

Key topics include:

Angelica’s background: From researcher to professor and published author

Why write a book?: The motivation, process, and why she chooses books over blogs

About the book: Data Storytelling with Altair and Generative AI

Overview of the book: Who it’s for and the key insights it offers

What data storytelling is, and how it differs from traditional dashboards and reports

Why Altair? Exploring Altair and Vega-Lite for effective visualizations

Generative AI’s role: How tools like ChatGPT and DALL-E fit into the data storytelling process, and potential risks like bias in AI-generated images

DIKW Pyramid: Moving from raw data to actionable wisdom using the Data-Information-Knowledge-Wisdom framework
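As a taste of the Altair and Vega-Lite topic above, here is a minimal Altair sketch in Python (the small revenue dataset and output file name are invented for the example):

    import altair as alt
    import pandas as pd

    # Tiny invented dataset for illustration
    sales = pd.DataFrame({
        "month": ["Jan", "Feb", "Mar", "Apr"],
        "revenue": [120, 135, 128, 160],
    })

    # Declarative Vega-Lite spec: one encoding per visual channel
    chart = (
        alt.Chart(sales)
        .mark_line(point=True)
        .encode(x="month:N", y="revenue:Q")
        .properties(title="Monthly revenue")
    )
    chart.save("monthly_revenue.html")  # renders with Vega-Lite in the browser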

Where to buy her books:
https://www.amazon.com/stores/Angelica-Lo-Duca/author/B0B5BHD5VF
https://www.amazon.com/Become-Great-Data-Storyteller-Change/dp/1394283318
https://www.amazon.com/Data-Storytelling-Altair-Angelica-Duca/dp/1633437922/

Snippet: https://livebook.manning.com/book/data-storytelling-with-altair-and-ai/chapter-10/16

Connect with Angelica on Medium for more articles and insights: https://medium.com/@alod83/about

AI/ML GenAI LLM
DataTopics: All Things Data, AI & Tech
Johanna Berer – Host/Interviewer @ DataTalks.Club , Christopher Bergh – CEO and Founder @ DataKitchen

Host: Hi everyone, welcome to our event. This event is brought to you by DataTalks.Club, which is a community of people who love data, and we have weekly events; today's is one of them. I guess we are also a community of people who like to wake up early, if you're from the States, right, Christopher? Or maybe not so much, because this is the time we usually have our events; for our guests and presenters from the States we usually do it in the evening, Berlin time, but unfortunately that kind of slipped my mind. Anyway, we have a lot of events; you can check them at the link in the description. I don't think there are many on that link right now, but we will be adding more and more; I think we have five or six interviews scheduled, so keep an eye on that. Do not forget to subscribe to our YouTube channel; this way you will get notified about all our future streams, which will be as awesome as the one today. And, very important, do not forget to join our community, where you can hang out with other data enthusiasts. During today's interview you can ask any question: there's a pinned link in the live chat, so click on that link, ask your question, and we will be covering these questions during the interview. Now I will stop sharing my screen. There is a message from Christopher to anyone who's watching right now; we have this on YouTube, but people have not seen what you wrote: "Hello everyone."

Chris: Oh, I should look on YouTube then.

Host: You don't need to; you'll need to focus on answering questions, and I'll be keeping an eye on all the questions. So, if you're ready, we can start.

Chris: I'm ready.

Host: And you prefer Christopher, not Chris, right?

Chris: Chris is fine. It's a bit shorter.

Host: Okay. So this week we'll talk about DataOps again. Maybe it's a tradition that we talk about DataOps once per year, though we actually skipped one year, because we haven't had Chris for some time. Today we have a very special guest: Christopher is the co-founder, CEO, and head chef, or head cook, at DataKitchen, with 25 years of experience in analytics and software engineering (maybe that's outdated; probably you have more now, and maybe you stopped counting). Christopher is known as the co-author of the DataOps Cookbook and the DataOps Manifesto. It's not the first time we've had Christopher on the podcast: we interviewed him two years ago, also about DataOps, and this one will be about DataOps too, so we'll catch up and see what actually changed in these two years. Welcome to the interview.

Chris: Well, thank you for having me. I'm happy to be here, talking all things related to DataOps, why bother with DataOps, and happy to talk about the company, or what's changed. Excited.

Host: Yeah, so let's dive in. The questions for today's interview were prepared by Johanna Berer, as always; thanks, Johanna, for your help. Before we start with our main topic for today, DataOps, let's start with your background. Can you tell us about your career journey so far? For those who have not listened to the previous podcast, maybe you can talk about yourself, and for those who did listen, maybe give a summary of what has changed in the last two years.

Chris: Will do. My name is Chris, and I guess I'm sort of an engineer. I spent about the first 15 years of my career in software, working on and building some AI systems and some non-AI systems, at NASA and MIT Lincoln Lab, then some startups, and then Microsoft. Then, about 2005, I got the data bug. My kids were small, and I thought, oh, this data thing is easy; I'd be able to go home for dinner at five, and life would be fine.

Host: You started your own company, right?

Chris: And it didn't work out that way. What was interesting, for me, is that the problem wasn't doing the data. We had smart people who did data science and data engineering, the act of creating things. It was the systems around the data that were hard. It was really hard not to have errors in production. I had a BlackBerry at the time and a long drive to work, and I would not look at the BlackBerry all morning; I'd sit in the parking lot, take a deep breath, look at it and go: uh oh, is there going to be any problem today? If there wasn't, I'd walk in very happy, and if there was, I'd have to brace myself. And then the second problem: the team I worked for just couldn't go fast enough. The customers were super demanding; they didn't care; they always thought things should be faster, and we were always behind. So how do you live in that world, where things are breaking left and right and you're terrified of making errors, and, second, you just can't go fast enough?

Host: And this is the pre-Hadoop era, right? Before all this big data tech.

Chris: Yeah, before. We were using SQL Server, and we had smart people, so we built an engine inside SQL Server that made it a columnar database, in order to make certain things fast. And it wasn't bad; the principles are the same. Before Hadoop it's still a database: there are still indexes, there are still queries, things like that. At the time you would use OLAP engines; we didn't use those, but those reports, or models, are not that different. We had a rack of servers instead of the cloud. So what I took from that was that it's just hard to run a team of people doing data and analytics. I took it from a manager's perspective: I started to read Deming and to think about the work that we do as a factory, a factory that produces insight and not automobiles. How do you run that factory so it produces things of good quality? And then, second, since I had come from software, I've been very influenced by the DevOps movement: how you automate deployment, how you run in an agile way, how you change things quickly, and how you innovate. Those two things, running a really good, solid production line with very low errors, and changing that production line very, very often, are kind of opposite, right? So how do you, as a manager, and technically, approach that? Then, ten years ago, we started DataKitchen. We've always been a profitable company, so we started off with some customers and started building some software, and we realized that we couldn't work any other way, and that the way we work wasn't understood by a lot of people, so we had to write a book and a manifesto to share our methods. So we've been in business now a little over ten years.

Host: Oh, that's cool. So let's talk about DataOps. You mentioned DevOps and how you were inspired by it. By the way, do you remember roughly when DevOps started to appear, when people started calling these principles, and the tools around them, DevOps?

Chris: Yeah. Well, first of all, I had a boss in 1990 at NASA who had this idea: build a little, test a little, learn a lot. That was his mantra, and it made a lot of sense. Then the Agile Software Manifesto came out in 2001, which is very similar. And then the first real DevOps was a guy at Twitter who started to do automated deployment, push a button, and that was around 2009-ish; I think the first DevOps meetup was around then. So it's been about 15 years, I guess.

Host: I was trying to count: I started my career in 2010, and my first job was as a Java developer. I remember that for some things we would just SFTP to the machine, put the JAR archive there, and then keep our fingers crossed that it didn't break. I wouldn't really call it that...

Chris: You were deploying; you had a deploy process, I'd put it that way.

Host: Right.

Chris: And that was documented, too? It was like: put the JAR on production, cross your fingers.

Host: I think there was a page on some internal wiki that described, with passwords, what you should do.

Chris: Yeah. And I think what's interesting is why that changed. We laugh at it now, but why didn't you invest in automating deployment, or in a whole bunch of automated regression tests that would run? Because I think in software now it would be rare that people wouldn't use CI/CD, wouldn't have some automated tests, functional regression tests that would be the
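The transcript breaks off here, on the value of automated regression tests. As a minimal sketch of what such a test can look like for a data pipeline step, runnable with pytest (the daily_revenue transform and its columns are invented for the example):

    import pandas as pd

    def daily_revenue(orders: pd.DataFrame) -> pd.DataFrame:
        # Hypothetical pipeline step: total order value per day
        return orders.groupby("date", as_index=False)["total"].sum()

    def test_daily_revenue_preserves_totals():
        orders = pd.DataFrame({
            "date": ["2024-01-01", "2024-01-01", "2024-01-02"],
            "total": [10.0, 5.0, 7.5],
        })
        result = daily_revenue(orders)
        # Regression guard: the grand total must survive the aggregation
        assert result["total"].sum() == orders["total"].sum()
        assert list(result["date"]) == ["2024-01-01", "2024-01-02"]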

Agile/Scrum AI/ML Analytics Big Data Chef Cloud Computing Data Engineering Data Science DataOps DevOps Hadoop Java Microsoft SQL
DataTalks.Club
Elad Eldor – Author , Tobias Macey – host

Summary

Kafka has become a ubiquitous technology, offering a simple method for coordinating events and data across different systems. Operating it at scale, however, is notoriously challenging. Elad Eldor has experienced these challenges first-hand, leading to his work writing the book "Kafka Troubleshooting in Production". In this episode he highlights the sources of complexity that contribute to Kafka's operational difficulties, and some of the main ways to identify and mitigate potential sources of trouble.
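As one concrete example of the durability trade-offs discussed in the episode, here is a minimal producer sketch using the confluent-kafka Python client with conservative delivery settings; the broker address and topic name are invented, and the settings shown are a common starting point rather than a universal recommendation:

    from confluent_kafka import Producer

    # Durability-leaning settings: wait for all in-sync replicas and
    # enable idempotence so retries cannot duplicate records.
    producer = Producer({
        "bootstrap.servers": "broker1:9092",  # hypothetical broker
        "acks": "all",
        "enable.idempotence": True,
    })

    def on_delivery(err, msg):
        # Surface delivery failures instead of silently losing data
        if err is not None:
            print(f"delivery failed: {err}")
        else:
            print(f"delivered to {msg.topic()}[{msg.partition()}]")

    producer.produce("events", value=b'{"id": 1}', callback=on_delivery)
    producer.flush(10)  # block until outstanding messages are delivered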

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation, or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results, all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Elad Eldor about operating Kafka in production and how to keep your clusters stable and performant

Interview

Introduction
How did you get involved in the area of data management?
Can you describe your experiences with Kafka?

What are the operational challenges that you have had to overcome while working with Kafka?
What motivated you to write a book about how to manage Kafka in production?

There are many options now for persistent data queues. What are the factors to consider when determining whether Kafka is the right choice?

In the case where Kafka is the appropriate tool, there are many ways to run it now. What are the considerations that teams need to work through when determining whether/where/how to operate a cluster?

When provisioning a Kafka cluster, what are the requirements that need to be considered when determining the sizing?

What are the axes along which size/scale need to be determined?

The core promise of Kafka is that it is a durable store for continuous data. What are the mechanisms that are available for preventing data loss?

Under what circumstances can data be lost?

What are the different failure conditions that cluster operators need to be aware of?

What are the monitoring strategies that ar

AI/ML Analytics Cloud Computing Data Engineering Data Lake Data Lakehouse Data Management Delta Hudi Iceberg Kafka SaaS SQL Data Streaming Trino
Data Engineering Podcast
Ron L'Esteve – author

Design and implement a modern data lakehouse on the Azure Data Platform using Delta Lake, Apache Spark, Azure Databricks, Azure Synapse Analytics, and Snowflake. This book teaches you the intricate details of the data lakehouse paradigm and how to efficiently design a cloud-based data lakehouse using highly performant, cutting-edge Apache Spark capabilities in Azure Databricks, Azure Synapse Analytics, and Snowflake. You will learn to write efficient PySpark code for batch and streaming ELT jobs on Azure, and you will follow along with practical, scenario-based examples showing how to apply the capabilities of Delta Lake and Apache Spark to optimize performance and to secure, share, and manage a high volume, high velocity, and high variety of data in your lakehouse with ease. The patterns of success that you acquire from reading this book will help you hone your skills to build high-performing and scalable ACID-compliant lakehouses using flexible and cost-efficient decoupled storage and compute capabilities. Extensive coverage of Delta Lake ensures that you are aware of, and can benefit from, all that this new open source storage layer can offer. In addition to the deep examples on Databricks in the book, there is coverage of alternative platforms such as Synapse Analytics and Snowflake so that you can make the right platform choice for your needs. After reading this book, you will be able to implement Delta Lake capabilities, including Schema Evolution, Change Feed, Live Tables, Sharing, and Clones, to enable better business intelligence and advanced analytics on your data within the Azure Data Platform.

What You Will Learn
  • Implement the data lakehouse paradigm on Microsoft's Azure cloud platform
  • Benefit from the new Delta Lake open-source storage layer for data lakehouses
  • Take advantage of schema evolution, change feeds, live tables, and more
  • Write functional PySpark code for data lakehouse ELT jobs
  • Optimize Apache Spark performance through partitioning, indexing, and other tuning options
  • Choose between alternatives such as Databricks, Synapse Analytics, and Snowflake

Who This Book Is For
Data, analytics, and AI professionals at all levels, including data architect and data engineer practitioners. Also for data professionals seeking patterns of success by which to remain relevant as they learn to build scalable data lakehouses for their organizations and customers who are migrating into the modern Azure Data Platform.
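As a minimal illustration of the kind of PySpark batch ELT step the book describes, here is a sketch that appends cleaned records to a Delta table; it assumes a Delta-enabled Spark session (for example on Databricks), and the storage paths and column names are invented:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks the session already exists; elsewhere, Delta Lake
    # must be installed and configured on the cluster.
    spark = SparkSession.builder.appName("orders-elt").getOrCreate()

    # Extract: read raw JSON landed in the lake (hypothetical path)
    raw = spark.read.json("abfss://landing@lake.dfs.core.windows.net/orders/")

    # Transform: basic cleanup and a derived partition column
    cleaned = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: append into an ACID Delta table, partitioned by date
    (cleaned.write.format("delta")
            .mode("append")
            .partitionBy("order_date")
            .save("abfss://curated@lake.dfs.core.windows.net/orders_delta"))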

data data-engineering storage-repositories data-lake AI/ML Analytics Azure BI Cloud Computing Data Lakehouse Databricks Delta ETL/ELT Microsoft PySpark Snowflake Spark Data Streaming Synapse
O'Reilly Data Engineering Books
Josh Linkner – Chairman and co-founder @ Platypus Labs , Vishal – host

Discussing Data, Innovation, and Creativity with Josh Linkner. Josh talks about using little creativity spurts for disruption. He sheds light on how organizations can embrace creativity and use small creative innovations to spur big breakthroughs. He shared lots of examples of big little breakthroughs.

Bio: He has been the founder and CEO of five tech companies, which sold for a combined value of over $200 million. He's the author of four books, including the New York Times bestsellers Disciplined Dreaming and The Road to Reinvention. This guy just loves starting and building companies. He's the founding partner of Detroit Venture Partners and has been involved in the launch of over 100 startups. Today, Josh serves as Chairman and co-founder of Platypus Labs, an innovation research, training, and consulting firm. He has twice been named the Ernst & Young Entrepreneur of the Year and is a recipient of the United States Presidential Champion of Change Award. Josh is also a passionate Detroiter, the father of four, a professional-level jazz guitarist, and has a slightly odd obsession with greasy pizza.

Josh's Book: Big Little Breakthroughs https://amzn.to/3usFCLm

Josh's Recommendations: Think Like a Monk: Train Your Mind for Peace and Purpose Every Day https://amzn.to/3bzvyYh Range: Why Generalists Triumph in a Specialized World https://amzn.to/37K4PqW Think Again: The Power of Knowing What You Don't Know https://amzn.to/37MepcR


Some questions we covered:
1. Starter: give your starter pitch; one point that Big Little Breakthroughs points to
2. Vishal briefly introduces Josh
3. What are you seeing as the role of innovation in the middle of a firefight [the pandemic]?
4. What is the state of enterprise investments to promote innovation?
5. What are some easy-to-fix bottlenecks to get enterprises to keep on innovating?
6. What are some misconceptions about innovation and its adoption?
7. Explain your journey to your current role.
8. Could you share something about your current role?
9. What does your company do?
10. Explain your journey to this book.
11. Why write this book?
12. Why are you so passionate about helping everyday people become everyday innovators?
13. What's the most misunderstood thing around human creativity?
14. What's your favorite brainstorming technique?
15. From doing the research for your new book, Big Little Breakthroughs, what surprised you the most?
16. What are 1-3 best practices that you think are the key to success in your journey?
17. Do you have any favorite read?
18. As a closing remark, what would you like to tell our audience?

About TAO.ai[Sponsor]: TAO is building the World's largest and AI-powered Skills Universe and Community powering career development platform empowering some of the World's largest communities/organizations. Learn more at https://TAO.ai

About FutureOfData: FutureOfData takes you on the journey with leaders, experts, academics, authors, and change-makers designing the future of data, analytics, and insights.

About AnalyticsWeek.com FutureOfData is managed by AnalyticsWeek.com, a #FutureOfData Leadership community of Organization architects and leaders.

Sponsorship / Guest Request should be directed to [email protected]

Keywords:

#FutureofData #Work2.0 #Work2dot0 #Leadership #Growth #Org2dot0 #Work2 #Org2

AI/ML Analytics
The Future of Data Podcast | conversation with leaders, influencers, and change makers in the World of Data & Analytics

Use this guide to one of SQL Server 2019's most impactful features: Big Data Clusters. You will learn about data virtualization and data lakes for this complete artificial intelligence (AI) and machine learning (ML) platform within the SQL Server database engine. You will know how to use Big Data Clusters to combine large volumes of streaming data for analysis along with data stored in a traditional database. For example, you can stream large volumes of data from Apache Spark in real time while executing Transact-SQL queries to bring in relevant additional data from your corporate SQL Server database. Filled with clear examples and use cases, this book provides everything necessary to get started working with Big Data Clusters in SQL Server 2019. You will learn about the architectural foundations that are made up of Kubernetes, Spark, HDFS, and SQL Server on Linux. You then are shown how to configure and deploy Big Data Clusters in on-premises environments or in the cloud. Next, you are taught about querying. You will learn to write queries in Transact-SQL, taking advantage of skills you have honed for years, and with those queries you will be able to examine and analyze data from a wide variety of sources such as Apache Spark. Through the theoretical foundation provided in this book and easy-to-follow example scripts and notebooks, you will be ready to use and unveil the full potential of SQL Server 2019: combining different types of data spread across widely disparate sources into a single view that is useful for business intelligence and machine learning analysis.

What You Will Learn
  • Install, manage, and troubleshoot Big Data Clusters in cloud or on-premises environments
  • Analyze large volumes of data directly from SQL Server and/or Apache Spark
  • Manage data stored in HDFS from SQL Server as if it were relational data
  • Implement advanced analytics solutions through machine learning and AI
  • Expose different data sources as a single logical source using data virtualization

Who This Book Is For
Data engineers, data scientists, data architects, and database administrators who want to employ data virtualization and big data analytics in their environments
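To make the data virtualization idea concrete, here is a minimal Python sketch that runs a Transact-SQL query joining a hypothetical HDFS-backed external table with an ordinary relational table; the server, credentials, table, and column names are all invented for the example:

    import pyodbc

    # Hypothetical connection to the SQL Server master instance of the cluster
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=bdc-master,31433;DATABASE=sales;UID=app_user;PWD=..."
    )

    # web_clickstreams_hdfs is assumed to be an external table over HDFS;
    # dbo.customers is a regular relational table in the same database.
    query = """
        SELECT TOP 10 c.customer_name, COUNT(*) AS clicks
        FROM dbo.web_clickstreams_hdfs AS w
        JOIN dbo.customers AS c ON c.customer_id = w.customer_id
        GROUP BY c.customer_name
        ORDER BY clicks DESC;
    """

    for row in conn.cursor().execute(query):
        print(row.customer_name, row.clicks)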

data data-engineering storage-repositories data-lake AI/ML Analytics BI Big Data Cloud Computing Data Analytics Data Lake HDFS Kubernetes Linux Spark SQL Data Streaming

Get a head start on learning one of SQL Server 2019's latest and most impactful features: Big Data Clusters, which combine large volumes of non-relational data for analysis along with data stored relationally inside a SQL Server database. This book provides a first look at Big Data Clusters based upon SQL Server 2019 Release Candidate 1. Start now and get a jump on your competition in learning this important new feature. Big Data Clusters is a feature set covering data virtualization, distributed computing, and relational databases, and provides a complete AI platform across the entire cluster environment. This book shows you how to deploy, manage, and use Big Data Clusters. For example, you will learn how to combine data stored on the HDFS file system together with data stored inside the SQL Server instances that make up the Big Data Cluster. Filled with clear examples and use cases, this book provides everything necessary to get started working with Big Data Clusters in SQL Server 2019 using Release Candidate 1. You will learn about the architectural foundations that are made up of Kubernetes, Spark, HDFS, and SQL Server on Linux. You then are shown how to configure and deploy Big Data Clusters in on-premises environments or in the cloud. Next, you are taught about querying. You will learn to write queries in Transact-SQL, taking advantage of skills you have honed for years, and with those queries you will be able to examine and analyze data from a wide variety of sources such as Apache Spark. Through the theoretical foundation provided in this book and easy-to-follow example scripts and notebooks, you will be ready to use and unveil the full potential of SQL Server 2019: combining different types of data spread across widely disparate sources into a single view that is useful for business intelligence and machine learning analysis.

What You Will Learn
  • Install, manage, and troubleshoot Big Data Clusters in cloud or on-premises environments
  • Analyze large volumes of data directly from SQL Server and/or Apache Spark
  • Manage data stored in HDFS from SQL Server as if it were relational data
  • Implement advanced analytics solutions through machine learning and AI
  • Expose different data sources as a single logical source using data virtualization

Who This Book Is For
Data engineers, data scientists, data architects, and database administrators who want to employ data virtualization and big data analytics in their environment

data data-engineering SQL AI/ML Analytics BI Big Data Cloud Computing Data Analytics HDFS Kubernetes Linux RDBMS Spark