There are very few people like Stephen Brobst, a legendary tech CTO and "certified data geek." Stephen shares his incredible journey, from his early days in computational physics and building real-time trading systems on Wall Street to becoming the CTO of Teradata and now Ab Initio Software. Stephen provides a masterclass on the evolution of data architecture, tracing the macro trends from early decision support systems to "active data warehousing" and the rise of AI/ML (formerly known as data mining). He dives deep into why metadata-driven architecture is critical for the future and how AI, large language models, and real-time sensor technology will fundamentally reshape industries and eliminate the dashboard as we know it. We also chat about something way cooler: Stephen's three passions of travel, music, and teaching. He reveals his personal rule, kept since 1993, of never staying in the same city for more than five consecutive days, and how he manages a life of constant motion. From his early days DJing punk rock and seeing the Sex Pistols' last concert to his minimalist travel philosophy and ever-growing bucket list, Stephen offers a unique perspective on living a life rich with experience over material possessions. Finally, he offers invaluable advice for the next generation on navigating careers in an AI-driven world and living life to the fullest.
As companies scale and become more successful, new horizons open, but with them come unexpected challenges. The influx of revenue and expansion of operations often reveal hidden complexities that can hinder efficiency and inflate costs. In this tricky situation, data teams can find themselves entangled in a web of obstacles that slow down their ability to innovate and respond to ever-changing business needs. Enter cloud analytics: a transformative solution that promises to break down barriers and unleash potential. By migrating analytics to the cloud, organizations can navigate the growing pains of success, cutting costs, enhancing flexibility, and empowering data teams to work with agility and precision. John Knieriemen is the Regional Business Lead for North America at Exasol, the market-leading high-performance analytics database. Prior to joining Exasol, he served as Vice President and General Manager at Teradata during an 11-year tenure with the company. John is responsible for strategically scaling Exasol’s North America business presence across industries and expanding the organization’s partner network. Solongo Erdenekhuyag is the former Customer Success and Data Strategy Leader at Exasol. Solongo is skilled in strategy, business development, program management, leadership, strategic partnerships, and management. In the episode, Richie, Solongo, and John cover the motivation for moving analytics to the cloud, economic triggers for migration, success stories from organizations that have migrated to the cloud, the challenges and potential roadblocks in migration, the importance of flexibility and open-mindedness, and much more.
Links from the Show: Exasol, Amazon S3, Azure Blob Storage, Google Cloud Storage, BigQuery, Amazon Redshift, Snowflake, [Course] Understanding Cloud Computing, [Course] AWS Cloud Concepts
Summary
All software systems are in a constant state of evolution. This makes it impossible to select a truly future-proof technology stack for your data platform, making an eventual migration inevitable. In this episode Gleb Mezhanskiy and Rob Goretsky share their experiences leading various data platform migrations, and the hard-won lessons that they learned so that you don't have to.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack. Modern data teams are using Hex to 10x their data impact. Hex combines a notebook-style UI with an interactive report builder. This allows data teams to both dive deep to find insights and then share their work in an easy-to-read format with the whole org. In Hex you can use SQL, Python, R, and no-code visualization together to explore, transform, and model data. Hex also has AI built directly into the workflow to help you generate, edit, explain and document your code. The best data teams in the world, such as the ones at Notion, AngelList, and Anthropic, use Hex for ad hoc investigations, creating machine learning models, and building operational dashboards for the rest of their company. Hex makes it easy for data analysts and data scientists to collaborate and produce work that has an impact. Make your data team unstoppable with Hex. Sign up today at dataengineeringpodcast.com/hex to get a 30-day free trial for your team! Your host is Tobias Macey and today I'm interviewing Gleb Mezhanskiy and Rob Goretsky about when and how to think about migrating your data stack.
Interview
Introduction
How did you get involved in the area of data management?
A migration can be anything from a minor task to a major undertaking. Can you start by describing what constitutes a migration for the purposes of this conversation?
Is it possible to completely avoid having to invest in a migration?
What are the signals that point to the need for a migration?
What are some of the sources of cost that need to be accounted for when considering a migration? (both in terms of doing one, and the costs of not doing one)
What are some signals that a migration is not the right solution for a perceived problem?
Once the decision has been made that a migration is necessary, what are the questions that the team should be asking to determine the technologies to move to and the sequencing of execution?
What are the preceding tasks that should be completed before starting the migration to ensure there is no breakage downstream of the changing component(s)?
What are some of the ways that a migration effort might fail?
What are the major pitfalls that teams need to be aware of as they work through a data platform migration?
What are the opportunities for automation during the migration process? (a sketch of one example follows this list)
What are the most interesting, innovative, or unexpected ways that you have seen teams approach a platform migration?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on data platform migrations?
What are some ways that the technologies and patterns that we use can be evolved to reduce the cost/impact/need for migrations?
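One of the automation opportunities worth picturing concretely is mechanical SQL dialect translation. As a hedged sketch (the query and the dialect pair are hypothetical examples, not something taken from the episode), the open source SQLGlot library mentioned in the links below can transpile queries from a legacy engine's dialect to the target warehouse's dialect:

```python
# A minimal sketch of automating SQL translation during a platform
# migration with sqlglot. The query and dialect pair are hypothetical
# examples; real migrations need validation of the translated SQL.
import sqlglot

legacy_query = """
SELECT user_id, SUBSTR(name, 1, 3) AS prefix
FROM orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created_at DESC) = 1
"""

# Parse the query as Teradata SQL and emit the Snowflake equivalent.
translated = sqlglot.transpile(legacy_query, read="teradata", write="snowflake")[0]
print(translated)
```

Pairing a transpiler like this with a row-level comparison tool such as data-diff (also linked below) is one way to automate both the rewrite and the verification halves of a migration.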
Contact Info
Gleb
LinkedIn @glebmm on Twitter
Rob
LinkedIn RobGoretsky on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.
Links
Datafold
Podcast Episode
Informatica Airflow Snowflake
Podcast Episode
Redshift Eventbrite Teradata BigQuery Trino EMR == Elastic Map-Reduce Shadow IT
Podcast Episode
Mode Analytics Looker Sunk Cost Fallacy data-diff
Podcast Episode
SQLGlot Dagster dbt
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
Hex: 
Hex is a collaborative workspace for data science and analytics. A single place for teams to explore, transform, and visualize data into beautiful interactive reports. Use SQL, Python, R, no-code and AI to find and share insights across your organization. Empower everyone in an organization to make an impact with data. Sign up today at [dataengineeringpodcast.com/hex](https://www.dataengineeringpodcast.com/hex) and get 30 days free!
Rudderstack:
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack
Support Data Engineering Podcast
Summary
Business intelligence has gone through many generational shifts, but each generation has largely maintained the same workflow. Data analysts create reports that the business uses to understand and direct its operations, but the process is very labor and time intensive. The team at Omni have taken a new approach by automatically building models based on the queries that are executed. In this episode Chris Merrick shares how they manage integration and automation around the modeling layer and how it improves the organizational experience of business intelligence.
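Omni's internals are not public, so as a loose, hedged illustration of the general idea of deriving models from executed queries, one can parse a warehouse's query history and record which columns each query touches on which tables. Everything here (the queries, the helper logic) is a hypothetical toy, not Omni's actual implementation:

```python
# A toy sketch of query-driven modeling: parse executed queries and
# accumulate which columns are used on which tables. This is NOT
# Omni's implementation, just an illustration of the concept.
from collections import defaultdict

import sqlglot
from sqlglot import exp

query_log = [  # hypothetical entries from a warehouse query history
    "SELECT customer_id, SUM(amount) AS total FROM payments GROUP BY customer_id",
    "SELECT p.customer_id, c.region FROM payments p JOIN customers c ON p.customer_id = c.id",
]

model = defaultdict(set)
for sql in query_log:
    parsed = sqlglot.parse_one(sql)
    # Map aliases back to real table names.
    tables = {t.alias_or_name: t.name for t in parsed.find_all(exp.Table)}
    for column in parsed.find_all(exp.Column):
        owner = tables.get(column.table) or next(iter(tables.values()))
        model[owner].add(column.name)

for table, columns in sorted(model.items()):
    print(table, sorted(columns))
```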
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. Truly leveraging and benefiting from streaming data is hard - the data stack is costly, difficult to use, and still has limitations. Materialize breaks down those barriers with a true cloud-native streaming database - not simply a database that connects to streaming systems. With a PostgreSQL-compatible interface, you can now work with real-time data using ANSI SQL, including the ability to perform multi-way complex joins, which support stream-to-stream, stream-to-table, table-to-table, and more, all in standard SQL. Go to dataengineeringpodcast.com/materialize today and sign up for early access to get started. If you like what you see and want to help make it better, they're hiring across all functions! Your host is Tobias Macey and today I'm interviewing Chris Merrick about the Omni Analytics platform and how they are adding automatic data modeling to your business intelligence.
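Because Materialize presents a PostgreSQL-compatible interface, the workflow described above can be sketched with an ordinary Postgres driver. This is a hedged sketch rather than official sample code: the connection parameters and the orders/customers sources are hypothetical (6875 is the port Materialize has conventionally used):

```python
# A hedged sketch of working with Materialize through its
# PostgreSQL-compatible interface. Connection details and the
# orders/customers sources are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=6875, user="materialize", dbname="materialize"
)
conn.autocommit = True

with conn.cursor() as cur:
    # An incrementally maintained view over streaming inputs, written
    # as a plain ANSI SQL stream-to-stream join.
    cur.execute("""
        CREATE MATERIALIZED VIEW order_enrichment AS
        SELECT o.order_id, o.amount, c.segment
        FROM orders o
        JOIN customers c ON o.customer_id = c.customer_id
    """)
    # Querying the view returns fresh results without a batch rebuild.
    cur.execute("SELECT * FROM order_enrichment LIMIT 5")
    print(cur.fetchall())
```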
Interview
Introduction How did you get involved in the area of data management? Can you describe what Omni Analytics is and the story behind it?
What are the core goals that you are trying to achieve with building Omni?
Business intelligence has gone through many evolutions. What are the unique capabilities that Omni Analytics offers over other players in the market?
What are the technical and organizational anti-patterns that typically grow up around BI systems?
What are the elements that contribute to BI being such a difficult product to use effectively in an organization?
Can you describe how you have implemented the Omni platform?
How have the design/scope/goals of the product changed since you first started working on it?
What does the workflow for a team using Omni look like?
What are some of the developments in the broader ecosystem that have made your work possible?
What are some of the positive and negative inspirations that you have drawn from the experience that you and your team-mates have gained in previous businesses?
What are the most interesting, innovative, or unexpected ways that you have seen Omni used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Omni?
When is Omni the wrong choice?
What do you have planned for the future of Omni?
Contact Info
LinkedIn @cmerrick on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.
Links
Omni Analytics Stitch RJ Metrics Looker
Podcast Episode
Singer dbt
Podcast Episode
Teradata Fivetran Apache Arrow
Podcast Episode
DuckDB
Podcast Episode
BigQuery Snowflake
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
Materialize: 
Looking for the simplest way to get the freshest data possible to your teams? Because let's face it: if real-time were easy, everyone would be using it. Look no further than Materialize, the streaming database you already know how to use.
Materialize’s PostgreSQL-compatible interface lets users leverage the tools they already use, with unsurpassed simplicity enabled by full ANSI SQL support. Delivered as a single platform with the separation of storage and compute, strict-serializability, active replication, horizontal scalability and workload isolation — Materialize is now the fastest way to build products with streaming data, drastically reducing the time, expertise, cost and maintenance traditionally associated with implementation of real-time features.
Sign up now for early access to Materialize and get started with the power of streaming data with the same simplicity and low implementation cost as batch cloud data warehouses.
Go to materialize.com
Support Data Engineering Podcast
Summary
Data governance is a practice that requires a high degree of flexibility and collaboration at the organizational and technical levels. The growing prominence of cloud and hybrid environments in data management adds additional stress to an already complex endeavor. Privacera is an enterprise grade solution for cloud and hybrid data governance built on top of the robust and battle tested Apache Ranger project. In this episode Balaji Ganesan shares how his experiences building and maintaining Ranger in previous roles helped him understand the needs of organizations and engineers as they define and evolve their data governance policies and practices.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show! This episode is brought to you by Acryl Data, the company behind DataHub, the leading developer-friendly data catalog for the modern data stack. Open Source DataHub is running in production at several companies like Peloton, Optum, Udemy, Zynga and others. Acryl Data provides DataHub as an easy to consume SaaS product which has been adopted by several companies. Sign up for the SaaS product at dataengineeringpodcast.com/acryl. RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder. The most important piece of any data project is the data itself, which is why it is critical that your data source is high quality. PostHog is your all-in-one product analytics suite including product analysis, user funnels, feature flags, experimentation, and it’s open source so you can host it yourself or let them do it for you! You have full control over your data and their plugin system lets you integrate with all of your other data tools, including data warehouses and SaaS platforms. Give it a try today with their generous free tier at dataengineeringpodcast.com/posthog. Your host is Tobias Macey and today I’m interviewing Balaji Ganesan about his work at Privacera and his view on the state of data governance, access control, and security in the cloud.
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Privacera is and the story behind it?
What is your working definition of "data governance" and how does that influence your product focus and priorities?
What are some of the lessons that you learned from your work on Apache Ranger that helped with your efforts at Privacera? (a sketch of a Ranger-style policy follows this question list)
How would you characterize your position in the market for data governance/data security tools?
What are the unique constraints and challenges that come into play when managing data in cloud platforms?
Can you explain how the Privacera platform is architected?
How have the design and goals of the system changed or evolved since you started working on it?
What is the workflow for an operator integrating Privacera into a data platform?
How do you provide feedback to users about the level of coverage for discovered data assets?
How does Privacera fit into the workflow of the different personas working with data?
What are some of the security and privacy controls that Privacera introduces?
How do you mitigate the potential for anyone to bypass Privacera’s controls by interacting directly with the underlying systems?
What are the most interesting, innovative, or unexpected ways that you have seen Privacera used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Privacera?
When is Privacera the wrong choice?
What do you have planned for the future of Privacera?
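Since Privacera builds on Apache Ranger, it helps to see the shape of a Ranger policy: plain JSON managed through a REST API. The sketch below is hedged; the endpoint path follows Ranger's public v2 API, but the host, credentials, service name, and resources are hypothetical, and exact fields vary across Ranger versions:

```python
# A hedged sketch of creating an Apache Ranger-style access policy via
# Ranger's REST API. Host, credentials, service, and resource names are
# hypothetical; field details vary by Ranger version.
import requests

policy = {
    "service": "hive_prod",           # hypothetical Ranger service instance
    "name": "analysts-read-finance",
    "resources": {
        "database": {"values": ["finance"]},
        "table": {"values": ["*"]},
        "column": {"values": ["*"]},
    },
    "policyItems": [
        {
            "groups": ["analysts"],
            "accesses": [{"type": "select", "isAllowed": True}],
        }
    ],
}

resp = requests.post(
    "https://ranger.example.com/service/public/v2/api/policy",
    auth=("admin", "changeme"),
    json=policy,
)
resp.raise_for_status()
print(resp.json())
```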
Contact Info
LinkedIn @Balaji_Blog on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.init, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
Privacera Hadoop Hortonworks Apache Ranger Oracle Teradata Presto/Trino Starburst
Podcast Episode
Ahana
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
Acryl: 
The modern data stack needs a reimagined metadata management platform. Acryl Data’s vision is to bring clarity to your data through its next generation multi-cloud metadata management platform. Founded by the leaders that created projects like LinkedIn DataHub and Airbnb Dataportal, Acryl Data enables delightful search and discovery, data observability, and federated governance across data ecosystems. Sign up for the SaaS product today at dataengineeringpodcast.com/acryl
Support Data Engineering Podcast
Summary
The data warehouse has become the focal point of the modern data platform. With increased usage of data across businesses, and a diversity of locations and environments where data needs to be managed, the warehouse engine needs to be fast and easy to manage. Yellowbrick is a data warehouse platform that was built from the ground up for speed, and can work across clouds and all the way to the edge. In this episode CTO Mark Cusack explains how the engine is architected, the benefits that speed and predictable pricing have for the organization, and how you can simplify your platform by putting the warehouse close to the data, instead of the other way around.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show! Firebolt is the fastest cloud data warehouse. Visit dataengineeringpodcast.com/firebolt to get started. The first 25 visitors will receive a Firebolt t-shirt. Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription. Your host is Tobias Macey and today I’m interviewing Mark Cusack about Yellowbrick, a data warehouse designed for distributed clouds.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what Yellowbrick is and some of the story behind it?
What does the term "distributed cloud" signify and what challenges are associated with it?
How would you characterize Yellowbrick’s position in the database/DWH market?
How is Yellowbrick architected?
How have the goals and design of the platform changed or evolved over time?
How does Yellowbrick maintain visibility across the different data locations that it is responsible for?
What capabilities does it offer for being able to join across the disparate "clouds"?
What are some data modeling strategies that users should consider when designing their deployment of Yellowbrick?
What are some of the capabilities of Yellowbrick that you find most useful or technically interesting?
For someone who is adopting Yellowbrick, what is the process for getting it integrated into their data systems?
What are the most underutilized, overlooked, or misunderstood features of Yellowbrick?
What are the most interesting, innovative, or unexpected ways that you have seen Yellowbrick used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on and with Yellowbrick?
When is Yellowbrick the wrong choice?
What do you have planned for the future of the product?
Contact Info
LinkedIn @markcusack on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Yellowbrick Teradata Rainstor Distributed Cloud Hybrid Cloud SwimOS
Podcast Episode
The rise of machine learning has placed a premium on finding new sources of data to fuel predictive models. But acquiring external data is often expensive, and many data sets are rife with errors and difficult to combine with internal data. That’s going to change in 2020.
To help us understand the scale, scope, and dimensions of emerging data marketplaces is Justin Langseth, one of the visionaries in our space. Justin is a VP at Snowflake responsible for the Snowflake Data Exchange. Prior to Snowflake, Justin was the technical founder and CEO/CTO of 5 data technology startups: Claraview (sold to Teradata), Zoomdata (sold to Logi Analytics), Clarabridge, Strategy.com, and Augaroo. He has 25 years of experience in business intelligence, natural language processing, big data, and AI.
Summary
The scale and complexity of the systems that we build to satisfy business requirements is increasing as the available tools become more sophisticated. In order to bridge the gap between legacy infrastructure and evolving use cases it is necessary to create a unifying set of components. In this episode Dipti Borkar explains how the emerging category of data orchestration tools fills this need, some of the existing projects that fit in this space, and some of the ways that they can work together to simplify projects such as cloud migration and hybrid cloud environments. It is always useful to get a broad view of new trends in the industry and this was a helpful perspective on the need to provide mechanisms to decouple physical storage from computing capacity.
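The decoupling of storage from compute that the episode highlights can be made concrete: a query engine addresses data through the orchestration layer's namespace rather than a specific storage system. Here is a hedged sketch with PySpark and Alluxio (the master hostname and dataset path are hypothetical; 19998 is Alluxio's conventional master port, and the Alluxio client jar must be on Spark's classpath in a real deployment):

```python
# A hedged sketch of reading data through a data orchestration layer:
# Spark addresses an alluxio:// URI instead of S3 or HDFS directly, so
# the underlying storage can move without changing this code.
# The hostname and path are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orchestrated-read").getOrCreate()

# Whether the bytes live in S3, HDFS, or an on-premises store, Alluxio
# handles data placement and caching behind this single namespace.
df = spark.read.parquet("alluxio://alluxio-master:19998/datasets/events/")
df.groupBy("event_type").count().show()
```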
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show! This week’s episode is also sponsored by Datacoral, an AWS-native, serverless, data infrastructure that installs in your VPC. Datacoral helps data engineers build and manage the flow of data pipelines without having to manage any infrastructure, meaning you can spend your time invested in data transformations and business needs, rather than pipeline maintenance. Raghu Murthy, founder and CEO of Datacoral, built data infrastructures at Yahoo! and Facebook, scaling from terabytes to petabytes of analytic data. He started Datacoral with the goal to make SQL the universal data programming language. Visit dataengineeringpodcast.com/datacoral today to find out more. You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, Corinium Global Intelligence, Alluxio, and Data Council. Upcoming events include the combined events of the Data Architecture Summit and Graphorum, the Data Orchestration Summit, and Data Council in NYC. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today. Your host is Tobias Macey and today I’m interviewing Dipti Borkar about data orchestration and how it helps in migrating data workloads to the cloud.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what you mean by the term "Data Orchestration"?
How does it compare to the concept of "Data Virtualization"?
What are some of the tools and platforms that fit under that umbrella?
What are some of the motivations for organizations to use the cloud for their data oriented workloads?
What are they giving up by using cloud resources in place of on-premises compute?
For businesses that have invested heavily in their own datacenters, what are some ways that they can begin to replicate some of the benefits of cloud environments?
What are some of the common patterns for cloud migration projects and what challenges do they present?
Do you have advice on useful metrics to track for determining project completion or success criteria?
How do businesses approach employee education for designing and implementing effective systems for achieving their migration goals?
Can you talk through some of the ways that different data orchestration tools can be composed together for a cloud migration effort?
What are some of the common pain points that organizations encounter when working on hybrid implementations?
What are some of the missing pieces in the data orchestration landscape?
Are there any efforts that you are aware of that are aiming to fill those gaps?
Where is the data orchestration market heading, and what are some industry trends that are driving it?
What projects are you most interested in or excited by?
For someone who wants to learn more about data orchestration and the benefits the technologies can provide, what are some resources that you would recommend?
Contact Info
LinkedIn @dborkar on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.init, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Alluxio
Podcast Episode
UC San Diego Couchbase Presto
Podcast Episode
Spark SQL Data Orchestration Data Virtualization PyTorch
Podcast.init Episode
Rook storage orchestration PySpark MinIO
Podcast Episode
Kubernetes Openstack Hadoop HDFS Parquet Files
Podcast Episode
ORC Files Hive Metastore Iceberg Table Format
Podcast Episode
Data Orchestration Summit Star Schema Snowflake Schema Data Warehouse Data Lake Teradata
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast
IoT has created a tidal wave that data-savvy organizations can turn into profitable business solutions. Most IoT data comes from sensors, which are now attached to almost every device imaginable, from factory floor machines and agricultural fields to your cell phone and toothbrush. But IoT is forcing companies to rethink their data architectures to ingest, process, and analyze streaming data in real time.
To help us understand the impact of IoT on data architectures, we invited Dan Graham to our show for a second time. Dan is a former product marketing manager at both IBM and Teradata, renowned for combining deep technical knowledge with industry marketing savvy. During his tenure at those companies, he was responsible for MPP data management systems, data warehouses, and data lakes, and most recently, the Internet of Things.
In this episode, Daniel Graham dissects the capabilities of data lakes and compares them to data warehouses. He talks about the primary use cases of data lakes and how they are vital for big data ecosystems. He then goes on to explain the role of data warehouses, which are still responsible for timely and accurate data but no longer play a central role. In the end, both Wayne Eckerson and Dan Graham settle on a common definition for modern data architectures.
Daniel Graham has more than 30 years in IT, consulting, research, and product marketing, with almost 30 years at leading database management companies. Dan was a Strategy Director in IBM’s Global BI Solutions division and General Manager of Teradata’s high-end server divisions. During his tenure as a product marketer, Dan has been responsible for MPP data management systems, data warehouses, and data lakes, and most recently, the Internet of Things and streaming systems.
In this podcast, @BillFranksGA talks about the ingredients of a successful analytics ecosystem. He shares his analytics journey and his perspective on how other businesses are engaging in data analytics practice. He also sheds some light on best practices that businesses could adopt to execute a successful data strategy.
Timeline: 0:28 Bill's journey. 4:00 Bill's journey as an analyst. 9:29 Maturity of the analytics market. 11:56 Business, IT, and data. 16:18 Introducing a centralized analytics practice in an enterprise. 19:50 Tips and strategies for chief data officers to deliver the goods. 26:07 What don't businesses get about data analytics? 29:40 Is the future aligned with data or analytics? 34:25 Importance for leadership to understand analytics. 36:35 The role of analytics professionals in the age of AI. 41:42 Upgrading analytics models. 47:50 How much should a business experiment with AI? 55:25 Evaluating blockchain. 59:50 Bill's success mantra. 1:05:25 Bill's favorite reads. 1:07:17 Key takeaway.
Podcast Link: https://futureofdata.org/billfranksga-on-the-ingredients-of-successful-analytics-ecosystem-futureofdata-podcast/
Bill's BIO: Bill Franks is Chief Analytics Officer for The International Institute For Analytics (IIA), where he provides perspective on trends in the analytics and big data space and helps clients understand how IIA can support their efforts to improve analytic performance. He also serves on the advisory boards of multiple university and professional analytic programs. He has held a range of executive positions in the analytics space in the past, including several years as Chief Analytics Officer for Teradata (NYSE: TDC).
Bill is the author of the book Taming The Big Data Tidal Wave (John Wiley & Sons). In the book, he applies his two decades of experience working with clients on large-scale analytics initiatives to outline what it takes to succeed in today’s world of big data and analytics. The book made Tom Peters' list of 2014 "Must Read" books and also the Top 10 Most Influential Translated Technology Books list from CSDN in China.
His focus has always been to help translate complex analytics into terms that business users can understand and to then help an organization implement the results effectively within their processes. His work has spanned clients in a variety of industries for companies ranging in size from Fortune 100 companies to small non-profit organizations.
He earned a Bachelor’s degree in Applied Statistics from Virginia Tech and a Master’s degree in Applied Statistics from North Carolina State University.
About #Podcast:
The FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.
Want to sponsor? Email us @ [email protected]
Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy
Summary
Most businesses end up with data in a myriad of places with varying levels of structure. This makes it difficult to gain insights from across departments, projects, or people. Presto is a distributed SQL engine that allows you to tie all of your information together without having to first aggregate it all into a data warehouse. Kamil Bajda-Pawlikowski co-founded Starburst Data to provide support and tooling for Presto, as well as contributing advanced features back to the project. In this episode he describes how Presto is architected, how you can use it for your analytics, and the work that he is doing at Starburst Data.
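The "query data where it lives" idea is easiest to see in a federated query: one SQL statement joining tables from two different catalogs. Below is a hedged sketch using the presto-python-client; the coordinator host, catalogs, schemas, and tables are all hypothetical:

```python
# A hedged sketch of a federated Presto query joining a Hive table with
# a PostgreSQL table in a single statement, via presto-python-client.
# Host, catalogs, schemas, and tables are hypothetical.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="web",
)
cur = conn.cursor()
cur.execute("""
    SELECT c.region, SUM(e.revenue) AS revenue
    FROM hive.web.events e
    JOIN postgresql.public.customers c
      ON e.customer_id = c.id
    GROUP BY c.region
""")
for row in cur.fetchall():
    print(row)
```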
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch. Your host is Tobias Macey and today I’m interviewing Kamil Bajda-Pawlikowski about Presto and his experiences with supporting it at Starburst Data.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what Presto is?
What are some of the common use cases and deployment patterns for Presto?
How does Presto compare to Drill or Impala?
What is it about Presto that led you to build a business around it?
What are some of the most challenging aspects of running and scaling Presto?
For someone who is using the Presto SQL interface, what are some of the considerations that they should keep in mind to avoid writing poorly performing queries?
How does Presto represent data for translating between its SQL dialect and the API of the data stores that it interfaces with?
What are some cases in which Presto is not the right solution?
What types of support have you found to be the most commonly requested?
What are some of the types of tooling or improvements that you have made to Presto in your distribution?
What are some of the notable changes that your team has contributed upstream to Presto?
Contact Info
Website E-mail Twitter – @starburstdata Twitter – @prestodb
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Starburst Data Presto Hadapt Hadoop Hive Teradata PrestoCare Cost Based Optimizer ANSI SQL Spill To Disk Tempto Benchto Geospatial Functions Cassandra Accumulo Kafka Redis PostgreSQL
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast
In this podcast, Justin Borgman talks about his journey of starting a data science startup, doing an exit, and jumping into another one. The session is filled with insights for leaders looking for entrepreneurial wisdom to get on a data-driven journey.
Timeline: 0:28 Justin's journey. 3:22 Taking the plunge to start a new company. 5:49 Perception vs. reality of starting a data warehouse company. 8:15 Bringing in something new to the IT legacy. 13:20 Getting your first few customers. 16:16 Right moment for a data warehouse company to look for a new venture. 18:20 Right person to have as a co-founder. 20:29 Advantages of going seed vs. series A. 22:13 When is a company ready for seeding or series A? 24:40 Who's a good adviser? 26:35 Exiting Teradata. 28:54 Teradata to starting a new company. 31:24 Excitement of starting something from scratch. 32:24 What is Starburst? 37:15 Presto, a great engine for cloud platforms. 40:30 How can a company get started with Presto? 41:50 Health of enterprise data. 44:15 Where does Presto not fit in? 45:19 Future of enterprise data. 46:36 Drawing parallels between proprietary space and open source space. 49:02 Does aligning with open source give a company a better chance at seed funding? 51:44 Justin's ingredients for success. 54:05 Justin's favorite reads. 55:01 Key takeaways.
Paul's Recommended Read: The Outsiders Paperback – S. E. Hinton amzn.to/2Ai84Gl
Podcast Link: https://futureofdata.org/running-a-data-science-startup-one-decision-at-a-time-futureofdata-podcast/
Justin's BIO: Justin has spent the better part of a decade in senior executive roles building new businesses in the data warehousing and analytics space. Before co-founding Starburst, Justin was Vice President and General Manager at Teradata (NYSE: TDC), where he was responsible for the company’s portfolio of Hadoop products. Prior to joining Teradata, Justin was co-founder and CEO of Hadapt, the pioneering "SQL-on-Hadoop" company that transformed Hadoop from a file system into an analytic database accessible to anyone with a BI tool. Teradata acquired Hadapt in 2014.
Justin earned a BS in Computer Science from the University of Massachusetts at Amherst and an MBA from the Yale School of Management.
About #Podcast:
The FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners together to discuss their journeys in creating the data-driven future.
Want to sponsor? Email us @ [email protected]
Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy
In this episode, Wayne Eckerson and Lenin Gali discuss the past and future of the cloud and big data.
Gali is a data analytics practitioner who has always been on the leading edge of where business and technology intersect. He was one of the first to move data analytics to the cloud as BI director at ShareThis, a social media-based services provider. While at Ubisoft, he was instrumental in defining an enterprise analytics strategy and developing a data platform, built on Hadoop and Teradata, that brought game and business data together, enabling thousands of data users to build better games and services. He is now spearheading the creation of a Hadoop-based data analytics platform at Quotient, a digital marketing technology firm in the retail industry.