We’re definitely in AI hype mode at the moment largely driven by the evolution in generative AI. However, it seems like this progress is not necessarily driving lots of data-related innovation inside organisations that are not AI-first tech companies. A recent survey published by Randy Bean’s company, NewVantage Partners, confirms this. Here are the main findings compared to when the survey was last run 4 years ago: 59.5% of executives say their companies use data for business innovation – the same as four years ago.A drop from 47.6% to 40.8% of executives say their companies compete using data and analytics.Fewer executives (39.5% down from 46.9%) say their companies manage data as a business asset.Only 23.9% of executives now say their companies are data-driven, compared to 31% before.Just 20.6% of executives report having a data culture in their companies, down 27% from 28.3% in 2019.These numbers spell regression, not progress. Why is it so hard to become a truly data-driven organisation? In this episode, Randy and I explore the challenges facing Chief Data & Analytics Officers and their teams, including: How organizations can create an environment that encourages innovation in data-driven initiativesExamples of organisations doing data well, and whyHow to set clear expectations around the responsibilities of CDAOsThe most important qualities for someone in the CDAO role, and much more.Randy on LinkedIn: https://www.linkedin.com/in/randybeannvp/ Randy's website and book, 'Fail Fast, Learn Faster': https://www.randybeandata.com/book
talk-data.com
Topic
GenAI
Generative AI
1517
tagged
Activity Trend
Top Events
Throughout history, small businesses have consistently played a pivotal role in the global economy, serving as its foundational backbone. As we navigate the digital age, the emergence of large corporations and rapid technological advancements present new challenges. Now, more than ever, it's imperative for small businesses to adapt, embracing a data-driven approach to remain competitive and sustainable. In this evolving landscape, we need champions dedicated to guiding these businesses, ensuring they harness the full potential of modern tools and insights to ensure a fair and varied marketplace of goods and services for all. Dr Kendra Vant, Executive General Manager of Data & AI Products at Xero, is an industry leader in building data-driven products that harness AI and machine learning to solve complex problems for the small-business economy. Working across Australia, Asia and the US, Kendra has led data and technology teams at companies such as Seek, Telstra, Deloitte and now Xero where she leads the company's global efforts using emerging practices and technologies to help small businesses and their advisors benefit from the power of data and insights. Starting with doctoral research in experimental quantum physics at MIT and a stint building quantum computers at Los Alamos National Laboratory, Kendra has made a career of solving hard problems and pushing the boundaries of what's possible. In the episode, Kendra and Richie delve into the transformative impact of data science on small businesses, use-cases of data science for small businesses, how Xero has supported numerous small businesses with data science. They also cover the integration of AI in product development, the unexpected depth of data in seemingly low-tech sectors, the pivotal role of software platforms in data analysis and much more. Links Mentioned in The Show: Xero Analyzing Business Data in SQL Financial Modeling in Spreadsheets Implementing AI Solutions in Business Generative AI Concepts
Summary
Generative AI has unlocked a massive opportunity for content creation. There is also an unfulfilled need for experts to be able to share their knowledge and build communities. Illumidesk was built to take advantage of this intersection. In this episode Greg Werner explains how they are using generative AI as an assistive tool for creating educational material, as well as building a data driven experience for learners.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free! Your host is Tobias Macey and today I'm interviewing Greg Werner about building IllumiDesk, a data-driven and AI powered online learning platform
Interview
Introduction How did you get involved in the area of data management? Can you describe what Illumidesk is and the story behind it? What are the challenges that educators and content creators face in developing and maintaining digital course materials for their target audiences? How are you leaning on data integrations and AI to reduce the initial time investment required to deliver courseware? What are the opportunities for collecting and collating learner interactions with the course materials to provide feedback to the instructors? What are some of the ways that you are incorporating pedagogical strategies into the measurement and evaluation methods that you use for reports? What are the different categories of insights that you need to provide across the different stakeholders/personas who are interacting with the platform and learning content? Can you describe how you have architected the Illumidesk platform? How have the design and goals shifted since you first began working on it? What are the strategies that you have used to allow for evolution and adaptation of the system in order to keep pace with the ecosystem of generative AI capabilities? What are the failure modes of the content generation that you need to account for? What are the most interesting, innovative, or unexpected ways that you have seen Illumidesk us
Send us a text "Insights from Luke Arrigoni, CEO of Arricor, on AI Innovations and Business Impact" Description: Welcome to an enlightening episode of our podcast as we dive into the fascinating world of Generative AI, Vision AI, and Natural Language Processing (NLP) with the esteemed Luke Arrigoni. In this Part 1 interview, Luke, Chief Executive Officer at Arricor, takes us on a journey through AI's transformative potential. Discover the minds behind AI advancements as we delve into topics like facial recognition for privacy, Arricor's mission, Prompt engineering, and the myriad use cases that these technologies unlock. Gain valuable insights into Large Language Models (LLMs) and the role of prompt engineering in optimizing AI's capabilities. Luke Arrigoni shares his expertise on avoiding AI hallucinations, the unique differentiation of Arricor, and the remarkable business impact of Generative AI. Join us to explore the present and future of AI through this engaging discussion. Don't miss this opportunity to gain insights from a visionary in the AI field. Connect with Luke Arrigoni on LinkedIn [https://www.linkedin.com/in/lukearrigoni/] and learn more about Arricor's work on their website [http://arricor.com/]. Stay tuned for Part 2 as we continue our conversation on AI's groundbreaking potential.
01:40 Meet Luke Arrigoni04:13 Facial recognition for privacy06:21 Arricor mission08:39 More on LLMs10:29 Prompt engineering13:30 Use cases16:30 Arricor differentiation20:59 Avoiding hallucinations26:13 Business impact of GenAILinkedIn: https://www.linkedin.com/in/lukearrigoni/ Website: http://arricor.com/ Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Talk on Generative AI Pitch Deck Roast.
Responsible AI can help data leaders comply with the fast-evolving regulatory environment of data and artificial intelligence. Published at: https://www.eckerson.com/articles/the-opportunity-and-risk-of-generative-ai-part-ii-how-responsible-ai-assists-compliance
US frontier history had races, risks, and rewards. Generative AI's future will follow a similar path. Published at: https://www.eckerson.com/articles/enterprise-data-and-the-taming-of-the-generative-ai-frontier
Generative AI is here to stay—even in the 8 months since the public release of ChatGPT, there are an abundance of AI tools to help make us more productive at work and ease the stress of planning and execution of our daily lives among other things. Already, many of us are wondering what is to come in the next 8 months, the next year, and the next decade of AI’s evolution. In the grand scheme of things, this really is just the beginning. But what should we expect in this Cambrian explosion of technology? What are the use cases being developed behind the scenes? What do we need to be mindful of when training the next generations of AI? Can we combine multiple LLMs to get better results? Bal Heroor is CEO and Principal at Mactores and has led over 150 business transformations driven by analytics and cutting-edge technology. His team at Mactores are researching and building AI, AR/VR, and Quantum computing solutions for business to gain a competitive advantage. Bal is also the Co-Founder of Aedeon—the first hyper-scale Marketplace for Data Analytics and AI talent. In the episode, Richie and Bal explore common use cases for generative AI, how it's evolving to solve enterprise problems, challenges of data governance and the importance of explainable AI, the challenges of tracking the lineage of AI and data in large organizations. Bal also touches on the shift from general-purpose generative AI models to more specialized models, fascinating use cases in the manufacturing industry, what to consider when adopting AI solutions in business, and much more. Links mentioned in the show: PulsarTrifactaAWS Clarify[Course] Introduction to ChatGPT[Course] Implementing AI Solutions in Business[Course] Generative AI Concepts
Summary
As businesses increasingly invest in technology and talent focused on data engineering and analytics, they want to know whether they are benefiting. So how do you calculate the return on investment for data? In this episode Barr Moses and Anna Filippova explore that question and provide useful exercises to start answering that in your company.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack Your host is Tobias Macey and today I'm interviewing Barr Moses and Anna Filippova about how and whether to measure the ROI of your data team
Interview
Introduction How did you get involved in the area of data management? What are the typical motivations for measuring and tracking the ROI for a data team?
Who is responsible for collecting that information? How is that information used and by whom?
What are some of the downsides/risks of tracking this metric? (law of unintended consequences) What are the inputs to the number that constitutes the "investment"? infrastructure, payroll of employees on team, time spent working with other teams? What are the aspects of data work and its impact on the business that complicate a calculation of the "return" that is generated? How should teams think about measuring data team ROI? What are some concrete ROI metrics data teams can use?
What level of detail is useful? What dimensions should be used for segmenting the calculations?
How can visibility into this ROI metric be best used to inform the priorities and project scopes of the team? With so many tools in the modern data stack today, what is the role of technology in helping drive or measure this impact? How do your respective solutions, Monte Carlo and dbt, help teams measure and scale data value? With generative AI on the upswing of the hype cycle, what are the impacts that you see it having on data teams?
What are the unrealistic expectations that it will produce? How can it speed up time to delivery?
What are the most interesting, innovative, or unexpected ways that you have seen data team ROI calculated and/or used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on measuring the ROI of data teams? When is measuring ROI the wrong choice?
Contact Info
Barr
Anna
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected]) with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers
Links
Monte Carlo
Podcast Episode
dbt
Podcast Episode
JetBlue Snowflake Con Presentation Generative AI Large Language Models
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
Rudderstack: 
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guessw
Send us a text He's BACK! Roger Premo, General Manager, Corporate Strategy and Ventures Development at IBM. How the world has changed in a short year. Generative AI and more! 02:29 Meet Roger Premo Take 205:52 A Changing World08:18 Generative AI12:48 Both Sides of the Story14:22 Hybrid Cloud and AI20:50 IBM's watsonx25:53 What Have We Learned?27:46 Enterprise Models29:59 Hugging Face31:03 IBM's Differentiation32:23 The 2 min Bar Pitch35:57 Three Questions42:21 An Intentional Hybrid Cloud Architecture 46:40 Responsible AILinkedin: https://www.linkedin.com/in/ropremo/ Website: https://www.ibm.com/watsonx Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Data and AI are revolutionizing industries and transforming businesses at an unprecedented pace. These advancements pave the way for groundbreaking outcomes such as fresh revenue streams, optimized working capital, and captivating, personalized customer experiences.
Join Hugh Burgin, Luke Pritchard and Dan Diasio as we explore a range of real-world examples of AI and data-driven transformation opportunities being powered by Databricks, including business value realized and technical solutions implemented. We will focus on how to integrate and leverage business insights, a diverse network of cloud-based solutions and Databricks to unleash new business value opportunities. By highlighting real-world use cases we will discuss:
- Examples of how Manufacturing, Retail, Financial Services and other sectors are using Databricks services to scale AI, gain insights that matter and secure their data
- The ways data monetization are changing how companies view data and incentivizing better data management
- Examples of Generative AI and LLMs changing how businesses operate, how their customers engage, and what you can do about it
Talk by: Hugh Burgin and Luke Pritchard
Here’s more to explore: State of Data + AI Report: https://dbricks.co/44i2HBp The Data Team's Guide to the Databricks Lakehouse Platform: https://dbricks.co/46nuDpI
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksin
Generative AI is under the spotlight and it has diverse applications but there are also many considerations when deploying a generative model at scale. This presentation will make a deep dive into multiple architectures and talk about optimization hacks for the sophisticated data pipelines that generative AI requires. The session will cover: - How to create and prepare a dataset for training at scale in single GPU and multi GPU environments. - How to optimize your data pipeline for training and inference in production considering the complex deep learning models that need to be run. - Tradeoff between higher quality outputs versus training time and resources and processing times.
Agenda: - Basic concepts in Generative AI: GAN networks and Stable Diffusion - Training and inference data pipelines - Industry applications and use cases
Talk by: Paula Martinez and Rodrigo Beceiro
Here’s more to explore: LLM Compact Guide: https://dbricks.co/43WuQyb Big Book of MLOps: https://dbricks.co/3r0Pqiz
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Digital twins are the foundation for the Factory of the Future providing the data foundation to answer questions like what is happening and what can be done about it. It requires combining data across the business — from R&D, manufacturing, supply chain, and operations — and with partners, that then is used with AI to make decisions.
This session presents a case study of a digital twin implemented for warehouse controllers designed to alleviate internal decisions and recommendations for next trips, that replaces tribal knowledge and gut-decision making. We share how we use a domain knowledge graph to drive a data-driven approach that combines warehouse data, with simulations, AI models, and domain knowledge. Warehouse controllers use a dispatch control board that provides a list of orders by dispatch date and time, destination, carrier, assignments to the trailers and to the order and dock number. We show how this new semantic layer works with large language models to make it easier to answer questions on what trip to activate and trailer to choose; based on assets available, products in inventory, and what's coming out of manufacturing.
Talk by: Teresa Tung
Here’s more to explore: A New Approach to Data Sharing: https://dbricks.co/44eUnT1
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Generative AI shows incredible promise for enterprise applications. The explosion of generative AI can be attributed to the convergence of several factors. Most significant is that the barrier to entry has dropped for AI application developers through customizable prompts (few-shot learning), enabling laypeople to generate high-quality content. The flexibility of models like ChatGPT and DALLE-2 have sparked curiosity and creativity about new applications that they can support. The number of tools will continue to grow in a manner similar to how AWS fueled app development. But excitement must be tampered by concerns about new risks imposed to business and society. Increased capability and adoption also increase risk exposure. As organizations explore creative boundaries of generative models, measures to reduce risk must be put in place. However, the enormous size of the input space and inherent complexity make this task more challenging than traditional ML models.
In this session, we summarize the new risks introduced by the new class of generative foundation models through several examples, and compare how these risks relate to the risks of mainstream discriminative models. Steps can be taken to reduce the operational risk, bias and fairness issues, and privacy and security of systems that leverage LLM for automation. We’ll explore model hallucinations, output evaluation, output bias, prompt injection, data leakage, stochasticity, and more. We’ll discuss some of the larger issues common to LLMs and show how to test for them. A comprehensive, test-based approach to generative AI development will help instill model integrity by proactively mitigating failure and the associated business risk.
Talk by: Yaron Singer
Here’s more to explore: LLM Compact Guide: https://dbricks.co/43WuQyb Big Book of MLOps: https://dbricks.co/3r0Pqiz
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Language models are incredible engineering breakthroughs but require auditing and risk management before productization. These systems raise concerns about toxicity, transparency and reproducibility, intellectual property licensing and ownership, disinformation and misinformation, supply chains, and more. How can your organization leverage these new tools without taking on undue or unknown risks? While language models and associated risk management are in their infancy, a small number of best practices in governance and risk are starting to emerge. If you have a language model use case in mind, want to understand your risks, and do something about them, this presentation is for you! We'll be covering the following:
- Studying past incidents in the AI Incident Database and using this information to guide debugging.
- Adhering to authoritative standards, like the NIST AI Risk Management Framework.
- Finding and fixing common data quality issues.
- Applying general public tools and benchmarks as appropriate (e.g., BBQ, Winogender, TruthfulQA).
- Binarizing specific tasks and debugging them using traditional model assessment and bias testing.
- Engineering adversarial prompts with strategies like counterfactual reasoning, role-playing, and content exhaustion.
- Conducting random attacks: random sequences of attacks, prompts, or other tests that may evoke unexpected responses.
- Countering prompt injection attacks, auditing for backdoors and data poisoning, ensuring endpoints are protected with authentication and throttling, and analyzing third-party dependencies.
- Engaging stakeholders to help find problems system designers and developers cannot see.
- Everyone knows that generative AI is going to be huge. Don't let inadequate risk management ruin the party at your organization!
Talk by: Patrick Hall
Here’s more to explore: LLM Compact Guide: https://dbricks.co/43WuQyb Big Book of MLOps: https://dbricks.co/3r0Pqiz
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Create a custom chat-based solution to query and summarize your data within your VPC using Dolly 2.0 and Amazon SageMaker. In this talk, you will learn about Dolly 2.0, Databricks, state-of-the-art, open source, LLM, available for commercial and Amazon SageMaker, AWS’s premiere toolkit for ML builders. You will learn how to deploy and customize models to reference your data using retrieval augmented generation (RAG) and additional fine tuning techniques…all using open-source components available today.
Talk by: Venkat Viswanathan and Karl Albertsen
Here’s more to explore: LLM Compact Guide: https://dbricks.co/43WuQyb Big Book of MLOps: https://dbricks.co/3r0Pqiz
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Large Language Models (LLMs) such as ChatGPT have revolutionized AI applications, offering unprecedented potential for complex real-world scenarios. However, fully harnessing this potential comes with unique challenges such as model brittleness and the need for consistent, accurate outputs. These hurdles become more pronounced when developing production-grade applications that utilize LLMs as a software abstraction layer.
In this session, we will tackle these challenges head-on. We introduce Guardrails AI, an open-source platform designed to mitigate risks and enhance the safety and efficiency of LLMs. We will delve into specific techniques and advanced control mechanisms that enable developers to optimize model performance effectively. Furthermore, we will explore how implementing these safeguards can significantly improve the development process of LLMs, ultimately leading to safer, more reliable, and robust real-world AI applications
Talk by: Shreya Rajpal
Here’s more to explore: LLM Compact Guide: https://dbricks.co/43WuQyb Big Book of MLOps: https://dbricks.co/3r0Pqiz
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Join this panel discussion that unpacks the technical challenges surrounding biases found in data, and poses potential solutions and strategies for the future including Generative AI. This session is a showcase highlighting diverse perspectives in the data and AI industry.
Talk by: Adi Polak, Gavita Regunath, Christina Taylor, and Layla Yang
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Join Nathan as he explores Stability AI's latest advancements in open source generative AI, focused on building the multimodal information infrastructure of the future. Get an insider's perspective on our recently released model, Stable Diffusion XL v0.9, and Stability's behind-the-scenes efforts. Discover how advancements in open-source generative AI models enable efficient development of multimodal AI systems, and learn how researchers worldwide are customizing these models and leveraging unique datasets.
Nathan will discuss the dynamic interplay between open-source models and enterprise AI adoption, resulting in efficient, tailored solutions. At Stability AI, our focus is on unlocking the inherent value and competitive advantage found in unique data and AI ownership. Combining open-source models with proprietary data assets creates a strategic advantage for enterprises.
Despite the growing AI trend, the need for human judgment and creativity remains pivotal. At Stability AI, our goal is to augment rather than replace human capabilities using AI collaboration and co-creation. Join us in shaping a collaborative generative future.
Talk by: Nathan Lile
Here’s more to explore: State of Data + AI Report: https://dbricks.co/44i2HBp Databricks named a Leader in 2022 Gartner® Magic QuadrantTM CDBMS: https://dbricks.co/3phw20d
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc
Patients are increasingly taking an active role in managing their healthcare costs and are more likely to choose providers and treatments based on cost considerations. Learn how technology can help build cost-efficient care models across the healthcare continuum, delivering higher quality care while improving patient experience and operational efficiency.
Talk by: Janine Pratt
Here’s more to explore: LLM Compact Guide: https://dbricks.co/43WuQyb Big Book of MLOps: https://dbricks.co/3r0Pqiz
Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc