At Zillow, we have increased both the volume and quality of our dashboards by adopting a modern SDLC with version control and CI/CD. In the past three months, we have released 32 production-grade dashboards and shared them securely across the organization while cutting error rates in half over that span. In this session, we will provide an overview of how we use Databricks Asset Bundles and GitLab CI/CD to create performant dashboards that can be confidently used for mission-critical operations. As a concrete example, we'll explore how Zillow's Data Platform team used this approach to automate our on-call support analysis, combining our dashboard development strategy with Databricks LLM offerings to create a comprehensive view that provides actionable performance metrics alongside AI-generated insights and action items from the hundreds of requests that make up our support workload.
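As a rough sketch of the kind of CI deploy step this strategy implies (the script and target name are assumptions; a GitLab CI job would invoke something like this with the Databricks CLI installed and workspace credentials injected by the runner):

```python
# deploy_dashboards.py - illustrative CI step for deploying a Databricks
# asset bundle. Assumes the Databricks CLI is on PATH and that
# DATABRICKS_HOST / DATABRICKS_TOKEN are provided by the CI runner.
import subprocess
import sys

TARGET = "prod"  # hypothetical bundle target defined in databricks.yml


def run(cmd: list[str]) -> None:
    """Run a CLI command, echoing it and failing the pipeline on error."""
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    try:
        # Static check: confirm the bundle config resolves for this target.
        run(["databricks", "bundle", "validate", "-t", TARGET])
        # Deploy dashboards, jobs, and other bundle resources to the workspace.
        run(["databricks", "bundle", "deploy", "-t", TARGET])
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```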
This session is repeated. Managing data and AI workloads in Databricks can be complex. Databricks Asset Bundles (DABs) simplify this by enabling declarative, Git-driven deployment workflows for notebooks, jobs, Lakeflow Declarative Pipelines, dashboards, ML models and more. Join the DABs team for a deep dive and learn about:
The basics: understanding Databricks Asset Bundles; declare, define and deploy assets, follow best practices, use templates and manage dependencies
CI/CD & governance: automate deployments with GitHub Actions/Azure DevOps, manage dev vs. prod differences, and ensure reproducibility
What's new and what's coming up: AI/BI Dashboard support, Databricks Apps support, a Pythonic interface and workspace-based deployment
If you're a data engineer, ML practitioner or platform architect, this talk will provide practical insights to improve reliability, efficiency and compliance in your Databricks workflows.
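For orientation, here is a minimal sketch of the declarative shape a bundle takes, generated from Python so it can be templated; the bundle name, targets, and notebook path are illustrative assumptions, not an authoritative schema reference:

```python
# Sketch: emit a minimal databricks.yml showing the declarative shape of a
# Databricks Asset Bundle (names and paths below are illustrative).
import pathlib
import textwrap

BUNDLE_YAML = textwrap.dedent("""\
    bundle:
      name: support-analytics      # bundle name

    targets:
      dev:
        mode: development          # prefixes resources, relaxes permissions
        default: true
      prod:
        mode: production           # locked-down target for CI/CD deploys

    resources:
      jobs:
        refresh_metrics:
          name: refresh-metrics
          tasks:
            - task_key: refresh
              notebook_task:
                notebook_path: ./src/refresh.ipynb
    """)

pathlib.Path("databricks.yml").write_text(BUNDLE_YAML)
print("Wrote databricks.yml; next step: `databricks bundle validate`.")
```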
“I don’t want to spend time filtering through another dashboard — I just need an answer now.” We’ve all experienced the frustration of wading through dashboards, yearning for immediate answers. Traditional reports and visualizations, though essential, often complicate the process for decision-makers. The digital enterprise demands a shift towards conversational, natural language interactions with data. At KPMG, AI|BI Genie is reimagining our approach by allowing users to inquire about data just as they would consult a knowledgeable colleague, obtaining precise and actionable insights instantly. Discover how the KPMG Contract to Cash team leverages AI|BI Genie to enhance data engagement, drive insights and foster business growth. Join us to see AI|BI Genie in action and learn how you can transform your data interaction paradigm.
Databricks announced two new features in 2024: AI/BI Dashboards and AI/BI Genie. Dashboards is a redesigned dashboarding experience for your regular reporting needs, while Genie provides a natural language experience for your last-mile analytics. In this session, Databricks Solutions Architect and content creator Youssef Mrini will present alongside Databricks MVP and content creator Josue A. Bogran on how you can get the most value from these tools for your organization. Content covered includes:
Setup necessary, including Unity Catalog, permissions and compute
Building out a dashboard with AI/BI Dashboards
Creating and training an AI/BI Genie workspace to reliably deliver answers
When to use Dashboards, when to use Genie, and when to use other tools such as PBI, Tableau, Sigma, ChatGPT, etc.
Fluff-free, full of practical tips, and geared to help you deliver immediate impact with these new Databricks capabilities.
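As a hedged sketch of the setup step, the following uses the Databricks Python SDK to apply the kind of Unity Catalog grants that Genie and Dashboards enforce at query time; the warehouse ID, catalog/schema names, and group are assumptions:

```python
# Sketch: Unity Catalog grants a Genie space or AI/BI dashboard typically
# needs before consumers can query through it.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # reads host/token from env or ~/.databrickscfg

WAREHOUSE_ID = "1234567890abcdef"  # hypothetical serverless SQL warehouse

# Consumers need USE CATALOG / USE SCHEMA / SELECT on the underlying data;
# Genie and Dashboards check these permissions at query time.
for stmt in [
    "GRANT USE CATALOG ON CATALOG main TO `data-consumers`",
    "GRANT USE SCHEMA ON SCHEMA main.sales TO `data-consumers`",
    "GRANT SELECT ON SCHEMA main.sales TO `data-consumers`",
]:
    w.statement_execution.execute_statement(
        statement=stmt, warehouse_id=WAREHOUSE_ID
    )
```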
In this course, you'll learn how to orchestrate data pipelines with Lakeflow Jobs (previously Databricks Workflows) and schedule dashboard updates to keep analytics up-to-date. We'll cover topics like getting started with Lakeflow Jobs, how to use Databricks SQL for on-demand queries, and how to configure and schedule dashboards and alerts to reflect updates to production data pipelines.
Pre-requisites:
Beginner familiarity with the Databricks Data Intelligence Platform (selecting clusters, navigating the Workspace, executing notebooks)
Cloud computing concepts (virtual machines, object storage, etc.)
Production experience working with data warehouses and data lakes
Intermediate experience with basic SQL concepts (select, filter, group by, join, etc.)
Beginner programming experience with Python (syntax, conditions, loops, functions)
Beginner programming experience with the Spark DataFrame API (configure DataFrameReader and DataFrameWriter to read and write data, express query transformations using DataFrame methods and Column expressions, etc.)
Labs: No
Certification Path: Databricks Certified Data Engineer Associate
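A minimal sketch of the orchestration idea using the Databricks Python SDK; the job name, notebook path, and cron expression are illustrative (the course itself works through Lakeflow Jobs interactively), and serverless jobs compute is assumed (otherwise add a cluster spec to the task):

```python
# Sketch: create a scheduled job whose run keeps downstream dashboards fresh.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

job = w.jobs.create(
    name="nightly-pipeline-refresh",  # illustrative name
    tasks=[
        jobs.Task(
            task_key="refresh_gold_tables",
            notebook_task=jobs.NotebookTask(
                notebook_path="/Repos/team/etl/refresh"  # assumed path
            ),
        ),
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 5 * * ?",  # 05:00 daily
        timezone_id="UTC",
    ),
)
print(f"Created job {job.job_id}")
```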
D&A value is not possible without data storytelling, which offers a better way to communicate findings than BI reporting or data science notebooks alone. Join this session to learn the fundamentals of data storytelling and how to bridge the gap between data science practitioners and decision makers. It also discusses how to craft the best data stories and how to scale data storytelling for the future in the GenAI landscape.
Summary In this episode of the Data Engineering Podcast Mai-Lan Tomsen Bukovec, Vice President of Technology at AWS, talks about the evolution of Amazon S3 and its profound impact on data architecture. From her work on compute systems to leading the development and operations of S3, Mai-Lan shares insights on how S3 has become a foundational element in modern data systems, enabling scalable and cost-effective data lakes since its launch alongside Hadoop in 2006. She discusses the architectural patterns enabled by S3, the importance of metadata in data management, and how S3's evolution has been driven by customer needs, leading to innovations like strong consistency and S3 Tables.
Announcements: Hello and welcome to the Data Engineering Podcast, the show about modern data management. Your host is Tobias Macey and today I'm interviewing Mai-Lan Tomsen Bukovec about the evolution of S3 and how it has transformed data architecture.
Interview:
Introduction
How did you get involved in the area of data management?
Most everyone listening knows what S3 is, but can you start by giving a quick summary of what roles it plays in the data ecosystem?
What are the major generational epochs in S3, with a particular focus on analytical/ML data systems?
The first major driver of analytical usage for S3 was the Hadoop ecosystem. What are the other elements of the data ecosystem that helped shape the product direction of S3?
Data storage and retrieval have been core primitives in computing since its inception. What are the characteristics of S3 and all of its copycats that led to such a difference in architectural patterns vs. other shared data technologies (e.g. NFS, Gluster, Ceph, Samba, etc.)?
How does the unified pool of storage that is exemplified by S3 help to blur the boundaries between application data, analytical data, and ML/AI data?
What are some of the default patterns for storage and retrieval across those three buckets that can lead to anti-patterns which add friction when trying to unify those use cases?
The age of AI is leading to a massive potential for unlocking unstructured data, for which S3 has been a massive dumping ground over the years.
How is that changing the ways that your customers think about the value of the assets that they have been hoarding for so long?
What new architectural patterns is that generating?
What are the most interesting, innovative, or unexpected ways that you have seen S3 used for analytical/ML/AI applications?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on S3?
When is S3 the wrong choice?
What do you have planned for the future of S3?
Contact Info: LinkedIn
Parting Question: From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links: AWS S3, Kinesis, Kafka, SQS, EMR, Drupal, WordPress, Netflix Blog on S3 as a Source of Truth, Hadoop, MapReduce, NASA JPL, FINRA (Financial Industry Regulatory Authority), S3 Object Versioning, S3 Cross-Region Replication, S3 Tables, Iceberg, Parquet, AWS KMS, Iceberg REST, DuckDB, NFS (Network File System), Samba, GlusterFS, Ceph, MinIO, S3 Metadata, Photoshop Generative Fill, Adobe Firefly, TurboTax AI Assistant, AWS Access Analyzer, Data Products, S3 Access Points, AWS Nova Models, LexisNexis Protege, S3 Intelligent-Tiering, S3 Principal Engineering Tenets
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
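For readers newer to the pattern the episode describes, here is a minimal boto3 sketch of S3 as a data-lake primitive: immutable files under a shared prefix acting as a table for many engines. Bucket and key names are illustrative:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake"  # hypothetical bucket

# Writers land immutable data files under a table-like prefix...
s3.put_object(
    Bucket=BUCKET,
    Key="warehouse/events/date=2024-06-01/part-000.parquet",
    Body=b"...parquet bytes...",
)

# ...and any engine (Spark, DuckDB, Athena, etc.) can discover them by
# listing the prefix, with no shared POSIX filesystem required.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="warehouse/events/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```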
Summary In this episode of the Data Engineering Podcast Chakravarthy Kotaru talks about scaling data operations through standardized platform offerings. From his roots as an Oracle developer to leading the data platform at a major online travel company, Chakravarthy shares insights on managing diverse database technologies and providing databases as a service to streamline operations. He explains how his team has transitioned from DevOps to a platform engineering approach, centralizing expertise and automating repetitive tasks with AWS Service Catalog. Join them as they discuss the challenges of migrating legacy systems, integrating AI and ML for automation, and the importance of organizational buy-in in driving data platform success.
Announcements: Hello and welcome to the Data Engineering Podcast, the show about modern data management. Your host is Tobias Macey and today I'm interviewing Chakri Kotaru about scaling successful data operations through standardized platform offerings.
Interview:
Introduction
How did you get involved in the area of data management?
Can you start by outlining the different ways that you have seen teams you work with fail due to lack of structure and opinionated design?
Why NoSQL?
Pairing different styles of NoSQL for different problems
Useful patterns for each NoSQL style (document, column family, graph, etc.)
Challenges in platform automation and scaling edge cases
What challenges do you anticipate from the new pressures of AI applications?
What are the most interesting, innovative, or unexpected ways that you have seen platform engineering practices applied to data systems?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on data platform engineering?
When is NoSQL the wrong choice?
What do you have planned for the future of platform principles for enabling data teams/data applications?
Contact Info: LinkedIn
Parting Question: From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links: Riak, DynamoDB, SQL Server, Cassandra, ScyllaDB, CAP Theorem, Terraform, AWS Service Catalog, Blog Post
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
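A hedged sketch of the "database as a service" pattern discussed in the episode: a platform team publishes a vetted database product in AWS Service Catalog and application teams provision instances self-service. All IDs and parameters below are illustrative assumptions:

```python
import boto3

sc = boto3.client("servicecatalog")

sc.provision_product(
    ProductId="prod-abc123",             # hypothetical catalog product
    ProvisioningArtifactId="pa-def456",  # hypothetical product version
    ProvisionedProductName="orders-dynamodb-dev",
    ProvisioningParameters=[
        # Parameters the platform team exposed in the product template.
        {"Key": "TableName", "Value": "orders"},
        {"Key": "BillingMode", "Value": "PAY_PER_REQUEST"},
    ],
)
```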
This book takes an advanced dive into using Tableau for professional data visualization and analytics. You will learn techniques for crafting highly interactive dashboards, optimizing their performance, and leveraging Tableau's APIs and server features. With a focus on real-world applications, this resource serves as a guide for professionals aiming to master advanced Tableau skills.
What this book will help me do:
Build robust, high-performing Tableau data models for enterprise analytics.
Use advanced geospatial techniques to create dynamic, data-rich mapping visualizations.
Leverage APIs and developer tools to integrate Tableau with other platforms.
Optimize Tableau dashboards for performance and interactivity.
Apply best practices for content management and data security in Tableau implementations.
Author(s): Pablo Sáenz de Tejada and Daria Kirilenko are seasoned Tableau experts with vast professional experience implementing advanced analytics solutions. Pablo specializes in enterprise-level dashboard design and has trained numerous professionals globally. Daria focuses on integrating Tableau into complex data ecosystems, bringing a practical and innovative approach to analytics.
Who is it for? This book is tailored for professionals such as Tableau developers, data analysts, and BI consultants who already have a foundational knowledge of Tableau. It is ideal for those seeking to deepen their skills and gain expertise in tackling advanced data visualization challenges. Whether you work in corporate analytics or enjoy exploring data in your own projects, this book will enhance your Tableau proficiency.
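As a small taste of the API integration the book covers, here is a sketch using the tableauserverclient library to list workbooks over Tableau's REST API; the server URL, site name, and token values are assumptions:

```python
import tableauserverclient as TSC

# Personal access tokens are the usual way to authenticate automation.
auth = TSC.PersonalAccessTokenAuth(
    "ci-token",          # hypothetical token name
    "token-secret",      # hypothetical token value
    site_id="analytics", # hypothetical site
)
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    workbooks, pagination = server.workbooks.get()
    for wb in workbooks:
        print(wb.name, "in project", wb.project_name)
```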
Today, I’m talking with Natalia Andreyeva from Infor about AI / ML product management and its application to supply chain software. Natalia is a Senior Director of Product Management for the Nexus AI / ML Solution Portfolio, and she walks us through what is new, and what is not, about designing AI capabilities in B2B software. We also got into why user experience is so critical in data-driven products, and the role of design in ensuring AI produces value. During our chat, Natalia hit on the importance of really nailing down customer needs through solid discovery and the role of product leaders in this non-technical work.
We also tackled some of the trickier aspects of designing for GenAI, digital assistants, the need to keep efforts strongly grounded in value creation for customers, and how even the best ML-based predictive analytics need to consider UX and the amount of evidence that customers need to believe the recommendations. During this episode, Natalia emphasizes a huge key to her work’s success: keeping customers and users in the loop throughout the product development lifecycle.
Highlights/ Skip to
What Natalia does as a Senior Director of Product Management for Infor Nexus (1:13)
Who are the people using Infor Nexus products, and what do they accomplish when using them? (2:51)
Breaking down who makes up Natalia's team (4:05)
What role does AI play in Natalia's work? (5:32)
How do designers work with Natalia's team? (7:17)
The problem that had Natalia rethink the discovery process when working with AI and machine learning applications (10:28)
Why Natalia isn't worried about competitors catching up to her team's design work (14:24)
How Natalia works with Infor Nexus customers to help them understand the solutions her team is building (23:07)
The biggest challenges Natalia faces with building GenAI and machine learning products (27:25)
Natalia's four steps to success in building AI products and capabilities (34:53)
Where you can find more from Natalia (36:49)
Quotes from Today’s Episode
“I always launch discovery with customers, in the presence of the UX specialist [our designer]. We do the interviews together, and [regardless of who is facilitating] the goal is to understand the pain points of our customers by listening to how they do their jobs today. We do a series of these interviews and we distill them into the customer needs; the problems we need to really address for the customers. And then we start thinking about how to [address these needs]. Data products are a particular challenge because it’s not always that you can easily create a UX that would allow users to realize the value they’re searching for from the solution. And even if we can deliver it, consuming that is typically a challenge, too. So, this is where [design becomes really important]. [...] What I found through the years of experience is that it’s very difficult to explain to people around you what it is that you’re building when you’re dealing with a data-driven product. Is it a dashboard? Is it a workboard? They understand the word data, but that’s not what we are creating. We are creating the actual experience for the outcome that data will deliver to them indirectly, right? So, that’s typically how we work.” - Natalia Andreyeva (7:47)
“[When doing discovery for products without AI], we already have ideas for what we want to get out. We know that there is a space in the market for those solutions to come to life. We just have to understand where. For AI-driven products, it’s not only about [the user’s] understanding of the problem or the design, it is also about understanding if the data exists and if it’s feasible to build the solution to address [the user’s] problem. [Data] feasibility is an extremely important piece because it will drive the UX as well.” - Natalia Andreyeva (10:50)
“When [the team] discussed the problem, it sounded like a simple calculation that needed to be created [for users]. In reality, it was an entire process of thinking of multiple people in the chain [of command] to understand whether or not a medical product was safe to be consumed. That’s the outcome we needed to produce, and when we finally did, we actually celebrated with our customers and with our designers. It was one of the most difficult things that we had to design. So why did this problem actually get solved, and why were we the ones who solved it? It’s because we took the time to understand the current user experience through [our customer] interviews. We connected the dots and translated it all into a visual solution. We would never be able to do that without the proper UX and design in place for the data.” - Natalia Andreyeva (13:16)
“Everybody is pressured to come up with a strategy [for AI] or explain how AI is being incorporated into their solutions and platform, but it is still essential for all of my peers in product management to focus on the value [we’re] creating for customers. You cannot bypass discovery. Discovery is the essential portion where you have to spend time with your customers, champions, advisors, and their leads, but especially users who are doing this [supply chain] job every single day—so we understand where the pain point really is for them, we solve that pain, and we solve it with our design team as a partner, so that solution can surface value.” - Natalia Andreyeva (22:08)
“GenAI is a new field and new technology. It’s evolving quickly, and nobody really knows how to properly adapt or drive the adoption of AI solutions.
The speed of innovation [in the AI field] is a challenge for everybody. People who work on the frontlines (i.e. product, engineering teams) have to stay way ahead of the market. Meanwhile, customers who are going to be using these [AI] solutions are not going to trust the [initial] outcomes. It’s going to take some time for people to become comfortable with them. But it doesn’t mean that your solution is bad or didn’t find the market fit. It’s just not time for your [solution] yet. Educating our users on the value of the solution is also part of that challenge, and [designers] have to be very careful that solutions are accessible. Users do not adopt intimidating solutions.” - Natalia Andreyeva (27:41)
“First, discovery—where we search for the problems. From my experience, [discovery] works better if you’re very structured. I always provide [a customer] with an outline of what needs to happen so it’s not a secret. Then, do the prototyping phase and keep the customer engaged so they can see the quick outcomes of those prototypes. This is where you also have to really include the feasibility of the data if you’re building an AI solution, right? [Prototyping] can be short or long, but you need to keep the customer engaged throughout that phase so they see quick outcomes. Keep on validating this conceptually, you know, on the napkin, in Figma, it doesn’t really matter; you have to keep them engaged. Then, once you validate it works and the customer likes it, then build. Don’t really go into the deep development work until you know [all of this!] When you do build, create a beta solution. It only has to work so much to prove the value. Then, run the pilot, and if it’s successful, build the MVP, then launch. It’s simple, but it is a lot of work, and you have to keep your customers really engaged through all of those phases. If something doesn’t work [along the way], try to pivot early enough so you still have a viable product at the end.” - Natalia Andreyeva (34:53)
Links
Natalia's LinkedIn
Even cloud security experts can struggle with application and data compliance. Discover how Security Command Center is simplifying cloud compliance by bringing together configuration, monitoring, and evidence generation. Learn how a unified cloud compliance solution can make it easy to apply prebuilt and custom compliance frameworks, check compliance status with a centralized dashboard, and automatically generate audit reports to prove compliance.
In this hands-on lab, you'll learn how to build a powerful business intelligence (BI) dashboard using Looker Studio and BigQuery. Discover how to upload and query data, create reports and datasets, and run scheduled queries to uncover valuable insights from large service usage logs. With your dashboard, you'll gain the ability to identify trends, optimize operations, and make data-driven decisions to improve efficiency and service quality.
If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!
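A minimal sketch of the lab's core loop using the BigQuery Python client: aggregate the usage logs into a summary table that a Looker Studio data source can then chart. Project, dataset, and table names are illustrative assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

sql = """
    SELECT service, DATE(usage_time) AS day, COUNT(*) AS calls
    FROM `my-project.ops.service_usage_logs`
    GROUP BY service, day
    ORDER BY day
"""

# Materialize the aggregation so the dashboard reads a small summary table;
# a scheduled query can rerun this on a cadence to keep it fresh.
dest = bigquery.TableReference.from_string("my-project.ops.daily_usage_summary")
job_config = bigquery.QueryJobConfig(
    destination=dest, write_disposition="WRITE_TRUNCATE"
)
client.query(sql, job_config=job_config).result()  # wait for completion
```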
A challenge I frequently hear about from subscribers to my insights mailing list is how to design B2B data products for multiple user types with differing needs. From dashboards to custom apps and commercial analytics / AI products, data product teams often struggle to create a single solution that meets the diverse needs of technical and business users in B2B settings. If you're encountering this issue, you're not alone!
In this episode, I share my advice for tackling this challenge including the gift of saying "no.” What are the patterns you should be looking out for in your customer research? How can you choose what to focus on with limited resources? What are the design choices you should avoid when trying to build these products? I’m hoping by the end of this episode, you’ll have some strategies to help reduce the size of this challenge—particularly if you lack a dedicated UX team to help you sort through your various user/stakeholder demands.
Highlights/ Skip to
The importance of proper user research and clustering “jobs to be done” around business importance vs. task frequency—ignoring the rest until your solution can show measurable value (4:29)
What “level” of skill to design for, and why “as simple as possible” isn’t what I generally recommend (13:44)
When it may be advantageous to use role or feature-based permissions to hide/show/change certain aspects, UI elements, or features (19:50)
Leveraging AI and LLMs in-product to allow learning about the user and progressive disclosure and customization of UIs (26:44)
Leveraging the “old” solution of rapid prototyping—which is now faster than ever with AI, and can accelerate learning (capturing user feedback) (31:14)
5 things I do not recommend doing when trying to satisfy multiple user types in your B2B AI or analytics product (34:14)
Quotes from Today’s Episode
If you're not talking to your users and stakeholders sufficiently, you're going to have a really tough time building a successful data product for one user – let alone for multiple personas. Listen for repeating patterns in what your users are trying to achieve (tasks they are doing). Focus on the jobs and tasks they do most frequently or the ones that bring the most value to their business. Forget about the rest until you've proven that your solution delivers real value for those core needs. It's more about understanding the problems and needs, not just the solutions. The solutions tend to be easier to design when the problem space is well understood. Users often suggest solutions, but it's our job to focus on the core problem we're trying to solve; simply entering inbound requests verbatim into JIRA and then “eating away” at the list is not usually a reliable strategy. (5:52)
I generally recommend not going for “easy as possible” at the cost of shallow value. Instead, you’re going to want to design for some “mid-level” ability, understanding that this may make early user experiences with the product more difficult. Why? Oversimplification can mislead because data is complex, problems are multivariate, and data isn't always ideal. There are also “n” number of “not-first” impressions users will have with your product. This also means there is only one “first impression” they have. As such, the idea conceptually is to design an amazing experience for the “n” experiences, but not to the point that users never realize value and give up on the product. While I'd prefer no friction, technical products sometimes will have to have a little friction up front; however, don't use this as an excuse for poor design. This is hard to get right, even when you have design resources, and it’s why UX design matters: thinking this through ends up determining, in part, whether users obtain the promise of value you made to them. (14:21)
As an alternative to rigid role and feature-based permissions in B2B data products, you might consider leveraging AI and/or LLMs in your UI as a means of simplifying and customizing the UI for particular users. This approach allows users to interrogate the product about the UI, customize the UI, and even lets the product learn over time about the user’s questions (jobs to be done) such that it becomes organically customized to their needs. This is in contrast to the rigid buckets that role and permission-based customization present. However, as discussed in my previous episode (164 - “The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge”), designing effective AI features and capabilities can also make things worse due to the probabilistic nature of the responses GenAI produces. As such, this approach may benefit from a UX designer or researcher familiar with designing data products. Understanding what “quality” means to the user, and how to measure it, is especially critical if you’re going to leverage AI and LLMs to make the product UX better. (20:13)
The old solution of rapid prototyping is even more valuable now—because it’s possible to prototype even faster. However, prototyping is not just about learning if your solution is on track. Whether you use AI or pencil and paper, prototyping early in the product development process should be framed as a “prop to get users talking.” In other words, it is a prop to facilitate problem and need clarity—not solution clarity.
Its purpose is to spark conversation and determine if you're solving the right problem. As you iterate, your need to continually validate the problem should shrink, which will present itself in the form of consistent feedback you hear from end users. This is the point where you know you can focus on the design of the solution. Innovation happens when we learn, so the goal is to increase your learning velocity. (31:35)
Have you ever been caught in the trap of prioritizing feature requests based on volume? I get it. It's tempting to give the people what they think they want. For example, imagine ten users clamoring for control over specific parameters in your machine learning forecasting model. You could give them that control, thinking you're solving the problem because, hey, that's what they asked for! But did you stop to ask why they want that control? The reasons behind those requests could be wildly different. By simply handing over the keys to all the model parameters, you might be creating a whole new set of problems. Users now face a “usability tax,” trying to figure out which parameters to lock and which to let float. The key takeaway? Focus on how frequently the same problems occur across your users, not just how frequently a given tactic or “solution” method (i.e. “model” or “dashboard” or “feature”) appears in a stakeholder or user request. Remember, problems are often disguised as solutions. We've got to dig deeper and uncover the real needs, not just address the symptoms. (36:19)
"What if you have a beautiful SLO Dashboard and it's all red and no one cares?" The mission of Site Reliability Engineering (SRE) is to ensure the reliability, scalability, and performance of critical systems - a goal best achieved through strong collaboration with teams across the organization. We are exploring how SRE is embedded in an organization, how it interfaces with application owners, senior management, business stakeholders and external software/hardware vendors. In all these cases the success of SRE's mission hinges on the effectiveness of the relationships.
We will use plenty of examples of what worked, what failed in our past work and why. Additionally, we will address funding challenges that can unexpectedly impact even well-established SRE teams.
Mike has built his career around driving performance and efficiency, specializing in optimizing the security, availability and speed of cloud applications, data and infrastructure. He developed the first currency program trading system for the Toronto Stock Exchange at UBS and later refined his expertise in optimizing trading systems and migrating core data to the cloud at Morgan Stanley and Transamerica. He is a founding member of the NYZH consultancy, focusing on AI and SRE. Based in Denver, Colorado, Mike is a pilot who enjoys desert racing and cycling, sharing adventures with his wife and three children.
🌟 Session Overview 🌟
Session Name: Automating Web Workflows with LLMs
Speaker: Jiri Moravcik
Session Description: This talk will delve into Apify's approach to automation and its workflow with Large Language Models (LLMs), highlighting the seamless integration and strategic use of AI in data extraction from the web. Participants will gain insight into how Apify serves clients like Intercom and Rocket Money by employing cutting-edge techniques to scrape and structure online data. The presentation will showcase specific case studies involving ChatGPT, illustrating the methodologies and tools utilized to transform raw data into valuable insights for clients.
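A hedged sketch of the general technique the session covers (not Apify's actual pipeline): fetch a page, then ask an LLM to return structured fields as JSON. The model name, prompt, and field schema are illustrative:

```python
import json

import requests
from openai import OpenAI

# Fetch raw HTML for a single page (URL is a placeholder).
html = requests.get("https://example.com/product/123", timeout=30).text

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": 'Extract {"title": str, "price": str} from the page '
                       "HTML. Reply with JSON only.",
        },
        # Truncate the page so it fits in the model's context window.
        {"role": "user", "content": html[:20000]},
    ],
)
record = json.loads(resp.choices[0].message.content)
print(record)
```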
🚀 About Big Data and RPA 2024 🚀
Unlock the future of innovation and automation at Big Data & RPA Conference Europe 2024! 🌟 This unique event brings together the brightest minds in big data, machine learning, AI, and robotic process automation to explore cutting-edge solutions and trends shaping the tech landscape. Perfect for data engineers, analysts, RPA developers, and business leaders, the conference offers dual insights into the power of data-driven strategies and intelligent automation. 🚀 Gain practical knowledge on topics like hyperautomation, AI integration, advanced analytics, and workflow optimization while networking with global experts. Don’t miss this exclusive opportunity to expand your expertise and revolutionize your processes—all from the comfort of your home! 📊🤖✨
📅 Yearly Conferences: Curious about the evolution of Big Data & RPA? Check out our archive of past Big Data & RPA sessions. Watch the strategies and technologies evolve in our videos! 🚀
🔗 Find Other Years' Videos:
2023 Big Data Conference Europe https://www.youtube.com/playlist?list=PLqYhGsQ9iSEpb_oyAsg67PhpbrkCC59_g
2022 Big Data Conference Europe Online https://www.youtube.com/playlist?list=PLqYhGsQ9iSEryAOjmvdiaXTfjCg5j3HhT
2021 Big Data Conference Europe Online https://www.youtube.com/playlist?list=PLqYhGsQ9iSEqHwbQoWEXEJALFLKVDRXiP
💡 Stay Connected & Updated 💡
Don’t miss out on any updates or upcoming event information from Big Data & RPA Conference Europe. Follow us on our social media channels and visit our website to stay in the loop!
🌐 Website: https://bigdataconference.eu/, https://rpaconference.eu/
👤 Facebook: https://www.facebook.com/bigdataconf, https://www.facebook.com/rpaeurope/
🐦 Twitter: @BigDataConfEU, @europe_rpa
🔗 LinkedIn: https://www.linkedin.com/company/73234449/admin/dashboard/, https://www.linkedin.com/company/75464753/admin/dashboard/
🎥 YouTube: http://www.youtube.com/@DATAMINERLT
🌟 Session Overview 🌟
Session Name: Can AI Face Your (Potential) Customers? Lessons Learned with Multilingual Enterprises
Speaker: Alius Petraska
Session Description: Vytenis will share their experience on when AI can directly interact with customers and when human intervention is still necessary. Their solution helps sales and customer service (CS) agents be more effective on calls. This provides a unique perspective on understanding when AI excels and when it falls short, still requiring clients to call or send emails.
🌟 Session Overview 🌟
Session Name: Panel Discussion | Integrating AI with RPA: Streamlining Operations and Innovating Business Processes
Speakers: Ana Marija Barisic, Andrzej Kinatowski, Pedram Birounvand, Swanand Rao, Alius Petraska
Session Description: This panel discussion will explore the powerful synergy between Artificial Intelligence (AI) and Robotic Process Automation (RPA). Panelists will discuss how combining these technologies can transform and streamline business operations, driving efficiency, accuracy, and innovation. The session will cover real-world use cases, strategies for successful integration, and the potential challenges organizations might face.
🌟 Session Overview 🌟
Session Name: From Quick Wins to Revolutionising Productivity & CX with GenAI: Utilising Real-time and Open Source AI with Semantic Search
Speaker: Anna Semjen
Session Description: Join this session to discover how DataStax Astra DB can boost productivity, enable rapid deployment of GenAI applications, and transform customer experience. We'll showcase an advanced semantic search use case, demonstrating how to vectorize entire videos with specific timestamps and use natural language processing to find precise moments from events like the Olympics. Learn about an open-source model that runs locally, making this powerful tool accessible and cost-effective. Additionally, explore hybrid search capabilities that integrate multiple videos into a single collection, streamlining processes by loading only embeddings and metadata. Perfect for enhancing content management and delivering exceptional user experiences.
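To make the semantic-search idea concrete, here is a sketch using a locally runnable open-source embedding model; in the session's architecture, Astra DB would store these vectors and serve the nearest-neighbor query at scale. The model choice and segment captions are illustrative:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs locally

# Stand-ins for per-timestamp descriptions of video segments.
segments = [
    "00:01:12 sprinter crosses the finish line",
    "00:04:55 gymnast performs floor routine",
    "00:09:30 crowd waves flags in the stadium",
]
seg_vecs = model.encode(segments, normalize_embeddings=True)

query = "who won the 100m race?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity reduces to a dot product.
scores = seg_vecs @ q_vec
print(segments[int(np.argmax(scores))])  # best-matching moment
```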
🌟 Session Overview 🌟
Session Name: AI Agents in Action: Transforming Enterprise Processes
Speaker: Karyna Mihalevich
Session Description: Drawing from Karyna's experience in SAP environments and intelligent automation, she will demonstrate how AI agents are transforming the landscape of business operations. The session will begin with an overview of AI agent applications across various business functions, followed by a live demonstration of an AI agent in action.
After the demo, Karyna will share additional use cases, providing attendees with insights into how AI agents are being used across different industries and departments. She will also outline practical steps for implementing AI agents in SAP and other ERP systems.
Attendees will leave this session with:
A clear understanding of the role of AI agents in modern enterprise automation
Practical strategies for implementing AI agents in their own organizations
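A minimal sketch of the agent pattern such sessions demonstrate: an LLM decides when to call a business-system function, and the application executes it and returns the result. The ERP lookup, model name, and tool schema are illustrative assumptions:

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def get_purchase_order(po_number: str) -> dict:
    # Hypothetical stand-in for an SAP/ERP API call.
    return {"po_number": po_number, "status": "blocked", "reason": "price mismatch"}


tools = [{
    "type": "function",
    "function": {
        "name": "get_purchase_order",
        "description": "Look up a purchase order in the ERP system",
        "parameters": {
            "type": "object",
            "properties": {"po_number": {"type": "string"}},
            "required": ["po_number"],
        },
    },
}]

messages = [{"role": "user", "content": "Why is PO 4711 blocked?"}]
resp = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)

# The sketch assumes the model chose to call the tool; production code
# would check resp.choices[0].message.tool_calls before indexing.
call = resp.choices[0].message.tool_calls[0]
result = get_purchase_order(**json.loads(call.function.arguments))

# Feed the tool result back so the model can answer in natural language.
messages += [
    resp.choices[0].message,
    {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)},
]
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```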