Customer experience failures are costing businesses billions, driven by siloed data, poor channel selection, and voice latency issues. Discover how Twilio's Conversation Relay integrates seamlessly with Microsoft Foundry models to solve these critical challenges. Learn to build real-time intelligent voice interactions, leverage Azure's robust AI infrastructure, and rapidly deploy scalable solutions that eliminate guesswork and modernize your customer experience workflows.
talk-data.com — Topic: Twilio Segment (6 items tagged)
In a landscape where customer expectations are evolving faster than ever, the ability to activate real-time, first-party data is becoming the difference between reactive and intelligent businesses. This fireside chat brings together experts from Capgemini, Twilio Segment, and leading marketplace StockX to explore how organizations are building future-proof data foundations that power scalable, responsible AI.
The rise of A/B testing has transformed decision-making in tech, yet its application isn't without challenges. As professionals, how do you navigate the balance between short-term gains and long-term sustainability? What strategies can you employ to ensure your testing methods enhance rather than hinder user experience? And how do you effectively communicate the insights gained from testing to drive meaningful change within your organization?

Vanessa Larco is a former partner at NEA, where she led Series A and Series B investment rounds and worked with major consumer companies like DTC jewelry giant Mejuri, menopause symptom relief treatment Evernow, and home-swapping platform Kindred, as well as major enterprise SaaS companies like Assembled, Orby AI, Granica AI, EvidentID, Rocket.Chat, and Forethought AI. She is also a board observer at Forethought, SafeBase, Orby AI, Granica, Modyfi, and HEAVY.AI. She was a board observer at Robinhood until its IPO in 2021. Before she became an investor, she built consumer and enterprise tech herself at Microsoft, Disney, Twilio, and Box as a product leader.

In the episode, Richie and Vanessa explore the evolution of A/B testing in gaming, the balance between data-driven decisions and user experience, the challenges of scaling experimentation, the pitfalls of misaligned metrics, the importance of understanding user behavior, and much more.

Links Mentioned in the Show:
New Enterprise Associates
Connect with Vanessa
Course: Customer Analytics and A/B Testing in Python
Related Episode: Make Your A/B Testing More Effective and Efficient
Sign up to attend RADAR: Skills Edition - Vanessa will be speaking!

New to DataCamp?
Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
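As a concrete illustration of the kind of testing discussed in the episode, a two-proportion z-test is one common way to decide whether a variant's conversion rate genuinely differs from control. This is a minimal stdlib-only sketch, not the episode's own methodology, and the sample numbers are invented:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/2400 control vs 156/2400 variant conversions
z, p = ab_test_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice the harder problems are the ones the episode dwells on: choosing a metric that is actually aligned with long-term user experience, and deciding sample size and duration before peeking at results.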
Fast access to data has become a critical game changer. Today, a new breed of company understands that the faster they can build, access, and share well-defined datasets, the more competitive they'll be in our data-driven world. In this practical report, Scott Haines from Twilio introduces you to operational analytics, a new approach for making sense of all the data flooding into business systems. Data architects and data scientists will see how Apache Kafka and other tools and processes laid the groundwork for fast analytics on a mix of historical and near-real-time data. You'll learn how operational analytics feeds minute-by-minute customer interactions, and how NewSQL databases have entered the scene to drive machine learning algorithms, AI programs, and ongoing decision-making within an organization.

Understand the key advantages that data-driven companies have over traditional businesses
Explore the rise of operational analytics, and how this method relates to current tech trends
Examine the impact of "can't wait" business decisions and "won't wait" customer experiences
Discover how NewSQL databases support cloud native architecture and set the stage for operational databases
Learn how to choose the right database to support operational analytics in your organization
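To make the minute-by-minute flavor of operational analytics concrete, the core pattern can be sketched as a rolling aggregation over an event stream. This is a stdlib-only toy standing in for a real Kafka consumer, and the event names and fields are invented for illustration:

```python
from collections import defaultdict
from datetime import datetime, timezone

def rolling_counts(events):
    """Bucket events into per-minute counts keyed by (minute, event_type).

    In production `events` would be a Kafka consumer yielding deserialized
    messages; here it is any iterable of dicts with 'ts' (epoch seconds)
    and 'type' keys.
    """
    counts = defaultdict(int)
    for ev in events:
        minute = datetime.fromtimestamp(
            ev["ts"], tz=timezone.utc
        ).strftime("%Y-%m-%d %H:%M")
        counts[(minute, ev["type"])] += 1
    return dict(counts)

# Hypothetical click-stream sample (epoch timestamps in UTC)
sample = [
    {"ts": 1700000000, "type": "page_view"},
    {"ts": 1700000030, "type": "page_view"},
    {"ts": 1700000075, "type": "checkout"},
]
print(rolling_counts(sample))
```

The report's larger point is that serving these aggregates back into live customer interactions, rather than into next-day dashboards, is what pushes teams toward NewSQL and streaming architectures.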
Summary
Business intelligence is a necessity for any organization that wants to be able to make informed decisions based on the data that they collect. Unfortunately, it is common for different portions of the business to build their reports with different assumptions, leading to conflicting views and poor choices. Looker is a modern tool for building and sharing reports that makes it easy to get everyone on the same page. In this episode Daniel Mintz explains how the product is architected, the features that make it easy for any business user to access and explore their reports, and how you can use it for your organization today.
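The "single source of truth" idea discussed here rests on defining each metric once in a shared model rather than in every report. A minimal LookML-style sketch of what such a shared definition might look like follows; the table and field names are invented for illustration, not taken from the episode:

```
view: orders {
  sql_table_name: analytics.orders ;;

  dimension: order_id {
    primary_key: yes
    type: number
    sql: ${TABLE}.id ;;
  }

  # Defined once here, so every team's report sums revenue the same way
  measure: total_revenue {
    type: sum
    sql: ${TABLE}.revenue ;;
    value_format_name: usd
  }
}
```

Because every dashboard and ad hoc exploration references `total_revenue` from this model, two departments can no longer arrive at conflicting revenue numbers from differing assumptions.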
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you're ready to build your next pipeline you'll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you've got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat. Your host is Tobias Macey and today I'm interviewing Daniel Mintz about Looker, a modern data platform that can serve the data needs of an entire company.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what Looker is and the problem that it is aiming to solve?
How do you define business intelligence?
How is Looker unique from other approaches to business intelligence in the enterprise?
How does it compare to open source platforms for BI?
Can you describe the technical infrastructure that supports Looker?
Given that you are connecting to the customer's data store, how do you ensure sufficient security?
For someone who is using Looker, what does their workflow look like?
How does that change for different user roles (e.g. data engineer vs. sales management)?
What are the scaling factors for Looker, both in terms of volume of data for reporting from, and for user concurrency?
What are the most challenging aspects of building a business intelligence tool and company in the modern data ecosystem?
What are the portions of the Looker architecture that you would do differently if you were to start over today?
What are some of the most interesting or unusual uses of Looker that you have seen?
What is in store for the future of Looker?
Contact Info
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Looker, Upworthy, MoveOn.org, LookML, SQL, Business Intelligence, Data Warehouse, Linux, Hadoop, BigQuery, Snowflake, Redshift, DB2, PostGres, ETL (Extract, Transform, Load), ELT (Extract, Load, Transform), Airflow, Luigi, NiFi, Data Curation Episode, Presto, Hive, Athena, DRY (Don't Repeat Yourself), Looker Action Hub, Salesforce, Marketo, Twilio, Netscape Navigator, Dynamic Pricing, Survival Analysis, DevOps, BigQuery ML, Snowflake Data Sharehouse
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast