talk-data.com

Topic

TIBCO Spotfire

analytics data_integration business_intelligence data_management

10 tagged

Activity Trend: 2020-Q1 to 2026-Q1 (1 peak per quarter)

Activities

10 activities · Newest first

Summary

In this episode of the Data Engineering Podcast, Derek Collison, creator of NATS and CEO of Synadia, talks about the evolution and capabilities of NATS as a multi-paradigm connectivity layer for distributed applications. Derek discusses the challenges and solutions in building distributed systems, and highlights the unique features of NATS that differentiate it from other messaging systems. He delves into the architectural decisions behind NATS, including its ability to handle high-speed global microservices, support for edge computing, and integration with JetStream for data persistence. He also explores the role of NATS in modern data management and its use cases in industries like manufacturing and connected vehicles.
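The subject-based addressing that underpins NATS can be illustrated with a small sketch. NATS subjects are dot-delimited tokens, where '*' matches exactly one token and '>' matches one or more trailing tokens. The following Python sketch mimics those matching rules in memory; it is an illustration of the semantics only, not the actual NATS implementation, and needs no running server:

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Check whether a dot-delimited subject matches a NATS-style pattern.

    '*' matches exactly one token; '>' matches one or more trailing tokens.
    """
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            # '>' must be the final token and must cover at least one token
            return i == len(p_tokens) - 1 and len(s_tokens) > i
        if i >= len(s_tokens):
            return False
        if p != "*" and p != s_tokens[i]:
            return False
    return len(p_tokens) == len(s_tokens)

# Examples of NATS-style matching semantics
assert subject_matches("orders.*.created", "orders.eu.created")
assert not subject_matches("orders.*.created", "orders.eu.us.created")
assert subject_matches("telemetry.>", "telemetry.vehicle.42.speed")
assert not subject_matches("telemetry.>", "telemetry")
```

In a real deployment the server performs this matching when routing published messages to subscribers; a client library such as nats.go or nats.py only declares the subscription patterns.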

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Your host is Tobias Macey and today I'm interviewing Derek Collison about NATS, a multi-paradigm connectivity layer for distributed applications.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what NATS is and the story behind it?
How have your experiences in past roles (Cloud Foundry, TIBCO messaging systems) informed the core principles of NATS?
What other sources of inspiration have you drawn on in the design and evolution of NATS? (e.g. Kafka, RabbitMQ, etc.)
There are several patterns and abstractions that NATS can support, many of which overlap with other well-regarded technologies. When designing a system or service, what are the heuristics that should be used to determine whether NATS should act as a replacement or addition to those capabilities? (e.g. considerations of scale, speed, ecosystem compatibility, etc.)
There is often a divide in the technologies and architecture used between operational/user-facing applications and data systems. How does the unification of multiple messaging patterns in NATS shift the ways that teams think about the relationship between these use cases?
How does the shared communication layer of NATS, with multiple protocol and pattern adapters, reduce the need to replicate data and logic across application and data layers?
Can you describe how the core NATS system is architected?
How have the design and goals of NATS evolved since you first started working on it?
In the time since you first began writing NATS (~2012) there have been several evolutionary stages in both application and data implementation patterns. How have those shifts influenced the direction of the NATS project and its ecosystem?
For teams who have an existing architecture, what are some of the patterns for adoption of NATS that allow them to augment or migrate their capabilities?
What are some of the ecosystem investments that you and your team have made to ease the adoption and integration of NATS?
What are the most interesting, innovative, or unexpected ways that you have seen NATS used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on NATS?
When is NATS the wrong choice?
What do you have planned for the future of NATS?

Contact Info

GitHub
LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

NATS, NATS JetStream, Synadia, Cloud Foundry, TIBCO, Applied Physics Lab - Johns Hopkins University, Cray Supercomputer, RVCM Certified Messaging, TIBCO ZMS, IBM MQ, JMS == Java Message Service, RabbitMQ, MongoDB, NodeJS, Redis, AMQP == Advanced Message Queueing Protocol, Pub/Sub Pattern, Circuit Breaker Pattern, ZeroMQ, Akamai, Fastly, CDN == Content Delivery Network, At Most Once, At Least Once, Exactly Once, AWS Kinesis, Memcached, SQS, Segment, Rudderstack (Podcast Episode), DLQ == Dead Letter Queue, MQTT == Message Queueing Telemetry Transport, NATS Kafka Bridge, 10BaseT Network, Web Assembly, RedPanda (Podcast Episode), Pulsar Functions, mTLS, AuthZ (Authorization), AuthN (Authentication), NATS Auth Callouts, OPA == Open Policy Agent, RAG == Retrieval Augmented Generation (AI Engineering Podcast Episode), Home Assistant (Podcast.init Episode), Tailscale, Ollama, CDC == Change Data Capture, gRPC

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

We’re improving DataFramed, and we need your help! We want to hear what you have to say about the show, and how we can make it more enjoyable for you: find out more here.

Edge computing is poised to transform industries by bringing computation and data storage closer to the source of data generation. This shift unlocks new types of value creation with data & AI and allows for a privacy-first and deeply personalized use of AI on our devices. What will the edge computing transition look like? How do you ensure applications are edge-ready, and what is the role of AI in the transition?

Derek Collison is the founder and CEO at Synadia. He is an industry veteran, entrepreneur, and pioneer in large-scale distributed systems and cloud computing. Derek founded Synadia Communications and Apcera, and has held executive positions at Google, VMware, and TIBCO Software. He is also an active angel investor and a technology futurist around Artificial Intelligence, Machine Learning, IoT, and Cloud Computing.

Justyna Bak is VP of Marketing at Synadia. Justyna is a versatile executive bridging Marketing, Sales, and Product, a spark-plug for innovation at startups and Fortune 100 companies, and a tech expert in Data Analytics and AI, AppDev, and Networking. She is an astute influencer, panelist, and presenter (Google, HBR) and a respected leader in Silicon Valley and Europe.

In the episode, Richie, Derek, and Justyna explore the transition from cloud to edge computing, the benefits of reduced latency, real-time decision-making in industries like manufacturing and retail, the role of AI at the edge, the future of edge-native applications, and much more.

Links Mentioned in the Show:

Synadia
Connect with Derek and Justyna
Course: Understanding Cloud Computing
Related Episode: The Database is the Operating System with Mike Stonebraker, CTO & Co-Founder at DBOS
Rewatch sessions from RADAR: Forward Edition

New to DataCamp?

Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for Business

Summary

The landscape of data management and processing is rapidly changing and evolving. There are certain foundational elements that have remained steady, but as the industry matures new trends emerge and gain prominence. In this episode Astasia Myers of Redpoint Ventures shares her perspective as an investor on which categories she is paying particular attention to for the near to medium term. She discusses the work being done to address challenges in the areas of data quality, observability, discovery, and streaming. This is a useful conversation to gain a macro perspective on where businesses are looking to improve their capabilities to work with data.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar to get you up and running in no time. With simple pricing, fast networking, S3 compatible object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

You listen to this show because you love working with data and want to keep your skills up to date. Machine learning is finding its way into every aspect of the data landscape. Springboard has partnered with us to help you take the next step in your career by offering a scholarship to their Machine Learning Engineering career track program. In this online, project-based course every student is paired with a Machine Learning expert who provides unlimited 1:1 mentorship support throughout the program via video conferences. You’ll build up your portfolio of machine learning projects and gain hands-on experience in writing machine learning algorithms, deploying models into production, and managing the lifecycle of a deep learning prototype. Springboard offers a job guarantee, meaning that you don’t have to pay for the program until you get a job in the space. The Data Engineering Podcast is exclusively offering listeners 20 scholarships of $500 to eligible applicants. It only takes 10 minutes and there’s no obligation. Go to dataengineeringpodcast.com/springboard and apply today! Make sure to use the code AISPRINGBOARD when you enroll.

Your host is Tobias Macey and today I’m interviewing Astasia Myers about the trends in the data industry that she sees as an investor at Redpoint Ventures.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of Redpoint Ventures and your role there?
From an investor perspective, what is most appealing about the category of data-oriented businesses?
What are the main sources of information that you rely on to keep up to date with what is happening in the data industry?

What is your personal heuristic for determining the relevance of any given piece of information to decide whether it is worthy of further investigation?

As someone who works closely with a variety of companies across different industry verticals and different areas of focus, what are some of the common trends that you have identified in the data ecosystem? In your article covering the trends you are keeping an eye on for 2020 you call out four in particular: data quality, data catalogs, observability of what influences critical business indicators, and streaming data. Taking those in turn:

What are the driving factors that influence data quality, and what elements of that problem space are being addressed by the companies you are watching?

What are the unsolved areas that you see as being viable for newcomers?

What are the challenges faced by businesses in establishing and maintaining data catalogs?

What approaches are being taken by the companies who are trying to solve this problem?

What shortcomings do you see in the available products?

For gaining visibility into the forces that impact the key performance indicators (KPIs) of businesses, what is lacking in the current approaches?

What additional information needs to be tracked to provide the needed context for making informed decisions about what actions to take to improve KPIs? What challenges do businesses in this observability space face to provide useful access and analysis to this collected data?

Streaming is an area that has been growing rapidly over the past few years, with many open source and commercial options. What are the major business opportunities that you see to make streaming more accessible and effective?

What are the main factors that you see as driving this growth in the need for access to streaming data?

With your focus on these trends, how does that influence your investment decisions and where you spend your time?
What are the unaddressed markets or product categories that you see which would be lucrative for new businesses?
In most areas of technology now there is a mix of open source and commercial solutions to any given problem, with varying levels of maturity and polish between them. What are your views on the balance of this relationship in the data ecosystem?

For data in particular, there is a strong potential for vendor lock-in which can cause potential customers to avoid adoption of commercial solutions. What has been your experience in that regard with the companies that you work with?

Contact Info

@AstasiaMyers on Twitter @astasia on Medium LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.init, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Links

Redpoint Ventures 4 Data Trends To Watch in 2020 Seagate Western Digital Pure Storage Cisco Cohesity Looker

Podcast Episode

DGraph

Podcast Episode

Dremio

Podcast Episode

SnowflakeDB

Podcast Episode

ThoughtSpot Tibco Elastic Splunk Informatica Data Council DataCoral Mattermost Bitwarden Snowplow

Podcast Interview Interview About Snowplow Infrastructure

CHAOSSEARCH

Podcast Episode

Kafka Streams Pulsar

Podcast Interview Followup Podcast Interview

Soda Toro Great Expectations Alation Collibra Amundsen DataHub Netflix Metacat Marquez

Podcast Episode

LDAP == Lightweight Directory Access Protocol Anodot Databricks Flink


Reporting, Predictive Analytics, and Everything in Between

Business decisions today are tactical and strategic at the same time. How do you respond to a competitor’s price change? Or to specific technology changes? What new products, markets, or businesses should you pursue? Decisions like these are based on information from only one source: data.

With this practical report, technical and non-technical leaders alike will explore the fundamental elements necessary to embark on a data analytics initiative. Is your company planning or contemplating a data analytics initiative? Authors Brett Stupakevich, David Sweenor, and Shane Swiderek from TIBCO guide you through several analytics options. IT leaders, product developers, analytics leaders, data analysts, data scientists, and business professionals will learn how to deploy analytic components in streaming and embedded systems using one of five platforms.

You’ll examine:

Analytics platforms including embedded BI, reporting, data exploration & discovery, streaming BI, and data science & machine learning
The business problems each option solves and the capabilities and requirements of each
How to identify the right analytics type for your particular use case
Key considerations and the level of investment for each analytics platform

TIBCO Spotfire: A Comprehensive Primer - Second Edition

Explore the possibilities of TIBCO Spotfire with this comprehensive guide. You'll start with fundamental data visualization principles and progress to creating powerful, professional-grade analytics dashboards and applications. By following this book, you'll master both basic usage and advanced features such as predictive and spatial analytics.

What this book will help me do

Understand the fundamentals of TIBCO Spotfire and its various interfaces, including web and desktop clients.
Utilize Spotfire's range of visualization tools to effectively analyze and present data.
Develop robust analytics dashboards and applications tailored for enterprise needs.
Implement advanced features like predictive analytics and location-based data representations.
Learn strategies for deploying and administering Spotfire in a scalable, enterprise-oriented environment.

Author(s)

The authors, Berridge and Phillips, bring years of experience in business intelligence and data analytics. Their practical knowledge and real-world perspective shape the book into a practical resource for learning Spotfire. Their approach ensures that concepts are clearly explained with relatable examples, improving accessibility for all readers.

Who is it for?

This book is intended for business intelligence professionals, data analysts, and developers who aim to enhance their analytics skills using TIBCO Spotfire. It is suitable for beginners, as no prior experience with Spotfire or advanced analytics is required. Readers looking to develop enterprise-grade visualization and analytical solutions will find it valuable.

Summary

Building an ETL pipeline is a common need across businesses and industries. It’s easy to get one started but difficult to manage as new requirements are added and greater scalability becomes necessary. Rather than duplicating the efforts of other engineers it might be best to use a hosted service to handle the plumbing so that you can focus on the parts that actually matter for your business. In this episode CTO and co-founder of Alooma, Yair Weinberger, explains how the platform addresses the common needs of data collection, manipulation, and storage while allowing for flexible processing. He describes the motivation for starting the company, how their infrastructure is architected, and the challenges of supporting multi-tenancy and a wide variety of integrations.
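The collect-manipulate-store workflow described above can be sketched in miniature. The function and field names below are illustrative only, not Alooma's API; they just show the extract / transform / load shape and the hook where user-supplied processing plugs in:

```python
# A toy extract-transform-load pipeline. Names are invented for illustration;
# a hosted service handles the plumbing (sources, retries, scaling) so that
# only the transform step is yours to write.

def extract(records):
    """Pull raw events from a source (here, an in-memory list)."""
    yield from records

def transform(event):
    """User-supplied processing: normalize fields, drop malformed rows."""
    if "user_id" not in event:
        return None  # a real pipeline would route this to a dead-letter store
    return {"user_id": str(event["user_id"]), "action": event.get("action", "unknown")}

def load(events, warehouse):
    """Append cleaned rows to the target store (a list stands in for a warehouse)."""
    for event in events:
        if event is not None:
            warehouse.append(event)

warehouse = []
raw = [{"user_id": 1, "action": "click"}, {"action": "view"}]
load((transform(e) for e in extract(raw)), warehouse)
assert warehouse == [{"user_id": "1", "action": "click"}]
```

The value of a hosted platform is that everything around `transform` — connectors, scaling, schema management, delivery guarantees — is someone else's problem.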

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.

For complete visibility into the health of your pipeline, including deployment tracking, and powerful alerting driven by machine learning, DataDog has got you covered. With their monitoring, metrics, and log collection agent, including extensive integrations and distributed tracing, you’ll have everything you need to find and fix performance bottlenecks in no time. Go to dataengineeringpodcast.com/datadog today to start your free 14-day trial and get a sweet new T-shirt.

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.

Your host is Tobias Macey and today I’m interviewing Yair Weinberger about Alooma, a company providing data pipelines as a service.

Interview

Introduction
How did you get involved in the area of data management?
What is Alooma and what is the origin story?
How is the Alooma platform architected?

I want to go into stream vs. batch here.
What are the most challenging components to scale?

How do you manage the underlying infrastructure to support your SLA of 5 nines?
What are some of the complexities introduced by processing data from multiple customers with various compliance requirements?
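For context on what an SLA of 5 nines implies, the downtime budget follows directly from the availability figure: (1 - availability) multiplied by the period. Five nines (99.999%) leaves roughly five and a quarter minutes of downtime per year, which a quick calculation confirms:

```python
# Downtime budget implied by an availability SLA:
#   allowed_downtime = (1 - availability) * period
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year permitted by a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

five_nines = downtime_minutes_per_year(0.99999)
assert round(five_nines, 2) == 5.26   # about 5 minutes 15 seconds per year

# For comparison, three nines allows roughly 8.8 hours per year
assert round(downtime_minutes_per_year(0.999), 1) == 525.6
```

That budget has to absorb deploys, failovers, and upstream outages combined, which is why five nines demands heavy infrastructure automation.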

How do you sandbox users’ processing code to avoid security exploits?

What are some of the potential pitfalls for automatic schema management in the target database?
Given the large number of integrations, how do you maintain the

What are some challenges when creating integrations? Isn’t it simply conforming to an external API?

For someone getting started with Alooma what does the workflow look like?
What are some of the most challenging aspects of building and maintaining Alooma?
What are your plans for the future of Alooma?

Contact Info

LinkedIn @yairwein on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Alooma Convert Media Data Integration ESB (Enterprise Service Bus) Tibco Mulesoft ETL (Extract, Transform, Load) Informatica Microsoft SSIS OLAP Cube S3 Azure Cloud Storage Snowflake DB Redshift BigQuery Salesforce Hubspot Zendesk Spark The Log: What every software engineer should know about real-time data’s unifying abstraction by Jay Kreps RDBMS (Relational Database Management System) SaaS (Software as a Service) Change Data Capture Kafka Storm Google Cloud PubSub Amazon Kinesis Alooma Code Engine Zookeeper Idempotence Kafka Streams Kubernetes SOC2 Jython Docker Python Javascript Ruby Scala PII (Personally Identifiable Information) GDPR (General Data Protection Regulation) Amazon EMR (Elastic Map Reduce) Sequoia Capital Lightspeed Investors Redis Aerospike Cassandra MongoDB

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast

In this session, Michael O'Connell, Chief Analytics Officer at TIBCO Software, sat down with Vishal Kumar, CEO of AnalyticsWeek, to share his journey as a Chief Analytics Executive, best practices and cultural hacks for upcoming executives, his perspective on the changing BI landscape and how businesses can leverage it, and some of the challenges and opportunities he's observing across various industries.

Timeline:

0:28 Michael's journey. 4:12 CDO, CAO, and CTO. 7:30 Adoption of data analytics capabilities. 9:55 The BI industry dealing with the latest in data analytics. 12:10 Future of stats. 14:58 Creating a center of excellence with data. 18:00 Evolution of data in BI. 21:40 Small businesses getting started with data analytics. 24:35 First steps in the process of becoming a data-driven company. 26:28 Convincing leaders towards data science. 28:20 Shortest route to become a data scientist. 29:49 A typical day in Michael's life.

Podcast Link: https://futureofdata.org/analyticsweek-leadership-podcast-with-michael-oconnell-tibco-software/

Here's Michael's Bio: Michael O’Connell is the Chief Analytics Officer at TIBCO Software, developing analytic solutions across a number of industries including Financial Services, Energy, Life Sciences, Consumer Goods & Retail, and Telco, Media & Networks. Michael has been working on analytics software applications for the past 20 years and has published more than 50 papers and several software packages on analytics methodology and applications. Michael did his Ph.D. work in Statistics at North Carolina State University and is an Adjunct Professor of Statistics in the department.

Follow @michoconnell

The podcast is sponsored by: TAO.ai(https://tao.ai), Artificial Intelligence Driven Career Coach

About #Podcast:

FutureOfData podcast is a conversation starter to bring leaders, influencers, and lead practitioners to discuss their journey to create the data-driven future.

Want to Join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

TIBCO Spotfire: A Comprehensive Primer

TIBCO Spotfire: A Comprehensive Primer is the go-to guide for mastering TIBCO Spotfire, a leading data visualization and analytics tool. Whether you are new to Spotfire or data visualization in general, this book will provide you with a solid foundation to create impactful and actionable visual insights.

What this book will help me do

Understand the fundamentals of TIBCO Spotfire and its application in data analytics.
Learn how to design compelling visualizations and dashboards that convey meaningful insights.
Master advanced data transformations and analysis techniques in TIBCO Spotfire.
Integrate Spotfire with external data sources and scripting languages, enhancing its functionality.
Optimize Spotfire's performance and usability for enterprise-level implementations.

Author(s)

Phillips, an experienced analytics professional and educator, specializes in creating accessible learning materials for data science tools. With a decade of experience in the field, the author has helped many organizations unlock their data potential through tools like TIBCO Spotfire. Their approach emphasizes practical understanding, making complex concepts approachable for learners of all levels.

Who is it for?

The book is perfect for business analysts, data scientists, and other professionals involved in data-driven decision making who want to master TIBCO Spotfire. It's designed for beginners without prior exposure to data visualization or TIBCO Spotfire, offering an accessible entry into the field. Individuals aiming to gain hands-on experience and create enterprise-grade solutions will find this book invaluable. Additionally, it serves as a reference for experienced Spotfire users looking to refine their skills.

Data Mining For Business Intelligence: Concepts, Techniques, and Applications in Microsoft Office Excel® with XLMiner®, Second Edition

Data Mining for Business Intelligence, Second Edition uses real data and actual cases to illustrate the applicability of data mining (DM) intelligence in the development of successful business models. Featuring complimentary access to XLMiner®, the Microsoft Office Excel® add-in, this book allows readers to follow along and implement algorithms at their own speed, with a minimal learning curve. In addition, students and practitioners of DM techniques are presented with hands-on, business-oriented applications. Abundant exercises and examples, doubled in number for the second edition, are provided to motivate learning and understanding. This book helps readers understand the beneficial relationship that can be established between DM and smart business practices, and is an excellent learning tool for creating valuable strategies and making wiser business decisions. New topics include detailed coverage of visualization (enhanced by Spotfire subroutines) and time series forecasting, among a host of other subject matter.

Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions

Enterprise Integration Patterns provides an invaluable catalog of sixty-five patterns, with real-world solutions that demonstrate the formidable power of messaging and help you to design effective messaging solutions for your enterprise. The authors also include examples covering a variety of different integration technologies, such as JMS, MSMQ, TIBCO ActiveEnterprise, Microsoft BizTalk, SOAP, and XSL. A case study describing a bond trading system illustrates the patterns in practice, and the book offers a look at emerging standards, as well as insights into what the future of enterprise integration might hold.

This book provides a consistent vocabulary and visual notation framework to describe large-scale integration solutions across many technologies. It also explores in detail the advantages and limitations of asynchronous messaging architectures. The authors present practical advice on designing code that connects an application to a messaging system, and provide extensive information to help you determine when to send a message, how to route it to the proper destination, and how to monitor the health of a messaging system. If you want to know how to manage, monitor, and maintain a messaging system once it is in use, get this book.
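One of the catalog's sixty-five patterns, the Content-Based Router, gives a flavor of the book's material: a router inspects each message and forwards it to the channel its content calls for. The sketch below is a minimal in-memory illustration; the channel and field names are invented for the example, not taken from the book:

```python
# Minimal sketch of the Content-Based Router pattern: inspect a message,
# forward it to the channel its content indicates, and fall back to a
# dead-letter channel for anything unroutable.

class Channel:
    """A named message channel backed by an in-memory list."""
    def __init__(self, name):
        self.name = name
        self.messages = []

    def send(self, message):
        self.messages.append(message)

def route(message, routes, default):
    """Pick the destination channel from the message content."""
    destination = routes.get(message.get("type"), default)
    destination.send(message)

orders = Channel("orders")
invoices = Channel("invoices")
dead_letter = Channel("dead-letter")
routes = {"order": orders, "invoice": invoices}

for msg in [{"type": "order", "id": 1}, {"type": "invoice", "id": 2}, {"type": "noise"}]:
    route(msg, routes, default=dead_letter)

assert [m["id"] for m in orders.messages] == [1]
assert [m["id"] for m in invoices.messages] == [2]
assert len(dead_letter.messages) == 1
```

In a real messaging system the channels would be queues or topics in a broker, and the routing predicate might examine headers rather than the payload, but the pattern's shape is the same.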