talk-data.com

Topic

Marketing

advertising branding customer_acquisition

743

tagged

Activity Trend

49 peak/qtr
2020-Q1 2026-Q1

Activities

743 activities · Newest first

In this podcast, Jason Carmel (@defenestrate99), Chief Data Officer @ POSSIBLE, talks about his journey leading the data analytics practice of a digital marketing agency. He sheds light on methodologies for building a sound data science practice and on using data science chops to do some good while creating traditional value. He also shares his perspective on keeping a team high on creativity so it keeps producing innovative solutions. This is a great podcast for anyone looking to understand the digital marketing landscape and how to create a sound data science practice.

Timelines: 0:29 Jason's journey. 6:40 Advantage of having a legal background for a data scientist. 9:15 Understanding emotions based on data. 13:54 The empathy model. 14:53 From idea to inception to execution. 23:40 The role of digital agencies. 30:20 Measuring the right amount of data. 32:40 Management in a creative agency. 34:40 Leadership qualities that promote creativity. 38:14 Leader's playbook in a digital agency. 40:50 Qualities of a great data science team in the digital agency. 44:30 Leadership's role in data creativity. 47:00 Opportunities as a data scientist in the digital agency. 49:18 Future of data in digital media. 51:38 Jason's success mantra. 53:30 Jason's favorite reads. 57:11 Key takeaways.

Jason's Recommended Reads:
Trendology: Building an Advantage through Data-Driven Real-Time Marketing by Chris Kerns (amzn.to/2zMhYkV)
Venomous: How Earth's Deadliest Creatures Mastered Biochemistry by Christie Wilcox (amzn.to/2LhqI76)

Podcast Link: https://futureofdata.org/jason-carmel-defenestrate99-possible-leading-analytics-data-digital-marketing/

Jason's BIO: Jason Carmel is Chief Data Officer at POSSIBLE. With nearly 20 years of digital data and marketing experience, Jason has worked with clients such as Coca-Cola, Ford, and Microsoft to evolve digital experiences based on real-time feedback and behavioral data. Jason manages a global team of 100 digital analysts across POSSIBLE, a digital advertising agency that uses traditional and unconventional data sets and models to help brands connect more effectively with their customers.

Of particular interest is Jason’s work using data and machine learning to define and understand the emotional components of human conversation. Jason spearheaded the creation of POSSIBLE’s Empathy Model, which translates the raw, unstructured content of social media into a quantitative understanding of what customers are actually feeling about a given topic, event, or brand.

About #Podcast:

The FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Wanna Join? If you or anyone you know wants to join in, register your interest by mailing us @ [email protected]

Want to sponsor? Email us @ [email protected]

Keywords: FutureOfData, DataAnalytics, Leadership, Futurist, Podcast, BigData, Strategy

In this podcast @AndyPalmer from @Tamr sat with @Vishaltx from @AnalyticsWeek to talk about the emergence of, need for, and market for DataOps, a specialized capability that has grown out of the merger of data engineering and the DevOps ecosystem in response to increasingly convoluted data silos and complicated processes. Andy shared his perspective on what some businesses and their leaders are doing wrong and how businesses need to rethink their data silos to future-proof themselves. This is a good podcast for any data leader thinking about cracking the code on getting high-quality insights from data.

Timelines: 0:28 Andy's journey. 4:56 What's Tamr? 6:38 What's Andy's role in Tamr. 8:16 What's data ops? 13:07 Right time for business to incorporate data ops. 15:56 Data exhaust vs. data ops. 21:05 Tips for executives in dealing with data. 23:15 Suggestions for businesses working with data. 25:48 Creating buy-in for experimenting with new technologies. 28:47 Using data ops for the acquisition of new companies. 31:58 Data ops vs. dev ops. 36:40 Big opportunities in data science. 39:35 AI and data ops. 44:28 Parameters for a successful start-up. 47:49 What still surprises Andy? 50:19 Andy's success mantra. 52:48 Andy's favorite reads. 54:25 Final remarks.

Andy's Recommended Reads:
Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker (https://amzn.to/2Lc6WqK)
The Three-Body Problem by Cixin Liu and Ken Liu (https://amzn.to/2rQyPvp)

Andy's BIO: Andy Palmer is a serial entrepreneur who specializes in accelerating the growth of mission-driven startups. Andy has helped found and/or fund more than 50 innovative companies in technology, health care, and the life sciences. Andy’s unique blend of strategic perspective and disciplined tactical execution is suited to environments where uncertainty is the rule rather than the exception. Andy has a specific passion for projects at the intersection of computer science and the life sciences.

Most recently, Andy co-founded Tamr, a next-generation data curation company, and Koa Labs, a start-up club in the heart of Harvard Square, Cambridge, MA.

Specialties: Software, Sales & Marketing, Web Services, Service Oriented Architecture, Drug Discovery, Database, Data Warehouse, Analytics, Startup, Entrepreneurship, Informatics, Enterprise Software, OLTP, Science, Internet, eCommerce, Venture Capital, Bootstrapping, Founding Team, Venture Capital firm, Software companies, early-stage venture, corporate development, venture-backed, venture capital fund, world-class, stage venture capital

About #Podcast:

FutureOfData podcast is a conversation starter to bring leaders, influencers, and lead practitioners to discuss their journey to create the data-driven future.

Podcast link: https://futureofdata.org/emergence-of-dataops-age-andypalmer-futureofdata-podcast/

Wanna Join? If you or anyone you know wants to join in, register your interest by emailing [email protected]

Want to sponsor? Email us @ [email protected]

Keywords: #FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Streaming Change Data Capture

There are many benefits to becoming a data-driven organization, including the ability to accelerate and improve business decision accuracy through the real-time processing of transactions, social media streams, and IoT data. But those benefits require significant changes to your infrastructure. You need flexible architectures that can copy data to analytics platforms at near-zero latency while maintaining 100% production uptime. Fortunately, a solution already exists. This ebook demonstrates how change data capture (CDC) can meet the scalability, efficiency, real-time, and zero-impact requirements of modern data architectures. Kevin Petrie, Itamar Ankorion, and Dan Potter, technology marketing leaders at Attunity, explain how CDC enables faster and more accurate decisions based on current data and reduces or eliminates the full reloads that disrupt production and efficiency. The book examines:
How CDC evolved from a niche feature of database replication software to a critical data architecture building block
Architectures where data workflow and analysis take place, and their integration points with CDC
How CDC identifies and captures source data updates to assist high-speed replication to one or more targets
Case studies on cloud-based streaming, streaming to a data lake, and related architectures
Guiding principles for effectively implementing CDC in cloud, data lake, and streaming environments
The Attunity Replicate platform for efficiently loading data across all major database, data warehouse, cloud, streaming, and Hadoop platforms
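The capture-and-apply loop at the heart of CDC can be sketched in a few lines. This is an illustrative model only, not Attunity Replicate's actual API; the event shapes and function names here are assumptions:

```python
# Minimal sketch of log-based change data capture: read ordered change
# events from a source's transaction log and apply them to a replica,
# without ever running a full reload of the source table.

def apply_change(target, event):
    """Apply one captured change event to the target store."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        target[key] = event["row"]
    elif op == "delete":
        target.pop(key, None)

# Hypothetical stream of changes captured from the source's log.
change_log = [
    {"op": "insert", "key": 1, "row": {"name": "Ada", "spend": 120}},
    {"op": "insert", "key": 2, "row": {"name": "Lin", "spend": 80}},
    {"op": "update", "key": 2, "row": {"name": "Lin", "spend": 95}},
    {"op": "delete", "key": 1},
]

replica = {}
for event in change_log:  # in practice this loop tails the log continuously
    apply_change(replica, event)

print(replica)  # {2: {'name': 'Lin', 'spend': 95}}
```

Because only the deltas move, the target stays current without the disruptive bulk reloads the book describes.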

podcast_episode
by Val Kroll, Julie Hoyer, Gary Angel (EY), Tim Wilson (Analytics Power Hour - Columbus, OH), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Are you reading this? If so, then you are literate. But are you (and are your stakeholders) data literate? What does that even mean? On this episode -- recorded in front of a live audience at Marketing Evolution Experience in Las Vegas -- the gang tackled the topic. Midway through the show, they were delighted to be joined on stage by Gary Angel (unplanned, but due to a series of unfortunate travel and communication mishaps -- recording with a live audience is exciting! He is officially over halfway to joining the podcast's Five-Timers Club). It was an engaging discussion with some smart questions from the live audience. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

In this podcast, @JohnNives discusses ways to demystify AI for the enterprise. He shares his perspective on how businesses should engage with AI and what some of the best practices and considerations are for businesses adopting AI in their strategic roadmap. This podcast is great for anyone seeking to learn how to adopt AI in the enterprise landscape.

Timelines: 0:28 John's journey. 6:50 John's current role. 9:40 The role of a chief digital officer. 11:16 The current trend of AI. 13:52 AI hype or real? 16:42 Why AI now? 19:03 Demystifying deep learning. 23:35 Enterprise use cases of AI. 28:25 Attributes of a successful AI project. 32:20 Best AI investments in an enterprise. 36:56 Convincing leadership to adopt AI. 39:20 Organizational implications of adopting AI. 43:45 What do executives get wrong about AI? 48:36 Tips for executives to understand the AI landscape. 53:11 John's favorite reads. 57:35 Closing remarks.

John's Recommended Listens:
FutureOfData Podcast (math.im/itunes)
War and Peace by Leo Tolstoy, narrated by Frederick Davidson, published by Blackstone Audio (amzn.to/2w7ObkI)

Podcast Link: https://futureofdata.org/johnnives-on-ways-to-demystify-ai-for-enterprise/

John's BIO: Jean-Louis (John) Nives serves as Chief Digital Officer and the Global Chair of the Digital Transformation practice at N2Growth. Prior to joining N2Growth, Mr. Nives was at IBM Global Business Services, within the Watson and Analytics Center of Competence. There he worked on Cognitive Digital Transformation projects related to Watson, Big Data, Analytics, Social Business and Marketing/Advertising Technology. Examples include CognitiveTV and the application of external unstructured data (social, weather, etc.) for business transformation. Prior relevant experience includes executive leadership positions at Nielsen, IRI, Kraft and two successful advertising technology acquisitions (AppNexus and SintecMedia). In this capacity, Jean-Louis combined information, analytics and technology to create significant business value in transformative ways. Jean-Louis earned a Bachelor’s Degree in Industrial Engineering from University at Buffalo and an MBA in Finance and Computer Science from Pace University. He is married with four children and lives in the New York City area.

About #Podcast:

FutureOfData podcast is a conversation starter to bring leaders, influencers and lead practitioners to discuss their journey in creating the data-driven future.

Wanna Join? If you or anyone you know wants to join in, register your interest @ play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords: #FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

That's right. We're trying to grow the reach of this podcast, so we figured we needed to do some growth h---...NO! No. No. NO!!! We're NOT going to use that term. But, it turns out that growth marketing has some interesting concepts. On the one hand, you may think, "Don't I already do that?" And the answer is quite possibly, "Yeah. Pretty much." On the other hand, you may think, "Oh, well that's an interesting lens through which to view the world." And, that is okay, too. Either way, check out this chat Moe had with Krista Seiden from Google on the subject.

IBM Power System AC922 Introduction and Technical Overview

This IBM® Redpaper™ publication is a comprehensive guide that covers the IBM Power System AC922 server (8335-GTG and 8335-GTW models). The Power AC922 server is the next generation of the IBM Power processor-based systems, which are designed for deep learning and artificial intelligence (AI), high-performance analytics, and high-performance computing (HPC). This paper introduces the major innovative Power AC922 server features and their relevant functions:
Powerful IBM POWER9™ processors that offer 16 cores at 2.6 GHz with 3.09 GHz turbo performance or 20 cores at 2.0 GHz with 2.87 GHz turbo for the 8335-GTG
Eighteen cores at 2.98 GHz with 3.26 GHz turbo performance or 22 cores at 2.78 GHz with 3.07 GHz turbo for the 8335-GTW
IBM Coherent Accelerator Processor Interface (CAPI) 2.0, IBM OpenCAPI™, and second-generation NVIDIA NVLink technology for exceptional processor-to-accelerator intercommunication
Up to six dedicated NVIDIA Tesla V100 GPUs
This publication is for professionals who want to acquire a better understanding of IBM Power Systems™ products and is intended for the following audiences:
Clients
Sales and marketing professionals
Technical support professionals
IBM Business Partners
Independent software vendors (ISVs)
This paper expands the set of IBM Power Systems documentation by providing a desktop reference that offers a detailed technical description of the Power AC922 server. This paper does not replace the current marketing materials and configuration tools. It is intended as an extra source of information that, together with existing sources, can be used to enhance your knowledge of IBM server solutions.

Implementing the IBM Storwize V7000 with IBM Spectrum Virtualize V8.1

Abstract Continuing its commitment to developing and delivering industry-leading storage technologies, IBM® introduces the IBM Storwize® V7000 solution powered by IBM Spectrum™ Virtualize. This innovative storage offering delivers essential storage efficiency technologies and exceptional ease of use and performance, all integrated into a compact, modular design that is offered at a competitive, midrange price. The IBM Storwize V7000 solution incorporates some of the top IBM technologies that are typically found only in enterprise-class storage systems, raising the standard for storage efficiency in midrange disk systems. This cutting-edge storage system extends the comprehensive storage portfolio from IBM and can help change the way organizations address the ongoing information explosion. This IBM Redbooks® publication introduces the features and functions of the IBM Storwize V7000 and IBM Spectrum Virtualize™ V8.1 system through several examples. This book is aimed at pre-sales and post-sales technical support and marketing and storage administrators. It helps you understand the architecture of the Storwize V7000, how to implement it, and how to take advantage of its industry-leading functions and features.

Summary

One of the sources of data that often gets overlooked is the systems that we use to run our businesses. This data is not used to directly provide value to customers or understand the functioning of the business, but it is still a critical component of a successful system. Sam Stokes is an engineer at Honeycomb where he helps to build a platform that is able to capture all of the events and context that occur in our production environments and use them to answer all of your questions about what is happening in your system right now. In this episode he discusses the challenges inherent in capturing and analyzing event data, the tools that his team is using to make it possible, and how this type of knowledge can be used to improve your critical infrastructure.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure.
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes, or Google Play Music, and tell your friends and co-workers.
A few announcements:

There is still time to register for the O’Reilly Strata Conference in San Jose, CA, March 5th-8th. Use the link dataengineeringpodcast.com/strata-san-jose to register and save 20%.
The O’Reilly AI Conference is also coming up. Happening April 29th to the 30th in New York, it will give you a solid understanding of the latest breakthroughs and best practices in AI for business. Go to dataengineeringpodcast.com/aicon-new-york to register and save 20%.
If you work with data or want to learn more about how the projects you have heard about on the show get used in the real world then join me at the Open Data Science Conference in Boston from May 1st through the 4th. It has become one of the largest events for data scientists, data engineers, and data-driven businesses to get together and learn how to be more effective. To save 60% off your tickets go to dataengineeringpodcast.com/odsc-east-2018 and register.

Your host is Tobias Macey and today I’m interviewing Sam Stokes about his work at Honeycomb, a modern platform for observability of software systems

Interview

Introduction
How did you get involved in the area of data management?
What is Honeycomb and how did you get started at the company?
Can you start by giving an overview of your data infrastructure and the path that an event takes from ingest to graph?
What are the characteristics of the event data that you are dealing with and what challenges does it pose in terms of processing it at scale?
In addition to the complexities of ingesting and storing data with a high degree of cardinality, being able to quickly analyze it for customer reporting poses a number of difficulties. Can you explain how you have built your systems to facilitate highly interactive usage patterns?
A high degree of visibility into a running system is desirable for developers and systems administrators, but they are not always willing or able to invest the effort to fully instrument the code or servers that they want to track. What have you found to be the most difficult aspects of data collection, and do you have any tooling to simplify the implementation for users?
How does Honeycomb compare to other systems that are available off the shelf or as a service, and when is it not the right tool?
What have been some of the most challenging aspects of building, scaling, and marketing Honeycomb?
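The "wide event with full context" idea the interview keeps returning to can be sketched simply: one structured record per unit of work, with as many fields attached as possible, including high-cardinality ones like user IDs. The field names and helper below are illustrative assumptions, not Honeycomb's actual SDK or schema:

```python
# Sketch of emitting a wide, high-cardinality structured event of the
# kind observability platforms ingest: one JSON record per request,
# carrying enough context to slice by any field later.
import json
import time

def emit_event(sink, **fields):
    """Serialize one structured event and append it to a sink."""
    event = {"timestamp": time.time(), **fields}
    sink.append(json.dumps(event))
    return event

events = []
emit_event(
    events,
    service="checkout",
    endpoint="/cart/pay",
    user_id="u_18273",         # high-cardinality field: one value per user
    build_id="2018.03.07-a1",  # lets you group anomalies by deploy
    duration_ms=187,
    status=200,
)

print(len(events))  # 1
```

The payoff of paying the instrumentation cost up front is that questions like "which build slowed down checkout for which users?" become a filter over fields, not a new logging change.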

Contact Info

@samstokes on Twitter
Blog
samstokes on GitHub

Parting Question

IBM Power Systems Bits: Understanding IBM Patterns for Cognitive Systems

This IBM® Redpaper™ publication addresses IBM Patterns for Cognitive Systems topics for anyone developing, implementing, and using cognitive solutions on IBM Power Systems™ servers. Moreover, this publication provides documentation to transfer that knowledge to the sales and technical teams. This publication describes IBM Patterns for Cognitive Systems. Think of a pattern as a use case for a specific scenario, such as event-based real-time marketing for real-time analytics, anti-money laundering, and addressing data oceans by reducing the cost of Hadoop. These examples are just a few of the cognitive patterns that are now available. Patterns identify and address challenges for cognitive infrastructures. These entry points then help you understand where you are on the cognitive journey and enable IBM to demonstrate the set of solution capabilities for each lifecycle stage. This book targets technical readers, including IT specialists, systems architects, data scientists, developers, and anyone looking for a guide about how to unleash the cognitive capabilities of IBM Power Systems by using patterns.

Summary

As communications between machines become more commonplace the need to store the generated data in a time-oriented manner increases. The market for timeseries data stores has many contenders, but they are not all built to solve the same problems or to scale in the same manner. In this episode the founders of TimescaleDB, Ajay Kulkarni and Mike Freedman, discuss how Timescale was started, the problems that it solves, and how it works under the covers. They also explain how you can start using it in your infrastructure and their plans for the future.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure.
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes, or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Ajay Kulkarni and Mike Freedman about TimescaleDB, a scalable timeseries database built on top of PostgreSQL.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what Timescale is and how the project got started?
The landscape of time series databases is extensive and oftentimes difficult to navigate. How do you view your position in that market and what makes Timescale stand out from the other options?
In your blog post that explains the design decisions for how Timescale is implemented you call out the fact that the inserted data is largely append only, which simplifies the index management. How does Timescale handle out of order timestamps, such as from infrequently connected sensors or mobile devices?
How is Timescale implemented and how has the internal architecture evolved since you first started working on it?
What impact has the 10.0 release of PostgreSQL had on the design of the project?
Is Timescale compatible with systems such as Amazon RDS or Google Cloud SQL?
For someone who wants to start using Timescale what is involved in deploying and maintaining it?
What are the axes for scaling Timescale and what are the points where that scalability breaks down?
Are you aware of anyone who has deployed it on top of Citus for scaling horizontally across instances?
What has been the most challenging aspect of building and marketing Timescale?
When is Timescale the wrong tool to use for time series data?
One of the use cases that you call out on your website is for systems metrics and monitoring. How does Timescale fit into that ecosystem and can it be used along with tools such as Graphite or Prometheus?
What are some of the most interesting uses of Timescale that you have seen?
Which came first, Timescale the business or Timescale the database, and what is your strategy for ensuring that the open source project and the company around it both maintain their health?
What features or improvements do you have planned for future releases of Timescale?
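The out-of-order-timestamp question above has a simple intuition behind it: if storage is partitioned into fixed time ranges, a late-arriving reading is just routed to the (possibly older) partition that owns its timestamp. This is a rough illustrative model of that routing, under assumed chunk sizes, not Timescale's actual hypertable implementation:

```python
# Rough model of time-partitioned storage: rows are routed to fixed-width
# time chunks, so a late-arriving reading still lands in the chunk that
# owns its timestamp instead of forcing a global re-sort.
from collections import defaultdict

CHUNK_SECONDS = 3600  # one chunk per hour; an arbitrary width for this sketch

def chunk_for(ts):
    """Return the start of the chunk that owns timestamp ts."""
    return ts - (ts % CHUNK_SECONDS)

def insert(chunks, ts, value):
    chunks[chunk_for(ts)].append((ts, value))

chunks = defaultdict(list)
insert(chunks, 7200, 21.5)   # arrives first
insert(chunks, 7230, 21.7)
insert(chunks, 3650, 20.9)   # out of order: a reading from the previous hour

print(sorted(chunks))  # [3600, 7200]
```

Appends to the newest chunk stay cheap and index-friendly, while the occasional straggler only touches the one older chunk it belongs to.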

Contact Info

Ajay

LinkedIn
@acoustik on Twitter
Timescale Blog

Mike

Website
LinkedIn
@michaelfreedman on Twitter
Timescale Blog

Timescale

Website
@timescaledb on Twitter
GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Timescale PostGreSQL Citus Timescale Design Blog Post MIT NYU Stanford SDN Princeton Machine Data Timeseries Data List of Timeseries Databases NoSQL Online Transaction Processing (OLTP) Object Relational Mapper (ORM) Grafana Tableau Kafka When Boring Is Awesome PostGreSQL RDS Google Cloud SQL Azure DB Docker Continuous Aggregates Streaming Replication PGPool II Kubernetes Docker Swarm Citus Data

Website Data Engineering Podcast Interview

Database Indexing B-Tree Index GIN Index GIST Index STE Energy Redis Graphite Prometheus pg_prometheus OpenMetrics Standard Proposal Timescale Parallel Copy Hadoop PostGIS KDB+ DevOps Internet of Things MongoDB Elastic DataBricks Apache Spark Confluent New Enterprise Associates MapD Benchmark Ventures Hortonworks 2σ Ventures CockroachDB Cloudflare EMC Timescale Blog: Why SQL is beating NoSQL, and what this means for the future of data

The intro and outro music is from The Hug by The Freak Fandango Orchestra: http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug

In this episode, Wayne Eckerson and Lenin Gali discuss the past and future of the cloud and big data.

Gali is a data analytics practitioner who has always been on the leading edge of where business and technology intersect. He was one of the first to move data analytics to the cloud when he was BI director at ShareThis, a social media-based services provider. While at Ubisoft, he was instrumental in defining an enterprise analytics strategy and developing a data platform, built on Hadoop and Teradata, that brought games and business data together to enable thousands of data users to build better games and services. He is now spearheading the creation of a Hadoop-based data analytics platform at Quotient, a digital marketing technology firm in the retail industry.

Marketing attribution is a buzzword that has everyone in the paid advertising community talking. But what does it mean for you? Is it really a magic bullet, or is attribution just a bunch of bullshit? This talk goes through the need for attribution, shares practical applications and technologies, and ends with a framework for you to get started.

Join renowned expert Julien Coquet around the fireplace as he takes on the role of a digital analytics Gordon Ramsay. Like most digital marketing projects, digital analytics projects suffer from a lack of vision and planning. This leads to poorly executed projects that yield poor results. Sometimes, analytics only becomes a concern after a new manager arrives, who then discovers how badly things turned out! Julien will share stories about the worst analytics situations he has ever encountered, along with simple yet effective solutions to these problems.

talk
by Jim Sterne (Board Chair, Digital Analytics Association - USA)

AI and Machine Learning will become an integral part of your marketing analytics life so before Matt Gershoff explains how it works, Jim walks you through what it is and how it is being used. From natural language processing and computer vision to chatbots and robots, you'll see how AI is applied to customer interaction. Then, Jim dives into machine learning so you can determine which software services are worth your time, communicate better with the data scientists in your company, decide to become one yourself, and figure out how and where to bring AI and ML into your marketing tool suite.

With BigQuery, businesses and organisations have a unique chance to take their analytics data and start the transformation towards a data lake. By combining customer, analytics, marketing, and CRM data, we not only get a repository with room to add or work with data as we see fit, we also open up the opportunity to use machine learning to sift through our data and help determine the causality and relationships between individual data points. This way we use the full power of data to define our segments and profiles based on actual behavior, not our prejudice.
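"Letting the data define the segments" can be made concrete with even a toy clustering step. The sketch below runs a tiny one-dimensional k-means over a made-up "sessions per week" metric so the boundary between segments comes from behavior rather than a hand-picked rule; a real pipeline would use many behavioral features over data exported from (or modeled inside) BigQuery, and the numbers here are invented:

```python
# Toy behavior-driven segmentation: a 1-D k-means over sessions per week.
# The data, not a manager's hunch, decides where "casual" ends and
# "engaged" begins.

def kmeans_1d(values, centers, rounds=10):
    """Tiny k-means for one-dimensional data; returns final centers and clusters."""
    clusters = [[] for _ in centers]
    for _ in range(rounds):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

sessions_per_week = [1, 2, 2, 3, 14, 15, 17, 20]
centers, clusters = kmeans_1d(sessions_per_week, centers=[0.0, 10.0])

print(centers)  # [2.0, 16.5]: a "casual" and an "engaged" segment emerge
```

Swap in real features (recency, spend, content mix) and more clusters, and the same idea yields profiles grounded in what users actually do.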

All attribution models are wrong, but some are useful. Except for Data Driven Attribution models of course, which just might be right. Correct? Or not? Peter wants to reframe the discussion so it is not about attribution models but about techniques for optimising your marketing spend. He will talk about why the technique of attribution is wrong and where to invest your time and resources instead.
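The "all models are wrong" point is easy to see concretely: the same conversion path gets credited very differently depending on which rule you pick. A minimal sketch comparing two classic rules (illustrative only, with an invented path and order value):

```python
# Two classic attribution rules applied to one conversion path, showing
# how the choice of model, not the data, decides which channel "wins".

def last_click(path, value):
    """All credit to the final touchpoint before conversion."""
    return {path[-1]: value}

def linear(path, value):
    """Equal credit to every touchpoint on the path."""
    share = value / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0) + share
    return credit

path = ["display", "social", "search"]  # touchpoints before a $90 order
print(last_click(path, 90))  # {'search': 90}
print(linear(path, 90))      # {'display': 30.0, 'social': 30.0, 'search': 30.0}
```

Under last-click, display and social look worthless; under linear, they carry a third of the value each, which is exactly why the budget-optimisation framing matters more than the model label.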

The increasing complexity of the digital landscape on one side and huge business expectations on the other are the driving forces of change in e-commerce. Fueled by tons of data, machine learning and artificial intelligence are slowly becoming the norm. But algorithms by themselves won't be able to change companies and deliver success. Entire companies need to change as well. How do you embrace this change? Where do you start and what should you expect? How do you organize yourselves? We'll deep-dive into a data-driven digital marketing framework, follow with insights and case studies from clients, and finish up with a stack of tools and takeaways you can use to produce some quick wins.