talk-data.com

Speaker

Tim Wilson

25 talks

Host, Analytics Power Hour - Columbus (OH)

Talks & appearances

284 activities · Newest first

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Who would have thought that we'd get to 2020 and still be debating whether recurring reports should include "insights?" As it turns out, Tim did an ad hoc analysis back in 2015 where he predicted exactly that! Unfortunately, the evidence is buried in the outbox of his email account at a previous employer. So, instead, we've opted to just tackle the topic head-on: what is a report, anyway? What are the different types of reports? What should they include? What should they leave out? And where does "analysis" fall in all of this? We have so many opinions on the subject that we didn't even bring on a guest for this episode! So, pop in your earbuds, pull out your notebook, and start taking notes, as we'll expect a report on what you think of the show once you're done giving it a listen! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Josh Crowhurst, Moe Kiss (Canva), Michael Helbling (Search Discovery)

It's the end of the year, and we know it, and we feel fiiiiine. Or, maybe we have a little anxiety. But, for the fifth year in a row, we're wrapping up the year with a reflective episode: reflecting on changes in the analytics industry, the evolution of the podcast, and the interpersonal dynamics between Tim and Michael. From the state of diversity in the industry (and on the show), to the trends in analytics staffing and careers, to the growing impact of ethical and privacy considerations on the role of the analyst, it's an episode chock full of agreement, acrimony, and angst. And, it's an episode with a special "guest;" it's the first time that producer Josh Crowhurst is on mic doing something besides simply keeping our advertisers happy! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Nancy Duarte, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Once upon a time, there was an analyst. And that analyst had some data. She used that data to do some analysis, and from that analysis she realized she had some recommendations she could make to her organization. This was the point where our intrepid analyst reached a metaphorical fork in Communication Road: would she hastily put all of her thoughts together quickly in a slide deck with charts and graphs and bullets, or would she pause, step back, and craft a true data story? Well, if she listened to this episode of the podcast with presentation legend Nancy Duarte, author of five award-winning books (the most recent one — DataStory: Explain Data and Inspire Action Through Story — being the main focus of this episode) she would do the latter, and her story would have a happy ending indeed! For complete show notes, including links to items mentioned in the episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Yali Sassoon (Snowplow), Moe Kiss (Canva), Michael Helbling (Search Discovery)

How accurate is your data? How accurate is any of our data? If our data is more accurate, will we make better decisions? How MUCH better? Why do the show blurbs of late have so many questions? THAT is a question we can ACCURATELY answer: because the shows grapple with challenging questions! On this episode, Snowplow co-founder Yali Sassoon joined us to chat about the nuts and bolts of data accuracy: the inherent messiness of client-side tracking (but, also, the limitations of server-side tracking), strategies of incrementally improving data accuracy (and the costs therein), and the different types of scenarios where different aspects of data accuracy matter in different ways! Pour yourself a drink (a 2 oz. shot of a fine Scotch will do... which would be 59.1471 ml if you want an accurate and precise metric pour), settle in, and give it a listen! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

READ ME!!! LISTEN!!! DO YOU KNOW WHY THIS IS IN ALL CAPS?! IS IT RAISING YOUR HEART RATE?! IS IT MAKING YOU A LITTLE IRRITATED?! IT MIGHT BE! IF IT IS, WE COULD MEASURE IT, AND MAYBE WE WOULD REALIZE THAT WE WERE INDUCING A SUBCONSCIOUS EMOTIONAL RESPONSE AND REALLY SHOULD TURN OFF THE CAPS LOCK! That's the topic of this episode: the brain. Specifically: neuroscience. Even more specifically: neurodesign and neuromarketing and the measurement and analytics therein. We're talking EEGs, eye tracking, predictive eye tracking, heart rate monitoring, and the like (and why it matters) with Diana Lucaci from True Impact. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Multi-touch attribution is like fat free cheese: it sounds like a great idea, it seems like technology would have made it amazing and delicious by now, and, yet, the reality is incredibly unsatisfying. Since we've recently covered how browsers are making the analyst's lot in life more difficult, and since multi-touch attribution is affected by those changes, we figured it was high time to revisit the topic. It's something we've covered before (twice, actually). But interest in the topic has not diminished, while a claim could be made that reality has gone from being merely a cold dishrag to the face to being a bucket of ice over the head. We sat down with Priscilla Cheung to hash out the topic. No fat free cheese was consumed during the making of the episode. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Emily Oster (Brown University), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Did you hear the one about the Harvard-educated economist who embraced her inner wiring as a lateral thinker to explore topics ranging from HIV/AIDS in Africa to the impact of Hepatitis B on male-biased sex ratios in China to the range of advice and dicta doled out by doctors and parents and in-laws and friends about what to do (and not do!) during pregnancy? It's a data-driven tale if ever there was one! Emily Oster, economics professor at Brown University and bestselling author of Expecting Better and Cribsheet, joined the show to chat about what happens when the evidence (the data!) doesn't match conventional wisdom, and strategies for presenting and discussing topics where that's the case. Plus causal inference! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Are you down with ITP? What about ETP? Are you pretty sure that the decline in returning visitors to your site that has everyone in a tizzy is largely due to increasingly restrictive cookie handling by browsers? Do you really, really, REALLY want Google, Apple, Mozilla, and even Microsoft to get on the same page when it comes to cookie handling and JavaScript subtleties? So many questions! Lucky for us (and you!), Measure Slack legend (and L.L. Bean Senior Programmer/Analyst) Cory Underwood has some answers. Or, at least, he will depress you in delightful ways. For complete show notes, including links to items mentioned in this episode, a transcript of the show, and an update on ITP 2.3 from Cory, visit the show page.

Have you ever noticed that 68.2% of the people who explain machine learning use a "this picture is a cat" example, and another 24.3% use "this picture is a dog?" Is there really a place for machine learning and the world of computer vision (or machine vision, which we have conclusively determined is a synonym) in the real world of digital analytics? The short answer is the go-to answer of every analyst: it depends. On this episode, we sat down with Ali Vanderveld, Director of Data Science at ShopRunner, to chat about some real world applications of computer vision, as well as the many facets and considerations therein! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Augustine Fou, Moe Kiss (Canva), Michael Helbling (Search Discovery)
CDP

What percentage of digital ad impressions and clicks do you think is actually the work of non-human bots? Pick a number. Now double it. Double it again. You're getting close. A recent study by Pixalate found that 19 percent of traffic from programmatic ads in the U.S. is fraudulent. David Raab from the CDP Institute found this number to be "optimistic." Ad fraud historian Dr. Augustine Fou, our guest on this show, has compelling evidence that the actual number could easily be north of 50 percent. Why? Who benefits? Why is it hard to stamp out? Is it illegal (it isn't!)? We explore these topics and more on this episode! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Astrid Illum (DFDS A/S), Moe Kiss (Canva), Michael Helbling (Search Discovery)

It's 1:00 AM, and you can't sleep. The paid search manager needs to know whether brand keywords can be turned off without impacting revenue. The product team needs the latest A/B test results analyzed before they can start on their next sprint. The display media intern urgently needs your help figuring out why the campaign tracking parameters he added for the campaign that launches in two days are breaking the site (you're pretty sure he's confusing "&" and "?" again). And the team running the site redesign needs to know YESTERDAY what fields they need to include in the new headless CMS to support analytics. You're pulled in a million directions, and every request is valid. How do you manage your world without losing your sanity? On this episode, analytics philosopher Astrid Illum from DFDS joins the gang to discuss those challenges. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
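
The "&" versus "?" mix-up mentioned above is a concrete mechanic: the first query parameter on a URL is introduced by "?", and each additional parameter is joined with "&", so hand-concatenating a second "?" produces a malformed URL. As a minimal sketch (the URL, parameter names, and helper function here are hypothetical, not from the episode), letting a parser do the joining avoids the problem:

```python
# Hypothetical sketch: append campaign tracking parameters to a landing-page
# URL, using "?" or "&" as appropriate instead of hand-concatenating strings.
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_campaign_params(url: str, params: dict) -> str:
    """Merge tracking parameters into a URL's existing query string."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # keep any parameters already present
    query.update(params)
    return urlunparse(parts._replace(query=urlencode(query)))

landing = "https://example.com/landing?gclid=abc123"
utm = {"utm_source": "display", "utm_medium": "banner", "utm_campaign": "spring_launch"}

print(add_campaign_params(landing, utm))
# https://example.com/landing?gclid=abc123&utm_source=display&utm_medium=banner&utm_campaign=spring_launch

# The intern's likely mistake: landing + "?utm_source=display&..." yields
# ".../landing?gclid=abc123?utm_source=display&...", which servers and tag
# managers will not parse as intended.
```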

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Somewhere between "welcome to the company, now get to work!" and weeks of tedious orientation sessions (that, presumably, include a few hours with the legal department explaining that, should you be on a podcast, you need to include a disclaimer that the views expressed on the podcast are your own and not those of the company for which you now work), is a happy medium when it comes to onboarding an analyst. What is that happy medium, and how does one find it? It turns out the answer is that favorite of analyst phrases: "it depends." Unsatisfying? Perhaps. But, listeners who have been properly onboarded to this podcast know that "unsatisfying" is our bread and butter. So, in this episode, Moe and Michael share their thoughts and their emotional intelligence on the subject of analyst onboarding, while Tim works to make up for recent deficiencies in the show's use of the "explicit" tag. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Bradley Fay (DraftKings), Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Listen. Really. That's what you can do. You can listen to this episode and find out what you learn. Or you can NOT listen to the show and NOT find out what you learn. You can't do both, which means that, one way or the other, you WILL be creating your very own counterfactual! That, dear listener, is a fundamental concept when it comes to causal inference. Smart analysts and data scientists the world over are excited about the subject, because it provides a means of thinking and application techniques for actually getting to causality. Bradley Fay from DraftKings is one of those smart data scientists, so the gang sat down with him to discuss the subject! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
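
The counterfactual framing in this blurb maps directly onto the potential-outcomes setup used in causal inference: each unit has an outcome with treatment and an outcome without it, and only one of the two can ever be observed. Here is a toy sketch (all numbers invented, not from the episode) showing how random assignment lets a simple difference in means recover the average treatment effect even though every individual counterfactual stays hidden:

```python
# Toy potential-outcomes simulation: every unit has y0 (untreated outcome) and
# y1 (treated outcome), but we only ever observe one of them.
import random

random.seed(42)
n = 10_000
units = []
for _ in range(n):
    y0 = random.gauss(10, 2)
    y1 = y0 + 1.5                     # true individual treatment effect of +1.5 (made up)
    treated = random.random() < 0.5   # random assignment
    observed = y1 if treated else y0  # the other outcome is the counterfactual
    units.append((treated, observed))

treated_mean = sum(y for t, y in units if t) / sum(1 for t, _ in units if t)
control_mean = sum(y for t, y in units if not t) / sum(1 for t, _ in units if not t)
print(f"Estimated average treatment effect: {treated_mean - control_mean:.2f}")  # ~1.5
```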

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Have you ever thought it would be a great idea to have a drink or two, grab a microphone, and then air your grievances in a public forum? Well, we did! This episode of the show was recorded in front of a live audience (No laugh tracks! No canned applause!) at the Marketing Analytics Summit (MAS) in Las Vegas. Moe, Michael, and Tim used a "What Grinds Our Gears?" application to discuss a range of challenges and frustrations that analysts face. They (well, Moe and Tim, of course) disagreed on a few of them, but they occasionally even proposed some ways to address the challenges, too. To more effectively simulate the experience, we recommend pairing this episode with a nice Japanese whiskey, which is what the live audience did! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Finn Lattimore (Gradient Institute), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Did you hear the one about how the AI eliminated cancer? It just wiped out the human race! As machine learning and artificial intelligence are woven more and more into the fabric of our daily lives, we are increasingly seeing that decisions based purely on code require a lot of care to ensure that the code truly behaves as we would like it to. As one high profile example after another demonstrates, this is a tricky challenge. On this episode, Finn Lattimore from Gradient Institute joined the gang to discuss the different dimensions of the challenge! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Maryam Jahanshahi (TapRecruit), Moe Kiss (Canva), Michael Helbling (Search Discovery)

What's in a job title? That which we call a senior data scientist by any other job title would model as predictively...  This, dear listener, is why the hosts of this podcast crunch data rather than dabble in iambic pentameter. With sincere apologies to William Shakespeare, we sat down with Maryam Jahanshahi to discuss job titles, job descriptions, and the research, experiments, and analysis that she has conducted as a research scientist at TapRecruit, specifically relating to data science and analytics roles. The discussion was intriguing and enlightening! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Remember that time you ran a lunch-and-learn at your company to show a handful of co-workers some Excel tips? What would have happened if you actually needed to fully train them on Excel, and there were approximately a gazillion users? Or, have you ever watched a Google Analytics or Google Tag Manager training video? Or perused their documentation? How does Google actually think about educating a massive and diverse set of users on their platform? And, what can we learn from that when it comes to educating our in-house users on tools, processes, and concepts? In this episode, Justin Cutroni from Google joined the gang to discuss this very topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

A simple recipe for a delicious analytics platform: combine 3 cups of data schema with a pinch of JavaScript in a large pot of cloud storage. Bake in the deployment oven for a couple of months, and savory insights will emerge. Right? Why does this recipe have both 5-star and 1-star ratings?! On this episode, long-standing digital analytics maven June Dershewitz, Director of Analytics at Twitch, drops by the podcast's analytics kitchen to discuss the relative merits of building versus buying an analytics platform. Or, of course, doing something in between!

The episode was originally 3.5 hours long, but we edited out most of Michael's tangents into gaming geekdom, which brought the run-time down to a more normal length.

For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery), Kasper Rasmussen (Accutics)

We're not sure what's going on with this episode. For some reason, we have a bunch of first-time listeners, and they're all from Apple devices! Maybe it's because the show only comes out every two weeks, and the first-party cookies we've been using to track our listeners are now expiring after seven days! (This is a hilarious episode description if you're well-versed in the ins and outs and ethical and philosophical aspects of WebKit's Intelligent Tracking Prevention (ITP) 2.1. If you're not, then you might want to listen to the gang chat with Kasper Rasmussen from Accutics about the topic, as it's likely already impacting the traffic to your site!) For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
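
The joke in this description rests on a real mechanism: WebKit's ITP 2.1 caps the lifetime of cookies set via JavaScript at roughly seven days, so a listener who only returns when the next biweekly episode drops comes back after their identifier cookie has already expired and gets counted as new. A rough illustrative sketch (dates, intervals, and identifiers are made up, not from the episode):

```python
# Illustrative simulation of a 7-day cap on a script-set first-party cookie
# versus a listener who returns every 14 days: each visit mints a fresh ID.
from datetime import datetime, timedelta
from itertools import count

COOKIE_LIFETIME = timedelta(days=7)   # ITP 2.1-style cap on document.cookie
VISIT_INTERVAL = timedelta(days=14)   # the show ships every two weeks

new_id = count(1)
cookie = None          # (visitor_id, expires_at)
visit = datetime(2019, 4, 1)

for _ in range(5):
    if cookie is None or visit > cookie[1]:
        cookie = (f"visitor-{next(new_id)}", visit + COOKIE_LIFETIME)
        label = "counted as a NEW visitor"
    else:
        label = "counted as a returning visitor"
    print(f"{visit.date()}  id={cookie[0]}  {label}")
    visit += VISIT_INTERVAL
# With a 14-day visit gap and a 7-day cookie cap, no visit ever looks returning.
```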

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Have you ever attended a conference? Did you know that analysts over-index towards introversion? Have you ever struggled to figure out how to start a conversation over a cold pastry and a cup of tepid coffee at a conference breakfast? IS there actually a point in developing and executing a strategy when it comes to attending a conference? Is it annoying to listen to people who speak pretty regularly at conferences pontificate about speaking at conferences? Some of these questions are answered on this episode! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Erik Driessen (Greenhouse Group), Moe Kiss (Canva), Michael Helbling (Search Discovery)

We thought we deserved a break from the podcast, so we went looking for some AI to take over the episode. Amazon Polly wasn't quite up to the task, unfortunately, so we wound up sitting down as humans with another human -- Erik Driessen from Greenhouse -- to chat about the different ways that automation can be put to use in the service of analytics: from pixel deployment to automated alerts to daily reports, there are both opportunities and pitfalls! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

podcast_episode
with Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Stacey Goers (National Public Radio (NPR)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Do you know something that is really simple? Really Simple Syndication (aka, RSS). Did you know that RSS is the backbone of podcast delivery? Well, aren't you clever! What's NOT really simple is effectively measuring podcasts when a key underlying component is a glorified text file that tells an app how to download an audio file. Advertisers, publishers, and content producers the world over have been stuck with "downloads" as their key -- and pretty much only -- metric for years. That's like just counting "hits" on a website! But, NPR is leading an initiative to change all that through Remote Audio Data, or RAD. Stacey Goers, product manager for podcasts at National Public Radio, joins the gang on this episode to discuss that effort: how it works, how it's rolling out, and the myriad parallels podcast analytics has to website and mobile analytics! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
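
Since the blurb leans on the point that podcast delivery hinges on "a glorified text file," here is a minimal, invented example of what that file looks like and why "downloads" became the default metric: the feed's enclosure tag simply tells the app where to fetch the audio, and a download is nothing more than a request for that URL. (The feed content and URLs below are hypothetical.)

```python
# Parse a tiny, made-up podcast RSS feed and pull out the audio enclosure URL.
import xml.etree.ElementTree as ET

feed = """<rss version="2.0"><channel>
  <title>Example Analytics Podcast</title>
  <item>
    <title>Episode 1: A Hypothetical Episode</title>
    <enclosure url="https://example.com/audio/ep1.mp3"
               length="34567890" type="audio/mpeg"/>
  </item>
</channel></rss>"""

root = ET.fromstring(feed)
for item in root.iter("item"):
    title = item.findtext("title")
    enclosure = item.find("enclosure")
    # Every request an app makes for this URL is what gets counted as a "download."
    print(title, "->", enclosure.get("url"))
```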

podcast_episode
with Val Kroll, Julie Hoyer, Steve Mulder (National Public Radio (NPR)), Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

"Hey, Google! How do you measure yourself?" "I'm sorry. I can't answer that question. Would you like to listen to a podcast that can?" National Public Radio has long been on the forefront of the world of audio media. Why, you might even remember episode #046, where Steve Mulder from NPR made his first appearance on the show discussing the cans and cannots of podcast measurement! On this episode, Mulder returns to chat about how much more comfortable we have become when it comes to conversing with animated inanimate objects, as well as the current state of what data is available (and how) to publishers and brands who have ventured into this brave new world. "Alexa! Play the Digital Analytics Power Hour podcast!" For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

DIGITAL ANALYTICS MEETS DATA SCIENCE: USE CASES FOR GOOGLE ANALYTICS

Past attendees of Superweek have ridden along with Tim as he explored R, and then as he dove deeper into some of the fundamental concepts of statistics. In this session, he will provide the latest update on that journey: how he is putting his exploration into the various dimensions of data science to use with real data and real clients. The statistical methods will be real, the code will be R (and available on GitHub), and the data will only be lightly obfuscated. So, you will be able to head back to your room at the next break and try one or more of the examples out on your own data! (But, don't do that -- the food and conversation at the breaks are too good to miss!)