talk-data.com

Topic: Analytics

Tags: data_analysis, insights, metrics

4552 tagged activities

Activity Trend: peak of 398 activities per quarter (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

Tableau: Creating Interactive Data Visualizations

Illustrate your data in a more interactive way by implementing data visualization principles and creating visual stories using Tableau.

About This Book: Use data visualization principles to design dashboards that enlighten and support business decisions. Integrate your data to provide mashed-up dashboards. Connect to various data sources and understand what data is appropriate for Tableau Public. Understand chart types and when to use specific chart types with different types of data.

Who This Book Is For: Data scientists who have just started using Tableau and want to build on their skills using practical examples. Familiarity with previous versions of Tableau is helpful but not necessary.

What You Will Learn: Customize your designs to meet the needs of your business using Tableau. Use Tableau to prototype, develop, and deploy the final dashboard. Create filled maps and use any shape file. Discover features of Tableau Public, from basic to advanced. Build geographic maps to bring context to data. Create filters and actions to allow greater interactivity in Tableau Public visualizations and dashboards. Publish and embed Tableau visualizations and dashboards in articles.

In Detail: With increasing interest in data visualization in the media, businesses are looking to create effective dashboards that engage as well as communicate the truth of their data. Tableau makes data accessible to everyone and is a great way of sharing enterprise dashboards across the business. It is a toolkit that lets you simply and effectively create high-quality data visualizations. This course starts by familiarizing you with Tableau's features and enables you to develop and enhance your dashboard skills, beginning with an overview of what a dashboard is, followed by how you can collect data using various mathematical formulas. Next, you'll learn to filter and group data, as well as how to use various functions to present the data in an appealing and accurate way.

In the first module, you will learn how to use key advanced string functions to play with data and images. You will be walked through the various features of Tableau, including dual axes, scatterplot matrices, heat maps, and sizing. In the second module, you'll start by getting your data into Tableau, move on to generating progressively complex graphics, and end with the finishing touches and packaging your work for distribution. This module is filled with practical examples to help you create filled maps, use custom markers, add slider selectors, and create dashboards. You will learn how to manipulate data in various ways by applying filters and logic and calculating aggregate measures. Finally, in the third module, you will learn about Tableau Public, which allows readers to explore data associations in multiple-sourced public data and uses state-of-the-art dashboard and chart graphics to immerse users in an interactive experience. Here, readers can quickly gain confidence in understanding and expanding their visualization-creation knowledge and create interesting, interactive data visualizations that bring richness and vibrancy to complex articles. The course provides a great overview for beginner to intermediate Tableau users and covers the creation of data visualizations of varying complexity.

Style and Approach: We take a combined perspective, starting with basic recipes and moving on to advanced ones. Finally, we perform advanced analytics and create appealing and insightful data stories using Tableau Public in a step-by-step manner.

Disruptive Analytics: Charting Your Strategy for Next-Generation Business Analytics

Learn all you need to know about seven key innovations disrupting business analytics today: the open source business model, cloud analytics, the Hadoop ecosystem, Spark and in-memory analytics, streaming analytics, deep learning, and self-service analytics. These innovations are radically changing how businesses use data for competitive advantage. Taken together, they are disrupting the business analytics value chain and creating new opportunities. Enterprises that seize the opportunity will thrive and prosper, while others will struggle and decline: disrupt or be disrupted. This book provides strategies to profit from disruption. It shows you how to organize for insight, build and provision an open source stack, practice lean data warehousing, and assimilate disruptive innovations into an organization. Through a short history of business analytics and a detailed survey of products and services, analytics authority Thomas W. Dinsmore provides a practical explanation of the most compelling innovations available today.

What You'll Learn: Discover how the open source business model works and how to make it work for you. See how cloud computing completely changes the economics of analytics. Harness the power of Hadoop and its ecosystem. Find out why Apache Spark is everywhere. Discover the potential of streaming and real-time analytics. Learn what deep learning can do and why it matters. See how self-service analytics can change the way organizations do business.

Who This Book Is For: Corporate actors at all levels of responsibility for analytics: analysts, CIOs, CTOs, strategic decision makers, managers, systems architects, technical marketers, product developers, IT personnel, and consultants.

Big Data War

This book focuses on why data analytics fails in business. It provides an objective analysis of the root causes of the phenomenon, instead of abstract criticism of the utility of data analytics. The author then explains in detail how companies can survive and win the global big data competition, based on actual company cases. Having established an execution- and performance-oriented big data methodology over 10 years of field experience as an authority on big data strategy, the author identifies core principles of data analytics through case analysis of the failures and successes of actual companies. Moreover, he shares with readers the principles behind how innovative global companies became successful through their use of big data. This book is a quintessential guide to big data analytics, condensing the author's know-how from direct and indirect experience. How do we survive in this big data war, in which Facebook in social networking, Amazon in e-commerce, and Google in search expand their platforms into other areas from their respective distinct markets? The answer can be found in this book.

Introduction to R for Business Intelligence

Master the essentials of using R for Business Intelligence in this practical guide. Through real-world use cases, learn to manipulate data, build predictive models, and create interactive dashboards to communicate insights effectively.

What this book will help me do: Extract, clean, and analyze complex datasets for business applications. Apply advanced statistical and machine learning techniques like predictive modeling and clustering. Gain proficiency in creating interactive dashboards using R and the Shiny package. Develop real-world analytics skills that enhance decision-making processes. Integrate Business Intelligence workflows using R across operations, marketing, and finance domains.

Author: Gendron is an expert in data science and business analytics, passionate about teaching professionals to make data-driven decisions. With extensive experience in R programming, Gendron has a knack for breaking down complex topics into easily digestible knowledge. This practical approach ensures readers not only understand the concepts but can apply them directly.

Who is it for? This book is ideal for data analysts, business professionals, and entry-level data scientists looking to enhance their analytical skills. If you're familiar with basic R programming and aspire to derive actionable insights from data in a business context, this is the resource for you. It will also resonate with those in operations, marketing, or finance seeking to integrate data analysis into their decision-making.

In this session, David Rose, CEO of Ditto Labs, sat down with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey as a data-driven executive, best practices, some thought leadership on visualization and usability, and some of the challenges and opportunities he's observing at an analytics-driven startup.

Timeline: 0:29 David's journey. 4:50 Bringing technology to everyday objects. 9:37 Sensors and photosensors. 13:02 Choosing the right use cases. 16:54 On deep learning. 21:49 Working on new use cases in image processing. 26:05 Ditto Labs's allure classifiers. 28:15 Challenges as an entrepreneur in an image processing company. 32:50 Technical challenges Ditto faces. 36:58 Privacy and IoT. 40:17 Different countries, different legal norms on privacy. 42:55 Data culture at an image processing company. 44:46 Opportunities in the image processing stack.

Podcast Link: https://futureofdata.org/analyticsweek-leadership-podcast-with-david-rose-ditto-labs/

If you are interested in the vision catalog (as discussed in the video): http://www.slideshare.net/davidloring...

David's website: enchantedobjects.com

Here's David's Bio: David is the CEO of Ditto Labs, an image-recognition software company whose platform scours social media photos to find brands and products.

His new book, Enchanted Objects, focuses on the future of the internet of things and how these technologies will impact how we live and work.

Prior to Ditto, David founded and was CEO at Vitality, a company that reinvented medication packaging now distributed by CVS, Walgreens, and Express Scripts.

He founded Ambient Devices, which pioneered glanceable technology: embedding internet information in everyday objects like lamps, mirrors, and umbrellas.

David holds patents for photo sharing, interactive TV, ambient information displays, and medical devices. His work has been featured at the MoMA, covered in the New York Times, WIRED, and The Economist, and parodied on the Colbert Report.

David co-teaches a popular course in tangible user interfaces at the MIT Media Lab with Hiroshi Ishii. He is a frequent speaker to corporations and design and technology conferences.

He received his BA in Physics from St. Olaf College, studied Interactive Cinema at the MIT Media Lab, and earned a Master's at Harvard.

Follow @davidrose

The podcast is sponsored by: TAO.ai (https://tao.ai), Artificial Intelligence Driven Career Coach

About #Podcast:

The FutureOfData podcast is a conversation starter, bringing together leaders, influencers, and leading practitioners to discuss their journeys toward creating the data-driven future.

Want to join? If you or anyone you know wants to take part, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Data Analysis Plans: A Blueprint for Success Using SAS

Data Analysis Plans: A Blueprint for Success Using SAS gets you started on building an effective data analysis plan with a solid foundation for planning and managing your analytics projects. Data analysis plans are critical to the success of analytics projects and can improve the workflow of your project when implemented effectively. This book provides step-by-step instructions on writing, implementing, and updating your data analysis plan. It emphasizes the concept of an analysis plan as a working document that you update throughout the life of a project.

This book will help you manage the following tasks:

control client expectations

limit and refine the scope of the analysis

enable clear communication and understanding among team members

organize and develop your final report

SAS users of any level of experience will benefit from this book, but beginners will find it extremely useful as they build foundational knowledge for performing data analysis and hypothesis testing. Subject areas include medical research, public health research, social studies, educational testing and evaluation, and environmental studies.

IBM Data Engine for Hadoop and Spark

This IBM® Redbooks® publication helps the technical community take advantage of the resilience, scalability, and performance of the IBM Power Systems™ platform to implement or integrate an IBM Data Engine for Hadoop and Spark solution, so that analytics solutions can access, manage, and analyze data sets to improve business outcomes. It demonstrates the analytics strengths of the IBM POWER8® platform, the IBM analytics software portfolio, and selected third-party tools for solving customers' data analytics workload requirements. The book describes how to plan, prepare, install, integrate, manage, and use the IBM Data Engine for Hadoop and Spark solution to run analytic workloads on IBM POWER8. In addition, it complements available IBM analytics solutions to help meet your data analytics needs. The publication strengthens the position of IBM analytics and big data solutions with a well-defined and documented deployment model within an IBM POWER8 virtualized environment, giving customers a planned foundation for security, scaling, capacity, resilience, and optimization of analytics workloads. This book is targeted at technical professionals (analytics consultants, technical support staff, IT architects, and IT specialists) who are responsible for delivering analytics solutions and support on IBM Power Systems.

Real World SQL and PL/SQL: Advice from the Experts

Master the Underutilized Advanced Features of SQL and PL/SQL. This hands-on guide from Oracle Press shows how to fully exploit lesser-known but extremely useful SQL and PL/SQL features, and how to use both languages together effectively. Written by a team of Oracle ACE Directors, Real-World SQL and PL/SQL: Advice from the Experts features best practices, detailed examples, and insider tips that clearly demonstrate how to write, troubleshoot, and implement code for a wide variety of practical applications. The book thoroughly explains underutilized SQL and PL/SQL functions and lays out essential development strategies. Data modeling, advanced analytics, database security, secure coding, and administration are covered in complete detail. Learn how to: apply advanced SQL and PL/SQL tools and techniques; understand SQL and PL/SQL functionality and determine when to use which language; develop accurate data models and implement business logic; run PL/SQL in SQL and integrate complex datasets; handle PL/SQL instrumenting and profiling; use Oracle Advanced Analytics and Oracle R Enterprise; build and execute predictive queries; secure your data using encryption, hashing, redaction, and masking; defend against SQL injection and other code-based attacks; and work with Oracle Virtual Private Database. Code examples in the book are available for download at www.MHProfessional.com. For a complete list of Oracle Press titles, visit www.OraclePressBooks.com.

Podcast episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus, OH), Simon Rumble (Snowflake Analytics), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Somebody wants to overthink their analytics tools? Tell 'em they're dreamin'! We wanted to talk about open source and event analytics, and Snowplow sits right at that intersection. Our guest Simon Rumble is the co-founder of Snowflake Analytics and one of the longest-standing users of Snowplow. We wrap up the show with all the places you can find Simon and Tim in the next few months. Fun fact: you will also learn in this episode that conversion funnels go in the opposite direction in Australia.

Architecting for Access

Fragmented, disparate backend data systems have become the norm in today's enterprise, where you'll find a mix of relational databases, Hadoop stores, and NoSQL engines, with access and analytics tools bolted on every which way. This mishmash of options presents a real challenge when it comes to choosing frontend analytics and visualization tools. How did we get here? In this O'Reilly report, IT veteran Rich Morrow takes you through the rapid changes to both backend storage and frontend analytics over the past decade, and provides a pragmatic list of requirements for an analytics stack that will centralize access to all of these data systems. You'll examine current analytics platforms, including Looker, a new breed of analytics and visualization tool built specifically to handle our fragmented data space. Understand why and how data became so fractured so quickly. Explore the tangled web of data and backend tools in today's enterprises. Learn the tool requirements for accessing and analyzing the full spectrum of data. Examine the relative strengths of popular analytics and visualization tools, including Looker, Tableau, and MicroStrategy. Inspect Looker's unique focus on both the frontend and backend.

Interactive Spark using PySpark

Apache Spark is an in-memory framework that allows data scientists to explore and interact with big data much more quickly than with Hadoop. Python users can work with Spark using an interactive shell called PySpark.

Why is it important? PySpark makes the large-scale data processing capabilities of Apache Spark accessible to data scientists who are more familiar with Python than Scala or Java. It also allows for reuse of a wide variety of Python libraries for machine learning, data visualization, numerical analysis, and more.

What you'll learn, and how you can apply it: Compare the different components provided by Spark and the use cases they fit. Learn how to use RDDs (resilient distributed datasets) with PySpark. Write Spark applications in Python and submit them to the cluster as Spark jobs. Get an introduction to the Spark computing framework. Apply this approach to a worked example to determine the most frequent airline delays in a specific month and year.

This lesson is for you because you're a data scientist, familiar with Python coding, who needs to get up and running with PySpark, or you're a Python developer who needs to leverage the distributed computing resources available on a Hadoop cluster without learning Java or Scala first.

Prerequisites: Familiarity with writing Python applications. Some familiarity with bash command-line operations. Basic understanding of how to use simple functional programming constructs in Python, such as closures, lambdas, and maps.

Materials or downloads needed in advance: Apache Spark.

This lesson is taken from Data Analytics with Hadoop by Jenny Kim and Benjamin Bengfort.
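The worked example mentioned in the lesson, finding the airline with the most frequent delays, boils down to a filter/map/reduceByKey pipeline. Here is a minimal sketch of that logic in plain Python on an invented toy dataset (the airline codes and delay values are illustrative only); the comment shows roughly how the same pipeline reads as a PySpark RDD chain.

```python
from collections import defaultdict

# Toy flight records: (airline, delay_minutes) for one month.
# These rows are invented for illustration, not from a real dataset.
flights = [
    ("AA", 12), ("AA", 0), ("DL", 45), ("UA", 30),
    ("DL", 5), ("AA", 60), ("UA", 0), ("DL", 25),
]

# In PySpark, the same pipeline would read roughly as:
#   sc.parallelize(flights) \
#     .filter(lambda r: r[1] > 0) \
#     .map(lambda r: (r[0], 1)) \
#     .reduceByKey(lambda a, b: a + b) \
#     .collect()

delayed = [r for r in flights if r[1] > 0]         # filter: keep delayed flights
pairs = [(airline, 1) for airline, _ in delayed]   # map: key each record by airline

counts = defaultdict(int)
for airline, n in pairs:                           # reduceByKey: sum counts per airline
    counts[airline] += n

most_delayed = max(counts, key=counts.get)
print(most_delayed, counts[most_delayed])          # prints: DL 3
```

The point of the sketch is the shape of the computation: each PySpark transformation corresponds to one plain-Python step, but Spark distributes those steps across a cluster.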

The Data and Analytics Playbook

The Data and Analytics Playbook: Proven Methods for Governed Data and Analytic Quality explores the way in which data continues to dominate budgets, along with the varying efforts made across a variety of business enablement projects, including applications, web and mobile computing, big data analytics, and traditional data integration. The book teaches readers how to use proven methods and accelerators to break through data obstacles and provide faster, higher-quality delivery of mission-critical programs. Drawing upon years of practical experience, and using numerous examples and an easy-to-understand playbook, Lowell Fryman, Gregory Lampshire, and Dan Meers discuss a simple, proven approach to the execution of multiple data-oriented activities. They present a clear set of methods to provide reliable governance, controls, risk, and exposure management for enterprise data and the programs that rely upon it, and they describe a cost-effective approach to providing sustainable governance and quality outcomes that enhance project delivery while ensuring ongoing controls. Example activities, templates, outputs, resources, and roles are explored, along with different organizational models in common use today and the ways they can be mapped to leverage playbook data governance throughout the organization. Provides a mature and proven playbook approach (methodology) to enabling data governance that supports agile implementation. Features specific examples of current industry challenges in enterprise risk management, including anti-money laundering and fraud prevention. Describes business benefit measures and funding approaches using exposure-based cost models that augment risk models for cost avoidance analysis, and accelerated delivery approaches using data integration sprints for application, integration, and information delivery success.

In this session, Eloy Sasot, Head of Analytics at News Corp, sat down with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey as an analytics executive, best practices, hacks for upcoming executives, and some challenges and opportunities he's observing as a Chief Analytics Officer.

Timeline:

0:29 Eloy's journey. 4:43 Why work in a publishing house? 7:16 A non-tech industry doing tech. 10:18 Tips for a small business getting started with data science. 13:46 Creating a culture of data science in a company. 17:23 Convincing leaders of the value of data science. 22:05 Initial days for a leader creating a data science practice. 27:20 Putting together a data science team. 29:18 Choosing the right tool. 33:00 Keeping oneself tool-agnostic. 35:20 CDO, CAO, and CTO. 38:58 Defining a data scientist at News Corp. 42:12 The future of data analytics. 46:37 Blaming everything on Big Data.

Podcast Link: https://futureofdata.org/563533-2/

Here's Eloy's Bio: Eloy is the CAO at News Corp, a worldwide network of leading companies in diversified media, news, education, and information services, including The Wall Street Journal, Dow Jones, New York Post, The Times, The Sun, The Australian, HarperCollins, Move, Storyful, and Unruly.

Prior to this, Eloy led Pricing, Data Science, and Data Analytics for HarperCollins Publishers, the second-largest consumer book publisher in the world, with operations in 18 countries, nearly 200 years of history, and more than 65 unique imprints. Since joining HarperCollins in 2011, Eloy pioneered the creation and development of the pricing function, first in the UK and then on an international scale for the global company. He worked with his teams and each division around the world to drive data-driven decision-making, with a particular focus on pricing. Besides his global role, he was a Board-level Director of HarperCollins UK.

He holds an MBA from INSEAD and a Master’s in Mathematical Engineering from INSA Toulouse.

Follow @eloysasot

The podcast is sponsored by: TAO.ai (https://tao.ai), Artificial Intelligence Driven Career Coach

About #Podcast:

The FutureOfData podcast is a conversation starter, bringing together leaders, influencers, and leading practitioners to discuss their journeys toward creating the data-driven future.

Want to join? If you or anyone you know wants to take part, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Enabling Real-time Analytics on IBM z Systems Platform

For online transaction processing (OLTP) workloads, the IBM® z Systems™ platform, with IBM DB2®, data sharing, Workload Manager (WLM), geoplex, and other high-end features, is the widely acknowledged leader. Most customers now integrate business analytics with OLTP, for example by running scoring functions from a transactional context for real-time analytics, or by applying machine-learning algorithms to enterprise data that is kept on the mainframe. As a result, IBM is investing so that clients can keep the complete lifecycle of data analysis, modeling, and scoring under z Systems control in a cost-efficient way, preserving the qualities of service in availability, security, and reliability that z Systems solutions offer. Because of the changed architecture and tighter integration, IBM has shown in a customer proof of concept that a particular client was able to achieve an orders-of-magnitude improvement in performance, allowing that client's data scientists to investigate the data in a more interactive process. Open technologies such as the Predictive Model Markup Language (PMML) can help customers update single components instead of being forced to replace everything at once. As a result, you can combine your preferred tool for model generation (such as SAS Enterprise Miner or IBM SPSS® Modeler) with a different technology for model scoring (such as Zementis, a company focused on PMML scoring). IBM SPSS Modeler is a leading data mining workbench that can apply various algorithms in data preparation, cleansing, statistics, visualization, machine learning, and predictive analytics. It has over 20 years of development behind it and is integrated with z Systems. With IBM DB2 Analytics Accelerator 5.1 and SPSS Modeler 17.1, complete predictive model creation, including data transformation, can be done within DB2 Analytics Accelerator.
So, instead of moving the data to a distributed environment, algorithms can be pushed to the data, using the cost-efficient DB2 Analytics Accelerator for the required resource-intensive operations. This IBM Redbooks® publication explains the overall z Systems architecture, how the components can be installed and customized, how the new IBM DB2 Analytics Accelerator loader can help load z Systems data and external data efficiently, how in-database transformation, in-database modeling, and in-transaction real-time scoring can be used, and what other related technologies are available. This book is intended for technical specialists, architects, and data scientists who want to use the technology on the z Systems platform. Most of the technologies described in this book require IBM DB2 for z/OS®. For acceleration of the data investigation, data transformation, and data modeling process, DB2 Analytics Accelerator is required. The most value can be achieved if most of the data already resides on z Systems platforms, although adding external data (such as from social sources) poses no problem at all.
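The component interchange described above works because PMML is plain XML that any conformant scorer can evaluate. As a rough illustration, the sketch below hand-writes a tiny PMML RegressionModel (the field names, coefficients, and intercept are invented, not exported from SPSS Modeler or SAS) and scores it using only Python's standard library; real PMML engines such as Zementis support far more model types and preprocessing.

```python
import xml.etree.ElementTree as ET

# A hand-written, minimal PMML 4.2 linear regression model.
# Fields and coefficients are invented for illustration only.
PMML = """
<PMML xmlns="http://www.dmg.org/PMML-4_2" version="4.2">
  <RegressionModel functionName="regression">
    <RegressionTable intercept="1.5">
      <NumericPredictor name="age" coefficient="0.2"/>
      <NumericPredictor name="income" coefficient="0.001"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
"""

NS = {"p": "http://www.dmg.org/PMML-4_2"}

def score(pmml_text, row):
    """Evaluate a linear RegressionTable: intercept + sum(coefficient * value)."""
    root = ET.fromstring(pmml_text)
    table = root.find(".//p:RegressionTable", NS)
    result = float(table.get("intercept"))
    for pred in table.findall("p:NumericPredictor", NS):
        result += float(pred.get("coefficient")) * row[pred.get("name")]
    return result

# 1.5 + 0.2*30 + 0.001*50000 = 57.5
print(score(PMML, {"age": 30, "income": 50000}))  # prints: 57.5
```

Because the model travels as a standard document rather than as code, the tool that generated it and the engine that scores it never need to share a runtime, which is exactly the decoupling the publication describes.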

A Recipe for Success Using SAS University Edition

Filled with helpful examples and real-life projects from SAS users, A Recipe for Success Using SAS University Edition is an easy guide to applying the analytical power of SAS to real-world scenarios. This book shows you how to start using analytics, how to use SAS to accomplish a project goal, how to effectively apply SAS in your community or school, and how users like you implemented SAS to solve their analytical problems. A beginner's guide to creating and completing your first analytics project using SAS University Edition, it is broken down into easy-to-read chapters that also include quick takeaway tips. It introduces you to the vocabulary and structure of the SAS language, shows you how to plan and execute a successful project, introduces you to basic statistics, and walks you through case studies to inspire and motivate you to complete your own projects. Following the recipe for success in this book, harness the power of SAS to plan and complete your first analytics project!

Podcast episode
by Tim Wilson (Analytics Power Hour - Columbus, OH), Brent Dykes (Blast Analytics), Michael Helbling (Search Discovery)

Once upon a time, in an industry near and dear, lived an analyst. And that analyst needed to present the results of her analysis to a big, scary, business user. This is not a tale for the faint of heart, dear listener. We're talking the Brothers Grimm before Disney got their sugar-tipped screenwriting pens on the stories! Actually, this isn't a fairy tale at all. It's a practical reality of the analyst's role: effectively communicating the results of our work out to the business. Join Michael and Tim and special guest, Storytelling Maven Brent Dykes, as they look for a happy ending to The Tale of the Analyst with Data to Be Conveyed. Tangential tales referenced in this episode include: Web Analytics Action Hero, Brent Dykes Articles on Forbes.com, The Wizard of Oz, Made to Stick, Data Storytelling: The Essential Data Science Skill Everyone Needs, The Story of Maths, and mockaroo.com.

Big Data Analytics with R

Unlock the potential of big data analytics by mastering R programming with this comprehensive guide. This book takes you step by step through real-world scenarios where R's capabilities shine, providing you with practical skills to handle, process, and analyze large and complex datasets effectively.

What this book will help me do: Understand the latest big data processing methods and how R can enhance their application. Set up and use big data platforms such as Hadoop and Spark in conjunction with R. Utilize R for practical big data problems, such as analyzing consumption and behavioral datasets. Integrate R with SQL and NoSQL databases to maximize its versatility in data management. Discover advanced machine learning implementations using R and Spark MLlib for predictive analytics.

Author: Walkowiak is an experienced data analyst and R programming expert with a passion for data engineering and machine learning. With deep knowledge of big data platforms and extensive teaching experience, they bring a clear and approachable writing style to help learners excel.

Who is it for? Ideal for data analysts, scientists, and engineers with fundamental data analysis knowledge looking to enhance their big data capabilities using R. If you aim to adapt R for large-scale data management and analysis workflows, this book is your ideal companion to bridge the gap.

Mastering Business Intelligence with MicroStrategy

Mastering Business Intelligence with MicroStrategy offers a thorough walkthrough of implementing enterprise business intelligence solutions using MicroStrategy 10. In this book, you'll learn how to design comprehensive dashboards, analyze data efficiently, and enhance user experiences with modern BI tools.

What this book will help me do: Learn to utilize MicroStrategy's advanced BI capabilities, including dashboards and predictive analytics, to enhance business insights. Develop mobile-responsive analytics dashboards to deliver critical data effectively wherever needed. Explore integration techniques to connect MicroStrategy with other data sources such as Hadoop and third-party mapping tools. Master visualization techniques such as charts and geospatial mapping to present data insights compellingly. Gain technical expertise in managing, administering, and troubleshooting MicroStrategy systems to maintain robust BI operations.

Authors: The authors, Dmitry Anoshin, Rana, Ma, and Neil Mehta, bring years of expertise in business intelligence and analytics. With backgrounds in leading technology solutions and BI projects, they aim to share actionable, real-world insights based on their experiences.

Who is it for? This book is perfect for BI developers, analytics managers, and business analysts who use MicroStrategy and wish to deepen their proficiency. It provides value for readers migrating from MicroStrategy 9 to 10 and for those seeking to leverage advanced BI functionalities. If you are keen on unlocking the full potential of BI tools for your organization, this book is for you.

Implementing an IBM High-Performance Computing Solution on IBM Power System S822LC

This IBM® Redbooks® publication demonstrates that IBM Power Systems™ high-performance computing and technical computing solutions deliver faster time to value. Configurable into highly scalable Linux clusters, Power Systems offer extreme performance for demanding workloads such as genomics, finance, computational chemistry, oil and gas exploration, and high-performance data analytics. This book delivers a high-performance computing solution implemented on the IBM Power System S822LC. The solution delivers high application performance and throughput based on its built-for-big-data architecture, which incorporates IBM POWER8® processors, tightly coupled Field Programmable Gate Arrays (FPGAs) and accelerators, and faster I/O by using the Coherent Accelerator Processor Interface (CAPI). This solution is ideal for clients that need more processing power while simultaneously increasing workload density and reducing data center floor space requirements. The Power S822LC offers a modular design to scale from a single rack to hundreds, simplicity of ordering, and a strong innovation roadmap for graphics processing units (GPUs). This publication is targeted at technical professionals (consultants, technical support staff, IT architects, and IT specialists) responsible for delivering cost-effective high-performance computing (HPC) solutions that help uncover insights from data so they can optimize business results, product development, and scientific discoveries.

In this session, Michael O'Connell, Chief Analytics Officer at TIBCO Software, sat down with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey as a chief analytics executive, best practices and cultural hacks for upcoming executives, his perspective on the changing BI landscape and how businesses can leverage it, and some challenges and opportunities he's observing across various industries.

Timeline:

0:28 Michael's journey. 4:12 CDO, CAO, and CTO. 7:30 Adoption of data analytics capabilities. 9:55 The BI industry dealing with the latest in data analytics. 12:10 Future of stats. 14:58 Creating a center of excellence with data. 18:00 Evolution of data in BI. 21:40 Small businesses getting started with data analytics. 24:35 First steps in the process of becoming a data-driven company. 26:28 Convincing leaders towards data science. 28:20 Shortest route to become a data scientist. 29:49 A typical day in Michael's life.

Podcast Link: https://futureofdata.org/analyticsweek-leadership-podcast-with-michael-oconnell-tibco-software/

Here's Michael's Bio: Michael O'Connell is the Chief Analytics Officer at TIBCO Software, developing analytic solutions across a number of industries, including Financial Services, Energy, Life Sciences, Consumer Goods & Retail, and Telco, Media & Networks. He has been working on analytics software applications for the past 20 years and has published more than 50 papers and several software packages on analytics methodology and applications. Michael did his Ph.D. in Statistics at North Carolina State University and is an Adjunct Professor of Statistics in the department.

Follow @michoconnell

The podcast is sponsored by: TAO.ai (https://tao.ai), Artificial Intelligence Driven Career Coach

About #Podcast:

The FutureOfData podcast is a conversation starter, bringing together leaders, influencers, and leading practitioners to discuss their journeys toward creating the data-driven future.

Want to join? If you or anyone you know wants to take part, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy