talk-data.com

Topic: Cloud Computing
Tags: infrastructure, saas, iaas
4055 tagged activities

Activity Trend: 471 peak/qtr (2020-Q1 to 2026-Q1)

Activities: 4055 activities · Newest first

In this podcast, Rahul Kashyap (@RCKashyap) talks about the crossroads of security, technology, and business, and the mindset of a security-led technologist. He sheds light on past, present, and future security risks, discusses some common leadership concerns, and explains how a technologist can navigate them. This podcast is a must for all technologists and aspiring technologists looking to grow their organizations.

Timeline: 0:29 Rahul's journey. 4:40 Rahul's current role. 7:58 How the types of cyberattacks have changed. 12:53 How has IT interaction evolved? 16:50 Problems in the security industry. 20:12 Market mindset vs. security mindset. 23:10 Ownership of data. 27:02 Cloud, SaaS, and security. 31:40 Priorities for securing an enterprise. 34:50 How much security is secure enough. 37:40 Providing a stable core to the business. 41:11 The state of data science vis-a-vis security. 44:05 Future of security, data science, and AI. 46:14 Distributed computing and security. 50:30 Tenets of Rahul's success. 53:15 Rahul's favorite read. 54:35 Closing remarks.

Rahul's Recommended Read: Mindset: The New Psychology of Success – Carol S. Dweck http://amzn.to/2GvEX2F

Podcast Link: https://futureofdata.org/rckashyap-cylance-on-state-of-security-technologist-mindset-futureofdata-podcast/

Rahul's BIO: Rahul Kashyap is the Global Chief Technology Officer at Cylance, where he is responsible for strategy, products, and architecture.

Rahul has been instrumental in building several key security technologies, including Network Intrusion Prevention Systems (NIPS), Host Intrusion Prevention Systems (HIPS), Web Application Firewalls (WAF), Whitelisting, Endpoint/Server Host Monitoring (EDR), and Micro-virtualization. He has been awarded several patents for his innovations. Rahul is an accomplished pen-tester with in-depth knowledge of operating systems, networking, and security products.

Rahul has written several security research papers, blogs, and articles that are widely quoted and referenced by media around the world. He has built, led, and scaled award-winning teams that innovate and solve complex security challenges in both large and start-up companies.

He is frequently featured in podcasts, webinars, and media briefings. Rahul has spoken at several top security conferences, including BlackHat, BlueHat, Hack-In-The-Box, RSA, DerbyCon, BSides, ISSA International, OWASP, InfoSec UK, and others. He was named to the Silicon Valley Business Journal's '40 under 40' list.

Rahul mentors entrepreneurs who work with select VC firms and is on the advisory board of tech start-ups.

About #Podcast:

The FutureOfData podcast is a conversation starter that brings together leaders, influencers, and leading practitioners to discuss their journeys toward creating a data-driven future.

Wanna join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Camel in Action, Second Edition

Camel in Action, Second Edition is the most complete Camel book on the market. Written by core developers of Camel and the authors of the highly acclaimed first edition, this book distills their experience and practical insights so that you can tackle integration tasks like a pro.

About the Technology: Apache Camel is a Java framework that implements enterprise integration patterns (EIPs) and comes with over 200 adapters to third-party systems. A concise DSL lets you build integration logic into your app with just a few lines of Java or XML. By using Camel, you benefit from the testing and experience of a large and vibrant open source community.

About the Book: Camel in Action, Second Edition is the definitive guide to the Camel framework. It starts with core concepts like sending, receiving, routing, and transforming data. It then goes in depth on many topics such as how to develop, debug, test, deal with errors, secure, scale, cluster, deploy, and monitor your Camel applications. The book also discusses how to run Camel with microservices, reactive systems, containers, and in the cloud.

What's Inside:
• Coverage of all relevant EIPs
• Camel microservices with Spring Boot
• Camel on Docker and Kubernetes
• Error handling, testing, security, clustering, monitoring, and deployment
• Hundreds of examples in Java and XML

About the Reader: Readers should be familiar with Java. This book is accessible to beginners and invaluable to experts.

About the Authors: Claus Ibsen is a senior principal engineer working for Red Hat, specializing in cloud and integration. He has worked on Apache Camel for the last nine years, where he heads the project. Claus lives in Denmark. Jonathan Anstey is an engineering manager at Red Hat and a core Camel contributor. He lives in Newfoundland, Canada.

Quotes:
"I highly recommend this book to anyone with even a passing interest in Apache Camel. Do take Camel for a ride...and don't get the hump!" - From the Foreword by James Strachan, Creator of Apache Camel
"Claus and Jon are great writers, relying on figures and diagrams where needed and presenting lots of code snippets and worked examples." - From the Foreword by Dr. Mark Little, Technical Director of JBoss
"The second edition of this all-time classic is an indispensable companion for your Apache Camel rides." - Gregor Zurowski, Apache Camel Committer
"The absolute best way to learn and use Camel - top to bottom, front to back, and all the way through. Camel is a fantastic tool - every Java coder should have a copy of this book." - Rick Wagner, Red Hat
"An excellent book and the definite reference for experienced engineers." - Yan Guo, EventBrite

In this last part of the two-part podcast, @TimothyChou discusses the future of the Internet of Things landscape. He lays out how the internet has always been about the internet of things, not the internet of people, and sheds light on the internet of things as it spreads across the themes of things, connect, collect, learn, and do. He builds an interesting case for achieving precision before introducing optimality.

Timeline: 0:29 Timothy's journey. 8:56 Selling cloud to Oracle. 15:57 Communicating economics and technology disruption. 23:54 Internet of people to the internet of things.

Timothy's Recommended Reads: Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark http://amzn.to/2Cidyhy and Zone to Win: Organizing to Compete in an Age of Disruption by Geoffrey A. Moore http://amzn.to/2Hd5zpv

Podcast Link: https://futureofdata.org/timothychou-on-world-of-iot-its-future-part-2/

Timothy's BIO: Timothy Chou's career spans academia, successful (and not-so-successful) startups, and large corporations. He was one of only a few people to hold the title of President at Oracle. As President of Oracle On Demand, he grew the cloud business from its very beginning; today that business is over $2B. He wrote about the move of applications to the cloud in 2004 in his first book, “The End of Software”. Today he serves on the board of Blackbaud, a nearly $700M vertical application cloud service company.

After earning his Ph.D. in EE at the University of Illinois, he went to work for Tandem Computers, one of the original Silicon Valley startups. Had he understood stock options, he would have joined earlier. He’s invested in and been a contributor to a number of other startups, some you’ve heard of like Webex, and others you’ve never heard of but were sold to companies like Cisco and Oracle. Today he is focused on several new ventures in cloud computing, machine learning, and the Internet of Things.

About #Podcast:

The FutureOfData podcast is a conversation starter that brings together leaders, influencers, and leading practitioners to discuss their journeys toward creating a data-driven future.

Wanna join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

SQL Server 2017 Administration Inside Out, First Edition

Conquer SQL Server 2017 administration—from the inside out. Dive into SQL Server 2017 administration—and really put your SQL Server DBA expertise to work. This supremely organized reference packs hundreds of timesaving solutions, tips, and workarounds—all you need to plan, implement, manage, and secure SQL Server 2017 in any production environment: on-premises, cloud, or hybrid. Four SQL Server experts offer a complete tour of DBA capabilities available in SQL Server 2017 Database Engine, SQL Server Data Tools, SQL Server Management Studio, and via PowerShell. Discover how experts tackle today’s essential tasks—and challenge yourself to new levels of mastery.

• Install, customize, and use SQL Server 2017’s key administration and development tools
• Manage memory, storage, clustering, virtualization, and other components
• Architect and implement database infrastructure, including IaaS, Azure SQL, and hybrid cloud configurations
• Provision SQL Server and Azure SQL databases
• Secure SQL Server via encryption, row-level security, and data masking
• Safeguard Azure SQL databases using platform threat protection, firewalling, and auditing
• Establish SQL Server IaaS network security groups and user-defined routes
• Administer SQL Server user security and permissions
• Efficiently design tables using keys, data types, columns, partitioning, and views
• Utilize BLOBs and external, temporal, and memory-optimized tables
• Master powerful optimization techniques involving concurrency, indexing, parallelism, and execution plans
• Plan, deploy, and perform disaster recovery in traditional, cloud, and hybrid environments

For Experienced SQL Server Administrators and Other Database Professionals
• Your role: Intermediate-to-advanced level SQL Server database administrator, architect, developer, or performance tuning expert
• Prerequisites: Basic understanding of database administration procedures
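As one concrete illustration of the administrative surface the book covers, here is a minimal, hypothetical sketch of a health check scripted from Python with pyodbc (the book itself works through SSMS, SQL Server Data Tools, and PowerShell); the server name, credentials, and driver version are placeholders:

```python
import pyodbc  # Python ODBC bridge; works against on-premises, cloud, or hybrid SQL Server

# Placeholder connection string for your own environment
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;DATABASE=master;"
    "UID=admin_user;PWD=secret"
)
cursor = conn.cursor()

# List every database with its state and recovery model, a quick DBA health check
cursor.execute("SELECT name, state_desc, recovery_model_desc FROM sys.databases;")
for name, state, recovery in cursor.fetchall():
    print(f"{name}: {state} ({recovery} recovery)")
```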

In this first part of a two-part podcast, @TimothyChou discusses the Internet of Things landscape. He lays out how the internet has always been about the internet of things, not the internet of people, and sheds light on the internet of things as it spreads across the themes of things, connect, collect, learn, and do. He builds an interesting case for achieving precision before introducing optimality.

Timeline: 0:29 Reasons behind the failure of IoT projects. 19:10 Which businesses will be impacted by IoT expansion? 30:22 How is IoT being impacted by AI? 40:35 Innovative startups in the IoT industry. 49:17 What's slowing down IoT? 52:20 How closely are IoT and the cloud married together? 54:32 Timothy's success mantra. 56:16 Parting thoughts.

Timothy's Recommended Reads: Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark http://amzn.to/2Cidyhy and Zone to Win: Organizing to Compete in an Age of Disruption by Geoffrey A. Moore http://amzn.to/2Hd5zpv

Podcast Link: https://futureofdata.org/timothychou-on-world-of-iot-its-future-part-1-futureofdata-podcast/

Timothy's BIO: Timothy Chou's career spans academia, successful (and not-so-successful) startups, and large corporations. He was one of only a few people to hold the title of President at Oracle. As President of Oracle On Demand, he grew the cloud business from its very beginning; today that business is over $2B. He wrote about the move of applications to the cloud in 2004 in his first book, “The End of Software”. Today he serves on the board of Blackbaud, a nearly $700M vertical application cloud service company.

After earning his Ph.D. in EE at the University of Illinois, he went to work for Tandem Computers, one of the original Silicon Valley startups. Had he understood stock options, he would have joined earlier. He’s invested in and been a contributor to a number of other startups, some you’ve heard of like Webex, and others you’ve never heard of but were sold to companies like Cisco and Oracle. Today he is focused on several new ventures in cloud computing, machine learning, and the Internet of Things.

About #Podcast:

The FutureOfData podcast is a conversation starter that brings together leaders, influencers, and leading practitioners to discuss their journeys toward creating a data-driven future.

Wanna join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Summary

As communications between machines become more commonplace, the need to store the generated data in a time-oriented manner increases. The market for timeseries data stores has many contenders, but they are not all built to solve the same problems or to scale in the same manner. In this episode the founders of TimescaleDB, Ajay Kulkarni and Mike Freedman, discuss how Timescale was started, the problems that it solves, and how it works under the covers. They also explain how you can start using it in your infrastructure and their plans for the future.
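To make the conversation concrete, here is a minimal sketch (not taken from the episode) of creating and querying a Timescale hypertable from Python, assuming a PostgreSQL instance with the TimescaleDB extension available and the psycopg2 driver installed; the connection details, table, and data are placeholders:

```python
import psycopg2  # standard PostgreSQL driver; TimescaleDB is a Postgres extension

# Placeholder connection details for your own instance
conn = psycopg2.connect(host="localhost", dbname="metrics", user="postgres")
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS conditions (
        time        TIMESTAMPTZ NOT NULL,
        device_id   TEXT,
        temperature DOUBLE PRECISION
    );
""")
# create_hypertable() turns the table into time-partitioned chunks
cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);")

cur.execute("INSERT INTO conditions VALUES (now(), %s, %s);", ("sensor-1", 21.5))

# time_bucket() aggregates rows into fixed time windows
cur.execute("""
    SELECT time_bucket('1 hour', time) AS hour, avg(temperature)
    FROM conditions GROUP BY hour ORDER BY hour;
""")
print(cur.fetchall())
```

Because a hypertable is still a regular PostgreSQL table, the usual drivers, ORMs, and tooling continue to work unchanged.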

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure. When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch. You can help support the show by checking out the Patreon page, which is linked from the site. To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers. Your host is Tobias Macey, and today I’m interviewing Ajay Kulkarni and Mike Freedman about TimescaleDB, a scalable timeseries database built on top of PostgreSQL.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what Timescale is and how the project got started?
The landscape of time series databases is extensive and oftentimes difficult to navigate. How do you view your position in that market, and what makes Timescale stand out from the other options?
In your blog post explaining the design decisions behind Timescale, you call out the fact that the inserted data is largely append-only, which simplifies index management. How does Timescale handle out-of-order timestamps, such as from infrequently connected sensors or mobile devices?
How is Timescale implemented, and how has the internal architecture evolved since you first started working on it?
What impact has the 10.0 release of PostgreSQL had on the design of the project?
Is Timescale compatible with systems such as Amazon RDS or Google Cloud SQL?
For someone who wants to start using Timescale, what is involved in deploying and maintaining it?
What are the axes for scaling Timescale, and what are the points where that scalability breaks down?
Are you aware of anyone who has deployed it on top of Citus for scaling horizontally across instances?
What has been the most challenging aspect of building and marketing Timescale?
When is Timescale the wrong tool to use for time series data?
One of the use cases that you call out on your website is systems metrics and monitoring. How does Timescale fit into that ecosystem, and can it be used along with tools such as Graphite or Prometheus?
What are some of the most interesting uses of Timescale that you have seen?
Which came first, Timescale the business or Timescale the database, and what is your strategy for ensuring that the open source project and the company around it both maintain their health?
What features or improvements do you have planned for future releases of Timescale?

Contact Info

Ajay

LinkedIn · @acoustik on Twitter · Timescale Blog

Mike

Website · LinkedIn · @michaelfreedman on Twitter · Timescale Blog

Timescale

Website · @timescaledb on Twitter · GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Timescale PostgreSQL Citus Timescale Design Blog Post MIT NYU Stanford SDN Princeton Machine Data Timeseries Data List of Timeseries Databases NoSQL Online Transaction Processing (OLTP) Object Relational Mapper (ORM) Grafana Tableau Kafka When Boring Is Awesome PostgreSQL RDS Google Cloud SQL Azure DB Docker Continuous Aggregates Streaming Replication PGPool II Kubernetes Docker Swarm Citus Data

Website · Data Engineering Podcast Interview

Database Indexing B-Tree Index GIN Index GIST Index STE Energy Redis Graphite Prometheus pg_prometheus OpenMetrics Standard Proposal Timescale Parallel Copy Hadoop PostGIS KDB+ DevOps Internet of Things MongoDB Elastic DataBricks Apache Spark Confluent New Enterprise Associates MapD Benchmark Ventures Hortonworks 2σ Ventures CockroachDB Cloudflare EMC Timescale Blog: Why SQL is beating NoSQL, and what this means for the future of data

The intro and outro music is from The Hug by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug).

Python Web Scraping Cookbook

Python Web Scraping Cookbook is your comprehensive guide to building efficient and functional web scraping tools using Python. With practical recipes, you'll learn to overcome the challenges of dynamic content, captchas, and irregular web structures while deploying scalable solutions.

What this book will help me do:
• Master the use of Python libraries like BeautifulSoup and Scrapy for scraping data.
• Perfect techniques for handling JavaScript-heavy sites using Selenium.
• Learn to overcome web scraping challenges, such as captchas and rate-limiting.
• Design scalable scraping pipelines with cloud deployment in AWS.
• Understand web data extraction techniques with XPath, CSS selectors, and more.

Author(s): Michael Heydt is a seasoned software engineer and technical author with a focus on data engineering and cloud solutions. Having worked with Python extensively, he brings real-world insights into web scraping. His practical approach simplifies complex concepts.

Who is it for? This book is perfect for Python developers and data enthusiasts keen to master web scraping techniques. If you're a programmer with insights into Python scripting and wish to scrape, analyze, and utilize web data efficiently, this book is for you.
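In the spirit of the book's recipes, here is a minimal scraping sketch using the requests and BeautifulSoup libraries; the URL and CSS selector are hypothetical stand-ins for a real target site:

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page (placeholder URL) and fail loudly on HTTP errors
resp = requests.get("https://example.com/books", timeout=10)
resp.raise_for_status()

# Parse the HTML and pull out every element matching a CSS selector
soup = BeautifulSoup(resp.text, "html.parser")
for node in soup.select("h3.book-title"):  # selector is hypothetical
    print(node.get_text(strip=True))
```

Dynamic, JavaScript-rendered pages need a browser driver such as Selenium instead, which the book covers separately.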

IBM z14 Technical Guide

Abstract: This IBM® Redbooks® publication describes the new member of the IBM Z family, IBM z14®. IBM z14 is the trusted enterprise platform for pervasive encryption, integrating data, transactions, and insights into the data. A data-centric infrastructure must always be available with a 99.999% or better availability, have flawless data integrity, and be secured from misuse. It also must be an integrated infrastructure that can support new applications. Finally, it must have integrated capabilities that can provide new mobile capabilities with real-time analytics that are delivered by a secure cloud infrastructure.

IBM z14 servers are designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows z14 servers to deliver a record level of capacity over the prior IBM Z platforms. In its maximum configuration, z14 is powered by up to 170 client characterizable microprocessors (cores) running at 5.2 GHz. This configuration can run more than 146,000 million instructions per second (MIPS) and supports up to 32 TB of client memory. The IBM z14 Model M05 is estimated to provide up to 35% more total system capacity than the IBM z13® Model NE1.

This Redbooks publication provides information about IBM z14 and its functions, features, and associated software support. More information is offered in areas that are relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand IBM Z server functions and plan for their usage. It is not intended as an introduction to mainframes; readers are expected to be generally familiar with existing IBM Z technology and terminology.

SAS Viya

Learn how to access analytics from SAS Cloud Analytic Services (CAS) using Python and the SAS Viya platform. SAS Viya: The Python Perspective is an introduction to using the Python client on the SAS Viya platform. SAS Viya is a high-performance, fault-tolerant analytics architecture that can be deployed on both public and private cloud infrastructures. While SAS Viya can be used by various SAS applications, it also enables you to access analytic methods from SAS, Python, Lua, and Java, as well as through a REST interface using HTTP or HTTPS. This book focuses on the perspective of SAS Viya from Python.

SAS Viya is made up of multiple components. The central piece of this ecosystem is SAS Cloud Analytic Services (CAS). CAS is the cloud-based server that all clients communicate with to run analytical methods. The Python client is used to drive the CAS component directly using objects and constructs that are familiar to Python programmers. Some knowledge of Python would be helpful before using this book; however, there is an appendix that covers the features of Python that are used in the CAS Python client. Knowledge of CAS is not required to use this book. However, you will need to have a CAS server set up and running to execute the examples in this book.

With this book, you will learn how to:
• Install the required components for accessing CAS from Python
• Connect to CAS, load data, and run simple analyses
• Work with CAS using APIs familiar to Python users
• Grasp general CAS workflows and advanced features of the CAS Python client

SAS Viya: The Python Perspective covers topics that will be useful to beginners as well as experienced CAS users. It includes examples from creating connections to CAS all the way to simple statistics and machine learning, but it is also useful as a desktop reference.
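For a taste of what driving CAS from Python looks like, here is a minimal sketch assuming the open-source SWAT client and a reachable CAS server; the host, port, credentials, and CSV file are placeholders:

```python
import swat  # SAS's open-source Python client for CAS

# Connect to a running CAS server (placeholder host, port, and credentials)
conn = swat.CAS("cas-server.example.com", 5570, "username", "password")

# Load a CSV into an in-memory CAS table; the API mirrors pandas
tbl = conn.read_csv("measurements.csv")

# Run a server-side analysis through the "simple" statistics action set
conn.loadactionset("simple")
print(tbl.summary())  # descriptive statistics, computed on the CAS server

conn.close()
```

The key point is that the computation happens on the CAS server; the Python session only orchestrates actions and receives results.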

In this episode, Wayne Eckerson and Lenin Gali discuss the past and future of the cloud and big data.

Gali is a data analytics practitioner who has always been on the leading edge of where business and technology intersect. He was one of the first to move data analytics to the cloud as BI director at ShareThis, a social media-based services provider. While at Ubisoft, he was instrumental in defining an enterprise analytics strategy and developing a data platform, built on Hadoop and Teradata, that brought game and business data together so that thousands of data users could build better games and services. He is now spearheading the creation of a Hadoop-based data analytics platform at Quotient, a digital marketing technology firm in the retail industry.

The data available to marketers -- literally at their fingertips by way of a few mouse clicks -- has exploded over the last decade. Yet, while there is more data -- and it is more accessible -- than it has ever been, the way we think about and use data has hardly evolved at all. With the recent advances in cloud computing and processing power, the industry is abuzz with talk of machine learning and artificial intelligence. How, then, will we get from the world of Microsoft Excel (or Tableau) to a world where "the machines" are automatically and dynamically optimizing all aspects of our marketing?

If you work with a media agency (or are one), the first question to ask is: how many data scientists do you have? Do you prefer Amazon Web Services, Microsoft Azure, or the Google Cloud Platform? Come see examples, drawn from one of Canada's largest retailers, of advertising spending wasted through poor targeting, access issues, and a lack of big data understanding. We will also dive into examples of broken analytics implementations that cause even more issues. If you are not in-sourcing the core components of your media and analytics, you are almost certainly at risk of, or already suffering from, many of these problems. In this session, Martin and Charles Farina will show you what you need to find the right partner, and, more importantly, what you also have to provide.

Complete Guide to Open Source Big Data Stack

See a Mesos-based big data stack created and the components used. You will use currently available Apache full and incubating systems. The components are introduced by example, and you learn how they work together. In the Complete Guide to Open Source Big Data Stack, the author begins by creating a private cloud and then installs and examines Apache Brooklyn. After that, he uses each chapter to introduce one piece of the big data stack, sharing how to source the software and how to install it. You learn by simple example, step by step and chapter by chapter, as a real big data stack is created. The book concentrates on Apache-based systems and shares detailed examples of cloud storage, release management, resource management, processing, queuing, frameworks, data visualization, and more.

What You’ll Learn:
• Install a private cloud onto the local cluster using Apache CloudStack
• Source, install, and configure Apache Brooklyn, Mesos, Kafka, and Zeppelin
• See how Brooklyn can be used to install Mule ESB on a cluster and Cassandra in the cloud
• Install and use DC/OS for big data processing
• Use Apache Spark for big data stack data processing

Who This Book Is For: Developers, architects, IT project managers, database administrators, and others charged with developing or supporting a big data system. It is also for anyone interested in Hadoop or big data, and those experiencing problems with data size.
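As a small taste of the stack's queuing layer, here is a sketch of producing and consuming a message with the kafka-python client (one of several clients you could use), assuming a broker on localhost; the topic name and payload are made up:

```python
from kafka import KafkaProducer, KafkaConsumer  # kafka-python client

# Publish one message to a topic (placeholder broker address and topic)
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("sensor-readings", b'{"device": "rack-7", "temp": 40.1}')
producer.flush()  # block until the broker acknowledges the message

# Read the topic back from the beginning, giving up after 5 seconds of silence
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.value)
```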

In this podcast, Wayne Eckerson and Joe Caserta discuss what constitutes a modern data platform. Caserta is President of a New York City-based consulting firm he founded in 2001 and a longtime data guy. In 2004, Joe teamed up with data warehousing legend Ralph Kimball to write the book The Data Warehouse ETL Toolkit. Today he is one of the leading authorities on big data implementations. This makes Joe one of the few individuals with in-the-trenches experience on both sides of the data divide: traditional data warehousing on relational databases, and big data implementations on Hadoop and the cloud. His perspectives are always insightful.

Scaling Data Services with Pivotal GemFire

In-memory data grids (IMDG) such as Pivotal GemFire, which is powered by Apache Geode, are key to making today’s modern high-speed, data-intensive applications work. By keeping data in the RAM of a horizontally scalable cluster of servers, IMDG solutions enable apps to achieve consistently low latency for data access at any scale. Many in the application development community, however, aren’t aware of IMDG’s benefits, use cases, or underlying technology. This report brings you up to speed by providing GemFire basics, including use cases and easily understood examples. You’ll determine whether GemFire can benefit your application, and learn how to install a simple test environment and build a small proof of concept.

• Explore GemFire use cases for Java applications, including microservices, high-speed data ingest, and transaction and event processing
• Get an architectural overview of GemFire, and learn installation requirements for both hardware/VM and cloud
• Dive into GemFire’s capabilities with continuous queries, server-side functions, and Apache Lucene integration
• Learn how GemFire works with the persistence model, off-heap memory, and WAN replication

Learning Elastic Stack 6.0

Learn how to harness the power of the Elastic Stack 6.0 to manage, analyze, and visualize data effectively. This book introduces you to Elasticsearch, Logstash, Kibana, and other components, helping you build scalable, real-time data processing solutions from scratch. By reading this guide, you'll gain practical insights into the platform's components, including tips for production deployment.

What this book will help me do:
• Understand and utilize the core components of Elastic Stack 6.0, including Elasticsearch, Logstash, and Kibana.
• Set up scalable data pipelines for ingesting and processing vast amounts of data.
• Craft real-time data visualizations and analytics using Kibana.
• Secure and monitor Elastic Stack deployments with X-Pack and other related tools.
• Deploy Elastic Stack applications effectively in cloud or on-premise production environments.

Author(s): Pranav Shukla and Sharath Kumar are experienced professionals with deep knowledge of distributed data systems and the Elastic Stack ecosystem. They are passionate about data analytics and visualization and bring their hands-on experience in building real-world Elastic Stack applications into this book. Their practical approach and explanatory style make complex concepts accessible to readers at all levels.

Who is it for? This book is perfect for data professionals who want to analyze large datasets or create effective real-time visualizations. It is suited for those new to Elastic Stack or looking to understand its capabilities. Basic JSON knowledge is recommended, but no prior expertise with Elastic Stack is required to benefit from this practical guide.
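To give a concrete feel for the stack's core, here is a minimal sketch using the official elasticsearch-py client against a local Elasticsearch 6.x node; the index name and documents are hypothetical:

```python
from elasticsearch import Elasticsearch  # official low-level Python client

# Assumes a single local node; adjust the address for your cluster
es = Elasticsearch(["http://localhost:9200"])

# Index a document (Elasticsearch 6.x still requires a document type)
es.index(index="app-logs", doc_type="doc", body={
    "message": "disk usage above threshold",
    "level": "warning",
})

# Full-text search over whatever has been indexed
result = es.search(index="app-logs", body={
    "query": {"match": {"message": "disk"}}
})
print(result["hits"]["total"])  # hit count (an integer in the 6.x series)
```

In a full Elastic Stack deployment, Logstash or Beats would handle the indexing and Kibana the querying, but the underlying API calls look much like this.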

Learning Google BigQuery

If you're ready to tap the potential of data analytics in the cloud, 'Learning Google BigQuery' will take you from understanding foundational concepts to mastering advanced techniques of this powerful platform. Through hands-on examples, you'll learn how to query and analyze massive datasets efficiently, develop custom applications, and integrate your results seamlessly with other tools.

What this book will help me do:
• Understand the fundamentals of Google Cloud Platform and how BigQuery operates within it.
• Migrate enterprise-scale data seamlessly into BigQuery for further analytics.
• Master SQL techniques for querying large-scale datasets in BigQuery.
• Enable real-time data analytics and visualization with tools like Tableau and Python.
• Learn to create dynamic datasets, manage partitioned tables, and use BigQuery APIs effectively.

Author(s): Berlyant, Haridass, and Brown are specialists with years of experience in data science, big data platforms, and cloud technologies. They bring their expertise in data analytics and teaching to make advanced concepts accessible. Their hands-on approach and real-world examples ensure readers can directly apply the skills they acquire to practical scenarios.

Who is it for? This book is tailored for developers, analysts, and data scientists eager to leverage cloud-based tools for handling and analyzing large-scale datasets. If you seek to gain hands-on proficiency in working with BigQuery or want to enhance your organization's data capabilities, this book is a fit. No prior BigQuery knowledge is needed, just a willingness to learn.
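As an illustration of the query workflow the book teaches, here is a minimal sketch using the google-cloud-bigquery client against a well-known public dataset; the project ID is a placeholder, and application-default credentials are assumed:

```python
from google.cloud import bigquery  # official BigQuery client library

# The billing project is a placeholder; credentials come from the environment
client = bigquery.Client(project="my-analytics-project")

# Aggregate a public dataset entirely inside BigQuery
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(sql).result():  # result() blocks until the job finishes
    print(row.name, row.total)
```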

SQL Server 2017 Administrator's Guide

Dive into 'SQL Server 2017 Administrator's Guide' to master the administrative and maintenance aspects of SQL Server 2017. This comprehensive guide provides expert strategies and best practices to design, secure, and manage robust SQL Server systems effectively.

What this book will help me do:
• Understand the new features and capabilities of SQL Server 2017 to enhance your database systems.
• Learn step-by-step how to configure, optimize, and troubleshoot SQL Server environments for maximum performance.
• Gain expertise in creating reliable backup and recovery solutions that minimize downtime and protect data.
• Develop skills in securing SQL Server instances against threats and maintaining system health.
• Explore integrating SQL Server 2017 with Azure and leveraging cloud capabilities for enhanced functionality.

Author(s): The authors of 'SQL Server 2017 Administrator's Guide' are seasoned database administrators and experts in SQL Server technology. With years of practical experience, they have tackled challenges across various industries and bring a wealth of know-how to this book. They aim to provide clear, actionable guidance to help readers succeed.

Who is it for? This book is ideal for database administrators who want to deepen their knowledge of SQL Server 2017 administration. It is especially suitable for professionals with some experience in earlier versions of SQL Server who wish to apply their skills to the latest edition. Whether you're an aspiring DBA or an experienced professional seeking to refine your strategies, this guide offers substantial value.

Pro Power BI Desktop

Deliver eye-catching Business Intelligence with Microsoft Power BI Desktop. This new edition has been updated to cover all the latest features, including combo charts, Cartesian charts, trend lines, use of gauges, and more. Also covered are Top-N features, the ability to bin data into groupings and chart the groupings, and new techniques for detecting and handling outlier data points. You can take data from virtually any source and use it to produce stunning dashboards and compelling reports that will seize your audience’s attention. Slice and dice the data with remarkable ease and then add metrics and KPIs to project the insights that create your competitive advantage. Make raw data into clear, accurate, and interactive information with Microsoft’s free self-service business intelligence tool. Pro Power BI Desktop shows you how to choose from a wide range of built-in and third-party visualization types so that your message is always enhanced. You’ll be able to deliver those results on the PC, tablets, and smartphones, as well as share results via the cloud. This book helps you save time by preparing the underlying data correctly without needing an IT department to prepare it for you.

What You'll Learn:
• Deliver attention-grabbing information, turning data into insight
• Mash up data from multiple sources into a cleansed and coherent data model
• Create dashboards that help in monitoring key performance indicators of your business
• Build interdependent charts, maps, and tables to deliver visually stunning information
• Share business intelligence in the cloud without involving IT
• Deliver visually stunning and interactive charts, maps, and tables
• Find new insights as you chop and tweak your data as never before
• Adapt delivery to mobile devices such as phones and tablets

Who This Book Is For: Everyone from CEOs and Business Intelligence developers to power users and IT managers

Exam Ref 70-765 Provisioning SQL Databases, First Edition

Prepare for Microsoft Exam 70-765 and help demonstrate your real-world mastery of provisioning SQL Server databases, both on-premises and in Azure SQL. Designed for experienced IT professionals ready to advance their status, Exam Ref focuses on the critical thinking and decision-making acumen needed for success at the MCSA level.

Focus on the expertise measured by these objectives:
• Implement SQL in Azure
• Manage databases and instances
• Manage storage

This Microsoft Exam Ref:
• Organizes its coverage by exam objectives
• Features strategic, what-if scenarios to challenge you
• Assumes you have working knowledge of SQL Server administration and maintenance, as well as Azure skills

About the Exam: Exam 70-765 focuses on skills and knowledge for provisioning, upgrading, and configuring SQL Server; managing databases and files; and provisioning, migrating, and managing databases in the Microsoft Azure cloud.

About Microsoft Certification: Passing this exam as well as Exam 70-764: Administering a SQL Database Infrastructure earns you the MCSA: SQL 2016 Database Administration certification, qualifying you for a position as a database administrator or infrastructure specialist. See full details at microsoft.com/learning