talk-data.com

Topic: Agile/Scrum

Tags: project_management · software_development · methodology

561 activities tagged

Activity Trend: 163 peak/qtr (2020-Q1 to 2026-Q1)

Activities

561 activities · Newest first

Implementing Data Governance 3.0 for the Lakehouse Era: Community-Led and Bottom-Up

In this session, I cover our lessons from rethinking data governance as an enablement function, drawn from implementing more than 200 data projects. I’ll go into the nuts and bolts of how the tooling and cultural practices governing our team and data helped us complete projects twice as fast with teams one-third our normal size.

The session concludes with why organizations should believe in and invest in true data governance, implementing governance tools and processes that are agile and collaborative rather than top-down.


Modern Architecture of a Cloud-Enabled Data and Analytics Platform

In today’s IT organization, whether it is the delivery of a sophisticated analytical model, a product advancement decision, or understanding the behavior of a customer, the fact remains that in every instance we rely on data to make good, informed decisions. Given this backdrop, an architecture that supports efficiently collecting data from a wide range of sources within the company remains an important goal of all data organizations.

In this session we will explain how Bayer has deployed a hybrid data platform that strives to integrate key existing legacy data systems while taking full advantage of what a modern cloud data platform has to offer in terms of scalability and flexibility. The session will elaborate on the use of its most significant component, Databricks, which provides not only a very sophisticated data pipelining solution but also a complete ecosystem for teams to create data and analytical solutions in a flexible and agile way.


Practical Data Governance in a Large Scale Databricks Environment

Learn from two governance and data practitioners what it takes to do data governance at enterprise scale. This is critical: the power of data science is the ability to tap into any type of data source and turn it into pure value, yet that power is often at odds with its key enablers, scale and governance, so we must keep finding new ways to bring the focus back to unlocking the insights inside the data. In this session, we will share new agile practices for rolling out governance policies that balance governance and scale. We will unpack how to deliver centralized, fine-grained governance for ML and data transformation workloads that actually empowers data scientists in an enterprise Databricks environment while ensuring privacy and compliance across hundreds of datasets. With automation being key to scale, we will also explore how we successfully automated security and governance.


Destination Lakehouse: All Your Data, Analytics and AI on One Platform

The data lakehouse is the future for modern data teams seeking to innovate with a data architecture that simplifies data workloads, eases collaboration, and maintains the flexibility and openness to stay agile as a company scales. The Databricks Lakehouse Platform realizes this idea by unifying analytics, data engineering, machine learning, and streaming workloads across clouds on one simple, open data platform. In this session, learn how the Databricks Lakehouse Platform can meet your needs for every data and analytics workload, with examples of real customer applications, reference architectures, and demos showcasing how you can create modern data solutions of your own.


Manufacturing Experience at Data + AI Summit 2022

Welcome data teams and executives in the Manufacturing industry! This year’s Data + AI Summit is jam-packed with talks, demos and discussions on the biggest innovations around improving manufacturing operations, building agile supply chains and enabling an AI-driven business. To help you take full advantage of the Manufacturing experience at Summit, we’ve curated all the programs in one place.

Highlights at this year’s Summit:

Manufacturing Industry Forum: Our capstone event for Manufacturing attendees at Summit, featuring keynotes and panel discussions with John Deere, Honeywell and Collins Aerospace, followed by networking. More details in the agenda below.

Manufacturing Lounge: Stop by our lounge located outside the Expo floor to meet with Databricks’ industry experts and see solutions from The Global Solution Integrator and Tredence.

Session Talks: Insightful talks on predicting and preventing machine downtime, real-time process optimization and leveraging informational and operational technology data to make enterprise decisions.


In a recent conversation with data warehousing legend Bill Inmon, I learned about a new way to structure your data warehouse and self-service BI environment called the Unified Star Schema. The Unified Star Schema is potentially a small revolution for data analysts and business users, as it allows them to easily join tables in a data warehouse or BI platform through a bridge. This gives users the ability to spend time and effort on discovering insights rather than dealing with data connectivity challenges and joining pitfalls. Behind this deceptively simple and ingenious invention is author and data modelling innovator Francesco Puppini. Francesco and Bill have co-written the book ‘The Unified Star Schema: An Agile and Resilient Approach to Data Warehouse and Analytics Design’ to allow data modellers around the world to take advantage of the Unified Star Schema and its possibilities.

Listen to this episode of Leaders of Analytics, where we explore:

What the Unified Star Schema is and why we need it
How Francesco came up with the concept of the USS
Real-life examples of how to use the USS
The benefits of a USS over a traditional star schema galaxy
How Francesco sees the USS and data warehousing evolving in the next 5-10 years to keep up with new demands in data science and AI, and much more.

Connect with Francesco
Francesco on LinkedIn: https://www.linkedin.com/in/francescopuppini/
Francesco's book on the USS: https://www.goodreads.com/author/show/20792240.Francesco_Puppini
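For readers curious how the bridge idea works mechanically, here is a minimal, deliberately simplified Python/pandas sketch (not from the book): two toy fact tables are unioned into a single bridge tagged with a "stage" column naming the source table, so analysts join dimensions to one table instead of stitching fact tables together. All table and column names are illustrative, and Puppini's full design carries considerably more structure than this.

```python
import pandas as pd

# Toy data: one dimension and two fact tables that would normally sit in
# separate stars. All names here are illustrative.
products = pd.DataFrame({"product_id": [1, 2], "product": ["Widget", "Gadget"]})
sales = pd.DataFrame({"product_id": [1, 1, 2], "sales_amt": [100, 150, 80]})
shipments = pd.DataFrame({"product_id": [1, 2], "shipped_qty": [5, 3]})

# Union-style bridge: one row per source row, tagged with the table it
# came from. Joining everything through the bridge (rather than joining
# fact tables to each other) is what sidesteps fan-trap double counting.
bridge = pd.concat(
    [
        sales.assign(stage="Sales"),
        shipments.assign(stage="Shipments"),
    ],
    ignore_index=True,
)

report = bridge.merge(products, on="product_id", how="left")
print(report[["stage", "product", "sales_amt", "shipped_qty"]])
```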

Episode Description

In one of my past memos to my list subscribers, I addressed some questions about agile and data products. Today, I expound on each of these and share some observations from my consulting work. In some enterprise orgs, mostly outside of the software industry, agile is still new and perceived as a panacea. In reality, it can just become a factory for shipping features and outputs faster, with positive outcomes and business value mostly absent. To increase the adoption of enterprise data products that have humans in the loop, it’s great to have agility in mind, but poor technology shipped faster isn’t going to serve your customers any better than what you’re doing now.

Here are the 10 reflections I’ll dive into in this episode:

You can't project manage your way out of a [data] product problem. 

The more you try to deploy agile at scale, take the trainings, and hire special "agilists", the more you're going to tend to measure success by how well you followed the Agile process.

Agile is great for software engineering, but nobody really wants "software engineering" given to them. They do care about the perceived reality of your data product.

Run from anyone who tells you that you shouldn't ever do any design, user research, or UX work "up front" because "that is waterfall."

Everybody else is also doing modified scrum (or modified _).

Marty Cagan talks about this a lot, but in short: while PMs (product managers) may own the backlog and priorities, what’s more important is that these PMs “own the problem” space as opposed to owning features or being solution-centered.

Before Agile can thrive, you will need strong senior leadership buy-in if you're going to do outcome-driven data product work.

There's a huge promise in the word "agile." You've been warned. 

If you don't have a plan for how you'll do discovery work, defining clear problem sets and success metrics, and understanding customers' feelings, pains, needs, wants, and the like, Agile won't deliver much improvement for data products (probably).

Getting comfortable with shipping half-right, half-quality, half-done is hard. 

Quotes from Today’s Episode

“You can get lost in following the process and thinking that as long as we do that, we’re going to end up with a great data product at the end.” - Brian (3:16)

“The other way to define clear success criteria for data products and hold yourself accountable to those on the user and business side is to really understand what does a positive outcome look like? How would we measure it?” - Brian (5:26)

“The most important thing is to know that the user experience is the perceived reality of the technology that you built. Their experience is the only reality that matters.” - Brian (9:22)

“Do the right amount of planning work upfront, have a strategy in place, make sure the team understands it collectively, and then you can do the engineering using agile.” - Brian (18:15)

“If you don’t have a plan for how you’ll do discovery work, defining clear problem sets and success metrics, and understanding customers’ feelings, pains, needs, wants, and all of that, then agile will not deliver increased adoption of your data products.” - Brian (36:07)

Links:
designingforanalytics.com: https://designingforanalytics.com
designingforanalytics.com/list: https://designingforanalytics.com/list

Today I’m talking about how to measure data product value through a user experience and business lens, and where leaders sometimes get it wrong. The first question came from my recent talk at the Data Summit conference, where an attendee asked how UX design fits into agile data product development. Additionally, a subscriber to my Insights mailing list recently asked how to measure adoption, utilization, and satisfaction of data products. So, we’ll jump into that juicy topic as well.

Answering these inquiries also got me on a related tangent about the UX challenges associated with abstracting your platform to support multiple, but often theoretical, user needs—and the importance of collaboration to ensure your whole team is operating from the same set of assumptions or definitions about success. I conclude the episode with the concept of “game framing” as a way to conceptualize these ideas at a high level. 

Key topics and cues in this episode include: 

An overview of the questions I received (:45)
Measuring change once you’ve established a benchmark (7:45)
The challenges of working in abstractions (abstracting your platform to facilitate theoretical future user needs) (10:48)
The value of having shared definitions and understanding the needs of different stakeholders/users/customers (14:36)
The importance of starting from the “last mile” (19:59)
The difference between success metrics and progress metrics (24:31)
How measuring feelings can be critical to measuring success (29:27)
“Game framing” as a way to understand tracking progress and success (31:22)

Quotes from Today’s Episode “Once you’ve got your benchmark in place for a data product, it’s going to be much easier to measure what the change is because you’ll know where you’re starting from.” - Brian (7:45)

“When you’re deploying technology that’s supposed to improve people’s lives so that you can get some promise of business value downstream, this is not a generic exercise. You have to go out and do the work to understand the status quo and what the pain is right now from the user's perspective.” - Brian (8:46)

“That user perspective—perception even—is all that matters if you want to get to business value. The user experience is the perceived quality, usability, and utility of the data product.” - Brian (13:07)

“A data product leader’s job should be to own the problem and not just the delivery of data product features, applications or technology outputs. ” - Brian (26:13)

“What are we keeping score of? Different stakeholders are playing different games so it’s really important for the data product team not to impose their scoring system (definition of success) onto the customers, or the users, or the stakeholders.” - Brian (32:05)

“We always want to abstract once we have a really good understanding of what people do, as it’s easier to create more user-centered abstractions that will actually answer real data questions later on. ” - Brian (33:34)

Links: https://designingforanalytics.com/community

We talked about:

Christopher’s background
The essence of DataOps
Also known as Agile Analytics Operations or DevOps for Data Science
Defining processes and automating them (defining “done” and “good”)
The balance between heroism and fear (avoiding deferred value)
The Lean approach
Avoiding silos
The 7 steps to DataOps
Wanting to become replaceable
DataOps is doable
Testing tools
DataOps vs MLOps
The Head Chef at Data Kitchen
What’s grilling at Data Kitchen?
The DataOps Cookbook

Links:

DataOps Manifesto website: https://dataopsmanifesto.org/en/
DataOps Cookbook: https://dataops.datakitchen.io/pf-cookbook
Recipes for DataOps Success: https://dataops.datakitchen.io/pf-recipes-for-dataops-success
DataOps Certification Course: https://info.datakitchen.io/training-certification-dataops-fundamentals
DataOps Blog: https://datakitchen.io/blog/
DataOps Maturity Model: https://datakitchen.io/dataops-maturity-model/
DataOps Webinars: https://datakitchen.io/webinars/

Join DataTalks.Club: https://datatalks.club/slack.html  

Our events: https://datatalks.club/events.html

Mastering Snowflake Solutions: Supporting Analytics and Data Sharing

Design for large-scale, high-performance queries using Snowflake’s query processing engine to empower data consumers with timely, comprehensive, and secure access to data. This book also helps you protect your most valuable data assets using built-in security features such as end-to-end encryption for data at rest and in transit. It demonstrates key features in Snowflake and shows how to exploit those features to deliver a personalized experience to your customers. It also shows how to ingest the high volumes of both structured and unstructured data that are needed for game-changing business intelligence analysis.

Mastering Snowflake Solutions starts with a refresher on Snowflake’s unique architecture before getting into the advanced concepts that make Snowflake the market-leading product it is today. Progressing through each chapter, you will learn how to leverage storage, query processing, cloning, data sharing, and continuous data protection features. This approach allows for greater operational agility in responding to the needs of modern enterprises, for example in supporting agile development techniques via database cloning. The practical examples and in-depth background on theory in this book help you unleash the power of Snowflake in building a high-performance system with little to no administrative overhead. The result of your reading will be a deep understanding of Snowflake that enables you to take full advantage of its architecture and deliver valuable analytics insights to your business.

What You Will Learn

Optimize performance and costs associated with your use of the Snowflake data platform
Enable data security to help in complying with consumer privacy regulations such as CCPA and GDPR
Share data securely both inside your organization and with external partners
Gain visibility into each interaction with your customers using continuous data feeds from Snowpipe
Break down data silos to gain complete visibility into your business-critical processes
Transform customer experience and product quality through real-time analytics

Who This Book Is For

Data engineers, scientists, and architects who have had some exposure to the Snowflake data platform or bring some experience from working with another relational database. This book is for those beginning to struggle with new challenges as their Snowflake environment begins to mature, becoming more complex with ever-increasing amounts of data, users, and requirements. New problems require a new approach, and this book aims to arm you with the practical knowledge required to take advantage of Snowflake’s unique architecture to get the results you need.
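The “agile development techniques via database cloning” mentioned above refers to Snowflake’s zero-copy clone feature. As a rough illustration (not from the book), here is a hedged Python sketch using the snowflake-connector-python package; the account credentials and database names are placeholders.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials; substitute your own account details.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
)
try:
    cur = conn.cursor()
    # Zero-copy clone: the clone initially shares the source database's
    # underlying storage, so a full dev copy of production appears in
    # seconds and costs extra storage only as the two diverge.
    cur.execute("CREATE DATABASE dev_db CLONE prod_db")
finally:
    conn.close()
```

Because the clone is writable and disposable, a team can hand every developer or CI run its own copy of production-shaped data, which is the operational agility the book is pointing at.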

We talked about:

Geo’s background
Technical Product Manager
Building ML platform
Working on internal projects
Prioritizing the backlog
Defining the problems
Observability metrics
Avoiding jumping into “solution mode”
Breaking down the problem
Important skills for product managers
The importance of a technical background
Data Lead vs Staff Data Scientist vs Data PM
Approvals and rollout
Engineering/platform teams
Data scientists’ role in the engineering team
Scrum and Agile in data science
Transitioning from Data Scientist to Technical PM
Books to read for the transition
Transitioning for non-technical people
Doing user research
Quality assurance in ML
Advice for supporting an ML team as a Scrum master

Links:

Geo's LinkedIn: https://www.linkedin.com/in/geojolly/
Product School community: https://productschool.com/
The Lean Startup: http://theleanstartup.com/
Netflix CPO Medium blog: https://gibsonbiddle.medium.com/
Glovo is hiring: https://jobs.glovoapp.com/en/?d=4040726002

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Want to be featured as a guest on Making Data Simple? Reach out to us at [[email protected]] and tell us why you should be next.

Abstract

Making Data Simple Podcast is hosted by Al Martin, VP, IBM Expert Services Delivery, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. This week on Making Data Simple, we have Wennie Allen, Business Director, Data Science and AI Elite Team, and Carlo Appugliese, Program Director, Data & AI, Data Science Elite Team. This week we talk about agile AI and remote data science. Carlo discusses his book, while Wennie talks about the secret sauce.

Show Notes

2:56 – How do we get people to adopt AI?
4:49 – Carlo’s book
6:15 – Why do we call it agile AI?
11:12 – Six weeks to get it done!
15:07 – Where are we at with AI?
16:54 – Problems with AI today
22:05 – Secret sauce
26:31 – Process and methodology
30:22 – Talk data
34:19 – Integration, trust, and quick deployment
36:10 – Working remote
39:40 – How do you engage?

Remote Data Science Website: http://ibm.biz/RemoteDataScience
Agile AI Blog: http://ibm.biz/DSE-AgileAI-Blog
Agile AI Book: http://ibm.biz/DSE-AgileAI
Community: http://ibm.biz/DSE-Community
Chat with the Lab: http://ibm.biz/DSE-ChatWithTheLab
Consultation: http://ibm.biz/DSE-Consultation

Blogs:
“Virtual Data Science can rise to the challenge in unprecedented times” by Wennie Allen
“Data Science and AI from anywhere...” by Carlo Appugliese

Wennie on LinkedIn: linkedin.com/in/wennie-allen
Carlo on LinkedIn: linkedin.com/in/carloappugliese

Connect with the Team
Producer Kate Brown - LinkedIn.
Producer Steve Templeton - LinkedIn.
Host Al Martin - LinkedIn and Twitter.

Pandas in Action

Take the next steps in your data science career! This friendly and hands-on guide shows you how to start mastering Pandas with skills you already know from spreadsheet software.

In Pandas in Action you will learn how to:

Import datasets, identify issues with their data structures, and optimize them for efficiency
Sort, filter, pivot, and draw conclusions from a dataset and its subsets
Identify trends from text-based and time-based data
Organize, group, merge, and join separate datasets
Use a GroupBy object to store multiple DataFrames

Pandas has rapidly become one of Python's most popular data analysis libraries. In Pandas in Action, a friendly and example-rich introduction, author Boris Paskhaver shows you how to master this versatile tool and take the next steps in your data science career. You’ll learn how easy Pandas makes it to efficiently sort, analyze, filter and munge almost any type of data.

About the Technology

Data analysis with Python doesn’t have to be hard. If you can use a spreadsheet, you can learn pandas! While its grid-style layouts may remind you of Excel, pandas is far more flexible and powerful. This Python library quickly performs operations on millions of rows, and it interfaces easily with other tools in the Python data ecosystem. It’s a perfect way to up your data game.

About the Book

Pandas in Action introduces Python-based data analysis using the amazing pandas library. You’ll learn to automate repetitive operations and gain deeper insights into your data that would be impractical—or impossible—in Excel. Each chapter is a self-contained tutorial. Realistic downloadable datasets help you learn from the kind of messy data you’ll find in the real world.

What's Inside

Organize, group, merge, split, and join datasets
Find trends in text-based and time-based data
Sort, filter, pivot, optimize, and draw conclusions
Apply aggregate operations

About the Reader

For readers experienced with spreadsheets and basic Python programming.

About the Author

Boris Paskhaver is a software engineer, Agile consultant, and online educator. His programming courses have been taken by 300,000 students across 190 countries.

Quotes

Of all the introductory pandas books I’ve read—and I did read a few—this is the best, by a mile. - Erico Lendzian, idibu.com
This approachable guide will get you up and running quickly with all the basics you need to analyze your data. - Jonathan Sharley, SiriusXM Media
Understanding and putting in practice the concepts of this book will help you increase productivity and make you look like a pro. - Jose Apablaza, Steadfast Networks
Teaches both novice and expert Python users the essential concepts required for data analysis and data science. - Ben McNamara, DataGeek
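To give a flavor of the operations the blurb lists, here is a small illustrative snippet (not from the book) using a made-up dataset; it shows the filter, group, and pivot operations mentioned above.

```python
import pandas as pd

# Hypothetical data standing in for a spreadsheet export.
df = pd.DataFrame({
    "city": ["Austin", "Boston", "Austin", "Boston"],
    "year": [2020, 2020, 2021, 2021],
    "revenue": [120.0, 95.5, 140.25, 101.0],
})

# Filter, group, and pivot -- the bread-and-butter operations the book teaches.
recent = df[df["year"] >= 2021]                     # filter rows
by_city = recent.groupby("city")["revenue"].sum()   # aggregate with GroupBy
pivoted = df.pivot_table(index="city", columns="year", values="revenue")

print(by_city.sort_values(ascending=False))
print(pivoted)
```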

Cloud Native Integration with Apache Camel: Building Agile and Scalable Integrations for Kubernetes Platforms

Address the most common integration challenges by understanding the ins and outs of the choices, with the solutions exemplified by practical examples of how to create cloud native applications using Apache Camel. Camel will be our main tool, but we will also see some complementary tools and plugins that can make our development and testing easier, such as Quarkus, and tools for more specific use cases, such as Apache Kafka and Keycloak. You will learn to connect with databases, create REST APIs, transform data, connect with message-oriented middleware (MOM), secure your services, and test using Camel. You will also learn software architecture patterns for integration and how to leverage container platforms such as Kubernetes. This book is suitable for those who are eager to learn an integration tool that fits the Kubernetes world, and who want to explore the integration challenges that can be solved using containers.

What You Will Learn

Focus on how to solve integration challenges
Understand the basics of Quarkus, as it’s the foundation for the application
Acquire a comprehensive view of Apache Camel
Deploy an application in Kubernetes
Follow good practices

Who This Book Is For

Java developers looking to learn Apache Camel; Apache Camel developers looking to learn more about Kubernetes deployments; software architects looking to study integration patterns for Kubernetes-based systems; system administrators (operations teams) looking to get a better understanding of how technologies are integrated.

Data Modeling with SAP BW/4HANA 2.0: Implementing Agile Data Models Using Modern Modeling Concepts

Gain practical guidance for implementing data models on the SAP BW/4HANA platform using modern modeling concepts. You will walk through various modeling scenarios such as exposing HANA tables and views through BW/4HANA, creating virtual and hybrid data models, and integrating SAP and non-SAP data into a single data model. Data Modeling with SAP BW/4HANA 2.0 gives you the skills you need to use the new SAP BW/4HANA features and objects, covers modern modeling concepts, and equips you with the practical knowledge of how to use the best of the HANA and BW/4HANA worlds.

What You Will Learn

Discover the new modeling features in SAP BW/4HANA
Combine SAP HANA and SAP BW/4HANA artifacts
Leverage virtualization when designing and building data models
Build hybrid data models combining InfoObject, OpenODS, and a field-based approach
Integrate SAP and non-SAP data into a single model

Who This Book Is For

BI consultants, architects, developers, and analysts working in the SAP BW/4HANA environment.

Data Science at the Command Line, 2nd Edition

This thoroughly revised guide demonstrates how the flexibility of the command line can help you become a more efficient and productive data scientist. You'll learn how to combine small yet powerful command-line tools to quickly obtain, scrub, explore, and model your data. To get you started, author Jeroen Janssens provides a Docker image packed with over 100 Unix power tools--useful whether you work with Windows, macOS, or Linux. You'll quickly discover why the command line is an agile, scalable, and extensible technology. Even if you're comfortable processing data with Python or R, you'll learn how to greatly improve your data science workflow by leveraging the command line's power. This book is ideal for data scientists, analysts, engineers, system administrators, and researchers.

Obtain data from websites, APIs, databases, and spreadsheets
Perform scrub operations on text, CSV, HTML, XML, and JSON files
Explore data, compute descriptive statistics, and create visualizations
Manage your data science workflow
Create your own tools from one-liners and existing Python or R code
Parallelize and distribute data-intensive pipelines
Model data with dimensionality reduction, regression, and classification algorithms
Leverage the command line from Python, Jupyter, R, RStudio, and Apache Spark
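As a taste of the last bullet, here is a hedged sketch (not from the book) of driving a Unix pipeline from Python; logs.csv is a hypothetical input file, and the pipeline assumes standard POSIX tools are available.

```python
import subprocess

# Count the five most frequent lines in a (hypothetical) file by chaining
# small Unix tools, then capture the pipeline's output back into Python.
pipeline = "sort logs.csv | uniq -c | sort -rn | head -n 5"
result = subprocess.run(pipeline, shell=True, capture_output=True, text=True)
print(result.stdout)
```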

Knowledge Graphs

Applying knowledge in the right context is the most powerful lever businesses can use to become agile, creative, and resilient. Knowledge graphs add context, meaning, and utility to business data. They drive intelligence into data for unparalleled automation and visibility into processes, products, and customers. Businesses use knowledge graphs to anticipate downstream effects, make decisions based on all relevant information, and quickly respond to dynamic markets. In this report for chief information and data officers, Jesús Barrasa, Amy E. Hodler, and Jim Webber from Neo4j show how to use knowledge graphs to gain insights, reveal a flexible and intuitive representation of complex data relationships, and make better predictions based on holistic information.

Explore knowledge graph mechanics and common organizing principles
Build and exploit a connected representation of your enterprise data environment
Use decisioning knowledge graphs to explore the advantages of adding relationships to data analytics and data science
Conduct virtual testing using software versions of real-world processes
Deploy knowledge graphs for more trusted data, higher accuracies, and better reasoning for contextual AI
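To make "adding relationships to data" concrete, here is an illustrative Python sketch (not from the report) using the official neo4j driver. It assumes a locally running Neo4j instance; the labels, properties, and credentials are made up.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Connect a customer to a product and the product to a supplier, so
    # downstream effects (e.g., a supplier delay) become traversable.
    session.run(
        "MERGE (c:Customer {name: $c}) "
        "MERGE (p:Product {sku: $p}) "
        "MERGE (s:Supplier {name: $s}) "
        "MERGE (c)-[:ORDERED]->(p)-[:SUPPLIED_BY]->(s)",
        c="Acme Corp", p="SKU-42", s="Widgets Ltd",
    )
    # A contextual question: which customers are exposed to this supplier?
    result = session.run(
        "MATCH (c:Customer)-[:ORDERED]->(:Product)"
        "-[:SUPPLIED_BY]->(s:Supplier {name: $s}) "
        "RETURN c.name AS customer",
        s="Widgets Ltd",
    )
    print([record["customer"] for record in result])

driver.close()
```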

We talked about:

Ben’s background
Building solutions for customers
Why projects don’t make it to production
Why do people choose overcomplicated solutions?
The dangers of isolating data science from the business unit
The importance of being able to explain things
Maximizing chances of making it into production
The IKEA effect
Risks of implementing novel algorithms
If it can be done simply – do that first
Don’t become the guinea pig for someone’s white paper
The importance of stat skills and coding skills
Structuring an agile team for ML work
Timeboxing research
Mentoring
Ben’s book
‘Uncool techniques’ at AI-First companies
Should managers learn data science?
Do data scientists need to specialize to be successful?

Links:

Ben's book: https://www.manning.com/books/machine-learning-engineering-in-action (get 35% off with code "ctwsummer21")

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

SAP S/4HANA Embedded Analytics: Experiences in the Field

Imagine you are a business user, consultant, or developer about to enter an SAP S/4HANA implementation project. You are well-versed with SAP’s product portfolio and you know that the preferred reporting option in S/4HANA is embedded analytics. But what exactly is embedded analytics? And how can it be implemented? And who can do it: a business user, or a functional consultant specialized in financial or logistics processes? Or does a business intelligence expert or a programmer need to be involved? Good questions! This book will answer these questions, one by one. It will also take you on the same journey that the implementation team needs to follow for every reporting requirement that pops up: start by assessing a more standard option and only move on to a less standard option if the requirement cannot be fulfilled. In consecutive chapters, analytical apps delivered by SAP, apps created using Smart Business Services, and analytical queries developed either using tiles or in a development environment are explained in detail with practical examples. The book also explains which option is preferred in which situation. The book covers topics such as in-memory computing, cloud, UX, OData, agile development, and more. Author Freek Keijzer writes from the perspective of an implementation consultant, focusing on functionality that has proven itself useful in the field. Practical examples are abundant, ranging from “codeless” to “hardcore coding.”

What You Will Learn

Know the difference between static reporting and interactive querying on real-time data
Understand which options are available for analytics in SAP S/4HANA
Understand which option to choose in which situation
Know how to implement these options

Who This Book Is For

SAP power users, functional consultants, developers