talk-data.com

Topic: data (2093 tagged)

Activity trend: peak of 3 per quarter, 2020-Q1 to 2026-Q1

Activities

Filtering by: O'Reilly Data Science Books

Supply Chains: A Manager's Guide

“An excellent summary of the state of supply chain management going into the twenty-first century. Explains the essential concepts clearly and offers practical, down-to-earth advice for making supply chains more efficient and adaptive. Truly a survival guide for executives as they struggle to cope with the increasing competition between supply chains.” — Christian Knoll, Vice President of Global Supply Chain Management, SAP AG

“Through real-world case studies and graphic illustrations, David Taylor clearly demonstrates the bottom-line benefits of managing the supply chain effectively. Although the book is written for managers, I recommend it for everyone from the executive suite to the shipping floor because they all have to work together to master the supply chain. But beware—you can expect many passionate employees demanding improvements in your company’s supply chain after reading this book!” — David Myers, President, WinfoSoft Inc., Former Board Member of Supply Chain Council

“A comprehensive, thoroughly researched, and well-designed book that gives managers the information they need in a highly readable form. I am already starting to use the techniques in this book to improve our international distribution system.” — Jim Muller, Vice President of Produce Sales, SoFresh Produce

“Supply chain management is a deceptively deep subject. Simple business practices combine to form complex systems that seem to defy rational analysis: Companies that form trading partnerships continue to compete despite their best efforts to cooperate; small variations in consumer buying create devastating swings in upstream demand, and so on. In his trademark fashion, Taylor clearly reveals the hidden logic at work in your supply chain and gives you the practical tools you need to make better management decisions. A must-read for every manager who affects a supply chain, and in today's marketplace there are few managers who are exempt from this requirement.” — Adrian J. Bowles, Ph.D., President, CoSource.net

“David Taylor has done it again. With his new book, David makes supply chain management easy to grasp for the working manager, just as he did with his earlier guides to business technology. If you work for a company that is part of a supply chain, you need this book.” — Dirk Riehle, Ph.D.

“David Taylor has done a masterful job of defining the core issues in supply chain management without getting trapped in the quicksand of jargon. This concise book is well written, highly informative, and easy to read.” — Marcia Robinson, President, E-Business Strategies, author of Services Blueprint: Roadmap

“Taylor has done a tremendous job of giving readers an intuitive grasp of a complicated subject. If you’re new to supply chains, this book will give you an invaluable map of the territory. If you're already among the initiated, it will crystallize your insights and help you make better decisions. In either case, you can only come out ahead by reading this book.” — Kevin Dick, Founder of Kevin Dick Associates, author of XML: A Manager’s Guide

“My motto for compressing data is ‘squeeze it til it gags.’ In the current business climate, that’s what you have to do to costs, and Taylor shows you many ways to squeeze costs out of your supply chain. He also writes with the same economy: This book contains exactly what you need to manage your supply chain effectively. Nothing is missing, and nothing is extra.” — Charles Ashbacher, President, Charles Ashbacher Technologies

Today's fiercest business battles are taking place between competitors' supply chains, with victory dependent on finding a way to deliver products to customers more quickly and efficiently than the competition. For proof, just look to Dell and Amazon.com, both of which revolutionized their industries by changing how companies produce, distribute, and sell physical goods. But they're hardly alone. By revamping their supply chains, Siemens CT improved lead time from six months to two weeks, Gillette slashed $400 million of inventory, and Chrysler saved $1.7 billion a year. It's a high-stakes game, and you don't have a lot of choice about playing: If your company touches a physical product, it's part of a supply chain--and your success ultimately hangs on the weakest link in that chain. In Supply Chains: A Manager's Guide, best-selling author David Taylor explains how to assemble a killer supply chain using the knowledge, technology, and tools employed in supply-chain success stories. Using his signature fast-track summaries and informative graphics, Taylor offers a clear roadmap to understanding and solving the complex problems of supply-chain management. Modern manufacturing has driven down the time and cost of the production process, leaving supply chains as the final frontier for cost reduction and competitive advantage. Supply Chains: A Manager's Guide will quickly give managers the foundation they need to contribute effectively to their company's supply-chain success.

Database Modeling with Microsoft® Visio for Enterprise Architects

This book is for database designers and database administrators using Visio, which is the database component of Microsoft's Visual Studio .NET for Enterprise Architects suite, also included in MSDN subscriptions. This is the only guide to this product that tells DBAs how to get their job done. Although primarily focused on tool features, the book also provides an introduction to data modeling, and includes practical advice on managing database projects. The principal author was the program manager of VEA's database modeling solutions.

· Explains how to model databases with Microsoft® Visio for Enterprise Architects (VEA), focusing on tool features.
· Provides a platform-independent introduction to data modeling using both Object Role Modeling (ORM) and Entity Relationship Modeling (ERM), and includes practical advice on managing database projects.
· Additional ORM models, course notes, and add-ins available online.

Bioinformatics

Life science data integration and interoperability is one of the most challenging problems facing bioinformatics today. In the current age of the life sciences, investigators have to interpret many types of information from a variety of sources: lab instruments, public databases, gene expression profiles, raw sequence traces, single nucleotide polymorphisms, chemical screening data, proteomic data, putative metabolic pathway models, and many others. Unfortunately, scientists are not currently able to easily identify and access this information because of the variety of semantics, interfaces, and data formats used by the underlying data sources. Bioinformatics: Managing Scientific Data tackles this challenge head-on by discussing the current approaches and variety of systems available to help bioinformaticians with this increasingly complex issue. The heart of the book lies in the collaboration efforts of eight distinct bioinformatics teams that describe their own unique approaches to data integration and interoperability. Each system receives its own chapter where the lead contributors provide precious insight into the specific problems being addressed by the system, why the particular architecture was chosen, and details on the system's strengths and weaknesses. In closing, the editors provide important criteria for evaluating these systems that bioinformatics professionals will find valuable.

* Provides a clear overview of the state-of-the-art in data integration and interoperability in genomics, highlighting a variety of systems and giving insight into the strengths and weaknesses of their different approaches.
* Discusses shared vocabulary, design issues, complexity of use cases, and the difficulties of transferring existing data management approaches to bioinformatics systems, which serves to connect computer and life scientists.
* Written by the primary contributors of eight reputable bioinformatics systems in academia and industry including: BioKris, TAMBIS, K2, GeneExpress, P/FDM, MBM, SDSC, SRS, and DiscoveryLink.

BLAST

Sequence similarity is a powerful tool for discovering biological function. Just as the ancient Greeks used comparative anatomy to understand the human body and linguists used the Rosetta stone to decipher Egyptian hieroglyphs, today we can use comparative sequence analysis to understand genomes. BLAST (Basic Local Alignment Search Tool) is a sophisticated software package for rapid searching of nucleotide and protein databases. It is one of the most important software packages used in sequence analysis and bioinformatics. Most users of BLAST, however, seldom move beyond the program's default parameters, and never take advantage of its full power. BLAST is the only book completely devoted to this popular suite of tools. It offers biologists, computational biology students, and bioinformatics professionals a clear understanding of BLAST as well as the science it supports. This book shows you how to move beyond the default parameters, get specific answers using BLAST, and interpret your results. The book also contains tutorial and reference sections covering NCBI-BLAST and WU-BLAST, background material to help you understand the statistics behind BLAST, Perl scripts to help you prepare your data and analyze your results, and a wealth of tips and tricks for configuring BLAST to meet your own research needs. Some of the topics covered include:

* BLAST basics and the NCBI web interface
* How to select appropriate search parameters
* BLAST programs: BLASTN, BLASTP, BLASTX, TBLASTN, TBLASTX, PHI-BLAST, and PSI-BLAST
* Detailed BLAST references, including NCBI-BLAST and WU-BLAST
* Understanding biological sequences
* Sequence similarity, homology, scoring matrices, scores, and evolution
* Sequence alignment
* Calculating BLAST statistics
* Industrial-strength BLAST, including developing applications with Perl and BLAST

BLAST is the only comprehensive reference with detailed, accurate information on optimizing BLAST searches for high-throughput sequence analysis. This is a book that any biologist should own.
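
The scoring ideas the book explains (match/mismatch scores, percent identity) can be sketched in a few lines. This is a conceptual illustration only; the score values are made up, not BLAST's defaults, and the code is not part of BLAST or the book:

```python
# Conceptual sketch of ungapped alignment scoring and percent identity,
# two of the similarity measures discussed in sequence analysis.
# Match/mismatch values here are illustrative, not BLAST defaults.

def ungapped_score(seq1, seq2, match=1, mismatch=-2):
    """Score an ungapped alignment position by position."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be equal length for ungapped scoring")
    return sum(match if a == b else mismatch for a, b in zip(seq1, seq2))

def percent_identity(seq1, seq2):
    """Percentage of identical positions, a common similarity summary."""
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

score = ungapped_score("GATTACA", "GACTACA")   # 6 matches, 1 mismatch
ident = percent_identity("GATTACA", "GACTACA")
```

Real BLAST adds substitution matrices, gap penalties, and fast heuristics on top of this basic idea.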

Practical RDF

The Resource Description Framework (RDF) is a structure for describing and interchanging metadata on the Web--anything from library catalogs and worldwide directories to bioinformatics, Mozilla internal data structures, and knowledge bases for artificial intelligence projects. RDF provides a consistent framework and syntax for describing and querying data, making it possible to share website descriptions more easily. RDF's capabilities, however, have long been shrouded by its reputation for complexity and a difficult family of specifications. Practical RDF breaks through this reputation with immediate and solvable problems to help you understand, master, and implement RDF solutions.

Practical RDF explains RDF from the ground up, providing real-world examples and descriptions of how the technology is being used in applications like Mozilla, FOAF, and Chandler, as well as infrastructure you can use to build your own applications. This book cuts to the heart of the W3C's often obscure specifications, giving you tools to apply RDF successfully in your own projects.

The first part of the book focuses on the RDF specifications. After an introduction to RDF, the book covers the RDF specification documents themselves, including RDF Semantics and Concepts and Abstract Model specifications, RDF constructs, and the RDF Schema. The second section focuses on programming language support, and the tools and utilities that allow developers to review, edit, parse, store, and manipulate RDF/XML. Subsequent sections focus on RDF's data roots, programming and framework support, and practical implementation and use of RDF and RDF/XML.

If you want to know how to apply RDF to information processing, Practical RDF is for you. Whether your interests lie in large-scale information aggregation and analysis or in smaller-scale projects like weblog syndication, this book will provide you with a solid foundation for working with RDF.
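
The triple model at RDF's core can be illustrated without any toolkit: a statement is a (subject, predicate, object) triple, and a query is a pattern with wildcards. This sketch uses hypothetical data and a hand-rolled matcher, not any RDF library's API:

```python
# Minimal sketch of the RDF data model: a store of (subject, predicate,
# object) triples queried by pattern, with None acting as a wildcard.
# The identifiers and data are hypothetical.

triples = {
    ("ex:PracticalRDF", "dc:creator", "ex:Powers"),
    ("ex:PracticalRDF", "dc:title", "Practical RDF"),
    ("ex:Powers", "foaf:name", "Shelley Powers"),
}

def match(pattern, store):
    """Return all triples matching a (s, p, o) pattern; None matches anything."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Every statement whose subject is the book:
about_book = match(("ex:PracticalRDF", None, None), triples)
```

Real RDF toolkits add the same pattern-matching idea plus RDF/XML parsing, serialization, and persistent storage.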

Up and Running with DB2 UDB ESE: Partitioning for Performance in an e-Business Intelligence World

Data warehouses in the 1990s were for the privileged few business analysts. Business Intelligence is now being democratized: it is shared with rank-and-file employees, delivered through Web portals, and demands higher levels of RDBMS scalability and ease of use. To support this emerging e-Business Intelligence world, the challenges facing an enterprise's centralized data warehouse RDBMS technology are scalability, performance, availability, and smart manageability. This IBM Redbooks publication focuses on the innovative technical functionalities of DB2 UDB ESE V8.1. It positions the new functionalities so you can understand and evaluate their applicability in your own enterprise data warehouse environment, and get started prioritizing and implementing them. Please note that the additional material referenced in the text is not available from IBM.

Service- and Component-based Development: Using Select Perspective™ and UML

This book presents the approaches and practices for the construction of software systems using Select Perspective. It details the key workflows for a contemporary approach to supplying, managing, and consuming software assets to deliver business IT systems. The book provides a comprehensive development lifecycle (Select Perspective) for component- and web service-based IT solutions that supports parallel development activities to reduce time-to-market. It introduces Select Perspective as a Supply, Manage, Consume software development process, and provides real-world project experience and examples.

Technology: Unlike other development processes, Select Perspective is focused on a small number of key deliverables within an organizational framework of suppliers and consumers of solution-driven components.

Audience: For CIOs, IT Directors, Project Managers, and solution developers. Level: Intermediate.

Hedley Apperly is Vice President, Product Marketing and Development, with Aonix. Hedley has graduate and post-graduate qualifications in production engineering, business computing, and strategic marketing. His 19 years of experience in IT have focused on the design and development of relational, object-oriented, and component-based systems. He is also a committee member of the British Computer Society's (BCS) Object-Oriented Programming and Systems (OOPS) specialist group. As well as his involvement in Component Based Development for Enterprise Systems, published by Cambridge University Press, Hedley co-authored Component Based Software Engineering: Putting the Pieces Together, published by Addison Wesley.

Ralph Hofman is Manager of Services (Benelux) at Aonix. Ralph studied computer science at the University of Twente in the Netherlands. He started as a freelance consultant for different companies and specialized in methods and tools for system development. Ralph initiated component-based development as a way of working within a major international bank. He joined Aonix in 2000, where he is responsible for consultancy and services in the Benelux.

Steve Latchem is Director of International Services with Aonix. Steve has been in the IT industry for over 18 years, holding positions in large consultancy groups and IT departments ranging from business analyst to object-oriented consultant, architect, and project manager. Steve now directs the global professional services group at Aonix. Steve collaborated on AntiPatterns: Refactoring Software & Projects in Crisis and co-authored Component Based Software Engineering: Putting the Pieces Together, published by Addison Wesley.

Barry Maybank is Principal Consultant with Aonix. Barry has been in the IT industry for over 17 years, holding positions in consultancy groups, IT product companies, and engineering companies, with roles ranging from software engineer to architect.

Barry McGibbon is Associate Consultant with Aonix. Barry has worked in the IT industry for over 35 years, holding very senior management positions with leading computing services providers. He has been involved in component-based development initiatives for significant enterprises in the UK and Europe. As well as being a frequent contributor to major journals, he is author of Managing Your Move to Object Technology: Guidelines & Strategies for a Smooth Transition, published by SIGS Books Inc. He is also Technical Chair for Europe's largest CBD/OO conference and a series editor for Cambridge University Press.

David Piper is a Principal Consultant with Aonix. David has been working in the IT industry for over 20 years, holding positions in manufacturing, financial services, and IT consultancy, with roles ranging from analyst to quality assurance manager and project manager.

Chris Simons is a Senior Consultant with Aonix. Christopher has been in the IT industry for over 12 years, holding positions in the real-time, defense, retail, public sector, and finance domains, with roles ranging from software engineer and lead analyst to technical architect. He has also taught object orientation and development process at various universities as a visiting lecturer.

Real R & D Options

Real R&D options are among the earliest modelled real options, with now ten primary practical uses: general R&D planning, planning R&D in stages, evaluating test information, new product development timing, operations, abandonment, risk sharing, market funding, industry strategy and regulation. This book was partly motivated by requests to identify and develop real option models for R&D in telecommunications, petroleum technology and biotechnology. Nine new models cover information and implementation costs, analytical solutions for mean reverting, or fat tailed revenues, endogenous learning and exogenous and experiential shocks, American sequential options, and innovator advantages. Four new applications include forward start development options, exploration options, innovation with information costs, and innovator's real values with changing market share. R&D directors and researchers will find several uses for these models: · general R&D planning · evaluating test information · new product development timing · risk sharing · industry strategy and regulation A practical guide to how organizations can use Real Option techniques to effectively value research and development by companies Provides a rigorous theoretical underpinning of the use of Real Option techniques *Real Options applications are orientated around the economies of North America, Europe and Asia, for an international perspective

Random Processes: Filtering, Estimation, and Detection

An understanding of random processes is crucial to many engineering fields, including communication theory, computer vision, and digital signal processing in electrical and computer engineering, and vibrational theory and stress analysis in mechanical engineering. The filtering, estimation, and detection of random processes in noisy environments are critical tasks necessary in the analysis and design of new communications systems and useful signal processing algorithms. Random Processes: Filtering, Estimation, and Detection clearly explains the basics of probability and random processes and details modern detection and estimation theory to accomplish these tasks. In this book, Lonnie Ludeman, an award-winning authority in digital signal processing, joins the fundamentals of random processes with the standard techniques of linear and nonlinear systems analysis and hypothesis testing to give signal estimation techniques, specify optimum estimation procedures, provide optimum decision rules for classification purposes, and describe performance evaluation definitions and procedures for the resulting methods. The text covers four main, interrelated topics:

* Probability and characterizations of random variables and random processes
* Linear and nonlinear systems with random excitations
* Optimum estimation theory, including both the Wiener and Kalman filters
* Detection theory for both discrete and continuous time measurements

Lucid, thorough, and well-stocked with numerous examples and practice problems that emphasize the concepts discussed, Random Processes: Filtering, Estimation, and Detection is an understandable and useful text ideal as both a self-study guide for professionals in the field and as a core text for graduate students.
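
The Kalman filter covered in the estimation-theory material can be illustrated in its simplest scalar form. This is a generic sketch of the predict/update recursion, not the book's notation or code:

```python
# Scalar Kalman filter: estimate a (nearly) constant state from noisy
# measurements. q is the process-noise variance, r the measurement-noise
# variance; x0 and p0 are the initial estimate and its variance.

def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement residual
        p = (1 - k) * p              # uncertainty shrinks after update
        estimates.append(x)
    return estimates

# Noise-free demo: repeated measurements of a constant value 5.0.
est = kalman_1d([5.0] * 50, q=0.0, r=1.0)
```

With noisy measurements the same recursion trades off the model prediction against each new observation according to the two variances, which is the optimality property the book derives.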

Process Control: Modeling, Design, and Simulation

Master process control hands-on, through practical examples and MATLAB® simulations.

This is the first complete introduction to process control that fully integrates software tools—enabling professionals and students to master critical techniques hands-on, through computer simulations based on the popular MATLAB environment. Process Control: Modeling, Design, and Simulation teaches the field's most important techniques, behaviors, and control problems through practical examples, supplemented by extensive exercises—with detailed derivations, relevant software files, and additional techniques available on a companion Web site. Coverage includes:

* Fundamentals of process control and instrumentation, including objectives, variables, and block diagrams
* Methodologies for developing dynamic models of chemical processes
* Dynamic behavior of linear systems: state space models, transfer function-based models, and more
* Feedback control; proportional, integral, and derivative (PID) controllers; and closed-loop stability analysis
* Frequency response analysis techniques for evaluating the robustness of control systems
* Improving control loop performance: internal model control (IMC), automatic tuning, gain scheduling, and enhancements to improve disturbance rejection
* Split-range, selective, and override strategies for switching among inputs or outputs
* Control loop interactions and multivariable controllers
* An introduction to model predictive control (MPC)

Bequette walks step by step through the development of control instrumentation diagrams for an entire chemical process, reviewing common control strategies for individual unit operations, then discussing strategies for integrated systems. The book also includes 16 learning modules demonstrating how to use MATLAB and SIMULINK to solve several key control problems, ranging from robustness analyses to biochemical reactors, biomedical problems to multivariable control.
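
The PID feedback loop at the heart of the coverage above can be sketched as a short simulation. The book works in MATLAB/SIMULINK; this Python version, with illustrative gains and a hypothetical first-order process, shows the control law only:

```python
# PID control of a first-order process tau*x' = -x + u, integrated with
# Euler steps. Gains and the process time constant are illustrative.

def pid_step(error, state, kp, ki, kd, dt):
    """One PID update. state carries (integral, previous_error)."""
    integral, prev_err = state
    integral += error * dt
    derivative = (error - prev_err) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(setpoint=1.0, tau=5.0, dt=0.1, steps=600,
             kp=2.0, ki=0.5, kd=0.1):
    x, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(setpoint - x, state, kp, ki, kd, dt)
        x += dt * (-x + u) / tau     # Euler step of the process model
    return x

final = simulate()   # process output after 60 s of closed-loop control
```

The integral term removes the steady-state offset a proportional-only controller would leave; tuning those three gains well is a large part of what the book teaches.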

Enhance Your Business Applications: Simple Integration of Advanced Data Mining Functions

Today data mining is no longer thought of as a set of stand-alone techniques, far from the business applications, and used only by data mining specialists or statisticians. Integrating data mining with mainstream applications is becoming an important issue for e-business applications. To support this move to applications, data mining is now an extension of the relational databases that database administrators or IT developers use. They use data mining as they would use any other standard relational function that they manipulate. This IBM Redbooks publication positions the new DB2 data mining functions. Part 1 helps business analysts and implementers understand and position these new DB2 data mining functions. Part 2 provides examples showing implementers how to easily and quickly integrate the data mining functions into business applications to enhance them. Part 3 helps database administrators and IT developers configure these functions once, to prepare them for use and integration in any application. Please note that the additional material referenced in the text is not available from IBM.

Mining the Web

Mining the Web: Discovering Knowledge from Hypertext Data is the first book devoted entirely to techniques for producing knowledge from the vast body of unstructured Web data. Building on an initial survey of infrastructural issues—including Web crawling and indexing—Chakrabarti examines low-level machine learning techniques as they relate specifically to the challenges of Web mining. He then devotes the final part of the book to applications that unite infrastructure and analysis to bring machine learning to bear on systematically acquired and stored data. Here the focus is on results: the strengths and weaknesses of these applications, along with their potential as foundations for further progress. From Chakrabarti's work—painstaking, critical, and forward-looking—readers will gain the theoretical and practical understanding they need to contribute to the Web mining effort.

* A comprehensive, critical exploration of statistics-based attempts to make sense of the Web.
* Details the special challenges associated with analyzing unstructured and semi-structured data.
* Looks at how classical Information Retrieval techniques have been modified for use with Web data.
* Focuses on today's dominant learning methods: clustering and classification, hyperlink analysis, and supervised and semi-supervised learning.
* Analyzes current applications for resource discovery and social network analysis.
* An excellent way to introduce students to especially vital applications of data mining and machine learning technology.
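
A standard building block behind the clustering and classification methods discussed is cosine similarity between term-frequency vectors of documents. A minimal sketch (not code from the book, and omitting refinements like tf-idf weighting):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two documents' term-frequency vectors.
    1.0 means identical term distributions, 0.0 means no shared terms."""
    va = Counter(doc_a.lower().split())
    vb = Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

sim = cosine_similarity("web data mining", "mining the web")
```

Clustering algorithms group documents whose pairwise similarity is high; classifiers compare a new document against labeled examples using the same measure.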

Inclusion Breakthrough

Constant, continuing, and cataclysmic change is causing a major crisis within business organizations today. Faced with constantly advancing technology, unpredictable market shifts, intense global competition, and an increasingly independent "free agent" workforce, the only way for an organization to adapt and succeed is to build a "culture of inclusion" that nurtures and draws on the talents of a diverse workforce. Easy to say but hard to do; most organizations are mired in industrial-revolution, static-world business models administered by monocultural, bordering-on-oppressive, "command and control" hierarchies. Organizations at risk include Fortune 500 giants, entrepreneurial start-ups, manufacturing and retail operations, government agencies, not-for-profits, educational institutions, and others. Most organizational change efforts, whether labeled as diversity efforts, re-engineering, right-sizing, or total quality management, are a waste of time, money, and human effort. Most produce more cynicism than results, and they can poison the waters for future change efforts. The Inclusion Breakthrough cuts a path through this potential minefield, offering a proven methodology for strategic organizational change, including models for diagnosing, planning, and implementing inclusion-focused, culture-change strategies tailored to each organization's individual needs. It also describes the key competencies for leading and sustaining a culture of inclusion. Offering real-world results of "before and after" surveys, including anecdotal and statistical reports of organizational change achieved using the methodologies described, The Inclusion Breakthrough presents an overview of current workplace conditions, attitudes, and policies based on interviews, surveys, and focus groups encompassing thousands of people in major organizations. The Inclusion Breakthrough demonstrates why the bottom line must be the central focus of any change strategy, and, more importantly, how to carry that strategy out successfully.

SAS for Linear Models, Fourth Edition

This clear and comprehensive guide provides everything you need for powerful linear model analysis. Using a tutorial approach and plenty of examples, authors Ramon Littell, Walter Stroup, and Rudolf Freund lead you through methods related to analysis of variance with fixed and random effects. You will learn to use the appropriate SAS procedure for most experiment designs (including completely random, randomized blocks, and split plot) as well as factorial treatment designs and repeated measures. SAS for Linear Models, Fourth Edition, also includes analysis of covariance, multivariate linear models, and generalized linear models for non-normal data. Find inside: regression models; balanced ANOVA with both fixed- and random-effects models; unbalanced data with both fixed- and random-effects models; covariance models; generalized linear models; multivariate models; and repeated measures. New in this edition: MIXED and GENMOD procedures, updated examples, new software-related features, and other new material.
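
The core analysis-of-variance computation the book performs with SAS procedures can be sketched by hand for the simplest case, a completely randomized (one-way) design. This is a generic illustration with hypothetical data, not the book's SAS code:

```python
# One-way ANOVA F statistic computed from first principles: ratio of
# between-group to within-group mean squares. Data are hypothetical.

def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA over lists of values."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group (treatment) sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group (error) sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)          # df = k - 1
    ms_within = ss_within / (n - k)            # df = n - k
    return ms_between / ms_within

F = one_way_anova_F([[5.0, 6.0, 7.0], [8.0, 9.0, 10.0]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates that group means differ; SAS procedures automate this and extend it to random effects, unbalanced data, and covariates.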

This book is part of the SAS Press program.

The Boost Graph Library: User Guide and Reference Manual

The Boost Graph Library (BGL) is the first C++ library to apply the principles of generic programming to the construction of the advanced data structures and algorithms used in graph computations. Problems in such diverse areas as Internet packet routing, molecular biology, scientific computing, and telephone network design can be solved by using graph theory. This book presents an in-depth description of the BGL and provides working examples designed to illustrate the application of the BGL to these real-world problems. Written by the BGL developers, The Boost Graph Library: User Guide and Reference Manual gives you all the information you need to take advantage of this powerful new library. Part I is a complete user guide that begins by introducing graph concepts, terminology, and generic graph algorithms. The guide also takes the reader on a tour through the major features of the BGL, all motivated with example problems. Part II is a comprehensive reference manual that provides complete documentation of all BGL concepts, algorithms, and classes. Readers will find coverage of:

* Graph terminology and concepts
* Generic programming techniques in C++
* Shortest-path algorithms for Internet routing
* Network planning problems using the minimum-spanning-tree algorithms
* BGL algorithms with implicitly defined graphs
* BGL interfaces to other graph libraries
* BGL concepts and algorithms
* BGL classes: graph, auxiliary, and adaptor

Groundbreaking in its scope, this book offers the key to unlocking the power of the BGL for the C++ programmer looking to extend the reach of generic programming beyond the Standard Template Library.
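
The shortest-path computation named in the coverage list can be illustrated with a language-neutral sketch of Dijkstra's algorithm. This Python version shows the idea only; it is not the BGL's generic C++ interface, and the example network is hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted digraph.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # skip stale queue entries
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny routing example: going A -> C -> B (cost 3) beats A -> B (cost 4).
net = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
dist = dijkstra(net, "A")
```

The BGL's contribution is making such algorithms generic: the same algorithm template works over any graph type that models the required concepts.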

Virtual Bio-Instrumentation: Biomedical, Clinical, and Healthcare Applications in LabVIEW

Bringing the power of virtual instrumentation to the biomedical community.

* Applications across diverse medical specialties
* Detailed design guides for LabVIEW and BioBench applications
* Hands-on problem-solving throughout the book
* Laboratory, clinical, and healthcare applications
* Numerous VIs with source code, plus several demos, available on the book's web site

Virtual instrumentation allows medical researchers and practitioners to combine traditional diagnostic tools with advanced technologies such as databases, ActiveX, and the Internet. In both laboratory and clinical environments, users can interact with a wealth of disparate systems, facilitating better, faster, and more informed decision making. Virtual Bio-Instrumentation: Biomedical, Clinical, and Healthcare Applications in LabVIEW is the first book of its kind to apply VI technology to the biomedical field. Hands-on problems throughout the book demonstrate immediate practical uses; examples cover a variety of medical specialties; and detailed design instructions give the inside view of LabVIEW and BioBench applications. Both students and practicing professionals will appreciate the practical applications offered for modeling fundamental physiology, advanced systems analysis, medical device development and testing, and even hospital management and clinical engineering scenarios.

Longitudinal Data and SAS

Working with longitudinal data introduces a unique set of challenges. Once you've mastered the art of performing calculations within a single observation of a data set, you're faced with the task of performing calculations or making comparisons between observations. It's easy to look backward in data sets, but how do you look forward and across observations? Ron Cody provides straightforward answers to these and other questions. Longitudinal Data and SAS details useful techniques for conducting operations between observations in a SAS data set. For quick reference, the book is conveniently organized to cover tools, including an introduction to powerful SAS programming techniques for longitudinal data; case studies, including a variety of illuminating examples that use Ron's techniques; and macros, including detailed descriptions of helpful longitudinal data macros. Beginning to intermediate SAS users will appreciate this book's informative, easy-to-comprehend style. And users who frequently process longitudinal data will learn to make the most of their analyses by following Ron's methodologies.
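
The "looking forward and across observations" problem can be sketched outside SAS: within each subject, attach the previous (lag) and next (lead) values to every observation. The data and helper below are hypothetical illustrations, not Ron Cody's macros:

```python
# Sketch of lag/lead computation on longitudinal records, assuming the
# records are already sorted by (subject id, visit). Data are hypothetical.

def with_lag_and_lead(records):
    """records: list of (id, visit, value) tuples sorted by (id, visit).
    Returns dicts carrying each observation plus its within-subject
    lagged and leading values (None at a subject's boundaries)."""
    out = []
    for i, (sid, visit, value) in enumerate(records):
        prev_ok = i > 0 and records[i - 1][0] == sid
        next_ok = i + 1 < len(records) and records[i + 1][0] == sid
        out.append({"id": sid, "visit": visit, "value": value,
                    "lag": records[i - 1][2] if prev_ok else None,
                    "lead": records[i + 1][2] if next_ok else None})
    return out

rows = with_lag_and_lead([("p1", 1, 120), ("p1", 2, 118), ("p2", 1, 130)])
```

The subtlety, in any tool, is the subject boundary: patient p2's first visit must not inherit a "previous" value from patient p1, which is exactly the kind of trap the book's techniques guard against.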

This book is part of the SAS Press program.

Ten Minute Guide to Microsoft® Visio® 2002

Because most people don't have the luxury of sitting down uninterrupted for hours at a time to learn Visio, this 10-Minute Guide focuses on the most often used features, covering them in lessons designed to take 10 minutes or less to complete. In addition, this guide teaches the user how to use Visio without relying on technical jargon, by providing straightforward, easy-to-follow explanations and lists of numbered steps that tell the user which keys to press and which options to select.

Developing Bioinformatics Computer Skills

Bioinformatics--the application of computational and analytical methods to biological problems--is a rapidly evolving scientific discipline. Genome sequencing projects are producing vast amounts of biological data for many different organisms, and, increasingly, storing these data in public databases. Such biological databases are growing exponentially, along with the biological literature. It's impossible for even the most zealous researcher to stay on top of necessary information in the field without the aid of computer-based tools. Bioinformatics is all about building these tools. Developing Bioinformatics Computer Skills is for scientists and students who are learning computational approaches to biology for the first time, as well as for experienced biology researchers who are just starting to use computers to handle their data. The book covers the Unix file system, building tools and databases for bioinformatics, computational approaches to biological problems, an introduction to Perl for bioinformatics, data mining, and data visualization. Written in a clear, engaging style, Developing Bioinformatics Computer Skills will help biologists develop a structured approach to biological data as well as the tools they'll need to analyze the data.
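
A typical first computation of the sort the book teaches (there using Perl) is the GC content of a DNA sequence. A minimal equivalent sketch in Python, with a made-up example sequence:

```python
# GC content: the fraction of G and C bases in a DNA sequence, a classic
# beginner's bioinformatics computation.

def gc_content(seq):
    """Return the fraction of bases that are G or C (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        raise ValueError("empty sequence")
    return sum(base in "GC" for base in seq) / len(seq)

frac = gc_content("ATGCGGCA")
```

The same pattern of reading sequence data, counting features, and reporting a summary underlies many of the tool-building exercises the book describes.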

Say It With Charts: The Executive’s Guide to Visual Communication, 4th Edition

Step-by-step guide to creating compelling, memorable presentations.

A chart that once took ten hours to prepare can now be produced by anyone with ten minutes and a computer keyboard. What hasn't changed, however, are the basics behind creating a powerful visual: what to say, why to say it, and how to say it for the most impact. In Say It With Charts, Fourth Edition, the latest edition of his best-selling presentation guide, Gene Zelazny reveals time-tested tips for preparing effective presentations. Then this presentation guru shows you how to combine those tips with today's hottest technologies for sharper, stronger visuals. Look to this comprehensive presentation encyclopedia for information on:

* How to prepare different types of charts (pie, bar, column, line, or dot) and when to use each
* Lettering size, color choice, appropriate chart types, and more
* Techniques for producing dramatic eVisuals using animation, scanned images, sound, video, and links to pertinent websites