ACCELERATE MODEL TRAINING WITH PYTORCH 2.X: build more accurate models by boosting the model training process
Citations
APA Citation (style guide)
Alves, M. M. (2024). Accelerate model training with PyTorch 2.X: Build more accurate models by boosting the model training process (1st ed.). Packt Publishing Ltd.

Chicago / Turabian - Author Date Citation, 17th Edition (style guide)
Alves, Maicon Melo. 2024. Accelerate Model Training with PyTorch 2.X: Build More Accurate Models by Boosting the Model Training Process. Birmingham, UK: Packt Publishing Ltd.

Chicago / Turabian - Humanities (Notes and Bibliography) Citation, 17th Edition (style guide)
Alves, Maicon Melo. Accelerate Model Training with PyTorch 2.X: Build More Accurate Models by Boosting the Model Training Process. Birmingham, UK: Packt Publishing Ltd, 2024.

Harvard Citation (style guide)
Alves, M. M. (2024) Accelerate model training with PyTorch 2.X: build more accurate models by boosting the model training process. 1st edn. Birmingham, UK: Packt Publishing Ltd.

MLA Citation, 9th Edition (style guide)
Alves, Maicon Melo. Accelerate Model Training with PyTorch 2.X: Build More Accurate Models by Boosting the Model Training Process. 1st ed., Packt Publishing Ltd., 2024.
Staff View
Grouping Information
| Field | Value |
| --- | --- |
| Grouped Work ID | 10deb14f-b8e1-6c3b-378f-5ec1fd561456-eng |
| Full title | accelerate model training with pytorch 2 x build more accurate models by boosting the model training process |
| Author | alves maicon melo |
| Grouping Category | book |
| Last Update | 2025-01-24 12:33:29 PM |
| Last Indexed | 2025-05-03 03:03:03 AM |
Book Cover Information
| Field | Value |
| --- | --- |
| Image Source | default |
| First Loaded | Feb 1, 2025 |
| Last Used | Feb 1, 2025 |
Marc Record
| Field | Value |
| --- | --- |
| First Detected | Dec 16, 2024 11:30:05 PM |
| Last File Modification Time | Dec 17, 2024 08:29:15 AM |
| Suppressed | Record had no items |
MARC Record
| Tag | Ind1 | Ind2 | Content |
| --- | --- | --- | --- |
| LEADER | | | 07327cam a22004697a 4500 |
| 001 | | | on1429720163 |
| 003 | | | OCoLC |
| 005 | | | 20241217082703.0 |
| 006 | | | m o d |
| 007 | | | cr \|n\|\|\|\|\|\|\|\|\| |
| 008 | | | 240414s2024 enk o 000 0 eng |
| 019 | | | $a 1429723284 |
| 020 | | | $a 9781805121916 $q (electronic bk.) |
| 020 | | | $a 180512191X $q (electronic bk.) |
| 035 | | | $a (OCoLC)1429720163 $z (OCoLC)1429723284 |
| 037 | | | $a 9781805120100 $b O'Reilly Media |
| 040 | | | $a YDX $b eng $c YDX $d OCLCO $d ORMDA $d OCLCO $d EBLCP |
| 049 | | | $a MAIN |
| 050 | | 4 | $a QA76.73.P98 |
| 082 | 0 | 4 | $a 006.3/2 $2 23/eng/20240506 |
| 100 | 1 | | $a Alves, Maicon Melo, $e author. |
| 245 | 1 | 0 | $a ACCELERATE MODEL TRAINING WITH PYTORCH 2.X $h [electronic resource] : $b build more accurate models by boosting the model training process / $c Maicon Melo Alves ; foreword by Prof. Lúcia Maria de Assumpação Drummond Titular. |
| 250 | | | $a 1st edition. |
| 260 | | | $a Birmingham, UK : $b Packt Publishing Ltd., $c 2024. |
| 300 | | | $a 1 online resource |
| 505 | 0 | | $a Cover -- Title page -- Copyright and credits -- Foreword -- Contributors -- Table of Contents -- Preface -- Part 1: Paving the Way -- Chapter 1: Deconstructing the Training Process -- Technical requirements -- Remembering the training process -- Dataset -- The training algorithm -- Understanding the computational burden of the model training phase -- Hyperparameters -- Operations -- Parameters -- Quiz time! -- Summary -- Chapter 2: Training Models Faster -- Technical requirements -- What options do we have? -- Modifying the software stack -- Increasing computing resources |
| 505 | 8 | | $a Modifying the application layer -- What can we change in the application layer? -- Getting hands-on -- What if we change the batch size? -- Modifying the environment layer -- What can we change in the environment layer? -- Getting hands-on -- Quiz time! -- Summary -- Part 2: Going Faster -- Chapter 3: Compiling the Model -- Technical requirements -- What do you mean by compiling? -- Execution modes -- Model compiling -- Using the Compile API -- Basic usage -- Give me a real fight -- training a heavier model! -- How does the Compile API work under the hood? -- Compiling workflow and components |
| 505 | 8 | | $a Backends -- Quiz time! -- Summary -- Chapter 4: Using Specialized Libraries -- Technical requirements -- Multithreading with OpenMP -- What is multithreading? -- Using and configuring OpenMP -- Using and configuring Intel OpenMP -- Optimizing Intel CPU with IPEX -- Using IPEX -- How does IPEX work under the hood? -- Quiz time! -- Summary -- Chapter 5: Building an Efficient Data Pipeline -- Technical requirements -- Why do we need an efficient data pipeline? -- What is a data pipeline? -- How to build a data pipeline -- Data pipeline bottleneck -- Accelerating data loading |
| 505 | 8 | | $a Optimizing a data transfer to the GPU -- Configuring data pipeline workers -- Reaping the rewards -- Quiz time! -- Summary -- Chapter 6: Simplifying the Model -- Technical requirements -- Knowing the model simplifying process -- Why simplify a model? (reason) -- How to simplify a model? (process) -- When do we simplify a model? (moment) -- Using Microsoft NNI to simplify a model -- Overview of NNI -- NNI in action! -- Quiz time! -- Summary -- Chapter 7: Adopting Mixed Precision -- Technical requirements -- Remembering numeric precision -- How do computers represent numbers? |
| 505 | 8 | | $a Floating-point representation -- Novel data types -- A summary, please! -- Understanding the mixed precision strategy -- What is mixed precision? -- Why use mixed precision? -- How to use mixed precision -- How about Tensor Cores? -- Enabling AMP -- Activating AMP on GPU -- AMP, show us what you are capable of! -- Quiz time! -- Summary -- Part 3: Going Distributed -- Chapter 8: Distributed Training at a Glance -- Technical requirements -- A first look at distributed training -- When do we need to distribute the training process? -- Where do we execute distributed training? |
| 520 | | | $a Dramatically accelerate the building process of complex models using PyTorch to extract the best performance from any computing environment. Key Features: Reduce the model-building time by applying optimization techniques and approaches. Harness the computing power of multiple devices and machines to boost the training process. Focus on model quality by quickly evaluating different model configurations. Purchase of the print or Kindle book includes a free PDF eBook. Book Description: Penned by an expert in High-Performance Computing (HPC) with over 25 years of experience, this book is your guide to enhancing the performance of model training using PyTorch, one of the most widely adopted machine learning frameworks. You'll start by understanding how model complexity impacts training time before discovering distinct levels of performance tuning to expedite the training process. You'll also learn how to use a new PyTorch feature to compile the model and train it faster, alongside learning how to benefit from specialized libraries to optimize the training process on the CPU. As you progress, you'll gain insights into building an efficient data pipeline to keep accelerators occupied during the entire training execution and explore strategies for reducing model complexity and adopting mixed precision to minimize computing time and memory consumption. The book will get you acquainted with distributed training and show you how to use PyTorch to harness the computing power of multicore systems and multi-GPU environments available on single or multiple machines. By the end of this book, you'll be equipped with a suite of techniques, approaches, and strategies to speed up training, so you can focus on what really matters--building stunning models! What you will learn: Compile the model to train it faster. Use specialized libraries to optimize the training on the CPU. Build a data pipeline to boost GPU execution. Simplify the model through pruning and compression techniques. Adopt automatic mixed precision without penalizing the model's accuracy. Distribute the training step across multiple machines and devices. Who this book is for: This book is for intermediate-level data scientists who want to learn how to leverage PyTorch to speed up the training process of their machine learning models by employing a set of optimization strategies and techniques. To make the most of this book, familiarity with basic concepts of machine learning, PyTorch, and Python is essential. However, there is no obligation to have a prior understanding of distributed computing, accelerators, or multicore processors. |
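The description above names two of the book's headline techniques: compiling the model with the PyTorch 2.x Compile API and adopting automatic mixed precision. As a minimal illustrative sketch (not taken from the book; the tiny model, the `"eager"` debugging backend, and the CPU `bfloat16` autocast are assumptions chosen so the snippet runs without a GPU or a compiler toolchain):

```python
import torch
import torch.nn as nn

# A tiny model standing in for a real network (illustrative only).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# torch.compile (PyTorch 2.x) wraps the model in an optimized callable.
# The default backend is "inductor"; "eager" is used here only so the
# sketch runs on any machine without extra toolchains.
compiled = torch.compile(model, backend="eager")

x = torch.randn(4, 8)

# Automatic mixed precision: inside autocast, eligible ops (e.g. linear
# layers) run in a lower-precision dtype to save time and memory.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = compiled(x)

print(out.shape)  # torch.Size([4, 2])
```

In real training, one would typically keep the default `inductor` backend, run autocast on `device_type="cuda"` with a `torch.cuda.amp.GradScaler`, and wrap the full forward/backward loop; this sketch only shows the two API entry points the description refers to.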
| 590 | | | $a O'Reilly $b O'Reilly Online Learning: Academic/Public Library Edition |
| 650 | | 0 | $a Neural networks (Computer science) $9 65536 |
| 650 | | 0 | $a Machine learning. $9 46043 |
| 650 | | 0 | $a Python (Computer program language) $9 71333 |
| 700 | 1 | | $a Titular, Lúcia Maria de Assumpação Drummond, $e writer of foreword. |
| 776 | 0 | 8 | $i Print version: $z 1805120107 $z 9781805120100 $w (OCoLC)1427657298 |
| 856 | 4 | 0 | $u https://library.access.arlingtonva.us/login?url=https://learning.oreilly.com/library/view/~/9781805120100/?ar $x O'Reilly $z eBook |
| 938 | | | $a YBP Library Services $b YANK $n 20985773 |
| 938 | | | $a ProQuest Ebook Central $b EBLB $n EBL31267492 |
| 994 | | | $a 92 $b VIA |
| 999 | | | $c 360794 $d 360794 |