Scalable and Distributed Machine Learning and Deep Learning Patterns

Release Date: August 2023 | Copyright: © 2023 | Pages: 286
DOI: 10.4018/978-1-6684-9804-0
ISBN13: 9781668498040 | ISBN10: 1668498049 | EISBN13: 9781668498057
Hardcover:
Available
$270.00
Benefits
  • Printed-On-Demand (POD)
  • Usually ships within one day of order
E-Book:
Available
$270.00
Benefits
  • Multi-user license (no added fee)
  • Immediate access after purchase
  • No DRM
  • PDF download
  • Receive a 10% discount on eBooks
Hardcover +
E-Book:
Available
$325.00
Benefits
  • Printed-On-Demand (POD)
  • Usually ships within one day of order
  • Multi-user license (no added fee)
  • Immediate access after purchase
  • No DRM
  • PDF download
OnDemand:
(Individual Chapters)
Available
$37.50
Benefits
  • Purchase individual chapters from this book
  • Immediate PDF download after purchase or access through your personal library
Effective immediately, IGI Global has discontinued softcover book production. The softcover option is no longer available for direct purchase.
Description & Coverage
Description:

Scalable and Distributed Machine Learning and Deep Learning Patterns is a practical guide that provides insights into how distributed machine learning can speed up the training and serving of machine learning models, reduce time and costs, and address bottlenecks in the system during concurrent model training and inference. The book covers various topics related to distributed machine learning such as data parallelism, model parallelism, and hybrid parallelism. Readers will learn about cutting-edge parallel techniques for serving and training models such as parameter server and all-reduce, pipeline input, intra-layer model parallelism, and a hybrid of data and model parallelism. The book is suitable for machine learning professionals, researchers, and students who want to learn about distributed machine learning techniques and apply them to their work.
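Of the techniques the description names, synchronous data parallelism with all-reduce is perhaps the simplest to picture: each worker computes gradients on its own data shard, the gradients are averaged across workers, and every replica applies the identical update. The following is a minimal single-process sketch of that idea; the worker gradients, learning rate, and helper names are illustrative assumptions, not code from the book.

```python
# Toy sketch of data parallelism with all-reduce gradient averaging.
# In a real system each list of gradients would live on a separate
# worker and all_reduce_mean would be a collective (e.g. ring all-reduce).

def all_reduce_mean(worker_grads):
    """Average per-worker gradients element-wise (the all-reduce result)."""
    n_workers = len(worker_grads)
    summed = [sum(g) for g in zip(*worker_grads)]
    return [s / n_workers for s in summed]

def sgd_step(params, grads, lr=0.1):
    """Apply one SGD update using the synchronized gradients."""
    return [p - lr * g for p, g in zip(params, grads)]

# Each worker computes gradients on its own shard of the global batch...
worker_grads = [[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]]
# ...then all-reduce synchronizes them before every worker applies the
# same update, keeping all model replicas identical after each step.
avg = all_reduce_mean(worker_grads)   # [2.0, 2.0]
params = sgd_step([0.5, 0.5], avg)    # same result on every replica
```

Because every replica sees the averaged gradient, training is mathematically equivalent to one worker processing the full global batch, which is why global batch size and learning-rate adjustment appear among the book's topics.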

This book is an essential resource for advancing knowledge and skills in artificial intelligence, deep learning, and high-performance computing. It is suitable for computer, electronics, and electrical engineering courses focusing on artificial intelligence, parallel computing, high-performance computing, and machine learning and its applications. Whether you are a professional, researcher, or student working on machine and deep learning applications, this book provides a comprehensive guide to building distributed machine learning systems, including multi-node systems, drawing on Python development experience. By the end of the book, readers will have the knowledge and skills necessary to construct and deploy a distributed data processing pipeline for machine learning model training and inference while saving time and costs.
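Another pattern the book covers, pipeline model parallelism, splits a model into stages placed on different devices and feeds it micro-batches so that the stages work concurrently. The toy, single-process sketch below only illustrates the scheduling order; the stage functions and batch values are illustrative assumptions.

```python
# Toy sketch of pipeline model parallelism: a model split into two
# stages, fed micro-batches. In a real pipeline each stage would run
# on its own device, overlapping work across micro-batches.

def stage1(x):
    return x * 2          # first half of the model (e.g. early layers)

def stage2(x):
    return x + 1          # second half (e.g. later layers and head)

def pipeline_forward(micro_batches):
    """Push micro-batches through both stages in pipeline order."""
    outputs = []
    in_flight = None      # activation handed off from stage 1 to stage 2
    for mb in micro_batches + [None]:  # trailing None drains the pipeline
        if in_flight is not None:
            outputs.append(stage2(in_flight))          # stage 2 consumes
        in_flight = stage1(mb) if mb is not None else None  # stage 1 produces
    return outputs

print(pipeline_forward([1, 2, 3]))  # [3, 5, 7]
```

Splitting the batch into micro-batches is what keeps both stages busy at once; with a single large batch, stage 2 would sit idle while stage 1 computes, which is the "pipeline bubble" trade-off discussed under the pros and cons of pipeline parallelism.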

Coverage:

The many academic areas covered in this publication include, but are not limited to:

  • Advantages of Pipeline Parallelism
  • Autoencoders and RBMs
  • Convolutional Neural Networks
  • Data Parallelism
  • Deep Feedforward Networks
  • Disadvantages of Pipeline Parallelism
  • Distributed Learning
  • Generative Adversarial Network
  • Global Batch Size
  • High-Level Bits of Data Parallelism
  • Hybrid of Data and Model Parallelism
  • Hyperparameter Tuning
  • Layer Split
  • Learning Rate Adjustment
  • Machine Learning With Spark
  • Model Parallelism
  • Model Synchronization
  • Model Synchronization Schemes
  • Notes on Intra-Layer Model Parallelism
  • Parameter Server and All-Reduce
  • Pipeline Input
  • Pros and Cons of Pipeline Parallelism
  • Recurrent Neural Networks
  • Stochastic Gradient Descent
  • Training Feedforward Networks
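One of the topics listed above, the parameter-server scheme, can be contrasted with all-reduce: instead of a collective among peers, a central server holds the parameters, workers push gradients to it, and workers pull fresh parameters before each step. The sketch below is a minimal illustration under those assumptions; the class and method names are not taken from the book.

```python
# Toy sketch of the parameter-server synchronization scheme.

class ParameterServer:
    def __init__(self, params):
        self.params = list(params)

    def push(self, grads, lr=0.1):
        # Workers push gradients; the server applies the update centrally.
        self.params = [p - lr * g for p, g in zip(self.params, grads)]

    def pull(self):
        # Workers pull the latest parameters before their next step.
        return list(self.params)

ps = ParameterServer([1.0, 1.0])
for worker_grads in ([2.0, 0.0], [0.0, 2.0]):  # two workers pushing in turn
    ps.push(worker_grads)
print(ps.pull())  # [0.8, 0.8]
```

The central server simplifies synchronization but can become a communication bottleneck as worker count grows, which is one reason all-reduce-style schemes are also treated in the book.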
Editor/Author Biographies
J. Joshua Thomas received his PhD from Universiti Sains Malaysia (USM), School of Computer Sciences, in 2015. He worked as a research assistant at the Artificial Intelligence Lab at Universiti Sains Malaysia. His research interests include scheduling algorithms, machine learning algorithms, data analytics, deep learning, visual analytics, and chemoinformatics. Dr. Thomas has authored several publications in leading international conferences and journals, including IEEE, ICONIP, IVIC, IV, COMPSE, and ICO. He has secured external, internal, and short-term research grants as well as industry collaborative projects. He was invited as a plenary speaker at IAIM2019 and has conducted workshops (IVIC19) at international conferences. He is an Associate Editor for the International Journal of Energy Optimization and Engineering (IJEOE) and has been invited as a guest editor for JVLC (Elsevier), IJDSA (Springer), IJCC (IGI Global), and IJIRR (Inderscience).
S. Harini serves as an Associate Professor in Distributive Architecture and Parallel Systems at the Vellore Institute of Technology.
V. Pattabiraman obtained his Bachelor's degree from Madras University and his Master's degree from Bharathidasan University, and completed his PhD at Bharathiar University, India. He has more than 16 years of professional experience at various prestigious institutions and has published more than 30 papers in national and international peer-reviewed journals and conferences. He has visited several countries, including China, Singapore, Malaysia, Thailand, and South Africa, to present his research contributions and deliver keynote addresses. He is currently an Associate Professor and Program Chair for the Master's Programme at VIT University, Chennai Campus, India. His teaching and research expertise covers a wide range of subject areas, including data structures, knowledge discovery and data mining, database technologies, big data analytics, and network and information security.
Archiving
All of IGI Global's content is archived via the CLOCKSS and LOCKSS initiatives. Additionally, all IGI Global published content is available in IGI Global's InfoSci® platform.