
Optimization Algorithms for Distributed Machine Learning

By: Gauri Joshi

Paperback | 26 November 2022


This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, in which the task of computing gradients is divided across several worker nodes. The author then discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and its runtime per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime.
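The synchronous SGD setup described above can be sketched in a few lines. This is a minimal illustration, not the book's own code: it assumes a simulated parameter-server setting, a toy least-squares objective, and hypothetical names (`synchronous_sgd`, `num_workers`); each simulated worker computes a mini-batch gradient on its data shard, and the server averages the gradients before taking one SGD step.

```python
import numpy as np

def synchronous_sgd(X, y, num_workers=4, lr=0.1, iters=100, seed=0):
    """Toy synchronous SGD for least-squares regression (illustrative sketch).

    Each of the `num_workers` simulated workers holds one shard of the
    data, computes a stochastic gradient on a mini-batch from its shard,
    and the server averages all worker gradients before updating."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    # Partition the sample indices across workers (the data shards).
    shards = np.array_split(np.arange(len(y)), num_workers)
    for _ in range(iters):
        grads = []
        for idx in shards:
            # Each worker samples a mini-batch from its own shard.
            batch = rng.choice(idx, size=min(8, len(idx)), replace=False)
            Xb, yb = X[batch], y[batch]
            # Gradient of the mean squared error on this mini-batch.
            grads.append(2.0 * Xb.T @ (Xb @ w - yb) / len(batch))
        # Synchronization barrier: average gradients from all workers,
        # then take a single SGD step with the averaged gradient.
        w -= lr * np.mean(grads, axis=0)
    return w

# Toy problem: recover w_true from noisy linear measurements.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.01 * rng.normal(size=200)
w_hat = synchronous_sgd(X, y)
```

Because every iteration waits for all workers at the averaging step, the slowest worker dictates the runtime per iteration; the asynchronous, local-update, quantized, and decentralized variants discussed in the book each relax this barrier in a different way, trading some error for faster iterations.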
