
Probabilistic Machine Learning: Advanced Topics (Adaptive Computation and Machine Learning series)
Original price: $150.00. Current price: $19.99.
✔️ (PDF) • Pages: 1355
An advanced book for researchers and graduate students working in machine learning and statistics who want to learn about deep learning, Bayesian inference, generative models, and decision making under uncertainty.
An advanced counterpart to Probabilistic Machine Learning: An Introduction, this high-level textbook provides researchers and graduate students detailed coverage of cutting-edge topics in machine learning, including deep generative modeling, graphical models, Bayesian inference, reinforcement learning, and causality. This volume puts deep learning into a larger statistical context and unifies approaches based on deep learning with ones based on probabilistic modeling and inference. With contributions from top scientists and domain experts from places such as Google, DeepMind, Amazon, Purdue University, NYU, and the University of Washington, this rigorous book is essential to understanding the vital issues in machine learning.
- Covers generation of high dimensional outputs, such as images, text, and graphs
- Discusses methods for discovering insights about data, based on latent variable models
- Considers training and testing under different distributions
- Explores how to use probabilistic models and inference for causal inference and decision making
- Features online Python code accompaniment
5 reviews for Probabilistic Machine Learning: Advanced Topics (Adaptive Computation and Machine Learning series)
Professor dot biz (verified owner) –
First, Murphy is a superstar both as researcher / developer and teacher and this shines through in all three volumes.
But before you pay north of $200 for the latest two volumes, weigh two related (non-Bayesian) decision criteria: 1. the quality of the work, and 2. your level of experience. The first is a no-brainer: there is simply no better, more up-to-date text and handbook than this one, which combines outstanding teaching technique with a near-perfect balance of math, code, heuristics, and cutting-edge new developments à la generative DRL, Pearlian causal inference, and much more. Where the first volume details Jacobians in feedforward networks for backpropagation, for example, this volume drills down many more layers of both matrix calculus and gradient descent after you resolve the gradient.
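The reviewer's point about Jacobians in backpropagation can be sketched minimally. This is a hedged, illustrative example (pure Python, scalar case), not the book's actual code: the derivative of a tiny feedforward unit y = tanh(w·x + b) with respect to w, obtained by the chain rule and checked against a finite-difference estimate.

```python
import math

# Tiny feedforward unit: y = tanh(w * x + b), scalar case for clarity.
# Backpropagation multiplies local derivatives (Jacobians, here 1x1)
# via the chain rule: dy/dw = dy/dz * dz/dw.

def forward(w, b, x):
    z = w * x + b
    return math.tanh(z)

def grad_w(w, b, x):
    # dy/dz = 1 - tanh(z)^2, and dz/dw = x
    z = w * x + b
    return (1.0 - math.tanh(z) ** 2) * x

# Sanity check: the analytic gradient should agree with a central
# finite-difference estimate to well within the step size.
w, b, x = 0.5, -0.2, 1.3
eps = 1e-6
fd = (forward(w + eps, b, x) - forward(w - eps, b, x)) / (2 * eps)
print(abs(grad_w(w, b, x) - fd) < 1e-6)
```

In the vector case the scalar derivatives become Jacobian matrices and the chain rule becomes a matrix product, which is exactly the "drilling down" into matrix calculus the review refers to.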
But what if you're OK at basic math, have done some regression work in data science, but are relatively new to ML? Here's the balance: this is neither a "here's every step carefully explained" text nor a showboating author-ego piece full of mind-numbing abstraction. The takeaway is that this middle ground opens even this advanced text, though targeted at the grad-school level, to a much wider audience, including those of us rusty on partial differential equations but willing to fill in the blanks on YouTube or Coursera.
The reason is that Kev takes the time to give generous detail on notation, including the RARE step of explaining where working notation is either author-specific or different from "textbook" notation (and notation is changing daily as this field moves so fast).
What if you don't even have a good grasp of linear algebra, pseudocode algorithms, Python, etc.? First, you may want to start with the first volume before tackling this one. Next, the accompanying online resources are wonderful, but they go forward into code rather than backward into basics.
The answer is a question: how willing are you to spend hours on each page checking basics with refresher or new linear-algebra study, or a review of stats and probability? It's like an advanced chess book: if you're new to ML, it has to be studied, not read.
The volume is complete and self-contained, and it goes beyond just generative LLMs, with applications to recognition, imaging, and much more. DO review the TOC!
Finally, as a diagramology guy, I judge the quality of any systems text by the graphics included, and Murphy gets 10 stars here, with not only outstanding figures but matching code so you can generate the models yourself. If you don't have a Python SDK/IDE on your machine, you can also run the code online; and don't forget that MATLAB code (for the most part) also runs on Octave if you're self-studying at home without corporate or university infrastructure.
Bottom line: this volume is a must-have if you want to organize your study or practice around the most current topics rather than spend five times as much on verticals in each domain. Don't be put off by the quantitative focus even if you are strictly dealing with generative NLP; when you get to the training step of ML, it's 80% vectors, even with linguistic causal inference. And the causal side, Judea Pearl notwithstanding, is still in its infancy. A very advanced search we just tested with Google JAX answered the question "What made Will Durant ignore Alfred Whitehead in favor of Russell in quantum causality even though Russell was Whitehead's student?" with basketball links and pimple-medication ads! We've still got a ways to go!!
Ranajit Gangopadhyay (verified owner) –
I have all three books from Kevin Murphy. All of them are some of the best books I have read.
Every time I read the book, I learn something new. As you read through the topics, you first get to understand the basics, but as you work through the paragraphs there are lots of references back to other sections and out to articles.
The first time around I read just the chapter; the second time I read the chapter along with the sections it references within the book; and then I read through the papers the paragraphs referenced. Each time, I think I understood the topic a little more deeply. It reads like having a teacher on my bookshelf.
EgLuPer (verified owner) –
Excellent book; Volume I together with Volume II (Advanced).
Loïc & Marie (verified owner) –
Best book to dive into theory in Machine Learning.
Michal Grochol (verified owner) –
It is a daunting task to read this book. Although it is not theoretical physics and requires only knowledge of probability theory and linear algebra, it is not easy to follow: there are many similar concepts, and one easily gets lost in the number of abbreviations. There are many typos, and some chapters, written by various people, are sometimes hard to follow. My takeaway from this book is that ML is just advanced mathematics and computer science, and that there is a lot of room for improvement and development.