For this purpose, MIT's Eulerian Video Magnification breaks down when you have both color changes and large motions at once (it is better suited to magnifying small motions, or subtle color changes in unmoving patches). thearn at NASA Glenn has developed another approach that suffers from the same problem (https://github.com/thearn/webcam-pulse-detector).
We developed our own solution, involving (among other things) pixel-accurate facial feature detection and filtering methods that are not subject to the FFT's regularity assumption.
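For context, the baseline FFT approach that tools like webcam-pulse-detector use can be sketched in a few lines: average the green channel over a face ROI frame by frame, then find the dominant frequency in the heart-rate band. This is a hypothetical minimal sketch (not our algorithm, and not thearn's actual code); `estimate_bpm` and its parameters are made up for illustration. Note it inherits the regularity assumption mentioned above: the FFT treats the samples as uniformly spaced in time, which a webcam's variable frame rate does not guarantee.

```python
import numpy as np

def estimate_bpm(green_means, fps, lo=0.75, hi=4.0):
    """Estimate pulse (BPM) from a time series of mean green-channel
    intensities over a face ROI, via the classic FFT approach.
    Assumes uniformly spaced samples at a steady frame rate."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                       # remove DC offset
    x = x * np.hanning(len(x))             # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)   # plausible heart rates: 45-240 BPM
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic check: a 1.2 Hz (72 BPM) oscillation sampled at 30 fps
rng = np.random.default_rng(0)
fps, seconds = 30.0, 10
t = np.arange(int(fps * seconds)) / fps
signal = 128 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))
print(round(estimate_bpm(signal, fps)))  # 72
```

The fragility is visible even in this toy: drop or delay a few frames and the uniform-sampling assumption breaks, smearing the spectral peak, which is one reason irregular-sampling-tolerant filtering is attractive for real webcam footage.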
You can try the algorithm yourself if you scroll down, then double-check it against your webcam and your own pulse ;-). Note that motion compensation is disabled for this demo (the pixel-accurate facial feature and skin detection algorithms would overwhelm the server if 2-3 people tried at the same time), so it only works well if you hold still for a few seconds.
Surprising that there aren't more, given that heart disease is literally the #1 cause of death.