# Tuning HMC

## Goals

If you care about the quality of the samples you obtain, you must tune the sampler. It’s simply unavoidable.

When the step size `epsilon` is too small, the system is too conservative
and doesn't explore parameter space rapidly. On the other hand, when
`epsilon` is too large, the trajectory is unstable and nearly all of the
steps get rejected during the Metropolis step.

Theoretical analysis indicates that the optimal balance of these two factors comes when the acceptance probability is \(\sim 0.651\). This isn’t a strict number that you have to nail, but getting the acceptance probability within the \(0.4 - 0.9\) range is a good target.
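As a concrete illustration of where this acceptance rate comes from, here is a minimal sketch of the Metropolis correction applied at the end of each trajectory. The energy errors below are synthetic stand-ins for the change in total energy \(\Delta H\) that a real leapfrog integrator would produce:

```python
import math
import random

def metropolis_accept(delta_h, rng=random):
    """Accept the proposal with probability min(1, exp(-delta_h)),
    where delta_h = H_new - H_old is the change in total energy."""
    return delta_h <= 0 or rng.random() < math.exp(-delta_h)

# Measure the empirical acceptance rate over many trajectories.
# These energy errors are synthetic; a real sampler computes them
# from the Hamiltonian at the start and end of each trajectory.
random.seed(0)
energy_errors = [random.gauss(0.2, 0.5) for _ in range(10_000)]
rate = sum(metropolis_accept(dh) for dh in energy_errors) / 10_000
print(f"acceptance rate: {rate:.3f}")
```

Larger typical energy errors push this rate down, smaller ones push it toward 1, which is exactly the tradeoff that `epsilon` controls.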

## Practical

In practice, some simple advice is to set the number of steps per sample to
\(\sim 10\) (`n_steps = 10`), and then adjust `epsilon` to tune the
acceptance rate to an acceptable value.
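A crude version of this tuning loop can be sketched as follows. Here `run_hmc` is a hypothetical callback, not part of any particular library: it is assumed to run the sampler with the given settings and return the observed acceptance rate.

```python
def tune_epsilon(run_hmc, epsilon=0.1, n_steps=10, target=0.651,
                 n_tune_samples=500, max_rounds=20, tol=0.05):
    """Multiplicatively adjust epsilon until the acceptance rate is
    near the target.

    `run_hmc(epsilon, n_steps, n_samples)` is a user-supplied function
    (assumed, not defined here) returning the observed acceptance rate.
    """
    for _ in range(max_rounds):
        rate = run_hmc(epsilon, n_steps, n_tune_samples)
        if abs(rate - target) < tol:
            break
        # Acceptance too high -> epsilon can be larger; too low -> smaller.
        epsilon *= 1.5 if rate > target else 0.7
    return epsilon
```

Samples drawn during this tuning phase should be discarded, since changing `epsilon` mid-run breaks detailed balance.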

Then, use autocorrelation time functions to estimate the correlation time of the sampler. The integrated autocorrelation time determines the statistical errors in Monte Carlo measurements of \(\langle f \rangle\), which converge like \(\sim 1/\sqrt{\frac{n_{samples}}{\tau_{int}}}\). The “effective” number of independent samples is thus reduced by a factor of \(\tau_{int}\). (Note that depending on the definition of \(\tau_{int}\), there may be an extra factor of 2, but this is already accounted for in our implementation.)
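One common way to estimate \(\tau_{int}\) is an FFT-based autocorrelation function combined with Sokal’s adaptive window: stop summing at the first lag \(M\) with \(M \ge c\,\tau_{int}(M)\), with \(c \approx 5\). A sketch, assuming NumPy is available and folding the factor of 2 into the definition as in the text:

```python
import numpy as np

def integrated_autocorr_time(x, c=5):
    """Estimate tau_int for a 1-D chain via the FFT-based
    autocorrelation function and Sokal's adaptive window."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Autocorrelation via FFT, zero-padded to avoid circular wrap-around.
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conjugate(f))[:n]
    acf /= acf[0]
    # tau_int(M) = 1 + 2 * sum_{t=1}^{M} rho(t); the factor of 2 is
    # included here, matching the convention in the text.
    taus = 1.0 + 2.0 * np.cumsum(acf[1:])
    for m, tau in enumerate(taus, start=1):
        if m >= c * tau:
            return tau
    return taus[-1]

# Effective number of independent samples: n_eff = n_samples / tau_int
```

For uncorrelated samples this returns \(\tau_{int} \approx 1\); for a strongly correlated chain it can be much larger, shrinking \(n_{eff}\) accordingly.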

To achieve a reasonably small statistical error it is necessary to make a run of length \(\approx 1000\tau_{int}\).

See Sokal’s notes on MCMC and sample estimators for autocorrelation times for more details.