We aim to accelerate the restarted generalized minimal residual (GMRES) method for the solution of linear systems by combining two types of techniques. On the one hand, mixed precision GMRES algorithms, which use lower precision in certain steps of the inner cycles, offer significant reductions in computational and memory costs. On the other hand, augmented GMRES algorithms, which recycle spectral information between restarts by incorporating approximate eigenvectors into the Krylov basis, can significantly speed up convergence. In this work, we investigate how to combine mixed precision and augmentation, so as to compound the reduced per-iteration cost of the former with the reduced iteration count of the latter. We first explore the GMRES with deflated restarting (GMRES-DR) variant, which we show to present limited mixed precision opportunities. Indeed, GMRES-DR can exploit a preconditioner constructed in low precision, but requires a flexible paradigm to also apply it in low precision; moreover, the matrix–vector product and orthonormalization steps must both be kept in high precision, as otherwise the method stagnates at low accuracy. We explain that this is because GMRES-DR relies on algebraic simplifications that are valid in exact arithmetic but fail to hold in finite precision. This observation motivates us to investigate another augmented GMRES variant (AugGMRES) that avoids these simplifications. We show that AugGMRES is much more resilient to the use of low precision, does not require a flexible paradigm, and successfully converges to high accuracy even when low precision is used for all inner operations. Our experimental results on real-world sparse matrices demonstrate that mixed precision AugGMRES is a robust and efficient solver, offering significant benefits for scientific computing and engineering applications.
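To make the setting concrete, the following is a minimal sketch of restarted GMRES(m) in which the inner Arnoldi operations (the matrix–vector products and the modified Gram–Schmidt orthonormalization) run in a lower precision, while the restart residual and the solution update are recomputed in double precision. This is only an illustration of the mixed precision restarted GMRES idea, not the augmented algorithm studied in this work; all names and parameters are ours.

```python
import numpy as np

def gmres_restarted(A, b, m=20, tol=1e-10, maxit=200, inner_dtype=np.float32):
    """Illustrative mixed precision restarted GMRES(m).

    Inner Arnoldi steps (matvec, modified Gram-Schmidt) use `inner_dtype`;
    the residual at each restart and the least-squares solve are in float64,
    so restarting acts like iterative refinement on the solution.
    """
    n = b.size
    x = np.zeros(n)                      # double precision iterate
    A_lo = A.astype(inner_dtype)         # low precision copy for inner matvecs
    for _ in range(maxit):
        r = b - A @ x                    # high precision residual at restart
        beta = np.linalg.norm(r)
        if beta < tol * np.linalg.norm(b):
            break
        V = np.zeros((n, m + 1), dtype=inner_dtype)   # low precision basis
        H = np.zeros((m + 1, m), dtype=inner_dtype)   # Hessenberg matrix
        V[:, 0] = r / beta
        for j in range(m):
            w = A_lo @ V[:, j]           # low precision matrix-vector product
            for i in range(j + 1):       # modified Gram-Schmidt, low precision
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] == 0:         # happy breakdown
                break
            V[:, j + 1] = w / H[j + 1, j]
        k = j + 1
        # Solve min || beta*e1 - H y || in double precision
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k].astype(np.float64), e1, rcond=None)
        x = x + V[:, :k].astype(np.float64) @ y       # double precision update
    return x
```

Because the correction from each low-precision cycle is added to a double precision iterate and the residual is refreshed in double precision, such a scheme can reach accuracies well beyond the unit roundoff of `inner_dtype` on well-conditioned problems, which is the kind of behavior the abstract refers to for inner operations in low precision.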