The H-Matrix algebra offers numerous advantages for solving linear systems over more traditional direct or iterative solvers, both in numerical stability and in computation time. It is limited, however, in the size of the problems it can handle: whether the cost is spatial (excessive memory occupancy) or temporal (computations bound by memory bandwidth), reducing the memory size of the matrices involved is a major challenge. Smaller matrices also ease disk storage requirements for "out-of-core" computations and reduce communication volume, a critical factor in particular for computations on distributed-memory architectures.

In this perspective, this thesis focuses on the floating-point compression of the individual blocks of an H-Matrix in an industrial context. The goal is to reduce the memory footprint during computation, yielding gains in both space and time while keeping the loss of precision under control. To this end, several arithmetic compression schemes are considered and compared to determine which offer the best compression rates at a given precision. Initial tests are performed on a sequential version of the H-Matrix library; the compression is then integrated into a parallel version of the code. The objective is thus to address larger problems (allowing finer and more accurate modeling), or problems of the same size at reduced spatial and temporal cost.
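To illustrate the kind of lossy floating-point compression with controlled precision loss discussed above, the following is a minimal sketch, not the scheme used in the library: it truncates low-order mantissa bits of a dense block (a stand-in for an H-Matrix block), which bounds the relative error by the number of mantissa bits kept and leaves trailing zeros that a generic entropy coder can then compress well. The function name, block size, and bit count are all illustrative choices.

```python
import numpy as np

def truncate_mantissa(a, kept_bits):
    """Zero the low-order mantissa bits of a float64 array.

    A float64 has a 52-bit mantissa; keeping `kept_bits` of it bounds
    the relative error of each entry by 2**-kept_bits. The zeroed tail
    makes the bit stream highly compressible by a generic coder.
    """
    raw = a.view(np.uint64)
    # Mask that keeps sign, exponent, and the top `kept_bits` mantissa bits.
    mask = np.uint64(~((1 << (52 - kept_bits)) - 1) & 0xFFFFFFFFFFFFFFFF)
    return (raw & mask).view(np.float64)

rng = np.random.default_rng(0)
block = rng.standard_normal((64, 64))   # stand-in for one H-Matrix block
approx = truncate_mantissa(block, kept_bits=20)

# Entry-wise relative error is bounded by 2**-kept_bits.
rel_err = np.max(np.abs(approx - block) / np.abs(block))
print(rel_err)
```

Trading `kept_bits` against the target accuracy is exactly the kind of compromise the compression-rate-versus-precision comparison addresses: fewer kept bits mean better compression but larger perturbation of the block.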