Hybrid precoding is expected to play a key role in realizing massive multiple-input multiple-output (MIMO) transmitters with controllable cost, size, and power. MIMO transmitters must frequently adapt their precoding patterns as channel conditions vary. In the hybrid setting, such adaptation often involves lengthy optimization, which may degrade network performance. In this work, we employ the emerging learn-to-optimize paradigm to enable rapid optimization of hybrid precoders. In particular, we leverage data to learn an iteration-dependent hyperparameter schedule for projected gradient optimization, thus preserving the fully interpretable flow of the optimizer while improving its convergence speed. Numerical results demonstrate that our approach converges six to twelve times faster than conventional optimization with shared hyperparameters, while achieving similar or even improved sum-rate performance.
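The core idea above can be illustrated with a minimal sketch: projected gradient ascent on a toy sum-rate objective, where the analog precoder is constrained to unit-modulus (phase-shifter) entries and each iteration uses its own step size. All dimensions, the channel model, and the step-size values below are hypothetical; in the described approach the per-iteration step sizes would be learned from data rather than hand-picked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration): receive/transmit antennas, RF chains.
N_r, N_t, N_rf = 4, 8, 2
snr = 1.0
# Random i.i.d. Rayleigh channel (illustrative, not the paper's setup).
H = (rng.standard_normal((N_r, N_t)) + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)

def rate(F):
    """Achievable rate log2 det(I + snr * H F F^H H^H)."""
    M = np.eye(N_r) + snr * H @ F @ F.conj().T @ H.conj().T
    return np.log2(np.linalg.det(M)).real

def project(F):
    """Project each entry onto the unit circle (phase-shifter constraint)."""
    return F / np.abs(F)

def pga(F0, step_sizes):
    """Projected gradient ascent with iteration-dependent step sizes."""
    F = F0.copy()
    for mu in step_sizes:
        M = np.eye(N_r) + snr * H @ F @ F.conj().T @ H.conj().T
        grad = snr * H.conj().T @ np.linalg.solve(M, H @ F)  # Wirtinger gradient
        F = project(F + mu * grad)
    return F

F0 = project(rng.standard_normal((N_t, N_rf)) + 1j * rng.standard_normal((N_t, N_rf)))
# Hypothetical per-iteration step sizes; the learn-to-optimize approach
# would fit such a schedule offline to speed up convergence.
mus = [0.3, 0.2, 0.1, 0.05, 0.02]
F = pga(F0, mus)
```

Because the step sizes are indexed by iteration, the optimizer's structure is unchanged; only its hyperparameters become learnable, which is what keeps the method interpretable.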