As the size of modern datasets grows, it becomes increasingly common to delegate computational tasks to service providers. Doing so, however, raises privacy concerns. Privatization schemes that enable learning algorithms to be executed unaltered have recently been popularized under the name instance encoding, aiming to circumvent the large overhead of traditional cryptographic primitives. In this work, we take an information- and coding-theoretic approach to instance encoding. Specifically, recent works have shown that general-purpose data sharing can be achieved without leaking information about any individual datapoint (marginally), while maintaining high mutual information with the dataset in its entirety. We first extend this framework to capture the entire privacy-utility tradeoff, accounting for the privatization of any subset of the dataset, and provide a coding scheme that achieves it. Second, we introduce a necessary algebraic condition for applying unaltered learning algorithms to encrypted data, termed signal preservation, and present an additional scheme that guarantees it. Both schemes achieve almost maximal mutual information with the entire dataset, under appropriate assumptions. The construction relies on classical ideas such as Shamir secret sharing, as well as a novel technique called random Hadamard coding.
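To make the classical ingredient concrete: the abstract's construction builds in part on Shamir secret sharing, in which a secret is hidden as the constant term of a random polynomial over a finite field and any sufficiently large subset of evaluations recovers it. The sketch below is a minimal, illustrative (t, n)-threshold implementation of that standard technique only; the field prime, parameter names, and API are assumptions for illustration and are not taken from the paper (in particular, it does not implement the paper's random Hadamard coding).

```python
# Minimal sketch of (t, n) Shamir secret sharing over a prime field.
# Any t of the n shares reconstruct the secret; fewer reveal nothing
# about it (each proper subset is marginally uniform).
import random

PRIME = 2**61 - 1  # Mersenne prime used as the field modulus (illustrative choice)

def split(secret, n, t):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # Modular inverse via Fermat's little theorem (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:5]) == 123456789
```

The threshold structure is what makes such primitives attractive here: utility (reconstruction) is available to sufficiently large collections of shares, while small subsets carry no marginal information, mirroring the privacy-utility tradeoff the abstract describes.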