Consider a random binary sequence X(n) of random variables Xt, t = 1, 2, ..., n, for instance one generated by a Markov source (teacher) of order k* (each state represented by k* bits). Assume that the probability of the event Xt = 1 is constant and denote it by β. Consider a learner based on a parametric model, for instance a Markov model of order k, that trains on a sequence x(m) randomly drawn from the teacher. The learner's performance is tested by giving it a sequence x(n) (generated by the teacher) and checking its prediction on every bit of x(n). An error occurs at time t if the learner's prediction Yt differs from the true bit value Xt. Denote by ξ(n) the sequence of errors, where the error bit ξt at time t equals 1 or 0 according to whether an error occurs at time t or not, respectively. Consider the subsequence ξ(v) of ξ(n) corresponding to the errors made when predicting a 0, i.e., ξ(v) consists of the bits of ξ(n) only at times t such that Yt = 0. In this paper we compute an estimate of the deviation of the frequency of 1s in ξ(v) from β. The result shows that the level of randomness of ξ(v) decreases as the complexity of the learner increases.
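As a concrete sketch of this setup, the following simulation simplifies the teacher to an i.i.d. Bernoulli(β) source (a Markov source of order 0, so the probability of Xt = 1 is constant as assumed) and uses a count-based majority-vote predictor as the order-k learner; the function names and parameter values are illustrative, not from the paper:

```python
import random

def train_markov_learner(x, k):
    """Count the outcomes following each k-bit context in the training sequence."""
    counts = {}
    for t in range(k, len(x)):
        ctx = tuple(x[t - k:t])
        c = counts.setdefault(ctx, [0, 0])
        c[x[t]] += 1
    return counts

def predict(counts, ctx):
    """Majority-vote prediction Y_t for a context; defaults to 0 for unseen contexts."""
    c = counts.get(tuple(ctx), [1, 0])
    return 1 if c[1] > c[0] else 0

random.seed(0)
beta, k, m, n = 0.3, 2, 1000, 10000   # illustrative parameter choices
teacher = lambda length: [1 if random.random() < beta else 0 for _ in range(length)]

# Train on x(m), then test predictions on every bit of a fresh x(n).
counts = train_markov_learner(teacher(m), k)
x_test = teacher(n)

xi_v = []  # the subsequence xi(v): error bits at times t where Y_t = 0
for t in range(k, n):
    y = predict(counts, x_test[t - k:t])
    if y == 0:
        xi_v.append(1 if y != x_test[t] else 0)

freq = sum(xi_v) / len(xi_v)
print(abs(freq - beta))  # deviation of the frequency of 1s in xi(v) from beta
```

With this i.i.d. teacher the frequency of 1s in ξ(v) should fall near β; the paper's result concerns bounding precisely this deviation for the general order-k* teacher.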