Given a discrete memoryless source X, it is well known that the expected codeword length per symbol L_n(X) of an optimal prefix code for the extended source X^n converges to the source entropy as n approaches infinity. However, the sequence L_n(X) need not be monotonic in n, which implies that the coding efficiency cannot be increased by simply encoding a larger block of source symbols (unless the block length is appropriately chosen). As the encoding and decoding complexity increases exponentially with the block length, from a practical perspective it is useful to know when an increase in the block length guarantees a decrease in the expected codeword length per symbol. While this paper does not provide a complete answer to that question, we give some properties of L_n(X) and obtain, for each n >= 1 with p_1^n nondyadic (p_1 being the probability of the most likely source symbol), an integer k* for which L_{kn}(X) < L_n(X) for all k >= k*, implying that the coding efficiency of encoding blocks of length kn is higher than that of encoding blocks of length n for all k >= k*. This question is simpler in part because L_{kn}(X) <= L_n(X) is guaranteed for all n >= 1 and k >= 1; our results identify the scenarios in which increasing the multiplicative factor guarantees a strict improvement. These results extend and generalize those of Montgomery and Kumar.
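The non-monotonicity of L_n(X) is easy to observe numerically. The sketch below (the binary source with p_1 = 0.7 is our choice for illustration, not an example taken from the paper) computes L_n(X) by building a Huffman code, which is optimal among prefix codes, over the n-fold extension X^n; it uses the identity that the expected Huffman codeword length equals the sum of the probabilities of the merged (internal) nodes.

```python
import heapq
import itertools
import math

def huffman_expected_length(probs):
    """Expected codeword length (bits) of a Huffman code for `probs`.

    Relies on the identity that the expected length equals the sum of
    the probabilities of all internal (merged) nodes of the Huffman tree.
    """
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

def L_n(p, n):
    """Expected codeword length per symbol, L_n(X), for the n-fold
    extension X^n of a memoryless source with symbol probabilities p."""
    block_probs = [math.prod(t) for t in itertools.product(p, repeat=n)]
    return huffman_expected_length(block_probs) / n

# Illustrative binary source; p_1 = 0.7 is an assumed value for the demo.
p = [0.7, 0.3]
entropy = -sum(q * math.log2(q) for q in p)   # about 0.8813 bits/symbol
print([round(L_n(p, n), 4) for n in (1, 2, 3)])
```

For this source L_1 = 1, L_2 = 0.905, and L_3 is roughly 0.9087, so L_3 > L_2 even though both exceed the entropy of about 0.8813 bits/symbol: encoding blocks of three symbols is less efficient than encoding pairs, illustrating why a larger block length alone guarantees nothing.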