Transistor sizing of a latched sense amplifier can trade sensitivity for area, speed, and power. Because of the parametric variations inherent in a manufacturing process, the bit-line drive/load in an SRAM may vary widely. In general, a low-sensitivity but high-speed sense amplifier suffices if the voltage-differentiating rate on a bit-line pair is large enough to develop a significant voltage difference during sensing; otherwise, a high-sensitivity but low-speed sense amplifier must be used. We model the worst-case voltage-differentiating rate based on TSMC 0.25 μm, 0.18 μm, and 0.13 μm generic CMOS technologies. The sensitivity of the sense amplifier is derived from equivalent channel-length mismatches, which link the design margins to the tolerances of process variations. We propose a simulation-based method to optimize the transistor sizing. The optimized sense amplifier achieves higher sensitivity and lower power consumption than the original design. The approach is demonstrated in a 16K × 256 embedded synchronous SRAM.