A new hybrid duration Hidden Markov Model (hdHMM), which combines ideas from both infinite-duration and finite-duration models, is proposed here and applied to a large vocabulary Taiwanese speech recognition task. Such a model not only models state duration distributions better than the traditional left-to-right HMM but is also more computationally efficient than the finite-duration HMM. The experiments were performed on a large vocabulary Taiwanese (Min-nan) multi-syllabic word recognition task. For the speaker-dependent case, the best word error rate achieved here is 7.9%. Since this paper is also one of the first on the speech recognition of Taiwanese, some basic facts about Taiwanese phonetics are also briefly introduced.
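To make the duration-modeling idea concrete, the following is a minimal, illustrative sketch of Viterbi decoding for a generic explicit-duration (semi-Markov) HMM, the family of finite-duration models the abstract contrasts against. It is not the authors' hdHMM; all names, the toy probabilities, and the uniform initial-state assumption are illustrative only. Each state emits a whole segment of frames whose length is scored by an explicit duration distribution, rather than by geometric self-loop decay.

```python
import numpy as np

def duration_viterbi(log_b, log_a, log_dur, max_dur):
    """Viterbi decoding for an explicit-duration (semi-Markov) HMM.

    log_b   : (T, N) per-frame log observation likelihoods
    log_a   : (N, N) log transition probabilities (self-loops unused)
    log_dur : (N, max_dur) log duration probs, log_dur[j, d-1] = log p_j(d)
    Returns the best state sequence, one state per frame.
    """
    T, N = log_b.shape
    # cum[t, j] = sum of log_b[0..t-1, j], for O(1) segment scoring
    cum = np.vstack([np.zeros(N), np.cumsum(log_b, axis=0)])
    delta = np.full((T + 1, N), -np.inf)
    delta[0] = 0.0                             # uniform (log 1) start, an assumption
    back = np.zeros((T + 1, N, 2), dtype=int)  # (previous state, segment duration)
    for t in range(1, T + 1):
        for j in range(N):
            for d in range(1, min(max_dur, t) + 1):
                seg = cum[t, j] - cum[t - d, j]   # log-likelihood of frames t-d..t-1 in state j
                if t - d == 0:
                    score = seg + log_dur[j, d - 1]
                    i_best = j                    # dummy predecessor at sequence start
                else:
                    prev = delta[t - d] + log_a[:, j]
                    prev[j] = -np.inf             # no self-transition between segments
                    i_best = int(np.argmax(prev))
                    score = prev[i_best] + seg + log_dur[j, d - 1]
                if score > delta[t, j]:
                    delta[t, j] = score
                    back[t, j] = (i_best, d)
    # trace segments back into a frame-level state path
    j = int(np.argmax(delta[T]))
    t, path = T, []
    while t > 0:
        i, d = back[t, j]
        path = [j] * d + path
        t, j = t - d, i
    return path

# Toy usage: two states, four frames; state 0 fits frames 0-1, state 1 fits frames 2-3
b = np.log(np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]))
a = np.log(np.array([[0.1, 0.9], [0.9, 0.1]]))
dur = np.log(np.full((2, 4), 0.25))            # uniform durations up to 4 frames
print(duration_viterbi(b, a, dur, 4))          # → [0, 0, 1, 1]
```

The O(T·N·D) duration loop here is what motivates hybrids like the proposed hdHMM: an explicit duration distribution improves on the geometric durations of the left-to-right HMM, but searching over all segment lengths D is costly.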