Search results: 1-1 of 1 record found for "Applied Linguistics: Reducing Training Data Mismatch"
N-gram Weighting: Reducing Training Data Mismatch in Cross-Domain Language Model Estimation
N-gram Weighting; Data Mismatch; Cross-Domain Language Model
2014/11/27
In domains with insufficient matched training data, language models are often constructed by interpolating component models trained from partially matched corpora. Since the n-grams from such corpora m...
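To illustrate the baseline setup the abstract refers to (not the paper's n-gram weighting method itself), the sketch below shows how a cross-domain language model can be formed by linearly interpolating the probabilities of component models trained on partially matched corpora. The function name, interpolation weights, and toy n-gram probabilities are hypothetical placeholders.

```python
from collections import defaultdict

def interpolate_lm(component_probs, weights):
    """Combine component n-gram models: P(w|h) = sum_i lambda_i * P_i(w|h).

    component_probs: list of dicts mapping (history, word) -> probability
    weights: list of interpolation weights lambda_i that sum to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "interpolation weights must sum to 1"
    merged = defaultdict(float)
    for lam, probs in zip(weights, component_probs):
        for ngram, p in probs.items():
            merged[ngram] += lam * p  # weighted sum of component probabilities
    return dict(merged)

# Toy example: an in-domain and an out-of-domain corpus, weighted 0.7 / 0.3.
in_domain  = {(("the",), "patient"): 0.4,  (("the",), "model"): 0.1}
out_domain = {(("the",), "patient"): 0.05, (("the",), "model"): 0.3}
lm = interpolate_lm([in_domain, out_domain], [0.7, 0.3])
print(lm[(("the",), "patient")])  # 0.7*0.4 + 0.3*0.05 = 0.295
```

In practice the weights are typically tuned on held-out in-domain data; the paper's contribution, per its title, is to weight individual n-grams rather than whole component models, which this sketch does not implement.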