Getting Kaldi's TIMIT recipe to run, and fixing awk's missing gensub function


My experiment environment is CentOS 6, so there are all kinds of environment pitfalls waiting for me to fill in. I recommend that you use Ubuntu 16.10 or later, or Debian (the first Linux distribution I ever used) is also fine.


Continuing with the egs/timit recipe, I ran into a fatal problem:

awk (gawk) cannot find the gensub function. Drawing on earlier lessons, I suspected a version problem:

[houwenbin@localhost gawk-4.2.0]$ awk --version
awk version 20070501
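You can confirm that gensub itself is what is missing with a one-liner probe (my own check, not part of the Kaldi scripts):

```shell
# Probe for gensub: gawk substitutes and prints "f00"; BWK awk and mawk
# abort, because gensub() is a gawk extension.
if out=$(awk 'BEGIN { print gensub(/o/, "0", "g", "foo") }' 2>/dev/null) &&
   [ "$out" = "f00" ]; then
    echo "gensub OK"
else
    echo "gensub missing: this awk is not GNU gawk"
fi
```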

That version is indeed quite old. Let's download the latest one and try it:

wget ftp://ftp.gnu.org/gnu/gawk/gawk-4.2.0.tar.xz

tar xJf gawk-4.2.0.tar.xz   # .xz archives need -J, not -z (or plain `tar xf` with a recent tar)

cd gawk-4.2.0

./configure --prefix=/

make && make install        # && so install only runs if the build succeeded
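After installing, note that bash caches the location of every command it has already run, so the current shell may still launch the old awk. A quick sanity check (my own habit, not from the original steps):

```shell
# Reset bash's command-location cache, then confirm which awk will run.
hash -r                    # forget remembered command paths
command -v awk             # the path the shell will actually execute
awk --version | head -n 1  # should now report the freshly built GNU Awk
```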

Check the new version:

[houwenbin@localhost ~]$ awk --version
GNU Awk 4.2.0, API: 2.0
Copyright (C) 1989, 1991-2017 Free Software Foundation.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.

If you still don't see the new version, check whether another awk exists elsewhere on the system. The NDK cross-compilation environment, for example, also ships its own awk!
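To hunt down every copy, something like this works (directories scanned are just the PATH entries; adjust for your own setup):

```shell
# Show every awk the shell can reach; a stale copy earlier in PATH
# (an NDK toolchain directory, for instance) shadows the new install.
type -a awk

# Scan each PATH entry by hand for stray awk binaries:
echo "$PATH" | tr ':' '\n' | while read -r dir; do
    if [ -x "$dir/awk" ]; then
        echo "awk found in: $dir/awk"
    fi
done
```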


Following http://blog.csdn.net/shmilyforyq/article/details/75258259, I happily kicked off the TIMIT experiment!


[houwenbin@localhost s5]$ ./run.sh
============================================================================
                Data & Lexicon & Language Preparation
============================================================================
wav-to-duration --read-entire-file=true scp:train_wav.scp ark,t:train_dur.ark
LOG (wav-to-duration[5.2]:main():wav-to-duration.cc:92) Printed duration for 3696 audio files.
LOG (wav-to-duration[5.2]:main():wav-to-duration.cc:94) Mean duration was 3.06336, min and max durations were 0.91525, 7.78881
wav-to-duration --read-entire-file=true scp:dev_wav.scp ark,t:dev_dur.ark
LOG (wav-to-duration[5.2]:main():wav-to-duration.cc:92) Printed duration for 400 audio files.
LOG (wav-to-duration[5.2]:main():wav-to-duration.cc:94) Mean duration was 3.08212, min and max durations were 1.09444, 7.43681
wav-to-duration --read-entire-file=true scp:test_wav.scp ark,t:test_dur.ark
LOG (wav-to-duration[5.2]:main():wav-to-duration.cc:92) Printed duration for 192 audio files.
LOG (wav-to-duration[5.2]:main():wav-to-duration.cc:94) Mean duration was 3.03646, min and max durations were 1.30562, 6.21444
Data preparation succeeded
LOGFILE:/dev/null
$bin/ngt -i="$inpfile" -n=$order -gooout=y -o="$gzip -c > $tmpdir/ngram.${sdict}.gz" -fd="$tmpdir/$sdict" $dictionary $additional_parameters >> $logfile 2>&1
$bin/ngt -i="$inpfile" -n=$order -gooout=y -o="$gzip -c > $tmpdir/ngram.${sdict}.gz" -fd="$tmpdir/$sdict" $dictionary $additional_parameters >> $logfile 2>&1
$scr/build-sublm.pl $verbose $prune $prune_thr_str $smoothing "$additional_smoothing_parameters" --size $order --ngrams "$gunzip -c $tmpdir/ngram.${sdict}.gz" -sublm $tmpdir/lm.$sdict $additional_parameters >> $logfile 2>&1
inpfile: data/local/lm_tmp/lm_phone_bg.ilm.gz
outfile: /dev/stdout
loading up to the LM level 1000 (if any)
dub: 10000000
OOV code is 50
OOV code is 50
Saving in txt format to /dev/stdout
Dictionary & language model preparation succeeded
utils/prepare_lang.sh --sil-prob 0.0 --position-dependent-phones false --num-sil-states 3 data/local/dict sil data/local/lang_tmp data/lang
Checking data/local/dict/silence_phones.txt ...
--> reading data/local/dict/silence_phones.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/silence_phones.txt is OK
Checking data/local/dict/optional_silence.txt ...
--> reading data/local/dict/optional_silence.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/optional_silence.txt is OK
Checking data/local/dict/nonsilence_phones.txt ...
--> reading data/local/dict/nonsilence_phones.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/nonsilence_phones.txt is OK
Checking disjoint: silence_phones.txt, nonsilence_phones.txt
--> disjoint property is OK.
Checking data/local/dict/lexicon.txt
--> reading data/local/dict/lexicon.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/lexicon.txt is OK
Checking data/local/dict/extra_questions.txt ...
--> reading data/local/dict/extra_questions.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/extra_questions.txt is OK
--> SUCCESS [validating dictionary directory data/local/dict]
**Creating data/local/dict/lexiconp.txt from data/local/dict/lexicon.txt
fstaddselfloops data/lang/phones/wdisambig_phones.int data/lang/phones/wdisambig_words.int
prepare_lang.sh: validating output directory
utils/validate_lang.pl data/lang
Checking data/lang/phones.txt ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang/phones.txt is OK
Checking words.txt: #0 ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang/words.txt is OK
Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK
Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> summation property is OK
Checking data/lang/phones/context_indep.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.int corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.csl corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.{txt, int, csl} are OK
Checking data/lang/phones/nonsilence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 47 entry/entries in data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.int corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.csl corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.{txt, int, csl} are OK
Checking data/lang/phones/silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/phones/silence.txt
--> data/lang/phones/silence.int corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.csl corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.{txt, int, csl} are OK
Checking data/lang/phones/optional_silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.int corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.csl corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.{txt, int, csl} are OK
Checking data/lang/phones/disambig.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang/phones/disambig.txt
--> data/lang/phones/disambig.int corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.csl corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.{txt, int, csl} are OK
Checking data/lang/phones/roots.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang/phones/roots.txt
--> data/lang/phones/roots.int corresponds to data/lang/phones/roots.txt
--> data/lang/phones/roots.{txt, int} are OK
Checking data/lang/phones/sets.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang/phones/sets.txt
--> data/lang/phones/sets.int corresponds to data/lang/phones/sets.txt
--> data/lang/phones/sets.{txt, int} are OK
Checking data/lang/phones/extra_questions.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.int corresponds to data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.{txt, int} are OK
Checking optional_silence.txt ...
--> reading data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.txt is OK
Checking disambiguation symbols: #0 and #1
--> data/lang/phones/disambig.txt has "#0" and "#1"
--> data/lang/phones/disambig.txt is OK
Checking topo ...
Checking word-level disambiguation symbols...
--> data/lang/phones/wdisambig.txt exists (newer prepare_lang.sh)
Checking data/lang/oov.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/oov.txt
--> data/lang/oov.int corresponds to data/lang/oov.txt
--> data/lang/oov.{txt, int} are OK
--> data/lang/L.fst is olabel sorted
--> data/lang/L_disambig.fst is olabel sorted
--> SUCCESS [validating lang directory data/lang]
Preparing train, dev and test data
Checking data/train/text ...
--> reading data/train/text
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
utils/validate_data_dir.sh: Successfully validated data-directory data/train
Checking data/dev/text ...
--> reading data/dev/text
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
utils/validate_data_dir.sh: Successfully validated data-directory data/dev
Checking data/test/text ...
--> reading data/test/text
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
utils/validate_data_dir.sh: Successfully validated data-directory data/test
Preparing language models for test
arpa2fst --disambig-symbol=#0 --read-symbol-table=data/lang_test_bg/words.txt - data/lang_test_bg/G.fst
LOG (arpa2fst[5.2]:Read():arpa-file-parser.cc:98) Reading \data\ section.
LOG (arpa2fst[5.2]:Read():arpa-file-parser.cc:153) Reading \1-grams: section.
LOG (arpa2fst[5.2]:Read():arpa-file-parser.cc:153) Reading \2-grams: section.
WARNING (arpa2fst[5.2]:ConsumeNGram():arpa-lm-compiler.cc:313) line 60 [-3.26717        <s> <s>] skipped: n-gram has invalid BOS/EOS placement
LOG (arpa2fst[5.2]:RemoveRedundantStates():arpa-lm-compiler.cc:359) Reduced num-states from 50 to 50
fstisstochastic data/lang_test_bg/G.fst
0.000510126 -0.0763018
utils/validate_lang.pl data/lang_test_bg
Checking data/lang_test_bg/phones.txt ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang_test_bg/phones.txt is OK
Checking words.txt: #0 ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang_test_bg/words.txt is OK
Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK
Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> summation property is OK
Checking data/lang_test_bg/phones/context_indep.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.int corresponds to data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.csl corresponds to data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/nonsilence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 47 entry/entries in data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.int corresponds to data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.csl corresponds to data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.int corresponds to data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.csl corresponds to data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/optional_silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.int corresponds to data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.csl corresponds to data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/disambig.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.int corresponds to data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.csl corresponds to data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/roots.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang_test_bg/phones/roots.txt
--> data/lang_test_bg/phones/roots.int corresponds to data/lang_test_bg/phones/roots.txt
--> data/lang_test_bg/phones/roots.{txt, int} are OK
Checking data/lang_test_bg/phones/sets.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang_test_bg/phones/sets.txt
--> data/lang_test_bg/phones/sets.int corresponds to data/lang_test_bg/phones/sets.txt
--> data/lang_test_bg/phones/sets.{txt, int} are OK
Checking data/lang_test_bg/phones/extra_questions.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang_test_bg/phones/extra_questions.txt
--> data/lang_test_bg/phones/extra_questions.int corresponds to data/lang_test_bg/phones/extra_questions.txt
--> data/lang_test_bg/phones/extra_questions.{txt, int} are OK
Checking optional_silence.txt ...
--> reading data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.txt is OK
Checking disambiguation symbols: #0 and #1
--> data/lang_test_bg/phones/disambig.txt has "#0" and "#1"
--> data/lang_test_bg/phones/disambig.txt is OK
Checking topo ...
Checking word-level disambiguation symbols...
--> data/lang_test_bg/phones/wdisambig.txt exists (newer prepare_lang.sh)
Checking data/lang_test_bg/oov.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/oov.txt
--> data/lang_test_bg/oov.int corresponds to data/lang_test_bg/oov.txt
--> data/lang_test_bg/oov.{txt, int} are OK
--> data/lang_test_bg/L.fst is olabel sorted
--> data/lang_test_bg/L_disambig.fst is olabel sorted
--> data/lang_test_bg/G.fst is ilabel sorted
--> data/lang_test_bg/G.fst has 50 states
fstdeterminizestar data/lang_test_bg/G.fst /dev/null
--> data/lang_test_bg/G.fst is determinizable
--> utils/lang/check_g_properties.pl successfully validated data/lang_test_bg/G.fst
--> utils/lang/check_g_properties.pl succeeded.
--> Testing determinizability of L_disambig . G
fsttablecompose data/lang_test_bg/L_disambig.fst data/lang_test_bg/G.fst
fstdeterminizestar
--> L_disambig . G is determinizable
--> SUCCESS [validating lang directory data/lang_test_bg]
Succeeded in formatting data.
============================================================================
         MFCC Feature Extration & CMVN for Training and Test set
============================================================================
steps/make_mfcc.sh --cmd run.pl --max-jobs-run 10 --nj 10 data/train exp/make_mfcc/train mfcc
Checking data/train/text ...
--> reading data/train/text
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
utils/validate_data_dir.sh: Successfully validated data-directory data/train
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for train
steps/compute_cmvn_stats.sh data/train exp/make_mfcc/train mfcc
Succeeded creating CMVN stats for train
steps/make_mfcc.sh --cmd run.pl --max-jobs-run 10 --nj 10 data/dev exp/make_mfcc/dev mfcc
Checking data/dev/text ...
--> reading data/dev/text
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
utils/validate_data_dir.sh: Successfully validated data-directory data/dev
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for dev
steps/compute_cmvn_stats.sh data/dev exp/make_mfcc/dev mfcc
Succeeded creating CMVN stats for dev
steps/make_mfcc.sh --cmd run.pl --max-jobs-run 10 --nj 10 data/test exp/make_mfcc/test mfcc
Checking data/test/text ...
--> reading data/test/text
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
utils/validate_data_dir.sh: Successfully validated data-directory data/test
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for test
steps/compute_cmvn_stats.sh data/test exp/make_mfcc/test mfcc
Succeeded creating CMVN stats for test
============================================================================
                     MonoPhone Training & Decoding
============================================================================
steps/train_mono.sh --nj 30 --cmd run.pl --max-jobs-run 10 data/train data/lang exp/mono
steps/train_mono.sh: Initializing monophone system.
steps/train_mono.sh: Compiling training graphs
steps/train_mono.sh: Aligning data equally (pass 0)
steps/train_mono.sh: Pass 1
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 2
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 3
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 4
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 5
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 6
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 7
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 8
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 9
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 10
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 11
steps/train_mono.sh: Pass 12
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 13
steps/train_mono.sh: Pass 14
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 15
steps/train_mono.sh: Pass 16
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 17
steps/train_mono.sh: Pass 18
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 19
steps/train_mono.sh: Pass 20
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 21
steps/train_mono.sh: Pass 22
steps/train_mono.sh: Pass 23
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 24
steps/train_mono.sh: Pass 25
steps/train_mono.sh: Pass 26
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 27
steps/train_mono.sh: Pass 28
steps/train_mono.sh: Pass 29
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 30
steps/train_mono.sh: Pass 31
steps/train_mono.sh: Pass 32
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 33
steps/train_mono.sh: Pass 34
steps/train_mono.sh: Pass 35
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 36
steps/train_mono.sh: Pass 37
steps/train_mono.sh: Pass 38
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 39
steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/mono
steps/diagnostic/analyze_alignments.sh: see stats in exp/mono/log/analyze_alignments.log
2 warnings in exp/mono/log/align.*.*.log
exp/mono: nj=30 align prob=-99.15 over 3.12h [retry=0.0%, fail=0.0%] states=144 gauss=986
steps/train_mono.sh: Done training monophone system in exp/mono
tree-info exp/mono/tree
tree-info exp/mono/tree
fsttablecompose data/lang_test_bg/L_disambig.fst data/lang_test_bg/G.fst
fstdeterminizestar --use-log=true
fstpushspecial
fstminimizeencoded
fstisstochastic data/lang_test_bg/tmp/LG.fst
-0.00841336 -0.00928521
fstcomposecontext --context-size=1 --central-position=0 --read-disambig-syms=data/lang_test_bg/phones/disambig.int --write-disambig-syms=data/lang_test_bg/tmp/disambig_ilabels_1_0.int data/lang_test_bg/tmp/ilabels_1_0.9606
fstisstochastic data/lang_test_bg/tmp/CLG_1_0.fst
-0.00841336 -0.00928521
make-h-transducer --disambig-syms-out=exp/mono/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_1_0 exp/mono/tree exp/mono/final.mdl
fsttablecompose exp/mono/graph/Ha.fst data/lang_test_bg/tmp/CLG_1_0.fst
fstminimizeencoded
fstdeterminizestar --use-log=true
fstrmsymbols exp/mono/graph/disambig_tid.int
fstrmepslocal
fstisstochastic exp/mono/graph/HCLGa.fst
0.000381709 -0.00951555
add-self-loops --self-loop-scale=0.1 --reorder=true exp/mono/final.mdl
steps/decode.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/mono/graph data/dev exp/mono/decode_dev
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/mono/graph exp/mono/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(5,25,120) and mean=55.6
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_dev/log/analyze_lattice_depth_stats.log
steps/decode.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/mono/graph data/test exp/mono/decode_test
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/mono/graph exp/mono/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(6,27,143) and mean=74.1
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_test/log/analyze_lattice_depth_stats.log
============================================================================
           tri1 : Deltas + Delta-Deltas Training & Decoding
============================================================================
steps/align_si.sh --boost-silence 1.25 --nj 30 --cmd run.pl --max-jobs-run 10 data/train data/lang exp/mono exp/mono_ali
steps/align_si.sh: feature type is delta
steps/align_si.sh: aligning data in data/train using model from exp/mono, putting alignments in exp/mono_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/mono_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/mono_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_deltas.sh --cmd run.pl --max-jobs-run 10 2500 15000 data/train data/lang exp/mono_ali exp/tri1
steps/train_deltas.sh: accumulating tree stats
steps/train_deltas.sh: getting questions for tree-building, via clustering
steps/train_deltas.sh: building the tree
steps/train_deltas.sh: converting alignments from exp/mono_ali to use current tree
steps/train_deltas.sh: compiling graphs of transcripts
steps/train_deltas.sh: training pass 1
steps/train_deltas.sh: training pass 2
steps/train_deltas.sh: training pass 3
steps/train_deltas.sh: training pass 4
steps/train_deltas.sh: training pass 5
steps/train_deltas.sh: training pass 6
steps/train_deltas.sh: training pass 7
steps/train_deltas.sh: training pass 8
steps/train_deltas.sh: training pass 9
steps/train_deltas.sh: training pass 10
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 11
steps/train_deltas.sh: training pass 12
steps/train_deltas.sh: training pass 13
steps/train_deltas.sh: training pass 14
steps/train_deltas.sh: training pass 15
steps/train_deltas.sh: training pass 16
steps/train_deltas.sh: training pass 17
steps/train_deltas.sh: training pass 18
steps/train_deltas.sh: training pass 19
steps/train_deltas.sh: training pass 20
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 21
steps/train_deltas.sh: training pass 22
steps/train_deltas.sh: training pass 23
steps/train_deltas.sh: training pass 24
steps/train_deltas.sh: training pass 25
steps/train_deltas.sh: training pass 26
steps/train_deltas.sh: training pass 27
steps/train_deltas.sh: training pass 28
steps/train_deltas.sh: training pass 29
steps/train_deltas.sh: training pass 30
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 31
steps/train_deltas.sh: training pass 32
steps/train_deltas.sh: training pass 33
steps/train_deltas.sh: training pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/tri1
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri1/log/analyze_alignments.log
1 warnings in exp/tri1/log/compile_questions.log
74 warnings in exp/tri1/log/init_model.log
52 warnings in exp/tri1/log/update.*.log
exp/tri1: nj=30 align prob=-95.28 over 3.12h [retry=0.0%, fail=0.0%] states=1882 gauss=15036 tree-impr=5.40
steps/train_deltas.sh: Done training system with delta+delta-delta features in exp/tri1
tree-info exp/tri1/tree
tree-info exp/tri1/tree
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=data/lang_test_bg/phones/disambig.int --write-disambig-syms=data/lang_test_bg/tmp/disambig_ilabels_3_1.int data/lang_test_bg/tmp/ilabels_3_1.3514
fstisstochastic data/lang_test_bg/tmp/CLG_3_1.fst
0 -0.00928518
make-h-transducer --disambig-syms-out=exp/tri1/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri1/tree exp/tri1/final.mdl
fstrmepslocal
fsttablecompose exp/tri1/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstrmsymbols exp/tri1/graph/disambig_tid.int
fstdeterminizestar --use-log=true
fstminimizeencoded
fstisstochastic exp/tri1/graph/HCLGa.fst
0.000449687 -0.0175772
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri1/final.mdl
steps/decode.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/tri1/graph data/dev exp/tri1/decode_dev
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri1/graph exp/tri1/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(3,11,42) and mean=19.2
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_dev/log/analyze_lattice_depth_stats.log
steps/decode.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/tri1/graph data/test exp/tri1/decode_test
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri1/graph exp/tri1/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(3,12,49) and mean=21.9
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_test/log/analyze_lattice_depth_stats.log
============================================================================
                 tri2 : LDA + MLLT Training & Decoding
============================================================================
steps/align_si.sh --nj 30 --cmd run.pl --max-jobs-run 10 data/train data/lang exp/tri1 exp/tri1_ali
steps/align_si.sh: feature type is delta
steps/align_si.sh: aligning data in data/train using model from exp/tri1, putting alignments in exp/tri1_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/tri1_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri1_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_lda_mllt.sh --cmd run.pl --max-jobs-run 10 --splice-opts --left-context=3 --right-context=3 2500 15000 data/train data/lang exp/tri1_ali exp/tri2
steps/train_lda_mllt.sh: Accumulating LDA statistics.
steps/train_lda_mllt.sh: Accumulating tree stats
steps/train_lda_mllt.sh: Getting questions for tree clustering.
steps/train_lda_mllt.sh: Building the tree
steps/train_lda_mllt.sh: Initializing the model
steps/train_lda_mllt.sh: Converting alignments from exp/tri1_ali to use current tree
steps/train_lda_mllt.sh: Compiling graphs of transcripts
Training pass 1
Training pass 2
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 3
Training pass 4
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 5
Training pass 6
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 7
Training pass 8
Training pass 9
Training pass 10
Aligning data
Training pass 11
Training pass 12
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 13
Training pass 14
Training pass 15
Training pass 16
Training pass 17
Training pass 18
Training pass 19
Training pass 20
Aligning data
Training pass 21
Training pass 22
Training pass 23
Training pass 24
Training pass 25
Training pass 26
Training pass 27
Training pass 28
Training pass 29
Training pass 30
Aligning data
Training pass 31
Training pass 32
Training pass 33
Training pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/tri2
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri2/log/analyze_alignments.log
183 warnings in exp/tri2/log/update.*.log
105 warnings in exp/tri2/log/init_model.log
1 warnings in exp/tri2/log/compile_questions.log
exp/tri2: nj=30 align prob=-47.86 over 3.12h [retry=0.0%, fail=0.0%] states=2010 gauss=15034 tree-impr=5.56 lda-sum=28.46 mllt:impr,logdet=1.63,2.18
steps/train_lda_mllt.sh: Done training system with LDA+MLLT features in exp/tri2
tree-info exp/tri2/tree
tree-info exp/tri2/tree
make-h-transducer --disambig-syms-out=exp/tri2/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri2/tree exp/tri2/final.mdl
fstrmepslocal
fsttablecompose exp/tri2/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstrmsymbols exp/tri2/graph/disambig_tid.int
fstdeterminizestar --use-log=true
fstminimizeencoded
fstisstochastic exp/tri2/graph/HCLGa.fst
0.000461769 -0.0175772
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri2/final.mdl
steps/decode.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/tri2/graph data/dev exp/tri2/decode_dev
decode.sh: feature type is lda
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri2/graph exp/tri2/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,8,29) and mean=13.3
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_dev/log/analyze_lattice_depth_stats.log
steps/decode.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/tri2/graph data/test exp/tri2/decode_test
decode.sh: feature type is lda
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri2/graph exp/tri2/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,8,32) and mean=14.6
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_test/log/analyze_lattice_depth_stats.log
============================================================================
              tri3 : LDA + MLLT + SAT Training & Decoding
============================================================================
steps/align_si.sh --nj 30 --cmd run.pl --max-jobs-run 10 --use-graphs true data/train data/lang exp/tri2 exp/tri2_ali
steps/align_si.sh: feature type is lda
steps/align_si.sh: aligning data in data/train using model from exp/tri2, putting alignments in exp/tri2_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/tri2_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri2_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_sat.sh --cmd run.pl --max-jobs-run 10 2500 15000 data/train data/lang exp/tri2_ali exp/tri3
steps/train_sat.sh: feature type is lda
steps/train_sat.sh: obtaining initial fMLLR transforms since not present in exp/tri2_ali
steps/train_sat.sh: Accumulating tree stats
steps/train_sat.sh: Getting questions for tree clustering.
steps/train_sat.sh: Building the tree
steps/train_sat.sh: Initializing the model
steps/train_sat.sh: Converting alignments from exp/tri2_ali to use current tree
steps/train_sat.sh: Compiling graphs of transcripts
Pass 1
Pass 2
Estimating fMLLR transforms
Pass 3
Pass 4
Estimating fMLLR transforms
Pass 5
Pass 6
Estimating fMLLR transforms
Pass 7
Pass 8
Pass 9
Pass 10
Aligning data
Pass 11
Pass 12
Estimating fMLLR transforms
Pass 13
Pass 14
Pass 15
Pass 16
Pass 17
Pass 18
Pass 19
Pass 20
Aligning data
Pass 21
Pass 22
Pass 23
Pass 24
Pass 25
Pass 26
Pass 27
Pass 28
Pass 29
Pass 30
Aligning data
Pass 31
Pass 32
Pass 33
Pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/tri3
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri3/log/analyze_alignments.log
42 warnings in exp/tri3/log/init_model.log
1 warnings in exp/tri3/log/compile_questions.log
18 warnings in exp/tri3/log/update.*.log
steps/train_sat.sh: Likelihood evolution:
-50.1573 -49.2762 -49.0764 -48.8736 -48.1773 -47.467 -47.0375 -46.7895 -46.553 -46.0244 -45.767 -45.4404 -45.2512 -45.1163 -45.0002 -44.8829 -44.7724 -44.6672 -44.5614 -44.4011 -44.2651 -44.1746 -44.0909 -44.0093 -43.9307 -43.8546 -43.7783 -43.7032 -43.6313 -43.5378 -43.4676 -43.4394 -43.4229 -43.4139
exp/tri3: nj=30 align prob=-47.01 over 3.12h [retry=0.0%, fail=0.0%] states=1935 gauss=15013 fmllr-impr=4.04 over 2.79h tree-impr=8.71
steps/train_sat.sh: done training SAT system in exp/tri3
tree-info exp/tri3/tree
tree-info exp/tri3/tree
make-h-transducer --disambig-syms-out=exp/tri3/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri3/tree exp/tri3/final.mdl
fstrmepslocal
fsttablecompose exp/tri3/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstrmsymbols exp/tri3/graph/disambig_tid.int
fstdeterminizestar --use-log=true
fstminimizeencoded
fstisstochastic exp/tri3/graph/HCLGa.fst
0.000461769 -0.0175772
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri3/final.mdl
steps/decode_fmllr.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/tri3/graph data/dev exp/tri3/decode_dev
steps/decode.sh --scoring-opts  --num-threads 1 --skip-scoring false --acwt 0.083333 --nj 5 --cmd run.pl --max-jobs-run 10 --beam 10.0 --model exp/tri3/final.alimdl --max-active 2000 exp/tri3/graph data/dev exp/tri3/decode_dev.si
decode.sh: feature type is lda
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri3/graph exp/tri3/decode_dev.si
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_dev.si/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,9,34) and mean=15.2
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_dev.si/log/analyze_lattice_depth_stats.log
steps/decode_fmllr.sh: feature type is lda
steps/decode_fmllr.sh: getting first-pass fMLLR transforms.
steps/decode_fmllr.sh: doing main lattice generation phase
steps/decode_fmllr.sh: estimating fMLLR transforms a second time.
steps/decode_fmllr.sh: doing a final pass of acoustic rescoring.
steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri3/graph exp/tri3/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(1,5,16) and mean=7.6
steps/diagnostic/analyze_lats.sh: see stats in
exp/tri3/decode_dev/log/analyze_lattice_depth_stats.logsteps/decode_fmllr.sh --nj 5 --cmd run.pl --max-jobs-run 10 exp/tri3/graph data/test exp/tri3/decode_teststeps/decode.sh --scoring-opts  --num-threads 1 --skip-scoring false --acwt 0.083333 --nj 5 --cmd run.pl --max-jobs-run 10 --beam 10.0 --model exp/tri3/final.alimdl --max-active 2000 exp/tri3/graph data/test exp/tri3/decode_test.sidecode.sh: feature type is ldasteps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri3/graph exp/tri3/decode_test.sisteps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test.si/log/analyze_alignments.logOverall, lattice depth (10,50,90-percentile)=(2,10,37) and mean=16.8steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test.si/log/analyze_lattice_depth_stats.logsteps/decode_fmllr.sh: feature type is ldasteps/decode_fmllr.sh: getting first-pass fMLLR transforms.steps/decode_fmllr.sh: doing main lattice generation phasesteps/decode_fmllr.sh: estimating fMLLR transforms a second time.steps/decode_fmllr.sh: doing a final pass of acoustic rescoring.steps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/tri3/graph exp/tri3/decode_teststeps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test/log/analyze_alignments.logOverall, lattice depth (10,50,90-percentile)=(1,5,19) and mean=8.6steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test/log/analyze_lattice_depth_stats.log============================================================================                        SGMM2 Training & Decoding                         ============================================================================steps/align_fmllr.sh --nj 30 --cmd run.pl --max-jobs-run 10 data/train data/lang exp/tri3 exp/tri3_alisteps/align_fmllr.sh: feature type is ldasteps/align_fmllr.sh: compiling training graphssteps/align_fmllr.sh: aligning data in data/train using exp/tri3/final.alimdl and speaker-independent features.steps/align_fmllr.sh: 
computing fMLLR transformssteps/align_fmllr.sh: doing final alignment.steps/align_fmllr.sh: done aligning data.steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/tri3_alisteps/diagnostic/analyze_alignments.sh: see stats in exp/tri3_ali/log/analyze_alignments.logsteps/train_ubm.sh --cmd run.pl --max-jobs-run 10 400 data/train data/lang exp/tri3_ali exp/ubm4steps/train_ubm.sh: feature type is ldasteps/train_ubm.sh: using transforms from exp/tri3_alisteps/train_ubm.sh: clustering model exp/tri3_ali/final.mdl to get initial UBMsteps/train_ubm.sh: doing Gaussian selectionPass 0Pass 1Pass 2steps/train_sgmm2.sh --cmd run.pl --max-jobs-run 10 7000 9000 data/train data/lang exp/tri3_ali exp/ubm4/final.ubm exp/sgmm2_4steps/train_sgmm2.sh: feature type is ldasteps/train_sgmm2.sh: using transforms from exp/tri3_alisteps/train_sgmm2.sh: accumulating tree statssteps/train_sgmm2.sh: Getting questions for tree clustering.steps/train_sgmm2.sh: Building the treesteps/train_sgmm2.sh: Initializing the modelsteps/train_sgmm2.sh: doing Gaussian selectionsteps/train_sgmm2.sh: compiling training graphssteps/train_sgmm2.sh: converting alignmentssteps/train_sgmm2.sh: training pass 0 ... steps/train_sgmm2.sh: training pass 1 ... steps/train_sgmm2.sh: training pass 2 ... steps/train_sgmm2.sh: training pass 3 ... steps/train_sgmm2.sh: training pass 4 ... steps/train_sgmm2.sh: training pass 5 ... steps/train_sgmm2.sh: re-aligning datasteps/train_sgmm2.sh: training pass 6 ... steps/train_sgmm2.sh: training pass 7 ... steps/train_sgmm2.sh: training pass 8 ... steps/train_sgmm2.sh: training pass 9 ... steps/train_sgmm2.sh: training pass 10 ... steps/train_sgmm2.sh: re-aligning datasteps/train_sgmm2.sh: training pass 11 ... steps/train_sgmm2.sh: training pass 12 ... steps/train_sgmm2.sh: training pass 13 ... steps/train_sgmm2.sh: training pass 14 ... steps/train_sgmm2.sh: training pass 15 ... 
steps/train_sgmm2.sh: re-aligning datasteps/train_sgmm2.sh: training pass 16 ... steps/train_sgmm2.sh: training pass 17 ... steps/train_sgmm2.sh: training pass 18 ... steps/train_sgmm2.sh: training pass 19 ... steps/train_sgmm2.sh: training pass 20 ... steps/train_sgmm2.sh: training pass 21 ... steps/train_sgmm2.sh: training pass 22 ... steps/train_sgmm2.sh: training pass 23 ... steps/train_sgmm2.sh: training pass 24 ... steps/train_sgmm2.sh: building alignment model (pass 25)steps/train_sgmm2.sh: building alignment model (pass 26)steps/train_sgmm2.sh: building alignment model (pass 27)1 warnings in exp/sgmm2_4/log/compile_questions.log198 warnings in exp/sgmm2_4/log/update_ali.*.log1726 warnings in exp/sgmm2_4/log/update.*.logDonetree-info exp/sgmm2_4/tree tree-info exp/sgmm2_4/tree make-h-transducer --disambig-syms-out=exp/sgmm2_4/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/sgmm2_4/tree exp/sgmm2_4/final.mdl fstrmepslocal fsttablecompose exp/sgmm2_4/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst fstrmsymbols exp/sgmm2_4/graph/disambig_tid.int fstdeterminizestar --use-log=true fstminimizeencoded fstisstochastic exp/sgmm2_4/graph/HCLGa.fst 0.000476187 -0.0175772HCLGa is not stochasticadd-self-loops --self-loop-scale=0.1 --reorder=true exp/sgmm2_4/final.mdl steps/decode_sgmm2.sh --nj 5 --cmd run.pl --max-jobs-run 10 --transform-dir exp/tri3/decode_dev exp/sgmm2_4/graph data/dev exp/sgmm2_4/decode_devsteps/decode_sgmm2.sh: feature type is ldasteps/decode_sgmm2.sh: using transforms from exp/tri3/decode_devsteps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/sgmm2_4/graph exp/sgmm2_4/decode_devsteps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_dev/log/analyze_alignments.logOverall, lattice depth (10,50,90-percentile)=(2,6,20) and mean=9.5steps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_dev/log/analyze_lattice_depth_stats.logsteps/decode_sgmm2.sh --nj 5 --cmd run.pl --max-jobs-run 10 
--transform-dir exp/tri3/decode_test exp/sgmm2_4/graph data/test exp/sgmm2_4/decode_teststeps/decode_sgmm2.sh: feature type is ldasteps/decode_sgmm2.sh: using transforms from exp/tri3/decode_teststeps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 exp/sgmm2_4/graph exp/sgmm2_4/decode_teststeps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_test/log/analyze_alignments.logOverall, lattice depth (10,50,90-percentile)=(2,6,23) and mean=10.7steps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_test/log/analyze_lattice_depth_stats.log============================================================================                    MMI + SGMM2 Training & Decoding                       ============================================================================steps/align_sgmm2.sh --nj 30 --cmd run.pl --max-jobs-run 10 --transform-dir exp/tri3_ali --use-graphs true --use-gselect true data/train data/lang exp/sgmm2_4 exp/sgmm2_4_alisteps/align_sgmm2.sh: feature type is ldasteps/align_sgmm2.sh: using transforms from exp/tri3_alisteps/align_sgmm2.sh: aligning data in data/train using model exp/sgmm2_4/final.alimdlsteps/align_sgmm2.sh: computing speaker vectors (1st pass)steps/align_sgmm2.sh: computing speaker vectors (2nd pass)steps/align_sgmm2.sh: doing final alignment.steps/align_sgmm2.sh: done aligning data.steps/diagnostic/analyze_alignments.sh --cmd run.pl --max-jobs-run 10 data/lang exp/sgmm2_4_alisteps/diagnostic/analyze_alignments.sh: see stats in exp/sgmm2_4_ali/log/analyze_alignments.logsteps/make_denlats_sgmm2.sh --nj 30 --sub-split 30 --acwt 0.2 --lattice-beam 10.0 --beam 18.0 --cmd run.pl --max-jobs-run 10 --transform-dir exp/tri3_ali data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlatssteps/make_denlats_sgmm2.sh: Making unigram grammar FST in exp/sgmm2_4_denlats/langsteps/make_denlats_sgmm2.sh: Compiling decoding graph in exp/sgmm2_4_denlats/dengraphtree-info exp/sgmm2_4_ali/tree tree-info exp/sgmm2_4_ali/tree fsttablecompose 
exp/sgmm2_4_denlats/lang/L_disambig.fst exp/sgmm2_4_denlats/lang/G.fst fstminimizeencoded fstdeterminizestar --use-log=true fstpushspecial fstisstochastic exp/sgmm2_4_denlats/lang/tmp/LG.fst 1.27271e-05 1.27271e-05fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=exp/sgmm2_4_denlats/lang/phones/disambig.int --write-disambig-syms=exp/sgmm2_4_denlats/lang/tmp/disambig_ilabels_3_1.int exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1.27913 fstisstochastic exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst 1.27657e-05 0make-h-transducer --disambig-syms-out=exp/sgmm2_4_denlats/dengraph/disambig_tid.int --transition-scale=1.0 exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1 exp/sgmm2_4_ali/tree exp/sgmm2_4_ali/final.mdl fsttablecompose exp/sgmm2_4_denlats/dengraph/Ha.fst exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst fstminimizeencoded fstrmepslocal fstrmsymbols exp/sgmm2_4_denlats/dengraph/disambig_tid.int fstdeterminizestar --use-log=true fstisstochastic exp/sgmm2_4_denlats/dengraph/HCLGa.fst 0.000481185 -0.000485819add-self-loops --self-loop-scale=0.1 --reorder=true exp/sgmm2_4_ali/final.mdl steps/make_denlats_sgmm2.sh: feature type is ldasteps/make_denlats_sgmm2.sh: using fMLLR transforms from exp/tri3_alisteps/make_denlats_sgmm2.sh: Merging archives for data subset 1steps/make_denlats_sgmm2.sh: Merging archives for data subset 2steps/make_denlats_sgmm2.sh: Merging archives for data subset 3steps/make_denlats_sgmm2.sh: Merging archives for data subset 4steps/make_denlats_sgmm2.sh: Merging archives for data subset 5steps/make_denlats_sgmm2.sh: Merging archives for data subset 6steps/make_denlats_sgmm2.sh: Merging archives for data subset 7steps/make_denlats_sgmm2.sh: Merging archives for data subset 8steps/make_denlats_sgmm2.sh: Merging archives for data subset 9steps/make_denlats_sgmm2.sh: Merging archives for data subset 10steps/make_denlats_sgmm2.sh: Merging archives for data subset 11steps/make_denlats_sgmm2.sh: Merging archives for data subset 
12steps/make_denlats_sgmm2.sh: Merging archives for data subset 13steps/make_denlats_sgmm2.sh: Merging archives for data subset 14steps/make_denlats_sgmm2.sh: Merging archives for data subset 15steps/make_denlats_sgmm2.sh: Merging archives for data subset 16steps/make_denlats_sgmm2.sh: Merging archives for data subset 17steps/make_denlats_sgmm2.sh: Merging archives for data subset 18steps/make_denlats_sgmm2.sh: Merging archives for data subset 19steps/make_denlats_sgmm2.sh: Merging archives for data subset 20steps/make_denlats_sgmm2.sh: Merging archives for data subset 21steps/make_denlats_sgmm2.sh: Merging archives for data subset 22steps/make_denlats_sgmm2.sh: Merging archives for data subset 23steps/make_denlats_sgmm2.sh: Merging archives for data subset 24steps/make_denlats_sgmm2.sh: Merging archives for data subset 25steps/make_denlats_sgmm2.sh: Merging archives for data subset 26steps/make_denlats_sgmm2.sh: Merging archives for data subset 27steps/make_denlats_sgmm2.sh: Merging archives for data subset 28steps/make_denlats_sgmm2.sh: Merging archives for data subset 29steps/make_denlats_sgmm2.sh: Merging archives for data subset 30steps/make_denlats_sgmm2.sh: done generating denominator lattices with SGMMs.steps/train_mmi_sgmm2.sh --acwt 0.2 --cmd run.pl --max-jobs-run 10 --transform-dir exp/tri3_ali --boost 0.1 --drop-frames true data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlats exp/sgmm2_4_mmi_b0.1steps/train_mmi_sgmm2.sh: feature type is ldasteps/train_mmi_sgmm2.sh: using transforms from exp/tri3_alisteps/train_mmi_sgmm2.sh: using speaker vectors from exp/sgmm2_4_alisteps/train_mmi_sgmm2.sh: using Gaussian-selection info from exp/sgmm2_4_aliIteration 0 of MMI trainingIteration 0: objf was 0.500664422464595, MMI auxf change was 0.0161997754313345Iteration 1 of MMI trainingIteration 1: objf was 0.515510864906709, MMI auxf change was 0.00240651195788137Iteration 2 of MMI trainingIteration 2: objf was 0.518162614976294, MMI auxf change was 
0.000690078350104861Iteration 3 of MMI trainingIteration 3: objf was 0.519018203153884, MMI auxf change was 0.000602987314448584MMI training finishedsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 1 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it1steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_devsteps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_devsteps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdlsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 1 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it1steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_teststeps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_teststeps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdlsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 2 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it2steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_devsteps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_devsteps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdlsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 2 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it2steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_teststeps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from 
exp/tri3/decode_teststeps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdlsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 3 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it3steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_devsteps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_devsteps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdlsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 3 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it3steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_teststeps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_teststeps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdlsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 4 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it4steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_devsteps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_devsteps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdlsteps/decode_sgmm2_rescore.sh --cmd run.pl --max-jobs-run 10 --iter 4 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it4steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_teststeps/decode_sgmm2_rescore.sh: feature type is ldasteps/decode_sgmm2_rescore.sh: using transforms from 
exp/tri3/decode_teststeps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdl============================================================================                    DNN Hybrid Training & Decoding                        ============================================================================steps/nnet2/train_tanh.sh --mix-up 5000 --initial-learning-rate 0.015 --final-learning-rate 0.002 --num-hidden-layers 2 --num-jobs-nnet 30 --cmd run.pl --max-jobs-run 10 data/train data/lang exp/tri3_ali exp/tri4_nnetsteps/nnet2/train_tanh.sh: calling get_lda.shsteps/nnet2/get_lda.sh --transform-dir exp/tri3_ali --splice-width 4 --cmd run.pl --max-jobs-run 10 data/train data/lang exp/tri3_ali exp/tri4_nnetsteps/nnet2/get_lda.sh: feature type is ldasteps/nnet2/get_lda.sh: using transforms from exp/tri3_alifeat-to-dim 'ark,s,cs:utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- |' - transform-feats exp/tri4_nnet/final.mat ark:- ark:- splice-feats --left-context=3 --right-context=3 ark:- ark:- apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- WARNING (feat-to-dim[5.2]:Close():kaldi-io.cc:501) Pipe utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- 
ark:- | had nonzero return status 36096feat-to-dim 'ark,s,cs:utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- |' - transform-feats exp/tri4_nnet/final.mat ark:- ark:- apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- splice-feats --left-context=4 --right-context=4 ark:- ark:- splice-feats --left-context=3 --right-context=3 ark:- ark:- WARNING (feat-to-dim[5.2]:Close():kaldi-io.cc:501) Pipe utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- | had nonzero return status 36096steps/nnet2/get_lda.sh: Accumulating LDA statistics.steps/nnet2/get_lda.sh: Finished estimating LDAsteps/nnet2/train_tanh.sh: calling get_egs.shsteps/nnet2/get_egs.sh --transform-dir exp/tri3_ali --splice-width 4 --samples-per-iter 200000 --num-jobs-nnet 30 --stage 0 --cmd run.pl --max-jobs-run 10 --io-opts --max-jobs-run 5 data/train data/lang exp/tri3_ali exp/tri4_nnetsteps/nnet2/get_egs.sh: feature type is ldasteps/nnet2/get_egs.sh: using transforms from exp/tri3_alisteps/nnet2/get_egs.sh: working out number of frames of training datautils/data/get_utt2dur.sh: segments file does not 
exist so getting durations from wave filesutils/data/get_utt2dur.sh: successfully obtained utterance lengths from sphere-file headersutils/data/get_utt2dur.sh: computed data/train/utt2durfeat-to-len 'scp:head -n 10 data/train/feats.scp|' ark,t:- steps/nnet2/get_egs.sh: Every epoch, splitting the data up into 1 iterations,steps/nnet2/get_egs.sh: giving samples-per-iteration of 37740 (you requested 200000).Getting validation and training subset examples.steps/nnet2/get_egs.sh: extracting validation and training-subset alignments.copy-int-vector ark:- ark,t:- LOG (copy-int-vector[5.2]:main():copy-int-vector.cc:83) Copied 3696 vectors of int32.Getting subsets of validation examples for diagnostics and combination.Creating training examplesGenerating training examples on disksteps/nnet2/get_egs.sh: rearranging examples into parts for different parallel jobssteps/nnet2/get_egs.sh: Since iters-per-epoch == 1, just concatenating the data.Shuffling the order of training examples(in order to avoid stressing the disk, these won't all run at once).steps/nnet2/get_egs.sh: Finished preparing training examplessteps/nnet2/train_tanh.sh: initializing neural netTraining transition probabilities and setting priorssteps/nnet2/train_tanh.sh: Will train for 15 + 5 epochs, equalling steps/nnet2/train_tanh.sh: 15 + 5 = 20 iterations, steps/nnet2/train_tanh.sh: (while reducing learning rate) + (with constant learning rate).Training neural net (pass 0)Training neural net (pass 1)Training neural net (pass 2)Training neural net (pass 3)Training neural net (pass 4)Training neural net (pass 5)Training neural net (pass 6)Training neural net (pass 7)Training neural net (pass 8)Training neural net (pass 9)Training neural net (pass 10)Training neural net (pass 11)Training neural net (pass 12)Mixing up from 1935 to 5000 componentsTraining neural net (pass 13)Training neural net (pass 14)Training neural net (pass 15)Training neural net (pass 16)Training neural net (pass 17)Training neural net (pass 
18)Training neural net (pass 19)Setting num_iters_final=5Getting average posterior for purposes of adjusting the priors.Re-adjusting priors based on computed posteriorsDoneCleaning up datasteps/nnet2/remove_egs.sh: Finished deleting examples in exp/tri4_nnet/egsRemoving most of the modelssteps/nnet2/decode.sh --cmd run.pl --max-jobs-run 10 --nj 5 --num-threads 6 --transform-dir exp/tri3/decode_dev exp/tri3/graph data/dev exp/tri4_nnet/decode_devsteps/nnet2/decode.sh: feature type is ldasteps/nnet2/decode.sh: using transforms from exp/tri3/decode_devsteps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 --iter final exp/tri3/graph exp/tri4_nnet/decode_devsteps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_dev/log/analyze_alignments.logOverall, lattice depth (10,50,90-percentile)=(7,34,172) and mean=76.7steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_dev/log/analyze_lattice_depth_stats.logscore best pathsscore confidence and timing with scliteDecoding done.steps/nnet2/decode.sh --cmd run.pl --max-jobs-run 10 --nj 5 --num-threads 6 --transform-dir exp/tri3/decode_test exp/tri3/graph data/test exp/tri4_nnet/decode_teststeps/nnet2/decode.sh: feature type is ldasteps/nnet2/decode.sh: using transforms from exp/tri3/decode_teststeps/diagnostic/analyze_lats.sh --cmd run.pl --max-jobs-run 10 --iter final exp/tri3/graph exp/tri4_nnet/decode_teststeps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_test/log/analyze_alignments.logOverall, lattice depth (10,50,90-percentile)=(7,37,192) and mean=88.6steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_test/log/analyze_lattice_depth_stats.logscore best pathsscore confidence and timing with scliteDecoding done.============================================================================                    System Combination (DNN+SGMM)                         
========================================================================================================================================================               DNN Hybrid Training & Decoding (Karel's recipe)            ============================================================================steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --max-jobs-run 10 --transform-dir exp/tri3/decode_test data-fmllr-tri3/test data/test exp/tri3 data-fmllr-tri3/test/log data-fmllr-tri3/test/datasteps/nnet/make_fmllr_feats.sh: feature type is lda_fmllrutils/copy_data_dir.sh: copied data from data/test to data-fmllr-tri3/testChecking data-fmllr-tri3/test/text ...--> reading data-fmllr-tri3/test/text--> text seems to be UTF-8 or ASCII, checking whitespaces--> text contains only allowed whitespacesutils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/teststeps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/test --> data-fmllr-tri3/test, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_teststeps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --max-jobs-run 10 --transform-dir exp/tri3/decode_dev data-fmllr-tri3/dev data/dev exp/tri3 data-fmllr-tri3/dev/log data-fmllr-tri3/dev/datasteps/nnet/make_fmllr_feats.sh: feature type is lda_fmllrutils/copy_data_dir.sh: copied data from data/dev to data-fmllr-tri3/devChecking data-fmllr-tri3/dev/text ...--> reading data-fmllr-tri3/dev/text--> text seems to be UTF-8 or ASCII, checking whitespaces--> text contains only allowed whitespacesutils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/devsteps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/dev --> data-fmllr-tri3/dev, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_devsteps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --max-jobs-run 10 --transform-dir exp/tri3_ali data-fmllr-tri3/train data/train exp/tri3 data-fmllr-tri3/train/log data-fmllr-tri3/train/datasteps/nnet/make_fmllr_feats.sh: 
feature type is lda_fmllrutils/copy_data_dir.sh: copied data from data/train to data-fmllr-tri3/trainChecking data-fmllr-tri3/train/text ...--> reading data-fmllr-tri3/train/text--> text seems to be UTF-8 or ASCII, checking whitespaces--> text contains only allowed whitespacesutils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/trainsteps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/train --> data-fmllr-tri3/train, using : raw-trans None, gmm exp/tri3, trans exp/tri3_aliutils/subset_data_dir_tr_cv.sh data-fmllr-tri3/train data-fmllr-tri3/train_tr90 data-fmllr-tri3/train_cv10/home/houwenbin/kaldi-master/egs/timit/s5/utils/subset_data_dir.sh: reducing #utt from 3696 to 3320/home/houwenbin/kaldi-master/egs/timit/s5/utils/subset_data_dir.sh: reducing #utt from 3696 to 376LOG ([5.2]:main():cuda-gpu-available.cc:86) ...### WE DID NOT GET A CUDA GPU!!! ###### If your system has a 'free' CUDA GPU, try re-installing latest 'CUDA toolkit' from NVidia (this updates GPU drivers too).### Otherwise 'nvidia-smi' shows the status of GPUs:### - The versions should match ('NVIDIA-SMI' and 'Driver Version'), otherwise reboot or reload kernel module,### - The GPU should be unused (no 'process' in list, low 'memory-usage' (<100MB), low 'gpu-fan' (<30%)),### - You should see your GPU (burnt GPUs may disappear from the list until reboot),# Accounting: time=0 threads=1# Ended (code 1) at Mon Nov 27 16:29:09 CST 2017, elapsed time 0 seconds# steps/nnet/pretrain_dbn.sh --hid-dim 1024 --rbm-iter 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn # Started at Mon Nov 27 23:16:11 CST 2017#steps/nnet/pretrain_dbn.sh --hid-dim 1024 --rbm-iter 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn# INFOsteps/nnet/pretrain_dbn.sh : Pre-training Deep Belief Network as a stack of RBMs         dir       : exp/dnn4_pretrain-dbn          Train-set : data-fmllr-tri3/train '3696'LOG ([5.2]:main():cuda-gpu-available.cc:49) ### IS CUDA GPU AVAILABLE? 
'localhost.localdomain' ###ERROR ([5.2]:SelectGpuId():cu-device.cc:121) No CUDA GPU detected!, diagnostics: cudaError_t 35 : "CUDA driver version is insufficient for CUDA runtime version", in cu-device.cc:121[ Stack-Trace: ]kaldi::MessageLogger::HandleMessage(kaldi::LogMessageEnvelope const&, char const*)kaldi::MessageLogger::~MessageLogger()kaldi::CuDevice::SelectGpuId(std::string)main__libc_start_maincuda-gpu-available() [0x401739]LOG ([5.2]:main():cuda-gpu-available.cc:86) ...### WE DID NOT GET A CUDA GPU!!! ###### If your system has a 'free' CUDA GPU, try re-installing latest 'CUDA toolkit' from NVidia (this updates GPU drivers too).### Otherwise 'nvidia-smi' shows the status of GPUs:### - The versions should match ('NVIDIA-SMI' and 'Driver Version'), otherwise reboot or reload kernel module,### - The GPU should be unused (no 'process' in list, low 'memory-usage' (<100MB), low 'gpu-fan' (<30%)),### - You should see your GPU (burnt GPUs may disappear from the list until reboot),# Accounting: time=0 threads=1# Ended (code 1) at Mon Nov 27 23:16:11 CST 2017, elapsed time 0 secondsrun.pl: job failed, log is in exp/dnn4_pretrain-dbn/log/pretrain_dbn.log[houwenbin@localhost s5]$
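Before launching the Karel-recipe stages it is worth probing for a usable GPU up front, rather than discovering the failure hours into the run as happened above. A minimal sketch (Kaldi's own cuda-gpu-available binary is the authoritative check; this only tests whether nvidia-smi is present and runs successfully, which is what the error above ultimately boils down to):

```shell
# Pre-flight GPU probe (a sketch, not Kaldi's official check):
# nvidia-smi must both exist in PATH and exit successfully.
check_gpu() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "gpu-ok"
  else
    echo "no-gpu"   # pretrain_dbn.sh would abort, as in the log above
  fi
}
check_gpu
```

Running this before `run.sh` reaches the DNN stages tells you immediately whether `steps/nnet/pretrain_dbn.sh` has any chance of succeeding on the machine.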


The run aborts at this point; see http://blog.csdn.net/lindadasummer/article/details/77727193 for reference.

exit 0 # From this point you can run Karel's DNN : local/nnet/run_dnn.sh
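The effect of that early `exit 0` can be seen with a toy script (an illustration only, not the real run.sh): everything after the exit, including the GPU-bound DNN stage, is simply never reached.

```shell
# Toy stand-in for run.sh: `exit 0` stops execution before the later stage.
script=$(mktemp)
cat > "$script" <<'EOF'
echo "cpu stages done"
exit 0
echo "dnn stage"   # never reached once exit 0 is in place
EOF
out=$(sh "$script")
rm -f "$script"
echo "$out"        # prints: cpu stages done
```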

Then just keep the run going!


After waiting all night: bad luck. The server has no GPU, so the experiment cannot continue, and I left it there. Still, all the basic steps are now in place; what remains is to study the pipeline itself.
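For anyone repeating this on another machine, a quick probe confirms up front that the awk first in PATH really provides gensub, the gawk built-in whose absence started this whole detour; old one-true-awk builds (and shadowing copies such as the NDK's awk mentioned earlier) fail the call.

```shell
# Probe whether the `awk` found first in PATH supports gensub.
# Non-gawk awks abort on the undefined function, leaving $res empty.
res=$(echo x | awk '{ print gensub(/x/, "y", "g") }' 2>/dev/null)
if [ "$res" = "y" ]; then
  status="gensub-ok"
else
  status="gensub-missing"   # e.g. the old CentOS awk, or another awk shadowing gawk
fi
echo "$status"
```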
