HMM-LDA data requirements

INPUT

  • WS a 1 x N vector where WS(k) contains the vocabulary index of the kth word token, and N is the number of word tokens. The word indices of actual tokens are not zero based, i.e., they start at 1 and max( WS ) = W = number of distinct words in the vocabulary; a word index of 0 is reserved to denote the end-of-sentence marker. Note that the words are ordered according to their occurrence in the documents.
  • DS a 1 x N vector where DS(k) contains the document index of the kth word token. The document indices are not zero based, i.e., min( DS ) = 1 and max( DS ) = D = number of documents.
  • WO a 1 x W cell array of strings where WO{k} contains the kth vocabulary item and W is the number of distinct vocabulary items. It is not needed for running the Gibbs sampler but becomes necessary when writing the resulting word-topic distributions to a file using the writetopics MATLAB function. A toy construction of these inputs is sketched after this list.
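
As a minimal sketch, the inputs for a toy two-document corpus could be built by hand as follows. All names and values here are illustrative, not from the source, and the commented sampler call assumes the usual Matlab Topic Modeling Toolbox calling convention (the exact argument order is an assumption):

    % Toy corpus: document 1 = "topic model . model learns ."
    %             document 2 = "model learns structure ."
    WO = { 'topic' , 'model' , 'learns' , 'structure' };   % 1 x W vocabulary

    WS = [ 1 2 0 2 3 0 , 2 3 4 0 ];   % word indices; 0 = end-of-sentence marker
    DS = [ 1 1 1 1 1 1 , 2 2 2 2 ];   % document index of each token

    % Sanity checks implied by the specification above
    assert( max( WS ) == numel( WO ) );    % max word index equals W
    assert( numel( WS ) == numel( DS ) );  % both vectors are 1 x N

    % Assumed call shape (not confirmed by the source; parameters such as the
    % number of topics T, number of HMM states NS, iterations N, and the
    % hyperparameters ALPHA/BETA/GAMMA would be chosen by the user):
    % [ WP , DP , MP , Z , X ] = GibbsSamplerHMMLDA( WS , DS , T , NS , N , ...
    %                               ALPHA , BETA , GAMMA , SEED , OUTPUT );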

OUTPUT

  • WP a sparse matrix of size W x T, where W is the number of words in the vocabulary and T is the number of topics. WP(i,j) contains the number of times word i has been assigned to topic j.
  • DP a sparse D x T matrix, where D is the number of documents. DP(d,j) contains the number of times a word token in document d has been assigned to topic j.
  • MP a sparse W x S matrix where S is the number of HMM states. MP(i,j) contains the number of times word i has been assigned to HMM state j. Note that HMM state 1 represents the LDA model and 2..S represent the "syntactic" HMM states.
  • Z a 1 x N vector containing the topic assignments where N is the number of word tokens. Z(k) contains the topic assignment for token k.
  • X a 1 x N vector containing the HMM state assignments where N is the number of word tokens. X(k) contains the assignment of the kth word token to an HMM state. Note that the numbering in X differs from the columns of MP: state 1 marks the end of a document, state 2 represents the LDA model, and 3..S+2 represent the "syntactic" HMM states.
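
As a usage note, the WP and DP count matrices are typically normalized into probability estimates. The sketch below assumes Dirichlet smoothing hyperparameters BETA and ALPHA with illustrative values; the variable names phi and theta are not from the source:

    [ W , T ] = size( WP );
    [ D , ~ ] = size( DP );
    BETA  = 0.01;     % assumed smoothing value, for illustration only
    ALPHA = 50 / T;   % assumed smoothing value, for illustration only

    % p( word | topic ): column-normalize the smoothed word-topic counts
    phi = ( full( WP ) + BETA ) ./ ( full( sum( WP , 1 ) ) + W * BETA );

    % p( topic | doc ): row-normalize the smoothed document-topic counts
    theta = ( full( DP ) + ALPHA ) ./ ( full( sum( DP , 2 ) ) + T * ALPHA );

    % Implicit expansion (R2016b and later) broadcasts the 1 x T and D x 1
    % normalizers; on older MATLAB releases use bsxfun( @rdivide , ... ).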