Entropy (Huffman Coding)



Entropy

Time Limit: 2000/1000 MS (Java/Others)    Memory Limit: 65536/32768 K (Java/Others)
Total Submission(s): 4233    Accepted Submission(s): 1732


Problem Description
An entropy encoder is a data encoding method that achieves lossless data compression by encoding a message with “wasted” or “extra” information removed. In other words, entropy encoding removes information that was not necessary in the first place to accurately encode the message. A high degree of entropy implies a message with a great deal of wasted information; english text encoded in ASCII is an example of a message type that has very high entropy. Already compressed messages, such as JPEG graphics or ZIP archives, have very little entropy and do not benefit from further attempts at entropy encoding.

English text encoded in ASCII has a high degree of entropy because all characters are encoded using the same number of bits, eight. It is a known fact that the letters E, L, N, R, S and T occur at a considerably higher frequency than do most other letters in english text. If a way could be found to encode just these letters with four bits, then the new encoding would be smaller, would contain all the original information, and would have less entropy. ASCII uses a fixed number of bits for a reason, however: it’s easy, since one is always dealing with a fixed number of bits to represent each possible glyph or character. How would an encoding scheme that used four bits for the above letters be able to distinguish between the four-bit codes and eight-bit codes? This seemingly difficult problem is solved using what is known as a “prefix-free variable-length” encoding.

In such an encoding, any number of bits can be used to represent any glyph, and glyphs not present in the message are simply not encoded. However, in order to be able to recover the information, no bit pattern that encodes a glyph is allowed to be the prefix of any other encoding bit pattern. This allows the encoded bitstream to be read bit by bit, and whenever a set of bits is encountered that represents a glyph, that glyph can be decoded. If the prefix-free constraint was not enforced, then such a decoding would be impossible.

Consider the text “AAAAABCD”. Using ASCII, encoding this would require 64 bits. If, instead, we encode “A” with the bit pattern “00”, “B” with “01”, “C” with “10”, and “D” with “11” then we can encode this text in only 16 bits; the resulting bit pattern would be “0000000000011011”. This is still a fixed-length encoding, however; we’re using two bits per glyph instead of eight. Since the glyph “A” occurs with greater frequency, could we do better by encoding it with fewer bits? In fact we can, but in order to maintain a prefix-free encoding, some of the other bit patterns will become longer than two bits. An optimal encoding is to encode “A” with “0”, “B” with “10”, “C” with “110”, and “D” with “111”. (This is clearly not the only optimal encoding, as it is obvious that the encodings for B, C and D could be interchanged freely for any given encoding without increasing the size of the final encoded message.) Using this encoding, the message encodes in only 13 bits to “0000010110111”, a compression ratio of 4.9 to 1 (that is, each bit in the final encoded message represents as much information as did 4.9 bits in the original encoding). Read through this bit pattern from left to right and you’ll see that the prefix-free encoding makes it simple to decode this into the original text even though the codes have varying bit lengths.
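
To make the decoding claim concrete, here is a small standalone sketch (my own illustration, not part of the problem statement) that decodes the 13-bit string using the code table above; because the code is prefix-free, greedily matching the accumulated bits is unambiguous:

#include <iostream>
#include <map>
#include <string>
using namespace std;

int main() {
    // Code table from the example above: prefix-free, so no code is a prefix of another.
    map<string, char> code = {{"0", 'A'}, {"10", 'B'}, {"110", 'C'}, {"111", 'D'}};
    string bits = "0000010110111", cur, out;
    for (char b : bits) {
        cur += b;                    // accumulate bits until they form a complete code
        if (code.count(cur)) {       // prefix-freeness makes this match unambiguous
            out += code[cur];
            cur.clear();
        }
    }
    cout << out << endl;             // prints AAAAABCD
}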

As a second example, consider the text “THE CAT IN THE HAT”. In this text, the letter “T” and the space character both occur with the highest frequency, so they will clearly have the shortest encoding bit patterns in an optimal encoding. The letters “C”, “I” and “N” only occur once, however, so they will have the longest codes.

There are many possible sets of prefix-free variable-length bit patterns that would yield the optimal encoding, that is, that would allow the text to be encoded in the fewest number of bits. One such optimal encoding is to encode spaces with “00”, “A” with “100”, “C” with “1110”, “E” with “1111”, “H” with “110”, “I” with “1010”, “N” with “1011” and “T” with “01”. The optimal encoding therefore requires only 51 bits compared to the 144 that would be necessary to encode the message with 8-bit ASCII encoding, a compression ratio of 2.8 to 1.
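
(As a quick check of the stated 51 bits: 4 spaces × 2 + 4 T's × 2 + 3 H's × 3 + 2 E's × 4 + 2 A's × 3 + 1 C × 4 + 1 I × 4 + 1 N × 4 = 8 + 8 + 9 + 8 + 6 + 4 + 4 + 4 = 51, and 144 / 51 ≈ 2.8.)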
 

Input
The input file will contain a list of text strings, one per line. The text strings will consist only of uppercase alphanumeric characters and underscores (which are used in place of spaces). The end of the input will be signalled by a line containing only the word “END” as the text string. This line should not be processed.
 

Output
For each text string in the input, output the length in bits of the 8-bit ASCII encoding, the length in bits of an optimal prefix-free variable-length encoding, and the compression ratio accurate to one decimal point.
 

Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END
 

Sample Output
64 13 4.9
144 51 2.8
 

Source
Greater New York 2000


Problem link: http://acm.hdu.edu.cn/showproblem.php?pid=1053

Analysis: This is a classic Huffman tree problem. A Huffman tree (see Baidu Baike) is, in one sentence, the standard tool for compression-coding problems, and anyone who has taken discrete mathematics should have come across this neat data structure.
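
To see what that means on the first sample: in "AAAAABCD" the frequencies are A:5, B:1, C:1, D:1. Huffman's algorithm repeatedly merges the two smallest weights — 1+1=2, then 2+1=3, then 3+5=8 — and the optimal encoded length equals the sum of the merge costs, 2+3+8 = 13 bits, exactly the 13 bits given in the problem statement.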

(1) Define the Huffman tree node struct.

typedef struct Huffman_trie
{
    int deep;                  // depth of the node in the tree
    int freq;                  // frequency (the weight in the Huffman tree)
    Huffman_trie *left, *right;
    // Comparison used by the priority queue; if this looks unfamiliar,
    // read up on priority queues first.
    friend bool operator<(Huffman_trie a, Huffman_trie b)
    {
        return a.freq > b.freq;
    }
} Huffman_trie;
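
One point worth spelling out (a tiny standalone check of mine, not part of the solution): std::priority_queue is a max-heap by default, so defining operator< as a.freq > b.freq makes the node with the smallest frequency come out first.

#include <cstdio>
#include <queue>
using namespace std;

// Minimal standalone check: with operator< defined as a.freq > b.freq,
// the priority queue pops nodes in increasing order of freq.
struct Node {
    int freq;
    friend bool operator<(Node a, Node b) { return a.freq > b.freq; }
};

int main() {
    priority_queue<Node> pq;
    pq.push(Node{5});
    pq.push(Node{1});
    pq.push(Node{3});
    while (!pq.empty()) {
        printf("%d ", pq.top().freq);   // prints: 1 3 5
        pq.pop();
    }
    printf("\n");
}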

(2) First, preprocess the input string: sort it so identical characters are grouped together, count how many times each character occurs (count_num tracks the current run), and store each count as the freq of a Huffman node.

        len = strlen(str);
        str[len] = '!';              // sentinel after the string so the last run of characters is flushed
        sort(str, str + len);        // group identical characters together
        count_num = 1;
        index = 0;
        for (int i = 1; i <= len; i++)
        {
            if (str[i] != str[i - 1])
            {
                trie[index++].freq = count_num;   // close the previous run and record its frequency
                count_num = 1;
            }
            else count_num++;
        }
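
For reference, the same frequencies can be collected without sorting by using a counting array (a minimal alternative sketch of mine; the name count_freqs and the cnt array are not from the original code, and the Huffman_trie struct from step (1) is assumed):

#include <cstring>

// Alternative to the sort + '!' sentinel: count each character directly.
int cnt[256];
int count_freqs(const char *str, int len, Huffman_trie *trie) {
    memset(cnt, 0, sizeof(cnt));
    for (int i = 0; i < len; i++)
        cnt[(unsigned char)str[i]]++;
    int index = 0;
    for (int c = 0; c < 256; c++)
        if (cnt[c]) trie[index++].freq = cnt[c];
    return index;   // number of distinct characters
}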

(3) Store the nodes in a priority queue. Repeatedly take out the two nodes with the smallest frequencies, merge them into a new node, and push the merged node back into the queue, until only one node remains; that node becomes the root. (This is the heart of building a Huffman tree, so make sure you understand this step.)

    root = (Huffman_trie*)malloc(sizeof(Huffman_trie));
    for (int i = 0; i < index; i++)
        pq.push(trie[i]);
    while (pq.size() > 1)
    {
        // Copy each popped node into freshly allocated memory so the
        // merged node's child pointers stay valid after the pop.
        Huffman_trie *h1 = (Huffman_trie*)malloc(sizeof(Huffman_trie));
        *h1 = pq.top();
        pq.pop();
        Huffman_trie *h2 = (Huffman_trie*)malloc(sizeof(Huffman_trie));
        *h2 = pq.top();
        pq.pop();
        Huffman_trie h3;
        h3.left = h1;
        h3.right = h2;
        h3.freq = h1->freq + h2->freq;
        pq.push(h3);
    }
    *root = pq.top();

(4) Once the Huffman tree is built, what remains is computing the encoded length, which comes down to traversing the tree. There are many ways to do this: depth-first or breadth-first both work. Here a breadth-first traversal is used, with the help of a queue. (A depth-first variant is sketched after the code below.)

    queue<Huffman_trie> q;
    q.push(*root);
    while (!q.empty())
    {
        Huffman_trie ht = q.front();
        q.pop();
        if (ht.left != NULL)
        {
            ht.left->deep = ht.deep + 1;
            q.push(*ht.left);
        }
        if (ht.right != NULL)
        {
            ht.right->deep = ht.deep + 1;
            q.push(*ht.right);
        }
        if (!ht.left && !ht.right)
            sum += ht.deep * ht.freq;   // leaf: depth = code length, so add depth * frequency
    }
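
Since the write-up notes that a depth-first traversal would work just as well, here is roughly what that variant could look like (my own sketch, assuming the same Huffman_trie node, the global sum, and NULL children at the leaves; it is not the code used above):

// Depth-first alternative to the BFS above: accumulate depth * freq at every leaf.
void dfs(Huffman_trie *node, int depth) {
    if (node == NULL) return;
    if (node->left == NULL && node->right == NULL) {
        sum += depth * node->freq;      // leaf: its depth is the code length
        return;
    }
    dfs(node->left, depth + 1);
    dfs(node->right, depth + 1);
}

// Called as: sum = 0; dfs(root, 0);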

(5) The rest is straightforward and needs no further elaboration: print 8 * len, the optimal length sum, and their ratio to one decimal place, handling the special case where the string contains only one distinct character (each character then takes one bit).

Full code for the problem:

#include <iostream>
#include <queue>
#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <algorithm>
using namespace std;

typedef struct Huffman_trie
{
    int deep;                  // depth of the node in the tree
    int freq;                  // frequency (the weight in the Huffman tree)
    Huffman_trie *left, *right;
    // Comparison used by the priority queue; read up on priority queues if this looks unfamiliar.
    friend bool operator<(Huffman_trie a, Huffman_trie b)
    {
        return a.freq > b.freq;
    }
} Huffman_trie;

Huffman_trie trie[300];        // Huffman tree nodes
Huffman_trie *root;
int len, count_num, index, sum;
priority_queue<Huffman_trie> pq;

void huffman()
{
    sum = 0;
    root = (Huffman_trie*)malloc(sizeof(Huffman_trie));
    for (int i = 0; i < index; i++)
        pq.push(trie[i]);
    while (pq.size() > 1)
    {
        Huffman_trie *h1 = (Huffman_trie*)malloc(sizeof(Huffman_trie));
        *h1 = pq.top();
        pq.pop();
        Huffman_trie *h2 = (Huffman_trie*)malloc(sizeof(Huffman_trie));
        *h2 = pq.top();
        pq.pop();
        Huffman_trie h3;
        h3.left = h1;
        h3.right = h2;
        h3.freq = h1->freq + h2->freq;
        pq.push(h3);
    }
    *root = pq.top();
    pq.pop();
    root->deep = 0;
    queue<Huffman_trie> q;
    q.push(*root);
    while (!q.empty())
    {
        Huffman_trie ht = q.front();
        q.pop();
        if (ht.left != NULL)
        {
            ht.left->deep = ht.deep + 1;
            q.push(*ht.left);
        }
        if (ht.right != NULL)
        {
            ht.right->deep = ht.deep + 1;
            q.push(*ht.right);
        }
        if (!ht.left && !ht.right)
            sum += ht.deep * ht.freq;
    }
}

int main()
{
    char str[1000];
    while (scanf("%s", str) != EOF && strcmp(str, "END") != 0)
    {
        len = strlen(str);
        str[len] = '!';
        sort(str, str + len);
        count_num = 1;
        index = 0;
        for (int i = 1; i <= len; i++)
        {
            if (str[i] != str[i - 1])
            {
                trie[index++].freq = count_num;
                count_num = 1;
            }
            else count_num++;
        }
        if (index == 1)
            printf("%d %d 8.0\n", len * 8, len);
        else
        {
            huffman();
            printf("%d %d %.1lf\n", len * 8, sum, len * 8 * 1.0 / sum);
        }
    }
    return 0;
}

My code:


#include <iostream>
#include <queue>
#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <algorithm>
#include <functional>
using namespace std;

struct huffman
{
    int deep;
    int freq;
    huffman *left;
    huffman *right;
    friend bool operator<(huffman a, huffman b)
    {
        return a.freq > b.freq;
    }
};

huffman tree[333];
huffman *root;
int sum, id;
priority_queue<huffman> pq;
void Huffman()
{
    int i;
    sum = 0;
    for (i = 0; i < id; i++)
    {
        pq.push(tree[i]);
    }
    while (pq.size() > 1)
    {
        huffman *h1 = (huffman *)malloc(sizeof(huffman));
        *h1 = pq.top();
        pq.pop();
        huffman *h2 = (huffman *)malloc(sizeof(huffman));  // the * is part of the pointer definition; root below is already declared globally, so it is not repeated
        *h2 = pq.top();
        pq.pop();
        huffman h3;
        h3.left = h1;
        h3.right = h2;
        h3.freq = h1->freq + h2->freq;
        pq.push(h3);
    }
    root = (huffman *)malloc(sizeof(huffman));
    *root = pq.top();
    pq.pop();
    root->deep = 0;
    queue<huffman> q;
    q.push(*root);
    while (!q.empty())
    {
        huffman ht = q.front();
        q.pop();
        if (ht.left != NULL)
        {
            ht.left->deep = ht.deep + 1;
            q.push(*ht.left);
        }
        if (ht.right != NULL)
        {
            ht.right->deep = ht.deep + 1;
            q.push(*ht.right);
        }
        if (!ht.left && !ht.right)
        {
            sum += ht.deep * ht.freq;
        }
    }
}
int main()
{
    char s[333];
    int i, len, cnt;
    while (scanf("%s", s) != EOF)
    {
        if (strcmp(s, "END") == 0)
            break;
        len = strlen(s);
        s[len] = '!';
        sort(s, s + len);
        cnt = 1;
        id = 0;
        for (i = 1; i <= len; i++)
        {
            if (s[i] != s[i - 1])
            {
                tree[id++].freq = cnt;
                cnt = 1;
            }
            else
                cnt++;
        }
        if (id == 1)
            printf("%d %d 8.0\n", len * 8, len);
        else
        {
            Huffman();
            printf("%d %d %.1f\n", len * 8, sum, len * 8.0 / sum);
        }
    }
    return 0;
}
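
As a closing aside: because the problem only asks for the length of the optimal encoding, not the codes themselves, the tree can be skipped entirely; the optimal length is simply the sum of the costs of all merges. A compact sketch of that idea (my own variant, not the accepted code above):

#include <cstdio>
#include <cstring>
#include <functional>
#include <queue>
#include <vector>
using namespace std;

int main() {
    char s[1005];
    while (scanf("%s", s) != EOF && strcmp(s, "END") != 0) {
        int len = strlen(s);
        int cnt[256] = {0};
        for (int i = 0; i < len; i++) cnt[(unsigned char)s[i]]++;

        // Min-heap of frequencies; each merge cost is added to the total bit count.
        priority_queue<int, vector<int>, greater<int> > pq;
        for (int c = 0; c < 256; c++)
            if (cnt[c]) pq.push(cnt[c]);

        int best;
        if (pq.size() == 1) {
            best = len;                 // single distinct character: 1 bit per character
        } else {
            best = 0;
            while (pq.size() > 1) {
                int a = pq.top(); pq.pop();
                int b = pq.top(); pq.pop();
                best += a + b;          // merging adds (a + b) bits to the encoding
                pq.push(a + b);
            }
        }
        printf("%d %d %.1f\n", len * 8, best, len * 8.0 / best);
    }
    return 0;
}

On the sample input this prints 64 13 4.9 and 144 51 2.8, matching the sample output.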
