(Java) UTF-8 definition

UTF-8 definition (from RFC 2279, F. Yergeau, January 1998)

In UTF-8, characters are encoded using sequences of 1 to 6 octets. The only octet of a "sequence" of one has the higher-order bit set to 0, the remaining 7 bits being used to encode the character value. In a sequence of n octets, n > 1, the initial octet has the n higher-order bits set to 1, followed by a bit set to 0. The remaining bit(s) of that octet contain bits from the value of the character to be encoded. The following octet(s) all have the higher-order bit set to 1 and the following bit set to 0, leaving 6 bits in each to contain bits from the character to be encoded.

The table below summarizes the format of these different octet types. The letter x indicates bits available for encoding bits of the UCS-4 character value.

   UCS-4 range (hex.)    UTF-8 octet sequence (binary)
   0000 0000-0000 007F   0xxxxxxx
   0000 0080-0000 07FF   110xxxxx 10xxxxxx
   0000 0800-0000 FFFF   1110xxxx 10xxxxxx 10xxxxxx
   0001 0000-001F FFFF   11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
   0020 0000-03FF FFFF   111110xx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx
   0400 0000-7FFF FFFF   1111110x 10xxxxxx ... 10xxxxxx

Encoding from UCS-4 to UTF-8 proceeds as follows:

1) Determine the number of octets required from the character value and the first column of the table above. It is important to note that the rows of the table are mutually exclusive, i.e. there is only one valid way to encode a given UCS-4 character.

2) Prepare the high-order bits of the octets as per the second column of the table.

3) Fill in the bits marked x from the bits of the character value, starting from the lower-order bits of the character value and putting them first in the last octet of the sequence, then the next to last, etc. until all x bits are filled in.

(A Java sketch of these three steps is given at the end of this section.)

The algorithm for encoding UCS-2 (or Unicode) to UTF-8 can be obtained from the above, in principle, by simply extending each UCS-2 character with two zero-valued octets. However, pairs of UCS-2 values between D800 and DFFF (surrogate pairs in Unicode parlance), being actually UCS-4 characters transformed through UTF-16, need special treatment: the UTF-16 transformation must be undone, yielding a UCS-4 character that is then transformed as above.

Decoding from UTF-8 to UCS-4 proceeds as follows:

1) Initialize the 4 octets of the UCS-4 character with all bits set to 0.

2) Determine which bits encode the character value from the number of octets in the sequence and the second column of the table above (the bits marked x).

3) Distribute the bits from the sequence to the UCS-4 character, first the lower-order bits from the last octet of the sequence and proceeding to the left until no x bits are left.

If the UTF-8 sequence is no more than three octets long, decoding can proceed directly to UCS-2.
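To make the encoding steps above concrete, here is a minimal Java sketch. The class and method names (Utf8Encoder, encode) are illustrative, not from the RFC; this is a sketch of the three steps, not a production implementation.

```java
public class Utf8Encoder {

    /** Encodes one UCS-4 character (0..0x7FFFFFFF) into a 1- to 6-octet UTF-8 sequence. */
    public static byte[] encode(int ucs4) {
        if (ucs4 < 0) {
            throw new IllegalArgumentException("not a UCS-4 value: " + ucs4);
        }
        // Step 1: determine the number of octets from the first column of the table.
        int n;
        if      (ucs4 <= 0x7F)      n = 1;
        else if (ucs4 <= 0x7FF)     n = 2;
        else if (ucs4 <= 0xFFFF)    n = 3;
        else if (ucs4 <= 0x1FFFFF)  n = 4;
        else if (ucs4 <= 0x3FFFFFF) n = 5;
        else                        n = 6;

        byte[] seq = new byte[n];
        if (n == 1) {
            seq[0] = (byte) ucs4;                    // 0xxxxxxx
            return seq;
        }
        // Step 3 (done before step 2 here): fill the x bits from the low-order
        // bits of the character value, last octet of the sequence first.
        for (int i = n - 1; i > 0; i--) {
            seq[i] = (byte) (0x80 | (ucs4 & 0x3F));  // 10xxxxxx continuation octet
            ucs4 >>>= 6;
        }
        // Step 2: the initial octet has the n high-order bits set to 1, followed
        // by a 0 bit (e.g. n = 3 gives 1110xxxx); the remaining value bits fit below.
        seq[0] = (byte) ((0xFF << (8 - n)) | ucs4);
        return seq;
    }
}
```

For example, encode(0x20AC) (the euro sign) falls in the third row of the table and yields the three octets E2 82 AC.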
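The surrogate-pair rule maps directly onto Java, whose char type is a UTF-16 code unit: a value in D800-DFFF is only half a character, and the UTF-16 transformation must be undone before encoding. A sketch, reusing the hypothetical Utf8Encoder above and the standard Character.toCodePoint helper:

```java
public class SurrogateDemo {

    /** Undoes the UTF-16 transformation of a surrogate pair, then encodes as UTF-8. */
    public static byte[] encodePair(char high, char low) {
        if (!Character.isHighSurrogate(high) || !Character.isLowSurrogate(low)) {
            throw new IllegalArgumentException("not a surrogate pair");
        }
        // Recovers the UCS-4 value: (high - 0xD800) * 0x400 + (low - 0xDC00) + 0x10000.
        int ucs4 = Character.toCodePoint(high, low);
        return Utf8Encoder.encode(ucs4);  // always a 4-octet sequence, since ucs4 >= 0x10000
    }
}
```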
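A matching decoder sketch (names again illustrative). Rather than filling a zeroed 32-bit value from the right, as the RFC phrases it, this version shifts 6 bits in from each continuation octet left to right; the resulting UCS-4 value is the same.

```java
public class Utf8Decoder {

    /** Decodes one UTF-8 sequence starting at offset off; returns the UCS-4 value. */
    public static int decode(byte[] in, int off) {
        int first = in[off] & 0xFF;
        // The number of leading 1 bits in the initial octet is the sequence
        // length n; zero leading 1 bits means a single-octet sequence.
        int n = Integer.numberOfLeadingZeros(~(first << 24));
        if (n == 0) {
            return first;                            // 0xxxxxxx
        }
        if (n == 1 || n > 6) {
            throw new IllegalArgumentException("invalid initial octet");
        }
        // Steps 1-2: start from zero and keep only the x bits of the initial
        // octet (0x7F >> n masks off the length marker, e.g. 1110xxxx -> 0x0F).
        int ucs4 = first & (0x7F >> n);
        // Step 3: take 6 x bits from each 10xxxxxx continuation octet.
        for (int i = 1; i < n; i++) {
            int b = in[off + i] & 0xFF;
            if ((b & 0xC0) != 0x80) {
                throw new IllegalArgumentException("invalid continuation octet");
            }
            ucs4 = (ucs4 << 6) | (b & 0x3F);
        }
        return ucs4;                                 // fits in 16 bits when n <= 3
    }
}
```

For example, decode(new byte[] {(byte) 0xE2, (byte) 0x82, (byte) 0xAC}, 0) returns 0x20AC; since the sequence is only three octets long, the result fits in 16 bits and can be used directly as a UCS-2 value, as the RFC notes.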