20 issues of porting C++ code on the 64-bit platform


Table of contents

  • Introduction
  • Disabled warnings
  • Use of functions with a variable number of arguments
  • Magic numbers
  • Storing of integers in double type
  • Bit shifting operations
  • Storing of pointer addresses
  • Memsize types in unions
  • Change of an array type
  • Virtual functions with arguments of memsize type
  • Serialization and data exchange
    • Using types of varying size
    • Ignoring of the byte order
  • Bit fields
  • Pointer address arithmetic
  • Array indexing
  • Mixed use of simple integer types and memsize types
  • Implicit type conversions while using functions
  • Overloaded functions
  • Data alignment
  • Exceptions
  • The use of outdated functions and predefined constants
  • Explicit type conversions
  • Error diagnosis
  • Unit tests
  • Code review
  • Built-in means of compilers
  • Static analyzers
  • Conclusion
  • Resources
  • History

Introduction

We offer you this article devoted to porting 32-bit program code to 64-bit systems. The article is written for programmers who use C++, but it may also be useful for anyone who faces porting applications to other platforms.

One should clearly understand that the new class of errors which appears while writing 64-bit programs is not just a few more incorrect constructions among thousands of others. These are inevitable difficulties that the developers of any actively developed program will face. This article will help you to be ready for these difficulties and will show ways to overcome them. Besides its advantages, any new technology -- in programming as well as in other spheres -- carries some limitations and even usage problems. The same situation can be found in the sphere of developing 64-bit software. We all know that 64-bit software is the next step in the development of information technology. But in reality, only a few programmers have faced the nuances of this sphere and of developing 64-bit programs in particular.

We won't dwell on the advantages that the 64-bit architecture opens up for programmers. There are many publications devoted to this theme and the reader can find them easily. The aim of this article is rather to examine thoroughly the problems that the developer of 64-bit programs can face. In the article, you will learn about:

  • typical errors of programming, which occur on 64-bit systems;
  • the reasons for the appearance of these errors and corresponding examples;
  • methods of correcting the listed errors;
  • a review of the methods and means of searching for errors in 64-bit programs.

The given information will allow you:

  • to learn the differences between 32-bit and 64-bit systems;
  • to avoid errors while writing the code for 64-bit systems;
  • to speed up the migration of a 32-bit application to the 64-bit architecture by significantly reducing debugging and testing time;
  • to forecast the time needed to port the code to the 64-bit system more accurately.

There are many examples given in the article that you should try in your programming environment for better understanding. By working through them, you will gain more than just a set of separate facts: you will open the door into the world of 64-bit systems. To make the following text easier to understand, let's review some types which we will encounter. See table N1.

All of the types listed below are 32 bits wide on a 32-bit system and 64 bits wide on a 64-bit system.

  • ptrdiff_t: signed integer type that results from the subtraction of two pointers. It is used to store sizes; sometimes it is also used as the result of a function that returns a size or -1 when an error occurs.
  • size_t: unsigned integer type, the result of the sizeof() operator. It is used to store the size or the count of objects.
  • intptr_t, uintptr_t, SIZE_T, SSIZE_T, INT_PTR, DWORD_PTR, etc.: integer types able to hold a pointer value.
  • time_t: time in seconds.

Table N1. Description of some integer types.

We'll use the term "memsize" type in this text. By this we mean any simple integer type that is able to hold a pointer and that changes its size when the platform capacity changes from 32 bits to 64 bits. Examples of memsize types are: size_t, ptrdiff_t, all pointers, intptr_t, INT_PTR and DWORD_PTR. We should also say a few words about the data models that determine the sizes of fundamental types on different systems. Table N2 contains the data models which may interest us.

  Type        ILP32   LP64   LLP64   ILP64
  char           8      8       8       8
  short         16     16      16      16
  int           32     32      32      64
  long          32     64      32      64
  long long     64     64      64      64
  size_t        32     64      64      64
  pointer       32     64      64      64

  Table N2. 32-bit and 64-bit data models.

By default, in this article we'll assume that the program is being ported from a system with the ILP32 data model to systems with the LP64 or LLP64 data model. Note that the 64-bit model in Linux (LP64) differs from that in Windows (LLP64) only in the size of the long type. Since this is their only difference, we'll avoid using the long and unsigned long types below and use the ptrdiff_t and size_t types instead. Let's now look at the kinds of errors that occur while porting programs to the 64-bit architecture.

Disabled warnings

All books devoted to the development of quality code recommend setting the level of warnings shown by the compiler as high as possible. However, in practice there are situations when, for some project parts, a lower diagnostic level is set or diagnostics are switched off entirely. Usually this is very old code that is supported but not modified. Programmers who work on the project are used to this code working and do not think about its quality. Hence there is a danger of missing serious compiler warnings while porting the program to the new 64-bit system.

While porting an application, you should always enable the warnings that help to check the compatibility of the code for the whole project and analyze them thoroughly. This can save a lot of time while debugging the project on the new architecture. If we don't do this, the simplest and most trivial errors will occur in all their variety. Here is a simple example of an overflow that occurs in a 64-bit program if warnings are ignored completely.

unsigned char *array[50];
unsigned char size = sizeof(array);
// 32-bit system: sizeof(array) = 200
// 64-bit system: sizeof(array) = 400

Use of functions with a variable number of arguments

The typical example is the incorrect use of the printf, scanf functions and their variants:

1) const char *invalidFormat = "%u";
   size_t value = SIZE_MAX;
   printf(invalidFormat, value);

2) char buf[9];
   sprintf(buf, "%p", pointer);

In the first case, it is not taken into account that the size_t type is not equivalent to the unsigned type on the 64-bit platform. It will cause an incorrect result to be printed if value > UINT_MAX. In the second case, the author of the code didn't take into account that the pointer size may become larger than 32 bits in the future. As a result, this code will cause a buffer overflow on the 64-bit architecture.

The incorrect use of functions with a variable number of arguments is a typical error on all architectures, not only 64-bit ones. This is related to the fundamental danger of using these C++ language constructions. The common practice is to give them up and use safe programming methods. We strongly recommend that you modify the code and use safe methods. For example, you may replace printf with cout, and sprintf with boost::format or std::stringstream. If you have to support code that uses functions of the sscanf kind, you can use special macros in the format strings that expand into the necessary modifiers for different systems. Here's an example:

// PR_SIZET on Win64 = "I"
// PR_SIZET on Win32 = ""
// PR_SIZET on Linux64 = "l"
// ...
size_t u;
scanf("%" PR_SIZET "u", &u);
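As for the replacement of printf and sprintf mentioned above, here is a minimal sketch (the variable names are ours, chosen only for illustration) of how the same values can be printed and formatted through iostreams, which pick the proper overload for size_t on any platform:

#include <iostream>
#include <sstream>
#include <string>

int main()
{
    size_t value = size_t(-1);

    // The stream operators are overloaded for size_t itself,
    // so there is no format specifier that could become wrong.
    std::cout << value << std::endl;

    std::stringstream stream;
    stream << "value = " << value;
    std::string text = stream.str();
    std::cout << text << std::endl;
    return 0;
}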

Magic numbers

In low-quality code there are often "magic numbers", the mere presence of which is dangerous. During the migration of the code to the 64-bit platform, these magic numbers may break the code if they participate in the calculation of addresses, object sizes, or bit operations. Table N3 contains the basic magic numbers that may affect whether an application works on the new platform.

  Value       Description
  4           The number of bytes in a pointer
  32          The number of bits in a pointer
  0x7fffffff  The maximum value of a 32-bit signed variable; a mask for zeroing the high bit of a 32-bit type
  0x80000000  The minimum value of a 32-bit signed variable; a mask for selecting the high bit of a 32-bit type
  0xffffffff  The maximum value of a 32-bit unsigned variable; an alternative way of writing -1 as an error sign

  Table N3. Basic magic numbers that can be dangerous when porting applications from a 32-bit platform to a 64-bit one.

You should study the code thoroughly in search of magic numbers and replace them with safe numbers and expressions. To do so, you can use the sizeof() operator, special values from <limits.h>, <inttypes.h>, etc. Let's look at some errors related to the use of magic numbers. The most frequent one is writing type sizes as numeric literals.

1) size_t ArraySize = N * 4;
   intptr_t *Array = (intptr_t *)malloc(ArraySize);

2) size_t values[ARRAY_SIZE];
   memset(values, 0, ARRAY_SIZE * 4);

3) size_t n, newexp;
   n = n >> (32 - newexp);

In all these cases it is assumed that the size of the types used is always 4 bytes. The correction is to use the sizeof() operator.

1) size_t ArraySize = N * sizeof(intptr_t);
   intptr_t *Array = (intptr_t *)malloc(ArraySize);

2) size_t values[ARRAY_SIZE];
   memset(values, 0, ARRAY_SIZE * sizeof(size_t));

or

   memset(values, 0, sizeof(values)); //preferred alternative

3) size_t n, newexp;
   n = n >> (CHAR_BIT * sizeof(n) - newexp);

Sometimes we do need a specific number. As an example, let's take a size_t value in which all bits except the 4 low ones must be set to one. In a 32-bit program, this number may be declared in the following way.

// constant '1111..110000'
const size_t M = 0xFFFFFFF0u;

This code is incorrect on a 64-bit system. Such errors are very unpleasant because magic numbers can be written in many different ways and searching for them is very laborious. Unfortunately, there is no way other than to find and correct this code using #ifdef or a special macro.

#ifdef _WIN64
    #define CONST3264(a) (a##i64)
#else
    #define CONST3264(a) (a)
#endif

const size_t M = ~CONST3264(0xFu);

Sometimes the value -1 is used as an error code or other special marker, and it is written as "0xffffffff". On the 64-bit platform the recorded expression is incorrect, and we should explicitly use the value -1. Here is an example of incorrect code that uses the 0xffffffff value as an error sign:

#define INVALID_RESULT (0xFFFFFFFFu)

size_t MyStrLen(const char *str) {
    if (str == NULL)
        return INVALID_RESULT;
    ...
    return n;
}

size_t len = MyStrLen(str);
if (len == (size_t)(-1))
    ShowError();

To be on the safe side, let's make sure that you clearly know what the "(size_t)(-1)" value is on the 64-bit platform. You may make a mistake and say 0x00000000FFFFFFFFu. According to C++ rules, the -1 value is first converted to a signed type of larger capacity and then to an unsigned value:

int a = -1;           // 0xFFFFFFFFi32
ptrdiff_t b = a;      // 0xFFFFFFFFFFFFFFFFi64
size_t c = size_t(b); // 0xFFFFFFFFFFFFFFFFui64

Thus, "(size_t)(-1)" on the 64-bit architecture is represented by the value 0xFFFFFFFFFFFFFFFFui64, which is the largest value of the 64-bit size_t type. Let's return to the error with INVALID_RESULT. The use of the number 0xFFFFFFFFu causes the "len == (size_t)(-1)" condition to fail in a 64-bit program. The best solution is to change the code in such a way that it doesn't need special marker values. If you cannot give them up for some reason or consider a fundamental correction of the code unreasonable, just use the plain value -1.

#define INVALID_RESULT (size_t(-1))
...

Storing of integers in double type

The double type, as a rule, is 64 bits in size and complies with the IEEE-754 standard on 32-bit and 64-bit systems. Some programmers use the double type to store and work with integer values:

size_t a = size_t(-1);
double b = a;
--a;
--b;
size_t c = b; // x86: a == c
              // x64: a != c

The given example can be justified on a 32-bit system, for the double type has 52 significant bits and is capable of storing a 32-bit integer value without loss. However, while trying to store a 64-bit integer in a double, the exact value can be lost. See picture 1.

Picture 1. The number of significant bits in size_t and double types.

It is possible that an approximate value can be used in your program, but to be on the safe side we'd like to warn you about the possible effects on the new architecture. In any case, it is not recommended to mix integer arithmetic with floating-point arithmetic.

Bit shifting operations

Bit shifting operations can cause a lot of trouble during the port from a 32-bit system to a 64-bit one if used inattentively. Let's begin with an example of a function that sets the specified bit to 1 in a variable of memsize type:

ptrdiff_t SetBitN(ptrdiff_t value, unsigned bitNum) {
    ptrdiff_t mask = 1 << bitNum;
    return value | mask;
}

The given code works only on the 32-bit architecture and allows setting bits numbered 0 to 31. After the program is ported to the 64-bit platform, it becomes necessary to set bits 0 to 63. What value do you think the call SetBitN(0, 32) will return? If you think 0x100000000, the authors are glad, because we haven't prepared this article in vain. You'll get 0. Pay attention that "1" has int type, and an overflow will occur during the shift, as shown in picture 2.

Picture 2. Calculation of the mask value.

To correct the code, it is necessary to make the constant "1" of the same type as the variable mask.

ptrdiff_t mask = ptrdiff_t(1) << bitNum;

or

ptrdiff_t mask = CONST3264(1) << bitNum;

One more question. What will the call of the uncorrected function SetBitN(0, 31) return? The right answer is 0xffffffff80000000. The result of the 1 << 31 expression is the negative number -2147483648, which is represented in a 64-bit integer variable as 0xffffffff80000000. You should keep in mind and take into account the effects of shifting values of different types. To help you better understand this, table N4 contains some interesting expressions with shifts on a 64-bit system.

  Expression                     Result (Dec)   Result (Hex)
  ptrdiff_t Result;
  Result = 1 << 31;              -2147483648    0xffffffff80000000
  Result = ptrdiff_t(1) << 31;    2147483648    0x0000000080000000
  Result = 1U << 31;              2147483648    0x0000000080000000
  Result = 1 << 32;                        0    0x0000000000000000
  Result = ptrdiff_t(1) << 32;    4294967296    0x0000000100000000

  Table N4. Expressions with shifts and their results on a 64-bit system.

Storing of pointer addresses

A large number of errors during migration to 64-bit systems are related to the change of the pointer size relative to the size of ordinary integers. In an environment with the ILP32 data model, ordinary integers and pointers have the same size. Unfortunately, 32-bit code relies on this assumption everywhere: pointers are often cast to int, unsigned int and other types that are unsuitable for address calculations.

You should understand that only memsize types should be used to store pointer values in integer form. Preference should be given to the uintptr_t type, for it expresses the intention more clearly and makes the code more portable, protecting it from changes in the future. Let's look at two small examples:

1) char *p;
   p = (char *) ((int)p & PAGEOFFSET);

2) DWORD tmp = (DWORD)malloc(ArraySize);
   ...
   int *ptr = (int *)tmp;

Neither example takes into account that the pointer size may differ from 32 bits. They use explicit type conversions that truncate the high bits of the pointer, and this is surely a mistake on a 64-bit system. Here are the corrected variants that use the integer memsize types intptr_t and DWORD_PTR to store pointer addresses:

1) char *p;
   p = (char *) ((intptr_t)p & PAGEOFFSET);

2) DWORD_PTR tmp = (DWORD_PTR)malloc(ArraySize);
   ...
   int *ptr = (int *)tmp;

The danger of the two examples studied is that the failure of the program may be discovered much later. The program may work absolutely correctly with small amounts of data on a 64-bit system, as long as the truncated addresses lie within the first 4 Gb of memory. Later, when the program is launched for real production tasks, memory will be allocated outside the first 4 Gb, and the code given in the examples will cause undefined behavior while processing a pointer to an object outside the first 4 Gb. The following code, in contrast, won't hide and will show up on the first execution:

void GetBufferAddr(void **retPtr) {
    ...
    // Access violation on 64-bit system
    *retPtr = p;
}

unsigned bufAddress;
GetBufferAddr((void **)&bufAddress);

The correction is also in the choice of the type able to store the pointer:

uintptr_t bufAddress;
GetBufferAddr((void **)&bufAddress); //OK

There are situations when storing a pointer address in a 32-bit type is simply necessary. Mostly, such situations appear when it is necessary to work with old API functions. For such cases, one should resort to special functions such as LongToIntPtr, PtrToUlong, etc. In the end, we'd like to mention that it is bad style to store a pointer address in types that are always exactly 64 bits wide. The code shown below will have to be corrected again when 128-bit systems appear.

PVOID p;
// Bad style. The 128-bit time will come.
__int64 n = __int64(p);
p = PVOID(n);

Memsize types in unions

The peculiarity of a union is that the same memory area is allocated for all its members; that is, they overlap. Although this memory area can be accessed through any of the members, the member used for a particular purpose should be chosen so that the result is meaningful. One should pay attention to unions that contain pointers and other members of memsize type.

When there is a necessity to work with a pointer as an integer, sometimes it is convenient to use the union as it is shown in the example and work with the number form of the type without using explicit conversions.

union PtrNumUnion {
    char *m_p;
    unsigned m_n;
} u;

u.m_p = str;
u.m_n += delta;

This code is correct on 32-bit systems and incorrect on 64-bit ones. When we modify the member m_n on a 64-bit system, we work with only a part of m_p. We should use a type that corresponds to the pointer size.

union PtrNumUnion {
    char *m_p;
    size_t m_n; //type fixed
} u;

Another frequent use of a union is to represent one member as a set of smaller ones. For example, we may need to split a size_t value into bytes to implement a table-based algorithm that counts the number of zero bits in a byte.

union SizetToBytesUnion {
    size_t value;
    struct {
        unsigned char b0, b1, b2, b3;
    } bytes;
};

SizetToBytesUnion u;
u.value = value;
size_t zeroBitsN = TranslateTable[u.bytes.b0] +
                   TranslateTable[u.bytes.b1] +
                   TranslateTable[u.bytes.b2] +
                   TranslateTable[u.bytes.b3];

Here, the assumption that the size_t type consists of 4 bytes is a fundamental algorithmic error. An automatic search for such algorithmic errors is hardly possible, but we can search for all unions and check whether they contain memsize types. Having found such a union, we may discover an algorithmic error and rewrite the code in the following way:

union SizetToBytesUnion {
    size_t value;
    unsigned char bytes[sizeof(value)];
};

SizetToBytesUnion u;
u.value = value;
size_t zeroBitsN = 0;
for (size_t i = 0; i != sizeof(u.bytes); ++i)
    zeroBitsN += TranslateTable[u.bytes[i]];

Change of an array type

In programs, sometimes it is necessary -- or just convenient -- to present array items in the form of elements of a different type. Dangerous and safe type conversions are shown in the following code:

int array[4] = { 1, 2, 3, 4 };
enum ENumbers { ZERO, ONE, TWO, THREE, FOUR };

//safe cast (for MSVC2005)
ENumbers *enumPtr = (ENumbers *)(array);
cout << enumPtr[1] << " ";

//unsafe cast
size_t *sizetPtr = (size_t *)(array);
cout << sizetPtr[1] << endl;

//Output on 32-bit system: 2 2
//Output on 64-bit system: 2 17179869187

As you can see, the program output differs between the 32-bit and 64-bit variants. On the 32-bit system, access to the array items is performed correctly because the sizes of size_t and int coincide, and we see the output "2 2". On the 64-bit system we got "2 17179869187" in the output because it is the value 17179869187 that is located in item 1 of the sizetPtr array. See picture 3. In some cases this very behavior is what we need, but usually it is an error.

Picture 3. Arrangement of array items in memory.

The correction of the described situation is to give up dangerous type conversions by modernizing the program. Another variant is to create a new array and copy the values of the original one into it, as in the sketch below.
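For instance, a minimal sketch of the second variant applied to the example above might look like this (we assume the values simply need to be available as size_t items):

int array[4] = { 1, 2, 3, 4 };
size_t sizetArray[4];

// Copy and convert every item explicitly instead of reinterpreting
// the memory of the int array as an array of size_t.
for (size_t i = 0; i != 4; ++i)
    sizetArray[i] = static_cast<size_t>(array[i]);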

Virtual functions with arguments of memsize type

If your program has large class hierarchies with virtual functions, there is a risk of inattentively using arguments of different types that happen to coincide on a 32-bit system. For example, in the base class you use the size_t type as an argument of a virtual function, and in the derived class, the unsigned type. So, this code will be incorrect on a 64-bit system. An error like this doesn't necessarily hide in large class hierarchies; here is one example:

class CWinApp {
    ...
    virtual void WinHelp(DWORD_PTR dwData, UINT nCmd);
};

class CSampleApp : public CWinApp {
    ...
    virtual void WinHelp(DWORD dwData, UINT nCmd);
};

Let's follow the development life-cycle of such an application. Imagine that it was first developed for Microsoft Visual C++ 6.0, when the WinHelp function in the CWinApp class had the following prototype:

virtual void WinHelp(DWORD dwData, UINT nCmd = HELP_CONTEXT);

It was absolutely correct to override the virtual function in the CSampleApp class, as shown in the example. Then the project was ported to Microsoft Visual C++ 2005, where the prototype of the function in the CWinApp class had undergone a change: the DWORD type was replaced with the DWORD_PTR type. On the 32-bit system the program will still work correctly because there the DWORD and DWORD_PTR types coincide. Trouble appears when this code is compiled for the 64-bit platform: we get two functions with the same name but different parameters, and as a result the user's code won't be executed. The correction is to use the same types in the corresponding virtual functions:

class CSampleApp : public CWinApp {
    ...
    virtual void WinHelp(DWORD_PTR dwData, UINT nCmd);
};

Serialization and data exchange

An important point during the port of a software solution to a new platform is keeping compatibility with the existing data exchange protocols. It is necessary to be able to read existing project formats, to exchange data between 32-bit and 64-bit processes, etc. Mostly, errors of this kind consist of serializing memsize types and in data exchange operations that use them.

1) size_t PixelCount;
   fread(&PixelCount, sizeof(PixelCount), 1, inFile);

2) __int32 value_1;
   SSIZE_T value_2;
   inputStream >> value_1 >> value_2;

3) time_t time;
   PackToBuffer(MemoryBuf, &time, sizeof(time));

In all the given examples, there are errors of two kinds: the use of types of varying size in binary interfaces and ignoring the byte order.

Using types of varying size

It is unacceptable to use types that change their size depending on the development environment in binary data exchange interfaces. In the C++ language, most types do not have fixed sizes, and consequently they cannot all be used for these purposes. That's why the developers of development tools, and programmers themselves, create data types that have an exact size, such as __int8, __int16, INT32, word64, etc.

The use of such types provides data portability between programs on different platforms, although it requires using these additional types. The three examples shown are written inaccurately, and this will show up when the capacity of some data types changes from 32 bits to 64 bits. Taking into account the necessity of supporting old data formats, the correction may look as follows:

1) size_t PixelCount;
   __uint32 tmp;
   fread(&tmp, sizeof(tmp), 1, inFile);
   PixelCount = static_cast<size_t>(tmp);

2) __int32 value_1;
   __int32 value_2;
   inputStream >> value_1 >> value_2;

3) time_t time;
   __uint32 tmp = static_cast<__uint32>(time);
   PackToBuffer(MemoryBuf, &tmp, sizeof(tmp));

However, this variant of the correction may not be the best one. During the port to the 64-bit system, the program may process large amounts of data, and the use of 32-bit types in the data may become a serious obstacle. In this case, we may leave the old code for compatibility with the old data format, having corrected the incorrect types, and implement a new binary data format that takes the mistakes made into account. One more variant is to give up binary formats and use text or other formats provided by various libraries.

Ignoring of the byte order

Even after the correction of volatile type sizes, you may face the incompatibility of binary formats. The reason is a different data presentation. Most frequently, it is related to a different byte order.

The byte order is the way the bytes of multibyte numbers are stored. See picture 4. The little-endian order means that recording begins with the lowest byte and ends with the highest one; this order is used in the memory of PCs with x86 processors. In the big-endian order, recording begins with the highest byte and ends with the lowest one. This order is the standard for TCP/IP protocols, which is why the big-endian byte order is often called the network byte order. It is used by processors such as the Motorola 68000 and SPARC.

Picture 4. Byte order in a 64-bit type on little-endian and big-endian systems.

While developing a binary interface or data format, you should remember the byte order. If the 64-bit system to which you are porting a 32-bit application has a different byte order, you'll just have to take it into account in your code. For conversion between the big-endian and little-endian byte orders, you may use functions such as htonl(), htons(), bswap_64, etc.
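If the standard functions are not available for 64-bit values in your environment, a byte swap can also be written by hand. The following is only a sketch; many compilers provide intrinsics such as _byteswap_uint64 or __builtin_bswap64 that should be preferred:

#include <stdint.h>

// Reverse the byte order of a 64-bit value.
uint64_t SwapBytes64(uint64_t value)
{
    uint64_t result = 0;
    for (int i = 0; i < 8; ++i)
    {
        result = (result << 8) | (value & 0xFF);
        value >>= 8;
    }
    return result;
}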

Bit fields

If you use bit fields, you should keep in mind that the use of memsize types changes the sizes and alignment of structures. For example, the structure shown below has a size of 4 bytes on a 32-bit system and 8 bytes on a 64-bit one.

struct MyStruct {
    size_t r : 5;
};

However, that is not all we should say about bit fields. Let's take a subtle example:

struct BitFieldStruct {
    unsigned short a:15;
    unsigned short b:13;
};

BitFieldStruct obj;
obj.a = 0x4000;
size_t addr = obj.a << 17; //Sign Extension
printf("addr 0x%Ix\n", addr);

//Output on 32-bit system: 0x80000000
//Output on 64-bit system: 0xffffffff80000000

Pay attention that if you compile the example for a 64-bit system, there is a sign extension in the "addr = obj.a << 17;" expression, in spite of the fact that both variables addr and obj.a are unsigned. This sign extension is caused by the type conversion rules, which apply in the following way (see also picture 5):

  • The obj.a member is converted from a 15-bit field of unsigned short type into int. We get int and not unsigned int because a 15-bit field fits into a 32-bit signed integer.
  • The "obj.a << 17" expression has int type, but it is converted into ptrdiff_t and then into size_t before being assigned to the variable addr. The sign extension occurs during the conversion from int into ptrdiff_t.

Picture 5. Calculation of the "addr = obj.a << 17" expression on different systems.

So, be attentive while working with bit fields. To avoid the described effect in our example, we can simply use an explicit conversion of obj.a to the size_t type.

...
size_t addr = size_t(obj.a) << 17;
printf("addr 0x%Ix\n", addr);

//Output on 32-bit system: 0x80000000
//Output on 64-bit system: 0x80000000

Pointer address arithmetic

The first example:

unsigned short a16, b16, c16;
char *pointer;
...
pointer += a16 * b16 * c16;

This example works correctly with pointers if the value of the "a16 * b16 * c16" expression does not exceed UINT_MAX (4 Gb). Such code could always work correctly on the 32-bit platform, where the program never allocated arrays of larger sizes. On the 64-bit architecture, the size of the array exceeds UINT_MAX items. Suppose we would like to shift the pointer by 6,000,000,000 bytes, so the variables a16, b16 and c16 have the values 3000, 2000 and 1000 respectively. While calculating the "a16 * b16 * c16" expression, all of the variables will be converted to int type according to C++ rules, and only then will they be multiplied. During the multiplication an overflow will occur. The incorrect result of the expression will be extended to ptrdiff_t type, and the calculation of the pointer will be incorrect.

One should take care to avoid possible overflows in pointer arithmetic. For this purpose, it's better to use memsize types or explicit type conversions in expressions that involve pointers. Using explicit type conversions, we can rewrite the code in the following way:

short a16, b16, c16;
char *pointer;
...
pointer += static_cast<ptrdiff_t>(a16) *
           static_cast<ptrdiff_t>(b16) *
           static_cast<ptrdiff_t>(c16);

If you think that only inaccurate programs working with large amounts of data face these troubles, we have to disappoint you. Let's look at some interesting code that works with an array containing only 5 items. This second example works in the 32-bit variant and does not work in the 64-bit one.

int A = -2;
unsigned B = 1;
int array[5] = { 1, 2, 3, 4, 5 };
int *ptr = array + 3;
ptr = ptr + (A + B); //Invalid pointer value on 64-bit platform
printf("%i\n", *ptr); //Access violation on 64-bit platform

Let's follow how the calculation of the "ptr + (A + B)" expression proceeds:

  • According to C++ rules, variable A of int type is converted to unsigned type.
  • Addition of A and B occurs. The result we get is value 0xFFFFFFFF of unsigned type.

Then the calculation of "ptr + 0xFFFFFFFFu" takes place, and its result depends on the pointer size on the particular architecture. If the addition takes place in a 32-bit program, the given expression is equivalent to "ptr - 1" and we successfully print the number 3. In a 64-bit program, the 0xFFFFFFFFu value is added to the pointer as is, and as a result the pointer ends up outside the bounds of the array. While trying to access the item through this pointer, we'll face trouble. To avoid this situation, as well as the first one, we advise you to use only memsize types in pointer arithmetic. Here are two variants of the code correction:

ptr = ptr + (ptrdiff_t(A) + ptrdiff_t(B));

ptrdiff_t A = -2;
size_t B = 1;
...
ptr = ptr + (A + B);

You may object and offer the following variant of the correction:

int A = -2;
int B = 1;
...
ptr = ptr + (A + B);

Yes, this code will work, but it is bad for the following reasons:

  1. It will teach you to work inaccurately with pointers. After a while, you may forget the nuances and make a mistake by declaring one of the variables with an unsigned type.
  2. The use of non-memsize types together with pointers is potentially dangerous. Suppose a variable Delta of int type participates in an expression with a pointer. This expression is absolutely correct. However, an error may hide in the calculation of the variable Delta itself, because 32 bits may not be enough to make the necessary calculations while working with large data arrays. The use of a memsize type for the variable Delta eliminates the danger automatically.

Array indexing

We describe this kind of error separately for better structure, because indexing arrays with square brackets is just another notation for the address arithmetic discussed above. Programming in C, and later in C++, formed the practice of using variables of int/unsigned types in constructions of the following kind:

unsigned Index = 0;
while (MyBigNumberField[Index] != id)
    Index++;

However, time passes and everything changes. Now it is high time to say: "Do not do this anymore!" Use memsize types for indexing large arrays. The given code won't process an array containing more than UINT_MAX items in a 64-bit program: after accessing the item with index UINT_MAX, the variable Index overflows and we get an infinite loop.
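A minimal corrected variant of the loop above simply declares the index with a memsize type:

size_t Index = 0; // memsize type instead of unsigned
while (MyBigNumberField[Index] != id)
    Index++;

To persuade you entirely of the necessity of using only memsize types for indexing and in address arithmetic expressions, here is one last example: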

class Region {
    float *array;
    int Width, Height, Depth;
    float GetCell(int x, int y, int z) const;
    ...
};

float Region::GetCell(int x, int y, int z) const {
    return array[x + y * Width + z * Width * Height];
}

The given code is taken from a real mathematical simulation program in which the amount of RAM is important, and the possibility of using more than 4 Gb of memory on the 64-bit architecture greatly improves the calculation speed. In programs of this class, one-dimensional arrays are often used to save memory, while being treated as three-dimensional arrays. For this purpose, there are functions like GetCell that provide access to the necessary items. The given code works correctly only with arrays containing fewer than INT_MAX items, because the 32-bit int type is used to calculate the item index. Programmers often make a mistake trying to correct the code in the following way:

float Region::GetCell(int x, int y, int z) const {
    return array[static_cast<ptrdiff_t>(x) + y * Width +
                 z * Width * Height];
}

They know that, according to C++ rules, the expression for calculating the index will have ptrdiff_t type, and they hope to avoid the overflow with its help. However, the overflow may occur inside the sub-expression "y * Width" or "z * Width * Height", because the int type is still used to calculate them. If you want to correct the code without changing the types of the variables participating in the expression, you may explicitly convert every variable to a memsize type:

float Region::GetCell(int x, int y, int z) const {
    return array[ptrdiff_t(x) +
                 ptrdiff_t(y) * ptrdiff_t(Width) +
                 ptrdiff_t(z) * ptrdiff_t(Width) *
                 ptrdiff_t(Height)];
}

Another solution is to replace types of variables with the memsize type:

typedef ptrdiff_t TCoord;

class Region {
    float *array;
    TCoord Width, Height, Depth;
    float GetCell(TCoord x, TCoord y, TCoord z) const;
    ...
};

float Region::GetCell(TCoord x, TCoord y, TCoord z) const {
    return array[x + y * Width + z * Width * Height];
}

Mixed use of simple integer types and memsize types

Mixed use of memsize and non-memsize types in expressions may cause incorrect results on 64-bit systems, related to the change in the range of the input values. Let's study some examples:

size_t Count = BigValue;
for (unsigned Index = 0; Index != Count; ++Index)
{ ... }

This is an example of an infinite loop occurring if Count > UINT_MAX. Suppose that on 32-bit systems this code was used with fewer than UINT_MAX iterations. A 64-bit variant of the program may process more data, and it may require more iterations. Since the values of the variable Index always lie in the range 0 to UINT_MAX, the "Index != Count" condition never becomes false, which causes the infinite loop. Another frequent error is writing expressions of the following kind:

int x, y, z;
intptr_t SizeValue = x * y * z;

Similar examples were discussed earlier: during the calculation of values with non-memsize types, an arithmetic overflow occurs and the final result is incorrect. Searching for and correcting such code is made harder because, as a rule, compilers do not show any warnings about it. From the point of view of the C++ language, it is an absolutely correct construction: several variables of int type are multiplied, after which the result is implicitly converted to intptr_t type and assigned. Here is a small code example that shows the danger of inaccurate expressions with mixed types. The results were obtained with Microsoft Visual C++ 2005 in 64-bit compilation mode.

int x = 100000;
int y = 100000;
int z = 100000;
intptr_t size = 1;                  // Result:
intptr_t v1 = x * y * z;            // -1530494976
intptr_t v2 = intptr_t(x) * y * z;  // 1000000000000000
intptr_t v3 = x * y * intptr_t(z);  // 141006540800000
intptr_t v4 = size * x * y * z;     // 1000000000000000
intptr_t v5 = x * y * z * size;     // -1530494976
intptr_t v6 = size * (x * y * z);   // -1530494976
intptr_t v7 = size * (x * y) * z;   // 141006540800000
intptr_t v8 = ((size * x) * y) * z; // 1000000000000000
intptr_t v9 = size * (x * (y * z)); // -1530494976

It is necessary that all operands in such expressions be converted to the type of larger capacity in time. Remember that expressions of the kind...

intptr_t v2 = intptr_t(x) * y * z;

...do not guarantee the right result. They guarantee only that the "intptr_t(x) * y * z" expression will have intptr_t type. The right result shown by this expression in the example is simply good luck resulting from a particular compiler version and mere chance.

The calculation order of an expression with operators of the same priority is not defined. To be more exact, the compiler may calculate sub-expressions in whatever order it considers most efficient, even if the sub-expressions cause side effects; the order in which side effects appear is not defined. Expressions involving commutative and associative operations -- such as *, +, &, |, ^ -- may be regrouped freely, even if there are parentheses. To impose a strict calculation order on an expression, explicit temporary variables must be used. That's why, if the result of an expression should be of a memsize type, only memsize types must participate in the expression. The right variant is:

intptr_t v2 = intptr_t(x) * intptr_t(y) * intptr_t(z); // OK!

Note that if you have a lot of integer calculations and control over overflows is important to you, pay attention to the SafeInt class; its implementation and description can be found on MSDN.
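Below is a rough sketch of what such overflow control may look like. We assume the SafeInt library shipped with recent Visual C++ versions (<safeint.h>, namespace msl::utilities); the header and namespace may differ in other distributions of the class:

#include <safeint.h>
using namespace msl::utilities;

void Example()
{
    try
    {
        SafeInt<int> x(100000);
        // Every operation is checked: the int overflow that silently
        // produced -1530494976 above now raises an exception instead.
        SafeInt<int> result = x * 100000 * 100000;
    }
    catch (SafeIntException &)
    {
        // Handle the overflow instead of continuing with a wrong value.
    }
}

Mixed use of types may also show up as a change in the program logic: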

ptrdiff_t val_1 = -1;
unsigned int val_2 = 1;
if (val_1 > val_2)
    printf ("val_1 is greater than val_2\n");
else
    printf ("val_1 is not greater than val_2\n");

//Output on 32-bit system: "val_1 is greater than val_2"
//Output on 64-bit system: "val_1 is not greater than val_2"

On the 32-bit system, the variable val_1, according to C++ rules, was extended to unsigned int and became the value 0xFFFFFFFFu; as a result, the "0xFFFFFFFFu > 1" condition held. On the 64-bit system, it's the other way around: the variable val_2 is extended to ptrdiff_t type, and the expression "-1 > 1" is checked. The changes that occur are shown in picture 6.

Picture 6. Changes occurring in the expression "val_1 > val_2".

If you need to return the previous behavior, you should change the variable val_2 type.

ptrdiff_t val_1 = -1;
size_t val_2 = 1;
if (val_1 > val_2)
    printf ("val_1 is greater than val_2\n");
else
    printf ("val_1 is not greater than val_2\n");

Implicit type conversions while using functions

Observing the previous kinds of errors related to mixing simple integer types and memsize types, we surveyed only simple expressions. Similar problems may occur while using other C++ constructions, too.

extern int Width, Height, Depth;

size_t GetIndex(int x, int y, int z) {
    return x + y * Width + z * Width * Height;
}
...
MyArray[GetIndex(x, y, z)] = 0.0f;

If you work with large arrays -- i.e. more than INT_MAX items -- the given code may behave incorrectly, and we will address items of the MyArray array other than the ones we intended. In spite of the fact that we return a value of size_t type, the "x + y * Width + z * Width * Height" expression is calculated using the int type. We suppose you have already guessed that the corrected code will look as follows:

extern int Width, Height, Depth;

size_t GetIndex(int x, int y, int z) {
    return (size_t)(x) +
           (size_t)(y) * (size_t)(Width) +
           (size_t)(z) * (size_t)(Width) * (size_t)(Height);
}

In the next example, a memsize type (a pointer) is mixed with a simple unsigned type:

extern char *begin, *end;

unsigned GetSize() {
    return end - begin;
}

The result of the "end - begin" expression has the ptrdiff_t type. Since the function returns the unsigned type, an implicit type conversion occurs during which the high bits of the result are lost. Thus, if the pointers begin and end refer to the beginning and the end of an array larger than UINT_MAX (4 Gb), the function will return an incorrect value.
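A minimal correction is to return a memsize type for the size (here we assume the difference is known to be non-negative):

extern char *begin, *end;

size_t GetSize() {
    // The ptrdiff_t result of "end - begin" is no longer truncated
    // to a 32-bit unsigned value.
    return end - begin;
}

Here is one more example; this time the problem is not in the returned value but in a formal function argument: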

void foo(ptrdiff_t delta);

int i = -2;
unsigned k = 1;
foo(i + k);

Doesn't this code remind you of the example of incorrect pointer arithmetic discussed earlier? Yes, we find the same situation here. The incorrect result appears during the implicit conversion of the actual argument, which has the value 0xFFFFFFFF, from unsigned type to ptrdiff_t type.

Overloaded functions

During the port of 32-bit programs to the 64-bit platform, changes in the program logic may appear that are related to the use of overloaded functions. If a function is overloaded for 32-bit and 64-bit values, a call with an argument of memsize type will resolve to different overloads on different systems. This may be useful, as in the following code:

static size_t GetBitCount(const unsigned __int32 &) {
    return 32;
}

static size_t GetBitCount(const unsigned __int64 &) {
    return 64;
}

size_t a;
size_t bitCount = GetBitCount(a);

However, such a change of logic carries a potential danger. Imagine a program that uses a stack class able to store values of different types:

class MyStack {
    ...
public:
    void Push(__int32 &);
    void Push(__int64 &);
    void Pop(__int32 &);
    void Pop(__int64 &);
} stack;

ptrdiff_t value_1;
stack.Push(value_1);
...
int value_2;
stack.Pop(value_2);

A careless programmer pushed values of one type and then popped them as another (ptrdiff_t and int). On the 32-bit system their sizes coincided and everything worked perfectly. When the size of the ptrdiff_t type changes in the 64-bit program, the stack starts storing more bytes than are later extracted. We think you now understand this kind of error and will pay attention to calls of overloaded functions when passing actual arguments of memsize type.

Data alignment

Processors work more efficiently when they deal with properly aligned data. As a rule, a 32-bit data item must be aligned at a boundary that is a multiple of 4 bytes, and a 64-bit item at a multiple of 8 bytes. An attempt to work with unaligned data on IA-64 (Itanium) processors, as shown in the following example, causes an exception:

#pragma pack (1) // Also set by key /Zp in MSVC

struct AlignSample {
    unsigned size;
    void *pointer;
} object;

void foo(void *p) {
    object.pointer = p; // Alignment fault
}

If you have to work with unaligned data on Itanium, you should indicate this to the compiler. For example, you may use a special macro UNALIGNED:

#pragma pack (1) // Also set by key /Zp in MSVC

struct AlignSample {
    unsigned size;
    void *pointer;
} object;

void foo(void *p) {
    *(UNALIGNED void *)&object.pointer = p; //Very slow
}

This solution is not efficient because access to unaligned data is several times slower. A better result may be achieved by arranging 32-bit, 16-bit and 8-bit items within 64-bit data items. On the x64 architecture, accessing unaligned data does not cause an exception, but you should avoid it all the same: first, access to such data is significantly slower; second, there is a high probability of porting the program to the IA-64 platform in the future.
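As an illustration of such an arrangement, here is a sketch of a structure (the field names are hypothetical) whose members are ordered from the largest to the smallest, so that every member stays naturally aligned and no padding holes appear between them:

struct GoodLayout {
    void     *pointer;  // 8 bytes on a 64-bit system
    unsigned  size;     // 4 bytes
    short     id;       // 2 bytes
    char      flag;     // 1 byte
    // Only one byte of tail padding is needed to round the structure
    // size up to a multiple of 8.
};

Let's look at one more example of code that does not take data alignment into account: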

struct MyPointersArray {
    DWORD m_n;
    PVOID m_arr[1];
} object;
...
malloc( sizeof(DWORD) + 5 * sizeof(PVOID) );
...

If we want to allocate the amount of memory necessary to store an object of the MyPointersArray type containing 5 pointers, we should take into account that the beginning of the m_arr array will be aligned at an 8-byte boundary. The arrangement of the data in memory on different systems (Win32/Win64) is shown in picture 7:

Picture 7. Alignment of data in memory on Win32 and Win64 systems.

The correct calculation of the size should look as follows:

struct MyPointersArray {
    DWORD m_n;
    PVOID m_arr[1];
} object;
...
malloc( FIELD_OFFSET(struct MyPointersArray, m_arr) +
        5 * sizeof(PVOID) );
...

In this code, we take the offset of the last structure member and add to it the size of the data it must hold. The offset of a structure or class member can be obtained with the offsetof or FIELD_OFFSET macro. Always use these macros to get an offset within a structure, without relying on your knowledge of type sizes and alignment. Here is an example of code that correctly calculates the address of a structure member:

struct TFoo {
    DWORD_PTR whatever;
    int value;
} object;

int *valuePtr =
    (int *)((size_t)(&object) + offsetof(TFoo, value)); // OK

Exceptions

Throwing and catching exceptions using integer types is not good programming practice in C++. For such purposes, you should use more informative types, for example classes derived from std::exception. Sometimes, however, one has to work with lower-quality code, such as the following:

char *ptr1;
char *ptr2;
try {
    try {
        throw ptr2 - ptr1;
    }
    catch (int) {
        std::cout << "catch 1: on x86" << std::endl;
    }
}
catch (ptrdiff_t) {
    std::cout << "catch 2: on x64" << std::endl;
}

You should carefully avoid throwing or catching exceptions with the use of memsize types, as their changing size may alter the program logic. A possible correction of the given code is to replace "catch (int)" with "catch (ptrdiff_t)". A more proper correction is to use a dedicated class for passing information about the error that occurred.
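A rough sketch of that approach is shown below; the class name and the check are hypothetical and serve only to illustrate the idea:

#include <iostream>
#include <stdexcept>
#include <string>

// Deriving from std::exception (via std::runtime_error) makes the catch
// site independent of the size of any integer type.
class PointerRangeError : public std::runtime_error {
public:
    explicit PointerRangeError(const std::string &message)
        : std::runtime_error(message) {}
};

void CheckRange(const char *ptr1, const char *ptr2)
{
    if (ptr2 - ptr1 > 100)
        throw PointerRangeError("unexpected distance between pointers");
}

int main()
{
    char buffer[200];
    try {
        CheckRange(buffer, buffer + 150);
    }
    catch (const PointerRangeError &e) {
        std::cout << e.what() << std::endl; // same behavior on x86 and x64
    }
    return 0;
}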

The use of outdated functions and predefined constants

While developing a 64-bit application, be sure to bear in mind the changes in the environment in which it will run. Some functions become outdated, and it is necessary to replace them with new variants; GetWindowLong in the Windows operating system is a good example of such a function. Also pay attention to constants that refer to interaction with the environment in which the program runs; in Windows, strings containing "system32" or "Program Files" are suspect.
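For instance, the 64-bit-aware replacements GetWindowLongPtr/SetWindowLongPtr operate on LONG_PTR values, which are large enough to hold a pointer on Win64. A minimal sketch (the MyContext structure and the helper functions are hypothetical):

#include <windows.h>

struct MyContext { int value; };  // hypothetical per-window data

void StoreContext(HWND hwnd, MyContext *context)
{
    SetWindowLongPtr(hwnd, GWLP_USERDATA,
                     reinterpret_cast<LONG_PTR>(context));
}

MyContext *LoadContext(HWND hwnd)
{
    // GetWindowLongPtr returns LONG_PTR; the older GetWindowLong returns
    // a 32-bit LONG and would truncate the pointer on Win64.
    return reinterpret_cast<MyContext *>(
        GetWindowLongPtr(hwnd, GWLP_USERDATA));
}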

Explicit type conversions

Be careful with explicit type conversions: when types change their capacity, conversions may alter the logic of program execution or cause the loss of significant bits. It is difficult to give typical examples of errors related to explicit type conversions because they vary greatly and are specific to particular programs. You have already seen some errors related to explicit type conversions earlier in this article.

Error diagnosis

Diagnosing the errors that occur while porting 32-bit programs to 64-bit systems is a difficult task. Porting low-quality code written without taking the peculiarities of other architectures into account may demand a lot of time and effort. That's why we pay special attention to describing the methods and tools that may simplify this task.

Unit tests

Unit tests earned the well-deserved respect of programmers long ago. They help to check the correctness of a program after it is ported to a new platform. However, there is one nuance that you should keep in mind.

Unit tests may not allow you to check the new ranges of input values that become accessible on 64-bit systems. Unit tests were originally developed in such a way that they could be run in a short time: a function that usually works with an array of tens of megabytes will probably process tens of kilobytes in unit tests. It is justified for this function to be called in tests many times with different sets of input values. Suppose, however, that you have a 64-bit variant of the program and the function under study now processes more than 4 Gb of data. It becomes necessary to raise the input array size in the tests to more than 4 Gb, and the problem is that the test time will then increase greatly.

That's why, while modifying the test sets, you should keep in mind the compromise between the speed of running the unit tests and the completeness of the checks. One practical compromise is to separate the index and size calculations from the data itself and exercise them with 64-bit ranges; such a test runs almost instantly because no real buffers are allocated.
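A sketch of such a test is shown below (the function and the expected value are ours, for illustration only):

#include <assert.h>
#include <stddef.h>

// Index computation similar to the GetIndex example above; it can be
// exercised with 64-bit ranges without allocating any real buffers.
size_t GetIndex(size_t x, size_t y, size_t z, size_t width, size_t height)
{
    return x + y * width + z * width * height;
}

void TestGetIndex64()
{
    const size_t width = 100000, height = 100000;
    // The expected index is about 3 * 10^10 and does not fit into 32 bits,
    // so this check is meaningful only in a 64-bit build.
    assert(GetIndex(0, 0, 3, width, height) == 30000000000ull);
}

Fortunately, there are other methods besides unit tests that can help you make sure your application works correctly.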

Code review

Code review is the best method of searching for errors and improving code. Thorough, comprehensive code review may completely eliminate the errors related to the peculiarities of 64-bit development. Of course, you should first learn exactly which errors to search for; otherwise the review won't give good results. For this purpose, it is necessary to read this and other articles devoted to porting programs from 32-bit systems to 64-bit ones. Some interesting links on this topic can be found at the end of the article.

Unfortunately, this approach to analyzing the source code has one significant disadvantage: it demands a lot of time, and because of this it is practically inapplicable to large projects. The compromise is the use of static analyzers. A static analyzer can be considered an automated code review system that prepares a list of potentially dangerous places for the programmer, who then carries out the further analysis.

In any case, it is desirable to hold several code review sessions, both to train the team and to search for new kinds of errors occurring on 64-bit systems.

Built-in means of compilers

Compilers help solve part of the problem of finding defective code: they often have built-in mechanisms for diagnosing the errors described here. For example, in Microsoft Visual C++ 2005 the keys /Wp64 and /Wall may be useful, and in Sun Studio C++ the key -xport64.

Unfortunately, the possibilities they provide are often not enough and you should not rely only on them. In any case, it is highly recommended to enable the corresponding options of a compiler for diagnosing errors in the 64-bit code.

Static analyzers

Static analyzers are a fine means of improving the quality and safety of program code. The main difficulty with using static analyzers is that they generate quite a lot of false warnings about potential errors. Programmers, being lazy by nature, use this argument to find a way not to correct the defects found. At Microsoft, this problem is solved by entering the found issues into the bug tracking system, so that a programmer can no longer choose whether or not to correct the code.

We think such strict rules are justified. The gain in code quality covers the time spent on static analysis and on the corresponding code modification; this gain comes from simpler code maintenance and reduced debugging and testing time. Static analyzers can be successfully used to diagnose many of the kinds of errors described in this article.

The authors know of 3 static analyzers that are supposed to have means of diagnosing errors related to porting programs to 64-bit systems. We would like to warn you at once that we may be mistaken about the capabilities they have; moreover, these are evolving products, and new versions may be more effective:

  1. Gimpel Software PC-Lint. This analyzer has a large list of supported platforms and is a general-purpose static analyzer. It allows catching errors while porting programs to architectures with the LP64 data model. Its advantage is the ability to exercise strict control over type conversions; a disadvantage is the absence of its own environment, but this can be remedied with the external Riverblade Visual Lint.
  2. Parasoft C++test. This is another well-known general-purpose static analyzer. It also supports many hardware and software platforms and has a built-in environment, which greatly simplifies working with it and configuring the analysis rules. Like PC-Lint, it is intended for the LP64 data model.
  3. Viva64. Unlike the other analyzers, it is intended to work with the Windows LLP64 data model. It is integrated into the Visual Studio 2005 development environment. It is intended only for diagnosing problems related to porting programs to 64-bit systems, which greatly simplifies its configuration.

Conclusion

If you are reading these lines, we are glad that you're interested. We hope the article has been useful for you and will help simplify the development and debugging of 64-bit applications. We will be glad to receive your opinions, remarks, corrections and additions, and will surely include them in the next version of the article. The more typical errors we describe, the more useful our experience and help will be.

Article by Andrey Karpov and Evgeniy Ryzhkov. Visit Viva64.com

Resources

  1. Chandra Shekar. Extend your application's reach from 32-bit to 64-bit environments.
  2. Converting 32-bit Applications Into 64-bit Applications: Things to Consider.
  3. Andrew Josey. Data Size Neutrality and 64-bit Support.
  4. Harsha S. Adiga. Porting Linux applications to 64-bit systems.
  5. Transitioning C and C++ programs to the 64-bit data model.
  6. Porting an Application to 64-bit Linux on HP Integrity Servers.
  7. Stan Murawski. Beyond Windows XP: Get Ready Now for the Upcoming 64-Bit Version of Windows.
  8. Steve Graegert. 64-bit Data Models Explained.
  9. Updating list of resources devoted to the development of 64-bit applications.

History

  • 19 May, 2007 -- Original version posted
  • 25 June, 2007 -- Article edited and moved to the main CodeProject.com article base
