DWARF Debugging Standard Wiki: 32-bit vs. 64-bit DWARF

Do we expect 64-bit ELF implementations to use 64-bit DWARF everywhere? If not, how does the compiler determine when 64-bit DWARF should be used? 64-bit DWARF is needed whenever the total size of any DWARF section exceeds the range of a 32-bit integer, but that isn't known until all the .o files are linked into an executable or shared library. An implementation could require the user to pass a special compile-time flag when building a very large program, or it could simply emit 64-bit DWARF all the time. The debug information can, in theory, exceed the size of the code and data in the program, so you might even want to use 64-bit DWARF with a 32-bit program.
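
For reference, producers and consumers distinguish the two formats by the initial length field that begins every unit header: a 4-byte value below 0xfffffff0 is the length of a 32-bit DWARF unit, while the escape value 0xffffffff means the real length follows as an 8-byte value and the unit uses 64-bit DWARF. Below is a minimal sketch in C (not a full DWARF reader), assuming little-endian input for brevity:

/* Distinguish 32-bit from 64-bit DWARF by the "initial length" field
 * that starts every unit header.  In 64-bit DWARF the first 4 bytes
 * hold the escape value 0xffffffff and the real length follows as an
 * 8-byte value; in 32-bit DWARF the first 4 bytes are the length
 * itself (values 0xfffffff0 through 0xfffffffe are reserved). */
#include <stdint.h>
#include <string.h>

struct initial_length {
    uint64_t length;      /* size of the unit, not counting this field */
    int      dwarf64;     /* nonzero if the unit uses 64-bit DWARF     */
    size_t   field_size;  /* 4 for 32-bit DWARF, 12 for 64-bit DWARF   */
};

/* Assumes little-endian input; a real reader would honor the object
 * file's byte order.  Returns 0 on success, -1 on a short or reserved
 * initial length. */
static int read_initial_length(const uint8_t *p, size_t avail,
                               struct initial_length *out)
{
    uint32_t first;
    if (avail < 4)
        return -1;
    memcpy(&first, p, 4);
    if (first == 0xffffffffu) {           /* 64-bit DWARF escape value */
        if (avail < 12)
            return -1;
        memcpy(&out->length, p + 4, 8);
        out->dwarf64 = 1;
        out->field_size = 12;
    } else if (first < 0xfffffff0u) {     /* ordinary 32-bit DWARF */
        out->length = first;
        out->dwarf64 = 0;
        out->field_size = 4;
    } else {
        return -1;                        /* reserved value */
    }
    return 0;
}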

How are other implementations deciding when to use 64-bit DWARF?

I believe the Sun compiler uses it whenever 64-bit object code is being generated, but I think that's a waste.

Keep in mind that large addresses (code and data) are controlled by the address size, which is orthogonal to whether we're using 32-bit or 64-bit DWARF.
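
To make that orthogonality concrete, here is an illustrative sketch of the fields a reader extracts from a compilation unit header (field layout as in DWARF versions 2 through 4); the struct describes decoded values, not the on-disk wire format:

/* Decoded compilation unit header fields, showing that address size
 * and offset size are independent knobs.  The offset size (4 or 8
 * bytes) comes from the initial-length encoding; the address size is
 * its own one-byte field, so 8-byte addresses can pair with 32-bit
 * DWARF, and 4-byte addresses with 64-bit DWARF. */
#include <stdint.h>

struct cu_header {
    uint64_t unit_length;         /* from the initial length field        */
    int      dwarf64;             /* 32-bit or 64-bit DWARF format        */
    uint16_t version;             /* DWARF version, e.g. 2, 3, or 4       */
    uint64_t debug_abbrev_offset; /* stored as a 4-byte offset in 32-bit
                                     DWARF, 8-byte in 64-bit DWARF        */
    uint8_t  address_size;        /* size of a target address (4 or 8),
                                     independent of the DWARF format      */
};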

The choice of 32-bit versus 64-bit DWARF is more than just a DWARF-internal issue: it means that a whole different family of relocations needs to be used for section offset references.
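
As a hedged illustration of that point: a producer emitting a reference from .debug_info into another debug section must pick the offset width, and therefore the relocation type, based on the DWARF format. The relocation names mentioned in the comments (R_X86_64_32, R_X86_64_64) are the standard x86-64 ELF data relocations and are shown only as one example target.

/* Sketch: emit a section-offset reference whose width depends on the
 * DWARF format.  The assembler/linker would pair the 4-byte field
 * with a 32-bit data relocation such as R_X86_64_32 and the 8-byte
 * field with a 64-bit data relocation such as R_X86_64_64. */
#include <stdint.h>
#include <stdio.h>

static void emit_section_offset(FILE *out, uint64_t offset, int dwarf64)
{
    if (dwarf64) {
        /* 64-bit DWARF: 8-byte section offset. */
        fwrite(&offset, 8, 1, out);
    } else {
        /* 32-bit DWARF: 4-byte section offset. */
        uint32_t off32 = (uint32_t)offset;
        fwrite(&off32, 4, 1, out);
    }
}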

Thoughts?


Response by David Anderson: SGI compilers use a unique 64-bit DWARF 2 format (there was no defined way to do it in 1993, so SGI defined its own). Every object with 64-bit pointers generates the SGI-unique 64-bit DWARF 2.

gcc generates 32-bit DWARF even for objects with 64-bit pointers (on all targets for which gcc generates objects, except of course the SGI target, where gcc follows the SGI convention).

As of 2007 there are no known instances of an a.out or shared object exceeding the 32-bit DWARF limits.

