Topic: Byte order in serialisation

legendary
Activity: 1135
Merit: 1166
April 16, 2014, 06:11:29 AM
#6
It does not currently work on big-endian systems, no. But it could in principle be modified to work, by adding the appropriate conditional byteswaps.

Yes, of course.  I was looking for those in the first place.  But if the code is not (yet) meant for big-endian systems, everything is fine.
legendary
Activity: 905
Merit: 1012
April 16, 2014, 03:04:32 AM
#5
It does not currently work on big-endian systems, no. But it could in principle be modified to work, by adding the appropriate conditional byteswaps.
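As a minimal sketch of what such conditional-byteswap-safe helpers could look like (the names WriteLE32/ReadLE32 are illustrative, not necessarily what serialize.h would end up using), writing and reading byte-by-byte with shifts produces little-endian output on any host, so no #ifdef-guarded swap is even needed:

Code:
// Minimal sketch: endian-independent little-endian (de)serialisation of a
// 32-bit value. Helper names are illustrative, not the actual serialize.h API.
#include <cstdint>
#include <cstdio>

// Emit the value least-significant byte first, regardless of host byte order.
inline void WriteLE32(unsigned char* out, uint32_t x) {
    out[0] = static_cast<unsigned char>(x);
    out[1] = static_cast<unsigned char>(x >> 8);
    out[2] = static_cast<unsigned char>(x >> 16);
    out[3] = static_cast<unsigned char>(x >> 24);
}

// Reassemble a host-order value from little-endian bytes.
inline uint32_t ReadLE32(const unsigned char* in) {
    return  static_cast<uint32_t>(in[0])
         | (static_cast<uint32_t>(in[1]) << 8)
         | (static_cast<uint32_t>(in[2]) << 16)
         | (static_cast<uint32_t>(in[3]) << 24);
}

int main() {
    unsigned char buf[4];
    WriteLE32(buf, 0x01020304u);
    // Prints "04 03 02 01 -> 01020304" on every host, big- or little-endian.
    printf("%02x %02x %02x %02x -> %08x\n",
           buf[0], buf[1], buf[2], buf[3], ReadLE32(buf));
    return 0;
}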
legendary
Activity: 1135
Merit: 1166
April 16, 2014, 01:01:31 AM
#4
OK, thanks.  So this means that the reference client can't be compiled for big-endian platforms?  Have those become so rare in recent years?  I didn't expect that.  But anyway, this answers my question!
legendary
Activity: 905
Merit: 1012
April 15, 2014, 01:49:54 PM
#3
How the compiled binaries behave on supported platforms (just x86, currently) is the standard behavior. It would have been better if Satoshi had followed typical conventions with regard to bit- and byte-order, but that ship has long since sailed.
legendary
Activity: 1232
Merit: 1094
April 15, 2014, 08:21:52 AM
#2
Pretty much all numbers in the protocol are little-endian.

Implementations on big-endian machines would have to convert.

IP addresses and ports are big-endian, though (in the addr message).  The DER format is big-endian too, and it is used for keys.
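To make the mix of byte orders concrete, here is a small sketch of how a 32-bit protocol integer and a port number from an addr entry would be laid out on the wire; the specific values (protocol version 70001, port 8333) are just examples:

Code:
// Sketch of the mixed byte orders described above; values are illustrative.
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t version = 70001;  // protocol version field: little-endian on the wire
    uint16_t port    = 8333;   // TCP port in an addr entry: big-endian on the wire

    unsigned char wire_version[4];
    wire_version[0] = version & 0xff;          // least significant byte first
    wire_version[1] = (version >> 8) & 0xff;
    wire_version[2] = (version >> 16) & 0xff;
    wire_version[3] = (version >> 24) & 0xff;

    unsigned char wire_port[2];
    wire_port[0] = (port >> 8) & 0xff;         // most significant byte first (network order)
    wire_port[1] = port & 0xff;

    printf("version on the wire: %02x %02x %02x %02x\n",
           wire_version[0], wire_version[1], wire_version[2], wire_version[3]);
    printf("port on the wire:    %02x %02x\n", wire_port[0], wire_port[1]);
    return 0;
}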
legendary
Activity: 1135
Merit: 1166
April 15, 2014, 07:16:30 AM
#1
Looking at Bitcoin's serialize.h and in particular the WRITEDATA / READDATA macros, I wonder how these handle byte-order / endianness issues correctly.  Seemingly, primitive data types (for instance, (unsigned) integers) are serialised just as they appear in memory.  The same seems to be true for the individual "words" (uint32_t each) of uint256.  I cannot find anything that converts to a unified byte order.  Aren't these routines also used to compute the hashes and to transmit data over the network?  At least for these uses, shouldn't the byte order be "normalised" somehow?

I'm probably just missing something obvious, since I've only started looking at these pieces of the code.  What is it? :-)
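For reference, the pattern the question is about boils down to copying the integer's in-memory representation straight into the stream; a sketch of that pattern (illustrative, not the literal macro expansion) shows why the wire format ends up depending on the host's byte order:

Code:
// Sketch of a raw-memory write with no byte-order conversion; illustrative only.
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    uint32_t n = 0x01020304;
    unsigned char buf[sizeof(n)];
    std::memcpy(buf, &n, sizeof(n));  // raw copy of the in-memory bytes

    // On a little-endian host this prints "04 03 02 01";
    // on a big-endian host it would print "01 02 03 04".
    for (unsigned i = 0; i < sizeof(buf); ++i)
        printf("%02x ", buf[i]);
    printf("\n");
    return 0;
}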