I think SDCC has weird bitfields, originally due to a bug but now baked into the ABI for some of its platforms?
But there are only a handful of ways to lay out bitfields:
* Use little-endian or big-endian bit order? (This need not match the byte order, but usually does - IIRC recent GCC no longer supports any platform where it doesn't.) This is the only one you can't control from the source. Networking headers rely on a 2-way preprocessor branch (see the sketch just after this list), so no other orderings are really practical, at least for fields up to `unsigned int` (which may be only 16 bits) wide.
* If padding is needed, does it go on the left or on the right? Or, put differently, at the top or at the bottom of the unit? (This might always be tied to bit-endianness, but in principle it is a separate choice.) For maximum portability you must add explicit padding fields, doing the math with `CHAR_BIT`.
* What exactly happens if an over-sized bitfield is attempted? (beware, there is no stable ABI for this in practice!)
* Do zero-sized bitfields align to the next unit?
* Can bitfields cross units? Or is perhaps a larger-than-user-specified unit used? Consider `struct { uint8_t a:4, b:8, c:4; };`
* If a mixture of types is used, are they aggregated unconditionally, by size, or by type? And with or without regard to signedness? Consider `struct { int a:1; long b:1; long long c:1; };` - this and the previous bullet's example are probed in the `sizeof` sketch after the list.
* Is a plain `int` bitfield the same as `signed int` or as `unsigned int`, and for which widths?
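To make the bit-order branch concrete, here is roughly the shape that networking headers take; this is only a sketch modelled on glibc's `<netinet/ip.h>`, using GCC/Clang's predefined `__BYTE_ORDER__` macros instead of whatever spelling your libc provides:

```c
/* First byte of an IPv4 header: two nibble-wide fields whose declaration
 * order has to flip with the bit order to keep the wire layout fixed. */
struct ip_version_ihl {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    unsigned int ihl:4;      /* first-declared field lands in the low bits  */
    unsigned int version:4;
#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    unsigned int version:4;  /* declaration order flips with the bit order  */
    unsigned int ihl:4;
#else
#error "unrecognized bit order - only the two usual layouts are handled"
#endif
};
```

Note the baked-in assumption that bit order tracks byte order - exactly the "usually does" caveat above.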
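The unit-crossing and type-mixing questions are among the ones `sizeof` can answer at compile time. A minimal sketch follows; the asserted values match a typical x86-64 GCC build, so treat each assert as documenting an assumption - whichever one fires on a new target is itself the answer to the corresponding question.

```c
#include <stdint.h>

/* Can a field cross the boundary of its declared unit?  (uint8_t bit-fields
 * are a common extension, not required by the standard.)  If b may not
 * straddle a byte, these three fields need three bytes; if it may, two. */
struct crossing { uint8_t a:4, b:8, c:4; };

/* Are 1-bit fields of different declared types merged into one unit,
 * or does each type (or type size) start a fresh, aligned unit? */
struct mixed { int a:1; long b:1; long long c:1; };

_Static_assert(sizeof(struct crossing) == 3,
               "b was allowed to straddle its unit, or a wider unit was used");
_Static_assert(sizeof(struct mixed) == sizeof(long long),
               "mixed-type bit-fields were not merged into one unit");
```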
It's unfortunate that some of these can't be tested at compile time (a few can, via `sizeof`), so you may need to either manually inspect the generated assembly, or else rely on optimization to do the emit-constants-via-strings trick when cross-compiling, as sketched below (C++, if supported on your platform, may do better than C at accessible compile-time evaluation).
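For reference, a minimal sketch of the constants-via-strings idea (the array name, the `BITFIELD:` tag, and the `probe.o` file name are all made up here). For simple `sizeof`-based probes like these the values are integer constant expressions, so the array is emitted as plain initialized data even at `-O0`, and `strings probe.o | grep BITFIELD:` on the host reveals the answers without executing anything on the target:

```c
#include <stdint.h>

struct crossing { uint8_t a:4, b:8, c:4; };           /* extension, as noted above */
struct mixed    { int a:1; long b:1; long long c:1; };

/* Encode each probed size as a single decimal digit (fine while sizeof < 10)
 * behind a greppable tag; the trailing '\0' just ends the printable run. */
const char bitfield_probe[] = {
    'B','I','T','F','I','E','L','D',':',
    '0' + (int)sizeof(struct crossing),               /* unit-crossing answer */
    ',',
    '0' + (int)sizeof(struct mixed),                  /* type-merging answer  */
    '\0'
};
```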