Comment From: ianlancetaylor
Can you provide a real-life use case?
Comment From: twoleds
it's good for UUID, IPv6, hashing (MD5), etc. We could store an IPv6 address in a uint128 instead of a byte slice and do some arithmetic with subnetworks, such as checking whether an IP address falls within a range.
Comment From: minux
These use cases are not strong enough to justify adding 128-bit types, which would be a big task: they would have to be emulated on all targets. 1. MD5 is not secure anymore, so there is little benefit in adding a type to store its result. 2. How often do you need to manipulate a UUID as a number rather than as a byte slice (or a string)? 3. The other use cases can be handled with math/big just as easily.
Also note that GCC doesn't support __int128 on 32-bit targets, and Go wants consistent language features across all supported architectures.
Comment From: twoleds
I agree with you that there aren't a lot of benefits to int128/uint128. Maybe there would be slightly better performance for comparing and hashing map keys when using uint128 to store a UUID or IPv6 address, because byte slices and strings need loops and extra memory, but I don't think that's important.
Comment From: mei-rune
I collect statistics on the total traffic (flux) of all interfaces on a device over one day.
Comment From: the80srobot
In addition to crypto, UUID and IPv6, int128 would be enormously helpful for volatile memory analysis, by giving you a safe uintptr diff type.
Comment From: iMartyn
It also just makes code that much more readable if you have to deal with large IDs, e.g. those you get back from the Google directory API amongst others (effectively they're UUIDs encoded as uint128).
Obviously you can use math/big, but it makes the code much harder to reason about, because you have to parse the code mentally first, distracting you from reading it.
Comment From: ericlagergren
Adding a data point: I ran into a situation in a current project where I need to compute (x * y) % m, where x*y can overflow and requires a 128-bit integer. Doing the modulus by hand on the high and low halves is needlessly complicated.
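For context, here is a sketch of the workaround available today via math/bits (added in Go 1.12, after this comment was written); mulMod is an illustrative helper, not a standard function:

```go
package main

import (
	"fmt"
	"math/bits"
)

// mulMod computes (x*y) % m without overflow, for m != 0, using the
// 128-bit product split across two uint64 halves.
func mulMod(x, y, m uint64) uint64 {
	hi, lo := bits.Mul64(x, y)
	// bits.Div64 panics if hi >= m, so reduce the high half first:
	// (hi*2^64 + lo) % m == ((hi%m)*2^64 + lo) % m.
	_, rem := bits.Div64(hi%m, lo, m)
	return rem
}

func main() {
	fmt.Println(mulMod(1<<63, 3, 1e9+7))
}
```

With a built-in uint128 this would just be `uint64(uint128(x) * uint128(y) % uint128(m))`.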
Comment From: jfesler
Another +1 for both IPv6 and UUID cases.
Comment From: ianlancetaylor
The examples of UUID and IPv6 are not convincing to me. Those types can be done as a struct just as easily.
It's not clear that this is worth doing if processors do not have hardware support for the type; are there processors with 128-bit integer multiply and divide instructions?
See also #19623.
Comment From: ericlagergren
@ianlancetaylor I do not think so. GCC seems to use the obvious 6 instructions for mul, 4 for add and sub, and a more involved routine for quo. I'm not sure how anybody could emulate mul, add, or sub that precisely (in Go) without assembly, but that prohibits inlining and adds function call overhead.
Comment From: ianlancetaylor
The fact that the current tools can't yet inline asm code is not in itself an argument for changing the language. We would additionally need to see a significant need for efficient int128 arithmetic.
If there were hardware support, that in itself would suggest a need, since presumably the processor manufacturers would only add such instructions if people wanted them.
Comment From: ericlagergren
If there were hardware support, that in itself would suggest a need
A need that, presumably, compilers couldn't meet by adding their own 128-bit types, which they in fact have. I mean, for all but division it's a couple of extra instructions. For most cases that's been sufficient.
I confess I'm not an expert on CPU characteristics, but my understanding is much of the driving force behind adding larger sizes was the ability to address more memory. That makes me think general 128-bit support is rather unlikely.
Yet major compilers have added support (GCC, Clang, ICC, ...) for C and C++. Rust has them because of LLVM. Julia has them as well.
Other languages and compilers having support isn't sufficient reason to make a language change, sure. But it's evidence there exists a need other than simply UUIDs.
Their domain seems to lie in cryptography and arbitrary-precision calculations, for now.
Comment From: FlorianUekermann
Additional use cases are timestamps, cryptographic nonces, and database keys.
Examples like database keys, nonces, and UUIDs represent a pretty large collection of applications where keys/handles can't ever be reused or number ranges can't overlap.
Comment From: ianlancetaylor
@FlorianUekermann People keep saying UUID, but I see no reason that a UUID could not be implemented using a struct. It's not like people use arithmetic on a UUID once it has been created. The only reason to add int128 to the language is if people are going to use arithmetic on values of that type.
Comment From: FlorianUekermann
It's not like people use arithmetic on a UUID once it has been created
They do. UUIDs don't have to be random; sequential UUIDs are common in databases, for example. Combine sequential UUIDs with some range partitioning and you'll wish for integer ops in practice.
Still, timestamps seem like the most obvious example to me, where 64 bits is not sufficient and the full range of arithmetic operations is obviously meaningful. Had it been available, I would expect the time package to contain some examples.
How big of an undertaking is the implementation of div? The rest seems rather straightforward.
Comment From: ericlagergren
How big of an undertaking is the implementation of div?
The code for naïve 128-bit division exists in the stdlib already (math/big). The PowerPC Compiler Writer’s Guide has a 32-bit implementation of 64-bit division (https://cr.yp.to/2005-590/powerpc-cwg.pdf, page 82) that can be translated upwards.
Comment From: josharian
Use case: [u]int128 can be used to check for overflow of [u]int64 operations in a natural way. Yes, this could make you want int256, but since int64 is the word size of many machines, this particular overflow matters a lot. See e.g. #21588. Other obvious options to address this use case are math/bits and #19623.
Somewhat related use case: https://github.com/golang/go/issues/21835#issuecomment-356478304.
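To make that concrete, here is a sketch of the overflow check as it's written today with math/bits; addCheck is an illustrative name, not a standard function:

```go
package main

import (
	"fmt"
	"math/bits"
)

// addCheck returns a+b and reports whether the uint64 addition overflowed.
func addCheck(a, b uint64) (sum uint64, overflow bool) {
	sum, carry := bits.Add64(a, b, 0)
	return sum, carry != 0
}

func main() {
	s, ov := addCheck(^uint64(0), 1)
	fmt.Println(s, ov) // 0 true
}
```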
Comment From: cespare
I have wanted int128 for representing currency in some situations.
Comment From: FlorianUekermann
If there were hardware support, that in itself would suggest a need
The ADX and BMI2 ISA extensions are implemented by recent Intel and AMD processors.
For example, MULX computes 128bit = 64bit × 64bit.
Comment From: FiloSottile
are there processors with 128-bit integer multiply and divide instructions
There are instructions that let you multiply two 64-bit registers into a full 128-bit product spread across two 64-bit registers. Having access to those in the form of uint128 could significantly speed up cryptographic code, which might reduce our reliance on assembly implementations.
An example from what I'm looking at today: https://github.com/mit-plv/fiat-crypto/blob/f7b212b9/src/Specific/X25519/C64/femulDisplay.log
(I am however also very much unconvinced by the data storage use cases: there's no reason to store hashes or foreign IDs as numbers if you are not doing arithmetic on them. []byte is fine.)
How about (uint64, uint64) -> (uint64, uint64) math/bits intrinsics if not uint128 in the language?
Comment From: FiloSottile
[...] Unfortunately, this requires writing assembly, because writing high-performance arithmetic is not possible in Go — it's simply not a design goal of the language. (There are a few reasons, most notably that there's no way to directly compute the (128-bit) product of 64-bit integers.)
https://blog.cloudflare.com/sidh-go/
Comment From: ericlagergren
How about (uint64, uint64) -> (uint64, uint64) math/bits intrinsics if not uint128 in the language?
How would the API work? ISTM there are two options:
- Each function takes two uint64 arguments and returns the upper and lower halves, and it's DIY with the results, or
- Each function takes two uint64 arguments and returns a type Uint128 [2]uint64 that has various methods.
#1 has the downside of not making arithmetic operations any easier. For example, one use case is convolutions in a prime field. If you need to compute (a * b) % p, then getting the upper and lower halves of the 128-bit product of a and b means you still need to do the modulus by hand.
#2 seems to just be a clumsy wrapper for a builtin uint128 type.
(BTW: I don't want to shoot down your idea right off the bat, and I'd rather have bits.Uint128 than nothing; those two issues just stood out to me.)
Comment From: growler
I'd much appreciate atomic.CompareAndSwapUint128 (or CASDoubleUint64(*[2]int64) to avoid a language change), since a double-word CAS makes it easy to implement tagged (or versioned) pointers to prevent the ABA problem in lockless algorithms.
Comment From: ianlancetaylor
If we add these types, we will presumably need to change at least the strconv package to support converting int128 (and uint128) from string to integer. This will require new functions, as the existing functions take int64 arguments. What should these new functions be called? What should we do if we decide to add int256 in the future?
The math/big package might need new setters and getters.
What packages other than strconv should change if we add these types?
Comment From: FiloSottile
(By the way, #24813 seems to be working well for the cryptography use case, so we don't really need the uint128 type for that anymore.)
Comment From: michaelthoward
Everything related to currencies/finance should be done in fixed point ... and the 18 digits of precision you get with int64 are not enough. It seems to me that 'int128' and 'uint128' should be reserved words in the language ... even if you choose not to implement them at this time.
Comment From: ianlancetaylor
@michaelthoward That's not necessary, as none of the predeclared type names like int, int64, etc., are reserved words. They are declared in the "universe block" and can be shadowed by names defined in package scope.
Comment From: michaelthoward
none of the predeclared type names like int, int64, etc., are reserved words
Thank you for explaining
Comment From: jwatte
The Snowflake data warehouse supports int128, and the type is useful for anything from IPv6 addresses to time-plus-ID serials that sort well (i.e., a 64-bit nanosecond timestamp in the high order bits, a 64-bit random node ID in the low order bits).
The most generic option would be support for array-as-int-of-fixed-size, which is different from the math/big.Int type. I.e., I could pass in a [16]byte, or a [32]byte, or a [6]byte, or whatever, and the compiler would perhaps have smarts enough to generate better code in cases where that's possible?
Comment From: waynenilsen
I came here to say that I want this for high-performance fixed-precision decimal calculations. You have essentially 37 digits to work with in a signed 128-bit integer. You set the radix (decimal) point somewhere; say you want 8 digits of precision, then you have 29 digits (+/- 100 octillion) before you overflow. This is very useful for high-frequency trading and cryptocurrency applications, where big.Int may be considered too slow due to heap allocation and looping.
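To illustrate where int64 runs out in this scheme, a small sketch; the scale and values are made up:

```go
package main

import "fmt"

const scale = 100_000_000 // 10^8: 8 fractional digits

func main() {
	a := int64(1250) * scale // represents 1250.00000000
	b := int64(4) * scale    // represents 4.00000000
	fmt.Println(a + b)       // addition is fine: 125400000000

	// Multiplication is where it breaks: the exact intermediate a*b
	// needs up to ~128 bits before rescaling, so (a*b)/scale silently
	// overflows int64 long before the logical values are large.
	fmt.Println(a * b / scale) // overflowed garbage, not 5000*scale
}
```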
Comment From: Freeaqingme
@cespare
I have wanted int128 for representing currency in some situations.
Can you describe this use case in more detail? I don't see it, but it may be because I'm simply not rich enough ;)
I was just reading up on Rust, and was happily surprised to see it there. Now that's not a reason why Go should have it in and of itself, of course. But still, I think it's a neat feature to have.
One place where it would come in handy is subnet calculations on IPv6 addresses. Obviously that's possible with [16]byte, but the logic would be so much clearer with a single uint128 instead.
@ianlancetaylor
The examples of UUID and IPv6 are not convincing to me. Those types can be done as a struct just as easily.
Do you mean just storing the data, or also performing calculations on them?
Comment From: ianlancetaylor
I meant just storing the data. It seems to me that people don't normally use arithmetic operations with UUID values (once they have been created initially) or IPv6 addresses.
Comment From: Freeaqingme
Maybe not arithmetic operations, but a common operation is checking whether an IP address is within a certain subnet (e.g. when implementing white/blacklists). Then you'd typically compare the first N bits of an address with the given subnet to see whether it falls within that subnet. Right now, that's quite tedious to do efficiently.
Edit: To clarify, I'm not saying we need int128 support only because of this use case. But given that it was brought up, I felt I could show some support, since that particular use case is one that bugged me before and would be easily solved with int128 support.
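A sketch of the two-uint64 version of that check, to show the tedium; the ip128 type and inPrefix are illustrative, not from any real package:

```go
package main

import "fmt"

// ip128 is an IPv6 address as two uint64 halves.
type ip128 struct{ hi, lo uint64 }

// inPrefix reports whether addr's top n bits (0 <= n <= 128) match prefix.
func inPrefix(addr, prefix ip128, n uint) bool {
	switch {
	case n <= 64:
		return addr.hi>>(64-n) == prefix.hi>>(64-n)
	case n < 128:
		return addr.hi == prefix.hi &&
			addr.lo>>(128-n) == prefix.lo>>(128-n)
	default:
		return addr == prefix
	}
}

func main() {
	addr := ip128{hi: 0x20010db8_00000001, lo: 1} // 2001:db8:0:1::1
	pfx := ip128{hi: 0x20010db8_00000000}         // 2001:db8::/32
	fmt.Println(inPrefix(addr, pfx, 32))          // true
}
```

With a built-in uint128 the whole function would collapse to a single shift-and-compare: addr>>(128-n) == prefix>>(128-n).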
Comment From: j0y
time.Duration's int64 is not enough for values measured from January 1, 1601 (the Windows FILETIME epoch).
Comment From: stevenj
My 2c. The argument that "the processor doesn't do 128-bit maths so neither should the compiler" is not very convincing. If that were the case, we would be stuck with 8-bit maths on 8-bit micros, and yet 8-bit micros can easily do 32-bit maths, handled by the compiler (as it should be).
The reason to support it is that most modern CPUs can do 64-bit maths natively. It is trivial in assembler to generate the code sequences that extend the number range over two registers; CPUs even have hardware features to facilitate it (carry flags, anyone?). But doing the same in a high-level language is NOT TRIVIAL (unless Go wants to expose the carry flag?). There are enough problem domains requiring integers larger than 64 bits that asking for a convincing "reason" is torturous. The reason is that the compiler should be there to make life easy, especially for mathematics. Processors, as has been pointed out already, have built-in features to help with 128-bit maths beyond the simple carry flag.
Even if you are limiting yourself to 64-bit precision, there are many times when it's a lot easier and clearer in code to extend into 128 bits and then modulo back down. The performance hit on a 64-bit machine would be negligible, and not much worse even on a 32-bit machine. Certainly the compiler doing it would be inordinately faster than emulating it at a high level. There is no rational reason why an 8-bit chip can support 32-bit integers but a 32-bit chip can't support 128-bit ones.
Personally I would go further and say int256 should also be supported.
Would adding int128 hurt any existing application? No.
Do you need to add support to print them as strings? Certainly.
Does any other function need to change to support them? No, why would it? There is no reason any other function needs to take them. They could, and maybe it would be great, but if they don't, that's not a reason to deprive everyone of the types in general.
Comment From: chrispassas
This issue is marked "language change" and "Go 2", which I think makes it much less likely to be considered.
Would it break the Go 1 compatibility promise to add support for uint128/int128?
Years ago context.Context was added, and that required adding new functions and types to a huge number of stdlib packages.
Comment From: ianlancetaylor
@chrispassas All language changes are marked "Go 2". That's just how we do things. As you can see from release notes, the language changes we've made in the last couple of releases were tied to "Go 2" issues, although they were backward compatible.
Comment From: ianlancetaylor
Would it still make sense to add these types if we implemented #19623 in one form or another?
Comment From: cespare
Would it still make sense to add these types if we implemented #19623 in one form or another?
Yes. When I've missed int128, it's because I want 128 bits. I end up using two uint64s and implementing the annoying conversion/arithmetic logic myself.
When I want arbitrary precision arithmetic, using big.Int is not a problem.
(I deeply hope that #19623 never happens.)
Comment From: michaelthoward
I agree completely with @cespare: int128 and uint128 should be implemented regardless of what happens with #19623.
Comment From: stevenj
Yes, #19623 and this are not related, and the commentary on #19623 indicates as much. Otherwise we may as well get rid of all the other sized types as well.
Comment From: martisch
I think uint128 is useful on its own, e.g. for optimizations, as it corresponds nicely to 128-bit registers on amd64. The compiler can easily and efficiently generate single instructions for compares and loads/stores on amd64, allowing high-level Go code to utilize register widths larger than 64 bits. This can be useful for writing memcpy or compare loops more efficiently without resorting to assembler. If the compiler is made to recognize combined loads, as is already implemented for 32/64 bits, then loops like the one in https://go-review.googlesource.com/c/go/+/228823/5/src/unicode/utf8/utf8.go#497 can be sped up further without resorting to assembler.
(I understand that we could make this work already with the current compiler, but discovering loads to different array indices would likely be more complex.)
This could also replace uses of [2]uint64 for memequal in current code:
https://github.com/golang/go/blob/2b8e60d464515634462ca472ca09c791e2cbf6ae/src/runtime/alg.go#L252
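For illustration, here is a sketch of the two-uint64 combined-load pattern in question, comparing 16 bytes as two 64-bit loads; the function name is ours, not the runtime's:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// equal16 compares 16 bytes as two 64-bit loads, the pattern a native
// uint128 load/compare could collapse further.
func equal16(a, b []byte) bool {
	return binary.LittleEndian.Uint64(a) == binary.LittleEndian.Uint64(b) &&
		binary.LittleEndian.Uint64(a[8:]) == binary.LittleEndian.Uint64(b[8:])
}

func main() {
	x := []byte("0123456789abcdef")
	y := []byte("0123456789abcdef")
	fmt.Println(equal16(x, y)) // true
}
```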
Platforms that don't support 128-bit registers natively can split values into two 64-bit registers in the backend, much like 64-bit values are split into two 32-bit values on 386.
Personally I have seen more potential utility in having uint128 than existing uses of complex128 in everyday coding.
Comment From: ethindp
I agree with @stevenj. There's no excuse not to add 128-bit integer types, just like there's no reason why we shouldn't, in the future, expose architecture-level intrinsics like SSE and AVX vector types. Used correctly, 128-bit integer types can make code a lot easier to read and reason about.
Comment From: ghost
Upvote for network programming. 2×uint64 structs are a hacky mess of a workaround. It's 2020...
Comment From: k06a
One more reason: blockchains use uint256 as a combination of [4]uint64; it would be much faster using [2]uint128.
Comment From: stevenj
@k06a or just uint256. I don't think it's unreasonable for a compiler to supply native integers two sizes bigger than the base machine word. In this case the most common base machine is 64-bit, so I don't think it's unreasonable or excessively hefty for the compiler to supply 128- and 256-bit ints. No one complains when GCC happily provides 32-bit ints for 8-bit micros (such as AVR chips).
Comment From: harshadptl
uint128 would be great for all the blockchain stuff written in Go. Do you know how difficult it is to serialize a big.Int?
Comment From: danderson
A small note that inet.af/netaddr could use uint128 in two ways:
- We represent IP addresses as a pair of uint64s. The resulting code ends up having a bunch of branches and duplication to handle ops that span ip.hi and ip.lo. It would be nice if the compiler could generate good code for that and let us operate on 128-bit values.
- We would like to add IPPrefix.Size(), which would return the number of IP addresses contained in a given CIDR prefix. However, because an IPPrefix might be IPv6, we can't do that without making the function return (hi, lo uint64), or big.Int, or some other cumbersome type. As a result, we don't have that helper.
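As an illustration of that hi/lo juggling, here is what even a trivial address increment looks like today; the uint128 struct is a sketch, not netaddr's actual code:

```go
package main

import (
	"fmt"
	"math/bits"
)

type uint128 struct{ hi, lo uint64 }

// next returns u+1; even this trivial op has to thread a carry
// across the two halves.
func (u uint128) next() uint128 {
	lo, carry := bits.Add64(u.lo, 1, 0)
	return uint128{u.hi + carry, lo}
}

func main() {
	fmt.Println(uint128{0, ^uint64(0)}.next()) // {1 0}
}
```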
Comment From: Cyberax
One practical use for us is dates. 2^64 nanoseconds is only around 585 years, and there are practical applications that need to represent dates outside that range (historical dates, model projections into the future, etc.).
Moving to int128 nanoseconds would allow representing dates trillions of years into the future.
Comment From: phuclv90
I think uint128 is useful on its own, e.g. for optimizations, as it corresponds nicely to 128-bit registers on amd64. The compiler can easily and efficiently generate single instructions for compares and loads/stores on amd64, allowing high-level Go code to utilize register widths larger than 64 bits.
@martisch no, there are no 128-bit integer registers on any current platform, and you can't do a single-instruction 128-bit comparison on x86. Anything 128 bits or wider lives in SIMD registers: 128-bit SSE, 256-bit AVX, and 512-bit AVX-512 on x86, or Neon and SVE on ARM. Those are meant for processing multiple small pieces of data at the same time, not a single big integer. You can do multiple small comparisons in an SSE/AVX register, but not a 128-bit comparison. 128-bit arithmetic is still generally faster with scalar instructions, because it's extremely hard to propagate the carry serially to the higher units when the numbers are added in parallel.
Comment From: batara666
How is this going? Our embedded software needs an i128 type.
Comment From: ianlancetaylor
@batara666 Can you tell us why? Thanks.
Comment From: batara666
because big.Int is not enough and too slow for us
Comment From: ianlancetaylor
@batara666 Thanks. The question is not whether people who need 128 bit integers should use the math/big package. Clearly the math/big package, which supports extremely large integers, is going to be slower.
The question is: why do you need 128 bit integers?
Comment From: chrispassas
@ianlancetaylor I don't know how the interest here compares to other issues, but it seems like there are already several solid examples of uses for 128-bit integers in this conversation.
Has this ever been discussed by the Go committee that decides what to add to the language?
Comment From: swatson314159
I started following this issue because I'm hoping for atomic 128-bit operation support; see CMPXCHG16B. My project needs to be very efficient and avoid race conditions when updating two 64-bit values.
Comment From: ianlancetaylor
@chrispassas Yes, it just helps to have persuasive examples. The examples I see here seem more like "could be convenient, but other approaches also work."
@swatson314159 A 128-bit atomic swap function in the sync/atomic package does not require an int128 type. And if we added an int128 type, we would not necessarily add such a function to sync/atomic. The issues with sync/atomic are different; there we would specifically be concerned with which architectures support the operation and what we should do on the others. It's straightforward to emulate 128-bit integer arithmetic on architectures that don't support it; implementing a 128-bit compare-and-swap is less clear, and may have surprising performance effects.
Comment From: FlorianUekermann
I feel like the goalposts keep moving here. First it was use cases, then it was hardware support; now the use cases provided years ago, plenty of which require arithmetic, aren't convincing enough.
@ianlancetaylor: What exactly are you missing, or what do you doubt, in the use cases, the x86 CPU instructions, and the performance and readability benefits provided over the last few years? Maybe you could be more specific. There is plenty of precedent in other languages, compiler intrinsics, etc., so this isn't exactly uncharted territory either. You have much more experience with compiler and language implementations than most, so maybe there is some issue you know of from those projects that isn't obvious to others in this thread.
In general, I feel the case for (u)int128 has been made and the discussion has run its course; maybe it is time to close the issue if the case is neither convincing nor the reasons can be substantiated further.
Comment From: the80srobot
Florian brings up a good point. What exactly is the standard here? It seems like one of three things is true:
- We're waiting to see if Go adds arbitrary width ints. If so maybe block this issue behind that one?
- There is no case convincing enough to add another primitive data type. If so, maybe close this?
- There is some list of things that must be true for this support to be added. If so, can they be stated?
Comment From: ianlancetaylor
I apologize if it seems like the goalposts keep moving. The truth is that there are no goalposts.
I honestly haven't found any of the use cases given above to be particularly convincing. Sorry if I'm missing something obvious.
If there were an obvious and essential need for int128, we would have done it already. So I guess that what I am doing is looking for that obvious and essential need. Not "might be nice," or "we would use it," but more like "our Go program today can only be written with an int128 package, and it would be better if we could just use a built-in int128 type instead."
Comment From: chrispassas
@ianlancetaylor Thank you for explaining. I understand you read the use cases and probably feel someone could write their own code with two uint64s in a struct and not really need uint128 support.
This probably isn't an issue of "I can't do X in Go". It's more an issue of "If Go had uint128, X, Y, and Z would be easier for me to do".
Not everyone is of a caliber to code around not having uint128 support, so having it would let less experienced developers solve their problems better.
We are just advocating for the Go team to prioritize this feature request. I do think that if it does not meet the bar, it should be closed. While I wish the feature were there, I recognize your team has to spend its time wisely, and this might not be the best use of that time.
As a community member I don't feel I have the ability to add this feature to the language myself. I would just use it if it were there.
Comment From: jwatte
The code generated by me implementing an int128 on top of a struct or byte array can be an order of magnitude worse than what a compiler can generate knowing what I want to achieve at a higher level. And, to add insult to injury, Go of course does everything in its power to prevent me from abstracting over anything, staying 100% concrete. (It's a design goal, I understand.)
Go is already slow enough compared to alternatives like C/C++/Rust/Java; putting more boilerplate and slower code in front of it will just make it worse.
Comment From: the80srobot
If I understand the standard that Ian proposes, then it's similar to asking what can only be done with exactly 128 bits, as opposed to 256, 512, and so on. Cryptographic functions will continue to have larger images, and timestamps and currency can require storing arbitrarily large numbers. It doesn't seem to matter much whether the line between primitive and BigInt is at 64 bits or 128 bits.
No one would argue that Go shouldn't support 64 bit numbers, because you need them to address memory efficiently, and they translate to efficient code. The only reason I can think of to support specifically int128 is memory addressing. Subtracting two uintptr values can result in a negative number. Neither uint64 nor int64 are safe to hold the result of such an operation, but an int128 would be.
This is less esoteric than it might sound, mind you. Memory analysis, forensics and some related hardware domains do use those operations and programmers writing that code get them wrong.
As a sidenote, Go has primitive complex number types, which I have never seen used in a real codebase. I imagine this is the source of some of the reluctance to add new primitives. Fair enough - maybe an efficient BigInt implementation is just as good.
Comment From: jwatte
Some use cases come from SQL connectors, where certain databases support 128-bit integers natively and can use them for serials, timestamps, and such. Also, the Go date library actually truncates dates because 64 bits' worth of nanoseconds isn't enough. Try converting "0" nanoseconds to a date, then "1" nanosecond, print them out, and look in amazement at the result! Go timestamps as designed would be better expressed in more than 64 bits.
I agree that adding the complex number data types was a mistake, and has likely been a mistake in other languages that did the same: special complex support was important for Fourier transforms in the '60s and '70s, but that's a long way back. Better library interfaces are a better solution to this problem these days.
BigInt: Can I use built-in "+" and "/" and "<" operators? Will it be usable as a key in a map? Will it be a value or a reference type? Will size be statically determined such that SIMD processor targets can generate good code? Will I be guaranteed that the byte representation is 2's complement, in the same byte order as the host? Can I print it with "%d" and "%x" in formatted prints? (There are also questions around conversions to float types, and so on, which I think are less likely to be particularly troublesome in practice.)
Many of the particular implementation choices in Go make these questions important to answer up front. Also, those same implementation choices are what drive the desire for native types rather than user-supplied types. If we could get a good answer for BigInt, that would probably be fine! But that answer ideally looks similar to "we support int128 and int256 and ... int<2^N>". A BigInt that requires a heap allocation, is not usable as a map key, and whose native "<" comparison doesn't actually compare the values would be a significant burden: no better than what we can hand-roll already.
Comment From: the80srobot
Some of the things you're asking for, like being able to use %d to print a big.Int and using native operators, seem to me to be more a matter of preference. It's not like you can write generic code, or compare an int64 with an int32 without casting anyway, so I don't know why x.Less(y) is substantially worse than x < y.
I also don't think it's true that big.Int requires a heap allocation. Internally it's represented as a byte slice. I'm not an expert on Go's escape analysis, but I think it should be possible for it to remain on the stack. At any rate, Go doesn't specify what ends up on the stack and what on the heap, even for things where you might think it's a sure thing.
You can use big.Int.Bytes() as a map key, although I agree it's unfortunate that you have to copy it into either a string or a byte array first. (This seems like a problem with Go maps in general: the only variable size type they can use as key is string.)
As I understand it, native int128 wouldn't necessarily have some of the other properties you want, like being guaranteed a certain representation in memory.
Comment From: jwatte
native int128 wouldn't necessarily have some of the other properties you want, like being guaranteed a certain representation in memory.
It could if we chose to, just like for int64. There exist no in-use architectures where there's a significant drawback or performance concern to making an emulated int128 look like the logical extension of int64 in memory.
If we didn't worry about good support and convenience for use cases, then we could all just write our programs in assembly. Go has chosen to make user data types second-class citizens compared to built-in types. Because of that choice, the pressure to add many common-ish data types to the language is understandably high, and as long as the data type is generic enough, I believe it SHOULD be added, because users can't sufficiently help themselves.
Comment From: josharian
@ianlancetaylor FWIW I believe that the IPv6 use case described by @danderson above fits your criteria pretty well.
Comment From: stevenj
Big Ints are not equivalent to fixed-size 128-bit or 256-bit integers.
Fixed-size integers have the property that any addition or subtraction near the extremes of their range wraps. This is desirable in many situations; arbitrary-size integers instead just grow to accommodate. In every circumstance where I use fixed-size integers I absolutely want to rely on the fact that 0xFFFFFFFF_FFFFFFFF + 1 = 0x00000000_00000000 and NOT 0x1_00000000_00000000.
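A quick demonstration of the difference, with uint64 standing in for the fixed-width behavior an int128 would share:

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	var x uint64 = 1<<64 - 1
	fmt.Println(x + 1) // 0: fixed-width types wrap, as this code relies on

	b := new(big.Int).SetUint64(1<<64 - 1)
	b.Add(b, big.NewInt(1))
	fmt.Println(b) // 18446744073709551616: big.Int grows instead
}
```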
Further, fixed-size integers will have large speed advantages over any arbitrary-size number scheme. Some say "the speed difference isn't important"; I am not all-knowing enough to say what's important to someone else's project. But I have been involved with many projects where the speed difference could make all the difference in the world between success and failure. Should we be placing unnecessary barriers in the way?
It has also been said that there is no hardware support for 128-bit integers on 64-bit CPUs. Patently not true: x86-64 can multiply two 64-bit values and produce a 128-bit result in a single instruction, and Arm's A64 instruction set also has specific support for 128-bit results from 64×64-bit multiplies (in two instructions). That is 128-bit support. It certainly isn't feature-complete, but it is still support. The fact that the result is split over multiple registers is of no consequence; that would be like saying the 8088 could not do 16-bit maths because it put the result in AH and AL (two registers). No sane person would make that argument.
But all of that aside, the sheer number of comments on this proposal indicates that a significant number of people consider this enough of an issue to take time out of their day to add their support. They obviously feel they have a need for this feature which is not currently being met and can't easily be replaced with a library.
Comment From: ethindp
I have to add my support to this proposal too. x86 has support for 128-bit and 256-bit integers via SSE and AVX; ARM has something similar. There is no excuse not to add 128-bit integers at minimum, when the arguments against have all been hashed out in discussions for other languages: Rust, C/C++ via GCC, LLVM, and so on. If LLVM and GCC have support for it, Go has no excuse and sticks out like a sore thumb. If even Rust has it, Go looks like an amateur language. Right now, if someone wants to use a 128-bit integer in Go, they have to simulate it. That, in turn, requires a lot of extra unnecessary code that isn't going to be optimized away, and causes performance hits because Go isn't even taking advantage of SSE2/3/4. IPv6 addresses are just one use case where 128-bit integers are useful. There is no excuse for not adding them, and "language simplicity" doesn't count, because adding uint128 and int128 does not make the language any more complex than it is now.
Comment From: josharian
In an attempt to refocus the discussion: Arguments that Go needs 128 bit ints do not move the conversation forward. Concrete use cases do. Please share those. Thanks.
Comment From: phuclv90
I have to add my support to this proposal too. x86 has support for 128-bit and 256-bit integers via SSE and AVX. ARM has something similar.
@ethindp Please read my comment to @martisch above (https://github.com/golang/go/issues/9455#issuecomment-753252500): there's zero native 128-bit integer support in any current architecture. SSE, AVX, Neon, SVE, etc. are all SIMD, intended for operations on multiple small integers at the same time; they have no way to treat the register as a single 128-bit integer. All 128-bit arithmetic is done in the normal GPRs.
Comment From: jwatte
Several use cases have been shown above. Some answers have been in the form of "well, I don't think that use case seems all that important, so I'm going to ignore that particular use case."
Comment From: as
I don't think IPv6 is a compelling use case for int128 at all.
Specifically, its address space is larger than usual to allow hardware to route it without treating the addresses as numbers. Go doesn't even utilize int32 types for IPv4. Doing so would require handling byte ordering, which is a common source of bugs in custom networking code.
Comment From: stevenj
I have to add my support to this proposal too. x86 has support for 128-bit and 256-bit integers via SSE and AVX. ARM has something similar.
@ethindp Please read my comment on @martisch above #9455 (comment), there's zero native 128-bit integer support in any current architectures, SSE, AVX, Neon, SVE... are all SIMD which are intended for operations on multiple small integers at the same time. They have no way to treat the register as a single 128-bit integer. All 128-bit arithmetics are done in the normal GPRs
@phuclv90 This is completely untrue. Both the x86-64 and A64 architectures support 128-bit integers as the result of 64×64-bit multiplication; that is NATIVE SUPPORT for 128-bit integers (using the normal integer registers, not the extended ones). Both architectures support carry flags, which are specifically designed to allow integer operations wider than the base register. A cursory glance at the instruction sets shows they are both specifically designed to implement integer mathematics in multiples of the base register size, with a number of features that promote this and make it easier. Your metric for "unsupported" seems to be "the base register needs to be at least this big." If that is the case, then we should be discussing removing 64-bit integers from 32-bit Go.
Similar to the RISC philosophy, processor designers see no need to extend register size when two instructions can give you what you need across two registers. I.e., their thinking goes: "128-bit integer support is trivial and fast AS IS; why would we double the size of the register, with all the complications that would arise, for the tiny speed improvement it would yield?" Your definition of "native 128-bit integer support" is very narrow.
Comment From: phuclv90
@stevenj Obviously I know that you can use multiple registers to do 128-bit arithmetic. But the people I'm addressing above claim that you can use a single SIMD register (like SSE, AVX, Neon...) to store 128-bit integers and do math on them, which is completely false. I said that 128-bit operations must be done in the GPRs. Did you even read my comment carefully?
Comment From: ethindp
I don't ever recall claiming that. I said that x86 has native support for 128-bit integers because it does. Yes, you use two registers to perform the operations, but it still allows them, and other programming languages take advantage of that fact (see Rust's u128/i128 types and GCC/Clang's __int128/__uint128). If we were talking about quad-precision FP, I'd agree, because no architecture other than RISC-V supports those operations yet (and with RISC-V it's just an extension), so you have to emulate them in software. But we're specifically talking about 128-bit signed and unsigned integer operations.
Comment From: DmitriyMV
@ethindp That still doesn't answer questions about how u128/i128 would be supported on x86-32 and ARMv7, since those platforms are officially supported by Go. IIRC even LLVM has trouble with u128 on ARMv7.
Comment From: martisch
@phuclv90
@stevenj Obviously I know that you can use multiple registers to do 128-bit arithmetic. But the people I'm addressing above claim that you can use a single SIMD register (like SSE, AVX, Neon...) to store 128-bit integers and do math on them, which is completely false. I said that 128-bit operations must be done in the GPRs. Did you even read my comment carefully?
As far as I remember, I never commented that one can do math (e.g. add in a single instruction) on 128-bit integers. I commented about loads, stores, and compares of 128-bit registers:
I think uint128 is useful on its own, e.g. for optimizations, as it corresponds nicely to 128-bit registers on amd64. The compiler can easily and efficiently generate single instructions for compares and loads/stores on amd64, allowing high-level Go code to utilize register widths larger than 64 bits.
Full-width 128-bit comparisons exist (not all of them, but some):
You can do multiple small comparisons in an SSE/AVX register, but not a 128-bit comparison.
A 128-bit compare can be done using PTEST (https://www.felixcloutier.com/x86/ptest), so there is at least one single instruction that compares two 128-bit registers as a whole; it can be used, for example, to check whether a 128-bit register is zero. More complex compares, e.g. equality, require an additional operation (PXOR) on the whole 128-bit register. If my comment was read as saying all comparisons are possible on 128 bits in a single instruction, that's not what I wanted to imply.
Note I'm not implying uint128 should always be modeled in a 128-bit register. But it can be if, e.g., only equality and moving it around in memory are involved.
Comment From: stevenj
I don't think IPv6 is a compelling use case for int128 at all. Specifically, its address space is larger than usual to allow hardware to route it without treating the addresses as numbers. Go doesn't even utilize int32 types for IPv4. Doing so would require handling byte ordering, which is a common source of bugs in custom networking code.
@as Storage in memory and serialization for transmission on the wire are two completely different problems. If Go is forgoing the obvious advantages of masking and comparing 32-bit quantities as 32-bit values in 32-bit registers, that speaks to the implementation, not the usefulness of the approach.
The whole "common source of bugs" throwaway argument isn't very likely, given that wrong byte ordering isn't a subtle problem: it's immediately broken. So, sure, someone might get it wrong on their first pass, but it quickly works itself out. I would love a reference to this "custom networking code" with "common byte-ordering bugs". Having written and worked with plenty of custom networking code, I haven't seen it be common. Have you?
Comment From: jwatte
While I agree that all participants here have good viewpoints related to networking, CPU architecture, specific instruction support, and other implementation details, I think this thread would be more productive if we gave each other the benefit of assuming basic programming capability and being well intentioned. 32-bit CPUs still have carry, and can implement 128-bit arithmetic through extension, although clearly less efficiently than 64-bit architectures can.
There are a number of experienced developers (me included) who believe that native 128-bit support would simplify code, reduce bugs, and improve the experience of developing in Go, in areas including (from what I've seen so far) database interfaces, cryptocurrencies, and networking. (I'm probably missing some; I've not memorized the entire thread.)
There's an argument that all these use cases together don't rise to the level of needing built-in support. To this, I wonder why there's a complex type, which approximately nobody uses in production code. Go explicitly doesn't have the language affordances to write efficient user-level implementations of basic arithmetic types, so it seems to me the inclination should be to lean towards inclusiveness rather than exclusiveness.
That being said, I'm not a Go maintainer, I don't write Go compiler code, and I'm not going to do this myself. If those who are in that position still feel that they don't want to, there's nothing more I can do about that, although I request that this feature request remain open (given that it's a recurring request) rather than being closed.
Comment From: Bjohnson131
In an attempt to refocus the discussion: Arguments that Go needs 128 bit ints do not move the conversation forward. Concrete use cases do. Please share those. Thanks.
IMO, looking for "use cases" to justify implementation is not a good way to go about this.
Demand should drive implementation, not personal judgement.
Comment From: ethindp
I'll just say this: it is not personal judgement that determines whether a type should be in a language if the language isn't maintained by a single individual. If it's maintained by a community, it is up to the community to decide what types and functionality are in the language, not up to a handful of people, which seems to be the case here ("We won't add these types because we don't think it would be beneficial to the language"). Maybe I don't understand the Go community all that well, but I'm curious who put all the authority for that kind of thing into a small group of people.
It doesn't matter if there are use cases for 128-bit integers or not. There is high demand because there may be use cases in the future, if there aren't already, that justify the addition. You could make the leap of logic here and go "Well, by that logic we should add every kind of type in existence," and that would be a valid rebuttal, but I'm specifically talking about types and functionality that have high demand, as this issue does. People have explained rationales for 128-bit integers, and a lot of people would find it easier to have them because it would make tasks easier.
Finally, something like 128-bit integers is a trivial task. To be honest, I'm puzzled as to why most of this debate has surrounded architectural issues, given that LLVM and GCC have already solved this problem. The logical thing, I imagine, would be to work off existing knowledge instead of reinventing the wheel. It's not as though 128-bit integers will make the Go language any more complex than it already is. If we were talking about something like a way to use Go on bare-metal systems (which is, given Go's design, very difficult at the moment, if my knowledge is correct), I would understand. But we aren't talking about that here.
So I'm really confused why this debate has gone on as long as it has. I might've repeated myself a few times in this comment, but the stalling of this issue has started to get annoying to me, purely because we get tied up either because (1) someone's "personal judgement" prevents us from adding it to the language, or (2) we get tied up in issues that have already been solved by other compilers far more complicated than the Go toolchain. Just my two cents. I apologize if I've been harsh in this comment; that wasn't my intent, but I'm getting kind of frustrated.
Comment From: zx2c4
@josharian
In an attempt to refocus the discussion: Arguments that Go needs 128 bit ints do not move the conversation forward. Concrete use cases do. Please share those. Thanks.
Concrete use cases I'm aware of:
- IPv6 (as discussed above by others)
- Crypto
It's this last point that's most interesting. Some architectures offer a 64×64→128 multiplication with integer instructions, making it efficient to use 128-bit types in fast generic C implementations. curve25519-donna comes to mind, but the more interesting rendition would be porting Hacl-Star's formally verified, donna-inspired implementation to Go, which uses 128-bit types. Similarly, Poly1305 is very naturally implemented with 128-bit types to store the multiplication results, and that winds up being the most efficient implementation strategy using generic integer code on 64-bit machines. Of course clever-enough compilers can recognize idioms on 64-bit types, but it's not nearly as pleasant to write or optimize for. Having clearer and faster generic Go implementations of code that's easy to screw up seems like a good thing.
Comment From: josharian
Of course clever-enough compilers can recognize idioms on 64-bit types, but it's not nearly as pleasant to write or optimize for.
The standard library and compiler do provide idiom-free access to most of these instructions via math/bits (e.g. bits.Mul64), and netaddr uses these as appropriate in its nascent uint128 implementation, which we are discussing splitting off from networking-world. (I also have plans to add idiom recognition to the compiler this cycle for 128-bit shifts.)
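For reference, the 128-bit shift idiom in question looks roughly like this over hi/lo halves; this is a sketch, and the actual pattern the compiler would recognize may differ:

```go
package main

import "fmt"

// shl128 shifts the 128-bit value hi:lo left by s (0 <= s <= 127).
func shl128(hi, lo uint64, s uint) (uint64, uint64) {
	switch {
	case s == 0:
		return hi, lo
	case s >= 64:
		return lo << (s - 64), 0
	default:
		return hi<<s | lo>>(64-s), lo << s
	}
}

func main() {
	fmt.Println(shl128(0, 1, 64)) // 1 0
}
```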
But the point about readability and clarity stands. I'd like to be able to check for overflow with normal code like

```go
x := uint128(a) + uint128(b)
if uint128(uint64(x)) != x { // overflow
}
```

instead of bits.Add64.
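For contrast, a minimal runnable sketch of what that check requires today via math/bits (my illustration; the variable names are arbitrary):

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	a, b := uint64(1)<<63, uint64(1)<<63

	// bits.Add64 returns the low 64 bits of the sum plus a carry-out;
	// a nonzero carry means a+b overflowed uint64.
	sum, carry := bits.Add64(a, b, 0)
	if carry != 0 {
		fmt.Println("overflow; low 64 bits:", sum)
	}
}
```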
On the crypto front, if we also added uint256, we could use it for curve25519 keys. :P
Comment From: josharian
For those expressing impatience, I also really want 128 bit integers in the language. But Go language changes always happen slowly. (#395 took a decade.)
Go's commitment to backwards compatibility requires treading very carefully. And I believe the Go language and compiler team is rather absorbed by generics at the moment.
Also of note is that in reality this isn't "just" adding uint128 and int128 types. The difficulty here is not in the compiler, it's in the spec, carefully working through all the consequences of this change, and being convinced that it is worth the costs.
As just one example, consider strconv.ParseUint. Its signature is:
```go
func ParseUint(s string, base int, bitSize int) (uint64, error)
```

This works because uint64 is the widest integer type. We can't change it to return a uint128, so we probably need a new ParseUint128 function. That's a bit unfortunate. And should it still take a bitSize argument, or should we assume that for smaller bit sizes the caller can use ParseUint?
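To make the API question concrete, here is one hypothetical shape such a function could take (a sketch only; the uint128 placeholder type and the ParseUint128 name are assumptions, not an actual proposal):

```go
package strconvsketch

// uint128 stands in for the proposed builtin type (hypothetical).
type uint128 struct{ hi, lo uint64 }

// ParseUint128 mirrors ParseUint but returns the wider type, since
// ParseUint's uint64 result is frozen by the compatibility promise.
// Whether bitSize should remain is exactly the open question above.
func ParseUint128(s string, base int, bitSize int) (uint128, error) {
	// parsing elided; only the signature is at issue here
	return uint128{}, nil
}
```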
Before you rush to provide answers about what we should do about package strconv, consider that there are many such functions. It took me 60 seconds to find another: reflect.Value.OverflowUint.
Another thing that might (might) help move this proposal along is a thorough, detailed design doc that identifies all the changes that would go into this. An example of such a document is https://go.googlesource.com/proposal/+/master/design/19308-number-literals.md.
Comment From: ianlancetaylor
@ethindp Go is not a language that decides what features to add based solely on demand. Of course demand plays a role, but it is not the determining factor.
Comment From: smasher164
I can branch off a separate discussion/proposal for this, but many new languages are starting to support arbitrary bit-width integers. This is, I suspect, a result of LLVM supporting this capability. If the comparable difficulty of supporting [u]int128 and [u]intN isn't too great, Go could leapfrog to support that instead.

int24 is used in go-audio, and int26/uint6 are used in x/image/fixed by utilizing shifts on an int32 to extract the integral and fractional parts to represent Int26_6.
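For illustration, a minimal sketch of the shift-based extraction described there, modeled on x/image/fixed's Int26_6 (the parts helper is my own, hypothetical):

```go
package main

import "fmt"

// Int26_6 mirrors x/image/fixed.Int26_6: a signed 26.6 fixed-point
// number packed into an int32 (26 integer bits, 6 fractional bits).
type Int26_6 int32

// parts extracts the integral and fractional components with a shift
// and a mask. For negative values the shift floors toward negative
// infinity.
func parts(x Int26_6) (integer, frac int32) {
	return int32(x >> 6), int32(x & 0x3f)
}

func main() {
	x := Int26_6(3<<6 | 16) // 3 + 16/64 = 3.25
	i, f := parts(x)
	fmt.Printf("%d + %d/64\n", i, f) // 3 + 16/64
}
```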
Comment From: JAicewizard
A small use case for 128-bit integers: I am writing a posit library, and while it might technically be possible to write posits using big.Int, the current implementation is already 2-3x slower than the Rust implementation (not a one-to-one translation, but there are no obvious big differences that could lead to such a gap), and that is using normal ints and masks to help the compiler optimize a lot. Using big.Int would remove all usability from 64-bit posits. Having n*2-bit integers is needed for things like multiplication without loss of precision.
Comment From: jwatte
- IPv6 (as discussed above by others)
- Crypto

Also: databases with 128-bit int support, like Postgres and Snowflake (and, I'm sure, many others). big.Int is an order of magnitude slower, or more.

Sure, ParseInt() is currently specified as returning a 64-bit int. I imagine that will have to stay. Most of the time 64-bit ints are enough, and are faster to boot. Call it a "long"? Call it an Int128?
Again, the main argument here is "there are a bunch of use cases that matter to people, but because the people doing the work and making the calls don't agree with those use cases (contrary to the community), it's not going to happen." Which, if that's how the Go maintainers feel about the community in general, is fine for them. It's not like I have a stick (or a carrot) to try to affect the outcome here, other than adding support in this thread.
Comment From: mvasl
128-bit arithmetic would definitely help me in my current project, where I basically have to compute something like UUID mod N to efficiently distribute events between N instances of handler replicas.

For now I only see the option of using the math/big package to perform such a computation without involving cgo or assembly, but big.Int performance is not enough for some cases. So here it is: a use case for arithmetic on what are basically random UUIDs.
Comment From: phuclv90
128-bit arithmetic would definitely help me in my current project, where I basically have to compute something like UUID mod N to efficiently distribute events between N instances of handler replicas.
@justmax437 in this case it's easy: just hash the low and high parts separately, then combine:
```go
h := high64 % N
l := low64 % N
bucket := (h ^ l) % N
```
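A runnable version of that snippet might look like the following (a sketch; the big-endian split of the UUID bytes is an assumption):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// bucketOf reduces each 64-bit half of a UUID mod n and mixes the
// results, as in the snippet above. Note that this is a hash-style
// distribution, not a true 128-bit "uuid mod n".
func bucketOf(uuid [16]byte, n uint64) uint64 {
	high := binary.BigEndian.Uint64(uuid[:8])
	low := binary.BigEndian.Uint64(uuid[8:])
	return ((high % n) ^ (low % n)) % n
}

func main() {
	id := [16]byte{0x6b, 0xa7, 0xb8, 0x10, 0x9d, 0xad} // arbitrary example bytes
	fmt.Println(bucketOf(id, 7))
}
```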
Comment From: mvasl
128-bit arithmetic would definitely help me in my current project, where I basically have to compute something like UUID mod N to efficiently distribute events between N instances of handler replicas.

@justmax437 in this case it's easy, just hash the low and high parts separately then combine:
h := high64 % N; l := low64 % N; bucket := (h ^ l) % N

Yeah, I know, but I still would like to see some native way to do this, either through SSE/AVX or 128-bit integer support. We already have the complex128 type, which is, well, more complex than plain integers.
Comment From: phuclv90
Yeah, I know, but I still would like to see some native way to do this, either through SSE/AVX or 128-bit integer support. We already have the complex128 type, which is, well, more complex than plain integers.
@justmax437 no, it's not possible to use SSE/AVX for 128-bit integer support, because they're meant for SIMD operations in parallel. There's no way to quickly propagate the carry from the low part to the high part. See:
- Is it possible to use SSE and SSE2 to make a 128-bit wide integer?
- practical BigNum AVX/SSE possible?
- Can long integer routines benefit from SSE?

They're useful for operating on multiple 128-bit integers in parallel, though. But for a single integer, scalar operations are still much faster. complex128 isn't a single integer but two numbers, so it's much easier to work on, as each number fits in a single register.
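To make the carry dependency concrete, a minimal scalar sketch of a full 128-bit addition built from two 64-bit limbs (my example, using math/bits):

```go
package main

import (
	"fmt"
	"math/bits"
)

// add128 adds two 128-bit values held as (hi, lo) limb pairs. The
// carry-out of the low limbs must feed the high limbs -- the serial
// dependency that SIMD lanes cannot express cheaply.
func add128(aHi, aLo, bHi, bLo uint64) (hi, lo uint64) {
	lo, carry := bits.Add64(aLo, bLo, 0)
	hi, _ = bits.Add64(aHi, bHi, carry)
	return hi, lo
}

func main() {
	hi, lo := add128(0, ^uint64(0), 0, 1) // (2^64 - 1) + 1 = 2^64
	fmt.Println(hi, lo)                   // 1 0
}
```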
Comment From: vikmik
- +1 to the "database" use case. Databases use large integer types (I'll add ClickHouse to the list: https://clickhouse.tech/docs/en/sql-reference/data-types/int-uint/ ), but using them from Go requires a bunch of boilerplate code, and some drivers opt for the route of not supporting them at all because it's all quite messy.
- Also, hash algorithms often produce a 128-bit or 256-bit output, which is often used as an identifier / DB column. 64 bits does not provide enough collision protection ( https://preshing.com/20110504/hash-collision-probabilities/ ) in a lot of use cases, so the use of custom larger integer types is common. Unfortunately, the language makes it tempting to use 64-bit content hashes just because that removes the need for a lot of boilerplate code, even though it is not a good practice.
Indeed, not being able to have a canonical type for hash values stored in DBs attracts a good share of complexity, like:
* error handling when dealing with []byte / [n]byte <-> [custom library type] conversions. For example, how to handle nil when the target type is a struct? These concerns tend to leak in many places. It's easy to write code that causes a panic, or, on the contrary, to write overly cautious code that handles errors that cannot happen. Similarly, the Go concept of "zero value" does not always translate well, or can be ambiguous
* dealing with [16]byte conversions from []byte. DB drivers and libraries may provide one or the other, so we're often left with awkward invocations of copy() in a compatibility layer (see the sketch after this list)
* All this is exacerbated when using multiple DBs at the same time (time series, K/V store, relational), or multiple libraries that each use their own custom type. It causes a proliferation of compatibility layers, which can have different failure modes.
- Lastly, 64-bit arithmetic can sometimes be quite difficult and/or unsafe when underflows/overflows are possible. Using math/big for this feels like driving a rather inefficient tank, and underflow/overflow-safe code can be tricky to write and test. An integer type > 64 bits would be welcome, as would a safe / straightforward / fast way to compute 64-bit values with intermediate steps that can overflow.
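As a sketch of the conversion boilerplate mentioned in the copy() item above (the length-16 value stands in for a 128-bit hash):

```go
package main

import "fmt"

func main() {
	raw := make([]byte, 16) // e.g. a hash handed back by a DB driver as []byte

	// Pre-Go 1.20 idiom: copy into a fixed-size array.
	var id [16]byte
	copy(id[:], raw) // silently zero-pads or truncates on a length mismatch

	// Since Go 1.20: direct conversion, which panics if len(raw) < 16.
	id = [16]byte(raw)
	fmt.Println(id)
}
```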
Comment From: robpike
It is not clear to me that the benefit comes close to outweighing the cost, which is substantial when rippled through all the code that must be updated and added to push these types through the library. This is not a small change, this is adding two new basic types to the language, which would touch huge swaths of the compiler, library, and community packages.
Obviously it would be "useful" (people don't propose useless things in earnest), and it might be convenient for some tasks, but not many, and not often. Having survived the transition from 16 to 32, and from 32 to 64, both of which seemed necessary at the time, there is clearly zero urgency for a transition to 128; it would be analogous to having a 64-bit type in the 32-bit era. But to be honest, far less widely used.
Rather than push towards adding another pair of locked-down types, and given the lack of urgency, I suggest instead that we think about other ways to grow the integer types, either by an extension mechanism that lets one define an integer of any size (which I first saw in PL/I; it's nothing new) or the thing I still feel is best: Making int have arbitrary precision, as described in https://github.com/golang/go/issues/19623. That approach achieves far more and puts the whole topic to bed.
Comment From: JAicewizard
Although I do like this proposal, and would be in favour, I do not think it would be a good replacement for uint128.

I think waiting until we are closer to real hardware support for 128-bit integers before implementing this would be a better approach. With Ice Lake only recently supporting 57-bit address ranges, it might take another decade (or two) before we actually start to need 128 bits for addressing. However, we could start to see hardware support sooner than that.
Comment From: ethindp
@robpike This is a pretty nonsensical excuse not to implement 128-bit integers. It doesn't even hold up under any kind of scrutiny. Libraries will use them when they want to. You don't need to update any code at all. Let the authors of the code in question do that. If it requires a breaking change in the stdlib, add a new method that does the same thing as the one you need to break and deprecate the old one. People can keep using the old one if they want, and you keep the Go compatibility promise as well.
I don't understand why we're still debating this issue. Do I have to point out that at least GCC implements 128-bit integers, DWORD integers, various floating-point types, and even half-precision floating-point types? Or that Rust implements the i128 and u128 types? How about the fact that LLVM allows arbitrary-width integers up to 2^23 bits in size? Or that Ada allows you to limit the ranges of integral types (theoretically with no limit) via subtype declarations?
We're not asking for all of these features to be added. Many would say that adding things like double-word integers or half-precision FP types would be a bit too much for now. We're asking for 128-bit integers to be added, and we have provided (many) different examples where they're useful (including the fact that using the big-int package is a pain).
Furthermore, this isn't even a "big" language change. It will negatively impact nobody if it's added. If you don't want to use it, don't. Simple as that. This is a change that will only be used when people need it. You might need software emulation to do it, but that's where you take advantage of things like what LLVM provides (or take inspiration from it).
I hate to rant like this, but this discussion has gone back and forth, everyone is dragging their feet, and nothing has even been attempted from what I can tell. All I see from my POV is excuse after excuse about how it's (supposedly) a bad idea, which is funny, because if it were such a bad idea, you wouldn't have that functionality in GCC and LLVM/Rust at minimum. The folks who work on GCC and LLVM aren't idiots. They're incredibly intelligent people. They don't add features like n-sized integers (with a limit that no computer is going to reach within the next century) for no reason. Clearly, if they added it, why not take advantage of it?
I would understand not doing so if the type in question had no use cases. But a number of use cases have been presented on this (very long, very overdrawn) issue that are very legitimate and would greatly ease the implementation of various features, including databases and IPv6. There are no excuses for not adding this type at this point. If you want to know how to actually do it, go look at how GCC or LLVM do it.
Like I said, I'm sorry for ranting like this, and I apologize if I went too far in this comment. But this is utterly ridiculous. This does not violate the Go compatibility promise in any way. It will not cause any major problems in the Go ecosystem, because those who want to use it will use it and those who don't won't. Considering that there are many practical uses for this type, as very clearly demonstrated by a lot of other comments in this issue in support of this proposal, there is absolutely no reason not to accept this proposal and get it added in, perhaps, Go 1.18 or 1.19. You have lots of sources and code and resources available to you if you want to know the internals of how to do this properly.
Comment From: smasher164
@ethindp This is the second time you've both apologized for ranting while continuing to rant anyways. Please keep the discussion civil, without accusing other replies of being "nonsensical" or "ridiculous".
No one is "dragging their feet" on this issue. Other languages and compilers adding a feature doesn't stand on its own as a reason for adding it to Go.
You say that we could just add [u]int128 and "keep everything else the same," but as @josharian replied in https://github.com/golang/go/issues/9455#issuecomment-780973060, a concrete proposal for this feature should outline changes to the spec and implications for the standard library (regardless of backwards compatibility).
Comment From: robpike
@ethindp Maybe I wasn't clear enough, but your comment doesn't address in a helpful way what I wrote, which I think was nonnonsensical and nonridiculous. It can be summed up as: If there is no urgent need for this feature, then it can wait while we look for a general approach that avoids adding yet another integer type pair to the language, but fixes the problem (whatever it is) for perpetuity.
Yes, it's easy conceptually just to add a couple of new types, although the cost as many have said is significant, but good design means looking for general solutions rather than just adding features. I'm arguing that we should respond to the problem, not the proposed solution.
Comment From: batara666
Just out of curiosity: would it be easy for the Go core team to implement int256?
Comment From: batara666
@ethindp why not just make a PR that implements int128 for Go?
Comment From: seebs
I think that if a change is going to add int128, that change should also either add int256 or make it clear to everyone when and how int256 would be added, if it were to be added later. Like, right now, it's unclear whether we would ever really want int256 (although I'm sure the people who worked on AVX would at least consider advocating for it), but I think that if int128 gets added, the process should also establish what the requirements are for considering new int types, and when/how they would be named, or how the namespace questions would be addressed, and so on.
Personally, I would like parametric integer types to be available and possible. It might be worth looking at what C did, although Go lacks C's namespace problems. On the other hand, I'd love to be able to express the distinction between "I need an integer of at least 32 bits, but whatever's efficient is fine" and "I want exactly 32 bits".
I would sort of like to be able to express things like 24-bit or 48-bit values more naturally, but I also worry about the hidden costs they'd carry on most modern hardware. Similarly, I'd sort of like saturating types, but worry about their hidden costs. And I don't think C's solution of "we describe how they would be specified but you don't have to provide them" is a great fit for Go; one of Go's strengths is that you don't have to worry about the hardware's specific capabilities. On the other hand, "you can definitely have a 24-bit integer type but we can't promise it's fast enough to be usable" is sort of awful. (But if you're on a hypothetical DSP with native 24-bit hardware available, it'd be NICE to be able to use that for values that don't fit in uint16, wouldn't it?)
Long story short, I think Rob's right and that this is a thing important enough to do well. I don't entirely like the state of not having uint128, but I will note that, except for multiplying int64 values, I've never wanted int128, only uint128.
Comment From: Bjohnson131
I'd just like to add that it seems like most arguments against int128/uint128 are just a stall for the inevitable. They're becoming more commonly used every day. I do think that a lot of effort has to go into this change -- certainly it's not trivial for x86 systems -- but when I hear that the work is not worth the payoff, I know that time is not on their side.
Comment From: chiro-hiro
Some zk-STARKs use 128-bit integers. The implementation in Rust looks pretty good with uint128.
Comment From: Shaptic
Adding another real-world use case to the list: we use 128-bit integers to hold intermediate calculations about trade outcomes among 64-bit values. https://github.com/stellar/go/blob/f6989352abafe4b7a93d6e35c2eafaf040305091/exp/orderbook/pools.go#L106-L118
This code is a hot path in a tight loop that involves exploring massive orderbook graphs, so it needs to be as performant as possible. As of today, it uses math/big.Int, but obviously having native language support would result in code that is more performant, more readable, and less error-prone.
Comment From: CAFxX
I'm still on the fence on this issue, but just for the sake of discussion: another use case for uint128 would be providing hash.Hash128 to mirror the existing hash.Hash32 and hash.Hash64. Even without changing their signatures, hash/fnv.New128 and hash/fnv.New128a would be obvious candidates to make use of the new type.
(In theory, Hash128 could be defined even today by returning a pair of uint64. While this was not discussed at the time, I suspect the reason this was not done had to do with the uncertainty surrounding whether we would eventually add 128-bit types to the language.)
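Concretely, such an interface might look like the following sketch (hypothetical; hash.Hash128 does not exist in the standard library):

```go
package hashsketch

import "hash"

// Hash128 is a hypothetical interface mirroring hash.Hash32 and
// hash.Hash64, with the 128-bit sum returned as two uint64 halves in
// lieu of a native uint128 type.
type Hash128 interface {
	hash.Hash
	Sum128() (hi, lo uint64)
}
```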
Comment From: lfaoro
We're tired of using the slow and tiresome big package with its dreadful API for computing *big.Ints.

We need int128 and int256 for dozens of use cases -- everybody does; please acknowledge it.
Comment From: ernado
Lack of a built-in uint128 leads to [2]uint64, which can be sub-optimal.

I was recently rewriting CityHash from [2]uint64 to struct{ Low, High uint64 }. This was a very counter-intuitive optimization:
```
name            old time/op   new time/op   delta
CityHash64-32   336ns ± 0%    145ns ± 2%    -56.92% (p=0.008 n=5+5)
CityHash128-32  353ns ± 0%    149ns ± 2%    -57.81% (p=0.008 n=5+5)

name            old speed     new speed     delta
CityHash64-32   3.04GB/s ± 0% 7.07GB/s ± 2% +132.16% (p=0.008 n=5+5)
CityHash128-32  2.90GB/s ± 0% 6.87GB/s ± 2% +137.06% (p=0.008 n=5+5)
```
Comment From: Al2Klimov
Whether or not you add this, consider also float128, for more or less the same reasons and for symmetry.
Comment From: JAicewizard
I don't see a reason for float128? Almost all the reasons listed don't apply to floats. It's also off-topic; maybe a new issue should be opened?
Comment From: Bjohnson131
I don't see a reason for float128? Almost all the reasons listed don't apply to floats. It's also off-topic; maybe a new issue should be opened?

There are plenty of reasons for float128s. And you're right, they're not listed / off-topic here.

If an issue is created, feel free to link it here to redirect users.
Comment From: anatol
Modern hardware is starting to use 128-bit numbers as a future-proof way to handle data.
For example, a lot of data structures in the NVMe specification use 128 bits (e.g. for SMART log information).
Comment From: MagicalTux
The new net/netip package is using its own uint128:
https://cs.opensource.google/go/go/+/master:src/net/netip/uint128.go
Comment From: josharian
Another use case, admittedly rare: I have a project in which I do lots of bitwise manipulation of known-length small strings. For performance, I am using ints to hold the string contents. For strings of length 9-16, I have effectively recreated a uint128 type. If uint128 were in the language, beyond convenience, the compiler could use 16-byte registers for my (SIMD-friendly) code. As it is, I hand-wrote assembly.
Comment From: x1unix
A good example of a real-world use case (from a current project): cryptomining profit calculation.

The current net hashrate is usually provided in kilohashes and overflows a 64-bit value (e.g. 197396944003512530000).

Also, the formula for calculating profit requires 128-bit numbers to store intermediate values (currently this can be replaced using bcmath).
Comment From: lfaoro
The new net/netip package is using its own uint128: https://cs.opensource.google/go/go/+/master:src/net/netip/uint128.go

This is pretty hilarious: the team refuses to implement 128-bit types in the language for all Go users, claiming they are neither necessary nor useful, but then implements an internal 128-bit type for stdlib packages.
Comment From: danderson
Co-author of that internal library here: I don't think that's a particularly strong argument for adding a builtin type that the compiler has to understand.
The netip package went through about 3 iterations before we felt the need for a uint128 struct, and even then it's a very stripped-down implementation that doesn't support all arithmetic and logic operations. It's a bespoke type designed for the needs of netip, not something generic.
If anything, the existence of that type is an argument for providing an int128 implementation as a standalone library, which doesn't require a proposal or stdlib changes. But this proposal is about adding int128 as a builtin type, which has much wider ranging implications for the compiler and language as a whole.
Implementing netip would have been easier if a uint128 type existed natively, for sure. But "ha, even the stdlib needs a 128-bit int" isn't much of a gotcha, given how simple it was to implement as a library without compiler alterations, and how it only supports a reduced set of semantics that are necessary for IP addrs.
Comment From: Bjohnson131
As someone who hasn't worked on compilers before, how much work would it be to add int128/uint128 for just the amd64 arch?

Not implying that's all that's being asked for here, but there's been an incredible amount of pushback on what seems to be a rather simple feature.
Comment From: ethindp
There's a useful Stack Overflow discussion if you're interested. I'm not sure why there's been so much push-back when the two top compilers in the world already implement it. I don't see why we can't just take what they did and adapt it accordingly. Some of the complaints about it (e.g. address spaces not being 128-bit) aren't really legitimate complaints against the feature, since paging is completely unrelated to the discussion at hand, and you're not going to be working with 128-bit pointers on any architecture, but the feature (128-bit integers) is still useful.
Comment From: mvdan
To give some context given the recent attention, the main worry here is changes to the language: as per https://go.dev/doc/go1compat, once a feature is in the spec, it can practically never be removed or changed. That's why Go is usually slow to make changes to the language, no matter how seemingly easy they are to implement.
Comment From: ethindp
@mvdan I mean, I fully understand that. But what I'm getting at (and what I think everyone else who's in favor of this feature is getting at) is that it won't impact the language at all. It will just be an optional type. It will require zero changes to code for people who don't want to use it.
We could submit this as a formal change, I suppose, but what would be the point? The "language impact" statement would pretty much say "there is no language impact to adding this feature". If this were a proposal to modify the function syntax or something, then I'd understand and would be on board with the discussions of language impact, because that would actually cause problems for people, depending on what the change was. But for forwards-compatible features like this, where you're just adding an extension to the language that has no actual impact other than in its implementation (and even then, the implementation could be based on the many implementations that already exist), I feel like the process should be a bit less formal. I would think that this discussion is a proposal in itself; I imagine that if the OP had originally gone through the formal channels to get this feature added, this same discussion would've arisen. But if there's a hard requirement that this discussion should be compacted and rewritten into a formal proposal, I think many of us would be happy to take on that mantle, though the "cons" section would be pretty empty (a lot of the anti-128-bit-integer reasons look like faulty reasoning to me in many cases).
Comment From: robpike
@ethindp That's not accurate. Much of the standard library would need updating: fmt, reflect, most encoders and decoders, crypto, and many more, while other swathes of the library might want to be updated as well. Most packages in the open source world that do similar things will also need updating. And there will then be compatibility cascades resulting from those changes.
Adding a core type to a language is not a "zero change" thing to do, even for "people who don't want to use it".
I'm not arguing for or against the change, at least not in this message, but your implication that it's essentially free to do this is just not true.
Comment From: 0xN0x
@ethindp That's not accurate. Much of the standard library would need updating: fmt, reflect, most encoders and decoders, crypto, and many more, while other swathes of the library might want to be updated as well. Most packages in the open source world that do similar things will also need updating. And there will then be compatibility cascades resulting from those changes.
Adding a core type to a language is not a "zero change" thing to do, even for "people who don't want to use it".
And this is why this should have been done long ago; this issue dates back to 2014. The longer it takes to be accepted, the more complicated and time-consuming it will be to implement.
Comment From: beoran
#53171 seems relevant here. If we add a (u)int128 type, on several architectures it can be handled with SIMD or similar instructions, giving us the benefit of optimization without having to write assembly.

The idea of making int a flexible integer type, which can grow automatically, is also great; Ruby has this, for example. But it is a separate issue, so maybe @robpike could open a new issue for it?
Comment From: josharian
making int a flexible integer type
That's https://github.com/golang/go/issues/19623.
Comment From: josharian
Generics provides a potential answer to the strconv problem.
The strconv problem (also present in reflect and any number of other places) is functions and methods parameterized across int types. That parameterization happens in several ways: by function name, by parameter, or by returning the largest possible type (which still requires further parameterization for signed vs unsigned). In all instances, adding a new int type requires a large, messy API addition.
This type parameterization would probably be better accomplished with generics. (As anyone who has written much generics code quickly discovers, the non-generic stdlib is a significant stumbling block, often requiring reflect-based adapters, so there's a good reason to do this anyway.)
And once you have type parameterization, adding a new, larger int type becomes much lower impact.
It'd still be good to take a thorough pass through and make sure this would handle all the necessary cases. But this at least seems to me to be a promising and principled general approach.
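As a rough illustration of that direction, here is a hypothetical generics-based parse function (the Unsigned constraint and ParseUnsigned name are my own, not a proposed API). It parses into the widest existing type and range-checks against the target, so a future wider type would mean extending the constraint rather than the API surface:

```go
package main

import (
	"fmt"
	"strconv"
)

// Unsigned is a hypothetical constraint over the fixed-size unsigned
// integer types.
type Unsigned interface {
	~uint8 | ~uint16 | ~uint32 | ~uint64
}

// ParseUnsigned parses into uint64 and then range-checks against T.
// ^T(0) is the maximum value of an unsigned type T.
func ParseUnsigned[T Unsigned](s string, base int) (T, error) {
	v, err := strconv.ParseUint(s, base, 64)
	if err != nil {
		return 0, err
	}
	if v > uint64(^T(0)) {
		return 0, fmt.Errorf("value %q overflows target type", s)
	}
	return T(v), nil
}

func main() {
	v, err := ParseUnsigned[uint16]("65535", 10)
	fmt.Println(v, err) // 65535 <nil>
}
```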
Comment From: thockin
I just wanted to +1 the idea of 128 bit ints as a core feature. We have a path where we represent the size of an IP range, and IPv6 ranges can be larger than 64 bits. uint128 would have fit the bill. We can work around it with big.Int or something, but +1 to it just being a standard thing.
Comment From: lfaoro
I don't understand why it's such a big deal to just do it. There are 128 comments on this issue, it's been open since 2014, and other mainstream languages have already met this user need.
What else do you need to consider support for int128? We don't want to use big.Int anymore.
Comment From: gophun
@lfaoro It's been explained why it would be a big deal: https://github.com/golang/go/issues/9455#issuecomment-1110344614
Comment From: phuclv90
We have a path where we represent the size of an IP range, and IPv6 ranges can be larger than 64 bits. uint128 would have fit the bill.
@thockin an IPv6 address isn't a number and you don't do arithmetic on it. Only bitwise operations are needed. Therefore that's not a good reason to add the type. There are already lots of comments regarding IPv6 above
Comment From: thockin
Regretfully, we find ourselves in a place where the number of IPs in a given CIDR is used, and changing that is harder than just using a bignum.
Comment From: chiro-hiro
If anyone tries to build a WebAssembly runtime with Go, int128 would help.
Comment From: JesseCoretta
Hi!
I have read the comments on this thread, and for the most part I see both points of view. But one thing occurs to me, and while it is somewhat related to UUIDs, unfortunately this can of worms goes a little deeper when you consider ASN.1 Object Identifiers.
Per [X.667], there is precedent for UUIDs to be interpreted as actual integers in certain Object Identifiers. One that exists in the wild, just for those who are curious, is:
```
// {joint-iso-itu-t(2) uuid(25) ans(987895962269883002155146617097157934)}
2.25.987895962269883002155146617097157934
```
Now, many of you can look at that leaf arc and see quite clearly that it will overflow not just int64 but uint64 as well. By quite a bit, actually:
- 340,282,366,920,938,463,463,374,607,431,768,211,455 for uint128
- 18,446,744,073,709,551,615 for uint64
- 9,223,372,036,854,775,807 for int64
Now, I know some of you might be biting your tongues, thinking "oh no ... if she's right, that would mean the encoding/asn1-provided ObjectIdentifier type is basically invalid too, right?".

Well, in a way it has always been invalid (at least from my point of view). No offense to the Go team; it's just that because int is used as the slice type, and int allows negative values -- something that should never, ever appear in an OID -- that's something else I need to account for manually in my code. And this isn't even taking the overflow issue into account.
That said, I'm not here to bash the asn1 package, nor am I directly requesting any changes to it. I know how the Go team feels about this package, but quite frankly I love ASN.1 and I am just grateful it is supported via my favorite language.

But I am voting in favor of uint128 support (and, I suppose, int128 by necessity), just from my obscure PoV. I can easily make my own OID type (e.g. []uint128) and do what I need to do, never worrying about overflows involving legal UUID-based OIDs. So I guess, from where I stand, it's sort of a compromise.
One counter-argument that I can foresee:

"OIDs have no set limit on the magnitude of individual arcs, so conceivably this would be an issue when someone registers an OID that has some ungodly long number that would overflow the largest supported uintXXX type in the universe."

This is true. One can only do so much. I'm also certain that OID RAs (Registration Authorities) look for such questionable moves and perhaps mitigate them. But you're not wrong. And yet, [X.667] exists for a reason, so I would cite that as motivation for supporting uint128. At least those registrations are legitimate, if a little odd ...
Thank you for listening
Jesse 💜❤️
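As a quick illustration of the overflow (my example, runnable today): the UUID-derived arc above cannot be parsed into a uint64, and currently has to live in a big.Int or a hand-rolled uint128:

```go
package main

import (
	"fmt"
	"math/big"
	"strconv"
)

func main() {
	arc := "987895962269883002155146617097157934"

	// The arc overflows uint64, so strconv refuses it.
	_, err := strconv.ParseUint(arc, 10, 64)
	fmt.Println(err) // value out of range

	// Today the arc must be held in a big.Int (or a custom uint128).
	n, _ := new(big.Int).SetString(arc, 10)
	fmt.Println(n.BitLen()) // 120 -- fits comfortably in 128 bits
}
```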
Comment From: Bjohnson131
Is the team still looking for "strong use cases" for uint128?
Comment From: josharian
@Bjohnson131 more strong use cases are welcome. Please "load hidden" on the comments on this issue first, though, and take a pass through--it's easy to miss things that have already been discussed above.
I suspect that the most helpful thing at this moment might be to write a complete doc, paying particular attention to what packages might be impacted (search for the text "strconv" and "generics" in comments after loading all hidden), and what the answer there is. But that's a lot of work, with no guarantee of impact.
(It'd also be helpful to update the original post with a list of use cases proposed in the comments.)
Comment From: phuclv90
FWIW, .NET added support for 128-bit integers not so long ago, and you may want to check out the discussions:
- https://github.com/dotnet/runtime/issues/20640
- https://github.com/dotnet/runtime/issues/67151
Comment From: josharian
@phuclv90 thanks! That conversation is strikingly similar to this one, including (perhaps) the denouement of using generics once they were available to solve the strconv problem.
Comment From: Bjohnson131
I think there are two things that people have not mentioned: (u)int128s allow us to write cleaner, more error-free software. Period.
There's an air among communities that this isn't a valid reason, as people should write good code. Often, though (and we all know we're all guilty of this), our code isn't good or completely readable. (u)int128s would help everybody in this regard every time they're used, avoiding messy code as well as code with bugs.
Intrinsically, this value-add should be obvious: more people writing better, bug-free code is good for the entire ecosystem.
Comment From: c-robinson
It looks like the Go source indicates that it might be useful to have at least a uint128. There are multiple references to uint128 peppered around the crypto/aes package, as well as a type in crypto/internal/edwards25519 for uses that are waaaaay over my head:
```go
// uint128 holds a 128-bit number as two 64-bit limbs, for use with the
// bits.Mul64 and bits.Add64 intrinsics.
type uint128 struct {
	lo, hi uint64
}
```
and then net/netip defines the same type with the same name, but adds a bunch of methods... so much so that the netip implementation seems like it would be entirely usable as a replacement for, say, a uint128-sized value stored in a big.Int.
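For readers unfamiliar with the limb representation, a minimal sketch of how such a two-limb type pairs with the math/bits intrinsics mentioned in that comment (my illustration, not code from crypto/internal/edwards25519):

```go
package main

import (
	"fmt"
	"math/bits"
)

// uint128 holds a 128-bit value as two 64-bit limbs, as in the
// edwards25519 snippet above.
type uint128 struct {
	lo, hi uint64
}

// mulAdd64 computes a*b + c as a 128-bit result: bits.Mul64 yields the
// full 64x64 -> 128 product, and bits.Add64 propagates the carry from
// adding c into the low limb.
func mulAdd64(a, b, c uint64) uint128 {
	hi, lo := bits.Mul64(a, b)
	lo, carry := bits.Add64(lo, c, 0)
	return uint128{lo: lo, hi: hi + carry}
}

func main() {
	r := mulAdd64(^uint64(0), ^uint64(0), 1) // (2^64-1)^2 + 1
	fmt.Printf("hi=%#x lo=%#x\n", r.hi, r.lo)
}
```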
@minux's initial counter-argument to this proposal hinged on the statement that
The other use cases can be done with math/big just as easy.
But that would seem to be in conflict with the netip approach.
Comment From: Eisenwave
I have written a proposal, P3140: std::int_least128_t, for C++.

The Motivation section in that proposal is largely language-agnostic, so it might be helpful in this discussion.
Comment From: andreyvit
+1. My use cases: arithmetic -- 128-bit fixed-point numerics (high-precision monetary values); bitwise ops -- hashing, identifiers, larger bitmasks. Like others have noted, code simplification for bitwise operations is a noticeable benefit.
However, if the team wants an alternative proposal, we could allow arithmetic and bitwise ops on [n]uint64 types. This would take care of uint128, uint256, and similar, without extending the list of primary types. Then extending fmt, strconv, json, and other packages would be an optional effort that doesn't have to happen in lockstep with compiler changes (or maybe doesn't need to happen at all).
Comment From: sylr
+1. My use cases: arithmetics — 128-bit fixed-point numerics (high-precision monetary values)
Same here: an int128-based fixed-point numeric could be precise enough for cryptocurrencies such as Bitcoin while being "large" enough to also be used for market capitalizations in fiat currencies.
Comment From: andreyvit
I went through my codebase, and I'm more in favor of the [2]uint64 proposal now (i.e. allow treating [n]uint64 as integers for arithmetic, bitwise ops, comparisons, maybe even literal initializers) than of a plain uint128 type. This would be assigning new semantics to something that is already 99% used for bignums, won't break any existing code, and won't require any library changes (we can add extra strconv functions for these, but, most importantly perhaps, no sweeping reflection changes across all serialization libs).
People can add MarshalText etc. on top of these as appropriate for their domains (which will probably produce hex, base64, or even dashed UUID formatting).
The reasoning for [n]uint64 over int128 etc. is:
- in at least two cases, I really want a uint256, not a uint128
- formatting and parsing is domain-specific (and materially differs in different places in my code)
- I bet at some point in the future people will start asking for uint512, uint1024, etc.
- really, the annoying part making code hard to read is operators; if those are solved by the compiler, everything else can be added trivially
Should I open a separate ticket for discussion, or is this a non-starter @ianlancetaylor @robpike?
Comment From: gaby
Why isn't https://github.com/golang/go/blob/master/src/net/netip/uint128.go exported? It would solve part of this issue.
Comment From: Bjohnson131
I'm more in favor of the [2]uint64 proposal now
Allow me to play the consistency questioner.
- Will a float128 then be called a [2]float64?
- How will you express generic functions that accept []uint64s as well as []uint128s?
Let me say this now: if [2]uint64 worked for my use case, I would not be here. What you're proposing is something that is already taught as a workaround for the lack of u128 support, and it is not a solution to many of the use cases named here.
Comment From: andreyvit
@Bjohnson131 Could you please clarify which use cases are not covered by [n]uint64? I feel like I've read the entire thread, but cannot recall any that fit; I would love to consider those.

Let's make sure we're talking about the same thing. I'm proposing that the following code should be legal (for any operator in place of +):
```go
var a, b, c [2]uint64
a = b + c
```
Re: generics overall, I'd imagine initially there'd be no way to write generic code that works for [n]uint64 with any n, but that's not any worse than having a uint128 type. You can probably write generic code over something like ~[2]uint64 | ~[4]uint64 if needed.
But perhaps I misunderstand your question about generics. If we imagine that type uint128 = [2]uint64, then accepting either []uint64 or []uint128 looks like func xxx[T uint64 | [2]uint64](v []T).
Re: float128, I don't imagine this being applicable to floats. Long uints really are often represented as [n]uint64, but that's not true of floats at all.
Comment From: dylan1951
Why does it take 10+ years to add 128-bit ints despite the enormous demand for them?
Comment From: DmitriyMV
@dylan1951 please refer to https://github.com/golang/go/issues/9455#issuecomment-1110344614
Comment From: dylan1951
@dylan1951 please refer to #9455 (comment)
The comment says it's not a zero-change thing to do, and says it isn't arguing against the change. What's your point?
Comment From: c-chk
Zig has u666 if needed, but these guys need to drag us along for A DECADE before they give us something to work with.
Comment From: rasa
Go+ natively supports int128s and uint128s, as well as bigint, but not float128.
Comment From: cespare
I needed uint128 again today, for a radix tree of IP prefixes. I ended up rolling my own thing again, but it sure would have been convenient. And the fact that net/netip includes its own internal uint128 implementation that I can't use is a bit galling.
Comment From: sylr
@ethindp Maybe I wasn't clear enough, but your comment doesn't address in a helpful way what I wrote, which I think was nonnonsensical and nonridiculous. It can be summed up as: If there is no urgent need for this feature, then it can wait while we look for a general approach that avoids adding yet another integer type pair to the language, but fixes the problem (whatever it is) for perpetuity.
Yes, it's easy conceptually just to add a couple of new types, although the cost as many have said is significant, but good design means looking for general solutions rather than just adding features. I'm arguing that we should respond to the problem, not the proposed solution.
I remember a Rust core developer reviewing Go's type parameters whitepaper as follows: "boring; boring is good in this case".
It's been nearly 11 years, and what I understand from the comments is that people would like to be able to do regular arithmetic on things larger than (u)int64. There are already means to solve the described use cases, but none as simple or expressive as having access to regular arithmetic.
After this long, we can probably admit that either no one is looking for a "general approach", or maybe there isn't one?
So I'm asking: couldn't Go just do the boring thing and implement (u)int128 the way people expect, and the way it has been done elsewhere?