
Topic: C# or C/C++ code to convert Bitcoin brainwallet to public address

legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Yep, I'm with you. But, it's worth thinking about where basing decisions on a criterion like that will eventually lead, no?
Absolutely. So what criterion was the lack of a guaranteed data type wider than 64 bits based on, in C89? At the time that standard was published (1989), one could argue that problems didn't need integers bigger than that, or that such a size would have been impractical on the processors of the time (I think MIPS32 and MIPS64 were popular in the early '90s, and neither could handle numbers wider than 64 bits natively). And judging by pooya's reply, we haven't exceeded that on the newest processors either.
legendary
Activity: 3472
Merit: 10611
Sure, it's possible, but to rephrase my question: why isn't it standard already? I mean, the last C standard was published in 2017, long after big integers became a need. And still, the largest standard integer you can define is 64 bits long.
I suppose it is because of hardware limitations. We haven't really improved a lot of things in CPUs: x64 was released about two decades ago and there is no x128 on the horizon, and clock rates, for example, haven't really improved for decades (we just get more cores to make CPUs faster).
Our hardware simply cannot handle integers bigger than 64 bits (there are no wider general-purpose registers), so bigger integers have to be implemented using smaller chunks, which is what plenty of existing math/arithmetic libraries already do. That's why I don't think the standard itself needs to include them.

However, many languages have added bigger integer types. You already know the one in C++; there is also Int128 in dotnet. It took them seven versions of .NET Core and at least five years to implement it, and as you can see it doesn't do anything special: it uses two UInt64 limbs, and the arithmetic is a bunch of branches handling the overflow, which is not the fastest approach.
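To illustrate the technique, here is a minimal sketch in C (not dotnet's actual Int128 code; the type and function names are invented for the example): an unsigned 128-bit addition built from two 64-bit limbs, with the overflow handled by an explicit carry check.

Code:
#include <stdint.h>

/* Hypothetical 128-bit unsigned type made of two 64-bit limbs. */
typedef struct {
    uint64_t lo;
    uint64_t hi;
} u128;

/* a + b, wrapping modulo 2^128. */
static u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    /* If the low limb wrapped around, carry 1 into the high limb. */
    r.hi = a.hi + b.hi + (r.lo < a.lo ? 1u : 0u);
    return r;
}

That carry check on the low limb is exactly the kind of extra work that makes this slower than native single-register arithmetic.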
hero member
Activity: 510
Merit: 4005
(...) To put it this way: if the average practitioner wants to write a program with 256-bit integers, and acknowledges he's incapable of maintaining that in C, won't he just switch to an alternative, like Python?
Yep, I'm with you. But, it's worth thinking about where basing decisions on a criterion like that will eventually lead, no?

Adding things to C out of fear that people might otherwise abandon it for an easier language will (over time) turn the spec into a real mess...

Each decision in isolation seems justifiable, but in the limit, they'll eat C from the inside out and turn it into a language with no clear vision and no well-defined purpose.

I think it's worth reflecting on just how much can be (and has been) accomplished with C89. If you compare the "complexity" of C89 to its "expressive power" (just as an abstract exercise, don't think too deeply about defining those terms, or their units, etc.) you'll find that it hits a kind of magical sweet spot (same thing is true, and in an even more compelling way, for Forth and Lisp). The old guard really knew what they were doing, and every programmer I admire "gets" C (even if they no longer use it) in a way that most modern programmers don't (or maybe can't).

Humanity seems to agree that things should always be changing, but in my experience, this appetite for improvement presents most strongly in people that don't actually know what they're doing.

(Don't take anything I've said personally, none of my dismissive commentary is directed at you, I'm just sharing my thoughts.)
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Every time something is made easier for people, there's a corresponding drop in the skill level of the average practitioner.
Hmm. I don't think that's enough of an argument to not have standard replacements of this kind. To put it this way: if the average practitioner wants to write a program with 256-bit integers, and acknowledges he's incapable of maintaining that in C, won't he just switch to an alternative, like Python?

You can do that in any language by constructing something (e.g. a struct in C#) that consists of multiple instances of primitive types storing the bits. For example, if you want to create a 128-bit data type, you'd create a struct holding two 64-bit integers on an x64 machine, or four 32-bit integers on an x86 machine.
Sure, it's possible, but to rephrase my question: why isn't it standard already? I mean, the last C standard was published in 2017, long after big integers became a need. And still, the largest standard integer you can define is 64 bits long.
legendary
Activity: 3472
Merit: 10611
You just define the data structure as array of bytes (byte[]), with known length and operate on it as it is.
FWIW, using bytes (i.e. 8-bit unsigned integers) is the least efficient way of storing the bits, and also of performing the arithmetic, since you are only operating on 8 bits at a time, whereas bigger primitive types exist that let you operate on more bits at once, such as 32-bit integers ({U}Int32) and 64-bit integers ({U}Int64).

Of course, in terms of performance, much depends on the CPU architecture, and using a 64-bit type is not always faster than using a 32-bit type; it could be the opposite on a 32-bit machine.
Regardless of the architecture, using 64-bit types may not always be the best option, considering that you have to handle overflow; multiplication in particular is going to be tough if you use full 64-bit chunks.
This is why in libsecp256k1 something like the field element is implemented using 64-bit integers but only 52 bits of each limb are used; the remaining 12 bits are there to absorb overflow. That is a radix-2^52 representation of the 256-bit integer.
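As a rough sketch of that representation in C (not the actual libsecp256k1 code; the names are made up for the example), the 12 spare bits per limb are what allow several additions to be chained before any carry propagation is needed:

Code:
#include <stdint.h>

/* 256-bit field element as five 52-bit limbs (radix 2^52):
   value = n[0] + n[1]*2^52 + n[2]*2^104 + n[3]*2^156 + n[4]*2^208 */
typedef struct {
    uint64_t n[5];   /* each limb normally holds at most 52 bits */
} fe52;

/* Limb-wise addition: with 12 spare bits per limb, several additions
   can be performed before any carry/normalization step is required. */
static void fe52_add(fe52 *r, const fe52 *a, const fe52 *b)
{
    for (int i = 0; i < 5; i++)
        r->n[i] = a->n[i] + b->n[i];   /* carries are deferred */
}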
legendary
Activity: 952
Merit: 1386
I'm just curious why I can't define a 256-bit or x-bit data type in C.
You can do that in any language by constructing something (e.g. a struct in C#) that consists of multiple instances of primitive types storing the bits.
(...)
Ignoring the fixed length and the specialization of the implementation, the above code is essentially what the BigInteger class in dotnet (C#) does.

It is a bit complicated (https://stackoverflow.com/a/54815033), but you may also use a 128-bit type. Of course, in terms of performance, much depends on the CPU architecture, and using a 64-bit type is not always faster than using a 32-bit type; it could be the opposite on a 32-bit machine.

And of course BigInteger could be the solution, in C# or Java.
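For reference, that 128-bit type in C is a compiler extension rather than standard C. A minimal sketch assuming GCC or Clang (printf has no conversion specifier for it, so the value is printed as two 64-bit halves):

Code:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* GCC/Clang extension; not part of standard C. */
    unsigned __int128 x = (unsigned __int128)UINT64_MAX + 1;   /* 2^64 */
    x *= 3;                                                    /* 3 * 2^64 */

    /* printf has no specifier for __int128, so print two 64-bit halves. */
    printf("%llu * 2^64 + %llu\n",
           (unsigned long long)(x >> 64),
           (unsigned long long)(uint64_t)x);
    return 0;
}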

full member
Activity: 297
Merit: 133
Question of mine: how can you do this in C? Not sure about C++, but in C the longest integer type (long long) is only guaranteed to be at least 64 bits. In most systems it's exactly 64 bits, and I'm quite sure no system's integers reach 256 bits, which is what SHA-256 requires. (Also, this)

Is there a known library that lets you do some sort of "tricks" with the processor?

You don't need any additional libraries.

You just define the data structure as an array of bytes (byte[]) with a known length, and operate on it as it is.

Alternatively, you can make something called a union, where you put a set of bytes and other typed data into one structure so that they all share the same memory, letting you view the same bytes through different types.
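A rough C sketch of that union idea (the names are invented for the example; which word a given byte lands in depends on the machine's endianness):

Code:
#include <stdint.h>
#include <string.h>

/* Two views of the same 256 bits of storage. */
typedef union {
    unsigned char bytes[32];   /* byte-wise view      */
    uint64_t      words[4];    /* 64-bit chunk view   */
} u256_view;

/* Example: clear it through the word view, then poke one byte.
   Which word that byte lands in depends on endianness. */
static void demo(u256_view *v)
{
    memset(v->words, 0, sizeof v->words);
    v->bytes[0] = 0xff;
}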
full member
Activity: 161
Merit: 230
You can define any data type you like in C/C++ with structs/classes; they just aren't native data types, and you have to implement the operations on them yourself. With operator overloading in C++ you can even have them act like native numbers with +, -, etc.
legendary
Activity: 3472
Merit: 10611
I'm just curious why I can't define a 256-bit or x-bit data type in C.
You can do that in any language by constructing something (e.g. a struct in C#) that consists of multiple instances of primitive types storing the bits. For example, if you want to create a 128-bit data type, you'd create a struct holding two 64-bit integers on an x64 machine, or four 32-bit integers on an x86 machine. You can see how it is done in C for a 256-bit integer with 4x 64-bit chunks and 8x 32-bit chunks (of course the implementation is specialized for use in ECC).
Ignoring the fixed length and the specialization of the implementation, the above code is essentially what the BigInteger class in dotnet (C#) does: it uses an existing primitive type (UInt32) to hold the bits/chunks and performs the arithmetic on them.
hero member
Activity: 510
Merit: 4005
Not sure I understand. You're saying that if I wanted a 32-bit data type, and my only available data type was unsigned char, I could design a 32-bit data type by defining a struct as follows?
Yup, but that would only get you like 1% of the way there (defining the struct, that is; the bulk of the work is in defining the operations).

And so, if I wanted to store integer 2^32-1, I'd fill in each of these fields with 255?
Exactly, (255*256**3) + (255*256**2) + (255*256**1) + (255*256**0) == (1*2**31) + (1*2**30) + ... + (1*2**0) == 2**32-1 (brackets for readability).
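In C terms, filling those four base-256 "digits" from a 32-bit value (and reading it back) is just shifting and masking. A small sketch built around the struct from the quoted question (pack/unpack are just illustrative names):

Code:
#include <stdint.h>

struct integer_type {
    unsigned char byte_1;   /* most significant base-256 digit  */
    unsigned char byte_2;
    unsigned char byte_3;
    unsigned char byte_4;   /* least significant base-256 digit */
};

/* Pack a 32-bit value into four base-256 digits. */
static struct integer_type pack(uint32_t v)
{
    struct integer_type r;
    r.byte_1 = (v >> 24) & 0xff;
    r.byte_2 = (v >> 16) & 0xff;
    r.byte_3 = (v >>  8) & 0xff;
    r.byte_4 =  v        & 0xff;
    return r;
}

/* Recover the value: byte_1*256^3 + byte_2*256^2 + byte_3*256 + byte_4. */
static uint32_t unpack(struct integer_type x)
{
    return ((uint32_t)x.byte_1 << 24) | ((uint32_t)x.byte_2 << 16)
         | ((uint32_t)x.byte_3 <<  8) |  (uint32_t)x.byte_4;
}

For 2**32-1, pack() indeed sets every field to 255.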

Theoretically, I can extend this to unlimited bits long data type (...)
Yep, but if you're looking for arbitrary precision, then it's worth studying something like GMP.

(...) but various functions from the standard library won't work, and I'll have to rewrite them myself (...)
That's right.

(...) whereas in C# if I'm not mistaken, such capabilities exist already.
Yup, I'm not a big C# guy, but it has BigInteger (in the System.Numerics namespace, since .NET Framework 4.0, I think).

I'm just curious why I can't define a 256-bit or x-bit data type in C.
That's coming in C23. I'm not sure what value BITINT_MAXWIDTH will take on most compilers, but assuming it's large enough, then you'll be able to write _BitInt(256), or unsigned _BitInt(256), etc.
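Assuming the compiler defines BITINT_MAXWIDTH to be at least 256, a minimal C23 sketch would look something like this (support is still uneven across compilers):

Code:
/* C23 only; requires BITINT_MAXWIDTH >= 256 (defined in <limits.h>). */
#include <limits.h>

#if defined(BITINT_MAXWIDTH) && BITINT_MAXWIDTH >= 256
typedef unsigned _BitInt(256) u256;

static u256 square(u256 x)
{
    return x * x;   /* the usual operators work on _BitInt types */
}
#endif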

I can't really say that I'm a fan of this approach (especially in a systems programming language like C). Every time something is made easier for people, there's a corresponding drop in the skill level of the average practitioner. I'm not a sadist, but I do think there's harm in a situation where programmers can make heavier and heavier lifts, but have less and less idea of how things actually work.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
For example, if all you had was (let's say) unsigned char (assume that CHAR_BIT == 8), and you wanted an unsigned 32-bit data type, then you could simulate one with a struct (and your own functions to implement arithmetic and logic operations) by thinking in terms of a 4-digit number in base 256
Not sure I understand. You're saying that if I wanted a 32-bit data type, and my only available data type was unsigned char, I could design a 32-bit data type by defining a struct as follows?
Code:
struct integer_type{
    unsigned char byte_1;
    unsigned char byte_2;
    unsigned char byte_3;
    unsigned char byte_4;
};

And so, if I wanted to store the integer 2^32-1, I'd fill each of these fields with 255? Theoretically, I can extend this to a data type of unlimited bit length, but various functions from the standard library won't work, and I'll have to rewrite them myself (e.g., printf), which, besides being difficult, is anything but simple to read, whereas in C#, if I'm not mistaken, such capabilities already exist.

I'm just curious why I can't define a 256-bit or x-bit data type in C.
legendary
Activity: 1042
Merit: 2805
Bitcoin and C♯ Enthusiast
Check project "FinderOuter" (https://github.com/Coding-Enthusiast/FinderOuter)
I am not sure if it has the exact functionality you mentioned, but I am 99% sure you will find all the building blocks you need there (SHA-256, pubkey generation, Base58 conversion, etc.).
Almost all the code used in FinderOuter is heavily specialized to perform the specific tasks required for recovery. Considering that it doesn't have a recovery option for brainwallets, it is not useful for OP. (I haven't seen any demand for a brainwallet recovery option that would justify adding it to FinderOuter.)

If you want general implementations of different algorithms like SHA256, ECC, etc., my library Bitcoin.Net is more useful.
Although I should add that if the goal is to "recover" a brainwallet, any general-purpose library is very inefficient.
hero member
Activity: 510
Merit: 4005
Question of mine: how can you do this in C? (...)
You mean how can you define a 256-bit data type?

The general idea is that you express the number in a different base (depending on what data types you do have available). For example, if all you had was (let's say) unsigned char (assume that CHAR_BIT == 8), and you wanted an unsigned 32-bit data type, then you could simulate one with a struct (and your own functions to implement arithmetic and logic operations) by thinking in terms of a 4-digit number in base 256, rather than a 32-digit number in base 2. Same thing extends to thinking of a 256-bit number as a 16-digit number in base 65536, or as a 4-digit number in base 2**64, etc.

There are variations on this technique, but that's the basic idea.
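For instance, addition in such a representation is just the schoolbook algorithm with a carry, one "digit" at a time. A minimal sketch over base-256 digits stored least-significant-first (assuming CHAR_BIT == 8; the function name is only for the example):

Code:
#include <stddef.h>

/* r = a + b over n base-256 digits, least significant digit first.
   Returns the final carry (1 if the sum doesn't fit in n digits). */
static unsigned add_base256(unsigned char *r, const unsigned char *a,
                            const unsigned char *b, size_t n)
{
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned sum = (unsigned)a[i] + b[i] + carry;
        r[i]  = (unsigned char)(sum & 0xff);
        carry = sum >> 8;
    }
    return carry;
}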

Here's one implementation (in C, and assembly): https://github.com/piggypiggy/fp256.

And here's (a piece of) a more specialized implementation from libsecp256k1 (52-bit limbs instead of 64-bit ones): https://github.com/bitcoin-core/secp256k1/blob/master/src/field_5x52_impl.h.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Question of mine: how can you do this in C? Not sure about C++, but in C the longest integer type (long long) is only guaranteed to be at least 64 bits. In most systems it's exactly 64 bits, and I'm quite sure no system's integers reach 256 bits, which is what SHA-256 requires. (Also, this)

Is there a known library that lets you do some sort of "tricks" with the processor?
full member
Activity: 297
Merit: 133
Solved.

I used the NBitcoin library, with code like this:

Code:
Key keyU = new Key(ComputeSHA256(line), fCompressedIn: false);
Console.WriteLine(keyU.GetAddress(ScriptPubKeyType.Legacy, Network.Main) + " # " + keyU.GetWif(Network.Main) + " # " + line);
legendary
Activity: 952
Merit: 1386
Check project "FinderOuter" (https://github.com/Coding-Enthusiast/FinderOuter)
I am not sure if it has the exact functionality you mentioned, but I am 99% sure you will find all the building blocks you need there (SHA-256, pubkey generation, Base58 conversion, etc.).
full member
Activity: 297
Merit: 133
Brainflayer does it, of course focused on speed instead of readability and modularity

Can you share a brainflayer command that converts a brainwallet to a public address in Base58Check format?
full member
Activity: 161
Merit: 230
Brainflayer does it, of course focused on speed instead of readability and modularity
hero member
Activity: 560
Merit: 1060
This could be written in one class in one file.

I disagree! In general it's better to use one class/service for each specific task that you want to implement.

The reason my program is split into multiple files, even though it looks like it shouldn't need to be, is that it is much easier to maintain a program with well-separated functionalities.

Imagine a program where the entropy generation, the printing utils, the type conversion utils, the QR code generation and everything else is in one file. This is not recommended.

I rarely used Java.

Need some C/C++/C# solution.

OK! I am sure you can find people here who write code in those languages.