
Topic: Potentially faster method for mining on the CPU (Read 14286 times)

member
Activity: 131
Merit: 16
I wouldn't believe all the announcements in the news. There were a lot of such announcements, including from Intel, but mining equipment from Bitmain is the most widespread in this market and therefore the best. I remember only one case where a private company released an ASIC for Ethereum, which was very good, but the supply was limited.

I think this is a software-based solution rather than ASIC hardware. With the advancement of AI and ML over recent years, including things like ChatGPT, it's certainly possible there is software which can feed better inputs to an ASIC to solve a block.

Perhaps now, with the Bitcoin ETFs and Bitcoin becoming a bit more mainstream, we may see companies outside of China/Russia step up in the ASIC game. It would be nice to have a competitor to Bitmain, also one that doesn't totally scalp their prices.
legendary
Activity: 1610
Merit: 1026
I wouldn't believe all the announcements in the news. There were a lot of such announcements, including from Intel, but mining equipment from Bitmain is the most widespread in this market and therefore the best. I remember only one case where a private company released an ASIC for Ethereum, which was very good, but the supply was limited.
member
Activity: 131
Merit: 16
Quote
Quantum announced a new proprietary "Method C" for more efficient BTC mining, based on machine learning and predictive AI technology.
QBT said the new method was yielding consistent results, and had favourably demonstrated predictive ability in c. 30% of cases as to whether an input to SHA-256 would produce a winning hash.

Quantum Blockchain has announced a new proprietary method of Bitcoin mining, this time implemented in hardware, promising significant energy savings compared to traditional methods. The company's "Method B" made headlines last year as it promised to increase the rate of successful Bitcoin mining by 2.6 times - entirely achieved in software.
Well how about starting a new thread on this in the Development & Technical Discussion area where it belongs? Hijacking an ancient thread with off-topic information is just - pointless - and does nothing to expose your information to those who are better qualified to look into it and assess its viability.
It's not off topic; the topic is faster Bitcoin mining (with the CPU). The guy posted earlier this year mentioning quantum and asked for some more info on the literature, and unlike you, instead of saying it's not going to work, I gave the info on the (maybe) 30% faster solution. Also, Kano said he attempted something and ran out of RAM quickly. Maybe faster RAM and more cores (from the next-gen Threadripper workstations) can help solve this seemingly high-resource-demand software problem...

You are confusing what is being said with mining directly on the CPU. I think what is described here is a software solution to feed potentially better inputs to ASICs.
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
Quote
Quantum announced a new proprietary "Method C" for more efficient BTC mining, based on machine learning and predictive AI technology.
QBT said the new method was yielding consistent results, and had favourably demonstrated predictive ability in c. 30% of cases as to whether an input to SHA-256 would produce a winning hash.

Quantum Blockchain has announced a new proprietary method of Bitcoin mining, this time implemented in hardware, promising significant energy savings compared to traditional methods. The company's "Method B" made headlines last year as it promised to increase the rate of successful Bitcoin mining by 2.6 times - entirely achieved in software.
Well how about starting a new thread on this in the Development & Technical Discussion area where it belongs? Hijacking an ancient thread with off-topic information is just - pointless - and does nothing to expose your information to those who are better qualified to look into it and assess its viability.
member
Activity: 131
Merit: 16
I found the article


https://www.voxmarkets.co.uk/articles/quantum-blockchain-s-new-bitcoin-mining-method-promises-to-slash-energy-usage-8f70e57/


Quote
Think those new 96-core Threadripper Pro workstations with 8-channel DDR5 and 2 TB of RAM could handle it? :D The RAM is ECC too.
No it can't. Period.
There are no 'complex equations' or patterns to solve, therefore applying possible mathematical shortcuts is not possible because there are none. What part of 'random' do you folks not get?

I would like to see your thoughts on this:

Quote
Quantum announced a new proprietary "Method C" for more efficient BTC mining, based on machine learning and predictive AI technology.
QBT said the new method was yielding consistent results, and had favourably demonstrated predictive ability in c. 30% of cases as to whether an input to SHA-256 would produce a winning hash.

Quantum Blockchain has announced a new proprietary method of Bitcoin mining, this time implemented in hardware, promising significant energy savings compared to traditional methods. The company's "Method B" made headlines last year as it promised to increase the rate of successful Bitcoin mining by 2.6 times - entirely achieved in software.

The new Method C uses predictive AI to accelerate the cumbersome BTC mining process. Specifically, the technology tries to predict whether an input to SHA-256, the core algorithm for Bitcoin mining, is likely to generate a winning hash, or not. The underlying assumption for Method C is that an "oracle" decision is materially less computationally demanding than the SHA-256 calculation for the same input string.
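For illustration only, here is a minimal sketch of what an oracle-filtered nonce search could look like in C. The oracle_predict() function is purely hypothetical (QBT has not published Method C); the point is just the economics described above: the filter only pays off if it is far cheaper than a double SHA-256 and rejects enough losing candidates. Build against OpenSSL with cc oracle.c -lcrypto.
Code:
/* Hypothetical oracle-filtered nonce search (illustration only).
 * oracle_predict() is a stand-in for an unpublished classifier; here it
 * simply accepts every candidate, so this degenerates to normal search. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Hypothetical cheap classifier: nonzero if the 80-byte header "looks like"
 * it could produce a winning hash. The real method, if any, is not public. */
static int oracle_predict(const uint8_t header[80]) { (void)header; return 1; }

/* Crude stand-in for the difficulty check: require `zero_bytes` zero bytes
 * at the end of the raw SHA256d digest (the displayed hash is byte-reversed). */
static int meets_target(const uint8_t header[80], int zero_bytes) {
    uint8_t h1[SHA256_DIGEST_LENGTH], h2[SHA256_DIGEST_LENGTH];
    SHA256(header, 80, h1);
    SHA256(h1, sizeof h1, h2);
    for (int i = 0; i < zero_bytes; i++)
        if (h2[SHA256_DIGEST_LENGTH - 1 - i] != 0) return 0;
    return 1;
}

int main(void) {
    uint8_t header[80] = {0};                 /* dummy header for the demo */
    for (uint32_t nonce = 0; nonce != UINT32_MAX; nonce++) {
        memcpy(header + 76, &nonce, 4);       /* nonce field of the header */
        if (!oracle_predict(header))          /* cheap filter first ...    */
            continue;
        if (meets_target(header, 2)) {        /* ... full SHA-256d second  */
            printf("hit at nonce %u\n", nonce);
            return 0;
        }
    }
    return 1;
}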
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
Quote
Think those new 96-core Threadripper Pro workstations with 8-channel DDR5 and 2 TB of RAM could handle it? :D The RAM is ECC too.
No it can't. Period.
There are no 'complex equations' or patterns to solve, therefore applying possible mathematical shortcuts is not possible because there are none. What part of 'random' do you folks not get?
member
Activity: 131
Merit: 16
I did recently see that a company with 'quantum' in their name had figured out a method to more efficiently brute force/solve a block. I think they had three methods and had completed testing to show that their methods were up to something like 30% quicker than existing block solving.

What I've said here isn't quite accurate, as I can't remember exactly what they found out or if it's even true, but it's interesting... This kind of tech could be worth a lot of money and I bet it's extremely well protected. Imagine if one of the big pools or players were using this. "30% more blocks? Damn!!!"

Wrote code to do the code generation, and optimise the results automatically at each step, long ago with 40 CPU cores and 512 GB of RAM.
Ran out of RAM real fast :)

Think those new 96-core Threadripper Pro workstations with 8-channel DDR5 and 2 TB of RAM could handle it? :D The RAM is ECC too.
legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
Wrote code to do the code generation, and optimise the results automatically at each step, long ago with 40 CPU cores and 512 GB of RAM.
Ran out of RAM real fast :)
legendary
Activity: 3612
Merit: 2506
Evil beware: We have waffles!
This thread and the link you gave are from the very early years of BTC and were long ago proven to be irrelevant when compared to using ASICs to just brute-force the process.
newbie
Activity: 1
Merit: 0
Hi! I'm kind of new to the subject, which brings me to this post, and I think this is a very interesting discussion. Any news about this? Did everyone disappear because they found the way? Haha

I recently found that an algebraic fault attack could be effective for recovering an input of SHA-256.
Here is the link if anyone wants to check:
https://archive.org/details/algebraic-fault-attack-on-the-sha-256-compression-function/page/n4/mode/1up

I hope you find something interesting in the paper and I hope to hear some news from you guys.

newbie
Activity: 46
Merit: 0
exploring which input bits affect which output bits (1, 2 or 3 at a time) might be interesting
I don't expect SHA2 to be broken by the ways discussed in this thread, but it's a good thing that someone tries it. A good result from such an approach would not be an analytical solution (that seems impossible, but there is always hope that the equation system will collapse :)), but a representation of double SHA2 that is better (by operation count or depth) than the ones currently used in ASICs. I think I know how to chop 10% (or so) off the ~120,000 binary operations, but let's hope that this thread will bring some fresh ideas :)

logred_v8.zip ready - faster computation of !a, which is the most expensive operation.

ri seems to have almost exactly 2^(i-1) terms, so the last bit r31 will have 2^30 terms. I've confirmed this for ri with i = 0..12 and it's bang on. The Quine-McCluskey reduction sure does a number on the complexity, but it's not enough.

botnet - to take this further, you need to demonstrate that one of the input bits of the sha256 compression function does not pass through the Least Significant Bit inputs of the adds.

Or show that one of the compression function output bits does not depend on the Most Significant Bit outputs of the 32bit adds.

Or that chained 32-bit adds have some reductions.

Cheers
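To make the point about the adds concrete (plain C, not from any of the code in this thread): the most significant output bit of a 32-bit add really does depend on the least significant input bits, because the carry chain can propagate all the way up.
Code:
/* Flipping bit 0 of one operand flips bit 31 of the sum when the carry
 * chain is primed - so MSB outputs depend on LSB inputs. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0x7FFFFFFFu;                      /* bits 0..30 all set    */
    printf("bit31(a + 0) = %u\n", (a + 0u) >> 31); /* prints 0              */
    printf("bit31(a + 1) = %u\n", (a + 1u) >> 31); /* prints 1: the carry   */
    return 0;                                      /* ran through all 31 bits */
}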
hero member
Activity: 524
Merit: 500
exploring which input bits affect which output bits (1, 2 or 3 at a time) might be interesting
I don't expect SHA2 to be broken by the ways discussed in this thread, but it's a good thing that someone tries it. A good result from such an approach would not be an analytical solution (that seems impossible, but there is always hope that the equation system will collapse :)), but a representation of double SHA2 that is better (by operation count or depth) than the ones currently used in ASICs. I think I know how to chop 10% (or so) off the ~120,000 binary operations, but let's hope that this thread will bring some fresh ideas :)
newbie
Activity: 46
Merit: 0
But I still think you haven't quite grasped the magnitude of the P/NP problem yet! Then again, if the impossible ever gets done, it'll be, as always, by someone who didn't know it was impossible.

Also, botnet - if you've got the crypto bug - exploring which input bits affect which output bits (1, 2 or 3 at a time) might be interesting. This kind of differential analysis is what took down MD5 and cost me $10,000 (http://en.wikipedia.org/wiki/MD5CRK) :)

Eg:
Code:
SHA256(000000...0000) = A
SHA256(000000...0001) = B
SHA256(000000...0011) = C
SHA256(000000...0010) = D
SHA256(000000...0110) = E
SHA256(000000...0111) = F
SHA256(000000...0101) = G
...

then

Code:
A^B = ?
A^C = ?
A^D = ?
...
B^C = ?
B^D = ?
...
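For anyone who wants to actually run the experiment sketched above: a small self-contained probe (using OpenSSL's SHA256, not part of the code discussed in this thread) that hashes an all-zero 64-byte block, flips one input bit at a time, and reports how many of the 256 output bits differ - i.e. it computes the A^B, A^C, ... rows directly. Build with cc diff.c -lcrypto.
Code:
/* Single-bit differential probe of SHA-256: flip one input bit at a time
 * and XOR the digest against the baseline digest of an all-zero block. */
#include <stdint.h>
#include <stdio.h>
#include <openssl/sha.h>

static int bits_changed(const uint8_t *x, const uint8_t *y) {
    int n = 0;
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        n += __builtin_popcount((unsigned)(x[i] ^ y[i]));
    return n;
}

int main(void) {
    uint8_t msg[64] = {0};                 /* the 000000...0000 input       */
    uint8_t base[SHA256_DIGEST_LENGTH], flip[SHA256_DIGEST_LENGTH];
    SHA256(msg, sizeof msg, base);         /* A = SHA256(000000...0000)     */

    for (int bit = 0; bit < 8 * (int)sizeof msg; bit++) {
        msg[bit / 8] ^= (uint8_t)(1u << (bit % 8));   /* flip one input bit */
        SHA256(msg, sizeof msg, flip);                /* B, C, D, ...       */
        msg[bit / 8] ^= (uint8_t)(1u << (bit % 8));   /* restore            */
        printf("input bit %3d -> %3d of 256 output bits flipped\n",
               bit, bits_changed(base, flip));        /* popcount of A^B    */
    }
    return 0;
}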
newbie
Activity: 46
Merit: 0
Note: working on Quine-McCluskey now. :) Always wanted a high-speed lib for logic reductions ...

Alright, got Quine-McCluskey implemented.  Also implements logred_set_equiv(a,b) which goes through all possible input permutations and confirms "a" behaves the same as "b".

Q-McC is an expensive operation (Q-McC on r05 takes a full second when compiled with -s -O6), so I have not included it in logred_set_reduce().

It reduces r05 (the 6th bit in 32bit addition) from
Code:
r05 (terms=  1120, symbols=   8842, uniqueSymbols=12)
to
Code:
r05 (terms=   156, symbols=   1080, uniqueSymbols=12)

It has no effect on c00,c01,..ci

Still don't think you'll find what you're looking for.  But have fun!

logred_v7.zip
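As a quick sanity check on the uniqueSymbols=12 figure (plain C, independent of logred): bit 5 of a 32-bit sum depends only on the low six bits of each operand, so its full truth table has just 2^12 = 4096 rows and can be enumerated directly. A canonical sum-of-products needs one product term per row of the on-set; the Quine-McCluskey pass is what merges those into the much smaller cover shown above.
Code:
/* Brute-force the truth table of r05, bit 5 of a 32-bit add.
 * It depends only on bits 0..5 of each operand: 12 inputs, 4096 rows. */
#include <stdio.h>

int main(void) {
    int onset = 0;                          /* rows where r05 = 1 */
    for (unsigned a = 0; a < 64; a++)
        for (unsigned b = 0; b < 64; b++)
            if (((a + b) >> 5) & 1u)
                onset++;
    printf("r05 on-set: %d of 4096 rows\n", onset);   /* prints 2048 */
    return 0;
}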

hero member
Activity: 492
Merit: 500
Best of luck with this, botnet, but I feel it's a wild goose chase. I first heard about bitcoins in autumn 2011 and immediately set about trying to 'simplify' sha256(sha256(x)) in the same way as you, by trying to find boolean expressions for the first bits of the output in terms of the 640 bits of input. As you've found out, it's the seemingly simple addition of two 32-bit numbers that's the big problem. It simply doesn't lend itself to being simplified, as it's recursive. As you crawl along the bits from right to left the expression balloons exponentially. All the other operations in SHA-256, the logical connectives and the rotations/shifts, are trivial by comparison.

You might want to look into the structure of the block header to see how the 640 bits change at different rates. Fr'instance I believe the version number (first 32 bits of the header, but in stupid-endian) has been 1 for most of bitcoin's history but may be 2 now or soon. That's 30 zero bits right there. Then you have the first (or, goddammit, probably last) 32 bits of the previous hash guaranteed to be zero. The difficulty changes every two weeks, the merkle hash every block, and so on.

But I still think you haven't quite grasped the magnitude of the P/NP problem yet! Then again, if the impossible ever gets done, it'll be, as always, by someone who didn't know it was impossible.
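For reference (this is standard Bitcoin, not something from the post above): the 640 bits being discussed are the 80-byte block header, and the field boundaries make it easy to see which bits change at which rate.
Code:
/* The 80-byte (640-bit) Bitcoin block header, all fields little-endian
 * on the wire (the "stupid-endian" the post complains about). */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    int32_t  version;         /*  4 bytes - changes rarely                 */
    uint8_t  prev_block[32];  /* 32 bytes - new every block, has zero bits */
    uint8_t  merkle_root[32]; /* 32 bytes - new for every candidate set    */
    uint32_t timestamp;       /*  4 bytes - ticks every second or so       */
    uint32_t bits;            /*  4 bytes - difficulty, retargets ~2 weeks */
    uint32_t nonce;           /*  4 bytes - iterated by the miner          */
} block_header_t;             /* total: 80 bytes = 640 bits                */
#pragma pack(pop)

_Static_assert(sizeof(block_header_t) == 80, "header must be 80 bytes");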
newbie
Activity: 35
Merit: 0
1)    logred_set_scanf(a, "a00 + a00");
   logred_set_reduce(a);
   logred_set_printf(a);

Shouldn't that always produce 0? For me it outputs a00.
Nope. A or A == A. A or !A == 1, and A!A == 0.

Ahh, there was a misunderstanding about the + operator.
I've been thinking about arithmetic operations mod 2, where "(a+b) mod 2" is equivalent to XOR:
0+0 = 0, 0 mod 2 = 0
0+1 = 1, 1 mod 2 = 1
1+0 = 1, 1 mod 2 = 1
1+1 = 2, 2 mod 2 = 0
= XOR
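A two-line check of the distinction being made here (plain C, unrelated to logred): with '+' read as boolean OR, a + a stays a, while addition mod 2 is XOR, so a + a collapses to 0.
Code:
/* '+' as boolean OR versus addition mod 2 (XOR), for a single bit a. */
#include <stdio.h>

int main(void) {
    for (int a = 0; a <= 1; a++)
        printf("a=%d:  a OR a = %d   (a + a) mod 2 = %d   a XOR a = %d\n",
               a, a | a, (a + a) % 2, a ^ a);
    return 0;
}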
newbie
Activity: 46
Merit: 0
Note: working on Quine-McCluskey now. :) Always wanted a high-speed lib for logic reductions ...

Been trying this out, so many questions...

1)    logred_set_scanf(a, "a00 + a00");
   logred_set_reduce(a);
   logred_set_printf(a);

Shouldn't that always produce 0? For me it outputs a00.

Nope. A or A == A. A or !A == 1, and A!A == 0.

2) Similarly I would expect (a + !a) = 1
   logred_set_scanf(a, "a00");
   logred_set_not(b, a);
   logred_set_xor(c, a, b);
= crash

a = "a00"
b = "!a00"
c = "a00" ^ "!a00"
  = a00!(!a00) + !(a00)!a00
  = a00 + !a00
  = 1

Yup, it crashes. Comment out "logred_set_reduce_4_A_nAB_n(src)" in "logred_set_reduce()" to fix this.

It doesn't resolve down to "1" yet.  soon.

3) Haven't debugged enough, but part 2 could be related to this?:
   logred_set_scanf(a, "!a00!b00"); = works
    logred_set_scanf(a, "!a00"); = crash

The version I'm working on doesn't crash on this.

4) Actually, is there support for numeric literals (0, 1)?  c = a+b is purely symbolic, but during SHA256 you add in those K-Constants which can sometimes help reduce things further.

Humm, good point.  Will not be ready for this release. Soon.

5) x64 only? This is giving me heap problems:
#define VAR_T           size_t
#define VAR_T_SIZE_BITS ( 8 * sizeof(VAR_T) )
#define VAR_A_BITS      256
#define VAR_A_LEN       4L   

(worked around by setting VAR_A_BITS = 128... maybe this is the source of all my problems? :) )

The code should be x32/x64 agnostic.  logred_v6.zip is a snapshot of what I have so far.  Ignore the logred_set_reduce_5_Quine_McCluskey() functions.

The Makefile now lets you define a bunch of flags like VAR_A_BITS. But it should work for values up to 960. It may work beyond that with some tweaks to the variable string lengths ("a960" and "a1024" are different lengths).

Cheers.
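On point 4 above (numeric literals): the reason the K constants can help is that any gate fed a known 0 or 1 collapses to a constant, the other input, or its complement. A trivial check in plain C, independent of the library:
Code:
/* Known constant bits collapse gates: x^0=x, x^1=!x, x&0=0, x&1=x, etc. */
#include <stdio.h>

int main(void) {
    for (int x = 0; x <= 1; x++)
        printf("x=%d:  x^0=%d  x^1=%d  x&0=%d  x&1=%d  x|0=%d  x|1=%d\n",
               x, x ^ 0, x ^ 1, x & 0, x & 1, x | 0, x | 1);
    return 0;
}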
newbie
Activity: 35
Merit: 0
Playing a bit here, turns out the A=A+AB and A+B=A+!AB reductions have no effect in 32bit add.  Guess that's what makes it a nice 32bit bijection.

Download logred_v4.zip which will be faster for your application.

You can disable those reduction algorithms by commenting out logred_set_reduce_2_A_AB() and logred_set_reduce_4_A_nAB_n() inside logred_set_reduce()

Been trying this out, so many questions...

1)    logred_set_scanf(a, "a00 + a00");
   logred_set_reduce(a);
   logred_set_printf(a);

Shouldn't that always produce 0? For me it outputs a00.

2) Similarly I would expect (a + !a) = 1
   logred_set_scanf(a, "a00");
   logred_set_not(b, a);
   logred_set_xor(c, a, b);
= crash

3) Haven't debugged enough, but part 2 could be related to this?:
   logred_set_scanf(a, "!a00!b00"); = works
    logred_set_scanf(a, "!a00"); = crash

4) Actually, is there support for numeric literals (0, 1)?  c = a+b is purely symbolic, but during SHA256 you add in those K-Constants which can sometimes help reduce things further.

5) x64 only? This is giving me heap problems:
#define VAR_T           size_t
#define VAR_T_SIZE_BITS ( 8 * sizeof(VAR_T) )
#define VAR_A_BITS      256
#define VAR_A_LEN       4L   

(worked around by setting VAR_A_BITS = 128... maybe this is the source of all my problems? :) )
newbie
Activity: 46
Merit: 0
I'll PM you a link to my C code for logic reduction in a few mins. You can compile C, right? And know how to code? It should be quite a bit faster than Mathematica or Maple.

It uses de Morgan's rules and some basic reduction rules to optimize the circuit.

This is excellent, by the way :) I will start integrating it with my code. Right now my stuff is all C#, so I will have to compile your code into a lib and do marshaling - though at first glance of your API I don't think marshaling will have too bad of a perf impact. I'll try with c = a + b to test the runtime and validate that the results match yours.

Do you know how your de Morgan approach compares to this Quine-McCluskey one I keep reading about? http://en.wikipedia.org/wiki/Quine%E2%80%93McCluskey_algorithm


Quine-McCluskey is the analytical twin of Karnaugh Maps (http://en.wikipedia.org/wiki/Karnaugh_map)

Basically, QMC/KMap takes I/O and creates formulas.  My lib lets you build the formulas directly and simplify them.  You *could* create the formulas based on I/Os I guess.  But since we know the SHA-256 formulas you could skip that step.

In fact you really want to, otherwise you will never have the correct formula unless you iterate all possible inputs for the SHA256 compression function (2^256 + 2^(W-array inputs) ).

Even 2^32 addition is going to be massive. The nth (0..31) output bit of a 32bit-add will be 2^(n+1) terms long. Your only hope is that multiple 32bit-adds start simplifying.

Download logred_v3.zip - there was a bug in performing (A+BCD+EFG+...)(HI+!JK+...)(...)... operations.

Cheers

Edit:
Playing a bit here, turns out the A=A+AB and A+B=A+!AB reductions have no effect in 32bit add.  Guess that's what makes it a nice 32bit bijection.

Download logred_v4.zip which will be faster for your application.

You can disable those reduction algorithms by commenting out logred_set_reduce_2_A_AB() and logred_set_reduce_4_A_nAB_n() inside logred_set_reduce()
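For reference, the two rewrite rules being disabled here are just the boolean identities A + AB = A (absorption) and A + !AB = A + B, with '+' as OR. A brute-force check in plain C, independent of logred:
Code:
/* Truth-table check of the two reduction rules mentioned above. */
#include <stdio.h>

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int absorb = (a | (a & b)) == a;          /* A + AB  == A      */
            int merge  = (a | (!a & b)) == (a | b);   /* A + !AB == A + B  */
            printf("A=%d B=%d  A+AB=A: %s   A+!AB=A+B: %s\n",
                   a, b, absorb ? "ok" : "FAIL", merge ? "ok" : "FAIL");
        }
    return 0;
}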
newbie
Activity: 35
Merit: 0
I'll PM you a link to my C code for logic reduction in a few mins. You can compile C, right? And know how to code? It should be quite a bit faster than Mathematica or Maple.

It uses de Morgan's rules and some basic reduction rules to optimize the circuit.

This is excellent, by the way :) I will start integrating it with my code. Right now my stuff is all C#, so I will have to compile your code into a lib and do marshaling - though at first glance of your API I don't think marshaling will have too bad of a perf impact. I'll try with c = a + b to test the runtime and validate that the results match yours.

Do you know how your de Morgan approach compares to this Quine-McCluskey one I keep reading about? http://en.wikipedia.org/wiki/Quine%E2%80%93McCluskey_algorithm
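Since Quine-McCluskey keeps coming up: here is a toy illustration (not the logred implementation) of its first merging pass on a 3-variable function with minterms {1, 3, 5, 7}, i.e. f = c. Two minterms that differ in exactly one bit merge into an implicant with a dash in that position; repeating the pass on the merged implicants, and then selecting a cover of prime implicants, is the full algorithm.
Code:
/* One merging pass of Quine-McCluskey on a tiny 3-variable example. */
#include <stdio.h>

static int minterms[] = {1, 3, 5, 7};            /* f(a,b,c) = c            */
#define N (int)(sizeof minterms / sizeof minterms[0])

static void print_implicant(int value, int dashmask) {
    for (int bit = 2; bit >= 0; bit--)           /* print as an "abc" pattern */
        putchar((dashmask >> bit) & 1 ? '-' : '0' + ((value >> bit) & 1));
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++) {
            int diff = minterms[i] ^ minterms[j];
            if (diff && (diff & (diff - 1)) == 0) {   /* one-bit difference */
                print_implicant(minterms[i] & ~diff, diff);
                printf("   (merged minterms %d and %d)\n",
                       minterms[i], minterms[j]);
            }
        }
    return 0;                 /* a second pass would merge these into "--1" */
}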