
Topic: Towards a better proof of work algorithm (Read 1352 times)

hero member
Activity: 546
Merit: 500
August 20, 2014, 02:23:07 PM
#18
If you want a better Proof-of-Work Algorithm why not take a look at BURST : https://bitcointalksearch.org/topic/annburst-burst-efficient-hdd-mining-new-123-fork-block-92000-731923

It uses a new PoC (Proof of Capacity) algorithm, which is basically HDD mining. It may not be mined the same way, but it's a pretty interesting concept.
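The core idea behind Proof of Capacity can be sketched in a few lines: hash once up front ("plotting"), then answer each block's challenge by lookup rather than fresh hashing, so storage size, not hash rate, decides who wins. This is only a toy illustration of the concept, not BURST's actual plot format or deadline rules:

```python
import hashlib

def plot(seed: bytes, num_nonces: int) -> dict:
    # "Plotting": precompute hashes once and store them (on disk in a real
    # PoC coin; a dict here for illustration). Done once, reused every block.
    return {hashlib.sha256(seed + n.to_bytes(8, "big")).digest(): n
            for n in range(num_nonces)}

def respond(plot_data: dict, challenge: bytes) -> int:
    # "Mining": find the stored hash closest to the challenge (XOR distance
    # here). This is a storage scan, not hashing, so more disk = better odds.
    target = int.from_bytes(challenge, "big")
    best = min(plot_data, key=lambda h: int.from_bytes(h, "big") ^ target)
    return plot_data[best]

plots = plot(b"miner-id", 1024)
nonce = respond(plots, hashlib.sha256(b"block-123").digest())
```

The energy argument follows directly: the expensive hashing happens once at plot time, and steady-state mining is mostly idle disks.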
hero member
Activity: 798
Merit: 1000
‘Try to be nice’
August 20, 2014, 02:20:27 PM
#17
the only problem with ASICs is that they increase their hashpower too fast compared to GPUs, and so does the money needed to buy them

at some point you can't compete anymore with the rich guy next to you, and it's over

GPUs don't have this problem; a new generation comes every year, but it brings only a small improvement

100% as the market should function.
hero member
Activity: 798
Merit: 1000
‘Try to be nice’
August 20, 2014, 02:19:08 PM
#16
I honestly don't get this whole anti-GPU stance.
You guys probably have all hi-end iSomethingmeaningless 100+ processors in your boxes.

there is actually no "anti-GPU" stance -

the software will adapt to GPUs; it's just a "nice" thing to say.

the point is to be able to "shift the goalposts" on ASICs with software.

so worry not, GPU miners - hash complexity is about CPUs and GPUs.
legendary
Activity: 3248
Merit: 1070
August 20, 2014, 12:52:29 PM
#15
the only problem with ASICs is that they increase their hashpower too fast compared to GPUs, and so does the money needed to buy them

at some point you can't compete anymore with the rich guy next to you, and it's over

GPUs don't have this problem; a new generation comes every year, but it brings only a small improvement
hero member
Activity: 672
Merit: 500
August 20, 2014, 12:36:51 PM
#14
I honestly don't get this whole anti-GPU stance.
You guys probably have all hi-end iSomethingmeaningless 100+ processors in your boxes.
hero member
Activity: 798
Merit: 1000
‘Try to be nice’
August 20, 2014, 11:50:19 AM
#13
Answering the question directly; I don't see that it would be too difficult to optimize an algorithm specifically for x86 hardware; such that an ASIC would simply be x86 with a few instructions removed. The question is whether or not that's useful.

I think having a solution that required several proof of work algorithms, i.e. one is selected somewhat randomly based on the last block might be the way to go.

A general CPU would be good for this task.

yeeep.

i.e. it's already here - Quark (6 algos + 3 random functions)

there is also M7 (from Bitfreak's mini-blockchain)

http://cryptonite.info/wiki/index.php?title=M7_PoW

"In order to avoid bias accumulation we multiply the 7 hashes together and then pass that number through the SHA-256 function one last time. The multiplication step is also harder for GPU's and ASIC's but works very efficiently on a CPU."

hero member
Activity: 672
Merit: 500
August 20, 2014, 11:25:23 AM
#12
I'd suggest trying to come up with an algorithm ...  Inverting big matrices (1000x1000 or up) is a good example.
No it isn't. Get this into your head: every single algorithm can be ASIC'ed. It's as simple as that.
By contrast, multiple algorithms cannot, unless you multiply your investment and risk.
Floating point? Really? It doesn't give the same results even across different models of the same processor, let alone across architectures. Validation would thus have to go through an accurate, deterministic, hardware-independent path. Not going to happen.

Ah, by the way: computation doesn't belong to CPUs anymore, and hasn't for at least 10 years.
hero member
Activity: 544
Merit: 500
Litecoin is right coin
August 19, 2014, 10:05:22 PM
#11
Litecoin's implementation of scrypt was great; compared to Bitcoin's SHA-256, the use of scrypt provided a 1000x hashrate resistance in standard hardware, and it's still providing a huge $/hash resistance to ASICs. This helped Litecoin start off relatively slow in order to gain adoption in the community--especially at a time when there were almost no altcoins--without giving away massive amounts of coins to a single group within a few hours or days.
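For the curious, Litecoin's scrypt parameters are N=1024, r=1, p=1, applied to the 80-byte block header (used as both password and salt), which demands roughly 128·N·r ≈ 128 KiB of memory per hash attempt. That memory footprint is the source of the resistance described above. A minimal sketch with Python's stdlib, using a placeholder header rather than a real Litecoin block:

```python
import hashlib

# Litecoin-style scrypt: N=1024, r=1, p=1 over the 80-byte block header.
# Memory needed is ~128 * N * r bytes = ~128 KiB per attempt, which is what
# made scrypt mining harder to shrink into cheap silicon than plain SHA-256.
header = b"\x00" * 80  # placeholder header, not a real block
digest = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)
```

Compare that to SHA-256, which needs only a few hundred bytes of state per attempt and therefore packs densely onto an ASIC die.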
newbie
Activity: 6
Merit: 1
August 19, 2014, 07:59:17 PM
#10
I think this all depends on what you mean by "ASIC-resistant"; my brief reading on the algorithms used and their progress so far suggests they all end up with some sort of GPU implementation. That in turn suggests that, should it become profitable, an ASIC will be developed for just about any algorithm.

Perhaps a better approach would be to focus on raising the already prohibitive cost of ASIC development by modifying existing algorithms and how the overall mining procedure functions.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
August 19, 2014, 06:32:22 PM
#9
Begging the question; is ASIC resistance actually desirable?

It has both upsides and downsides.

That is an empirical question, to be decided in the future by the market.  Nevertheless it's fun (and potentially profitable) attempting to deduce the answer from first principles.

I think we'll see a diverse mix of successful approaches, because I like to hedge my ASIC loving btc/ltc with GPU/CPU based xpm/xmr/xcn.

For now let's just agree that PoS stands for Proof of Scam, along with variants Dead Piece of Shit and Proof of Suck Ass.
sr. member
Activity: 441
Merit: 250
August 19, 2014, 04:08:10 PM
#7
Begging the question; is ASIC resistance actually desirable?

IMNSHO, absolutely not.

I don't think it is a coincidence that the price really took off when ASICs became commonly available. Even Litecoin, which was initially marketed as "ASIC-hard", rose in value when scrypt miners started selling in volume. ASIC mining secures the blockchain against supercomputer attacks, and it makes mining much more capital intensive (which requires a larger payoff, raising the bar on an acceptable lowest price).

There is also the matter of spyware and botnets, which can make a fortune on CPU or GPU mining. And anything ASIC-hard is likely to be GPU-hard, making this problem even worse. If the coin is successful, you won't stand a chance mining with your laptop anyway, so who would you rather see making the most from it? Botnet owners or ASIC mining farms?

I know exactly what my answer is. Satoshi made a good choice.
legendary
Activity: 2128
Merit: 1073
August 19, 2014, 10:20:23 AM
#6
There's been interest in "ASIC-resistant" proof of work algorithms. SCRYPT was supposed to do that, but ASICs have been built for SCRYPT, so that didn't work. What would work?

I'd suggest trying to come up with an algorithm which requires large numbers of 64-bit floating point operations and considerable memory. Inverting big matrices (1000x1000 or up) is a good example. Any ASIC capable of doing big matrix inversions would have to have multiple 64-bit superscalar FPUs inside, plus caches. It would have to be a number-crunching CPU. It would need a gate count comparable to CPUs of equivalent compute power.

So can anybody come up with a suitable mineable algorithm with some big matrix inversions inside?
A good proof-of-work algorithm has the following property: it is hard to compute and easy to verify.

Matrix inversion doesn't have a good compute/verify ratio: O(n³) computation, O(n²) verification. Also, it really doesn't need caches; the access patterns are very predictable, so a dedicated prefetcher would easily outperform caches.
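The O(n²) verification claim can be made concrete with a Freivalds-style randomized spot check: to test whether X really is A's inverse, pick a random vector r and check that A(Xr) ≈ r, using only matrix-vector products. A minimal pure-Python sketch (small dense matrices; the variable names are illustrative):

```python
import random

def matvec(M, v):
    # Matrix-vector product: O(n^2) per call.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def check_inverse(A, X, trials=20, tol=1e-6):
    # Freivalds-style check that X is A's inverse: verify A @ (X @ r) == r
    # for random vectors r. Two matvecs per trial, so O(n^2) total --
    # versus O(n^3) to compute the inverse in the first place.
    n = len(A)
    for _ in range(trials):
        r = [random.random() for _ in range(n)]
        if any(abs(a - b) > tol for a, b in zip(matvec(A, matvec(X, r)), r)):
            return False
    return True

A = [[2.0, 0.0], [0.0, 4.0]]
X = [[0.5, 0.0], [0.0, 0.25]]  # the true inverse of A
```

So the verifier is indeed an order of n cheaper than the prover, which is exactly the asymmetry being criticized as insufficient here.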

A high-precision general-purpose FPU would still be a decent defense against GPUs and FPGAs, and it would radically increase the cost of developing an ASIC.

Nearly 3 years ago I was thinking along similar lines: pick a chaotic numerical algorithm (e.g. fractals) as the kernel of the proof of work for the (then proposed) Solidcoin v2.0.

https://bitcointalksearch.org/topic/m.537010
newbie
Activity: 29
Merit: 0
August 19, 2014, 08:06:28 AM
#5
Answering the question directly; I don't see that it would be too difficult to optimize an algorithm specifically for x86 hardware; such that an ASIC would simply be x86 with a few instructions removed. The question is whether or not that's useful.

I think having a solution that required several proof of work algorithms, i.e. one is selected somewhat randomly based on the last block might be the way to go.

A general CPU would be good for this task.
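The randomized multi-algorithm scheme described above can be sketched simply: derive this block's hash function from the previous block's hash, so no single fixed circuit suffices. The algorithm pool below uses stdlib hashes purely as stand-ins; a real coin would fix its own set:

```python
import hashlib

# Candidate algorithms; stand-ins for whatever set a real coin would fix.
ALGOS = [hashlib.sha256, hashlib.sha3_256, hashlib.blake2b, hashlib.sha512]

def pow_for_block(prev_block_hash: bytes, header: bytes) -> bytes:
    # Pick this block's hash function from the previous block's hash, so
    # miners cannot commit hardware to one algorithm: an ASIC covering only
    # some of the pool sits idle on the other blocks.
    index = int.from_bytes(prev_block_hash[:4], "big") % len(ALGOS)
    return ALGOS[index](header).digest()
```

A general CPU runs every branch of the switch acceptably, which is the point: the selection is cheap for software and expensive for fixed silicon.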
member
Activity: 87
Merit: 10
August 19, 2014, 05:49:05 AM
#4
ASICs are indeed good. Imagine a government using supercomputers for mining: it would make too much profit, and BTC would crash.
legendary
Activity: 1330
Merit: 1003
August 18, 2014, 05:54:44 PM
#3
I like ASICs because they reduce energy consumption and make the network resistant to supercomputers. Having a ton of ASICs out there makes it much less likely that the government can just build a massive mining center to overwhelm miners and execute a 51% attack. They could just buy a bunch of ASICs, I suppose, but to me it seems as though ASICs put "private sector" miners on a more even footing. I don't own any, by the way.
member
Activity: 96
Merit: 10
esotericnonsense
August 18, 2014, 05:01:39 PM
#2
Begging the question; is ASIC resistance actually desirable?

It has both upsides and downsides.

The up-side is that individual miners can make a go of it with bog standard machines.
This will likely result in more individual actors mining; though it wouldn't address the issue of them coalescing into pools.

The down-side is that standard hardware is much easier to game. Botnets, or in an extreme scenario buying a ton of hardware. (Harder with ASICs; it's quite obvious what you're doing).

I think we were reasonably lucky to not be killed in the pre-GPU era. Perhaps we were just not really a target, a 'joke' at the time.

Answering the question directly; I don't see that it would be too difficult to optimize an algorithm specifically for x86 hardware; such that an ASIC would simply be x86 with a few instructions removed. The question is whether or not that's useful.
legendary
Activity: 1204
Merit: 1002
August 18, 2014, 03:40:38 PM
#1
There's been interest in "ASIC-resistant" proof of work algorithms. SCRYPT was supposed to do that, but ASICs have been built for SCRYPT, so that didn't work. What would work?

I'd suggest trying to come up with an algorithm which requires large numbers of 64-bit floating point operations and considerable memory. Inverting big matrices (1000x1000 or up) is a good example. Any ASIC capable of doing big matrix inversions would have to have multiple 64-bit superscalar FPUs inside, plus caches. It would have to be a number-crunching CPU. It would need a gate count comparable to CPUs of equivalent compute power.

So can anybody come up with a suitable mineable algorithm with some big matrix inversions inside?