
Topic: Are GPU's Satoshi's mistake? - page 2. (Read 8213 times)

hero member
Activity: 630
Merit: 500
October 03, 2011, 04:48:48 PM
#65
The proof of work is not a hash of the entire block then?

Or is it an indirect hash of something in the header which is itself a hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?
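If it's the indirect version, I'd picture it something like this (just a rough Python sketch of my understanding; the field values are made up for illustration):

```python
import hashlib
import struct

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    """Fold a list of 32-byte txids into a single 32-byte root."""
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:            # duplicate the last hash on odd counts
            layer.append(layer[-1])
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

def block_header(version, prev_hash, root, timestamp, bits, nonce):
    """The 80-byte header: the only thing the proof of work rehashes."""
    return struct.pack("<I32s32sIII",
                       version, prev_hash, root, timestamp, bits, nonce)

txids = [dsha256(bytes([i]) * 32) for i in range(3)]
root = merkle_root(txids)
header = block_header(2, b"\x00" * 32, root, 1317680928, 0x1A0B350C, 0)
assert len(header) == 80       # transactions enter only via the merkle root
pow_hash = dsha256(header)
```

So only the 80-byte header gets rehashed per nonce; new transactions only change the merkle root field, not what miners have to retransmit per hash attempt.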
donator
Activity: 2058
Merit: 1054
October 03, 2011, 03:40:04 PM
#64
like Gavin said, the problem is message relaying
Good point. It doesn't really matter if the mining algorithm is CPU-friendly. If bitcoin usage grows significantly, other resources - mainly bandwidth - required by the mining process will probably rule out the "average guy".

Mining will probably become a specialized business despite the mining algorithm. So, better to keep the algorithm which doesn't make us vulnerable to botnets.
Mining pools. The miner only needs the block headers.
hero member
Activity: 630
Merit: 500
October 03, 2011, 03:35:25 PM
#63
like Gavin said, the problem is message relaying

Good point. It doesn't really matter if the mining algorithm is CPU-friendly. If bitcoin usage grows significantly, other resources - mainly bandwidth - required by the mining process will probably rule out the "average guy".

Mining will probably become a specialized business despite the mining algorithm. So, better to keep the algorithm which doesn't make us vulnerable to botnets.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 03, 2011, 03:24:34 PM
#62
That's the opposite of independence. It means that the same party needs to do both CPU and GPU to validate their block. So end users can't mine because they don't have GPUs. And there's no way to adjust difficulty separately since you don't have separate blocks to count.

No it wouldn't.  It would simply be a public double signing.

Two algorithms, let's call them C & G (for obvious reasons).

A pool of G miners finds a hash below their target, signs the block, and publishes it to all other nodes in the network.  The block is now half signed.
A pool of C miners then takes the half-signed block and looks for a hash that meets their independent target.  The block is now fully signed.

Simply adjust the rules so that a valid half-signature only generates a reward half the size plus half the transaction fees, and the second half does the same.  So the G miner (or pool) who half-signs the block gets 25 BTC + 1/2 the transaction fees, and the C miners who complete the half-signed block get the other 25 BTC + 1/2 the transaction fees.

A block isn't considered confirmed until both halves of the hash pair are complete and published.  If you want block signing to take 10 minutes on average, adjust the difficulty for each half so that the average solution takes 5 minutes per half.

While I doubt any dual-algorithm solution is needed, it makes more sense to require both keys; otherwise bitcoin becomes vulnerable to the weaker of the two algorithms (which is worse than having a single algorithm).
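The whole scheme fits in a few lines of Python (just a sketch: SHA-256 and SHA-512 stand in for G and C, and the targets are made generous so the demo finishes instantly):

```python
import hashlib

# Hypothetical independent targets for the two algorithms.
TARGET_G = 2 ** 252
TARGET_C = 2 ** 252

def hash_g(data: bytes) -> int:        # stand-in for the GPU-friendly algorithm
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def hash_c(data: bytes) -> int:        # stand-in for the CPU-friendly algorithm
    return int.from_bytes(hashlib.sha512(data).digest()[:32], "big")

def find_nonce(hash_fn, data, target):
    nonce = 0
    while hash_fn(data + nonce.to_bytes(8, "big")) >= target:
        nonce += 1
    return nonce

# A G miner half-signs the block, then a C miner completes it.
block = b"block contents"
g_nonce = find_nonce(hash_g, block, TARGET_G)          # half signed
half_signed = block + g_nonce.to_bytes(8, "big")
c_nonce = find_nonce(hash_c, half_signed, TARGET_C)    # fully signed

def fully_signed(block, g_nonce, c_nonce):
    half = block + g_nonce.to_bytes(8, "big")
    return (hash_g(half) < TARGET_G and
            hash_c(half + c_nonce.to_bytes(8, "big")) < TARGET_C)

assert fully_signed(block, g_nonce, c_nonce)

# Reward split: each half-signer gets half the subsidy plus half the fees.
subsidy, fees = 50.0, 2.0
g_reward = c_reward = subsidy / 2 + fees / 2   # 25 BTC + half the fees each
```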

donator
Activity: 2058
Merit: 1054
October 03, 2011, 03:12:58 PM
#61
If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose a wrong ratio one type will be more profitable than the other and be solely used.
2. Have two separate difficulty values, each computed by the time it took to find X blocks of a type compared to the desired time. To know what the desired time is you have to set what % of the blocks you want to be of this type. It needn't be 50/50 but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.

Or simply make both halves independent.  Currently the bitcoin block is signed by a single hash (well, technically a hash of the hash).  There is no reason some alternate design couldn't require 2 hashes.  A block is only valid if signed by both hashing algorithms with each hash below its required difficulty.  In essence, a double key to lock each block.  Each algorithm would be completely independent in terms of difficulty and target, and each would retarget separately.

Even if you compromised one of the two algorithms you still wouldn't have control over the block chain.  If the algorithms were different enough that no single piece of hardware was efficient at both, you would see hashing split into two distinct camps, each developing independent "economies".
That's the opposite of independence. It means that the same party needs to do both CPU and GPU to validate their block. So end users can't mine because they don't have GPUs. And there's no way to adjust difficulty separately since you don't have separate blocks to count.

It will create very weird dynamics where you first try to find one key, and if you do you power down that key and start the other to find a matching key. If you can find both keys before anyone else you get the block. And I think this means that it's enough to dominate one type because then you "only" need to find the key for the other to win a block.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 03, 2011, 02:53:59 PM
#60
If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose a wrong ratio one type will be more profitable than the other and be solely used.
2. Have two separate difficulty values, each computed by the time it took to find X blocks of a type compared to the desired time. To know what the desired time is you have to set what % of the blocks you want to be of this type. It needn't be 50/50 but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.

Or simply make both halves independent.  Currently the bitcoin block is signed by a single hash (well, technically a hash of the hash).  There is no reason some alternate design couldn't require 2 hashes.  A block is only valid if signed by both hashing algorithms with each hash below its required difficulty.  In essence, a double key to lock each block.  Each algorithm would be completely independent in terms of difficulty and target, and each would retarget separately.

Even if you compromised one of the two algorithms you still wouldn't have control over the block chain.  If the algorithms were different enough that no single piece of hardware was efficient at both, you would see hashing split into two distinct camps, each developing independent "economies".
donator
Activity: 2058
Merit: 1054
October 03, 2011, 02:36:13 PM
#59
You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.
I think you shouldn't artificially balance targets. They would both converge to a point where both are at the same level of profitability, regardless of the advances in technology. Requiring an alternate every n blocks makes sense though...
If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose a wrong ratio one type will be more profitable than the other and be solely used.
2. Have two separate difficulty values, each computed by the time it took to find X blocks of a type compared to the desired time. To know what the desired time is you have to set what % of the blocks you want to be of this type. It needn't be 50/50 but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.
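Option 2 in rough Python (the retarget constants are Bitcoin's current ones; the type_share parameter is my own illustration of the desired % per type):

```python
# Each block type retargets independently: after a window of blocks of that
# type, compare the elapsed time against that type's desired time.
RETARGET_BLOCKS = 2016
TARGET_SPACING = 600          # 10 minutes between blocks overall

def retarget(old_difficulty, elapsed_seconds, type_share):
    """type_share: desired fraction of blocks of this type (0.5 for 50/50).

    With a 50/50 split, each type should find a block every
    TARGET_SPACING / type_share seconds on average."""
    desired = RETARGET_BLOCKS * TARGET_SPACING / type_share
    factor = desired / elapsed_seconds
    # clamp like Bitcoin does, to limit how fast difficulty can swing
    factor = max(0.25, min(4.0, factor))
    return old_difficulty * factor

# If CPU blocks arrived twice as fast as desired, CPU difficulty doubles,
# pushing the two types back toward equal profitability.
assert retarget(1000.0, 2016 * 600, 0.5) == 2000.0
```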
hero member
Activity: 938
Merit: 1002
October 03, 2011, 02:07:44 PM
#58
the flaw of Bitcoin design is not the GPUs, it is the mining pools. They completely invalidate the initial assumption that every Bitcoin participant is contributing to the network security. With the pools, only pool owners do. Currently the end miners don't need a Bitcoin client and can't even know for sure which network they are mining for...

There's no need for that, the "deepbit security problem" is only because of an implementation detail. Currently the pool both handles payments and generates getwork, but there's no need for this to be the case. In theory miners can generate work themselves or get it from another node and still mine for the pool. Also, things like p2pool (as a substrate for proxy pools) can do away with the need for giant centralized pools to reduce variance.

Also, to make it easier, we could separate hashing pools from work servers. Pools get signed work units from work servers and pass work from a random source to each miner. Ordinary mining tools can be used, but in order to make sure the pool operator is honest, mining software can support requesting specific channels (round-robin style) or keep a list of signatures in order to verify received work units. This has other advantages like allowing work servers to add a small fee of their own, which would motivate running persistent nodes when running a node becomes a professional business. Most work servers would be operated by respectable Bitcoin businesses (banks, etc.) that would benefit from additional promotion, so fees would probably be close to 0 anyway.
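A toy sketch of the signature check (HMAC with a shared key is just the simplest stand-in here; a real design would use public-key signatures so miners only hold the work servers' public keys):

```python
import hmac
import hashlib

# The miner keeps a list of trusted work-server keys and verifies each work
# unit before hashing it, so a dishonest pool can't slip in work for a
# different chain. Key and work contents are hypothetical.
WORK_SERVER_KEY = b"hypothetical-work-server-key"

def sign_work(work: bytes, key: bytes) -> bytes:
    return hmac.new(key, work, hashlib.sha256).digest()

def verify_work(work: bytes, signature: bytes, trusted_keys) -> bool:
    return any(hmac.compare_digest(sign_work(work, k), signature)
               for k in trusted_keys)

work_unit = b"merkle-root-and-target-from-server"
sig = sign_work(work_unit, WORK_SERVER_KEY)
assert verify_work(work_unit, sig, [WORK_SERVER_KEY])
assert not verify_work(b"tampered work", sig, [WORK_SERVER_KEY])
```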

You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.

I think you shouldn't artificially balance targets. They would both converge to a point where both are at the same level of profitability, regardless of the advances in technology. Requiring an alternate every n blocks makes sense though...
donator
Activity: 2058
Merit: 1054
October 03, 2011, 01:19:30 PM
#57
the flaw of Bitcoin design is not the GPUs, it is the mining pools. They completely invalidate the initial assumption that every Bitcoin participant is contributing to the network security. With the pools, only pool owners do. Currently the end miners don't need a Bitcoin client and can't even know for sure which network they are mining for...
I guess you need to use bold and all caps to be heard around here. So, for the third time,

POOLS ARE ONLY A SECURITY THREAT DUE TO AN IMPLEMENTATION DETAIL THAT CAN BE EASILY FIXED. IT IS NOT A FUNDAMENTAL PROBLEM WITH THE DESIGN.
full member
Activity: 124
Merit: 100
October 03, 2011, 01:03:41 PM
#56
the flaw of Bitcoin design is not the GPUs, it is the mining pools. They completely invalidate the initial assumption that every Bitcoin participant is contributing to the network security. With the pools, only pool owners do. Currently the end miners don't need a Bitcoin client and can't even know for sure which network they are mining for...
donator
Activity: 2058
Merit: 1054
October 03, 2011, 08:36:53 AM
#55
GPUs make the network very botnet-resistant.  Bitcoin likely would have been destroyed by botnets already (out of spite or simply to see if they could) had it not been for the rise of specialized (i.e. GPU) mining.

This idea that specialized mining may defend the Bitcoin network from botnets might have merit.

I wonder if it might be possible to have the best of both worlds - where specialist mining makes commercial sense and casual CPU miners can also convert electricity to cryptocurrency at a rate that isn't prohibitive.

I'd guess you'd need to see a ratio of approximately 1.5:1 (efficiency on specialist hardware : general CPU) to have both co-exist.

You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.
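The validity rule is trivial to state in code (sketch):

```python
MAX_RUN = 8   # a block is invalid if the previous 8 blocks all share its type

def block_valid(chain_types, new_type):
    """chain_types: list of 'CPU'/'GPU' labels for the chain so far."""
    recent = chain_types[-MAX_RUN:]
    return not (len(recent) == MAX_RUN and all(t == new_type for t in recent))

chain = ["CPU"] * 8
assert not block_valid(chain, "CPU")   # a CPU botnet can't extend its own run
assert block_valid(chain, "GPU")       # but a GPU block breaks the run
```

So an attacker dominating one type can never build a secret branch longer than 8 blocks on its own.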


Those users will likely joins pools so pools which already the largest threat to decentralization still remain an issue.
I already explained that pools are not a threat to decentralization going forward.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 03, 2011, 08:35:47 AM
#54
RAM is actually incredibly cheap.  2GB costs ~$10 and that price will halve in 12 months.  While server RAM is more expensive, it is much cheaper than multiple complete computer systems.  For example, 1TB of FB-RDIMM runs ~$30K.  While that is some serious coin, it is enough for 500 instances.  $30K is far cheaper than 500 computers; the per-instance cost would only be $60.  As a % of overall computer cost, RAM has been falling over the last two decades.

As a commercial miner I would love your "solution".  I could replace my entire jury-rigged GPU farm with one rack of high-density servers and put them in a co-location cage.  The largest risk for me wouldn't be legitimate nodes, it would be botnets.  It is hard to compete with a $0 cost.

Still I don't see what "problem" having a CPU-only work unit solves.  If anything it makes the network MORE vulnerable to botnets.  Most users will simply not hash if the reward is ~$2 per year, so you put a limit on how decentralized the network will become.  Those users will likely join pools, so pools, which are already the largest threat to decentralization, still remain an issue.  The network will never be decentralized enough to be immune to botnets.  My prediction is that within a year the current CPU-only alt crypto currency (can't remember the name) will be dead or forked due to the ease of taking over the network.

So looking at cpu only vs open network (CPU, GPU, FPGA, ASIC, etc)
1) It is more decentralized - of dubious value, see the next points

2) Highly vulnerable to botnets.  Even 100K "valid nodes" would easily be crushed by Storm Trojan (230K controlled nodes on average).  A smaller network could be crushed within days.  Even if there wasn't a financial incentive, someone could hash the difficulty up to 1000x the current level and then leave, letting the network fail.

3) Unlikely to become "super decentralized" due to lack of financial incentive (most users won't hash 24/7 to earn $2 per year).  There are many more potential users of bitcoin than potential hashers.  Most people just want something with low fees they can use to safely buy and sell stuff.  They have no interest in becoming a payment processing node.

4) Still possible for commercial miners to game the system by exploiting whatever combination of CPU/memory gives them the highest return

5) Pools still remain the largest vulnerability. 

So what exactly does "CPU only" hashing achieve other than a decentralized network for the sake of decentralizing?  Sure, I grant you a CPU-only network would be more decentralized than an open network; however, I argue that the decentralization you would gain solves nothing and makes the entire network more vulnerable.
legendary
Activity: 1470
Merit: 1030
October 03, 2011, 08:20:07 AM
#53
GPUs make the network very botnet-resistant.  Bitcoin likely would have been destroyed by botnets already (out of spite or simply to see if they could) had it not been for the rise of specialized (i.e. GPU) mining.

This idea that specialized mining may defend the Bitcoin network from botnets might have merit.

I wonder if it might be possible to have the best of both worlds - where specialist mining makes commercial sense and casual CPU miners can also convert electricity to cryptocurrency at a rate that isn't prohibitive.

I'd guess you'd need to see a ratio of approximately 1.5:1 (efficiency on specialist hardware : general CPU) to have both co-exist.
donator
Activity: 2058
Merit: 1054
October 03, 2011, 08:08:15 AM
#52
Basing it on RAM is even more foolish.

While most consumer-grade hardware only supports ~16GB per system and the average computer likely has ~4GB, there already exist specialized motherboards which support up to 16TB per system.  This would give a commercial miner 4000x the hashing power of the average node.  A commercial miner is always going to be able to pick the right hardware to maximize yield.  Limiting the hashing algorithm by RAM wouldn't change that.
And they get this 16TB of RAM for free? RAM is expensive, and the kind of RAM usually used on servers is more expensive than consumer RAM. And again, even if they manage to make it a bit more efficient it's not close to competing with already having a computer.

BTW, 2GB would be a poor choice, as many GPUs now have 2GB, so the entire solution set could fit in video RAM, and GDDR5 is far faster than DDR3 (desktop RAM).
You need 2GB per instance. You can't parallelize within this 2GB to bring all the GPU's ALUs to bear. GPU computation and RAM are very parallel but not "fast"; this takes away their advantage.

Sure we don't want a monopoly, but as long as no entity achieves a critical mass we don't need 200K+ nodes either.  If you are worried about the strength of the network, a better change would be one which has gradually decreasing efficiency as a pool gets larger, i.e. a non-linear relationship between hashing power and pool size.  This would cause pools to stabilize at a sweet spot that minimizes variance and minimizes the effect of the non-linear hashing relationship.  Rather than deepbit having 50%, the next 10 pools having 45%, and everyone else making up 5%, you likely would see the top 20 pools having on average 4% of network capacity.
There's no need for that, the "deepbit security problem" is only because of an implementation detail. Currently the pool both handles payments and generates getwork, but there's no need for this to be the case. In theory miners can generate work themselves or get it from another node and still mine for the pool. Also, things like p2pool (as a substrate for proxy pools) can do away with the need for giant centralized pools to reduce variance.
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 03, 2011, 07:46:39 AM
#51
Basing it on RAM is even more foolish.

While most consumer-grade hardware only supports ~16GB per system and the average computer likely has ~4GB, there already exist specialized motherboards which support up to 16TB per system.  This would give a commercial miner 4000x the hashing power of the average node.  A commercial miner is always going to be able to pick the right hardware to maximize yield.  Limiting the hashing algorithm by RAM wouldn't change that.

BTW, 2GB would be a poor choice, as many GPUs now have 2GB, so the entire solution set could fit in video RAM, and GDDR5 is far faster than DDR3 (desktop RAM).

There is no need for everyone to be hashing.   As long as the nodes are sufficiently decentralized there is no need for them to be completely decentralized. 

Also it is unlikely you are going to achieve that level of decentralization anyway.  Currently hashing is worth ~$6,000 per day.  If you have 1000 nodes then the average node has gross revenue of $6 per day.  With 100K nodes it is $0.06 per day.

Given that botnets have up to 230K zombie CPUs, to defeat botnets through numerical superiority you would need 230K+ nodes, making the average yield ~$0.02 per day before electrical costs.  Most people aren't going to hash for $0.02 per day and pay massive electrical costs.

The idea that wide participation in hashing is a requirement for wide acceptance of usage is flawed.  How many people run a VISA or Paypal processing node?  What is the ratio of end users to processors?

Sure we don't want a monopoly, but as long as no entity achieves a critical mass we don't need 200K+ nodes either.  If you are worried about the strength of the network, a better change would be one which has gradually decreasing efficiency as a pool gets larger, i.e. a non-linear relationship between hashing power and pool size.  This would cause pools to stabilize at a sweet spot that minimizes variance and minimizes the effect of the non-linear hashing relationship.  Rather than deepbit having 50%, the next 10 pools having 45%, and everyone else making up 5%, you likely would see the top 20 pools having on average 4% of network capacity.

I am not saying we even need to or should do that, but that would attack the real problem, not the flawed belief that GPUs make the network weaker. GPUs make the network very botnet-resistant.  Bitcoin likely would have been destroyed by botnets already (out of spite or simply to see if they could) had it not been for the rise of specialized (i.e. GPU) mining.
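One way to read the "non-linear relationship" idea in a few lines (the exponent is purely illustrative):

```python
# Make a pool's effective hashing contribution grow slower than its raw hash
# power, so a pool loses efficiency as it grows.
def effective_power(hash_share, exponent=0.8):
    """Raw hash share in [0, 1] -> effective share (exponent < 1 means
    diminishing returns for a single large pool)."""
    return hash_share ** exponent

# Two 25% pools out-earn one 50% pool, so pools stabilize at a smaller size
# instead of one pool absorbing the whole network.
assert 2 * effective_power(0.25) > effective_power(0.5)
```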
donator
Activity: 2058
Merit: 1054
October 03, 2011, 03:22:43 AM
#50
Wouldn't trying to keep changing things all the time result in fragmentation of the network as bunches of people get too lazy or just simply are not all that computer savvy to feel comfortable constantly upgrading things and stay with their old stuff?
Upgrading the client every so often is good practice anyway. If the big players agree to the change everyone else will just have to follow. Those that can't be bothered to keep up are better off using an eWallet rather than a client. It's not essential that we actually change the hashing function frequently, only that we are prepared for the contingency. The "change every year" plan was just an example of how we could prevent specialization if we really wanted to and all else failed.
hero member
Activity: 616
Merit: 500
Firstbits.com/1fg4i :)
October 03, 2011, 01:37:37 AM
#49
Wouldn't trying to keep changing things all the time result in fragmentation of the network as bunches of people get too lazy or just simply are not all that computer savvy to feel comfortable constantly upgrading things and stay with their old stuff?
donator
Activity: 2058
Merit: 1054
October 03, 2011, 12:08:43 AM
#48
only profitable for those with the most (and fastest) CPUs and the resources needed to support them (electricity, etc)
It's not about quantity. Someone with 1000 CPUs will make 1000 times the revenue, but with 1000 times the cost. It's about efficiency, the cost per bitcoin generated (where all costs are considered - electricity, hardware, maintenance...). If I have just 1 CPU with the same efficiency, I can also profit.

those with the skills and resources will be the ones getting the profits
Skills, resources and opportunities. Someone who has a computer he bought for other purposes, which happens to be able to mine, has an opportunity to profit. At-home miners have several other big advantages over dedicated businesses. If there's really no specialized hardware, all the business has is some more technical knowledge and slightly better negotiated power prices, and it simply can't compete.


1) It still won't be "fair".  Sure, if you can only use CPUs then the traditional mining farm becomes kaput.  It still doesn't give the average user an "equal share".  What about IT department managers who may have access to thousands of CPUs?  They dwarf the returns that an "average" user can ever make.  You simply substitute one king of the hill for another.
Nobody in the thread said it should be "fair". It's about making Bitcoin decentralized per the vision, and making it more secure (by making it more difficult for an attacker to build a dedicated cluster).

2) It makes the currency very very very vulnerable to botnets.  The largest botnet (Storm Trojan) has roughly 230,000 computers under its control.  It could instantaneously fork/control/double-spend any crypto currency.  There are far fewer computers with high-end GPU systems, they are more detectable when compromised, and they tend on average to be owned by more computer-savvy users, making controlling an equally powerful GPU botnet a more difficult task.
Then solve that problem. Botnets are a potential problem now but they will become less so as Bitcoin grows. In any case they seem like a challenge to overcome rather than a fatal flaw in CPU-mining.

3) If GPUs were dead then FPGAs would simply reign supreme.  CPUs are still very inefficient because they are a jack of all trades.  That versatility means they don't excel at anything.  If bitcoin or some other crypto currency were GPU-immune, large professional miners would simply use FPGAs and drive the price down below the electrical cost of CPU-based nodes.  The bad news is it would make the network even smaller and even more vulnerable to a botnet (whose owner doesn't really care about electrical costs).
The point with CPU-friendly functions is RAM. With a given amount of RAM you can only run so many instances, so you're bound by your sequential speed. Unless FPGAs can achieve a big advantage over CPUs in this regard, they will just be too expensive to make it worthwhile.

4) Technology is always changing.  GPUs are becoming more and more "general purpose".  It is entirely possible that code which runs inefficiently on today's GPUs would run much more efficiently on next-generation GPUs.  So what, are we going to scrap the block chain and start over every time there is an architectural change?
Who said anything about scrapping the block chain? We're using the same block chain but deciding that starting with block X a different hash function is used. And yes, having a policy for updating the hash function is good for this and other reasons.

5) CPUs will become much more GPU-like in the future.  The idea of using multiple cores with redundant, fully independent implementations is highly inefficient (current Phenom and Core i designs).  To continue to maintain Moore's law, expect more designs like the AMD APU which blend CPU and GPU elements.  Another example is the PS3 Cell processor, with a single general-purpose core and 8 "simple" number-crunching cores.  As time goes on these hybrid designs will become the rule rather than the exception.  It would be very silly if any crypto currency were less efficient on future CPU designs than current ones out of some naive goal of making it "GPU proof".
Again, RAM. You can choose a hash function which requires 2GB RAM per instance. Then the amortized cost of the CPU time will be negligible, and your computing rate is determined strictly by your available RAM and sequential speed.
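Something in the spirit of scrypt, as a toy sketch (scaled down to kilobytes so it runs as a demo; a real deployment would size the buffer in gigabytes):

```python
import hashlib

# Toy memory-hard hash: fill a buffer sequentially, then read it back in a
# data-dependent order. Each instance must hold the whole buffer in RAM, so
# throughput is bounded by available RAM and sequential speed, not ALU count.
def memory_hard_hash(data: bytes, n_slots: int = 1024) -> bytes:
    slots = []
    h = hashlib.sha256(data).digest()
    for _ in range(n_slots):              # sequential fill, ~32 bytes per slot
        h = hashlib.sha256(h).digest()
        slots.append(h)
    acc = h
    for _ in range(n_slots):              # data-dependent reads defeat streaming
        idx = int.from_bytes(acc[:4], "big") % n_slots
        acc = hashlib.sha256(acc + slots[idx]).digest()
    return acc

assert len(memory_hard_hash(b"block header")) == 32
```

A GPU could still run this, but only as many instances as fit in its memory, which removes the massive parallelism advantage.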
hero member
Activity: 798
Merit: 1000
October 02, 2011, 09:47:27 PM
#47
No. The causal chain is Market => Price => Total mining reward => Incentive to mine => Amount of miners => Difficulty => Cost to mine. The other direction, the direct influence of the specifics of mining on the coin price, is negligible. If cost per hash was lower, there would simply be more hashes and larger difficulty thus maintaining market equilibrium. (though there are indirect effects due to network security, popularity etc).
The causal chain is Amount Hoarded => Scarcity => Price => How much can I sell that will leave miners still profitable for the security of the network so I can sell more later => Cost to mine
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 02, 2011, 04:33:53 PM
#46
Making a crypto currency GPU immune is a stupid and naive goal.

1) It still won't be "fair".  Sure, if you can only use CPUs then the traditional mining farm becomes kaput.  It still doesn't give the average user an "equal share".  What about IT department managers who may have access to thousands of CPUs?  They dwarf the returns that an "average" user can ever make.  You simply substitute one king of the hill for another.

2) It makes the currency very very very vulnerable to botnets.  The largest botnet (Storm Trojan) has roughly 230,000 computers under its control.  It could instantaneously fork/control/double-spend any crypto currency.  There are far fewer computers with high-end GPU systems, they are more detectable when compromised, and they tend on average to be owned by more computer-savvy users, making controlling an equally powerful GPU botnet a more difficult task.

3) If GPUs were dead then FPGAs would simply reign supreme.  CPUs are still very inefficient because they are a jack of all trades.  That versatility means they don't excel at anything.  If bitcoin or some other crypto currency were GPU-immune, large professional miners would simply use FPGAs and drive the price down below the electrical cost of CPU-based nodes.  The bad news is it would make the network even smaller and even more vulnerable to a botnet (whose owner doesn't really care about electrical costs).

4) Technology is always changing.  GPUs are becoming more and more "general purpose".  It is entirely possible that code which runs inefficiently on today's GPUs would run much more efficiently on next-generation GPUs.  So what, are we going to scrap the block chain and start over every time there is an architectural change?

5) CPUs will become much more GPU-like in the future.  The idea of using multiple cores with redundant, fully independent implementations is highly inefficient (current Phenom and Core i designs).  To continue to maintain Moore's law, expect more designs like the AMD APU which blend CPU and GPU elements.  Another example is the PS3 Cell processor, with a single general-purpose core and 8 "simple" number-crunching cores.  As time goes on these hybrid designs will become the rule rather than the exception.  It would be very silly if any crypto currency were less efficient on future CPU designs than current ones out of some naive goal of making it "GPU proof".