Some people in another thread thought this was an interesting idea, so I'm going to expand on it a bit. Before I lose you at the wall of text, the basic idea is this: the algorithm(s) hard-coded into cryptocurrencies are also hard-coded into the ASIC chips. If the algorithm changed dynamically in an unpredictable fashion, ASICs would be much harder, if not impossible, to design, and any that were designed would certainly be orders of magnitude less efficient than their static counterparts.
Cryptocurrency mining is essential. It's how the network continues. It creates blocks, which, to simplify, hold chunks of transactions--inputs and outputs that move money around. A currency is useless if you can't spend it. You can't spend current cryptos without putting the transaction into a block, and you can't put the transaction inside a block if no one is making blocks. The trouble comes when people's ability to create blocks is abused. This can be done by a multitude of people, in a variety of ways, for many different reasons. Someone doesn't like a competing currency? A business that deals with monetary transactions doesn't like the system because they can't control it? A corrupt government wants to destroy free currency? They could execute a 51% attack and rewrite the attacked parts of the chain. It's important to note that, given enough computational power, a knowledgeable attacker could rewrite the blockchain all the way back to the latest block hard-coded into the client and accepted by the network. They would release new blocks at a higher difficulty than the original chain, 'forking' off and replacing blocks on the chain.
51% attacks involve a ton of computational power, and a ridiculous amount of electricity. However, for a large vested interest, they're certainly not out of the question, no matter the coin. If a large institution really wanted to take down Litecoin, for example, they could develop a high-memory ASIC chip made for Scrypt(1024,1,1) calculations. And they could mine faster than the current network is mining. Sure, it would be expensive. But they also have a lot of money.
Another problem is the ASIC mining scene. Mining was designed to be free, distributed, open to anyone. However, with the advent of ASIC devices, mining has been consolidated. It has become a rich-get-richer game. People have lost fortunes investing in ASICs with no clear delivery dates or ROI. While this is part of a free market, many see it as undesirable, myself included. I personally have invested significant money into ASICs, and for the most part, I'm happy with my investment. This isn't some deep-seated hatred of ASIC companies, but rather a general analysis of the negative effects ASICs have had on the blockchain. Unlike consumer-oriented hardware (such as CPUs and GPUs), ASICs are not widely available to consumers, they aren't carefully monitored for safety, and they don't have large, reliable companies behind them. It's rare that someone runs a scam around making and selling their own GPU chips; I haven't seen it yet. However, it's important to note that many scammers latch onto the idea of creating fake ASIC companies to rip consumers off.
However, these above points aren't even the most important focus. They all revolve around the mining. Sure, mining is important, but it isn't the whole coin. Although it's rather unlikely due to the nature of hashing algorithms, what if a way of reversing SHA256 became possible? Let's step back and take a wider look at what hashing algorithms are. To simplify it, imagine I have a function called multiply. It accepts two numbers, and returns the result of multiplying them together. There's nothing special. Even this is, in its simplest form, a hashing algorithm. It loses some sort of data, some sort of understanding. Given this, say I call mult(6, 9). It returns 54. Now, it's trivial to list the possible inputs to mult(something, somethingelse) to give me 54, right? You could have (1,54), (2,27), (3,18), and (6, 9). Sure, 6,9 is in there. However, without any additional context, you can't tell me for sure which one is the original input. You could even say that the above possibilities all result in a collision--where two inputs result in one output.
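To make that ambiguity concrete, here's a tiny Python sketch (purely illustrative--the function names are mine) that enumerates every factor pair producing 54:

```python
# Toy "hash": multiply two positive integers. Many inputs map to one output,
# so the output alone can't tell you which pair was the original input.
def mult(a, b):
    return a * b

def preimages(target, limit=100):
    """Enumerate pairs (a, b) with a <= b whose product is target."""
    pairs = []
    for a in range(1, min(limit, target) + 1):
        if target % a == 0 and a <= target // a:
            pairs.append((a, target // a))
    return pairs

print(mult(6, 9))       # 54
print(preimages(54))    # [(1, 54), (2, 27), (3, 18), (6, 9)]
```

All four pairs are equally valid preimages, which is exactly the information loss described above.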
Now, let's look at an actual example of how this loss of data results in protection. Hashing algorithms rely on circuit logic at some level. If you want some additional details, you can check out
http://m.metamorphosite.com/one-way-hash-encryption-sha1-data-software. At any rate, say we have an XOR (Exclusive-OR) function, where true,true = false; true,false = true; false,true = true; and false,false = false. Let's put this in binary:
1010010110101101 XOR:
0110101100101011 =:
1100111010000110
Given 1100111010000110, could you tell me the inputs? Every possible 16-bit first input pairs with exactly one second input to produce it, so there are 2^16 candidate pairs, and the output alone can't tell you which was used. As a result, it's hard to reverse. Now, imagine that both of those input lines were themselves the results of other such functions, building a tree of functions feeding into one another, with circuit logic underneath. The space you'd have to brute-force to reverse the algorithm quickly becomes unthinkably large. And even if you did successfully reverse an algorithm like this, you would end up with a list of inputs that collide--that give identical output for different inputs--rather than the one original input.
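The same exercise in a few lines of Python (purely illustrative): any 16-bit value is a valid first input, because the second input is then forced to be the XOR of the output with it.

```python
# XOR the two 16-bit strings from the example above.
a = int("1010010110101101", 2)
b = int("0110101100101011", 2)
out = a ^ b
print(format(out, "016b"))  # 1100111010000110

# For any claimed first input x, the second input must be out ^ x,
# so every one of the 2**16 values of x yields a valid preimage pair.
count = sum(1 for x in range(2**16) if (x ^ (out ^ x)) == out)
print(count)  # 65536
```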
Given the complexity of the algorithm, it's hard to prove that there isn't some method to reverse it. If a reversal mechanism were discovered, it would be a big deal. And with billions, if not trillions, of dollars at stake, if it's possible, it's likely someone will find a way. Whatever your faith (or lack thereof) in modern cryptographic one-way hash functions, when creating a currency it would be wise to build it in such a way that one broken hash function wouldn't be the end of the project.
How to protect against 51% attacks

51% attacks, as explained earlier, rely on the ability to create blocks at a higher difficulty than the network itself can produce. One powerful step forward in preventing 51% attacks and further distributing mining/block production has been the idea of Proof-of-Stake, where you can mine blocks with hashing power, but a large percentage of blocks are also created by owning large amounts of the currency. As a result, anyone who wanted to 51% attack the coin would have to hold a significant stake in it, and would therefore be at an economic disadvantage by attacking. However, for large institutions, it may still make financial sense (in a twisted way) to invest millions, or even billions, into a currency for the ability to destroy it.
Proof of Stake is a strong contender. However, another issue with 51% attacks is a group's ability to obtain ridiculous amounts of computational power. For simplicity, consider a coin that only has proof of work (hashing) and no proof of stake, and say the total network hash rate is 1TH/s. Each 7970 can generate roughly 500MH/s. If someone could create an ASIC, it would not be hard for them to obtain 1TH/s of computational power. Considering only electrical consumption, a Bitcoin ASIC is currently somewhere around 200-400x more efficient than GPUs at the same hash rate. This means a long-term attack only requires a large initial investment: even if a 1TH/s GPU farm were somewhat cheaper to build, power consumption would dominate the cost of a long-term 51% attack. Therefore, we can conclude that ASICs make the network easier to attack.
Many people will tell you that the advent of ASICs makes Bitcoin more secure. This is true, but only because Bitcoin is vulnerable to ASICs in the first place: once ASICs exist, the network is safer when they are in the hands of ordinary people, since that makes an ASIC-based attack harder. However, Bitcoin would be better off if ASICs couldn't be made for it at all. A smaller financial investment would protect the network with the same power, and a huge attack on Bitcoin would require exorbitant amounts of consumer hardware, rather than a large one-time investment followed by a long, cheap attack. GPUs use a lot of power--we're talking orders of magnitude more--and cost much more to run over a period of time, making costly attacks even more costly. That exorbitant electricity bill would make a long-term attack infeasible, and short-term attacks much more expensive. Even if the attacking party were willing to fabricate their own GPUs to lower the cost per unit, they would still be consuming a huge amount of power to run such a big farm, and the added complexity of creating GPUs or high-performance CPUs, even geared towards a specific purpose, would push the cost per unit much higher.
To be clear: if an ASIC can be created for a coin, then the coin is most secure once ASICs for it are publicly available. If ASICs are possible but not yet public, a private group needs much less investment to pull off a successful attack; once ASICs are out in the wild, that group needs far more ASIC hardware and electricity to outpace the network.
So, we can conclude that preventing 51% attacks calls for a mix of proof-of-stake block minting and an algorithm (or algorithms) that can't be calculated efficiently by ASICs.
Preventing reversal attacks

Though unlikely, it is at least conceivable that an algorithm could be reversed, so we have to build a coin with the assumption that one or two of the algorithms we use might eventually be broken. If a currency uses only one algorithm, that algorithm being reversed would essentially spell doom for the currency. Patches could be applied and a new version released, but almost all faith in the currency would be lost, and when stakes are in the billions or trillions of dollars, a network hiccup like this is a HUGE deal--it would likely spell the end of public faith in cryptocurrencies.
The obvious solution is simply to use multiple algorithms. However, there are some not-so-evident worries about chaining hash functions together. The major one is entropy. Entropy is a measure of chaos--roughly, of how many distinct possibilities there are. Say we generate random strings that are 16 characters long, drawing from 8 possible characters for each position. We would have 8^16 possibilities (48 bits of entropy).

Hash functions strive to be one-to-one mappings over a given sample space. However, because they lose data, collisions are possible: the input space is larger than the output space, so the math guarantees collisions exist, even though we have not yet found a single collision for SHA256. As a result, a hash function effectively loses entropy--its output carries slightly less entropy than its input. Since this loss compounds from round to round, iterating SHA256 trillions upon trillions of times would shrink the set of reachable outputs further and further; if we knew the exact average entropy loss per round, we could even estimate how quickly. But that is neither the point nor the intention of this post. The point is simply that running the same algorithm a ridiculous number of times isn't smart. SHA256(SHA256("input")) is fine--the entropy loss is minimal and can be ignored. We would only have to worry if SHA256 were calculated a truly enormous number of times.
New algorithms are touted as having very little entropy reduction, though that's a hard property to prove. (The existence of collisions is not surprising; finding one is.) If a new coin nested a set of hashing algorithms (say a, b, c, d, e, f, g, and h) as a(b(c(d(e(f(g(h(input)))))))), the output would have entropy slightly lower than that of the lowest-entropy algorithm in the chain. So the only real weakness of this construction is one of the algorithms losing a significant portion of entropy over the input's sample space.
While not a huge concern, this entropy reduction would have to be accounted for, perhaps even if only to make sure that, in a sample set of significant size (millions or billions) of hashes, collisions don't occur.
So, if we have 8 hashing algorithms, all 8 would have to be broken to attack the network via reversal. For example, if the outermost algorithm a were reversible, an attacker would effectively only be up against b(c(d(e(f(g(h(input))))))). But if any of the non-outer-layer algorithms were broken, network security against hash reversal wouldn't be affected at all, since it would almost certainly be faster to compute a candidate hash forward than to reverse a potentially-accepted output through the outer layers. Therefore, you would put your weakest algorithm in the center and your strongest on the outside. Additionally, entropy permitting, you could nest multiple times, so that a, b, c, d, e, f, g, and h each get run hundreds of times per completed 'hash round'. This makes an attack on function a orders of magnitude less significant, as a would still have to be calculated normally every time it is used except one. So nesting multiple algorithms, done right, only strengthens the currency against algorithm reversal, and any losses in entropy are minimal and insignificant.
Alright, so you do a bunch of functions. Couldn't an ASIC be programmed to do that? Certainly! We require another change to become ASIC-resistant, or even ASIC-proof: dynamic algorithms. Yes--a, b, c, d, e, f, g, and h have to change in a pre-determined but unpredictable manner. This doesn't mean the dev changes the code every six months and has other people rewrite miners. It means the network itself reorganizes the hashing functions. For this to work, the hashing functions would have to be modular to an extent. When the network retargets difficulty, it would also retarget the hashing algorithm: based on data from the hash of the latest retarget block, it would systematically create a new hashing algorithm by rearranging the modular portions of the existing ones.

Say, for example, that algorithm a does a ton of fixed work and then has 32 modular steps, each with a negligible effect on entropy. The last sixteen bits of the retarget block's hash (the block right before the retarget) could be used to determine the order of the steps, in some way that is easy for clients to calculate but impossible to predict before the retarget block is mined. For example, maybe the number of 1-bits determines which step runs first: sixteen 1's (so 1111111111111111 was the last 16 bits of the retarget-worthy hash) means step 7 goes first, fifteen 1's means step 3 goes first, and so on. A counting method like this is nice because it doesn't line the choice of the first step up with the method for choosing the 2nd step, which could instead be based on, say, the number of 1's in the first four and last four bits. With 32 modular components per algorithm, some usable multiple times, each algorithm would have a huge number of possible orderings. As a result, it would be infeasible to design an ASIC for every possibility.

Some steps could be high-memory, so that from when a high-memory configuration is selected until the next retarget, CPU miners would have an advantage. Other steps would be extremely low-memory, favoring GPUs when selected. On average, though, the algorithms would be nicely balanced: some high-memory steps, but not enough to push GPUs out of the game, while still allowing CPUs a bit of room to play catch-up. As a side note, the order in which the algorithms themselves are performed could also change, so a could come before b one period and after it the next, even as a itself is a different-but-similar arrangement of steps each time.
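Here's one hedged sketch of how clients might derive a step order from the retarget block's hash. Seeding a Fisher-Yates shuffle with the hash's last two bytes is my own illustrative choice, not a concrete proposal; a real design would want vetted mixing:

```python
import hashlib

NUM_STEPS = 32  # modular steps in the hypothetical algorithm

def step_order(retarget_hash: bytes) -> list:
    """Deterministically derive a step permutation from a block hash.

    Every client can compute this the moment the retarget block appears,
    but no one can know it before that block is mined.
    """
    # Seed from the last 16 bits of the block hash.
    seed = int.from_bytes(retarget_hash[-2:], "big")
    order = list(range(NUM_STEPS))
    # Fisher-Yates shuffle driven by a simple LCG (illustrative only).
    for i in range(NUM_STEPS - 1, 0, -1):
        seed = (seed * 1103515245 + 12345) % 2**31
        j = seed % (i + 1)
        order[i], order[j] = order[j], order[i]
    return order

block_hash = hashlib.sha256(b"retarget block").digest()
print(step_order(block_hash))
```

Each distinct 16-bit seed yields a fixed permutation, so all miners and clients agree on the new algorithm without any coordination beyond reading the chain.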
This dynamic idea would, as I understand it, require GPU miners to regenerate and recompile a kernel in order to stay efficient. If programmed correctly, though, this could be automatic, with only minimal downtime at retarget blocks. The retarget interval should be short enough that an ASIC couldn't be produced within a single algorithm's lifetime (2 weeks is far shorter than necessary at this point in time, but might be a nice number simply because it would make even attempting an ASIC a pipe dream) and long enough that GPU miners aren't constantly shutting down to rebuild kernels.
Finally, it's important to note that such a network of evolving hashing algorithms would require significant thought in the algorithm design. Some of the steps could be tried-and-true algorithms such as SHA256, BLAKE, Keccak, etc. Others could be created specifically with modularity in their steps by design. The step order would be similar to a salt in function, but different in implementation. To create a secure-but-usable dynamic algorithm system, some algorithms could remain constant but buried deep in the chain, and constantly reordered, so that an ASIC couldn't be created to do just a portion of the hashing function: splitting the work, with an ASIC doing part of a hash and a GPU or CPU doing the rest, is completely infeasible due to latency and bandwidth. ASICs are effective because they check the hashes themselves; they aren't sending billions of hashes per second back to the computer for checking. Streaming a trillion hashes per second (1TH/s) back to the host, at 32 bytes per SHA256 digest, would require a minimum of 32TB of bandwidth per second. ASICs wouldn't be useful even if they could do a portion of the hashing, especially if that portion were nested inside and required shuttling data from the ASIC to general-purpose hardware and back multiple times.
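The bandwidth claim is easy to sanity-check with back-of-the-envelope arithmetic:

```python
# Cost of streaming every candidate digest from chip to host at 1 TH/s.
hashes_per_second = 10**12          # 1 TH/s = one trillion hashes per second
bytes_per_digest = 32               # size of one SHA-256 digest
bandwidth = hashes_per_second * bytes_per_digest
print(bandwidth / 10**12, "TB/s")   # 32.0 TB/s
```

No practical interconnect moves data at that rate, which is why partial-hash ASICs that offload checking to a host don't work.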
Sorry about the wall of text. It takes a while to explain this idea, and the people most likely to be interested would read this whole thing, I hope. If you have any additional questions, PM me. If you want to create this algorithm, be my guest, I would love to see it implemented. I don't currently have the programming skills required to complete this project, I simply wanted to introduce the idea so that perhaps someone with the particular skill-set required would pick up the lead.