Four months ago? That comment was left an hour ago and explicitly links to a URL that didn't exist until today.
This is a great example of how the Bitcoin core developers are shooting themselves in the foot by attacking people. The developers want a closed little world where they make all the decisions while, at the same time, they want mass adoption so their cache of Bitcoins becomes more valuable. You can't have it both ways.
I'm not sure what you want here. I'm just stating the facts: I was confused by the response because it appeared to be claiming that I was posting old material, not stuff which hit my inbox moments before.
Needless to say, I'm not entirely enthused to find out about something new through that kind of rude message. If someone doesn't want me pointing it out, then they should refrain from sending it and darkening my inbox with it.
To the best of my ability to discern, this is an altcoin / opencoin competitor with a number of seemingly marginal ideas being thrown at it, and it is being, in my opinion, somewhat deceptively marketed as improvements to Bitcoin when it is more of an alternative. The people responsible for it apparently (see the post) view it as a mechanism for undermining Bitcoin, or at least the people who've been working on Bitcoin thus far. I, as you might expect, think that's disappointing. I see now that the message has _since_ been deleted, so it would seem the team here isn't entirely lacking in PR skills.
I'm usually pretty happy in the rare event that I see genuinely new ideas explored in altcoins, less so to see them promoted by trading on Bitcoin's brand, and less so still when they're really just shallow rehashings of the ideas in Bitcoin, often twiddled and changed without a deep understanding of the implications. They might have been able to convince some VC, who hasn't had the benefit of working in this space for years and seeing the same bad ideas over and over again, to fund this effort, but that doesn't mean I have to be impressed by it. If you want to fault me for sharing my thoughts, be my guest, but I continue to see part of my value in this community as being a person who sees through obfuscation and marketing.
Unfortunately for whatever "It's over" moment Hos has planned for me, it doesn't seem likely that his team is technically prepared to pull it off.
Allow me to demonstrate.
The POW is an easily understood, relatively isolated part of a POW blockchain cryptocurrency. BitShares claims to have a novel POW with a number of desirable properties, so let's take a look at it and see if it lives up to their claims.
Quoting from their PR piece:
Significantly, the Invictus Project uses a different proof of work to the two incumbent ones (Bitcoin’s SHA-256, and Litecoin’s Scrypt). The former rewards ASIC miners, while the latter rewards GPUs. The Invictus one will focus on general purpose CPUs, keeping them 32-64 times faster than a GPU for mining. To do this, it has a high RAM requirement, and also relies on sequential data processing.
I'm pretty skeptical of the claimed advantages altcoins often make for their modified POWs. Like we saw with Litecoin's "GPU-proof" claims, they don't usually hold up, or have unexpected effects if they have any effect at all. At least in the case of Litecoin they used a published cryptographic construct which had an extensive paper providing some pretty strong substantiation for its properties; novel cryptography is a really easy way to burn yourself.
The structure of their POW is that it takes the SHA256 hash of the block header and then repeats it to fill up a 128MByte buffer. It then runs an apparently home-brew "memory hard" function over the buffer which performs a number of xors and makes data-dependent swaps of the data. Then it runs a 128-bit non-cryptographic "CRC" function over the data and computes a SHA512 of the result.
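To make that structure concrete, here is a minimal sketch of just the buffer-expansion step as described above and as modeled by my test harness further down (my own illustration, not their code; the names are mine):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MB128 (128u*1024*1024)

/* Illustration only: expand a 32-byte header hash into the 128MB
 * working buffer by repeating it end to end. In the real scheme the
 * input would be the SHA256 of the block header; any 32 bytes do here. */
static uint64_t *expand_seed(const uint64_t seed[4])
{
  uint64_t *buf = malloc(MB128);
  size_t i;
  if (!buf) return NULL;
  for (i = 0; i < MB128/sizeof(uint64_t); i += 4)
    memcpy(&buf[i], seed, 4*sizeof(uint64_t));
  return buf;
}

int main(void)
{
  uint64_t seed[4] = {1, 2, 3, 4}; /* stand-in for a SHA256 output */
  uint64_t *buf = expand_seed(seed);
  if (!buf) return 1;
  /* Every word of the buffer is just seed[i & 3]; this 4-word
   * periodicity is what the shortcut below will exploit. */
  printf("buf[12345] = %llu, seed[12345 & 3] = %llu\n",
         (unsigned long long)buf[12345],
         (unsigned long long)seed[12345 & 3]);
  free(buf);
  return 0;
}

The thing to notice is that every word of that huge buffer is determined by its index mod 4.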
The code remarks:
* This proof-of-work is computationally difficult even for a single hash,
* but must be so to prevent optimizations to the required memory foot print.
* The maximum level of parallelism achievable per GB of RAM is 8, and the highest
* end GPUs now have 4 GB of ram which means they could in theory support 32
* parallel execution of this proof-of-work.
The comments continue for a screen full describing the characteristics and merits of the particular algorithm.
Indeed, as implemented, it's very expensive to run this function. This has some pretty serious negative consequences, since any client of this network, and any full node catching up, will need to do a lot of computation just to check the validity of blocks. That matters because making valid-looking blocks cheap to check is normally an important anti-DOS measure. But I assume they realize this and are prepared to deal with the consequences in exchange for this function's benefits.
But does it live up to its claims?
Here is the actual code:
fc::uint128 proof_of_work( const fc::sha256& in, unsigned char* buffer_128m )
{
  const uint64_t s = MB128/sizeof(uint64_t);
  uint64_t* buf = (uint64_t*)buffer_128m;
  sfmt_t gen;
  sfmt_init_by_array( &gen, (uint32_t*)&in, sizeof(in)/sizeof(uint32_t) );
  sfmt_fill_array64( &gen, buf, s );
  uint64_t data = (buf+s)[-1];
  for( uint32_t x = 0; x < 1024; ++x )
  {
    uint64_t d = data%s;
    uint64_t tmp = data ^ buf[d];
    std::swap( buf[tmp%s], buf[d] );
    data = tmp * (x+17);
  }
  return fc::city_hash_crc_128( (char*)buffer_128m, MB128 );
}
I don't believe that it does. Within 30 seconds of seeing the code I made the following observations:
- The interior 128-bit bottleneck opens it up to a collision attack with an average work factor of 2^64 (and 2^64 storage, or a bit more work and less storage by constructing a rainbow table); by the birthday bound, any function whose state is squeezed through 128 bits can be collided in roughly 2^64 evaluations. I consider this mostly a certificational weakness and not a practical attack, though in theory it could be performed today, especially if the memory-hardness is eliminated, allowing a cheap and fast ASIC or FPGA implementation.
- The simple xor-and-constants construction would almost certainly yield to boolean logic simplification, potentially even subsuming the "CRC" step, since that step isn't intended to be a cryptographic function.
- The memory-hardness can be removed by a probabilistic approximation of the POW function, built on the observation that 1024 is a very small fraction of 16777216, so it's unlikely that any iteration of the interior loop will read from an entry which has already been updated. It could just be run 1024-way parallel and will produce a correct result most of the time (a sketch of this variant appears after the demo output below).
- Alternatively, the memory-hardness can be removed by pivoting the algorithm about its main loop and using 1024 words of storage for the 1024 possible writes the algorithm can make. This is an exact implementation, not probabilistic.
I went ahead and implemented the last approach in plain ANSI C, since it seemed the most convincing:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/*Original Invictus Innovations BitShares POW inner loop.*/
void orig_f(uint64_t buf[16777216])
{
  uint64_t data=buf[16777216-1];
  uint32_t x;
  for(x=0;x<1024;++x)
  {
    uint64_t t2;
    uint64_t d=data&16777215;
    uint64_t tmp=data ^ buf[d];
    t2=buf[tmp&16777215];
    buf[tmp&16777215]=buf[d];
    buf[d]=t2;
    data=tmp*(x+17);
  }
}
/*Past state lookup function.
 *In the probabilistic/parallel version of this attack, this
 * function is eliminated and we would just assume that there
 * were no hits on the prior modification table and just merge
 * the results at the end.
 *In hardware this would get unrolled into mux-trees that worked in constant time.
 * (or more likely, you'd just use the probabilistic version in hardware)*/
static inline uint64_t cpx(const uint64_t buf[4], const uint64_t mem[2048], const int loc[2048], int i, int x)
{
  int j;
  uint64_t out;
  out=buf[x&3];
  for(j=0;j<(i<<1);j++) {
    int pos=(i<<1)-1-j;
    if(loc[pos]==x) {
      out=mem[pos];
      break;
    }
  }
  return out;
}
/*Version of orig_f that doesn't need lots of memory*/
void not_mh(uint64_t buf[16777216])
{
  /*Doing this with 1024 words instead of 2048 would
   * be pretty trivial since one of the two updates is always
   * adjusting the prior read.*/
  int loc[2048];
  uint64_t mem[2048];
  uint64_t data=buf[3]; /*Note we never read past buf[3]*/
  uint32_t x;
  for(x=0;x<1024;++x)
  {
    uint64_t t2;
    uint64_t d=data&16777215;
    uint64_t lu0=cpx(buf,mem,loc,x,d);
    uint64_t tmp=data ^ lu0;
    uint64_t mt=tmp&16777215;
    t2=cpx(buf,mem,loc,x,mt);
    loc[x<<1]=mt;
    mem[x<<1]=lu0;
    loc[(x<<1)+1]=d;
    mem[(x<<1)+1]=t2;
    data=tmp*(x+17);
  }
  /*If the CRC were a real CRC it would absolutely be possible to avoid
   * running it on the full input size, taking advantage of the fact
   * that most of the data is repeated, it still may be for
   * city_hash_crc_128 but I haven't looked.
   *In the real code, the 'CRC' would just be made to gather from
   * the sparse array. Here we just write it back out to make it easy to
   * compare.*/
  for(x=0;x<2048;x++)buf[loc[x]]=mem[x];
}
int main(void)
{
  int i;
  int tries;
  int match;
  uint64_t *buf;
  uint64_t *buf2;
  FILE *r;
  buf=malloc(16777216*sizeof(uint64_t));
  if(!buf)return 1;
  buf2=malloc(16777216*sizeof(uint64_t));
  if(!buf2)return 1;
  /*Rather than input from SHA256, we just use 256 bits from /dev/urandom.*/
  for(tries=0;tries<100;tries++) {
    r=fopen("/dev/urandom","rb");
    if(!r)return 1;
    if(fread(buf,sizeof(uint64_t),4,r)!=4)return 1;
    fclose(r);
    for(i=1;i<4194304;i++)memcpy(&buf[i*4],buf,sizeof(uint64_t)*4);
    memcpy(buf2,buf,sizeof(uint64_t)*16777216);
    /*Run the original "memory hard" function.*/
    orig_f(buf);
    /*Run the lol version.*/
    not_mh(buf2);
    match=1;
    for(i=0;i<16777216;i++)match&=buf2[i]==buf[i];
    if(match)printf("They match! (%llu)\n",(unsigned long long)buf[0]);
    else printf("Boo! No match. (%llu)\n",(unsigned long long)buf[0]);
  }
  free(buf);
  free(buf2);
  return 0;
}
This reduces the memory footprint from about 134,217,728 bytes (16,777,216 words × 8 bytes) to about 24,576 bytes (the 2048-entry logs: 2048 × 4-byte locations plus 2048 × 8-byte values), and it could be made half that size with slightly more work.
It outputs a whole bunch of:
They match! (10825977267245349006)
They match! (12732965183411680069)
They match! (4744567806717135351)
They match! (8276864928471265477)
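For completeness, here is roughly what the probabilistic variant from my third observation above could look like. This is only a sketch under the assumption that no inner-loop read ever lands on a slot written by an earlier iteration (1024 iterations touching a 16777216-entry table make that rare), so every read can be reconstructed from the repeating 4-word seed and the cpx search disappears entirely; prob_f and its harness are mine:

#include <stdint.h>
#include <stdio.h>

/* Probabilistic shortcut: assume buf[i] is always the untouched
 * pattern[i & 3]. No 128MB buffer and no cpx log search; the writes
 * are still logged so they can be checked/merged afterwards.
 * Returns the final 'data' value of the chain. */
uint64_t prob_f(const uint64_t pattern[4], int loc[2048], uint64_t mem[2048])
{
  uint64_t data = pattern[3]; /* == buf[16777216-1] by periodicity */
  uint32_t x;
  for (x = 0; x < 1024; ++x)
  {
    uint64_t d = data & 16777215;
    uint64_t lu0 = pattern[d & 3];   /* assumes slot d was never written */
    uint64_t tmp = data ^ lu0;
    uint64_t mt = tmp & 16777215;
    uint64_t t2 = pattern[mt & 3];   /* assumes slot mt was never written */
    loc[x<<1] = mt;
    mem[x<<1] = lu0;
    loc[(x<<1)+1] = d;
    mem[(x<<1)+1] = t2;
    data = tmp * (x + 17);
  }
  return data;
}

int main(void)
{
  uint64_t pattern[4] = {1, 2, 3, 4}; /* stand-in for the repeated hash */
  int loc[2048];
  uint64_t mem[2048];
  printf("final data word: %llu\n",
         (unsigned long long)prob_f(pattern, loc, mem));
  return 0;
}

To turn this into the full attack you would check afterwards that none of the read locations appeared earlier in loc[] (or just accept the tiny error rate across many attempts) and feed the 2048 logged writes plus the periodic pattern into the final hash, as not_mh does above.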
Frankly, if I were out to make a bad POW that would be easy for me and hard for others, I don't think I would use one as obviously flawed as this one. But at least it makes it cheaper for validating nodes to check!
I haven't looked at any other parts of the BitShares code; I was not especially inspired by this part. But keep in mind: there are other cryptocoin systems which are completely closed, and I couldn't even tell you what laughably bad things they are doing. The developers here should not be penalized for building their tools in the open; not everyone does. They should be lauded for that much, if nothing else.
I honestly hope this small security / cryptography analysis is helpful, and that the price of me using it to blast Invictus Innovations a bit in public wasn't too high. I'd recommend avoiding any crypto more novel than what the Bitcoin ecosystem is already using, at least unless you get a professional cryptographer on your team (who will then also tell you not to use novel crypto).
If you genuinely are interested in making an asset trading system which is complementary to Bitcoin, I'd strongly suggest merge-mining it: you would obtain the protection of the enormous hashpower held by the Bitcoin community that such infrastructure would serve. It doesn't sound fantastic in a VC pitch, but if you don't intend this to be an attack on the Bitcoin ecosystem, you'd enjoy a lot more security that way. I think it's still an open question what the necessary economic incentives are for POW consensus to have lasting security...
There are a lot of people with loud ideas about how they want to change Bitcoin to make it better. Sometimes they get angry that the core developers will not consider their pet modifications. Many of the ideas are simply bad, like this one, and would lead to insecurity or would disrupt the economic promises many users consider immutable. Often the badness in an idea is subtle and takes a lot more work to tease out than this one did, so with limited resources the onus has to be on the proposer to show that their work is necessary, beneficial, and safe. This isn't because the people with the ideas are not smart or good people; it's because ideas in this space are tricky and take a lot more consideration than many realize.