
Topic: The Ethereum Paradox - page 3. (Read 99876 times)

sr. member
Activity: 336
Merit: 265
August 05, 2016, 12:37:53 AM
How I Fixed Satoshi's Design


There was a really good summary of why Casper's planned sharding is flawed.

Apparently everybody is still oblivious to my solution.

If you shard the blockchain, you've still got to verify it. You can't have shards trusting each other, as that breaks Nash equilibrium (there are game theories other than the one that guarantees the security of the long-chain rule).

But if you have every shard verify every other shard, then you don't have sharding any more.

My hypothetical solution is a statistical one (where the economic interests of all shards become intertwined) combined with eventual consistency where it is required to maintain the Nash equilibrium.

SegWit is (in one aspect, though not entirely, as afaik it really just centralizes proof-of-work) analogous to a similar conceptual idea I had thought of and dismissed, because it relies on trusting that the economically impacted parties will verify before eventual consistency is required, not on proof that those parties did verify before it was required. The game theory gets quite complex because there are externalities such as shorting the coin, so it is possible I have made a mistake; we will find out once I publish.
legendary
Activity: 1260
Merit: 1000
August 04, 2016, 09:45:15 PM
Decent post.  Thought it might be a random Bitcoin dev with a throwaway account, but there's lots of history of buttcoin posts and references to Stolfi:

https://www.reddit.com/r/Buttcoin/comments/4vmh35/meth_users_re_assure_each_other_that_etc_the_coin/d5zxr7s?st=irci5tcm&sh=bcbbaec2
sr. member
Activity: 336
Merit: 265
July 11, 2016, 09:41:13 PM
The ASIC will always be at least two orders-of-magnitude more power efficient.

potentially up to two or three orders of magnitude.

potential for two orders of magnitude power efficiency improvement

allowing those to be 3 to 4 orders of magnitude more power efficient on the ASIC.

the ASIC will end up probably at least two orders-of-magnitude more power efficient

another 1/2 to 1 order-of-magnitude advantage on electricity cost

That's like an order of magnitude more orders of magnitude than I imagined!

Seriously though, the large DRAM row size may not be that much of an issue, since
all DRAM cells need refreshing on a frequent basis anyway. It's this refresh requirement
that makes it Dynamic RAM. So I doubt you can make row size much smaller.
And given that Cuckoo Cycle is constrained by memory latency, there's not that much
need to optimize the computation of the siphashes.
I would still say that an all ASIC solution for Cuckoo Cycle is at most one order
of magnitude more energy efficient than a commodity one.

Not ASIC proof, but certainly resistant...

Don't forget the other strategies I had presented for lowering electrical power requirements by coalescing row accesses within a row buffer. It seems likely these can attain at least a one or two order-of-magnitude advantage given sufficient economy-of-scale investment.

Also the cache costs I mentioned in the prior post might be only a fraction, but all these strategies and issues can add up.

Plus at least a half order-of-magnitude cheaper electricity for the ASIC farm, which can be located next to hydropower, and this is before including the efficiency losses from charging the mobile phone's battery.

Additionally it might be possible to eliminate the refresh of the DRAM if we know statistically, due to the nature of the algorithm, that each row will be accessed within the refresh interval. I suppose this still requires the sense amplifiers to recharge the capacitors, but I don't understand why this can't be scaled down to smaller row buffer sizes. I think the large row buffer sizes are meant to maximize density, which also lowers power consumption, so I assume there is some optimum tradeoff at a smaller row buffer size for this specific algorithm. (Readers should note that Cuckoo purposely wastes much energy by randomly accessing only two bits out of the 16384 bytes per row, so as to be bound by latency rather than memory bandwidth or computation, and so that hash computation does not consume the majority of the electrical power; but this leaves the power consumption optimizable by the ASIC.)

I guesstimated that at best I could hope for between one and two orders-of-magnitude comparing Cuckoo-variant mobile mining to ASIC farms, and the killer observation is that we can at best expect to draw less than say 1 Watt-hour per day per mobile device (presuming they mine only when spending microtransactions), so even with a billion users that is only roughly 1 - 10 megawatts that the ASIC farms need to overpower in my unprofitable proof-of-work design. So mobile mining of the proof-of-work is insecure.

I have a solution to this within my block chain design, but I don't see how ASIC resistance can ever be viable for profitable proof-of-work (Satoshi's design), where the marginal mobile miners would at best have an electricity cost 30 - 100 times greater than the ASIC farm's; thus I don't see how they would be incentivized to mine for profit when not required to, i.e. when on the charger.

It is possible that the Cuckoo proof-of-work algorithm has some use case, but I don't see mobile mining as viable, which I think I read was one of your goals or inspirations. Of course I am not gloating about that. I wish it weren't so.

P.S. readers, I have not detailed why Monero's CryptoNight proof-of-work hash and Zcash's Equihash would also suffer a similar (most likely worse, approaching 3 orders-of-magnitude) lack of ASIC resistance, but suffice it to say there is no way to avoid the fact that a general purpose computer on household electricity costs can't be within an order-of-magnitude of power cost efficiency compared to ASIC mining farms. The details could be investigated and enumerated, but this isn't necessary. It would be a waste of my time.
legendary
Activity: 990
Merit: 1108
July 11, 2016, 02:48:10 PM
The ASIC will always be at least two orders-of-magnitude more power efficient.

potentially up to two or three orders of magnitude.

potential for two orders of magnitude power efficiency improvement

allowing those to be 3 to 4 orders of magnitude more power efficient on the ASIC.

the ASIC will end up probably at least two orders-of-magnitude more power efficient

another 1/2 to 1 order-of-magnitude advantage on electricity cost

That's like an order of magnitude more orders of magnitude than I imagined!

Seriously though, the large DRAM row size may not be that much of an issue, since
all DRAM cells need refreshing on a frequent basis anyway. It's this refresh requirement
that makes it Dynamic RAM. So I doubt you can make row size much smaller.
And given that Cuckoo Cycle is constrained by memory latency, there's not that much
need to optimize the computation of the siphashes.
I would still say that an all ASIC solution for Cuckoo Cycle is at most one order
of magnitude more energy efficient than a commodity one.

Not ASIC proof, but certainly resistant...
sr. member
Activity: 335
Merit: 250
July 11, 2016, 08:51:42 AM
I have heard that china is going to make some miners for eth ,
its price is also increasing and thats good to see mining for eth is not a bad idea.

Where is the source? The Ethereum price is going down at the moment, it is not profitable to add ASIC for eth.
ETH price up 9% last 24 hours. DAO price up 15%. The hodlers will be rewarded in this life.

Delusional.
sr. member
Activity: 336
Merit: 265
July 11, 2016, 08:02:55 AM
I am following up on the discussion that tromp and I had upthread about whether his Cuckoo proof-of-work is ASIC resistant, which began with my post about Ethereum's Dagger proof-of-work.

For all the following reasons, I have concluded that ASIC resistant proof-of-work algorithms can't exist. This applies to CryptoNight (Monero's proof-of-work hash) and Zcash's Equihash. The CPU simply isn't designed to do any one algorithm most efficiently; it makes tradeoffs in order to be adept at generalized computation. The ASIC will always be at least two orders-of-magnitude more power efficient.

In that prior discussion with tromp, I had suggested some strategies of using special hardware threads on the ASIC to coalesce memory accesses in the same memory row and/or queuing hash computations that didn't coalesce to attain greater power efficiency on the ASIC, potentially up to two or three orders of magnitude.

I have also realized that it might be possible to design DRAM circuits with much smaller row banks, which also has the potential for a two orders-of-magnitude power efficiency improvement for this specific Cuckoo algorithm. General computing DRAM must have large row buffer sizes so as to exploit memory locality, so it wouldn't be possible to put these specialized DRAMs in general purpose computers.

As these DRAM power costs are shrunk, the power cost of the hash computations becomes a greater proportion, allowing those to be 3 to 4 orders of magnitude more power efficient on the ASIC.

Even if the Cuckoo algorithm is shrunk to operate in L1 cache, and even after I converted it to use up to 32 (16-bit) buckets per slot and to track all the cycles and find multiple copies of cycles, so that I could force consumption of an entire SRAM cache line per random access, the ASIC will still probably end up at least two orders-of-magnitude more power efficient, because for example N-way caches (required for general computation) are not as power efficient as direct-mapped caches (which are all that is required in this case).

This is not to mention that the ASIC mining farm has another 1/2 to 1 order-of-magnitude advantage on electricity cost compared to the CPU/GPU marginal miners.

Additionally:

1. Accessing a memory set that is larger than the cache(s) incurs the power consumption cost of loading the cache(s) and resetting the pipeline on cache misses[1]. N-way set associative L1 caches typically read in parallel the SRAM rows for all N ways, before the cache miss is detected[2]. On systems with inclusive L2 and L3 caches such as Intel, these are also always loaded. Since each SRAM access incurs 10X power consumption of DRAM (albeit 10X faster) and albeit that the DRAM row size is typically 128X greater[3], the baseline cache power consumption can become significant relative to that of the DRAM; and especially if our algorithm employs locality to amortize DRAM row loading over many accesses (in excess of the locality of the SRAM row size). An ASIC/FPGA could be designed to eliminate this baseline cache power consumption.

2. Due to the aforementioned N-way power consumption overhead, Android's 2-way L1 cache is more power efficient than Haswell's 8-way[4], and both have the same latency.


[1] https://lwn.net/Articles/252125/
[2] https://en.wikipedia.org/wiki/CPU_cache#Implementation
    http://personal.denison.edu/~bressoud/cs281-s10/Supplements/caching2.pdf#page=4
[3] http://www.futurechips.org/chip-design-for-all/what-every-programmer-should-know-about-the-memory-system.html
[4] http://www.7-cpu.com/cpu/Cortex-A15.html
    https://en.wikipedia.org/wiki/ARM_Cortex-A57#Overview
    http://www.7-cpu.com/cpu/Haswell.html

The following is sloppy code I just hacked quickly to do the test I needed.

Code: (cuckoo.c)
// Cuckoo Cycle, an attempt for an ASIC-resistant proof-of-work
// a derivative of the original by John Tromp¹

#include "cuckoo.h"
#include <stdio.h>
#include <stdbool.h>
#include <assert.h>
#include <time.h>
// algorithm parameters
#define MAXBUFLEN PROOFSIZE*BUCKETS

// used to simplify nonce recovery
int cuckoo[1+SIZE][BUCKETS]; // global; conveniently initialized to zero
uint8_t cycle[1+SIZE];
int nonce, reads=0, len, cycled, count=0;
u16 node, path[PROOFSIZE], buckets[MAXBUFLEN];

bool recordCycle(const int pi) {
//  cycled = true;
  len = pi;
  printf("% 4d-cycle found at %d%%\n", len, (int)(nonce*100L/EASYNESS));
  return pi == PROOFSIZE && ++count > 256;
}

// Returns whether all cycles found.
bool walkOdd(const int pi, int bi);
bool walkEven(const int pi, int bi) {
  // End of path?
  if ((reads++, cuckoo[node][1]) == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[bi+i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  int j = bi;
  bi += i;
  for (; j < bi; j++) {
    node = buckets[j];
    // Repeating a cycle or reversing the path?
    int k = pi - 2;
    while (node != path[k] && --k >= 0);
    if (k > 0)
      continue;
    // Cycle found? (only in WalkEven() bcz bipartite graphs only contain even length cycles, bcz there are no edges U <--> U nor V <--> V)
    if (k == 0) {
      // All cycles found?
      if (recordCycle(pi))
        return true;
    }
    path[pi] = node;
    if (walkOdd(pi+1, bi))
      return true;
  }
  return false;
}

bool walkOdd(const int pi, int bi) {
  // Path would exceed maximum cycle length?
  if (pi > PROOFSIZE)
    return false;
  // End of path?
  if ((reads++, cuckoo[node][1]) == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[bi+i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  int j = bi;
  bi += i;
  for (; j < bi; j++) {
    node = buckets[j];
    // Repeating a cycle or reversing the path?
    int k = pi - 2;
    while (node != path[k] && --k >= 0);
    if (k > 0)
      continue;
    path[pi] = node;
    if (walkEven(pi+1, bi))
      return true;
  }
  return false;
}

bool walkThird(const int pi, int bi) {
  // End of path?
  if ((reads++, cuckoo[node][1]) == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[bi+i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  int j = bi;
  bi += i;
  for (; j < bi; j++) {
    node = buckets[j];
    // Repeating a cycle or reversing the path?
    int k = pi - 2;
    while (node != path[k] && --k >= 0);
    if (k > 0)
      continue;
    path[pi] = node;
    if (walkEven(pi+1, bi))
      return true;
  }
  return false;
}

bool walkFirstTwo() {
  // End of path?
  if (cuckoo[node][1] == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  for (int j = 0; j < i; j++) {
    node = buckets[j];
    // Reversing the path?
    if (node == path[0])
      continue;
    path[2] = node;
    if (walkThird(2+1, i))
      return true;
  }
  return false;
}

int main(int argc, char **argv) {
  clock_t start = clock(), diff;
  assert(SIZE <= (1 << 14)); // 32KB L1 cache of u16 elements (note u16 allows up to 64K)
  assert(PROOFSIZE > 2);// c.f. `walkFirstTwo()`
  assert(BUCKETS > 1);  // c.f. `(reads++, cuckoo[node][1]) == 0`
  char *header = argc >= 2 ? argv[1] : "";
  setheader(header);
  printf("Looking for %d-cycle on cuckoo%d%d(\"%s\") with %d edges and %d buckets\n",
               PROOFSIZE, SIZEMULT, SIZESHIFT, header, EASYNESS, BUCKETS);
  int j, u, v, hashes=0, writes=0, maxes=0, max=0;
  for (nonce = 0; nonce < EASYNESS; nonce++) {
    sipedge(nonce, &u, &v); hashes++;
    // Edge already exists?
    reads++;
    int i = 0;
    while (cuckoo[u][i] != v && cuckoo[u][i] != 0 && ++i < BUCKETS);
    if (i < BUCKETS && cuckoo[u][i] == v)
      continue; // ignore duplicate edges
    // Not enough buckets?
      if (i == BUCKETS) {
        if (i == max) maxes++;
        else {
          max = i;
          maxes = 1;
        }
        continue;
      }
      else if (i <= max) {
        if (i == max) maxes++;
      }
      else {
        max = i;
        maxes = 1;
      }
    reads++;
    for (j = 0; cuckoo[v][j] != 0 && ++j < BUCKETS;) {}
      if (j == BUCKETS) {
        if (j == max) maxes++;
        else {
          max = j;
          maxes = 1;
        }
        continue;
      }
      else if (j <= max) {
        if (j == max) maxes++;
      }
      else {
        max = j;
        maxes = 1;
      }
    // Add new edge
    cuckoo[u][i] = v; writes++;
    cuckoo[v][j] = u; writes++;
    cycled = false;
    // Search for cycles?
    if (i > 0 && j > 0) {
      path[0] = u;
      path[1] = v;
      node = v;
      // Found?
      if (walkFirstTwo()) {
        int pi = len;
        // Mark the cycle edges
        while (--pi >= 0)
          cycle[path[pi]] = true;
        // Enumerate the nonces for the marked cycle edge
        for (; len; nonce--) {
          sipedge(nonce, &u, &v); hashes++;
          if (cycle[u] && cycle[v])
            printf("%2d %08x (%d,%d)\n", --len, nonce, u, v);
        }
        break;
      }
      if (cycled) {
        cuckoo[u][i] = 0; writes++;
        cuckoo[v][j] = 0; writes++;
      }
    }
  }
  printf("Hashes: %d\n Reads: %d\nWrites: %d\n Maxes: %d\n   Max: %d\n", hashes, reads, writes, maxes, max);
  diff = clock() - start;
  int msec = diff * 1000 / CLOCKS_PER_SEC;
  printf("Time taken %d seconds %d milliseconds\n", msec/1000, msec%1000);
  return 0;
}

// ¹https://bitcointalksearch.org/topic/m.15111595

Code: (cuckoo.h)
// Cuckoo Cycle, a memory-hard proof-of-work
// Copyright (c) 2013-2014 John Tromp

#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
void SHA256(const unsigned char *header, size_t len, unsigned char hash[32]);

// proof-of-work parameters
#ifndef SIZEMULT
#define SIZEMULT 1
#endif
#ifndef SIZESHIFT
#define SIZESHIFT 14
#endif
#ifndef EASYNESS
#define EASYNESS (SIZE*16) // controls probability of finding a cycle before completion
#endif
#ifndef PROOFSIZE
#define PROOFSIZE 6
#endif
#ifndef BUCKETS
#define BUCKETS 10
#endif

#define SIZE (SIZEMULT*(1<<SIZESHIFT))
// relatively prime partition sizes
#define PARTU (SIZE/2+1)
#define PARTV (SIZE/2-1)
// Otherwise if (d=gcd(U,V)) > 1, multiples of d are mirrored
// in both partitions, and SIZE/2 effectively shrinks to SIZE/(2*d); because given
// hash(k)=d*hash'(k), U=d*U', and V=d*V', thus d*hash'(k) mod d*U' =
// hash'(k) mod U' and d*hash'(k) mod d*V' = hash'(k) mod V':
//   http://pub.gajendra.net/2012/09/notes_on_collisions_in_a_common_string_hashing_function
//   http://stackoverflow.com/questions/25830215/how-does-double-hashing-work#comment40410794_25830215

typedef uint64_t u64;

#define ROTL(x,b) (u64)( ((x) << (b)) | ( (x) >> (64 - (b))) )

#define SIPROUND \
  do { \
    v0 += v1; v1=ROTL(v1,13); v1 ^= v0; v0=ROTL(v0,32); \
    v2 += v3; v3=ROTL(v3,16); v3 ^= v2; \
    v0 += v3; v3=ROTL(v3,21); v3 ^= v0; \
    v2 += v1; v1=ROTL(v1,17); v1 ^= v2; v2=ROTL(v2,32); \
  } while(0)

// SipHash-2-4 specialized to precomputed key and 4 byte nonces
u64 siphash24( int nonce, u64 v0, u64 v1, u64 v2, u64 v3) {
  u64 b = ( ( u64 )4 ) << 56 | nonce;
  v3 ^= b;
  SIPROUND; SIPROUND;
  v0 ^= b;
  v2 ^= 0xff;
  SIPROUND; SIPROUND; SIPROUND; SIPROUND;
  return v0 ^ v1 ^ v2 ^ v3;
}

u64 v0 = 0x736f6d6570736575ULL, v1 = 0x646f72616e646f6dULL,
    v2 = 0x6c7967656e657261ULL, v3 = 0x7465646279746573ULL;

#define U8TO64_LE(p) \
  (((u64)((p)[0])      ) | ((u64)((p)[1]) << 8) | \
   ((u64)((p)[2]) << 16) | ((u64)((p)[3]) << 24) | \
   ((u64)((p)[4]) << 32) | ((u64)((p)[5]) << 40) | \
   ((u64)((p)[6]) << 48) | ((u64)((p)[7]) << 56))

// derive siphash key from header
void setheader(const char *header) {
  unsigned char hdrkey[32];
  SHA256((unsigned char *)header, strlen(header), hdrkey);
  u64 k0 = U8TO64_LE( hdrkey ); u64 k1 = U8TO64_LE( hdrkey + 8 );
  v3 ^= k1; v2 ^= k0; v1 ^= k1; v0 ^= k0;
}


// generate edge in cuckoo graph
typedef uint16_t u16;
void sipedge(int nonce, u16 *pu, u16 *pv) {
  u64 sip = siphash24(nonce, v0, v1, v2, v3);
  *pu = 1 + (u16)(sip % PARTU); // "1 +" bcz 0 is the "no edge" ("empty cell or ⊥"¹⁰) value
  *pv = 1 + PARTU + (u16)(sip % PARTV);
}
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
June 21, 2016, 08:24:22 PM
By analogously are you saying proportionally?
sr. member
Activity: 336
Merit: 265
June 21, 2016, 08:00:32 PM
Here is a very good explanation of what I was explaining upthread about how sharding the validation of long-running scripts breaks Nash equilibrium; this paper explains that even without sharding, the validation of scripts on Ethereum can break Nash equilibrium:

https://arxiv.org/pdf/1606.05917v1.pdf#page=2

I found that here:

https://chriseth.github.io/notes/talks/truebit/#/1


Tangentially, the salient quote from that first white paper:

Consensus in Bitcoin, called Nakamoto consensus [13], achieves
low communication complexity in exchange for an extremely high
amount of local work from miners. In the other extreme,    <--- Satoshi's design requires every full node to verify every transaction
Byzantine consensus [3] avoids local work, but requires a   <--- Byzantine consensus is entirely via message passing
considerable amount of message passing among parties.
Our verification game below requires nontrivial interaction
and some local work, but not nearly as much of either as
Byzantine or Nakamoto consensus respectively.

Note that the paper can only be applicable to smart contracts which are puzzles that can be validated in parts, as you can see clearly from the examples, such as a challenge and response about the members of an intersection of two sets. That doesn't seem to be applicable to Turing-complete code.

Also I had rejected a challenge-response design, because if the assumption that at least one honest node will verify and issue the challenge fails, then the block chain is fucked in some indeterminate state where Nash equilibrium is broken. That is too risky, and frankly this is the direction SegWit is headed (and I guess they depend on centralization of mining to protect them).

But if you think about it, code execution can also be validated in parts. And then there is a statistical solution for validation, which is analogously as strong an assurance of correctness as the reliance that a 51% attack is improbable. That is my design. Now watch somebody go claim it as their invention after reading this.
legendary
Activity: 2184
Merit: 1024
Vave.com - Crypto Casino
June 21, 2016, 09:12:11 AM
I have heard that china is going to make some miners for eth ,
its price is also increasing and thats good to see mining for eth is not a bad idea.

Where is the source? The Ethereum price is going down at the moment, it is not profitable to add ASIC for eth.
ETH price up 9% last 24 hours. DAO price up 15%. The hodlers will be rewarded in this life.
hero member
Activity: 546
Merit: 500
June 21, 2016, 08:54:27 AM
I have heard that china is going to make some miners for eth ,
its price is also increasing and thats good to see mining for eth is not a bad idea.

Pretty sure it's been stated that if an ASIC appears they will change it up to kill it.

That is good. I think they should change the proof-of-work parameters to kill any possibility of ASICs. That will deter it.
brand new
Activity: 0
Merit: 0
June 20, 2016, 03:43:23 PM
I have heard that china is going to make some miners for eth ,
its price is also increasing and thats good to see mining for eth is not a bad idea.
hero member
Activity: 868
Merit: 1000
June 21, 2016, 08:41:43 AM
I have heard that china is going to make some miners for eth ,
its price is also increasing and thats good to see mining for eth is not a bad idea.

Pretty sure it's been stated that if an ASIC appears they will change it up to kill it.
hero member
Activity: 546
Merit: 500
June 21, 2016, 08:40:23 AM
I have heard that china is going to make some miners for eth ,
its price is also increasing and thats good to see mining for eth is not a bad idea.

Where is the source? The Ethereum price is going down at the moment, it is not profitable to add ASIC for eth.
legendary
Activity: 994
Merit: 1035
June 20, 2016, 03:17:28 PM
So your argument is that everyone who invested in ETH understands Git. I think you are the moron.

I would suggest the opposite. Most investors in Ethereum are likely unfamiliar with Git, which is why they were probabilistically more likely to invest in the first place. I am suggesting that in a courtroom situation where technical defense experts give an ELI5 of Git, they can clearly show there was no attempt at malice or confusion on Eric's part.

can be construed as fraudulent activity

Of course anything can be misunderstood by an idiot or the uninformed. All it takes is a very basic understanding of Git to realize that there was no such intention.


http://www.legalmatch.com/law-library/article/modifying-a-contract.html

Since when is an open source repo that anyone can contribute to and no user needs to sign to use or fork considered a contract?
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
June 20, 2016, 03:08:29 PM
So your argument is that everyone who invested in ETH understands Git. I think you are the moron.

I would suggest the opposite. Most investors in Ethereum are likely unfamiliar with Git, which is why they were probabilistically more likely to invest in the first place. I am suggesting that in a courtroom situation where technical defense experts give an ELI5 of Git, they can clearly show there was no attempt at malice or confusion on Eric's part.

can be construed as fraudulent activity

Of course anything can be misunderstood by an idiot or the uninformed. All it takes is a very basic understanding of Git to realize that there was no such intention.


http://www.legalmatch.com/law-library/article/modifying-a-contract.html
hv_
legendary
Activity: 2520
Merit: 1055
Clean Code and Scale
June 20, 2016, 02:46:27 PM
The ETH hypers thought Turing completeness would send ETH's value to the moon.

It turns out the attack universe went to the moon instead...
member
Activity: 85
Merit: 10
June 20, 2016, 02:44:10 PM
inside job makes the most sense

who would know the code well enough to pull this off?

are we supposed to believe some random was just auditing their code in sufficient detail to pull this off? no one talented enough to pull this off would waste their time doing that.
legendary
Activity: 994
Merit: 1035
June 20, 2016, 02:38:50 PM
So your argument is that everyone who invested in ETH understands Git. I think you are the moron.

I would suggest the opposite. Most investors in Ethereum are likely unfamiliar with Git, which is why they were probabilistically more likely to invest in the first place. I am suggesting that in a courtroom situation where technical defense experts give an ELI5 of Git, they can clearly show there was no attempt at malice or confusion on Eric's part.

can be construed as fraudulent activity

Of course anything can be misunderstood by an idiot or the uninformed. All it takes is a very basic understanding of Git to realize that there was no such intention.
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
June 20, 2016, 02:35:29 PM
Without citing that this is a change, it is assumed that it has always been worded as such; not everyone knows about the workings of Git. This change can only be held to from the change date forward, and is possibly not applicable to those that already hold "Gas".

This was a "merged" commit to change the metadescription.

No, one would not assume it had always been worded as such.

I take it that you are completely unfamiliar with github and the way it keeps a timeline of all changes?

One of the primary raison d'être of Github it to maintain these history of changes.

So your argument is that everyone who invested in ETH understands Git. I think you are the moron.
legendary
Activity: 994
Merit: 1035
June 20, 2016, 02:27:16 PM
Without citing that this is a change, it is assumed that it has always been worded as such; not everyone knows about the workings of Git. This change can only be held to from the change date forward, and is possibly not applicable to those that already hold "Gas".

This was a "merged" commit to change the metadescription.

No, one would not assume it had always been worded as such.

I take it that you are completely unfamiliar with github and the way it keeps a timeline of all changes?

One of the primary raisons d'être of GitHub is to maintain this history of changes.