
Topic: CPU friendly Altcoin in development - page 3. (Read 8218 times)

legendary
Activity: 1428
Merit: 1030
July 31, 2013, 11:37:00 PM
#49

Initial distribution of coins aside then however that gets accomplished... is there any big reason for all this anti-GPU, anti-FPGA, anti-ASIC stuff purely for the ongoing securing of a network / blockchain / ledger / transaction-set ?


Ongoing distribution is a feature too . . in 5 years' time, will a user be able to get some of your altcoin by joining a pool and converting electricity into altcoin, or will he need to buy it from a third party? By ensuring CPU generation stays efficient, we ensure the user can either generate altcoin for himself or buy it from a local third party who can.
legendary
Activity: 2940
Merit: 1090
July 31, 2013, 11:03:15 PM
#48
I have followed the whole CPU vs GPU debate since it started, and it has become increasingly clear to me that we really should keep securing the network separate in our minds from the initial distribution of coins.

Is there any big reason to force the ongoing securing of the network to have to be done by inefficient hardware?

Or would it be fine to leave that to the most secure and efficient method a given point in history allows?

(History of the coin itself, not just the field in general; for example, even if methods different from Bitcoin's are more efficient, changing Bitcoin to use a different method at this point in its history is more than just a technical problem of upgrading to the latest technology.)

Suppose for example that Ripple's consensus method turns out to be both effective and efficient and its source code is released or a free open source implementation is built to circumvent its failure to release its source code.

Would it not be ideal to use such an approach to securing the network, leaving us with only the question of how to initially distribute its built-in ("native") currency?

Or would that make the problem of who even gets to be a node that actually matters too political or something?

Another possibly useful thought-experiment is to imagine we already have a fine and dandy initial distribution of the currency ready to go and are only worrying now about how nodes get to be nodes that actually matter. For this thought experiment, how the initial distribution happens can be a black box, magic; how it works need not be part of our concern. Although I expect many people have various ideas how it might work. Some ideas I have I do not even want to mention before using them, since most such ideas can be gamed, so part of coming up with them is thinking of something there was hitherto no good motive to game, and thus that is unlikely to have already been gamed in anticipation of being used.

(Forum accounts, email addresses, and IP addresses, for example, people already have motive to have gamed / sockpuppeted, and the same goes for Facebook accounts and such. But things based on blockchain activities that cost fees would have cost people money to game ahead of time for years; would anyone have bothered if they weren't sure their expensive gaming would be rewarded someday by the launch of a coin that issues its new coins to people who had performed those precise actions on that particular already-existing blockchain?)

Initial distribution of coins aside then however that gets accomplished... is there any big reason for all this anti-GPU, anti-FPGA, anti-ASIC stuff purely for the ongoing securing of a network / blockchain / ledger / transaction-set ?

-MarkM-
legendary
Activity: 1428
Merit: 1030
July 31, 2013, 08:14:37 PM
#47
I recall seeing a coin launched within the last two weeks that claimed to use 6 different hashing functions.  What was the problem with that one?

That was Quarkcoin, and I quite like that one. There's also Primecoin which I think might turn out to be more GPU resistant than some had imagined.
sr. member
Activity: 328
Merit: 250
July 31, 2013, 05:28:47 PM
#46
In case anyone is curious, here is the thread with the speculation that Coblee and Artforz knew litecoin was not GPU hard and were mining it with GPUs from the start.

https://bitcointalksearch.org/topic/artforz-and-coblee-gpu-mining-litecoin-since-the-start-63365

I think it is funny that now ignorant litecoiners are the ones saying "cpu coins will get ruined by botnets", when the whole point of litecoin was to be a cpu coin in the first place.

Here is another thread where Coblee suggests changing litecoin's proof-of-work to something more GPU hard after a GPU miner was proven to exist.  TacoTime suggests a radix sort at the end of the thread, and then nothing was done to change it.

https://bitcointalksearch.org/topic/thread-about-gpu-mining-and-litecoin-64239

Most litecoiners have no idea this happened and continue to promote a coin that already has 19 million coins mined in a way that is most likely a massive deception.

There is definitely some room for a CPU coin.  I would recommend signing up a few merchants and coming up with a method to pay for stuff like electrum support, phone wallet support, and exchange support before launching it.

I recall seeing a coin launched within the last two weeks that claimed to use 6 different hashing functions.  What was the problem with that one?
legendary
Activity: 2674
Merit: 2965
Terminated.
July 31, 2013, 03:19:20 PM
#45
I like CPU friendly coins  Cheesy Cheesy Wink
legendary
Activity: 1428
Merit: 1030
July 31, 2013, 10:35:17 AM
#44
Seth - thanks for compiling all that history and summarizing the position on this with the different altcoins. Here's my take . .

>Artforz said in one of his last posts on this forum that the verification time is the reason you can't jack up the n values.

This is true to an extent . . . but Litecoin's N values could have been a lot higher before that became an issue.

If you look at the following chart -
http://yacexplorer.tk/graphs.htm

Litecoin has an Nfactor of around 8 . . . you could go right up to 17/18 before you see a problem with verification times. Most CPU users can verify those blocks faster than they can download them.

>The guys who maintain YACoin disagree with artforz and tacotime's position.  They claim that verification time is not a problem based on their testing.

It may not be a problem, but that will come down to their timeframe rather than their testing. They're expecting faster CPUs and memory to reduce the time as the NFactor increases. NFactor 20 takes 0.5 seconds now; who knows what that will be in 2020. But Yacoin was handed to the GPU miners, so I don't see that it ever really tried to be GPU resistant. It would have been interesting if it had started with an NFactor of 15 or 16. As it is, it'll take a year to get to that point.




hero member
Activity: 518
Merit: 500
Hodl!
July 31, 2013, 10:16:11 AM
#43
Just a driveby thought.

I am under the impression that the vast majority of botnet machines are 32-bit Windows XP.

Therefore, make it 64-bit only, or make the minimum memory requirement 3.5 GB, or wherever the cutoff is for what XP can handle.
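A rough sketch of what that gate might look like at miner startup (my own illustration; the 3.5 GB figure is roughly the usable-RAM ceiling of 32-bit XP, and nothing here is from any actual miner):

Code:
import ctypes, platform, sys

MIN_SCRATCHPAD = int(3.5 * 1024**3)   # bytes; deliberately larger than a
                                      # 32-bit process can ever address

def require_64bit():
    # A 32-bit XP bot cannot map a 3.5 GB scratchpad at all, so refusing
    # to start on 32-bit builds shuts out that whole class of machines.
    if platform.architecture()[0] != '64bit':
        sys.exit("64-bit only: scratchpad exceeds a 32-bit address space")

def allocate_scratchpad():
    # Allocate the whole scratchpad up front; a machine without the RAM
    # fails here instead of quietly thrashing swap.
    return (ctypes.c_char * MIN_SCRATCHPAD)()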
sr. member
Activity: 328
Merit: 250
July 31, 2013, 10:07:53 AM
#42

Regardless of what innovation you develop or how cool this "CPU Friendly" coin will be, until you solve the botnet issue, there will never be a successful CPU-only crypto coin.

That should be the first priority.


Just an opinion.

~BCX~

Both Litecoin and Bitcoin have had large botnets mining them early in their histories, and it hasn't been any huge security issue as far as I remember. The legitimate miners eventually outnumber the botnets. I will agree that if anything can reasonably be done to prevent them from mining, it should be done, because the whole point of a CPU coin is to get a better initial distribution of the currency, and having tons of CPUs all linked to one person is bad for that.
sr. member
Activity: 328
Merit: 250
July 31, 2013, 10:01:45 AM
#41
You wouldn't need 6GB. The latency of main memory on a GPU is horribly bad, so bad that any process which needs random access to GPU main memory will be annihilated by a CPU in terms of performance. GPU memory is designed to stream textures, and as such it couples massive bandwidth with extreme latency. Scrypt was designed to fill the L3 cache of a CPU. The developers of alt-coins had to intentionally lower the memory requirements by 99% to make GPUs competitive. Yes, Litecoin and clones use about 128KB of cache. The MINIMUM memory requirement for Scrypt is 12MB; it doesn't take 16GB. Try it out yourself, or check the various hacking forums: the OpenCL performance for Scrypt (2^14, 8, 1) is beyond pathetic. A cheap CPU will run circles around it.
This begs the question: Was this known to the devs of Litecoin and/or Tenebrix? I mean, why else did they intentionally lower the memory requirements? Big scam after all (hurr durr gpu resistant)?

I have never got a satisfactory answer. I will point out that it was intentional, though. The default parameters for Scrypt are (N=2^14, r=8, p=1); the parameters used by Litecoin are (N=2^10, r=1, p=1).

I am not sure if it was a scam, but the end result is the same: Litecoin is 99% less memory hard than default Scrypt, and about 1/7000th as memory hard as the parameters recommended by the author for high-security (non-realtime) applications.

Why not use Scrypt as intended? Scrypt with default variables has beyond horrible performance on GPUs. Litecoin's developers modified it to make it roughly 128x less memory hard (using only 128KB total).

I'm working on this too, and the problem I'm anticipating is the time it takes to verify the hash. The performance I'm seeing running scrypt to require 256MB of memory is a hash time of at least 0.5 seconds . . . the time increasing linearly with the memory required . . that's not so bad for mining, where you can just use a low difficulty, but it creates a problem for clients verifying the block chain - it makes verification slower, and could start to bite as the block chain gets longer.





Artforz said in one of his last posts on this forum that the verification time is the reason you can't jack up the n values.

So, the algorithm is fine as it is.  If you increase the amount of memory required, you end up with a GPU-favoured implementation of scrypt.

I don't understand this line, but the rest of your post is welcome commentary that I do intend to provide counter-arguments for.

I would assume that the more memory required, the *less* feasible GPU mining becomes. For instance, you could (if artforz released the code) mine scrypt coins with a GPU, but it would be so inefficient that you might as well just mine them with the CPU. My understanding is that increasing the amount of memory required further would make GPUs even more pitiful. If you kept increasing the memory required, CPUs would decrease in hash power too. Some CPUs with smaller and/or slower caches (or inefficient cache usage) would fail to keep up. This would push innovation to improve memory management in CPUs, as people try to design CPUs that address large caches faster or make more efficient use of L2 and L3 cache.

We would first see more efficient mining software, just as people keep improving the existing scrypt miners, but ultimately we would be pushing for CPUs that are continuously improving at memory-hard math.
Although you argue it is difficult to make large amounts of cache easy to address, there is room for competition and innovation in this area as people push the boundaries of what is possible with the CPU.

Yes, it sounds like a lot of very difficult work, I agree, but that's the whole idea. It is a speculation market for emerging CPU technology.

Short version: compared to (1024,1,1) increasing N and r actually helps GPUs and hurts CPUs.
Longer version:
While things are small enough to fit in L2, each CPU core can act mostly independently and has pretty large read/write bandwidth; make it big enough to hit external memory and you've got ~15GB/s shared between all cores.
Meanwhile, GPU caches are too small to be of much use, so... with random reads at 128B/item, a 256-bit GDDR5 bus ends up well below 20% of peak bandwidth; at 1024B/item that percentage increases very significantly.
End result: a 5870 ends up about 6 times as fast as a PhenomII for scrypt(8192,8,1) (without really trying to optimize either side, so YMMV).
The only way to make scrypt win on CPU-vs-GPU again would be to go WAAAY bigger, think > 128MB V array, so you don't have enough RAM on GPUs to run enough parallel instances to mask latencies... but that also means it's REALLY slow (hash/sec? sec/hash!) and you need the same amount of memory to check results... Now who wants a *coin where a normal node needs several seconds and 100s of megs to gigs of RAM just to check a block PoW for validity?
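To put rough numbers on ArtForz's argument (my own back-of-the-envelope: scrypt's scratchpad, the V array, is 128 * N * r bytes, and the 3 GB card is hypothetical):

Code:
# scrypt's scratchpad ("V array") is 128 * N * r bytes per hash instance
def scratchpad(N, r):
    return 128 * N * r

litecoin = scratchpad(1024, 1)    # 131,072 B = 128 KB: fits in L2 cache
artforz  = scratchpad(8192, 8)    # 8,388,608 B = 8 MB: spills to RAM
huge     = scratchpad(2**20, 8)   # 1 GB: "sec/hash" territory

# A GPU hides its memory latency by keeping thousands of hashes in
# flight; at 8 MB per instance a 3 GB card fits only a few hundred:
instances = 3 * 2**30 // artforz  # 384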

The guys who maintain YACoin disagree with artforz and tacotime's position.  They claim that verification time is not a problem based on their testing.

Sorry if this has been answered before, but I just found out about YACoin and I don't want to read all 170 pages.

Is YACoin just continually raising the N value?  Does this mean it will eventually take a huge amount of time to check a block PoW for validity?  How could this possibly be a good idea?

Probably the YACoin ongoing development thread will give you a better idea while reading much less than 170 pages:
https://bitcointalksearch.org/topic/annyac-yacoin-ongoing-development-206577

My post with my benchmarks for hash rates at various values of N, and when YACoin will switch to those values of N, is in the 15th post:
https://bitcointalksearch.org/topic/m.2162620

I benchmarked with a 4-year-old dual Xeon E5450 server (almost stone-age technology, but similar combined performance to today's i7-2600K). It appears it'll be a few decades before even today's hardware (or hardware from 4 years ago) has a problem with the time needed to validate a block PoW.

As time goes on, the doublings of N become further and further apart. Advances in computing power will rapidly outpace the rising N over the long term.

So I am to understand that when coblee asked for suggestions on how to prevent GPUs from taking over litecoin, all he had to do was change the N value to higher than 8192 and litecoin would have become way more GPU resistant?

Coblee's thread about GPU-mining and Litecoin:
https://bitcointalksearch.org/topic/thread-about-gpu-mining-and-litecoin-64239

Thanks for the reply!  So am I to understand that artforz's analysis is wrong?  I guess that wouldn't be the first time....

The other thread has the majority of the GPU discussion, including benchmarks from mtrlt, the developer of Reaper (the first GPU kernel released for Litecoin, in response to ArtForz's claim that Litecoin was GPU resistant). I disagree with ArtForz's claim that increasing N helps GPUs once both CPUs and GPUs are computing hashes large enough to be pushed to external RAM. I would say ArtForz's analysis was cherry-picked around the specific value of N (8192) where computation gets pushed out of the L2 cache on the AMD Phenom II he was testing with.

Indications, including from mtrlt's benchmarks, are that the performance spread between CPUs and GPUs narrows as N rises, as long as we don't cherry-pick a specific result from a certain value of N on an AMD Phenom II CPU.

Also note that YACoin doesn't use the same scrypt variant as Litecoin. The mixing algorithm is switched from salsa20/8 to chacha20/8, and the hashing algorithm is switched from SHA-256 to Keccak-512. Direct comparisons between hash rates of the two aren't quite going to be apples-to-apples for a given value of N.

Well, the N factor increases the memory requirements for computing a single hash (thus it uses more memory and more memory bandwidth). Current GPUs will quickly run out of memory (or there's some other GPU-specific constraint that prevents the code from running at higher N, dunno). However, it also hits CPUs really hard (around a 40% hashrate decrease, if I remember correctly).

Nah, all you have to do is increase the lookup gap (via the previously published TMTO solution for scrypt from cgminer/reaper) and then you can compute the same hashes with less memory.

There's probably a bug in mtrlt's current code that doesn't allow calculation above N=4096, but it's possible that this particular TMTO implementation just isn't well optimized for the GPU, and that in the future, with some hacking, we'll see the gap widen further.

The further up the N value you go, the greater the dependence on memory access speeds you typically observe (or at least, that's what I observed using scrypt-jane on a CPU). I wouldn't be surprised if eventually an optimized GPU implementation came along that destroyed CPUs for efficiency and speed.

BLAKE is used as an entropy source to randomize memory access too; I wouldn't be surprised if you looked at accesses to the lookup table and found that they end up being less than random as well, due to consistent ordering of some types of data in the block header (thus also diminishing the amount of memory required). I think pooler observed this when he was writing his CPU miner.
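For anyone wondering what the lookup gap actually does, here is a toy Python version of scrypt's ROMix loop with the time-memory tradeoff applied (illustrative only: H stands in for the real mixing function, and nothing here resembles cgminer's actual kernel code):

Code:
import hashlib

def H(x: bytes) -> bytes:
    # stand-in mixer; real scrypt uses salsa20/8 (chacha20/8 in YACoin)
    return hashlib.sha256(x).digest()

def romix_tmto(block: bytes, N: int, gap: int) -> bytes:
    # Fill phase: store only every gap-th V entry, ~gap x less memory.
    V, x = {}, block
    for i in range(N):
        if i % gap == 0:
            V[i] = x
        x = H(x)
    # Mix phase: a missing V[j] is recomputed forward from the nearest
    # stored entry; those extra H calls are the "time" in the tradeoff.
    for _ in range(N):
        j = int.from_bytes(x[:4], 'little') % N
        v = V[j - j % gap]
        for _ in range(j % gap):
            v = H(v)
        x = H(bytes(a ^ b for a, b in zip(x, v)))
    return x

A gap of 2 halves the table at the cost of roughly half an extra H call per lookup on average, which is why it pays off on GPUs, where memory rather than compute is the bottleneck.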

The whole point of trying to make a GPU-hard coin is to get a more even initial coin distribution than bitcoin/litecoin did. The number of people with CPUs is way higher than the number with good GPUs. There is no point in making a new alt-coin that changes the mining algorithm if it doesn't promote a wider distribution by cutting out the GPU farmers. The algorithm doesn't have to last forever; it only has to last a few years until ASICs are developed. Litecoin lost its whole purpose when it was taken over by GPUs. If we knew a way to assign an equal amount of coins to everyone on the planet in a decentralized way, we would do that, but that technology is decades away. Distributing to everyone with a CPU is way less fair, but it is still vastly superior to giving it to everyone with a GPU.

Here are the benchmarks from mtrlt, who was the first to write a GPU miner for litecoin. He switched to mining YACoin because it was more profitable for him. Now he is apparently writing a GPU primecoin miner, but I haven't paid attention for several months.

Here are all my GPU benchmarking results, and also the speed ratio of GPUs and CPUs, for good measure.

GPU: HD6990 underclocked 830/1250 -> 738/1250, undervolted 1.12V -> 0.96V. assuming 320W power usage
CPU: WindMaster's 4 year old dual Xeon, assuming 80W power usage. In reality it's probably more, but newer processors achieve the same performance with less power usage.

Code:
N      GPUspeed    CPUspeed     GPU/CPU power-efficiency ratio
32     10.02 MH/s  358.8 kH/s   6.98
64     6.985 MH/s  279.2 kH/s   6.25
128    3.949 MH/s  194.0 kH/s   5.1
256    2.004 MH/s  119.2 kH/s   4.2
512    1.060 MH/s  66.96 kH/s   3.95
1024   544.2 kH/s  34.80 kH/s   3.9
2048   278.7 kH/s  18.01 kH/s   3.88
4096   98.5 kH/s   9.077 kH/s   2.72
8192+  0 H/s       4.595 kH/s   0

GPUs are getting comparatively slower bit by bit, until (as I've stated in an earlier post) at N=8192, GPU mining seems to break altogether.

EDIT: Replaced GPU/CPU ratio with a more useful power-efficiency ratio.
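To be clear about how that last column is computed: it is the GPU's hashes per second per watt divided by the CPU's, at the stated 320 W and 80 W figures. For the N=1024 row:

Code:
gpu, cpu = 544_200, 34_800        # hashes per second at N=1024
ratio = (gpu / 320) / (cpu / 80)  # -> 3.91, the table's "3.9"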

TacoTime asked if he had played with the lookup gap, and he said he had played with it quite a bit and couldn't get it to mine faster. You can see here that jacking up the N value DOES make GPU mining substantially less effective relative to the CPU, and apparently they are not having problems with verification times. YACoin switches to N=8192 on August 13th. You should probably get WindMaster in here to comment.
member
Activity: 99
Merit: 10
July 31, 2013, 09:22:56 AM
#40
FreeTrade: Thanks for your input; it's nice to see other people working on this too. Perhaps we should exchange our results. I like to do the programming at a very low level for this purpose. While assembler is a little too freaky and time-intensive to code in, we are doing it in very low-level C. E.g., we don't use any strings at all yet, just standard math operations besides the radix sort. The SHA hashing you mention doesn't make sense within the proof-of-work algorithm itself, in my eyes. The created algo has to be as good as the well-known hashing operations in terms of irreversibility and unique hashes anyway, and then nothing else is needed around it. I think our current mechanism is already very good in that respect.
legendary
Activity: 1428
Merit: 1030
July 31, 2013, 05:41:57 AM
#39
You wouldn't need 6GB. The latency of main memory on a GPU is horribly bad, so bad that any process which needs random access to GPU main memory will be annihilated by a CPU in terms of performance. GPU memory is designed to stream textures, and as such it couples massive bandwidth with extreme latency. Scrypt was designed to fill the L3 cache of a CPU. The developers of alt-coins had to intentionally lower the memory requirements by 99% to make GPUs competitive. Yes, Litecoin and clones use about 128KB of cache. The MINIMUM memory requirement for Scrypt is 12MB; it doesn't take 16GB. Try it out yourself, or check the various hacking forums: the OpenCL performance for Scrypt (2^14, 8, 1) is beyond pathetic. A cheap CPU will run circles around it.

Some interesting stats here -
https://github.com/floodyberry/scrypt-jane

If I'm reading these correctly, they confirm my tests that it takes about 2 seconds per hash per GB memory required on a modern processor.
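Anyone wanting to reproduce that figure can use Python's built-in hashlib.scrypt as a quick harness (a minimal sketch; it assumes Python 3.6+ built against OpenSSL with scrypt support, and the inputs are placeholders):

Code:
import hashlib, time

def time_scrypt(n, r=8, p=1):
    mem = 128 * n * r                         # scratchpad in bytes
    t0 = time.perf_counter()
    hashlib.scrypt(b"block header", salt=b"nonce",
                   n=n, r=r, p=p, maxmem=2 * mem)
    return time.perf_counter() - t0, mem

for n in (2**14, 2**17, 2**20):               # 16 MB, 128 MB, 1 GB
    secs, mem = time_scrypt(n)
    print(f"{mem / 2**20:6.0f} MiB  {secs:.2f} s/hash")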

Interested to know what you think would be a good memory requirement/time tradeoff to resist GPUs but maintain a good block validation time.

One could even design the algorithm to become more memory-hard over time (i.e. double the memory requirement every 2 block-years); a toy version of that schedule is sketched below.

Yacoin? Nfactor started a little low though.
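A toy version of that doubling schedule (not YACoin's actual Nfactor code; the 1-minute block interval and starting N are just assumptions):

Code:
def n_for_height(height, start_n=2**14, blocks_per_year=525_600):
    # Double scrypt's N (and with it the 128*N*r memory requirement)
    # every two "block years", assuming 1-minute blocks.
    return start_n << (height // (2 * blocks_per_year))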
legendary
Activity: 1428
Merit: 1030
July 31, 2013, 03:35:58 AM
#38
Do you have any idea what we should change/add to the hashing function?

I'm working on this same problem, and I think you're taking a viable approach. You might want to hash on the way into your function and on the way out, so you don't need to worry about cryptography and can focus on making a GPU-resistant algorithm.

Hash = SHA256(MAKEWORK(SHA256(Block)))

Where MAKEWORK is your GPU-resistant algorithm.
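In sketch form (Python, with makework left as a stub for whatever GPU-resistant transform you come up with):

Code:
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def makework(seed: bytes) -> bytes:
    # stub: the GPU-resistant transform goes here; it only has to be
    # slow and memory-hungry, not cryptographically strong by itself
    return seed

def block_hash(header: bytes) -> bytes:
    # SHA256 on the way in gives makework uniformly random input;
    # SHA256 on the way out restores preimage resistance and a uniform
    # output, so makework can focus purely on GPU resistance.
    return sha256(makework(sha256(header)))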
legendary
Activity: 1428
Merit: 1030
July 31, 2013, 03:28:09 AM
#37
Why not use Scrypt as intended? Scrypt with default variables has beyond horrible performance on GPUs. Litecoin's developers modified it to make it roughly 128x less memory hard (using only 128KB total).

I'm working on this too, and the problem I'm anticipating is the time it takes to verify the hash. The performance I'm seeing running scrypt to require 256MB of memory is a hash time of at least 0.5 seconds . . . the time increasing linearly with the memory required . . that's not so bad for mining, where you can just use a low difficulty, but it creates a problem for clients verifying the block chain - it makes verification slower, and could start to bite as the block chain gets longer.

legendary
Activity: 2940
Merit: 1090
July 31, 2013, 03:26:59 AM
#36
Just one example:

Quote
What happens if say 33% of the nodes saw transaction A, 33% saw transaction B, and 34% saw transaction C first where all 3 conflict?
Then no transaction will get voted in, and the deterministic algorithm will pick only one which will be a candidate for the next ledger.
How does the deterministic algorithm for including transactions in the ledger work?
Basically it applies transactions in an order designed for maximum efficiency until no new transactions get in. Transactions can pass, hard fail, or soft fail. If they pass, they're included. If they hard fail, they're dropped. If they soft fail, they stay as candidates.
Once a transaction gets in, all that conflict with it will hard fail.
The current algorithm is hash order first, with any soft fails repeated in account/sequence order. When no new transactions succeed in a pass, the operation completes.

Ok, so in the event two or more nodes (servers) disagree, consensus will be achieved by a deterministic algorithm. The conflicting transactions will be sorted in hash order and then included in order until a tx can't be included due to a conflict.

Ok, so I am going to send you coins. I generate tx until I make one with a high tx hash. I then double-spend it with a tx to myself that happens to have a low tx hash. If I can ensure that no consensus is achieved (a partial Sybil attack), then the deterministic algorithm will always pick my double-spend over your tx.

As described, this is a massive vulnerability, one sure to be exploited once the server source code is open sourced and anyone can run a server (or 10,000 servers in a botnet). Now maybe the FAQ is just incomplete, but since the server source code is closed, nobody can say for sure how weak this is.

https://ripple.com/wiki/Consensus


You are reading it, or reading into it, differently than I am.

Specifically, you do not mention what I had thought was fundamental: that if two disagree, neither of them gets in, because only consensus gets in.

You seem to assume that being earlier in the queue, whether by high hash or some other ordering / collating sequence, gets you in.

In my concept of the protocol, that only means you have not been disqualified yet; once a conflicting item does get looked at, boom, you and it both get disqualified for not being in consensus with each other.

So if the processing can catch up with the whole queue, it is left with all the current items that do not conflict with each other, plus a bunch of chaff consisting of items that conflict either with history (already done-deal blocks / timeslices) or with other items currently up for consideration.

So to get in ahead of an item that contradicts your item, you have to get the processing node to strike from its trusted-nodes list either your node or the node that sent the conflicting item. It comes down to one of those two nodes lying, or, if both items arrived from the same node, to that node not winnowing out the chaff itself before passing items to me.

Until my user tells me which node to trust over the other, I have to assume both nodes are compromised, or that they are not both in the same actual network of trust.

So unless my user specifically tells me to always trust the Beagle Boys, or at least to always believe them over Scrooge McDuck, I have to just figure both have some problem, in that they seem unable to reach consensus with each other...

As for 10,000-server botnets: if I have told my node that the Beagle Boys and Scrooge McDuck being in consensus is a significant consensus, but I have never even admitted to my node that I know any of those 10,000 bots, they only count at all if they can convince both the Beagle Boys and Uncle Scrooge of their version of reality...

-MarkM-

donator
Activity: 1218
Merit: 1079
Gerald Davis
July 31, 2013, 03:23:57 AM
#35
You wouldn't need 6GB. The latency of main memory on a GPU is horribly bad, so bad that any process which needs random access to GPU main memory will be annihilated by a CPU in terms of performance. GPU memory is designed to stream textures, and as such it couples massive bandwidth with extreme latency. Scrypt was designed to fill the L3 cache of a CPU. The developers of alt-coins had to intentionally lower the memory requirements by 99% to make GPUs competitive. Yes, Litecoin and clones use about 128KB of cache. The MINIMUM memory requirement for Scrypt is 12MB; it doesn't take 16GB. Try it out yourself, or check the various hacking forums: the OpenCL performance for Scrypt (2^14, 8, 1) is beyond pathetic. A cheap CPU will run circles around it.
This begs the question: Was this known to the devs of Litecoin and/or Tenebrix? I mean, why else did they intentionally lower the memory requirements? Big scam after all (hurr durr gpu resistant)?

I have never got a satisfactory answer. I will point out that it was intentional, though. The default parameters for Scrypt are (N=2^14, r=8, p=1); the parameters used by Litecoin are (N=2^10, r=1, p=1).

I am not sure if it was a scam, but the end result is the same: Litecoin is 99% less memory hard than default Scrypt, and about 1/7000th as memory hard as the parameters recommended by the author for high-security (non-realtime) applications.
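The numbers fall straight out of scrypt's scratchpad size, 128 * N * r bytes (the 1 GiB line assumes Percival's recommended N=2^20, r=8 for non-interactive use):

Code:
def scratchpad_bytes(N, r):           # scrypt scratchpad = 128 * N * r
    return 128 * N * r

full = scratchpad_bytes(2**14, 8)     # default:       16 MiB
ltc  = scratchpad_bytes(2**10, 1)     # Litecoin:      128 KiB
high = scratchpad_bytes(2**20, 8)     # high-security: 1 GiB

print(full // ltc)   # 128  -> "99% less memory hard" than the default
print(high // ltc)   # 8192 -> roughly the "1/7000th" figure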
hero member
Activity: 756
Merit: 501
July 31, 2013, 02:47:40 AM
#34
You wouldn't need 6GB. The latency of main memory on a GPU is horribly bad, so bad that any process which needs random access to GPU main memory will be annihilated by a CPU in terms of performance. GPU memory is designed to stream textures, and as such it couples massive bandwidth with extreme latency. Scrypt was designed to fill the L3 cache of a CPU. The developers of alt-coins had to intentionally lower the memory requirements by 99% to make GPUs competitive. Yes, Litecoin and clones use about 128KB of cache. The MINIMUM memory requirement for Scrypt is 12MB; it doesn't take 16GB. Try it out yourself, or check the various hacking forums: the OpenCL performance for Scrypt (2^14, 8, 1) is beyond pathetic. A cheap CPU will run circles around it.
This begs the question: Was this known to the devs of Litecoin and/or Tenebrix? I mean, why else did they intentionally lower the memory requirements? Big scam after all (hurr durr gpu resistant)?
donator
Activity: 1218
Merit: 1079
Gerald Davis
July 31, 2013, 02:21:52 AM
#33
If they don't release code, that doesn't necessarily mean the actual protocol proposed cannot work, does it? Or is their failure to release the code maybe because the protocol is bullshit, because they actually have to not really do it as described since as described it cannot work, and the whitepaper or whatever description of the purported consensus is just smokescreen flim-flam or something?

I think the latter. Looking at the white paper, FAQ, and wiki, there are some significant "issues" with their explanation of achieving consensus. Of course it "might" work, but the devil is in the details, and with the code closed source there is no way to expose those details to sunlight.

So it is more like "trust us, it works, the magic black box says so" and "at some point in the future, which will not be named and may change at our sole decree, we will let untrusted people run servers, because it won't be a problem." If you have doubts, see the first quote.

Quote
If the approach works, then even if new code has to be written from scratch to implement the approach in free, open-source form, it could be done, right?

Well, we don't know that it works. As described, I see a lot of potential issues.

Just one example:

Quote
What happens if say 33% of the nodes saw transaction A, 33% saw transaction B, and 34% saw transaction C first where all 3 conflict?
Then no transaction will get voted in, and the deterministic algorithm will pick only one which will be a candidate for the next ledger.
How does the deterministic algorithm for including transactions in the ledger work?
Basically it applies transactions in an order designed for maximum efficiency until no new transactions get in. Transactions can pass, hard fail, or soft fail. If they pass, they're included. If they hard fail, they're dropped. If they soft fail, they stay as candidates.
Once a transaction gets in, all that conflict with it will hard fail.
The current algorithm is hash order first, with any soft fails repeated in account/sequence order. When no new transactions succeed in a pass, the operation completes.

Ok, so in the event two or more nodes (servers) disagree, consensus will be achieved by a deterministic algorithm. The conflicting transactions will be sorted in hash order and then included in order until a tx can't be included due to a conflict.

Ok, so I am going to send you coins. I generate tx until I make one with a high tx hash. I then double-spend it with a tx to myself that happens to have a low tx hash. If I can ensure that no consensus is achieved (a partial Sybil attack), then the deterministic algorithm will always pick my double-spend over your tx.

As described, this is a massive vulnerability, one sure to be exploited once the server source code is open sourced and anyone can run a server (or 10,000 servers in a botnet). Now maybe the FAQ is just incomplete, but since the server source code is closed, nobody can say for sure how weak this is.

https://ripple.com/wiki/Consensus
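A toy illustration of that attack, taking the FAQ's "hash order first, conflicts hard fail" rule literally (toy transaction format and conflict rule; nothing here is Ripple's actual code):

Code:
import hashlib

def tx_hash(tx: str) -> bytes:
    return hashlib.sha256(tx.encode()).digest()

def deterministic_apply(candidates):
    # FAQ rule, taken literally: apply in hash order; anything that
    # conflicts with an already-included tx hard-fails.
    included, spent = [], set()
    for tx in sorted(candidates, key=tx_hash):
        src = tx.split(':')[0]          # toy conflict rule: same coin
        if src not in spent:
            spent.add(src)
            included.append(tx)
    return included

def grind_lower(template: str, target: bytes) -> str:
    # vary a nonce until the tx hash sorts ahead of the target tx
    for nonce in range(10**6):
        tx = f"{template} nonce={nonce}"
        if tx_hash(tx) < target:
            return tx

payment = "coin1:pay merchant"
double  = grind_lower("coin1:pay attacker", tx_hash(payment))
# If no consensus forms, the deterministic pass includes the low-hash
# double-spend first and hard-fails the merchant's payment:
assert deterministic_apply([payment, double]) == [double]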
legendary
Activity: 2940
Merit: 1090
July 30, 2013, 11:28:54 PM
#32
So you're categorising Ripple as a centralised ledger? Maybe a store-and-forward network or other distribution network of a centralised ledger, but in essence a centralised ledger?

I would call it a distributed centralised ledger. This may seem an oxymoron, but the point is that there is a redundant network of Ripple "servers" (not to be confused with client nodes); however, the server source code remains closed, and only OpenCoin or agents of OpenCoin run the servers.

I would liken it to a private content distribution system: distributed, but still under centralized control. The advantages are no need for mining and no need for proof of work. Whatever the centralized network says is the status of the coins is the status, with no appeals or overrides.

There is no need for "proof" because all the servers are trusted authorities. They are all run by the same entity, and there is no reason or scenario in which they would provide conflicting ledgers. At a high level, one could say the purpose of "proof of work/stake/etc" is to resolve conflicts in the consensus. There will never be conflicts in a distributed centralized network.

Oh, you are confusing the company-operated network based on the Ripple protocol with the protocol itself.

I meant the proposed consensus mechanism, the protocol, not the control freaks currently reneging on releasing the source code.

If they don't release code, that doesn't necessarily mean the actual protocol proposed cannot work, does it? Or is their failure to release the code maybe because the protocol is bullshit, because they actually have to not really do it as described since as described it cannot work, and the whitepaper or whatever description of the purported consensus is just smokescreen flim-flam or something?

If the approach works, then even if new code has to be written from scratch to implement the approach in free, open-source form, it could be done, right?

So I am not on about some specific proprietary software that implements the protocol, but the actual purported protocol...

-MarkM-
legendary
Activity: 2940
Merit: 1090
July 30, 2013, 11:23:45 PM
#31
I doubt botnet resistance is possible, unless you make it such that normal people can't take part with their PC. You want broader participation for a coin to have better chance of success.

The 1% are only 1%, that is not broad.

Appeal to the starving, the children, the outcaste; make it unappealing to the rich, the well-fed, the well-to-do at first. Their interest then comes from how many millions of people they can sell whatever made them rich and well-fed to once this currency is adopted: all those potential customers, currently ignored because they have no money, can become customers once they have been given money by a method that gives it only to those who need it so desperately that they are willing to sit down 16 hours a day doing Turing tests, or whatever...

-MarkM-


I said "broader". That's an implicit comparison of two scenarios: one where most PCs today can take part, and one where more specialized hardware is necessary. An advantage of a CPU-friendly coin is that more people can get decent mining results with the PC they already have, hence more people can take part easily.

Besides, what does "the starving" have to do with this topic?

Because forget the PC; think a smartphone or handheld. Who even has a desktop machine anymore, other than geeks?

-MarkM-
sr. member
Activity: 560
Merit: 250
July 30, 2013, 10:43:23 PM
#30
Hence eMunie, which relies heavily on hard drives instead of CPUs and GPUs.