Author

Topic: NA - page 321. (Read 893613 times)

legendary
Activity: 1023
Merit: 1000
ltex.nl
December 28, 2014, 01:54:06 PM
/GJ, the reason why people vote for DIGI is that it is currently more certain as a "fix" (we don't know if it is 100% effective) than having a yet-to-be-developed GDR at some undetermined point in the future.

People have offered as much help as they can; it seems there is a better-than-DGW3 solution available, but we get very little feedback on those things. Everybody here feels they are poking in the dark, not knowing what to do.

As far as I know, we could provide a DIGI implementation in Go, but nobody here knows how to use that language. So it has been suggested that we hire a dev for that.

Referring to my earlier analogy, I would like to ask: are we going to "prepare a list" this coming "weekend", or do the "job" (fix)?

Please also note that my vote also stands behind G-J and all the tremendous work he is doing, and has been doing, for us!
legendary
Activity: 1658
Merit: 1001
December 28, 2014, 01:43:32 PM
/GJ, the reason why people vote for DIGI is that it is currently more certain as a "fix" (we don't know if it is 100% effective) than having a yet-to-be-developed GDR at some undetermined point in the future.

People have offered as much help as they can; it seems there is a better-than-DGW3 solution available, but we get very little feedback on those things. Everybody here feels they are poking in the dark, not knowing what to do.

As far as I know, we could provide a DIGI implementation in Go, but nobody here knows how to use that language. So it has been suggested that we hire a dev for that.
sr. member
Activity: 393
Merit: 250
December 28, 2014, 01:30:31 PM


To cut this long reply short: My vote goes to Digi ASAP...

Same here!
sr. member
Activity: 409
Merit: 250
December 28, 2014, 01:21:49 PM
A 25% modified Digishield diff algo is my recommendation.
How is the algo modified? I'd be very interested to see how it works!

You change the max difficulty adjustment.  Other than that it's standard DIGI.
Okay.. And it is increased 25% then? No other changes are made?
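For reference, a minimal sketch of the kind of change being described: a per-block retarget whose maximum swing is capped. All names and numbers here are illustrative, not the actual Digishield/NLG source (the real Digishield also uses different limits for upward and downward moves):

```go
package main

import "fmt"

// retargetClamped applies a naive per-block retarget, then caps the
// adjustment at +/- maxAdjust (e.g. 0.25 for a "25% modified" clamp).
// Illustrative only; not the real Digishield code.
func retargetClamped(oldDiff, actualSpacing, targetSpacing, maxAdjust float64) float64 {
	newDiff := oldDiff * targetSpacing / actualSpacing
	if newDiff > oldDiff*(1+maxAdjust) {
		newDiff = oldDiff * (1 + maxAdjust)
	}
	if newDiff < oldDiff*(1-maxAdjust) {
		newDiff = oldDiff * (1 - maxAdjust)
	}
	return newDiff
}

func main() {
	// Blocks arriving 10x too fast: the raw retarget wants 10x the
	// difficulty, but the clamp limits the step to +25%.
	fmt.Println(retargetClamped(100, 15, 150, 0.25)) // 125
	// Blocks arriving 2x too slow: clamped to -25%.
	fmt.Println(retargetClamped(100, 300, 150, 0.25)) // 75
}
```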

While I completely trust GJ, a simulator is just that... a simulation.
A simulation is not "just a simulation".. it's a piece of code that executes the exact same maths as the satoshi client, but does not require actual hashing power, thereby making it easier to see how an algo reacts.. This makes it a much more powerful tool for testing algo changes.

Can it simulate a fork caused by an instant, massive hashrate increase?  You also talk about luck with a 100MH miner... well, the 600GH miner is going to have luck too.  Computer simulations, while very accurate (when proven so), are only as good as the code.  There's a reason why robot AI isn't to a point of consciousness yet.

Forks aren't caused by the diff readjustment algorithm and so they're currently out of scope for the simulator, and are not part of the difficulty re-adjustment problem at all.
The chances of a 1MH miner having three lucky blocks are a lot larger than the chances of a 600GH miner having three lucky blocks.

1. Look at network difficulty adjustment in increments instead of minutes or seconds -

The common fault is to look at a network in blocks per minute or blocks per 24 hours, such as presented by 'nlgblogger'; instead, a network needs to be looked at as the number of diff adjustments per 24 hours. With a 60sec block time there are 1440 increments per 24 hours; with 150sec (NLG), there are only 576. So any diff algo designed for a 60sec block time needs to be looked at over 1440 blocks, not a 24-hour period, to see if it is working as desired when that diff algo is applied to any other block time. NLG has only about 1/3 of the diff increments in 24 hours compared to a 60sec block time network. Both DigiShield and DGW3 were designed expressly for a 60 second block time, DigiByte in the former case and DarkCoin in the latter. This is why neither Digi nor DGW3 is effective without tuning.
I don't really agree. The main variable that drives the DGW3 calculation is the average block time for the past n blocks. Hashing power can be added/removed instantaneously. It's not like hashing machines need to "warm up" and slowly increase their hashing speed over a few minutes or hours, so it doesn't make sense for any diff algo to take into account how many re-adjustments per hour are happening.

I think you're missing the point here.  The reason Kilo brings this up is VERY valid.  The more iterations of retargeting you have in a specific period, the more change can happen.  Let's say the NLG codebase was only allowed to be altered every 6 months.  You would have to make a lot of major changes to account for everything that happened in that defined time period.  Maybe something happened 5 months ago, but now you're waiting to catch up.  If you can change the codebase every 3 months, now you're only 2 months behind instead of 5.  Mind you, I know the blocks carry the changes with each block, but when they are targeted for a longer time, the reaction is more severe.  I'll get back to this.
I totally agree that having more adjustments (smaller block times) gives the diff re-adjustment more opportunities to re-adjust, so in theory it will work better.. But this is not something the algorithm itself should be aware of. Also, let's not forget that smaller blocktimes introduce a new problem: more chance of chain splits. (The network must converge within the blocktime; if it doesn't, a split might occur.. if the split isn't handled well by the network, it becomes permanent and we're stuck with a fork..)
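As a quick check, the increment counts quoted above follow from simple arithmetic, assuming one difficulty adjustment per block:

```go
package main

import "fmt"

func main() {
	const day = 24 * 60 * 60 // seconds in 24 hours

	// Per-block retargeting: one difficulty adjustment per block.
	fmt.Println(day / 60)  // 60s block time: 1440 adjustments per 24h
	fmt.Println(day / 150) // 150s (NLG):      576 adjustments per 24h
}
```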

Digishield is a much simpler and more elegant diff algo that scales easily and effectively, while DGW3 is very complex and does not scale well.
I wouldn't say DGW3 is very complex.. It's not even complex relative to Digishield, because the two look a lot alike. Please tell me how you think Digi is more elegant and what you mean by "scaling".. Where in the maths does Digi scale better than DGW3?

Averaging difficulties to calculate the new difficulty is more complex in some ways.  It allows more room for error and skewed calculations.  It's pretty obvious DGW3 doesn't scale well.  You can't argue with that, because the chain shows it.  I think it has for a long time now.

I'll take a stab at what Kilo meant by elegant.  DIGI is simple.  It's a 1 to 2 line addition to the base LTC difficulty algorithm.  That's it.  It's elegant in that it's simple and it works.  Additionally, Kilo and I have witnessed DIGI implementations over the last year that scaled very well.  When a chain's hashrate grows to 5X its size in a day and stays healthy, I would say it scales pretty darn well.
I know DGW3 doesn't handle the current spikes very well.. But what is this "scaling" factor? And why would Digi handle it well? Is there an example of a coin with Digi that has 30x hashrate increases (e.g. Clevermining) at the same scale Guldencoin currently sees?

Also, Digishield is not really a 1 to 2 line addition to the base LTC algorithm. Please read https://github.com/digibyte/digibyte/blob/master/src/main.cpp#L1268-L1555
And we don't have 5x hashrate in a day.. we have 30x hashrate in 3 seconds..
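To make the averaging argument concrete, here is a toy comparison of a DGW3-style moving-average response versus a Digi-style single-block response. Both functions are illustrative stand-ins, not the real implementations (which also apply clamps and weightings):

```go
package main

import "fmt"

// dgw3Style retargets from the average spacing of the last N blocks,
// as a moving-average algorithm does.
func dgw3Style(oldDiff float64, spacings []float64, target float64) float64 {
	sum := 0.0
	for _, s := range spacings {
		sum += s
	}
	return oldDiff * target / (sum / float64(len(spacings)))
}

// digiStyle reacts to the most recent block spacing only.
func digiStyle(oldDiff, lastSpacing, target float64) float64 {
	return oldDiff * target / lastSpacing
}

func main() {
	target := 150.0
	// 23 on-target blocks, then one block found in 5s by a jump pool:
	spacings := make([]float64, 23)
	for i := range spacings {
		spacings[i] = target
	}
	spacings = append(spacings, 5)

	fmt.Printf("%.0f\n", dgw3Style(100, spacings, target)) // ~104: the spike is diluted by the average
	fmt.Printf("%.0f\n", digiStyle(100, 5, target))        // 3000: instant (and why a clamp is needed)
}
```

The delayed "false low" behavior the thread complains about is visible here: the average barely moves on the first fast block, while the single-block response overreacts and must be clamped.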

I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation.
Blockreward is nowhere near related to the difficulty algorithm.. It wouldn't have anything to do with Digishield...

Completely related if you reduce the block time to accommodate for an algo that is meant for shorter block times.  A 150 second block time works for LTC because LTC uses the standard algo with a shit ton of hashing power, think petahash, behind it consistently.  NLG doesn't have that.  Most other altcoins that have healthy chains use a block time closer to 60 seconds but not less.  90 seconds would be optimal, but Kilo's 75 seconds is probably a very safe number for NLG.  Less than 60 seconds and you run the risk of timewarp attacks.  Kilo has successfully done this to a few coins just to prove the maths.  But that aside, shortening the block time is a solid idea.  It reduces the magnification of the retarget changes and allows for faster reactions.  If you shorten the block time, you have to reduce the reward, unless you want to mine out faster.  Normally, I would be against reducing block rewards, but halving the block time and the block reward gives you the same coins mined daily, but with 100% more difficulty changes in that time.
With the current network stability (which isn't very high) I wouldn't recommend lower blocktimes.. Furthermore, it simply doesn't work, because although lower blocktimes mean more blocks and thus more re-adjustments.. it's the same calculations being done... Say we halve the block time to 75 seconds; that means the difficulty would also be halved, so we would get twice the number of blocks within the same amount of time. The income/costs for dedicated miners wouldn't change; they would just get twice the blocks at half the reward per block. This also applies to Clever: they would have to spend the same amount of hashing power to get twice the number of blocks at half the reward per block.. so in the end it's just the same thing.. The only thing that changes is that if the DGW3 or Digi impact isn't halved too, it will cause twice the size in blocktime spikes (both up and down).. So you'd have to halve the DGW3/Digi impact.. which means that in the end you're left with zero effect (apart from a network that's more vulnerable to time warp attacks and chain splits). The faster reaction you're talking about would work if the hashrate changed over a number of blocks' time. But with a jump pool, it's instantaneous. If I'm missing something, please explain.
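The "same economics" point checks out against the numbers in the thread. The proposed 500 NLG at 75s implies the current reward is 1000 NLG at 150s; treat the reward figures as quoted from the thread, not verified here:

```go
package main

import "fmt"

func main() {
	const day = 86400.0 // seconds per day

	// Current parameters (as implied in the thread): 150s blocks at 1000 NLG.
	blocksNow := day / 150
	fmt.Println(blocksNow, blocksNow*1000) // 576 blocks, 576000 NLG per day

	// Proposal: 75s blocks at 500 NLG.
	blocksHalved := day / 75
	fmt.Println(blocksHalved, blocksHalved*500) // 1152 blocks, 576000 NLG per day

	// Daily issuance, and thus miner income per unit of hashpower,
	// is unchanged by halving both block time and reward.
}
```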

(....) This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions.
How did you figure out these numbers?

I would say the 50% stuck time is conservative with DIGI.  I would guess you would reduce the amount of stuck time (blocks that take forever to mine) closer to 90-95%.  The chain would function more smoothly, and not make wild swings into the stratosphere like it does now with DGW3.  The 100% increase in difficulty adjustments is correct if you halve the block time.  The reduction in the number of blocks that Clever could mint is about right.  It's not hard to see in the charts I posted that if you threw 10X the network hashrate at the chain, based on profitability, your profitability would be gone before you could grab 15 blocks in under a minute.  There's no averaging, so DIGI doesn't need to take into account the false lows that DGW3 produces.  The reaction is instant instead of delayed by a moving average.

The numbers, while not statistically absolute, are sound.  No need to dismiss them.
The fact that Digi doesn't take into account the false lows /does/ sound good.. as that is actually the largest part of the problem we're seeing.

Please don't take these remarks personally.. I'm merely pointing out that with the hashrate/sec spikes we're experiencing, neither DGW3 nor Digishield will handle it. I know I haven't been active the last week or so; I'm sorry for that. But please keep in mind that we want a permanent and solid solution. We actually need a more complex algo...

I don't take the remarks personally, at all.  I think you are 100% incorrect about DIGI not handling the hashrate spikes though.  We've proved it on a testnet.  It's funny... we presented actual data, but there are still people that deny it, like it was all made up lol.  DIGI works, the graphs back that up, and the solution will work until the entire altcoin environment makes its next paradigm shift.  You can't predict the future of mining, so you can't create an algo for it.  You can, however, use what is effective in this era of mining.  DIGI is the answer.
You spent that entire post trying to debunk my team's findings, and shoot down the possibility of using DIGI.  However, you didn't mention a single plan of action other than to state that a more complex algo needs to be created.

A plan that has been needed for months now.  Give us direction.  You're the rudder.  Don't let us steer ourselves into the rocks.
So, I don't really believe in the halved blocktimes, but investigating Digi further seems to be worth the time..
Before Christmas I had the idea to write 'GDR': "Guldencoin Difficulty Readjustment". Maybe it's a good approach to take what we've learned so far, including the DGW3 and Digi approaches, and work out a better algorithm. After all, that is how DGW3 and Digi were born too: by applying lessons learned and incremental development.
I would be extremely happy to do this, but not alone. If anyone can provide me with Digi ported to the simulator, that would be great; it's pretty simple, but it will take some time.

If the community doesn't want a new algorithm and places its bet on Digi, then I WILL help with that. I'd be happy to work together with you guys to apply and deploy the software patch. Just as before, I will provide compiled binaries for all major platforms, update the seeds and send a network alert. I believe last time that went pretty well, and this time we don't have to do a chain fork.

Please though, understand that we should not do this without being absolutely sure. There are always risks.

Cheers!
legendary
Activity: 1023
Merit: 1000
ltex.nl
December 28, 2014, 12:27:37 PM
Maybe I have become too commercial over time, but I am also a Nerd from hour zero (my first company actually was the third Internet provider in The Netherlands, way back when the WWW wasn't even thought of and we all used Gopher and 300 baud modems).

What hits me reading the above discussions is that we tend to focus more and more on details when we try to solve problems. This is why most of us are discussing detailed parts of the algo. On the other hand, we have a DEV team with a wider focus, overseeing a long-term strategy.

I personally have learned to step back from things first to see the big picture. But I have also experienced that missing out on the details can have quite crucial effects as well. This is why I think we need to do both and try to combine experiences.

Having said that, I see why our DEV team wants to take time to create the best possible foundation for their long-term views. I also see why a lot of us are becoming impatient and frustrated over the effects of the lack of defense against raids from CM. And then I have to agree with both!

Still, my gut feeling tells me it would be wise to accept the fact that DGW hasn't provided us with what we expected. This in no way is something our DEV team has to account for; we were all behind this action. I do sense, however, that the majority is making its peace with the fact that the implementation of DGW has failed.

I'd like to point out that we now have 4 months of real-time testing behind us on our live chain, only to draw the conclusion that the chosen path isn't the right one. My logical question would be: why choose to stay on the proven-wrong path while trying to figure out the ultimate alternative, when we have a much better (and well-argued) path we can hop onto almost effortlessly, doing the same (figuring out the ultimate alternative)?

To cut this long reply short: My vote goes to Digi ASAP...
legendary
Activity: 1582
Merit: 1002
HODL for life.
December 28, 2014, 12:05:09 PM
The main question: when will the algo be solved? The sooner the better.

This.

/GJ, don't take my post, or what I'm going to say right now, as an attack.

You spent that entire post trying to debunk my team's findings, and shoot down the possibility of using DIGI.  However, you didn't mention a single plan of action other than to state that a more complex algo needs to be created.

A plan that has been needed for months now.  Give us direction.  You're the rudder.  Don't let us steer ourselves into the rocks.

-Fuse
full member
Activity: 170
Merit: 100
December 28, 2014, 11:56:50 AM
The main question: when will the algo be solved? The sooner the better.
legendary
Activity: 1582
Merit: 1002
HODL for life.
December 28, 2014, 11:47:54 AM
A 25% modified Digishield diff algo is my recommendation.
How is the algo modified? I'd be very interested to see how it works!

You change the max difficulty adjustment.  Other than that it's standard DIGI.


While I completely trust GJ, a simulator is just that... a simulation.
A simulation is not "just a simulation".. it's a piece of code that executes the exact same maths as the satoshi client, but does not require actual hashing power, thereby making it easier to see how an algo reacts.. This makes it a much more powerful tool for testing algo changes.

Can it simulate a fork caused by an instant, massive hashrate increase?  You also talk about luck with a 100MH miner... well, the 600GH miner is going to have luck too.  Computer simulations, while very accurate (when proven so), are only as good as the code.  There's a reason why robot AI isn't to a point of consciousness yet.


1. Look at network difficulty adjustment in increments instead of minutes or seconds -

The common fault is to look at a network in blocks per minute or blocks per 24 hours, such as presented by 'nlgblogger'; instead, a network needs to be looked at as the number of diff adjustments per 24 hours. With a 60sec block time there are 1440 increments per 24 hours; with 150sec (NLG), there are only 576. So any diff algo designed for a 60sec block time needs to be looked at over 1440 blocks, not a 24-hour period, to see if it is working as desired when that diff algo is applied to any other block time. NLG has only about 1/3 of the diff increments in 24 hours compared to a 60sec block time network. Both DigiShield and DGW3 were designed expressly for a 60 second block time, DigiByte in the former case and DarkCoin in the latter. This is why neither Digi nor DGW3 is effective without tuning.
I don't really agree. The main variable that drives the DGW3 calculation is the average block time for the past n blocks. Hashing power can be added/removed instantaneously. It's not like hashing machines need to "warm up" and slowly increase their hashing speed over a few minutes or hours, so it doesn't make sense for any diff algo to take into account how many re-adjustments per hour are happening.

I think you're missing the point here.  The reason Kilo brings this up is VERY valid.  The more iterations of retargeting you have in a specific period, the more change can happen.  Let's say the NLG codebase was only allowed to be altered every 6 months.  You would have to make a lot of major changes to account for everything that happened in that defined time period.  Maybe something happened 5 months ago, but now you're waiting to catch up.  If you can change the codebase every 3 months, now you're only 2 months behind instead of 5.  Mind you, I know the blocks carry the changes with each block, but when they are targeted for a longer time, the reaction is more severe.  I'll get back to this.


Digishield is a much simpler and more elegant diff algo that scales easily and effectively, while DGW3 is very complex and does not scale well.
I wouldn't say DGW3 is very complex.. It's not even complex relative to Digishield, because the two look a lot alike. Please tell me how you think Digi is more elegant and what you mean by "scaling".. Where in the maths does Digi scale better than DGW3?

Averaging difficulties to calculate the new difficulty is more complex in some ways.  It allows more room for error and skewed calculations.  It's pretty obvious DGW3 doesn't scale well.  You can't argue with that, because the chain shows it.  I think it has for a long time now.

I'll take a stab at what Kilo meant by elegant.  DIGI is simple.  It's a 1 to 2 line addition to the base LTC difficulty algorithm.  That's it.  It's elegant in that it's simple and it works.  Additionally, Kilo and I have witnessed DIGI implementations over the last year that scaled very well.  When a chain's hashrate grows to 5X its size in a day and stays healthy, I would say it scales pretty darn well.


I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation.
Blockreward is nowhere near related to the difficulty algorithm.. It wouldn't have anything to do with Digishield...

Completely related if you reduce the block time to accommodate for an algo that is meant for shorter block times.  A 150 second block time works for LTC because LTC uses the standard algo with a shit ton of hashing power, think petahash, behind it consistently.  NLG doesn't have that.  Most other altcoins that have healthy chains use a block time closer to 60 seconds but not less.  90 seconds would be optimal, but Kilo's 75 seconds is probably a very safe number for NLG.  Less than 60 seconds and you run the risk of timewarp attacks.  Kilo has successfully done this to a few coins just to prove the maths.  But that aside, shortening the block time is a solid idea.  It reduces the magnification of the retarget changes and allows for faster reactions.  If you shorten the block time, you have to reduce the reward, unless you want to mine out faster.  Normally, I would be against reducing block rewards, but halving the block time and the block reward gives you the same coins mined daily, but with 100% more difficulty changes in that time.


(....) This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions.
How did you figure out these numbers?

I would say the 50% stuck time is conservative with DIGI.  I would guess you would reduce the amount of stuck time (blocks that take forever to mine) closer to 90-95%.  The chain would function more smoothly, and not make wild swings into the stratosphere like it does now with DGW3.  The 100% increase in difficulty adjustments is correct if you halve the block time.  The reduction in the number of blocks that Clever could mint is about right.  It's not hard to see in the charts I posted that if you threw 10X the network hashrate at the chain, based on profitability, your profitability would be gone before you could grab 15 blocks in under a minute.  There's no averaging, so DIGI doesn't need to take into account the false lows that DGW3 produces.  The reaction is instant instead of delayed by a moving average.

The numbers, while not statistically absolute, are sound.  No need to dismiss them.


Please don't take these remarks personally.. I'm merely pointing out that with the hashrate/sec spikes we're experiencing, neither DGW3 nor Digishield will handle it. I know I haven't been active the last week or so; I'm sorry for that. But please keep in mind that we want a permanent and solid solution. We actually need a more complex algo...

I don't take the remarks personally, at all.  I think you are 100% incorrect about DIGI not handling the hashrate spikes though.  We've proved it on a testnet.  It's funny... we presented actual data, but there are still people that deny it, like it was all made up lol.  DIGI works, the graphs back that up, and the solution will work until the entire altcoin environment makes its next paradigm shift.  You can't predict the future of mining, so you can't create an algo for it.  You can, however, use what is effective in this era of mining.  DIGI is the answer.

-Fuse
hero member
Activity: 938
Merit: 1000
@halofirebtc
December 28, 2014, 09:20:26 AM
100 lines of code? That's it? Granted, they are very important lines of code, but damn! You made it seem like it was longer than main.cpp...

Thanks for the update/opinions GJ. I guess it wasn't what everyone wanted to hear as your statements are reflected in the price but you can't give them what they want, not healthy. Onward. Just don't put this off for another year.

All luck balances out over time. Luck should not be a factor given great weight.
sr. member
Activity: 409
Merit: 250
December 28, 2014, 09:07:23 AM
With respect to the testing, how is real-world mining not actual maths?  It is how the algorithm is going to behave on a chain, rather than how a simulator says it will behave.  I would think if you wanted true test data you would truly test it, would you not?  Like I said, I'm for testing against the simulator, but we need to know that the simulator (which simulates actual mining, rather than actually mining) is giving us substantiated results.  We can't say the simulator is the key to testing algorithms without testing the simulator against an actual testnet first.

For example, you would need to mine on a testnet.  You would simulate large waves, small waves, etc. and record all the data.  Then you would need to replicate that chain with the simulator and compare the results.  If the simulator was off by more than a predetermined margin, you wouldn't be able to say the simulator was accurate.  After all, the testnet chain would be the definitive data... it's the actual mining.

So to say that testnet data is an educated guess is a little off base.  It is the actual, substantiated data collected from mining.

-Fuse

Let me make another statement: "a simulator is nothing more or less than a tool for testing situations that CAN'T be tested otherwise."
Why do you think cars are crashed into a concrete wall? Why do you think they test-fly a new plane? Because all the simulations they do give pretty good insight, but are no substitute for real-life situations. Simulators are the next best thing when you can't afford to test live!
That said, a simulator for crypto is an unnecessary toy. Real-life testing is possible in the testnet environment: real software, real hashes from real miners.

The Criptoe team tested Digi in a modified form on testnet and the results are looking OK. What reason can any dev of a cumbersome coin have NOT to change the algo to something that's properly tested?


I should clarify a bit on the simulator.
First of all, the comparison to car-crash simulations is not very accurate. We crash cars into walls because it's very, very hard to simulate all the moving parts, the different materials, the impact and weight of all those materials.. the temperature and friction of all the parts.. etc. etc. etc.. So real-life testing is the way to go there..
A difficulty re-adjustment algorithm is ~100 lines of code performing some calculations.. When you give this the same input, it will output the same thing every time.. The only thing that makes the simulator different from a testnet is that no actual hashing is required and one can "fake" any amount of GH/s without having to purchase large machines..
I'm not saying testnet testing shouldn't happen, it's very important.. But unless you own a fortune and just bought a lot of new mining machines, it can't test against the same volumes of hashing power.. Another argument for my simulator is that it makes testing easier and faster.. on a testnet, testing a chain with 100 blocks should take 100*150 seconds.. The simulator can do this instantly because time is "simulated".. Again, that does not make the maths being done any less reliable.. it's just fooling DGW3 by telling it more time has passed than did in the real world.

And yes, Fuse is correct that if you record the testnet results and input them into the simulator, somewhere near the same output should come out.. (only faster).
There is one large difference between the testnet results and the simulator: the simulator calculates with 'perfect' hashrate chance. So if you roll a die 60 times, it will land on each face 10 times.. Whereas real mining has a chance of "having luck".. In the end this doesn't really matter for the diff algo testing, because "having luck" with a 100 MH miner has nowhere near as much impact as 600GH/s rolling through..
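As a sketch of how such a simulator can work (virtual time, "perfect" luck, and an interchangeable retarget function; the retarget below is a naive, unclamped stand-in, not DGW3 or Digi):

```go
package main

import "fmt"

// A toy difficulty simulator in the spirit described above: time is
// virtual, hashing is replaced by the expected (perfectly lucky) block
// spacing for a given hashrate, and the retarget function sees only
// the spacings it would see on-chain. Swap in any algorithm to compare.
func retarget(oldDiff, actualSpacing, targetSpacing float64) float64 {
	return oldDiff * targetSpacing / actualSpacing // naive, unclamped
}

func main() {
	const target = 150.0 // NLG block target in seconds
	diff := 1000.0
	now := 0.0

	for block := 1; block <= 6; block++ {
		hashrate := 100.0 // MH/s baseline
		if block >= 3 {
			hashrate = 600000.0 // a 600 GH/s jump pool arrives instantly
		}
		// "Perfect" luck: a block takes exactly its expected time.
		// (The 15 is a toy constant chosen so the baseline hits ~150s.)
		spacing := diff * 15 / hashrate
		now += spacing
		diff = retarget(diff, spacing, target)
		fmt.Printf("block %d  t=%.2fs  diff=%.1f\n", block, now, diff)
	}
}
```

Running this shows the instant hashrate jump producing one near-zero spacing followed by a huge (unclamped) difficulty swing, which is exactly the behavior a real algorithm's clamp has to tame.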
sr. member
Activity: 409
Merit: 250
December 28, 2014, 08:49:26 AM
A 25% modified Digishield diff algo is my recommendation.
How is the algo modified? I'd be very interested to see how it works!

While I completely trust GJ, a simulator is just that... a simulation.
A simulation is not "just a simulation".. it's a piece of code that executes the exact same maths as the satoshi client, but does not require actual hashing power, thereby making it easier to see how an algo reacts.. This makes it a much more powerful tool for testing algo changes.

1. Look at network difficulty adjustment in increments instead of minutes or seconds -

The common fault is to look at a network in blocks per minute or blocks per 24 hours, such as presented by 'nlgblogger'; instead, a network needs to be looked at as the number of diff adjustments per 24 hours. With a 60sec block time there are 1440 increments per 24 hours; with 150sec (NLG), there are only 576. So any diff algo designed for a 60sec block time needs to be looked at over 1440 blocks, not a 24-hour period, to see if it is working as desired when that diff algo is applied to any other block time. NLG has only about 1/3 of the diff increments in 24 hours compared to a 60sec block time network. Both DigiShield and DGW3 were designed expressly for a 60 second block time, DigiByte in the former case and DarkCoin in the latter. This is why neither Digi nor DGW3 is effective without tuning.
I don't really agree. The main variable that drives the DGW3 calculation is the average block time for the past n blocks. Hashing power can be added/removed instantaneously. It's not like hashing machines need to "warm up" and slowly increase their hashing speed over a few minutes or hours, so it doesn't make sense for any diff algo to take into account how many re-adjustments per hour are happening.

Digishield is a much simpler and more elegant diff algo that scales easily and effectively, while DGW3 is very complex and does not scale well.
I wouldn't say DGW3 is very complex.. It's not even complex relative to Digishield, because the two look a lot alike. Please tell me how you think Digi is more elegant and what you mean by "scaling".. Where in the maths does Digi scale better than DGW3?

I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation.
Blockreward is nowhere near related to the difficulty algorithm.. It wouldn't have anything to do with Digishield...

(....) This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions.
How did you figure out these numbers?


Please don't take these remarks personally.. I'm merely pointing out that with the hashrate/sec spikes we're experiencing, neither DGW3 nor Digishield will handle it. I know I haven't been active the last week or so; I'm sorry for that. But please keep in mind that we want a permanent and solid solution. We actually need a more complex algo...
legendary
Activity: 1658
Merit: 1001
December 28, 2014, 08:28:56 AM
Please, don't touch the reward and the block time. That's something you do with scam coins, not guldencoin (and are, for me, in a way important determinants for trust in this coin).
sr. member
Activity: 332
Merit: 250
December 28, 2014, 08:18:11 AM
Yeah. In the meantime we can approach Golang developers anyway. It's handy to have one or two that we can call on. Can you give me assistance with this, Fuse? Send me a list of requirements/code that the Golang devs need to implement, and where? Preferably via PM, so I can type up a job post on several freelance boards.

Thanks.

And to calm everyone down: please take a moment to breathe, because testing thoroughly is very, very important. Don't rush into pushing for DIGI just because it seems like the right thing to do. We need to know it is the best play, based on maths and not just an educated guess from tests on the testnet. Sure, that goes a long way and is a lot better than no testing at all, but having multiple numbers and tests just gives everyone more bang for their buck.

Not to worry. Buerra's pushing things now. LOL MONEY MONEY LOL

I'll start looking on Odesk and Elance.  Anyone with C++ and GO experience would be able to do the port.  I'd rather let /GJ decide on whether he wants to dev the rest.  It's his baby after all.  Has any post been made to announce the simulator to the crypto community?  There is a dev section of the forums that is filled with very competent devs who eat up this kind of development.  I would guess that announcing it there would lead to an "outsourcing" of community development.  Might just be easier to go that route.

What I say next might seem argumentative, but it's really just for my clarification.  Don't take it the wrong way.

With respect to the testing, how is real-world mining not actual maths?  It is how the algorithm is going to behave on a chain, rather than how a simulator says it will behave.  I would think if you wanted true test data you would truly test it, would you not?  Like I said, I'm for testing against the simulator, but we need to know that the simulator (which simulates actual mining, rather than actually mining) is giving us substantiated results.  We can't say the simulator is the key to testing algorithms without testing the simulator against an actual testnet first.

For example, you would need to mine on a testnet.  You would simulate large waves, small waves, etc. and record all the data.  Then you would need to replicate that chain with the simulator and compare the results.  If the simulator was off by more than a predetermined margin, you wouldn't be able to say the simulator was accurate.  After all, the testnet chain would be the definitive data... it's the actual mining.

So to say that testnet data is an educated guess is a little off base.  It is the actual, substantiated data collected from mining.

-Fuse

I totally agree with Fuse in this. I'm holding back lately because of the strange discussions going on over here regarding the algo and simulator. It's amazing to see a statement that testnet is an educated guess.
I know I'm gonna disturb some dreams again, but I'm a little tired of the powers that are being ascribed to a simulator.

Let me make another statement: "a simulator is nothing more or less than a tool for testing situations that CAN'T be tested otherwise".
Why do you think cars are crashed into a concrete wall? Why do you think they test-fly a new plane? Because all the simulations they do give pretty good insight, but are no substitute for real-life situations. Simulators are the next best thing when you can't afford to test live!
That said, a simulator for crypto is an unnecessary toy. Real live testing is possible in the testnet environment: real software, real hashes from real miners.

The Criptoe team tested Digi in a modified form on testnet and the results are looking OK. What reason can any dev of a struggling coin have NOT to change the algo to something that's properly tested?





  
sr. member
Activity: 246
Merit: 250
December 28, 2014, 05:32:55 AM
Good to know the simulator isn't necessarily the first step to a solution (I always thought it was), and good that more and more possible solutions are being made public. As long as the simulator hasn't brought us a solution, every other suggestion brought by the community must be taken seriously and discussed in the open (instead of being ignored).
full member
Activity: 170
Merit: 100
December 28, 2014, 04:16:06 AM

The time for inaction is over...

EDIT - forgot about my radical proposal...

I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation. This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions. This would make NLG the premier alt-coin.


That is what I would do with NLG

The reduction of block times by 50% and the halving of the amount of NLG mined per block would be a great change if it is possible; it would also extend the coin's mining life cycle, as the block halvings would come faster. Right now the total coins will be mined out in under 15 years; this would extend it to over 40 years.

This algorithm problem has been hanging over the coin's head since September and would have killed most coins by now, so imagine what a good and smooth algorithm would do. I think it could be a top-20 coin by the middle of next year, with the elephant in the room finally dealt with.
Great idea! I fully support this change and would like to see it implemented yesterday  Tongue We should start 2015 with a solid and healthy Gulden, with no room for Clever mining anymore.
sr. member
Activity: 458
Merit: 500
December 28, 2014, 12:16:14 AM

The time for inaction is over...

EDIT - forgot about my radical proposal...

I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation. This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions. This would make NLG the premier alt-coin.


That is what I would do with NLG

The reduction of block times by 50% and the halving of the amount of NLG mined per block would be a great change if it is possible; it would also extend the coin's mining life cycle, as the block halvings would come faster. Right now the total coins will be mined out in under 15 years; this would extend it to over 40 years.

This algorithm problem has been hanging over the coin's head since September and would have killed most coins by now, so imagine what a good and smooth algorithm would do. I think it could be a top-20 coin by the middle of next year, with the elephant in the room finally dealt with.
sr. member
Activity: 672
Merit: 250
December 27, 2014, 07:33:25 PM
Since Fuse pushed this into the public domain... I will submit a summary of my findings and put forth some rather radical proposals. I had wanted to present this to GJ in private as we have been in discussions, but I have not had time due to family and business obligations. And at the moment I am too busy to have time for a detailed reply, so will keep this as short and direct as possible.

1. Look at network difficulty adjustment in increments instead of minutes or seconds -

The common fault is to look at a network as blocks per minute or blocks per 24 hours, such as presented by 'nlgblogger'; instead, a network needs to be looked at as the number of diff adjustments per 24 hours. With a 60 sec block time, there are 1440 increments per 24 hours; with 150 sec (NLG), there are only 576. So any diff algo designed for a 60 sec block time needs to be looked at over 1440 blocks, not a 24 hour period, to see if it is working as desired when that diff algo is applied to any other block time. NLG has only 40% of the diff increments in 24 hours compared to a 60 sec block time network. Both DigiShield and DGW3 were designed expressly for a 60 second block time: DigiByte for the former and DarkCoin for the latter. This is why neither Digi nor DGW3 is effective without tuning.
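The increment counts in this point follow directly from the block target; a quick check of the figures from the post:

```python
# Per-block diff adjustments in 24 hours at a given block target.
# Matches the post: 60 s -> 1440 increments, 150 s (NLG) -> 576.
SECONDS_PER_DAY = 86400

def retargets_per_day(target_spacing_sec):
    """How many per-block retargets fit into 24 hours."""
    return SECONDS_PER_DAY // target_spacing_sec

# NLG gets this fraction of a 60 s coin's daily adjustments:
nlg_fraction = retargets_per_day(150) / retargets_per_day(60)
```

Note `nlg_fraction` comes out to 0.4, i.e. 40% of a 60-second coin's adjustments, not a third.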

2. Digi vs DGW3 -

Digishield is a much simpler and more elegant diff algo that scales easily and effectively, while DGW3 is very complex and does not scale well. DGW3 works extremely well for the coin it was expressly designed for, DRK. Digishield also handles the back side of a spike much better.

3. Testnet, Simulator or Main-net -

While I completely trust GJ, a simulator is just that... a simulation. A testnet is just an alternate blockchain that replicates the main network exactly and allows for testing. Testnet results are real and actual, and I trust those results much more than an unproven simulation. If the community wants... and I have threatened this... I am happy to fork NLG (the main blockchain) and run my tests on the main chain; just be aware that I could own most of your NLG if I did this. Fuse and the Criptoe team have asked me not to do this in the interest of protecting NLG, and I have deferred to their request. Testnet results are as real and factual as testing on the main chain.

4. DigiShield or as I call GuldenShield provides the best solution in the mid-term -

I have spent over 3 weeks testing various diff algos on the NLG testnet, on my own and with Fuse and the Criptoe team. Fuse has my findings and data. A 25% modified Digishield diff algo is my recommendation. It works: it reacts quickly enough to make a spike unprofitable for a large hash-rate miner/pool by driving the diff up very fast, reducing the number of blocks they can mint on a hash-rate spike, while not lowering the diff to a false low on the back side of the spike.
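For illustration, a per-block Digishield-style retarget with a 25% clamp might look like the sketch below. This is not the tested GuldenShield code (Fuse has that data); the 1/8 dampening factor is taken from the original Digishield, and the symmetric 25% per-block clamp is an assumption based on the "25% modified" description.

```python
# Illustrative GuldenShield-style retarget: Digishield-like per-block
# adjustment with the per-block move clamped to +/- 25% (assumed).
TARGET = 150   # NLG block target, seconds
LIMIT = 0.25   # assumed maximum per-block difficulty move, per the post

def guldenshield_retarget(old_diff, last_solve_time):
    """Per-block retarget: dampen the deviation, then clamp the move."""
    # Only apply a fraction of the deviation (Digishield uses 1/8),
    # so a single lucky block does not whipsaw the difficulty.
    adjusted = TARGET + (last_solve_time - TARGET) / 8
    new_diff = old_diff * TARGET / adjusted
    # Clamp the per-block move to +/- 25% of the previous difficulty.
    lo, hi = old_diff * (1 - LIMIT), old_diff * (1 + LIMIT)
    return max(lo, min(new_diff, hi))
```

The clamp is what controls the back side of a spike: even a very long block can only drop the diff 25%, preventing the false low that a multipool can capitalise on.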

5. Real world results -

Clever will still be able to drive the diff up and cause the network to hang, but will mint so few blocks that it will no longer be profitable. There is nothing that can be done to prevent a 'stuck' network in the case of a sudden loss of hash-rate. The network will not plummet to a false low after finding a long block; instead, the diff drop will be controlled, preventing the frenzy of found blocks that Clever is able to capitalise on.

Please read this post - https://bitcointalksearch.org/topic/m.9938466 - for further explanation.

6. Implement GuldenShield - give credit to the Digishield devteam, but since we are essentially deploying a custom diff algo, the GuldenCoin devteam and community deserve the credit and right to call the NLG diff algo their own. Fuse has a working codebase and Criptoe can have a clean, pretty codebase ready to deploy in a few days at worst.

In summary, GuldenShield may not be the perfect solution, but is a solid solution that will be able to let the NLG network grow to a diff of 10k plus. The beauty of crypto and open-source is that the codebase can adapt to the environment and demands of the community. NLG needs to embrace this flexibility and move forward.

The time for inaction is over...

EDIT - forgot about my radical proposal...

I would fork NLG to a 75sec block target time with a 500NLG reward that retargets every block using a custom Digishield implementation. This would reduce any 'stuck network' time by 50%, increase the number of diff increments per 24hrs by 100%, reduce the number of blocks that Clever can mint per 24hrs by 80%, and make NLG a much better network for micro-transactions. This would make NLG the premier alt-coin.


That is what I would do with NLG
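The headline numbers in the proposal can be checked with quick arithmetic (the 80% figure for Clever's blocks depends on the retarget behaviour itself and cannot be derived from the block spacing alone):

```python
# Quick arithmetic check of the 75 s proposal against the current
# 150 s NLG target. Reward figures are not needed for these two claims.
SECONDS_PER_DAY = 86400
current_target, proposed_target = 150, 75  # seconds

# Halving the block target halves the time a 'stuck' network waits on
# any one block, and doubles the number of per-block retargets per day.
stuck_time_reduction = 1 - proposed_target / current_target
retargets_now = SECONDS_PER_DAY // current_target
retargets_proposed = SECONDS_PER_DAY // proposed_target
increment_gain = retargets_proposed / retargets_now - 1
```

This gives a 50% stuck-time reduction and 576 → 1152 retargets per day, a 100% increase, matching the two checkable claims.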
member
Activity: 100
Merit: 10
December 27, 2014, 05:52:25 PM
Click on the pictures for a larger view.
...
The first 2 jumps are from baseline to 5 times the network hashrate.  After that, the jumps are 10X increases, and the highest peak is when we toggled a 15X increase for giggles.  In the following pictures you will see the difficulty values and the steps up and down instead of instant spikes and crashes.  Notice that each time the hashrate was removed to go back to baseline, the diff never dives below the baseline.  It tapers back, and when the next block is found, it stabilizes instantly.  This is something DGW3 isn't doing properly.
...
-Fuse

Looks really good! For my taste it could increase even faster (use the 25% you tested instead of 20%) to give MPs like CLEVER even less time to make use of the lower diff.
How many blocks is the window for MPs, from what you saw in your tests?
I guess your mod is based on DigiShield V1, since V2 and V3 are based on multi-algos.

Isn't it possible to temporarily make a small adjustment to the DGW3 parameters until the simulator is ready for testing? Or can't that be done without a mandatory upgrade? That would already mean some improvement; the bigger ones would come later with the new Guldencoin algo.

DGW3 in its original form (as used here) has a little off-by-one bug that is easy to fix (plus a little code cleanup), but that's not the root of the problem. Based on a run of 2k blocks I was analysing, there seems to be another, more hidden bug that hits at diff 512 and looks like a data-type conversion problem to me. At that point my free time got too short to get closer to the problem.
And I think /GJ was right when he said that fixing DGW3 will not fix our problem of getting hammered by MPs; DGW3 has proven that it does not react fast enough for what we need. It sure reacts faster than earlier versions while still trying to smooth the spikes, but the smoothing is what creates the windows for MPs.

From reading through many diff algos during the last months I have come to my own idea for an algo that should react fast while still being smoothed a bit by a trend analysis. Sadly, free time is still short and my experience in reading C is far greater than in writing it Sad . And without a tool to test and fine-tune the idea it's all theory. So the next steps of the simulator, with mid and big waves, are needed here too.

Conclusion: Fuse's DIGImod seems the best solution for now to take some pressure off NLG, and it gives enough time to develop the NLG diff algo for the foreseeable future. That's the hope based on what we can see. A new diff algo that is able to keep up with today's rapidly increasing hash power needs more time than previously expected, and a lot of testing, before it can go live for a coin with real-life use.

Another point I mentioned before: extremely short block times need a penalty, which in my eyes is a necessary and reasonable defense! Adding a rejection of extremely short block times (<50 sec, 1/3 of the default time of 150) to the block check is a threshold low enough that normal 'lucky blocks' would still be accepted. Or at least cut the block reward for those blocks to 1/3. This way the first few blocks mined by big hash power jumping in, before the diff algo can adjust, would no longer overpay (relative to the time used).
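The softer variant of this penalty could be sketched as follows. The 50-second threshold and the 1/3 cut come from the post above; the 1000 NLG full reward is an assumption for illustration only.

```python
# Sketch of the proposed short-block reward penalty (softer variant):
# blocks found faster than 1/3 of the target spacing earn 1/3 reward.
# FULL_REWARD is an assumed figure, not confirmed from the NLG source.
TARGET = 150          # NLG block target, seconds
MIN_SPACING = TARGET // 3   # 50 s threshold from the post
FULL_REWARD = 1000.0  # assumed current block reward in NLG

def block_reward(solve_time_sec):
    """Cut the reward for suspiciously fast blocks; pay lucky but
    plausible blocks (>= 50 s) in full."""
    if solve_time_sec < MIN_SPACING:
        return FULL_REWARD / 3
    return FULL_REWARD
```

The idea is that the first burst of blocks after a big hashrate jump, found before the diff algo can react, no longer overpays per unit of time spent.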
hero member
Activity: 938
Merit: 1000
@halofirebtc
December 27, 2014, 03:50:38 PM
Throwing money at GJ from the premine isn't going to get anything done faster and could be used in a better way.

Besides, don't you all trust him?  Wink


Trust isn't necessarily the issue here.  I trust him completely.  I just want to see things move along and not sit stagnant.

I've spoken in the past with /GJ about development in PM, and he expressed his dedication to NLG development, but lack of time and funds to make it a dedicated job.  Let's face it... in most cases, devs do this in their free time because they need to pay the bills with a full-time job.  I'm not entirely sure what /GJ does for a living outside of crypto, but if using dev premine to pay him to allow him to dedicate more time to development helps, I'm all for it.  This is all up to /GJ though... we can't tell him to quit his job and only work on NLG.

As far as GO skills go, that's not my bag.  I just don't have the time to put into learning it right now.  I'm sure I could, but I've got enough on my plate.  However, if the dev team and community decide to push forward with DIGI, then I'm ready to create the git pull request if needed.  No dev time is needed on the dev team's end, minus the Windows compile.  At least that could take a little pressure off the team.

Like others have said though, I'd like to hear from /GJ and see what he has to say.  I'm ready to act on any decision, I just need direction.

snip

-Fuse

What I meant by the trust thing was that he's oriented to do well for NLG regardless of whether or not we throw additional monies at him.
sr. member
Activity: 880
Merit: 251
Think differently
December 27, 2014, 02:02:59 PM
Just a real life metaphor from me....

My best friend and I bought a boat 10 years ago. It needed finishing on the interior. I am a hands-on guy (measure once, cut many times), the opposite of my friend, who likes to make models and lists endlessly before he actually starts to do anything. Needless to say, this created some challenges.

One weekend our wives decided it was time to end the discussions and encouraged us to start doing the job. I decided to go along with the ideas we had come up with together, drove by the stores to pick up the materials, and started on the job. My friend started preparing a list and spent the entire weekend finding out where best to buy the materials.

On Sunday he arrived at the shipyard with the list and found me with a beer on the boat. It was finished, almost exactly the way he described on his list....

I don't want to say here this is the best way to go, but it might help you to understand why I'm with Fuse's last remarks...

Nice example, I'm with Fuse's choice too
Jump to: