
sr. member
Activity: 409
Merit: 250
October 22, 2014, 04:29:29 AM
The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool could easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing out the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

I'm with CED and Fuse on this; I would say three or four blocks minimum, because otherwise the diff will be all over the place. I'm also missing some structure in this discussion. I posted a list yesterday with strange diff values; imho that's problem number one. If we don't solve that problem, every solution will probably show the same weird behaviour that KGW and DGW are showing.

Second, if you want to modify the algo, you need to identify the problem properly, and with that problem description you can work out a solution. Without a proper problem description there is a huge risk of drifting away from the goal. Why take 6 blocks, or 10, or 24? Because that's the number of doors in your office? Or because that's how many blocks are being mined fast? So in my opinion the path could look like this:

phase 1
- identify the cause of the swings beyond 3x and 1/3x
The change is limited to 3x and 1/3x, but the new diff is calculated from the average diff over the previous 24 blocks, so you can't compare it directly to the previous block's diff as displayed in the list (see the simplified sketch below, after this list). Please see https://github.com/nlgcoin/guldencoin/blob/master/src/main.cpp#L1286 and line 1299 and lines 1303-1306.
- work out a solution for point 1 first
We could change it so the diff is calculated only from the diff of the previous block. Then the diff change between individual blocks could never exceed 3x or 0.33x. But that also means the diff could go 3^3 = 27x (!) when there are 3 lucky/fast blocks. I don't think that is a smart thing to do.
- if the fix for point 1 is outside the DGW code, see if DGW is acting as expected in the first place
I don't think point 1 needs to be fixed. I'm not aware of any code influencing the difficulty outside DGW. DGW is acting as it should (as it is doing in other coins), but simply can't handle these hash/sec spikes.
phase 2
- identify the problem and describe it properly (yeah, I know it's the jump pools, but what's really the underlying thing that causes the trouble?)
You're right. It is good to have a verbose explanation with some examples. As long as we keep in mind that diff readjustment should work for more problems than only this one.
- work out a modification of the algo by testing it against past hashrate swings and see if it smooths them out.
Testing is very important. We shouldn't release a new algo too fast. Placing a new algo on the existing chain doesn't give a proper indication though, as it does not influence block times and doesn't change multipool behaviour. It would be better to simulate several cases of hashrate joins/leaves. I have done this before with DGW3, but never with the amounts that we currently see. This time we must simulate extreme hashrates, even several times more than we currently see with CleverMining. I plan to release the software I used for simulations; it just needs some polishing and persistence of scenarios (currently the user must enter the block times manually, each time). That way everyone can apply and test changes locally. (A toy sketch of such a simulator follows at the end of this post.)
- test the modification thoroughly for vulnerabilities and flaws
Very important! I think 24Kilo can be of great help here.
- implement it
This time deployment will be very smooth. We can release the new algo one week in advance, with a hardcoded block height at which all nodes switch algorithms.
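To make the phase 1 point concrete, here is a minimal sketch of the DGW3 limiting step as I read it (my own simplified reconstruction, not the literal main.cpp code; difficulty is treated as a plain number and a 150-second target spacing is assumed). The 3x / 1/3x clamp acts on the timespan of the last 24 blocks, and the new difficulty is derived from the average difficulty of those 24 blocks, which is why a single block's diff can move by more than 3x compared to the previous block, as in the 412 -> 118 example from the list.

Code:
// Simplified DGW3-style retarget (assumption: difficulty as a plain double,
// 150 s target spacing). Not the literal Guldencoin main.cpp code.
#include <algorithm>
#include <vector>

double NextDifficultyDGW3(const std::vector<double>& last24Diffs, // difficulties of the last 24 blocks
                          double actualTimespan,                  // seconds spanned by those 24 blocks
                          double targetSpacing = 150.0)
{
    const double n = static_cast<double>(last24Diffs.size());     // normally 24
    double avgDiff = 0.0;
    for (double d : last24Diffs) avgDiff += d;
    avgDiff /= n;

    const double targetTimespan = n * targetSpacing;

    // The clamp acts on the timespan, so the result is bounded to
    // [avgDiff / 3, avgDiff * 3] -- relative to the 24-block AVERAGE,
    // not relative to the previous block's difficulty.
    actualTimespan = std::clamp(actualTimespan, targetTimespan / 3.0, targetTimespan * 3.0);

    return avgDiff * targetTimespan / actualTimespan;
}

Because the bound is taken against the 24-block average, a drop from 412 to 118 between neighbouring blocks does not actually violate the limit, which is exactly the point GJ makes above.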

 

I really like the weighted average ideas that are being discussed here. I had the idea of a weighted average in mind, but I hadn't worked it out yet. It's great to see the discussion here, please keep it going! It is definitely influencing the code I'm writing.

Please understand that when I release the code I have so far, it's not a final version. We do this together. It will be our algorithm, created by the Guldencoin community. So the code is open for feedback and changes. This is the only way we can fix this problem. We all have great ideas and knowledge. If we keep combining our efforts, we will create the best possible algorithm.
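In the spirit of the simulation idea above, here is a toy, self-contained sketch (entirely an illustration, not the tool GJ plans to release): it feeds a hashrate schedule with a large pool joining and leaving into a plain 24-block retarget and prints block times and difficulty. Swapping out the retarget step makes it easy to compare different rules against the same hashrate schedule.

Code:
// Toy hashrate-jump simulator (assumptions: 150 s target spacing, difficulty
// and hashrate in arbitrary but consistent units, exponential solve times).
#include <algorithm>
#include <cstdio>
#include <deque>
#include <numeric>
#include <random>

int main() {
    const double targetSpacing = 150.0;
    std::deque<double> times;   // last 24 block times
    std::deque<double> diffs;   // last 24 difficulties
    std::mt19937 rng(42);
    double diff = 100.0;

    for (int h = 0; h < 300; ++h) {
        // schedule: a big pool joins for blocks 100-140 (10x the base hashrate)
        double hashrate = (h >= 100 && h < 140) ? 1000.0 : 100.0;

        // expected solve time scales with difficulty / hashrate
        std::exponential_distribution<double> solve(hashrate / (targetSpacing * diff));
        double blockTime = solve(rng);

        times.push_back(blockTime);
        diffs.push_back(diff);
        if (times.size() > 24) { times.pop_front(); diffs.pop_front(); }

        // plain 24-block retarget with the usual 3x / (1/3) clamp on the timespan
        double actual = std::accumulate(times.begin(), times.end(), 0.0);
        double target = targetSpacing * times.size();
        actual = std::clamp(actual, target / 3.0, target * 3.0);
        double avgDiff = std::accumulate(diffs.begin(), diffs.end(), 0.0) / diffs.size();
        diff = avgDiff * target / actual;

        std::printf("block %3d  hashrate %6.0f  time %7.1f s  next diff %8.2f\n",
                    h, hashrate, blockTime, diff);
    }
}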
legendary
Activity: 952
Merit: 1000
October 22, 2014, 04:23:24 AM
sr. member
Activity: 393
Merit: 250
October 22, 2014, 03:27:32 AM
To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the preceding block took two hours, the diff dropped to 118. But the max drop is 1/3, so it should have dropped to 138, not 118. The diff keeps lowering despite block times of tens of seconds, until 137159, when the calculation says "whoa fellas, this is too fast" and raises the diff from 29.8 to 162.1, which is more than the max 3x that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

 
137184 2014-10-20 19:37:56 1 1000 227.657 307183000 64.5077 4675.82 61.0856%
137183 2014-10-20 19:37:43 1 1000 188.93 307182000 64.5077 4675.82 61.0856%
137182 2014-10-20 19:37:14 6 45685.2034 431.768 307181000 64.5076 4675.82 61.0858%
137181 2014-10-20 19:01:13 1 1000 374.057 307180000 64.4828 4675.79 61.095%
137180 2014-10-20 19:00:42 2 1422.978 332.835 307179000 64.4827 4675.79 61.0951%
137179 2014-10-20 18:53:02 1 1000 302.237 307178000 64.4776 4675.79 61.097%
137178 2014-10-20 18:52:47 1 1000 276.263 307177000 64.4776 4675.79 61.0971%
137177 2014-10-20 18:52:02 1 1000 256.457 307176000 64.4773 4675.79 61.0973%
137176 2014-10-20 18:50:03 1 1000 240.951 307175000 64.4761 4675.78 61.0978%
137175 2014-10-20 18:49:57 3 1975.29793541 228.569 307174000 64.4763 4675.78 61.0978%
137174 2014-10-20 18:47:50 1 1000 218.365 307173000 64.475 4675.78 61.0984%
137173 2014-10-20 18:47:41 1 1000 210.006 307172000 64.4751 4675.78 61.0984%
137172 2014-10-20 18:47:39 1 1000 203.035 307171000 64.4753 4675.78 61.0984%
137171 2014-10-20 18:47:24 1 1000 197.242 307170000 64.4753 4675.78 61.0985%
137170 2014-10-20 18:47:19 1 1000 192.377 307169000 64.4755 4675.78 61.0985%
137169 2014-10-20 18:46:52 1 1000 188.239 307168000 64.4754 4675.78 61.0986%
137168 2014-10-20 18:46:48 1 1000 184.775 307167000 64.4756 4675.78 61.0986%
137167 2014-10-20 18:46:44 1 1000 181.831 307166000 64.4757 4675.78 61.0987%
137166 2014-10-20 18:46:41 1 1000 179.32 307165000 64.4759 4675.78 61.0987%
137165 2014-10-20 18:46:29 1 1000 177.128 307164000 64.476 4675.78 61.0987%
137164 2014-10-20 18:45:32 1 1000 175.297 307163000 64.4755 4675.78 61.099%
137163 2014-10-20 18:45:08 1 1000 173.454 307162000 64.4755 4675.78 61.0991%
137162 2014-10-20 18:43:29 2 6641.46782322 171.971 307161000 64.4745 4675.78 61.0995%
137161 2014-10-20 18:42:48 1 1000 170.843 307160000 64.4744 4675.78 61.0996%
137160 2014-10-20 18:42:27 1 1000 162.133 307159000 64.4744 4675.78 61.0997%
137159 2014-10-20 18:42:16 1 1000 29.857 307158000 64.4744 4675.78 61.0997%
137158 2014-10-20 18:41:59 1 1000 32.026 307157000 64.4744 4675.78 61.0998%
137157 2014-10-20 18:41:53 1 1000 34.314 307156000 64.4746 4675.78 61.0998%
137156 2014-10-20 18:41:40 1 1000 36.323 307155000 64.4746 4675.78 61.0999%
137155 2014-10-20 18:41:37 1 1000 38.742 307154000 64.4748 4675.78 61.0999%
137154 2014-10-20 18:41:34 1 1000 37.496 307153000 64.475 4675.78 61.0999%
137153 2014-10-20 18:41:18 1 1000 40.622 307152000 64.475 4675.78 61.1%
137152 2014-10-20 18:40:59 1 1000 43.965 307151000 64.475 4675.78 61.1001%
137151 2014-10-20 18:40:52 1 1000 47.547 307150000 64.4751 4675.78 61.1001%
137150 2014-10-20 18:40:48 1 1000 50.685 307149000 64.4753 4675.78 61.1001%
137149 2014-10-20 18:40:47 1 1000 54.738 307148000 64.4755 4675.78 61.1001%
137148 2014-10-20 18:40:43 1 1000 58.675 307147000 64.4757 4675.78 61.1001%
137147 2014-10-20 18:40:31 1 1000 63.313 307146000 64.4757 4675.78 61.1002%
137146 2014-10-20 18:40:03 1 1000 68.027 307145000 64.4756 4675.78 61.1003%
137145 2014-10-20 18:39:48 1 1000 72.557 307144000 64.4757 4675.78 61.1004%
137144 2014-10-20 18:39:44 1 1000 78.178 307143000 64.4758 4675.78 61.1004%
137143 2014-10-20 18:39:00 1 1000 83.37 307142000 64.4755 4675.78 61.1006%
137142 2014-10-20 18:38:55 1 1000 88.63 307141000 64.4757 4675.78 61.1006%
137141 2014-10-20 18:38:54 1 1000 92.904 307140000 64.4759 4675.78 61.1006%
137140 2014-10-20 18:38:49 1 1000 99.356 307139000 64.476 4675.78 61.1006%
137139 2014-10-20 18:38:33 1 1000 97.431 307138000 64.476 4675.78 61.1007%
137138 2014-10-20 18:37:44 1 1000 105.188 307137000 64.4757 4675.78 61.1009%
137137 2014-10-20 18:37:00 8 92544.22974057 118.799 307136000 64.4754 4675.78 61.1011%
137136 2014-10-20 18:36:42 3 8363.86928814 412.358 307135000 64.4763 4675.78 61.1006%
 



For the people who missed it: the list thsminer is talking about is quoted above.
sr. member
Activity: 332
Merit: 250
October 22, 2014, 02:58:31 AM
The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool could easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing out the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

I'm with CED and Fuse on this; I would say three or four blocks minimum, because otherwise the diff will be all over the place. I'm also missing some structure in this discussion. I posted a list yesterday with strange diff values; imho that's problem number one. If we don't solve that problem, every solution will probably show the same weird behaviour that KGW and DGW are showing.

Second, if you want to modify the algo, you need to identify the problem properly, and with that problem description you can work out a solution. Without a proper problem description there is a huge risk of drifting away from the goal. Why take 6 blocks, or 10, or 24? Because that's the number of doors in your office? Or because that's how many blocks are being mined fast? So in my opinion the path could look like this:

phase 1
- identify the cause of the swings beyond 3x and 1/3x
- work out a solution for point 1 first
- if the fix for point 1 is outside the DGW code, see if DGW is acting as expected in the first place
phase 2
- identify the problem and describe it properly (yeah, I know it's the jump pools, but what's really the underlying thing that causes the trouble?)
- work out a modification of the algo by testing it against past hashrate swings and see if it smooths them out.
- test the modification thoroughly for vulnerabilities and flaws
- implement it












 
legendary
Activity: 1025
Merit: 1001
October 22, 2014, 01:04:21 AM
sr. member
Activity: 393
Merit: 250
October 21, 2014, 11:53:29 PM
The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool could easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing out the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse

Nice to see some constructive consensus!
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 09:11:55 PM
The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool could easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing out the diff changes.

I would never recommend a 1-2 block focus.  That would be crazypants thinking.

I seriously think 6 is a nice even number, and it falls in line with about the average number of blocks that Clever rapes at any given time.  You put more focus on those 6 blocks, instead of 24 evenly, and you'll see a big change in Clever's ability to own 90% of the network.

Of course, GJ has some code on the table right now.  I guess we'll all just need to wait to see what it looks like before we try to push one way or another.

-Fuse
legendary
Activity: 1526
Merit: 1000
the grandpa of cryptos
October 21, 2014, 09:08:43 PM
I really like this coin's idea.
member
Activity: 100
Merit: 10
October 21, 2014, 08:45:10 PM

I think this is getting more complicated than it needs to be. If we're looking at a weighted average, like I suggested, take the newest 6 blocks individually and count the older 18 as one block. I don't see how additional weight on the latest blocks would allow for an exploit. Care to elaborate?

I do agree though that the changes need to happen faster, much like I originally pointed out here: https://bitcointalk.org/index.php?topic=554412.msg8992812;topicseen#msg8992812.  The POT chart there is a solid representation of what we should be trying to achieve.

-Fuse

We basically follow the same path: the more weight you give to the newer blocks, the quicker it reacts to hash rate changes.
Neither your idea nor mine is a big deal to implement; it's only a few lines of code in the loop that calculates the actual timespan and average.
As I understand it, you count the 6 newest blocks individually against the 18 older ones averaged as one block.
My approach is to increase the weight in steps the closer you get to the newest block, giving the most recent set of blocks 40% or 53% of the total weight.
Which one is better? We can speculate, but it would be best to see some numbers from test cases or network tests (see the quick comparison snippet below).
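As a quick sanity check of those percentages, here is a small back-of-the-envelope snippet (an illustration, not code from either proposal) computing what share of the total weight the newest 6 of 24 blocks carry under the three schemes being discussed.

Code:
// Share of the total weight carried by the newest 6 of 24 blocks.
#include <cstdio>

int main() {
    // Fuse's idea: newest 6 blocks count individually, the older 18 count as one block
    double fuse = 6.0 / (6.0 + 1.0);
    // graduated weights over four 6-block parts (oldest -> newest)
    double conservative = (6.0 * 4.0) / (6.0 * (1 + 2 + 3 + 4));   // 24 / 60
    double aggressive   = (6.0 * 8.0) / (6.0 * (1 + 2 + 4 + 8));   // 48 / 90
    std::printf("newest-6 weight share: 6+18 scheme %.0f%%, conservative %.0f%%, aggressive %.1f%%\n",
                fuse * 100, conservative * 100, aggressive * 100);
    // prints roughly 86%, 40% and 53.3%
}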

The danger of putting too much weight on the latest block:
Let's take it to an extreme and use only the last 1 or 2 blocks. A pool could easily cherry-pick those blocks, leave for 2 blocks, come back... and repeat. The diff would jump up and down like crazy.
So it is important to fine-tune the settings and find a good balance between instant reaction and smoothing out the diff changes.

(I had a quick look at NiteGravityWell and without digging too deep into it, it looks like a slightly modded KGW.)
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 07:17:59 PM
DGW3 was developed at a time, and for a network, with far smaller spikes than we see today.
I still think what we need is a faster reaction to the hash rate spikes and drops we see.
The more time it takes to settle at the right diff for the actual hash rate, the more time we give pools to take advantage of it; and after a drop, normal miners suffer for as long as it takes to bring the diff back down.

The idea I wrote about a few pages back and some others picked up too is to give the newer blocks in the interval a higher weight than the older ones.

Let's say we split the 24 blocks into 4 parts when calculating nActualTimespan.

older ---> newer
part1, part2, part3, part4 (each part 6 blocks)

a more conservative approach:
part1: block times * 1   (counted like 06)
part2: block times * 2   (counted like 12)
part3: block times * 3   (counted like 18)
part4: block times * 4   (counted like 24)

sum / 60 = weighted average
weighted average * 24 = nActualTimespanWeighted

a more aggressive approach:
part1: block times * 2^0  (=block times * 1)   (counted like 06)
part2: block times * 2^1  (=block times * 2)   (counted like 12)
part3: block times * 2^2  (=block times * 4)   (counted like 24)
part4: block times * 2^3  (= block times * 8)   (counted like 48)

sum / 90 = weighted average
weighted average * 24 = nActualTimespanWeighted

Splitting it into more parts would make it react even faster, both on the way up and on the way down.
Giving the newest one or two blocks additional weight can speed it up even more, but giving the very latest blocks too much weight could open the door to an attack.

It should be an easy mod to our DGW3 diff algo with lower risk than a completely new algo.

I think this is getting more complicated than it needs to be. If we're looking at a weighted average, like I suggested, take the newest 6 blocks individually and count the older 18 as one block. I don't see how additional weight on the latest blocks would allow for an exploit. Care to elaborate?

I do agree though that the changes need to happen faster, much like I originally pointed out here: https://bitcointalk.org/index.php?topic=554412.msg8992812;topicseen#msg8992812.  The POT chart there is a solid representation of what we should be trying to achieve.

-Fuse
member
Activity: 100
Merit: 10
October 21, 2014, 05:49:04 PM

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise ~3.0x within 6 blocks, and it can fall to ~0.33x within 5 blocks. In the linked formulas, the impact of the new blocks themselves is not included in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

DGW3 was developed at a time, and for a network, with far smaller spikes than we see today.
I still think what we need is a faster reaction to the hash rate spikes and drops we see.
The more time it takes to settle at the right diff for the actual hash rate, the more time we give pools to take advantage of it; and after a drop, normal miners suffer for as long as it takes to bring the diff back down.

The idea I wrote about a few pages back and some others picked up too is to give the newer blocks in the interval a higher weight than the older ones.

Let's say we split the 24 blocks into 4 parts when calculating nActualTimespan.

older ---> newer
part1, part2, part3, part4 (each part 6 blocks)

a more conservative approach:
part1: block times * 1   (counted like 06)
part2: block times * 2   (counted like 12)
part3: block times * 3   (counted like 18)
part4: block times * 4   (counted like 24)

sum / 60 = weighted average
weighted average * 24 = nActualTimespanWeighted

a more aggressive approach:
part1: block times * 2^0  (=block times * 1)   (counted like 06)
part2: block times * 2^1  (=block times * 2)   (counted like 12)
part3: block times * 2^2  (=block times * 4)   (counted like 24)
part4: block times * 2^3  (= block times * 8)   (counted like 48)

sum / 90 = weighted average
weighted average * 24 = nActualTimespanWeighted

Splitting it into more parts would make it react even faster, both on the way up and on the way down.
Giving the newest one or two blocks additional weight can speed it up even more, but giving the very latest blocks too much weight could open the door to an attack.

It should be an easy mod to our DGW3 diff algo with lower risk than a completely new algo.
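As a concrete sketch of the mod described above (the function name and structure are assumptions for illustration, not existing Guldencoin code), the weighted nActualTimespan could look roughly like this, with the 24 individual block times passed in oldest first.

Code:
// Weighted nActualTimespan over 24 block times, split into four 6-block parts.
// Conservative weights {1,2,3,4} (total 60), aggressive weights {1,2,4,8} (total 90).
#include <cstdint>
#include <vector>

int64_t WeightedActualTimespan(const std::vector<int64_t>& blockTimes, bool aggressive)
{
    static const int kConservative[4] = {1, 2, 3, 4};
    static const int kAggressive[4]   = {1, 2, 4, 8};
    const int* w = aggressive ? kAggressive : kConservative;

    int64_t weightedSum = 0;
    int64_t totalWeight = 0;
    for (std::size_t i = 0; i < blockTimes.size(); ++i) {   // expects 24 entries, oldest first
        std::size_t part = (i * 4) / blockTimes.size();     // 0..3, newer parts get higher weight
        weightedSum += blockTimes[i] * w[part];
        totalWeight += w[part];
    }
    // equivalent to "sum / 60 (or 90) = weighted average, * 24 = nActualTimespanWeighted",
    // kept in integer math with the multiplication first to avoid truncation
    return weightedSum * static_cast<int64_t>(blockTimes.size()) / totalWeight;
}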
sr. member
Activity: 672
Merit: 250
October 21, 2014, 05:00:38 PM
To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the preceding block took two hours, the diff dropped to 118. But the max drop is 1/3, so it should have dropped to 138, not 118. The diff keeps lowering despite block times of tens of seconds, until 137159, when the calculation says "whoa fellas, this is too fast" and raises the diff from 29.8 to 162.1, which is more than the max 3x that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

+1 for the detailed info, mate.  I'm with you 100%.

I really do believe either looking at less blocks for the average, or creating a weighted average is the way to go.  Additionally, there needs to be a limit in the amount of difficulty increase/decrease so we're not throwing the difficulty all over the place.

Again- less extreme changes that happen more often.

-Fuse

I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise ~3.0x within 6 blocks, and it can fall to ~0.33x within 5 blocks. In the linked formulas, the impact of the new blocks themselves is not included in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with initial implementation.


Please send me your code by PM to test it for time-based attacks... remember that the Achilles' heel of difficulty retargeting algos that adjust every block is vulnerability to time-based attacks... this is why KGW had to be recoded... I would not try to reinvent the wheel with this... a DigiShield variant is the best next step in finding a solution... I know this algo has pretty much resolved the multi-pool problem for POT and has proven to be quite resistant to attack.

This is the approach Criptoe is working on at present.
legendary
Activity: 1023
Merit: 1000
ltex.nl
October 21, 2014, 04:32:24 PM
I think you misunderstand. The diff will rise by at most a factor of 1.2 between blocks. So if the diff is 300, the next block's diff can be at most 360.

I definitely misread your text lol

This is a decent approach. As long as we don't jump from 250 to 750 in a single step, you should see better block times. Additionally, as the difficulty drops back down, we'll get to the point where we're riding a sweet spot where it's not profitable for the miners doing short-term profitability calculations (aka multis).

-Fuse

I feel like a reward is due soon!
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 04:04:54 PM
I think you misunderstand. The diff will rise by at most a factor of 1.2 between blocks. So if the diff is 300, the next block's diff can be at most 360.

I definitely misread your text lol

This is a decent approach. As long as we don't jump from 250 to 750 in a single step, you should see better block times. Additionally, as the difficulty drops back down, we'll get to the point where we're riding a sweet spot where it's not profitable for the miners doing short-term profitability calculations (aka multis).

-Fuse
sr. member
Activity: 409
Merit: 250
October 21, 2014, 03:57:50 PM
I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise ~3.0x within 6 blocks, and it can fall to ~0.33x within 5 blocks. In the linked formulas, the impact of the new blocks themselves is not included in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with initial implementation.


I think you're shooting too high with a 3X increase.  We'll have the same results we have now where it jumps too high for normal miners, and we get stuck.  The drop back down is fine, but the jump up in difficulty should be smaller IMO.

As 24Kilo always tells me, we don't need to reinvent the wheel.  We just need to make sure our current wheel is round.

-Fuse

I think you misunderstand. The diff will rise by at most a factor of 1.2 between blocks. So if the diff is 300, the next block's diff can be at most 360.
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 02:36:47 PM
I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise ~3.0x within 6 blocks, and it can fall to ~0.33x within 5 blocks. In the linked formulas, the impact of the new blocks themselves is not included in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with initial implementation.


I think you're shooting too high with a 3X increase.  We'll have the same results we have now where it jumps too high for normal miners, and we get stuck.  The drop back down is fine, but the jump up in difficulty should be smaller IMO.

As 24Kilo always tells me, we don't need to reinvent the wheel.  We just need to make sure our current wheel is round.

-Fuse
sr. member
Activity: 409
Merit: 250
October 21, 2014, 02:29:13 PM
To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the preceding block took two hours, the diff dropped to 118. But the max drop is 1/3, so it should have dropped to 138, not 118. The diff keeps lowering despite block times of tens of seconds, until 137159, when the calculation says "whoa fellas, this is too fast" and raises the diff from 29.8 to 162.1, which is more than the max 3x that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

+1 for the detailed info, mate.  I'm with you 100%.

I really do believe either looking at less blocks for the average, or creating a weighted average is the way to go.  Additionally, there needs to be a limit in the amount of difficulty increase/decrease so we're not throwing the difficulty all over the place.

Again- less extreme changes that happen more often.

-Fuse

I think we're on a good path here!

I've actually started implementing a new algorithm, from scratch. It performs faster re-adjustment, limited to a 1.2x or 0.8x difficulty change between individual blocks. But it also limits the difficulty change to 3.0x or 0.33x compared to the average difficulty of the last 120 blocks. This means the diff can rise ~3.0x within 6 blocks, and it can fall to ~0.33x within 5 blocks. In the linked formulas, the impact of the new blocks themselves is not included in the 120-block average.
The idea behind this is that it will be able to handle large joins and leaves, but won't be tricked into settling on a high difficulty too fast.

Thoughts?

I'll share the code when I'm happy with initial implementation.
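A hedged sketch of the two limits described above (the actual code had not been published at this point, so the function name and structure here are purely assumptions): per-block changes are clamped to 0.8x-1.2x of the previous block, and the result is additionally kept within 0.33x-3.0x of the 120-block average. The per-block factor also explains the quoted figures, since 1.2^6 is roughly 2.99 and 0.8^5 is roughly 0.33.

Code:
// Two-stage difficulty limiter as described (names and structure assumed, not GJ's actual code).
#include <algorithm>

double LimitNextDifficulty(double proposedDiff,   // diff suggested by the retarget formula
                           double prevDiff,       // previous block's difficulty
                           double avg120)         // average difficulty of the last 120 blocks
{
    // per-block clamp: at most +20% / -20% versus the previous block
    double d = std::clamp(proposedDiff, prevDiff * 0.8, prevDiff * 1.2);
    // long-range clamp: stay within 0.33x .. 3.0x of the 120-block average
    return std::clamp(d, avg120 * 0.33, avg120 * 3.0);
}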
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 02:16:27 PM
I knew that the blocks were based on calculated averages of the past. I'm a bonehead sometimes, haha. Instead of cancelling the block as I described, why can't the network start to drop the diff if the last block was found, for example, an hour ago? Have a decaying diff within the time between blocks, instead of a fixed diff per block, and put a limit on the maximum time between blocks that the decay math can base itself on.

You still need a block to be found to trigger the change. When a block is found, the network broadcasts the next block's difficulty. The difficulty algorithm should take into account the time between blocks and adjust the difficulty accordingly for the next block. But without a block being solved to send out a new record of the next block's difficulty, you can't find out what the difficulty is going to be.

I suspect any change that would do something like a "difficulty ping" would need to write these changes to the blockchain somehow (like POS blocks). Not only would that bloat the blockchain, it would also open additional avenues for time-based attacks: essentially, all an attacker would need to do is set the clocks forward on numerous nodes to simulate a larger time gap.

-Fuse

Edit:

Besides all these solutions, I still think the problem is the way DGW3 is implemented. It just does not react the way it should, and that has nothing to do with what it was designed for, because it's a counting algo based on the last 24 blocks. One thing that also keeps me busy is the fact that KGW did not work properly either, so maybe the solution lies in some totally different place in the code. You sometimes see this kind of strange behaviour when variables are written outside their memory space and overwritten by some other procedure.
DGW3 was not modified, except for some code comments and alignment. It looks like DGW3 is calculating correctly; the problem is that it was not created for jump pools at this scale. I have been thinking about increasing the number of blocks in the calculation (currently 24), but I'm afraid that it will only result in multipools getting more blocks before the diff spikes. What do you think?
No, what I meant is the fact that two separate dampening solutions that should work pretty well both jump around like crazy. That made me wonder: maybe it's not the KGW or DGW code but something that precedes or supersedes it. Maybe you could ask Rijk what exactly was done about the KGW problem back in April, because that's when KGW went haywire. It's just a wild guess, but it's strange that we have more problems with these jump pools than other coins do.
I agree, it could be something else, something not related to KGW or DGW. But when doing the math (and I'm sure you've done it too), it makes sense that it's simply DGW not being able to handle the huge amount of hash/sec.

This is exactly why you need to reduce the number of blocks taken into consideration. IMO, a weighted average is the way to go. You want to reduce the number of blocks the multis can take. Increasing the count evens the difficulty graph out over time, but it doesn't solve the issue of network jumping. You need to limit the number of blocks they can mine by quickly ramping up difficulty, without overshooting the magic number.

Again, less change more often.  I believe a weighted average would do this.

The other option is digishield, but that is another very serious fork.

-Fuse
sr. member
Activity: 409
Merit: 250
October 21, 2014, 02:07:38 PM
Besides all these solutions, I still think the problem is the way DGW3 is implemented. It just does not react the way it should, and that has nothing to do with what it was designed for, because it's a counting algo based on the last 24 blocks. One thing that also keeps me busy is the fact that KGW did not work properly either, so maybe the solution lies in some totally different place in the code. You sometimes see this kind of strange behaviour when variables are written outside their memory space and overwritten by some other procedure.
DGW3 was not modified, except for some code comments and alignment. It looks like DGW3 is calculating correctly; the problem is that it was not created for jump pools at this scale. I have been thinking about increasing the number of blocks in the calculation (currently 24), but I'm afraid that it will only result in multipools getting more blocks before the diff spikes. What do you think?
No, what I meant is the fact that two separate dampening solutions that should work pretty well both jump around like crazy. That made me wonder: maybe it's not the KGW or DGW code but something that precedes or supersedes it. Maybe you could ask Rijk what exactly was done about the KGW problem back in April, because that's when KGW went haywire. It's just a wild guess, but it's strange that we have more problems with these jump pools than other coins do.
I agree, it could be something else, something not related to KGW or DGW. But when doing the math (and I'm sure you've done it too), it makes sense that it's simply DGW not being able to handle the huge amount of hash/sec.
hero member
Activity: 938
Merit: 1000
@halofirebtc
October 21, 2014, 01:55:48 PM
Thinking a little outside the box here:
How about if the network senses that an hour, or however long, has passed, and then automatically drops the diff of the current block by voiding/cancelling the current big-diff block and releasing a new small-diff block? Is there a reason the network can't cancel or re-submit the diff of the current block being mined? Just ideas to keep people thinking, food for thought; I don't know if it's even possible. Blockchain and code are capable of so many things, so I figured I'd ask.

I might not be 100% correct on this, but I don't think this would be possible. The code works around submitted blocks, so you need to have blocks created and submitted to the chain to have the numbers calculated against. It might be possible if you had something like POS always generating blocks separately from POW, but that's a whole other can of worms. Frankly, implementing something like that would be a huge undertaking.

-Fuse

Yep, Fuse is right. The problem with the current block is that we don't know what's going on until it's solved.

Imagine it this way: you want to do a job, and I hand you a calculation to solve. The difficulty of the calculation I give you is estimated so that it will take you 150 seconds. So you start working, and when you're done and have found a solution, you shout it to the community. Between the time you asked for work and the moment you shout your result, we don't have any contact. I don't know it's taking you so long, and you don't know what to do other than solving the calculation. The only way to stop you in between is another person shouting the answer to this calculation, so you know it's solved.

Sure, there are ways to notify, but as Fuse stated, that's a huge undertaking.


I knew that the blocks were based on calculated averages of the past. I'm a bonehead sometimes, haha. Instead of cancelling the block as I described, why can't the network start to drop the diff if the last block was found, for example, an hour ago? Have a decaying diff within the time between blocks, instead of a fixed diff per block, and put a limit on the maximum time between blocks that the decay math can base itself on.