
Topic: [ANN][CLAM] CLAMs, Proof-Of-Chain, Proof-Of-Working-Stake, a.k.a. "Clamcoin" - page 178. (Read 1151252 times)

donator
Activity: 2058
Merit: 1007
Poor impulse control.
I understand this as:

1. Take a point in time over some very long time period.
2. For that point, take the time difference between the last and the next blocks created (unless the randomly selected point in time is exactly a block creation time, in which case pick again)
3. If you repeat 1 and 2 some large number of times and take the average time between blocks,  it will be about 2 minutes.

This has to be wrong, so have I just totally misunderstood you?

You've understood correctly, but it isn't wrong. Smiley

For (2), if you happen to pick exactly a block creation time, use the time between that block and the one before it. No need to disregard your selection.

Let's make it even simpler:

1. Pick 10k random numbers between 0 and 10k. Sort them in order.
2. Pick random numbers in the same range. Find where they fit in the sorted list above. Find the size of the gap they fit into.
3. Average those gap sizes.
4. Get 2.00

Astonishing, isn't it? 10k points in a 10k range are an average of 1 unit apart from each other. But randomly pick points in that range and find that the average size of the gaps you land in is actually 2!

Some code: [snipped from this quote; the full script is in the quoted post below]


Okay now that the solution is no longer a spoiler, it is really quite simple. TLDR:

When you pick a random point in time you are more likely to end up in the middle of a long block than a short one. (Think about it: There are blocks that are 1 second or less. How likely is a random point in time to end up in one of those?).

It turns out that the average block time you end up picking with this method is 2 minutes, even though the average length of all CLAM blocks is 1 minute.

I've given it some thought and I think that explanation is correct but it's not the entire answer.

If we assume you're picking one interblock duration from an infinite sequence, you're essentially picking a block weighted by the duration.

The probability density of the interblock durations is exponential, i.e.

P(x) = lambda * exp(-lambda*x)

where x is the interblock duration and lambda is the block rate.

So the expectation is:

E(x) = integral_0_to_inf x*P(x) dx = 1/lambda

The weighted probability density is the above multiplied by x*lambda, so the weighted expectation is:

   integral_0_to_inf x * (x*lambda*P(x)) dx
 = integral_0_to_inf x^2 * lambda^2 * exp(-lambda*x) dx
 = 2/lambda

which is what we see. Yay! Incidentally, if you take the samples generated by your script, you can check that their histogram matches the weighted probability density above.
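
A quick symbolic check of those two integrals (a sketch using sympy, which is an assumption on my part and not part of the original post):

Code:
# symbolic check: plain and length-biased expectations of an exponential
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)
P = lam * sp.exp(-lam * x)  # exponential density of interblock durations

E_plain = sp.integrate(x * P, (x, 0, sp.oo))                 # 1/lambda
E_weighted = sp.integrate(x * (x * lam * P), (x, 0, sp.oo))  # 2/lambda

print(E_plain)
print(E_weighted)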

But what if we instead take samples more often, say one sample in every window of n average block times? I don't have a derivation for it, but I wrote a nice simple simulator in R:

Code:

library(data.table)
library(ggplot2)
library(pbapply)    # for pbsapply

# n is the number of blocks in the sampling window
# lambda is the mean block time in seconds (eg 60 for clams, 600 for bitcoin)
# each call picks one random time in a window of n*lambda seconds and
# returns the duration of the block containing it

test_fun1 <- function(n, lambda){
    ### generate 10x more blocks than the window needs, so that
    ### sum(rexp(n*10, 1/lambda)) will usually be > n*lambda
    testdata <- data.table(duration = rexp(n*10, 1/lambda), block_height = 1:(n*10))
    testdata[, time := cumsum(duration)]
    rndtime <- runif(1, 0, n*lambda)

    ### NA in the rare case the blocks don't cover the time range
    testdata[which(time > rndtime)[1], duration]
}

n <- seq(0.5, 50, 0.5)
lambda <- 60

plotdata <- data.table(minutes = n*lambda/60,
                       mean_duration = pbsapply(n, function(x) mean(replicate(10000, test_fun1(x, lambda = lambda)), na.rm = TRUE)))

ggplot(plotdata, aes(minutes, mean_duration)) + geom_point(alpha=0.25) + geom_smooth(formula = y ~ log(x)) + theme_bw()



For clams, this gives us the following plot:



So if you're sampling at a high rate, the expected sampled duration tends to the mean block time (60 seconds in this case). As you reduce the sample rate, it approaches twice the mean, in the way illustrated in the plot above.

Thanks for the puzzle, doog Smiley
legendary
Activity: 2968
Merit: 1198
Okay now that the solution is no longer a spoiler, it is really quite simple. TLDR:

When you pick a random point in time you are more likely to end up in the middle of a long block than a short one. (Think about it: There are blocks that are 1 second or less. How likely is a random point in time to end up in one of those?).

It turns out that the average block time you end up picking with this method is 2 minutes, even though the average length of all CLAM blocks is 1 minute.
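
Putting rough numbers on that parenthetical (a sketch, assuming exponential block times with a 60-second mean; not from the original post):

Code:
# blocks of 1 second or less are ~1.7% of all blocks, but cover only a
# tiny sliver of the timeline, so a random instant almost never lands in one
import random

gaps = [random.expovariate(1.0 / 60) for _ in range(1000000)]
short = [g for g in gaps if g <= 1]

print(float(len(short)) / len(gaps))  # ~0.0165: fraction of blocks
print(sum(short) / sum(gaps))         # ~0.00014: fraction of time covered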
legendary
Activity: 2940
Merit: 1333
I understand this as:

1. Take a point in time over some very long time period.
2. For that point, take the time difference between the last and the next blocks created (unless the randomly selected point in time is exactly a block creation time, in which case pick again)
3. If you repeat 1 and 2 some large number of times and take the average time between blocks,  it will be about 2 minutes.

This has to be wrong, so have I just totally misunderstood you?

You've understood correctly, but it isn't wrong. Smiley

For (2), if you happen to pick exactly a block creation time, use the time between that block and the one before it. No need to disregard your selection.

Let's make it even simpler:

1. Pick 10k random numbers between 0 and 10k. Sort them in order.
2. Pick random numbers in the same range. Find where they fit in the sorted list above. Find the size of the gap they fit into.
3. Average those gap sizes.
4. Get 2.00

Astonishing, isn't it? 10k points in a 10k range are an average of 1 unit apart from each other. But randomly pick points in that range and find that the average size of the gaps you land in is actually 2!

Some code:

Code:
#!/usr/bin/env python

import random

num_points = width = 10000  # scatter 10k points over a range of 10k units
samples = 1000000
points = []

def randomPoint(): return random.random() * width

def findGap(point):
    # return the endpoints of the gap that 'point' falls into
    last = 0
    for p in points:
        if p > point: return last, p
        last = p
    return last, width

for _ in xrange(num_points):
    points.append(randomPoint())

points.sort()

total = 0
for i in xrange(1, samples + 1):
    point = randomPoint()
    start, end = findGap(point)
    total += end - start
    if i % 1000 == 0:
        print "%d : point %.2f is between %.2f and %.2f, gap = %.2f, average gap = %.2f" % (i, point, start, end, end - start, total / i)

Some results:

Code:
$ ~/Source/Python/randompoints.py
1000 : point 2387.45 is between 2385.66 and 2390.78, gap = 5.12, average gap = 1.99
2000 : point 5960.24 is between 5959.33 and 5963.06, gap = 3.73, average gap = 2.00
3000 : point 7200.27 is between 7199.62 and 7200.60, gap = 0.98, average gap = 1.99
4000 : point 928.07 is between 926.96 and 928.65, gap = 1.68, average gap = 1.99
5000 : point 6716.98 is between 6716.89 and 6717.06, gap = 0.17, average gap = 1.98
6000 : point 6888.10 is between 6887.06 and 6890.42, gap = 3.36, average gap = 1.98
7000 : point 3816.51 is between 3816.14 and 3816.89, gap = 0.75, average gap = 1.99
8000 : point 7514.49 is between 7513.71 and 7514.87, gap = 1.15, average gap = 1.99
9000 : point 5838.96 is between 5837.04 and 5840.15, gap = 3.11, average gap = 2.00
10000 : point 2998.62 is between 2997.48 and 2999.81, gap = 2.33, average gap = 2.00
11000 : point 9116.49 is between 9115.60 and 9117.87, gap = 2.27, average gap = 2.00
12000 : point 5445.59 is between 5445.08 and 5445.95, gap = 0.87, average gap = 2.00
[...]

See how in the results we are tending to find the bigger gaps (5.12, 3.73, ...) and not the smaller ones. It's because the smaller ones are smaller, and so less likely to be randomly picked.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Whether you check against theoretical or real blocks, you find the same.

If you randomly pick a point in time over the last year and see what the time between blocks was at that point, you get a number around 2 minutes on average. Even though the average blocktime is only one minute.

I understand this as:

1. Take a point in time over some very long time period.
2. For that point, take the time difference between the last and the next blocks created (unless the randomly selected point in time is exactly a block creation time, in which case pick again)
3. If you repeat 1 and 2 some large number of times and take the average time between blocks,  it will be about 2 minutes.

This has to be wrong, so have I just totally misunderstood you?




legendary
Activity: 2940
Merit: 1333
OK LADIES LAST ADVICE

SELL YOUR CLAMS NOW, THE PARTY IS OVER

SUB 0.001 BY THE END OF NEXT WEEK, MARK MY WORDS

REALLY, I'M NOT KIDDING  Wink

For the record, it's the end of next week, and there's a 200 BTC buy wall at twice the price quoted by Mr. Vulture:



I'm not sure I've ever seen a less accurate prediction.

IF THIS SHITCOIN IS NOT BELOW 0.001 BY NEXT SUNDAY, I PROMISE TO DISAPPEAR FROM THIS SHITCOIN THREAD FOREVER

I meant one minute after January 24 at 23:59 poloniex time.
legendary
Activity: 1638
Merit: 1001
legendary
Activity: 2940
Merit: 1333
So we've only seen 8k dug since the digger stopped, 3k of which was all at once.

I thought that 3K bump was due to the faulty math of supply that was corrected.

I'm pretty sure I went back through all the old blocks and corrected them for the supply bug, and that the 3k bump is real.

I'll find the 3k bump and post a link to the block(s) in question when I get a chance.

Edit: this block contains three transactions digging up ~1k CLAMs each.
sr. member
Activity: 360
Merit: 250
Token
So we've only seen 8k dug since the digger stopped, 3k of which was all at once.

I thought that 3K bump was due to the faulty math of supply that was corrected.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
Hm, interesting effect. Cheesy I would have expected the smaller timeframes for finding a block to average it out, but of course the values coming from the bigger timeframes carry more weight in the average calculation.

Well, I'm no mathematician, but it's interesting. Smiley
legendary
Activity: 1638
Merit: 1001
Calling VultureFund.

90 minutes left.
legendary
Activity: 2940
Merit: 1333
People were asking in the JD chat for updated 'dig' charts, so I figured I'd post them here too.

Click for bigger versions:

  1) all-time:

   

  2) since just before the big digger started:

   

  3) since just before the big digger stopped:

   

So we've only seen 8k dug since the digger stopped, 3k of which was all at once.
legendary
Activity: 1638
Merit: 1001
Calling VultureFund.

Fewer than 13 hours left.
legendary
Activity: 2940
Merit: 1333
Spoiler alert... I give the game away in this post. If you don't want to read it yet, scroll past...

x

x

x

x

x

What I meant is that your statement might be true WHEN you don't check it against reality. We know one CLAM block is found on average every 1 minute. If you don't check against the real blocks found (say you take a point in the future), then you would expect the next block to be 1 minute away on average, and the last block to have been 1 minute away on average too, because you have no data to rely on. But when you check against the past blocks, or wait until the blocks around that point are solved, the theoretical truth proves to be wrong, because that point in time has to fall somewhere between 1-minute blocks. And the same would hold for the average over all points in time that are checked against the real blocks.

Whether you check against theoretical or real blocks, you find the same.

If you randomly pick a point in time over the last year and see what the time between blocks was at that point, you get a number around 2 minutes on average. Even though the average blocktime is only one minute.

So was your statement theoretical only?

No, it really happens.

Do you want to know why?

It's because when you pick a random point in time you have very little chance of picking one of the quick blocks, and a much bigger chance of picking a slow block. That's what messes up the average time - there's a bias towards you picking the bigger gaps, just because they are bigger.

But what I want to say is: how can the average be higher than the average of these points in between blocks if the average block time is 60 seconds?

It's a funny effect, isn't it?

Maybe this related situation makes it clearer:

Suppose there are two busses per hour at your local stop. One on the hour, and one at 5 minutes past the hour. You don't know the timetable; you just know that there are 2 busses per hour, so an average time between busses of 30 minutes. You guess your expected wait time will be 15 minutes.

Now suppose you arrive at the bus stop at a random point in time. How long do you expect to be waiting for a bus?

5/60 probability you arrive between x:00 and x:05 and have an average wait of 2.5 minutes
55/60 probability you arrive after x:05 and have an average wait of 27.5 minutes
Note that the average of 2.5 and 27.5 is 15 minutes - but the probabilities aren't equal. The long wait has a much higher probability.

The actual expected wait time is (2.5 * 5/60) + (27.5 * 55/60) = 25.4166 minutes. The expected 'time between busses' you see when you randomly arrive at the stop is twice that, at around 51 minutes. Because almost all the time you're not lucky enough to get there between :00 and :05.
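
A quick check of those bus numbers (a sketch, not from the original post):

Code:
# check the bus-stop arithmetic above
frac_short = 5.0 / 60   # chance of arriving during the 5-minute gap
frac_long = 55.0 / 60   # chance of arriving during the 55-minute gap

expected_wait = frac_short * 2.5 + frac_long * 27.5
expected_gap = frac_short * 5.0 + frac_long * 55.0

print(expected_wait)  # 25.4166...: average wait in minutes
print(expected_gap)   # 50.8333...: the average 'time between busses' you see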

--

Alternatively, suppose there are two busses, one with 10 people on it, and one with 90 people on it. You take the 100 people into a room and do a survey. You ask them "how many people were on your bus?" and average the replies.

There was an average of 50 people per bus, but you're going to hear "90" much more than you hear "10", so the average of the numbers you hear is going to be closer to 90 than to the true average.
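
The survey, run as code (illustrative numbers from the example above, not from the original post):

Code:
# one reply per passenger: 10 people say "10", 90 people say "90"
replies = [10] * 10 + [90] * 90

# 82.0: much closer to 90 than to the true average of 50
print(sum(replies) / float(len(replies)))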

I'm not a python programmer, but in the find routine, it looks like you're returning the newer block, and then the older block. But in the main routine, you're looking for the older block then the newer block.

Have you tried running the code with a small subset, just so you can manually verify the output?

I would expect the averages to be 1/2 the block time.  

Maybe I don't know what I'm talking about....  (Won't be the first time)

It's not clear from my code, but the datafile has the newest blocks first, so the find routine is returning the older block first, then the newer one.

I did test it on a small sample first, and it is returning the older block first, as intended.

I would have expected the averages to be 30 seconds too, but they aren't. See my "bus" examples above for an intuitive explanation of why... basically when you pick a random time you have a bigger chance of picking a time when we were waiting a long time for a block than picking one of the quick ones...

x

x

x

x

x
legendary
Activity: 2968
Merit: 1198
But what I want to say is: how can the average be higher than the average of these points in between blocks if the average block time is 60 seconds?

Break down the assumptions you are making very carefully.
legendary
Activity: 1007
Merit: 1000

Code:
#!/usr/bin/env python

import random, string, time

def rand():
    return start_date + random.random() * (end_date - start_date)

def fmt(seconds):
    return '[%s]' % time.ctime(seconds)

def find(seconds):
    last = None
    for sec in times:
        if sec < seconds:
            return sec, last
        last = sec

datfile = "clamblocks.dat"

count = 0
lines = 100000
samples = 100000
times = []

fp = open(datfile, "r")

while True:
    line = fp.readline()
    if not line:
        break
    line = string.split(line[:-1])
    times.append(string.atoi(line[5]))
    count += 1
    if (count == lines):
        break

start_date = times[-1]
end_date = times[0]

print "picking random dates between %s and %s" % (fmt(start_date), fmt(end_date))

before_sum = 0
after_sum = 0

count = 0
while True:
    t = rand()
    before, after = find(t)
    before_sum += t - before
    after_sum += after - t
    count += 1
    if count % 1000 == 0:
        print ("(%6d) %s is %6.2f seconds after %s (%6.2f) and %6.2f seconds after %s (%6.2f)" %
               (count,
                fmt(t),
                t - before, fmt(before), before_sum / count,
                after - t,  fmt(after), after_sum / count))

I'm not a python programmer, but in the find routine, it looks like you're returning the newer block, and then the older block. But in the main routine, you're looking for the older block then the newer block.

Have you tried running the code with a small subset, just so you can manually verify the output?

I would expect the averages to be 1/2 the block time.  

Maybe I don't know what I'm talking about....  (Won't be the first time)
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
Are you sure that you used the same point in time to calculate the first 2 points? If C is correct, then on average you should always find a point in or on a 1-minute timeframe.

Are you saying that the next block always happens within one minute of the current time? That isn't true. The average expected(*) time to the next block is always one minute. It is often more than one minute to the next block.

Note that when staking you don't make "progress" to finding a block. You either find it or you don't. The same with Bitcoin mining. When you are waiting for a block, if you wait 5 minutes without a block being found, it doesn't mean a block is "due", or that a block "should" be found in the next 5 minutes. Even after waiting 5 minutes, the expected time to the next block is still 10 minutes. And if there is no block in the next 10 minutes, the expected time *after that* is still 10 minutes.

(*) I've been using the word "average" when I meant "expected".

Let's pick a random time, say Sat Jan 16 02:33:17 2016.
The previous block was found at 02:32:32 (45 seconds earlier).
The next block was found at 02:33:36 (19 seconds later)

The time to the next block was 19 seconds. The "average" time to the next block was 19 seconds, I guess, if you can average a constant. The average of 19 seconds is 19 seconds. But at that point in time the *expected* time to the next block was 60 seconds, even though at that time it had been 45 seconds since the previous block.

I wrote it the wrong way. What I meant is that your statement might be true WHEN you don't check it against reality. We know one CLAM block is found on average every 1 minute. If you don't check against the real blocks found (say you take a point in the future), then you would expect the next block to be 1 minute away on average, and the last block to have been 1 minute away on average too, because you have no data to rely on. But when you check against the past blocks, or wait until the blocks around that point are solved, the theoretical truth proves to be wrong, because that point in time has to fall somewhere between 1-minute blocks. And the same would hold for the average over all points in time that are checked against the real blocks.

So was your statement theoretical only? It sounds that way when I read your answer. Even though I was already doubting what I wrote, your answer seems to say you would see it the same way... in theory, without being checked against reality.

If I were to choose a random point in the future, then the next block after that point should be half a minute later, I think, too.

OK, let's test it.

I wrote a script which picks random points in time over the last month or so, then looks up the time to the previous and next block.

Code:
#!/usr/bin/env python

import random, string, time

def rand():
    return start_date + random.random() * (end_date - start_date)

def fmt(seconds):
    return '[%s]' % time.ctime(seconds)

def find(seconds):
    last = None
    for sec in times:
        if sec < seconds:
            return sec, last
        last = sec

datfile = "clamblocks.dat"

count = 0
lines = 100000
samples = 100000
times = []

fp = open(datfile, "r")

while True:
    line = fp.readline()
    if not line:
        break
    line = string.split(line[:-1])
    times.append(string.atoi(line[5]))
    count += 1
    if (count == lines):
        break

start_date = times[-1]
end_date = times[0]

print "picking random dates between %s and %s" % (fmt(start_date), fmt(end_date))

before_sum = 0
after_sum = 0

count = 0
while True:
    t = rand()
    before, after = find(t)
    before_sum += t - before
    after_sum += after - t
    count += 1
    if count % 1000 == 0:
        print ("(%6d) %s is %6.2f seconds after %s (%6.2f) and %6.2f seconds after %s (%6.2f)" %
               (count,
                fmt(t),
                t - before, fmt(before), before_sum / count,
                after - t,  fmt(after), after_sum / count))

I happened to have the data in a file already, so it's a bit quicker than querying the clam daemon. But never mind that. Here's the output of the script. It shows the average times in (parentheses):

Quote
(200000) [Fri Nov 13 01:41:39 2015] is  67.13 seconds after [Fri Nov 13 01:40:32 2015] ( 52.78) and  44.87 seconds after [Fri Nov 13 01:42:24 2015] ( 52.96)
(201000) [Tue Dec  8 16:40:24 2015] is  40.73 seconds after [Tue Dec  8 16:39:44 2015] ( 52.76) and   7.27 seconds after [Tue Dec  8 16:40:32 2015] ( 52.97)
(202000) [Sun Jan 17 05:09:04 2016] is  16.23 seconds after [Sun Jan 17 05:08:48 2016] ( 52.77) and  31.77 seconds after [Sun Jan 17 05:09:36 2016] ( 52.97)

After picking 200k random points in time the average of all the actual times from the previous block is 52.78 seconds, and the average of the actual times to the next block is 52.97 seconds.

I'm surprised it's coming out around 53 seconds and not 60, but can imagine two explanations:

1) the CLAM network is always a little bit too fast; the average block time is 59.xx seconds, not 60 seconds; not a significant error
2) the average time to the next block is 60 seconds because it's possible (though unlikely) to have *very* long gaps between blocks; we're not seeing those very long gaps in the sample that I'm averaging over, but we are seeing lots of short gaps

Either way, it's closer to 60s than to 30s, and their sum is way over 60s.

Edit: I ran it again, using a year's worth of blocks, and let it run for longer. The results barely changed:

Quote
picking random dates between [Sun Jan 18 03:37:36 2015] and [Thu Jan 21 07:04:00 2016]
(276000) [Mon Feb 23 00:40:30 2015] is  62.58 seconds after [Mon Feb 23 00:39:28 2015] ( 52.74) and 289.42 seconds after [Mon Feb 23 00:45:20 2015] ( 52.76)

Well, I'm surprised and don't see why this is the case.

First, statements A and B don't make sense.  The average amount of time from the chosen point in time is a singular number.  You probably mean the average of a distribution of randomly chosen points.  

You are correct. The average time to the next block from a particular random point in time is whatever the actual time to the next block was. I was being sloppy. I meant the expected time to the next block if the future wasn't already known.

You wake up, turn on your computer, look at blockchain.info. How long since the last block was found? Make a note. Wait for the next block; how long does it take from when we woke up? Make a note. Repeat this every day for a year, average times to the previous blocks, and average the times to the next blocks. Do you get something close to 5 minutes for both averages or something close to 10 minutes?

I think SebastianJu would tell us that on average we are half-way between blocks, so the average time would be 5 minutes to the previous and 5 minutes to the next. I'm claiming that the average is actually 10 minutes in both directions, and that the sum of the two averages would be 20 minutes.

But I am also claiming that the average time between BTC blocks is 10 minutes.

I still don't get it. The average time between blocks is 1 minute, so all you can do is choose points within this minute, which means 60 seconds. If you do it the easy way and calculate 1+2+...+59+60 you get 1830, if I used the little Gauss formula correctly. Tongue Divided by 60 numbers that would be 30.5. Oh well, it seems I'm not a big mathematician. Cheesy

But what I want to say is: how can the average be higher than the average of these points in between blocks if the average block time is 60 seconds?
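
For comparison, a minimal sketch (not from the thread) of why 30.5 is the wrong model when the gaps themselves are random:

Code:
# fixed 60-second gaps: a random arrival waits 30 seconds on average.
# random (exponential) gaps averaging 60 seconds: the arrival is biased
# towards the long gaps, so the gap it lands in averages ~120 seconds
# and the wait averages ~60 seconds, not ~30.
import random

gaps = [random.expovariate(1.0 / 60) for _ in range(1000000)]

plain_mean = sum(gaps) / len(gaps)                  # ~60: average gap
biased_mean = sum(g * g for g in gaps) / sum(gaps)  # ~120: gap you land in

print("%.1f %.1f %.1f" % (plain_mean, biased_mean, biased_mean / 2))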
full member
Activity: 224
Merit: 100
★YoBit.Net★ 350+ Coins Exchange & Dice
http://blog.cryptsy.com/

Quote
– Update 2016/01/19 10:08pm –
The following wallets have been opened for withdrawals:
* Clams

Time to get them out, in case someone still has some CLAM sitting there
They realised that's the only coin worth having, so they keep the rest locked.

Are they still allowing bitcoin deposits? If so, is it possible to deposit bitcoin, buy CLAMS and withdraw them? I don't think so. What new features have CLAMs added recently? This is a huge thread; I can't read it all at this time.

Cryptsy is dead. No more trading; they stopped everything and are only partially opening certain altcoin wallets for withdrawals.
legendary
Activity: 2968
Merit: 1198
Is there a staking pool planned? I just found out about Just-Dice's shared staking... but you have to invest your coins, so it comes with a certain risk Smiley I'd love to see dooglus set up a staking pool

To be honest, investing there is not really much of a risk. There is so much leverage on the site that any big losses/gains would mostly hit others rather than you, as long as you don't use leverage yourself.

The main advantages to staking yourself or via my pool are:

1. No 10% fee

2. Decentralize the network.

3. At some point in the future JD leverage might decrease, or large betting on the site might increase, in which case you would be taking more risk. You could in that case stop investing, assuming you were paying attention.

The main disadvantages:

1. You have to trust me instead of dooglus.

2. No interactive web site showing your balance, supporting instant deposits and withdrawals, etc.
legendary
Activity: 2940
Merit: 1333
Are you sure that you used the same point in time to calculate the first 2 points? If C is correct, then on average you should always find a point in or on a 1-minute timeframe.

Are you saying that the next block always happens within one minute of the current time? That isn't true. The average expected(*) time to the next block is always one minute. It is often more than one minute to the next block.

Note that when staking you don't make "progress" to finding a block. You either find it or you don't. The same with Bitcoin mining. When you are waiting for a block, if you wait 5 minutes without a block being found, it doesn't mean a block is "due", or that a block "should" be found in the next 5 minutes. Even after waiting 5 minutes, the expected time to the next block is still 10 minutes. And if there is no block in the next 10 minutes, the expected time *after that* is still 10 minutes.

(*) I've been using the word "average" when I meant "expected".

Let's pick a random time, say Sat Jan 16 02:33:17 2016.
The previous block was found at 02:32:32 (45 seconds earlier).
The next block was found at 02:33:36 (19 seconds later)

The time to the next block was 19 seconds. The "average" time to the next block was 19 seconds, I guess, if you can average a constant. The average of 19 seconds is 19 seconds. But at that point in time the *expected* time to the next block was 60 seconds, even though at that time it had been 45 seconds since the previous block.
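
That memorylessness is easy to check numerically. A minimal simulation (a sketch, assuming exponential block times with a 10-minute mean, as in the Bitcoin example above):

Code:
# exponential waits with a 10-minute mean: having already waited
# 5 minutes doesn't shorten the expected remaining wait
import random

waits = [random.expovariate(1.0 / 10) for _ in range(1000000)]
print(sum(waits) / len(waits))  # ~10.0 minutes: unconditional mean

# among the waits that lasted past the 5-minute mark, the remaining
# time past that mark still averages ~10 minutes
residuals = [w - 5 for w in waits if w > 5]
print(sum(residuals) / len(residuals))  # ~10.0 minutes again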

If I were to choose a random point in the future, then the next block after that point should be half a minute later, I think, too.

OK, let's test it.

I wrote a script which picks random points in time over the last month or so, then looks up the time to the previous and next block.

Code:
#!/usr/bin/env python

import random, string, time

def rand():
    return start_date + random.random() * (end_date - start_date)

def fmt(seconds):
    return '[%s]' % time.ctime(seconds)

def find(seconds):
    last = None
    for sec in times:
        if sec < seconds:
            return sec, last
        last = sec

datfile = "clamblocks.dat"

count = 0
lines = 100000
samples = 100000
times = []

fp = open(datfile, "r")

while True:
    line = fp.readline()
    if not line:
        break
    line = string.split(line[:-1])
    times.append(string.atoi(line[5]))
    count += 1
    if (count == lines):
        break

start_date = times[-1]
end_date = times[0]

print "picking random dates between %s and %s" % (fmt(start_date), fmt(end_date))

before_sum = 0
after_sum = 0

count = 0
while True:
    t = rand()
    before, after = find(t)
    before_sum += t - before
    after_sum += after - t
    count += 1
    if count % 1000 == 0:
        print ("(%6d) %s is %6.2f seconds after %s (%6.2f) and %6.2f seconds after %s (%6.2f)" %
               (count,
                fmt(t),
                t - before, fmt(before), before_sum / count,
                after - t,  fmt(after), after_sum / count))

I happened to have the data in a file already, so it's a bit quicker than querying the clam daemon. But never mind that. Here's the output of the script. It shows the average times in (parentheses):

First, statements A and B don't make sense.  The average amount of time from the chosen point in time is a singular number.  You probably mean the average of a distribution of randomly chosen points.  

You are correct. The average time to the next block from a particular random point in time is whatever the actual time to the next block was. I was being sloppy. I meant the expected time to the next block if the future wasn't already known.

You wake up, turn on your computer, look at blockchain.info. How long since the last block was found? Make a note. Wait for the next block; how long does it take from when we woke up? Make a note. Repeat this every day for a year, average times to the previous blocks, and average the times to the next blocks. Do you get something close to 5 minutes for both averages or something close to 10 minutes?

I think SebastianJu would tell us that on average we are half-way between blocks, so the average time would be 5 minutes to the previous and 5 minutes to the next. I'm claiming that the average is actually 10 minutes in both directions, and that the sum of the two averages would be 20 minutes.

But I am also claiming that the average time between BTC blocks is 10 minutes.

The error is in your final assertion, "Wouldn't you expect ...".  No I wouldn't expect that.

Right. A+B = C is false. In fact A + B = 2C.

The expected time to previous block + the expected time to next block = twice the expected block time.
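
A self-contained way to check this (a sketch that simulates an ideal chain of exponential 60-second blocks instead of reading clamblocks.dat):

Code:
# simulate a chain of exponential 60-second blocks, probe random times,
# and average the backward (A) and forward (B) waits
import random, bisect

mean_block_time = 60.0
times, t = [], 0.0
for _ in range(100000):
    t += random.expovariate(1.0 / mean_block_time)
    times.append(t)

samples = 100000
before_sum = after_sum = 0.0
for _ in range(samples):
    p = random.uniform(times[0], times[-1])
    i = min(bisect.bisect(times, p), len(times) - 1)  # first block after p
    before_sum += p - times[i - 1]  # A: time since the previous block
    after_sum += times[i] - p       # B: time until the next block

print(before_sum / samples)  # ~60 seconds
print(after_sum / samples)   # ~60 seconds: A + B is ~120, i.e. 2C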
legendary
Activity: 2940
Merit: 1333
Is there a staking pool planned? I just found out about Just-Dice's shared staking... but you have to invest your coins, so it comes with a certain risk Smiley I'd love to see dooglus set up a staking pool

Read about "offsite investing" in the Just-Dice FAQ. By default JD investments are really quite safe because they are very diluted by people taking advantage of this "offsite" thing. There has never been a week over which any CLAM investor who was invested without using the "offsite" feature made a loss at Just-Dice.

But if you want a pure staking pool, check smooth's offering.