
Topic: Continuum Mining Pool: No fees; Client uptime monitoring via twitter and email - page 10. (Read 50243 times)

hero member
Activity: 938
Merit: 1002
I can now see a hashrate being reported for 1LLdCXQohpJcZpKrwGc9ebgfKrDanYuwKC, that's good enough for me, I assume the balance probably won't show till > 1btc?
Balance is updated in realtime. And now it is for PPS as well.
Your new address's balance should climb each time you submit a share.
Note that the hashrate is over a 5 minute window so if you haven't submitted a share within that window, it goes to 0.
I see my balance with a faster miner, but I have a slower one connected as well and can't see its balance, although I've been submitting shares for some time and can see its hash rate. ... OK, now I switched to PPS and saw my balance after submitting a share. I guess share != score, and a share doesn't necessarily lead to a positive balance?

Anyway, what I was meaning to ask is: what happens if I want to leave the pool for some reason? Do you pay out the remaining balance after a week of absence, as Eligius does?
donator
Activity: 2058
Merit: 1054
Can you add functionality to create and delete monitors to the website?
full member
Activity: 140
Merit: 100
Just after posting my last reply, I am now getting the following error message:   File "/usr/local/poclbm/BitcoinMiner.py", line 259, in longPollThread
    (connection, result) = self.request(connection, url, self.headers)
Brought down the server to add HTTP keepalive support. Also, found the hashrate bug.
newbie
Activity: 14
Merit: 0
Just after posting my last reply, I am now getting the following error message:   File "/usr/local/poclbm/BitcoinMiner.py", line 259, in longPollThread
    (connection, result) = self.request(connection, url, self.headers)
  File "/usr/local/poclbm/BitcoinMiner.py", line 222, in request
    response = connection.getresponse()
  File "/usr/lib/python2.6/httplib.py", line 990, in getresponse
    response.begin()
  File "/usr/lib/python2.6/httplib.py", line 391, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python2.6/httplib.py", line 355, in _read_status
    raise BadStatusLine(line)
BadStatusLine
02/06/2011 18:16:37, long poll exception:                   
Traceback (most recent call last):
  File "/usr/local/poclbm/BitcoinMiner.py", line 259, in longPollThread
    (connection, result) = self.request(connection, url, self.headers)
  File "/usr/local/poclbm/BitcoinMiner.py", line 221, in request
    else: connection.request('GET', url, headers=headers)
  File "/usr/lib/python2.6/httplib.py", line 914, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.6/httplib.py", line 951, in _send_request
    self.endheaders()
  File "/usr/lib/python2.6/httplib.py", line 908, in endheaders
    self._send_output()
  File "/usr/lib/python2.6/httplib.py", line 780, in _send_output
    self.send(msg)
  File "/usr/lib/python2.6/httplib.py", line 739, in send
    self.connect()
  File "/usr/lib/python2.6/httplib.py", line 720, in connect
    self.timeout)
  File "/usr/lib/python2.6/socket.py", line 561, in create_connection
    raise error, msg
error: [Errno 101] Network is unreachable
newbie
Activity: 14
Merit: 0
Ok, balancecurrent is really expensive in terms of performance and RPC is timing out with this long a round. I am trying to figure a faster way to do this.

As for hashrate, did the miner submit an unusually low number of shares during the interval you checked?

No, I have been watching it today, and while it was normal for most of the day, it's again listing my client hashrate as 372230498; just 20 minutes ago it was 1417339207.
donator
Activity: 2058
Merit: 1054
Ok, balancecurrent is really expensive in terms of performance and RPC is timing out with this long a round. I am trying to figure a faster way to do this.
Assuming the problem is summing over all the shares of the worker, note that shares older than the last 10,000 or so will have negligible score, so you can just find out the ID of the last share and use "where ID > X-10000" or similar.
Great, that's a big help. Yeah, it's the big table scan that's bogging it down.

Edit: balance updated to only scan last 10,000 shares. Note that the final round payment calculation still scans all shares.
There's still room for improvement in the final calculation. At the current r, shares older than 17,000 will have score less than double-precision granularity, so they won't have any effect anyway.
Hmm, you mean no material effect?
Round 4 payment calc:
select max(score) from share = 646.164449020304
shares = select count(*) from share where score is null = 281610
maxid = select max(id) from share = 828713
totscore = select sum(exp(score-max))+exp(score-os) from share = 436.318039783311

payment calc
select exp(score-totscore) from share
order by id
values:
5.43924628175081e-284
5.45174116086077e-284
5.46426474284596e-284
5.47681709364117e-284
...
So they're very small, small enough that we could ignore them, but they don't underflow the double-precision type.
I was talking about granularity (precision), not the minimum representable number. If you add 1 + 1e-17 you'll get back 1, because a double has only 52 bits of precision, or roughly 16 decimal digits. So scores smaller than this will have no effect on the final numerical score.
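
As a quick illustration of the granularity point (a minimal Python sketch, not anything from the pool's code): adding a term smaller than about 1e-16 relative to a running total near 1 leaves a double-precision sum unchanged.

total = 1.0
print(total + 1e-17 == total)   # True: 1e-17 is below the granularity of a sum near 1.0
print(total + 1e-15 == total)   # False: 1e-15 is still representable relative to 1.0

import sys
print(sys.float_info.epsilon)   # 2.220446049250313e-16, i.e. 2**-52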
full member
Activity: 140
Merit: 100
Ok, balancecurrent is really expensive in terms of performance and RPC is timing out with this long a round. I am trying to figure a faster way to do this.
Assuming the problem is summing over all the shares of the worker, note that shares older than the last 10,000 or so will have negligible score, so you can just find out the ID of the last share and use "where ID > X-10000" or similar.
Great, that's a big help. Yeah, it's the big table scan that's bogging it down.

Edit: balance updated to only scan last 10,000 shares. Note that the final round payment calculation still scans all shares.
There's still room for improvement in the final calculation. At the current r, shares older than 17,000 will have score less than double-precision granularity, so they won't have any effect anyway.
Hmm, you mean no material effect?
Round 4 payment calc:
select max(score) from share = 646.164449020304
shares = select count(*) from share where score is null = 281610
maxid = select max(id) from share = 828713
totscore = select sum(exp(score-max))+exp(score-os) from share = 436.318039783311

payment calc
select exp(score-totscore) from share
order by id
values:
5.43924628175081e-284
5.45174116086077e-284
5.46426474284596e-284
5.47681709364117e-284
...
So they're very small, small enough that we could ignore them, but they don't underflow the double-precision type.
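
For what it's worth, the max-subtraction inside the totscore query is the usual way to keep exp() of log-scale scores from overflowing. A minimal Python sketch of that idea follows; the function name and example scores are made up for illustration, and this is not the pool's actual payment code.

import math

def round_payout_fractions(scores):
    """Given per-share scores on a log scale, return each share's fraction
    of the total round score without overflowing exp()."""
    m = max(scores)
    # exp(score - m) is at most 1, so the sum cannot overflow; terms more than
    # ~36 log-units below the max fall under the sum's granularity and stop
    # contributing, which is the 52-bit point discussed above.
    total = sum(math.exp(s - m) for s in scores)
    return [math.exp(s - m) / total for s in scores]

print(round_payout_fractions([640.0, 641.0, 646.16]))  # fractions sum to 1.0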
donator
Activity: 2058
Merit: 1054
Ok, balancecurrent is really expensive in terms of performance and RPC is timing out with this long a round. I am trying to figure a faster way to do this.
Assuming the problem is summing over all the shares of the worker, note that shares older than the last 10,000 or so will have negligible score, so you can just find out the ID of the last share and use "where ID > X-10000" or similar.
Great, that's a big help. Yeah, it's the big table scan that's bogging it down.

Edit: balance updated to only scan last 10,000 shares. Note that the final round payment calculation still scans all shares.
There's still room for improvement in the final calculation. At the current r, shares older than 17,000 will have score less than double-precision granularity, so they won't have any effect anyway.
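
For a rough sense of where a figure like 17,000 can come from (a sketch under an assumed r; the pool's actual r is not stated on this page): in the geometric scoring method each new share's score is r times the previous one, so a share k positions old has relative weight r**-k, and it stops affecting a double-precision total once that drops below 2**-52.

import math

r = 1.0021                               # assumed decay parameter, for illustration only
cutoff = 52 * math.log(2) / math.log(r)  # age at which r**-age < 2**-52
print(int(cutoff))                       # ~17,000 shares with this r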
full member
Activity: 140
Merit: 100
Ok, balancecurrent is really expensive in terms of performance and RPC is timing out with this long a round. I am trying to figure a faster way to do this.
Assuming the problem is summing over all the shares of the worker, note that shares older than the last 10,000 or so will have negligible score, so you can just find out the ID of the last share and use "where ID > X-10000" or similar.
Great, that's a big help. Yeah, it's the big table scan that's bogging it down.

Edit: balance updated to only scan last 10,000 shares. Note that the final round payment calculation still scans all shares.
donator
Activity: 2058
Merit: 1054
Ok, balancecurrent is really expensive in terms of performance and RPC is timing out with this long a round. I am trying to figure a faster way to do this.
Assuming the problem is summing over all the shares of the worker, note that shares older than the last 10,000 or so will have negligible score, so you can just find out the ID of the last share and use "where ID > X-10000" or similar.
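
A sketch of what the bounded scan could look like. The table and column names (share, id, score, worker) are taken from the pseudo-queries quoted elsewhere on this page, and the function itself is hypothetical rather than the pool's actual balance code.

import math

def current_balance_score(conn, worker_address, window=10000):
    """conn: a sqlite3-style DB-API connection to the share database."""
    cur = conn.cursor()
    (max_id,) = cur.execute("SELECT MAX(id) FROM share").fetchone()
    (max_score,) = cur.execute("SELECT MAX(score) FROM share").fetchone()
    # With an index on id this becomes a small range scan instead of a
    # full-table scan; shares older than the window carry negligible score.
    rows = cur.execute(
        "SELECT score FROM share WHERE id > ? AND worker = ?",
        (max_id - window, worker_address),
    ).fetchall()
    return sum(math.exp(score - max_score) for (score,) in rows)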
full member
Activity: 140
Merit: 100
Ok, balancecurrent is really expensive in terms of performance and RPC is timing out with this long a round. I am trying to figure a faster way to do this.

As for hashrate, did the miner submit an unusually low number of shares during the interval you checked?
newbie
Activity: 14
Merit: 0
My client hash rate is again malfunctioning and only displays 1/4 of what I'm currently outputting. I also cannot check my unconfirmed balance.
full member
Activity: 140
Merit: 100
Btw, I am curious as to how useful publishing the share and score logs would be, particularly whether that ensures pool integrity given that the worker solution is included. I would not be willing to publish the IP addresses of workers, but I could make previous rounds' sharelog data available minus the IP addresses. So, would that be useful to anyone? Does anyone object, i.e. are there privacy concerns here of which I am not aware?
This is a very important issue. I think it will be very useful if you publish such logs. Ideally, for each completed round you would have a table of shares (maybe downloadable in csv format) with the following info: Share #ID, Timestamp, score (as a proportion of the total round score), worker Bitcoin address, hash.
In case people are uncomfortable with their addresses' payout info being displayed, you could show stats only for the shares submitted by the user. For this you will probably need some sort of login system.
I don't see how publishing bitcoin addresses could be a privacy issue, but I could just hash them or take a crc32, etc. My thought was to just dump out the share table per round (time, worker, lscore, solution). If someone analyzing the data wants scores as a proportion of the total, they can just calculate it using the formula in your thread.
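
A sketch of what a per-round dump with hashed addresses might look like. The field names follow the (time, worker, lscore, solution) tuple mentioned above; everything else, including the helper's name, is hypothetical rather than the pool's export code.

import csv
import hashlib

def dump_round(shares, path):
    """shares: iterable of (time, worker_address, lscore, solution) tuples."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "worker_hash", "lscore", "solution"])
        for t, worker, lscore, solution in shares:
            # Publish a short hash of the address instead of the address itself,
            # so miners can still recognize their own rows.
            worker_hash = hashlib.sha256(worker.encode()).hexdigest()[:16]
            writer.writerow([t, worker_hash, lscore, solution])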
donator
Activity: 2058
Merit: 1054
Btw, I am curious as to how useful publishing the share and score logs would be, particularly whether that ensures pool integrity given that the worker solution is included. I would not be willing to publish the IP addresses of workers, but I could make previous rounds' sharelog data available minus the IP addresses. So, would that be useful to anyone? Does anyone object, i.e. are there privacy concerns here of which I am not aware?
This is a very important issue. I think it will be very useful if you publish such logs. Ideally, for each completed round you would have a table of shares (maybe downloadable in csv format) with the following info: Share #ID, Timestamp, score (as a proportion of the total round score), worker Bitcoin address, hash.
In case people are uncomfortable with their addresses' payout info being displayed, you could show stats only for the shares submitted by the user. For this you will probably need some sort of login system.
newbie
Activity: 19
Merit: 0
All good with balance updates now too, thanks very much for your help Martok, best of luck with your pool.
full member
Activity: 140
Merit: 100
I can now see a hashrate being reported for 1LLdCXQohpJcZpKrwGc9ebgfKrDanYuwKC, that's good enough for me, I assume the balance probably won't show till > 1btc?
Balance is updated in realtime. And now it is for PPS as well.
Your new address's balance should climb each time you submit a share.
Note that the hashrate is over a 5 minute window so if you haven't submitted a share within that window, it goes to 0.
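
For context on the 5-minute figure, here is a rough sketch of how a pool can estimate a worker's hashrate from submitted shares (a common approach, assumed for illustration, not necessarily this pool's exact code): at difficulty-1 share targets, each accepted share represents about 2**32 hashes on average.

import time

WINDOW = 300  # seconds

def estimated_hashrate(share_timestamps, now=None):
    now = time.time() if now is None else now
    recent = [t for t in share_timestamps if now - t <= WINDOW]
    # Each difficulty-1 share represents ~2**32 hashes on average, so the
    # estimate is shares-in-window * 2**32 / window; no shares gives 0.
    return len(recent) * 2**32 / WINDOW

With only a handful of shares in the window the estimate is noisy, which is one plausible reason a reported rate can briefly drop well below, or rise above, the true rate.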
newbie
Activity: 19
Merit: 0
I can now see a hashrate being reported for 1LLdCXQohpJcZpKrwGc9ebgfKrDanYuwKC, that's good enough for me, I assume the balance probably won't show till > 1btc?
newbie
Activity: 19
Merit: 0
Hey there martok,


./poclbm.py -v -w 128 -d 0 --user=1LLdCXQohpJcZpKrwGc9ebgfKrDanYuwKC\;pps --pass=any --host continuumpool.com --port 8332


That is the current invocation I'm using; however, I can't see a hashrate or balance for either the previous address or this one.

Since you seem to be willing to just roll my shares over into the one we finally end up identifying, I'll just leave this thing mining till we figure it out Smiley
full member
Activity: 140
Merit: 100
Tried an escaped ; as the separator and also tried a different address; still the same output from the pool-side monitors...

Any other ideas? I was under the impression that if it wasn't registering the correct address, it wouldn't accept my hashes?
Think I found you.
Are you mining now? Can you confirm balances are being updated? I will manually credit any previously submitted shares from my logs.

Edit: Credited 572 shares.
donator
Activity: 2058
Merit: 1054
Quote
I don't know if that's the problem, but I noticed that when I copy-paste my address I often get a stray space which causes it to be unrecognized.

Note to martok - trim spaces from the input.
Indeed, will do.
I noticed now that my comment may have been a little ambiguous - to clarify, I was referring to getting statistics from continuumpool.com, not to the miner flags.

Tried an escaped ; as the separator and also tried a different address; still the same output from the pool-side monitors...

Any other ideas? I was under the impression that if it wasn't registering the correct address, it wouldn't accept my hashes?
Did you try it without pps? This should help narrow down the problem.
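
On the stray-space and address;pps points above, a hypothetical helper showing the kind of clean-up involved (not the pool's actual parsing code):

def parse_worker(username):
    """Strip stray whitespace and split off an optional ';pps' suffix."""
    username = username.strip()
    address, _, mode = username.partition(";")
    return address.strip(), mode.strip().lower() == "pps"

print(parse_worker(" 1LLdCXQohpJcZpKrwGc9ebgfKrDanYuwKC;pps "))
# ('1LLdCXQohpJcZpKrwGc9ebgfKrDanYuwKC', True)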