
Topic: [ANN][LTC][Pool][PPLNS][STRATUM] - ltc.kattare.com - burnside's Mining Pool - page 20. (Read 118882 times)

full member
Activity: 126
Merit: 100






With my cgminer I see hash rate units of "Mh/s", typically 40 to 50 Mh/s... but in My Stats at Burnside I see units of Kh/s, typically around 100 Kh/s. Which is correct, and why the difference?

Also, I seem to get many "share rejected" messages. Is there anything I can do, configuration-wise, to improve that?

Thanks in advance!

--------------------
cgminer version 1.2.4
--------------------------------------------------------------------------------
 [(5s):34.9  (avg):47.6 Mh/s] [Q:280  A:0  R:159  HW:2  E:0%  U:0.00/m]
--------------------------------------------------------------------------------
 GPU 0: [47.6 Mh/s] [Q:278  A:0  R:159  HW:2  E:0%  U:0.00/m]
--------------------------------------------------------------------------------

[2013-03-23 13:51:51] Share rejected from GPU 0 thread 0
[2013-03-23 13:52:00] LONGPOLL detected new block on network, flushing work queue
[2013-03-23 13:56:55] Share rejected from GPU 0 thread 1
[2013-03-23 13:59:28] Share rejected from GPU 0 thread 1
[2013-03-23 14:00:08] Share rejected from GPU 0 thread 0
[2013-03-23 14:01:17] Share rejected from GPU 0 thread 1
[2013-03-23 14:02:34] LONGPOLL detected new block on network, flushing work queue
[2013-03-23 14:03:03] Share rejected from GPU 0 thread 0


legendary
Activity: 1106
Merit: 1006
Lead Blockchain Developer
What is going on with the stales in this pool? I'm seeing ~15%

this issue is under heavy attention from burnside and should hopefully be fixed soon; in the meantime you can watch for a solution in one of these threads:

https://bitcointalksearch.org/topic/m.1637447



Things have been pretty good for me lately.  I think I've been seeing around 3-4% on average, but right now: Round Shares: 380 (1.3% stale)

We got the new server installed and racked up today, so hopefully in the next couple days I can start moving pool services over to it.  Wink

Cheers.
legendary
Activity: 1106
Merit: 1006
Lead Blockchain Developer
How often does the "balance" update in my account? Been mining for 6 hours I believe, nothing has changed. Is the "estimate" in my stats area what will be going into my account?

My stats I believe: Workers (cached up to 60 seconds)
  Y (Active)    181.99KH/s    92 Accepted     14 Rejected     13.21% Stale Percent

It can be a while.  (Read the about PPLNS page.)

First you have to mine shares, which it looks like you have.  Then a block has to be solved, then that block has to receive 120 confirmations.  Wink
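Roughly, PPLNS splits each matured block reward proportionally over the last N accepted shares. A toy sketch of that idea (the window size and reward below are illustrative numbers, not the pool's actual settings):

```python
# Toy PPLNS sketch: split a matured block reward over the last N shares.
from collections import deque, defaultdict

def pplns_payout(share_log, n, block_reward):
    """share_log: one username per accepted share, oldest first.
    Pays the last n shares an equal slice of the reward."""
    window = deque(share_log, maxlen=n)        # keep only the last n shares
    payouts = defaultdict(float)
    per_share = block_reward / len(window)     # equal-difficulty shares assumed
    for user in window:
        payouts[user] += per_share
    return dict(payouts)

# Illustrative: 4-share window, 50 LTC block reward
print(pplns_payout(["alice", "bob", "alice", "alice", "bob"], 4, 50.0))
```

Shares submitted before the window only count toward earlier blocks, which is why a balance can stay at zero until a block is both found and confirmed.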

Cheers.
full member
Activity: 157
Merit: 100
Hello!
How often does the "balance" update in my account? Been mining for 6 hours I believe, nothing has changed. Is the "estimate" in my stats area what will be going into my account?

My stats I believe: Workers (cached up to 60 seconds)
  Y (Active)    181.99KH/s    92 Accepted     14 Rejected     13.21% Stale Percent
full member
Activity: 154
Merit: 100
Hello,
Sorry if this is a stupid question but:
I hear there are problems going on now, I don't understand them but could they be the cause of this?
My miner says I'm connecting to the pool, but the site doesn't seem to think so.
If it matters, I just set this up; does it just take a while to update? Thanks.
https://i.imgur.com/af1kRxF.png
(Not embedded because it's massive, sorry)

At 21khash/sec it will take a while before you get your first accepted share with the current difficulty level.
In my minimal experience (being new to this pool and Litecoin mining in particular) it might take 5 or 10 minutes after that before it shows in the stats.
The more khash you can do the quicker things appear in the log.
It does not feel like a linear relationship to me, though, and it wasn't until I got above 50 khash/sec that the shares seemed to come along at a reasonable two or three a minute (in bursts).
Even then you can go several minutes without a share being accepted and that is sometimes followed by a LONGPOLL message.
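The average share rate actually is linear in hash rate; the burstiness comes from share arrival being a Poisson process, so at low hash rates the gaps between shares are long and irregular. A rough model (the hashes-per-share figure is an illustrative assumption; it depends on the pool's share difficulty):

```python
# Average share rate scales linearly with hashrate, even though the
# Poisson arrival of shares makes them come in irregular bursts.
def expected_shares_per_minute(hashrate_hps, hashes_per_share):
    return hashrate_hps * 60.0 / hashes_per_share

# Illustrative assumption only: ~1.25 million hashes per share on average.
HASHES_PER_SHARE = 1.25e6
print(expected_shares_per_minute(50_000, HASHES_PER_SHARE))  # ~2.4 per minute
print(expected_shares_per_minute(21_000, HASHES_PER_SHARE))  # ~1 per minute
```

At around one expected share per minute, a dry spell of several minutes is entirely normal, which matches the behavior described above.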

Good luck.
newbie
Activity: 17
Merit: 0
Hello,

Sorry if this is a stupid question but:

I hear there are problems going on now, I don't understand them but could they be the cause of this?

My miner says I'm connecting to the pool, but the site doesn't seem to think so.

If it matters, I just set this up; does it just take a while to update? Thanks.

https://i.imgur.com/af1kRxF.png

(Not embedded because it's massive, sorry)
sr. member
Activity: 476
Merit: 253
What is going on with the stales in this pool? I'm seeing ~15%

this issue is under heavy attention from burnside and should hopefully be fixed soon; in the meantime you can watch for a solution in one of these threads:

https://bitcointalksearch.org/topic/m.1637447

hero member
Activity: 1036
Merit: 500
What is going on with the stales in this pool? I'm seeing ~15%
member
Activity: 91
Merit: 10
FUCK THE SYSTEM!
Hi man, I need to delete my account:

username: itstrue
email:[email protected]

I'm getting a lot of messages in my mailbox.

Greetings
full member
Activity: 174
Merit: 100
Right now I have automatic payouts set up on my account. Is there any way to disable the email notifications that come with these payouts?

Also, is there an IRC channel for this pool?
legendary
Activity: 1106
Merit: 1006
Lead Blockchain Developer

You're definitely right, there is another ongoing issue.  That is that pushpool gets overloaded periodically.  (a few seconds every few minutes.)  I've expanded file descriptors and every single restriction I can think of to work around this, but have been unsuccessful.

So, unable to expand pushpool I ended up spinning up two pushpools and load balance them using nginx.  I made that change ~6 months ago.  Now we're back to one pushpool or the other getting overloaded periodically, and with the load balancing what happens is the overloaded pushpool gets removed from the balancing and you get sent to the pushpool that is still answering.  If you're making a request to submit work, and that work came from the other pushpool, then the current pushpool doesn't recognize it and it gets flagged as invalid.

Oddly enough, pushpool has memcache functionality and both pushpools are pointed at the same memcache.  I thought initially this was so that you could run a bunch of them and have them share the work between them, but clearly that is not the case.  I'm not really sure what pushpool is using memcache for.

As you digest all this you're probably wondering why then can't I just add a third pushpool to the balancing.  The problem is that in order to make sure that you always get sent to the same backend pushpool (because of the issue where your work is invalid if you don't) I had to configure the balancing to be by IP address.  And, naturally, since we're behind a DDoS service, 80% of our traffic comes from... the same IP address.  Ugh.  So even with the two pushpools, one takes like 80% of the traffic and I have no way to split it out beyond that.  I need like 3 DDoS services with each one running a pushpool behind 'em.  Wink
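The by-IP balancing described above maps to nginx's `ip_hash` upstream directive, which pins each client IP to one backend. A minimal sketch of that setup (the ports and listen address are hypothetical, not the pool's actual config):

```nginx
upstream pushpool {
    ip_hash;                    # pin each client IP to one backend, so
                                # work requests and submissions hit the
                                # same pushpool instance
    server 127.0.0.1:8341;      # pushpool instance 1 (hypothetical port)
    server 127.0.0.1:8342;      # pushpool instance 2 (hypothetical port)
}

server {
    listen 9332;                # hypothetical miner-facing port
    location / {
        proxy_pass http://pushpool;
    }
}
```

This also makes the DDoS-proxy problem visible: `ip_hash` keys on the client IP nginx sees, so when most traffic arrives from the proxy's single IP, nearly all of it hashes to the same backend.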



Firstly things do seem to be running much smoother this morning.

Thank you for your explanation above. I have to admit I had to read it twice, but your explanation is very clear. If pushpool keeps a running log of what work it has issued and that log is not being shared in the memcache, where is it? Do both running pushpools share a common database where information about each worker's contributed shares, etc., is maintained? And is the log of work issued also a part of that database? If it were, might this be a database issue?

One other thing: if the memcache is not for sharing information about what work has been issued, should both pushpools be pointing at the same cache, or should they have separate memory blocks?

Forgive my ignorance here, because I really don't know. I've just found in my line of work that it's good to bounce ideas off each other, and sometimes even a wrong idea triggers a right solution.


Edit: I don't know if this really means much, but I noticed something today while I was monitoring the situation. When I went to the My Stats tab it seemed to hang. While it was hanging I checked one of my rigs, and the unknown-work rejects were happening. I had been running at below 0.5 percent stales up to that point (suddenly I was at 20 percent, although my miner calls them unknown work). The My Stats tab has to access the database to display. Interesting coincidence?

The work is stored in pushpool's memory, I'm pretty sure. It's all internal until a share is submitted; then the result of that share (stale or not), plus the username submitting it, goes to the db as an insert. I even use insert delayed, so the db would have to be completely down for it to impact work submissions. Edit: probably worth noting here too that the db server is a different box. We have three servers right now: the webserver, the db server, and the pushpool/litecoind server.
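The "insert delayed" mentioned here is MySQL's `INSERT DELAYED`, which hands the row to a server-side queue and returns to the client immediately, so share submission doesn't block on the actual write. A sketch against a hypothetical shares table (the table and column names are assumptions, not the pool's actual schema):

```sql
-- Hypothetical shares table: INSERT DELAYED returns immediately and
-- lets MySQL batch the write, so a slow or briefly unavailable table
-- does not stall the pushpool thread submitting the share.
INSERT DELAYED INTO shares (username, our_result, upstream_result)
VALUES ('worker.1', 'Y', 'N');
```

The trade-off is that queued rows live only in memory until written, so a server crash can lose them; for share accounting that was generally considered an acceptable risk.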

The other possibility though is that there are network issues or rate limiting going on at the DDoS provider level.  Eg, if you can't load the site and the shares delay on submission, then your requests might not even be getting to us.

I'm tempted to just turn off the DDoS protection, but I know as soon as I do we're gonna get caught with our pants down again.  Wink   (for those of you new to LTC, we have been one of the few pools that have been up through most of the DDoS attacks, though it does impact us too.)

member
Activity: 95
Merit: 10
Edit: I don't know if this really means much, but I noticed something today while I was monitoring the situation. When I went to the My Stats tab it seemed to hang. While it was hanging I checked one of my rigs, and the unknown-work rejects were happening. I had been running at below 0.5 percent stales up to that point (suddenly I was at 20 percent, although my miner calls them unknown work). The My Stats tab has to access the database to display. Interesting coincidence?

this seems right to me. perhaps the stats are autoupdating on the page and its causing conflict?
full member
Activity: 131
Merit: 100

You're definitely right, there is another ongoing issue.  That is that pushpool gets overloaded periodically.  (a few seconds every few minutes.)  I've expanded file descriptors and every single restriction I can think of to work around this, but have been unsuccessful.

So, unable to expand pushpool I ended up spinning up two pushpools and load balance them using nginx.  I made that change ~6 months ago.  Now we're back to one pushpool or the other getting overloaded periodically, and with the load balancing what happens is the overloaded pushpool gets removed from the balancing and you get sent to the pushpool that is still answering.  If you're making a request to submit work, and that work came from the other pushpool, then the current pushpool doesn't recognize it and it gets flagged as invalid.

Oddly enough, pushpool has memcache functionality and both pushpools are pointed at the same memcache.  I thought initially this was so that you could run a bunch of them and have them share the work between them, but clearly that is not the case.  I'm not really sure what pushpool is using memcache for.

As you digest all this you're probably wondering why then can't I just add a third pushpool to the balancing.  The problem is that in order to make sure that you always get sent to the same backend pushpool (because of the issue where your work is invalid if you don't) I had to configure the balancing to be by IP address.  And, naturally, since we're behind a DDoS service, 80% of our traffic comes from... the same IP address.  Ugh.  So even with the two pushpools, one takes like 80% of the traffic and I have no way to split it out beyond that.  I need like 3 DDoS services with each one running a pushpool behind 'em.  Wink



Firstly things do seem to be running much smoother this morning.

Thank you for your explanation above. I have to admit I had to read it twice, but your explanation is very clear. If pushpool keeps a running log of what work it has issued and that log is not being shared in the memcache, where is it? Do both running pushpools share a common database where information about each worker's contributed shares, etc., is maintained? And is the log of work issued also a part of that database? If it were, might this be a database issue?

One other thing: if the memcache is not for sharing information about what work has been issued, should both pushpools be pointing at the same cache, or should they have separate memory blocks?

Forgive my ignorance here, because I really don't know. I've just found in my line of work that it's good to bounce ideas off each other, and sometimes even a wrong idea triggers a right solution.


Edit: I don't know if this really means much, but I noticed something today while I was monitoring the situation. When I went to the My Stats tab it seemed to hang. While it was hanging I checked one of my rigs, and the unknown-work rejects were happening. I had been running at below 0.5 percent stales up to that point (suddenly I was at 20 percent, although my miner calls them unknown work). The My Stats tab has to access the database to display. Interesting coincidence?
legendary
Activity: 1106
Merit: 1006
Lead Blockchain Developer
Network Hash Rate   -2,147.484 GH/s

An int overrun, obviously. But this is just reporting, right?

Hah, that's awesome.  Whatever it is, that's how it's coming out of litecoind:

# litecoind getmininginfo
{
    "blocks" : 312928,
    "currentblocksize" : 1225,
    "currentblocktx" : 1,
    "difficulty" : 54.19159781,
    "errors" : "",
    "generate" : false,
    "genproclimit" : -1,
    "hashespersec" : 0,
    "networkhashps" : -2147483648,
    "pooledtx" : 3,
    "testnet" : false
}
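That networkhashps value is the classic 32-bit signed-integer wraparound: a counter that exceeds 2,147,483,647 wraps to -2,147,483,648, which is exactly the value in the getmininginfo output. A quick demonstration:

```python
import ctypes

# networkhashps was stored in a signed 32-bit int. Once the true value
# exceeded INT_MAX (2_147_483_647), the stored value wrapped to INT_MIN.
true_rate = 2_147_483_647 + 1                 # one past the 32-bit maximum
reported = ctypes.c_int32(true_rate).value    # reinterpret as signed 32-bit
print(reported)  # -2147483648, matching the litecoind output above
```

So it is indeed a reporting artifact: the real network hash rate had simply grown past what the signed 32-bit field could hold.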

member
Activity: 95
Merit: 10
Glad to be able to help.

I'm working right now but will check things out at my end this evening. I'll watch for a while and see if the problem is still there. One more thing that I didn't mention: the error seems almost random. It's not always at the end of a round. Sometimes I'll get 10 to 40 rejects in a row, all saying unknown work. Then right back to accepted shares.

I wish I was more knowledgeable about this so I could be more help on my end. I think bgminer might come with a logging option. If it does I'll try and record something for you.

You're definitely right, there is another ongoing issue.  That is that pushpool gets overloaded periodically.  (a few seconds every few minutes.)  I've expanded file descriptors and every single restriction I can think of to work around this, but have been unsuccessful.

So, unable to expand pushpool I ended up spinning up two pushpools and load balance them using nginx.  I made that change ~6 months ago.  Now we're back to one pushpool or the other getting overloaded periodically, and with the load balancing what happens is the overloaded pushpool gets removed from the balancing and you get sent to the pushpool that is still answering.  If you're making a request to submit work, and that work came from the other pushpool, then the current pushpool doesn't recognize it and it gets flagged as invalid.

Oddly enough, pushpool has memcache functionality and both pushpools are pointed at the same memcache.  I thought initially this was so that you could run a bunch of them and have them share the work between them, but clearly that is not the case.  I'm not really sure what pushpool is using memcache for.

As you digest all this you're probably wondering why then can't I just add a third pushpool to the balancing.  The problem is that in order to make sure that you always get sent to the same backend pushpool (because of the issue where your work is invalid if you don't) I had to configure the balancing to be by IP address.  And, naturally, since we're behind a DDoS service, 80% of our traffic comes from... the same IP address.  Ugh.  So even with the two pushpools, one takes like 80% of the traffic and I have no way to split it out beyond that.  I need like 3 DDoS services with each one running a pushpool behind 'em.  Wink




Fairly new to Litecoin; been a Bitcoin fanboi since I heard about it right after the crash in '11. I'm no developer, so I often just talk out of my ass, but it's in an attempt to lend my problem-solving skills to the effort... I just don't always succeed! Smiley

I know cgminer has a failover configuration; you can change it to round robin, etc. Maybe if you had several pools set up behind the DDoS protection and just had people configure round robin between them?
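The strategy switch suggested here is a real cgminer command-line option; a sketch with placeholder pool URLs and credentials:

```shell
# Spread work round-robin across several pool backends
# (URLs and worker credentials below are placeholders).
# cgminer also supports --load-balance and --rotate <minutes>;
# the default strategy is failover, trying pools in listed order.
cgminer --round-robin \
  -o http://pool-a.example.com:9332 -u worker1 -p pass \
  -o http://pool-b.example.com:9332 -u worker1 -p pass
```

The catch for this pool's setup is that pushpool work is only valid at the instance that issued it, so miner-side rotation between backends would hit the same "unknown work" rejects unless each backend is a fully independent pool endpoint.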
legendary
Activity: 1540
Merit: 1060
May the force bit with you.
Network Hash Rate   -2,147.484 GH/s

An int overrun, obviously. But this is just reporting, right?
legendary
Activity: 1106
Merit: 1006
Lead Blockchain Developer
Glad to be able to help.

I'm working right now but will check things out at my end this evening. I'll watch for a while and see if the problem is still there. One more thing that I didn't mention: the error seems almost random. It's not always at the end of a round. Sometimes I'll get 10 to 40 rejects in a row, all saying unknown work. Then right back to accepted shares.

I wish I was more knowledgeable about this so I could be more help on my end. I think bgminer might come with a logging option. If it does I'll try and record something for you.

You're definitely right, there is another ongoing issue.  That is that pushpool gets overloaded periodically.  (a few seconds every few minutes.)  I've expanded file descriptors and every single restriction I can think of to work around this, but have been unsuccessful.

So, unable to expand pushpool I ended up spinning up two pushpools and load balance them using nginx.  I made that change ~6 months ago.  Now we're back to one pushpool or the other getting overloaded periodically, and with the load balancing what happens is the overloaded pushpool gets removed from the balancing and you get sent to the pushpool that is still answering.  If you're making a request to submit work, and that work came from the other pushpool, then the current pushpool doesn't recognize it and it gets flagged as invalid.

Oddly enough, pushpool has memcache functionality and both pushpools are pointed at the same memcache.  I thought initially this was so that you could run a bunch of them and have them share the work between them, but clearly that is not the case.  I'm not really sure what pushpool is using memcache for.

As you digest all this you're probably wondering why then can't I just add a third pushpool to the balancing.  The problem is that in order to make sure that you always get sent to the same backend pushpool (because of the issue where your work is invalid if you don't) I had to configure the balancing to be by IP address.  And, naturally, since we're behind a DDoS service, 80% of our traffic comes from... the same IP address.  Ugh.  So even with the two pushpools, one takes like 80% of the traffic and I have no way to split it out beyond that.  I need like 3 DDoS services with each one running a pushpool behind 'em.  Wink


full member
Activity: 131
Merit: 100
Glad to be able to help.

I'm working right now but will check things out at my end this evening. I'll watch for a while and see if the problem is still there. One more thing that I didn't mention was that the error seems almost random. It's not always at the end of a round. Sometimes I'll get 10 to 40 rejects in a row all saying unknown work. Then right back to accepted shares.

I wish I was more knowledgeable about this so I could be more help on my end. I think bgminer might come with a logging option. If it does I'll try and record something for you.
member
Activity: 95
Merit: 10
great work guys. glad to see some progress! I loved mining at your pool and I look forward to returning!

That said: 83 difficulty in 3333 min? WHAT IN THE SEVEN HELLZ!

We need FPGAs. Power costs are quickly drawing closer to profits :\