Topic: [OFFLINE]P2Pmining.com-Hybrid P2Pool-NO FEE!!!-BTC/NMC/IXC/I0C/DEV/LTC - page 26. (Read 56628 times)

sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011
Help me simplify this SQL statement for the daily miner payout graph data. It works, but is there a simpler way? It got crazy when I had to exclude orphaned blocks.

Code:
SELECT DATE_FORMAT(FROM_UNIXTIME(t1.time), '%Y-%m-%d') AS day,
       UNIX_TIMESTAMP(DATE_FORMAT(FROM_UNIXTIME(t1.time), '%Y-%m-%d')) AS timestamp,
       SUM(share * amount) AS btc
FROM (
    SELECT time,
           payouts.amount AS share,
           pool_blocks.amount
    FROM `payouts`, `pool_blocks`,
         (SELECT MAX(time) AS mtime FROM pool_blocks) maxt
    WHERE payouts.txid = pool_blocks.txid
      AND pool_blocks.time > UNIX_TIMESTAMP() - 60*60*24*30
      AND address = '1KgFh9kWBpz4TsX92xcx78VQ2Fo1jP2Ddx'
      AND NOT (maxt.mtime > pool_blocks.time AND pool_blocks.confirmations = 1)
) t1
GROUP BY day
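
One possibly simpler shape, untested, would be to join directly instead of wrapping everything in a derived table. It uses the same tables and columns as above (including the unqualified address column), so check it against the real schema before trusting the numbers:

Code:
-- Untested rewrite of the payout-graph query using explicit JOINs.
-- Orphan filter is unchanged: skip blocks that are older than the newest
-- block but are still sitting at 1 confirmation.
SELECT DATE_FORMAT(FROM_UNIXTIME(pb.time), '%Y-%m-%d') AS day,
       UNIX_TIMESTAMP(DATE_FORMAT(FROM_UNIXTIME(pb.time), '%Y-%m-%d')) AS timestamp,
       SUM(payouts.amount * pb.amount) AS btc
FROM payouts
JOIN pool_blocks pb ON payouts.txid = pb.txid
CROSS JOIN (SELECT MAX(time) AS mtime FROM pool_blocks) maxt
WHERE pb.time > UNIX_TIMESTAMP() - 60*60*24*30
  AND address = '1KgFh9kWBpz4TsX92xcx78VQ2Fo1jP2Ddx'
  AND NOT (maxt.mtime > pb.time AND pb.confirmations = 1)
GROUP BY day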

SQL is fun Grin
legendary
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
Hmmm... one of my machines is at 5% stales, the other is at 19.5% stales, I'm beginning to wonder if it is some sort of network congestion since one is connected to a higher-priority switch on the network. Also, the one with lower stales is on cgminer 2.3.1, higher stales is the newer 2.3.2.

A couple of things: for p2pool you want to be on 2.3.3 or higher. Some of the older versions have a bug where cgminer ignores the submit-stale flag set by the pool.

Discarded % is irrelevant. It is the amount of work cgminer requested but then discarded BEFORE starting to work on it. It is simply a function of GPU hashing power vs. the LP interval.

If your GPU is 300 MH/s (each GPU is what matters) then it will complete a getwork in ~15s, but an LP will usually arrive earlier than that, so most queued-up work will never be started. You can reduce the amount of discarded work by setting the queue param to 1 (and threads to 1), but you will still have a lot of discarded work.

One other thing to consider: some routers have an issue with a high number of open connections (especially multiple rigs all running cgminer, opening dozens of simultaneous connections). Newer versions of cgminer help with this, but some routers still lag under the load (number of connections, not bandwidth).

I tried cgminer 2.3.6; stales go up to between 15 and 20% with it. GUIMiner keeps them between 10 and 15%. I'm going to try BAMT later this week, but I'm testing another pool until then to make sure it's not a problem with my network. Thanks for all the help.
full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
The BTC graphs look pretty good.
sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011
At that price, why not FPGA? From what I've seen, the good open source ones will keep their value.

There are way more people buying GPUs than FPGAs, so it will be easier to sell later on. The 7970 was only $450 from Amazon.
newbie
Activity: 17
Merit: 0
At that price, why not FPGA? From what I've seen, the good open source ones will keep their value.
sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011
Just ordered a few more GPUs: a 5870, a 5970, and a new 7970.  Roll Eyes
sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011


Suggestions:
Would you add BTC paid out per 24 hours and BTC paid out total?
On the current miners page, maybe list firstbits instead of full addresses? (So more stats can fit. Cheesy )
Would it be possible to have a subpool subsidy that a 6 GH miner could donate to those under 1 GH?
Do you plan on having a drawing to give away a 5830 or 6870 for the subpool hashers who have hashed for at least, say, a week during a set period?

I will definitely put in the BTC paid data for each miner soon.

I like your idea of using the firstbits for the addresses.

To address your third suggestion, I don't think that is possible with the current setup.

No plans for a drawing yet.
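
On the firstbits idea: even without true firstbits lookups, a display shortcut on the miners page could just truncate the stored addresses in the stats query. This is only an untested sketch and assumes a miner_data-style table with an address column like the one posted elsewhere in this thread:

Code:
-- Display shortcut only, not real firstbits: show a short address prefix
-- plus a simple share count for the last hour.
SELECT LEFT(address, 8) AS short_address,
       COUNT(*) AS shares_last_hour
FROM miner_data
WHERE timestamp > UNIX_TIMESTAMP() - 3600
GROUP BY address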
sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011
Here is the code shown in context within the p2pool code, for further clarification:

Code:
                    if pow_hash > target:
                        print 'Worker %s submitted share with hash > target:' % (request.getUser(),)
                        print '    Hash:   %56x' % (pow_hash,)
                        print '    Target: %56x' % (target,)
                    elif header_hash in received_header_hashes:
                        print >>sys.stderr, 'Worker %s @ %s submitted share more than once!' % (request.getUser(), request.getClientIP())
                    else:
                        received_header_hashes.add(header_hash)
                       
                        #Enter Share to database
                        try:
                            dbf_user = request.getUser()
                            dbuser_items = dbf_user.split('+')
                            db_diff = bitcoin_data.target_to_difficulty(target) * 1000
                            proxy_db = MySQLdb.connect(host="localhost",user="mysqluser",passwd="mysqlpassword",db="p2pmining")
                            pdb_c = proxy_db.cursor()
                            pdb_c.execute("""INSERT INTO miner_data (id,address,hashrate,timestamp,difficulty,ontime) VALUES (NULL, %s , %s , UNIX_TIMESTAMP() , %s, %s)""", (dbuser_items[0], db_diff * on_time, db_diff , on_time ) )
                            proxy_db.close()
                        except:
                            log.err(None, 'Error with database:')
                        #End Enter Share Code
                       
                        pseudoshare_received.happened(bitcoin_data.target_to_average_attempts(target), not on_time, user)
                        self.recent_shares_ts_work.append((time.time(), bitcoin_data.target_to_average_attempts(target)))
                        while len(self.recent_shares_ts_work) > 50:
                            self.recent_shares_ts_work.pop(0)
                        local_rate_monitor.add_datum(dict(work=bitcoin_data.target_to_average_attempts(target), dead=not on_time, user=user))

Notoriety in the comments would be cool  Grin
full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
So just after,
Code:
def got_response(header, request):
                    assert header['merkle_root'] == merkle_root
                   
                    header_hash = bitcoin_data.hash256(bitcoin_data.block_header_type.pack(header))
                    pow_hash = net.PARENT.POW_FUNC(bitcoin_data.block_header_type.pack(header))
                    on_time = current_work.value['best_share_hash'] == share_info['share_data']['previous_share_hash']

you replace the existing 'try:' and 'except:' sections with your code?
I was looking through the diff stuff with the hex addresses goin  Huh.

If we use it, should we comment it with credit to JayCoin?

Suggestions:
Would you add BTC paid out per 24 hours and BTC paid out total?
On the current miners page, maybe list firstbits instead of full addresses? (So more stats can fit. Cheesy )
Would it be possible to have a subpool subsidy that a 6 GH miner could donate to those under 1 GH?
Do you plan on having a drawing to give away a 5830 or 6870 for the subpool hashers who have hashed for at least, say, a week during a set period?
sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011
To promote openness, I thought I would release the Python code I add to the P2Pool code. The rest of the pool is in PHP.

Later, I may record who actually finds shares and blocks so people interested in that data can see it.

At the beginning of main.py

Code:
import MySQLdb

In WorkerBridge.got_response() in main.py, after p2pool verifies that the proof-of-work hash is below the target:

Code:
                       #Enter Share to database
                        try:
                            dbf_user = request.getUser()
                            dbuser_items = dbf_user.split('+')
                            db_diff = bitcoin_data.target_to_difficulty(target) * 1000
                            proxy_db = MySQLdb.connect(host="localhost",user="mysqluser",passwd="mysqlpassword",db="p2pmining")
                            pdb_c = proxy_db.cursor()
                            pdb_c.execute("""INSERT INTO miner_data (id,address,hashrate,timestamp,difficulty,ontime) VALUES (NULL, %s , %s , UNIX_TIMESTAMP() , %s, %s)""", (dbuser_items[0], db_diff * on_time, db_diff , on_time ) )
                            proxy_db.close()
                        except:
                            log.err(None, 'Error with database:')
                        ###

That is it.
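
For anyone who wants to turn those rows into numbers, here is a rough, untested sketch of a per-address hashrate estimate. It assumes the difficulty column holds 1000x the pseudoshare difficulty (as stored above) and that a difficulty-D share represents roughly D * 2^32 hashes on average:

Code:
-- Rough per-address hashrate estimate (hashes/second) over the last hour.
-- Untested sketch: assumes `difficulty` is 1000x the pseudoshare difficulty,
-- as stored above, and that a difficulty-D share is ~D * 2^32 hashes on average.
SELECT address,
       SUM(difficulty / 1000 * POW(2, 32)) / 3600 AS est_hashrate
FROM miner_data
WHERE timestamp > UNIX_TIMESTAMP() - 3600
GROUP BY address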
sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011


I've been on guiminer for a few hours, stales are at 7.5% and dropping. Anyone else on guiminer getting the "connection problems" under gpu speed?

Another thing, GUIMiner shows 0 stale shares, but the site is still showing a percentage... any ideas why?

Those are DOA shares. That means a new p2pool share was found before you submitted your work. Because the work you submit may still solve a block, it is still accepted by the pool.
legendary
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
Hmmm... one of my machines is at 5% stales, the other is at 19.5% stales, I'm beginning to wonder if it is some sort of network congestion since one is connected to a higher-priority switch on the network. Also, the one with lower stales is on cgminer 2.3.1, higher stales is the newer 2.3.2.

A couple of things: for p2pool you want to be on 2.3.3 or higher. Some of the older versions have a bug where cgminer ignores the submit-stale flag set by the pool.

Discarded % is irrelevant. It is the amount of work cgminer requested but then discarded BEFORE starting to work on it. It is simply a function of GPU hashing power vs. the LP interval.

If your GPU is 300 MH/s (each GPU is what matters) then it will complete a getwork in ~15s, but an LP will usually arrive earlier than that, so most queued-up work will never be started. You can reduce the amount of discarded work by setting the queue param to 1 (and threads to 1), but you will still have a lot of discarded work.

One other thing to consider: some routers have an issue with a high number of open connections (especially multiple rigs all running cgminer, opening dozens of simultaneous connections). Newer versions of cgminer help with this, but some routers still lag under the load (number of connections, not bandwidth).

OK, thanks. I'm updating cgminer now. I booted up guiminer and it shows 0 stales after 50 submitted shares, but it shows "connection problems" every 10 seconds instead of 350 MHash, so this may be a network thing. Thanks for the help.

new cgminer didn't fix it, though I am down from 20% stales to 15%... switching to guiminer for a few hours to test

I've been on guiminer for a few hours, stales are at 7.5% and dropping. Anyone else on guiminer getting the "connection problems" under gpu speed?

Another thing, GUIMiner shows 0 stale shares, but the site is still showing a percentage... any ideas why?
legendary
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
What is your cgminer command line, eroxors?

A sample similar to mine:

$ sudo ./cgminer -o http://p2pmining.com:9332 -u <BTC address> -p relacks -I 6 -g 1 -k phatk -v 2 -w 128 --auto-fan --temp-target 68 --gpu-engine 875 --gpu-mem 250

For p2pool, the intensity (-I) and threads (-g) settings need to be lower than what you would use for other pools. For other pools, threads (-g) can be set to 2 or more. For p2pool, start at an intensity of 8 and lower it until you find good results.

Currently:

cgminer -g 1 -I 4
full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
What is your cgminer command line, eroxors?

A sample similar to mine:

$ sudo ./cgminer -o http://p2pmining.com:9332 -u <BTC address> -p relacks -I 6 -g 1 -k phatk -v 2 -w 128 --auto-fan --temp-target 68 --gpu-engine 875 --gpu-mem 250

For p2pool, the intensity (-I) and threads (-g) settings need to be lower than what you would use for other pools. For other pools, threads (-g) can be set to 2 or more. For p2pool, start at an intensity of 8 and lower it until you find good results.
legendary
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
Hmmm... one of my machines is at 5% stales, the other is at 19.5% stales, I'm beginning to wonder if it is some sort of network congestion since one is connected to a higher-priority switch on the network. Also, the one with lower stales is on cgminer 2.3.1, higher stales is the newer 2.3.2.

A couple of things: for p2pool you want to be on 2.3.3 or higher. Some of the older versions have a bug where cgminer ignores the submit-stale flag set by the pool.

Discarded % is irrelevant. It is the amount of work cgminer requested but then discarded BEFORE starting to work on it. It is simply a function of GPU hashing power vs. the LP interval.

If your GPU is 300 MH/s (each GPU is what matters) then it will complete a getwork in ~15s, but an LP will usually arrive earlier than that, so most queued-up work will never be started. You can reduce the amount of discarded work by setting the queue param to 1 (and threads to 1), but you will still have a lot of discarded work.

One other thing to consider: some routers have an issue with a high number of open connections (especially multiple rigs all running cgminer, opening dozens of simultaneous connections). Newer versions of cgminer help with this, but some routers still lag under the load (number of connections, not bandwidth).

OK, thanks. I'm updating cgminer now. I booted up guiminer and it shows 0 stales after 50 submitted shares, but it shows "connection problems" every 10 seconds instead of 350 MHash, so this may be a network thing. Thanks for the help.

new cgminer didn't fix it, though I am down from 20% stales to 15%... switching to guiminer for a few hours to test
legendary
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
Hmmm... one of my machines is at 5% stales, the other is at 19.5% stales, I'm beginning to wonder if it is some sort of network congestion since one is connected to a higher-priority switch on the network. Also, the one with lower stales is on cgminer 2.3.1, higher stales is the newer 2.3.2.

A couple of things: for p2pool you want to be on 2.3.3 or higher. Some of the older versions have a bug where cgminer ignores the submit-stale flag set by the pool.

Discarded % is irrelevant. It is the amount of work cgminer requested but then discarded BEFORE starting to work on it. It is simply a function of GPU hashing power vs. the LP interval.

If your GPU is 300 MH/s (each GPU is what matters) then it will complete a getwork in ~15s, but an LP will usually arrive earlier than that, so most queued-up work will never be started. You can reduce the amount of discarded work by setting the queue param to 1 (and threads to 1), but you will still have a lot of discarded work.

One other thing to consider: some routers have an issue with a high number of open connections (especially multiple rigs all running cgminer, opening dozens of simultaneous connections). Newer versions of cgminer help with this, but some routers still lag under the load (number of connections, not bandwidth).

OK, thanks. I'm updating cgminer now. I booted up guiminer and it shows 0 stales after 50 submitted shares, but it shows "connection problems" every 10 seconds instead of 350 MHash, so this may be a network thing. Thanks for the help.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Hmmm... one of my machines is at 5% stales, the other is at 19.5% stales, I'm beginning to wonder if it is some sort of network congestion since one is connected to a higher-priority switch on the network. Also, the one with lower stales is on cgminer 2.3.1, higher stales is the newer 2.3.2.

A couple of things: for p2pool you want to be on 2.3.3 or higher. Some of the older versions have a bug where cgminer ignores the submit-stale flag set by the pool.

Discarded % is irrelevant. It is the amount of work cgminer requested but then discarded BEFORE starting to work on it. It is simply a function of GPU hashing power vs. the LP interval.

If your GPU is 300 MH/s (each GPU is what matters) then it will complete a getwork in ~15s, but an LP will usually arrive earlier than that, so most queued-up work will never be started. You can reduce the amount of discarded work by setting the queue param to 1 (and threads to 1), but you will still have a lot of discarded work.

One other thing to consider: some routers have an issue with a high number of open connections (especially multiple rigs all running cgminer, opening dozens of simultaneous connections). Newer versions of cgminer help with this, but some routers still lag under the load (number of connections, not bandwidth).
legendary
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
ack, I'm at 20% stales... I'm trying intensity of 4 to see if they go down but I'm only mining at 85% of capacity with such a low intensity..... is it possible to mine one thread on one pool at a certain intensity and another thread on another pool at a different intensity (in cgminer) ?

Didn't work... what's going on? What am I doing wrong?

There can be multiple reasons. Is your network too congested for long polling to keep up? Is your cgminer updated? Did you try mining with only one thread (-g 1)? What is your hashrate?

I don't think I waited long enough; intensity 4 ended up getting my stales to 5-7%, but I'm running at about 85-90% of full speed, which I guess I'll just deal with. When I look at the pool information it says that I have 7000+ discarded work units due to new blocks. Is this normal for an 800 MHash machine after about 48 hours?

Hmmm... one of my machines is at 5% stales, the other is at 19.5% stales, I'm beginning to wonder if it is some sort of network congestion since one is connected to a higher-priority switch on the network. Also, the one with lower stales is on cgminer 2.3.1, higher stales is the newer 2.3.2.
legendary
Activity: 924
Merit: 1000
Think. Positive. Thoughts.
ack, I'm at 20% stales... I'm trying intensity of 4 to see if they go down but I'm only mining at 85% of capacity with such a low intensity..... is it possible to mine one thread on one pool at a certain intensity and another thread on another pool at a different intensity (in cgminer) ?

Didn't work... what's going on? What am I doing wrong?

There can be multiple reasons. Is your network too congested for long polling to keep up? Is your cgminer updated? Did you try mining with only one thread (-g 1)? What is your hashrate?

I don't think I waited long enough; intensity 4 ended up getting my stales to 5-7%, but I'm running at about 85-90% of full speed, which I guess I'll just deal with. When I look at the pool information it says that I have 7000+ discarded work units due to new blocks. Is this normal for an 800 MHash machine after about 48 hours?
sr. member
Activity: 409
Merit: 251
Crypt'n Since 2011
Just added the ability to look at hashrates and DOA/stale rates over different time spans. Check it out at http://p2pmining.com/?method=pool#ui-tabs-4
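
For anyone curious how a DOA/stale percentage can be pulled from the pseudoshare data posted earlier in the thread, here is a rough, untested sketch. It assumes the miner_data table shown there, with ontime stored as 1 for on-time shares and 0 for dead ones:

Code:
-- Rough DOA/stale percentage per address over the last 24 hours (untested).
-- Assumes miner_data as posted earlier, with ontime = 1 (on time) or 0 (dead).
SELECT address,
       100 * SUM(1 - ontime) / COUNT(*) AS doa_pct
FROM miner_data
WHERE timestamp > UNIX_TIMESTAMP() - 60*60*24
GROUP BY address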