
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 381. (Read 2591964 times)

legendary
Activity: 1258
Merit: 1027

I've now got all shares submitted by my node in a nice fast DB.

They are reported for the node on the main page, and per miner on the miner dashboard page, below is a shot from the miner dashboard. I'm going to do the same for blocks next...

nice! care to share or collaborate on my front end? would love to be able to display more stats persisted over restarts etc.

I'm happy to share everything I'm doing....

This code is not optimized and is likely still buggy, the nice thing is it only affects the front end, so bugs will not hamper mining in any way. Any contributions or suggestions are appreciated.

At some point I may clean everything up and post to GitHub, but that is a ways off....

This was written for a LAMP stack, should work on other platforms, but untested....

You will also need to move your p2pool "front end" and logfile to a directory accessible by your web server and where php can run.

The included CRUD class is here: http://www.phpro.org/classes/PDO-CRUD.html

MySQL DB Table:

Code:
--
-- Table structure for table `found_shares`
--

CREATE TABLE IF NOT EXISTS `found_shares` (
  `id` int(11) NOT NULL,
  `address` varchar(34) NOT NULL,
  `share` varchar(15) NOT NULL,
  `time` datetime NOT NULL,
  `doa` tinyint(1) NOT NULL DEFAULT '0'
) ENGINE=InnoDB  DEFAULT CHARSET=utf8;

--
-- Indexes for table `found_shares`
--
ALTER TABLE `found_shares`
 ADD PRIMARY KEY (`id`);

--
-- AUTO_INCREMENT for table `found_shares`
--
ALTER TABLE `found_shares`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
I will probably add an index for the address and time columns.
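For reference, those indexes could be added along these lines (a sketch only; the index names are my own invention, the columns come from the table definition above):

```sql
--
-- Possible secondary indexes for table `found_shares`
-- (index names are illustrative; pick your own)
--
ALTER TABLE `found_shares`
 ADD INDEX `idx_share_time` (`share`, `time`),
 ADD INDEX `idx_address` (`address`);
```

The composite (`share`, `time`) index matches the duplicate check in the cron script's testShare(), and the `address` index would speed up the per-miner dashboard queries.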

Cron job that populates the DB, set to run every minute:

Code:
<?php
// ******************************************** //
// P2Pool to MySQL
// File: cron.php
// Author: Ian T. (bitcointalk: windpath)
// Web: http://www.CoinCadence.com
// Sample: http://mining.CoinCadence.com
// ******************************************** //

// Error reporting (should be commented out when live)
//ini_set('display_startup_errors',1);
//ini_set('display_errors',1);
//error_reporting(-1);

// ******************************************** //
// Set below variables
// ******************************************** //

// MySQL Database
$THE_HOST = "DB_HOST";
$THE_USER = "DB_USER";
$THE_PWD  = "DB_PWD";
$THE_DB   = "DB_DB";

// Path to P2Pool log
// Must be readable and writable by PHP
// Custom P2Pool log location can be set when starting P2Pool. Example:
// --logfile /path/to/log/p2pool
// !! --> Make sure it is not publicly readable <-- !!
$log_path = '/path/to/log/p2pool';

// ******************************************** //
// Edit past here at your own risk...
// ******************************************** //

// Pattern to match
$share_pattern = '/GOT\ SHARE/';

// ******************************************** //
// set up crud DB class
// ******************************************** //
include 'crud.php';
$crud = new crud();
$setDsn = "mysql:dbname=".$THE_DB.";host=".$THE_HOST;
$crud->dsn      = $setDsn;
$crud->username = $THE_USER;
$crud->password = $THE_PWD;

// ******************************************** //
// Function Calls
// ******************************************** //
getShares($share_pattern, $log_path);
maintainFile($log_path);

// ******************************************** //
// Functions
// ******************************************** //

// Rotate the log once it grows past 25MB
function maintainFile($log_path)
{
    if (filesize($log_path) > 25000000) // 25MB
    {
        $log_backup = $log_path.time().".bak";
        rename($log_path, $log_backup);
    }
    return;
}

// Pull "GOT SHARE" lines out of the log and queue any new ones for insert
function getShares($share_pattern, $log_path)
{
    $values = array();
    $result = preg_grep($share_pattern, file($log_path));
    if ($result != NULL)
    {
        $rawShares  = array_reverse($result);
        $shareCount = count($rawShares);
        $x = $shareCount;
        foreach ($rawShares as $share)
        {
            if ($x > 0)
            {
                $pieces     = explode(" ", $share);
                $shareDate  = $pieces[0]." ".$pieces[1];
                $shareMiner = $pieces[4];
                $shareShare = $pieces[5];
                if (isset($pieces[10])) { $shareDoa = 1; } else { $shareDoa = 0; }
                if (testShare($shareShare, $shareDate) == false)
                {
                    $values[] = array('address'=>$shareMiner, 'share'=>$shareShare, 'time'=>$shareDate, 'doa'=>$shareDoa);
                }
                $x--;
            }
        }
        insertShares($values);
    }
}

function insertShares($values)
{
    if (!empty($values))
    {
        global $crud;
        $crud->dbInsert('found_shares', $values);
    }
}

// Returns true if this share+time is already in the DB
function testShare($shareShare, $shareDate)
{
    global $crud;
    $sql = "SELECT * FROM `found_shares` WHERE `share` = '".$shareShare."' AND `time` = '".$shareDate."'";
    $records = $crud->rawSelect($sql);
    $shares  = $records->fetchAll(PDO::FETCH_ASSOC);
    if (empty($shares)) { return false; } else { return true; }
}
?>

A lot to do, but for sure clean up the .bak log files after X time, cache added shares, structure as a class, include blocks, etc...
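On the .bak cleanup item, a minimal sketch of what that step could look like (my own untested code, reusing the $log_path naming from cron.php above; the 7-day retention is an arbitrary choice):

```php
<?php
// Hypothetical cleanup: delete rotated log backups older than $keep_days.
// maintainFile() above names them "<log_path><timestamp>.bak".
$log_path  = '/path/to/log/p2pool';
$keep_days = 7;

foreach (glob($log_path.'*.bak') as $backup) {
    // filemtime() returns the file's last-modified Unix timestamp
    if (time() - filemtime($backup) > $keep_days * 86400) {
        unlink($backup);
    }
}
?>
```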

Front end:
This still needs a lot of work, presentation needs to be removed from logic...

Display Shares, this file is "included" in the front end and dumps out a formatted HTML table for now:
Code:
<?php
// ******************************************** //
// P2Pool to MySQL
// File: shares.php Web Share Explorer
// Author: Ian T. (bitcointalk: windpath)
// Web: http://www.CoinCadence.com
// Sample: http://mining.CoinCadence.com
// ******************************************** //

// Error reporting (should be commented out when live)
//ini_set('display_startup_errors',1);
//ini_set('display_errors',1);
//error_reporting(-1);

// ******************************************** //
// Set below variables
// ******************************************** //

// MySQL Database
$THE_HOST = "DB_HOST";
$THE_USER = "DB_USER";
$THE_PWD  = "DB_PWD";
$THE_DB   = "DB_DB";

// Address explorer base URL (the miner's address is appended to it)
$EXPLORER_URL = "EXPLORER_URL";

// ******************************************** //
// Edit past here at your own risk...
// ******************************************** //

// ******************************************** //
// set up crud DB class
// ******************************************** //
include 'crud.php';
$crud = new crud();
$setDsn = "mysql:dbname=".$THE_DB.";host=".$THE_HOST;
$crud->dsn      = $setDsn;
$crud->username = $THE_USER;
$crud->password = $THE_PWD;

$records   = $crud->rawSelect('SELECT SQL_CALC_FOUND_ROWS * FROM found_shares ORDER BY time DESC LIMIT 20');
$rows      = $records->fetchAll(PDO::FETCH_ASSOC);
$count     = $crud->rawSelect("SELECT FOUND_ROWS()");
$totalRows = $count->fetch(PDO::FETCH_ASSOC);

echo "1 - ".count($rows)." of ".$totalRows['FOUND_ROWS()']." Shares";
echo '<table><tr><th>Age</th><th>Bitcoin Address</th><th>Share</th><th>Status</th></tr>';
foreach ($rows as $row)
{
    $time       = strtotime($row['time']);
    $timeSince  = humanTiming($time);
    $shareMiner = $row['address'].' <a href="'.$EXPLORER_URL.$row['address'].'" TARGET="_NEW">(view)</a>';
    $shareShare = $row['share'];
    if ($row['doa'] == 0) { $shareStatus = 'accepted'; } else { $shareStatus = 'DOA'; }
    echo "<tr><td>$timeSince ago</td><td>$shareMiner</td><td>$shareShare</td><td>$shareStatus</td></tr>";
}
echo '</table>';

// Human-readable "time since" strings, e.g. "5 minutes"
function humanTiming($time)
{
    $time = time() - $time; // seconds elapsed since that moment

    $tokens = array(
        31536000 => 'year',
        2592000  => 'month',
        604800   => 'week',
        86400    => 'day',
        3600     => 'hour',
        60       => 'minute',
        1        => 'second'
    );

    foreach ($tokens as $unit => $text) {
        if ($time < $unit) continue;
        $numberOfUnits = floor($time / $unit);
        return $numberOfUnits.' '.$text.(($numberOfUnits > 1) ? 's' : '');
    }
}
?>


So that's where I'm at, more to come....
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool

I've now got all shares submitted by my node in a nice fast DB.

They are reported for the node on the main page, and per miner on the miner dashboard page, below is a shot from the miner dashboard. I'm going to do the same for blocks next...



nice! care to share or collaborate on my front end? would love to be able to display more stats persisted over restarts etc.
legendary
Activity: 1258
Merit: 1027

I've now got all shares submitted by my node in a nice fast DB.

They are reported for the node on the main page, and per miner on the miner dashboard page, below is a shot from the miner dashboard. I'm going to do the same for blocks next...

legendary
Activity: 1258
Merit: 1027
No issues here - with bog standard bitcoind settings.



Mine has calmed down a little, but still not seeing that nice flat .20s latency I had before....

Updated graph:
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
No issues here - with bog standard bitcoind settings.

legendary
Activity: 1258
Merit: 1027

Hi phillipsjk,

I'm aware of the block size change in the 0.9.* clients; however, I'm not sure that would explain the latency spikes that are currently happening.  I've had 0.9.1 installed and running since release and it is only within the past few days that the latency has gone haywire.  My normal latency is ~0.4s, which by itself isn't that good since guys running the entire thing in a RAM drive get ~.02s.  In the past 2 days, the latency has spent many hours of ~1s latency.  It isn't only my node, either.  Nothing has changed on my end in terms of hardware/software/load/network traffic/etc.  Even if something had changed on my end, it wouldn't be reflected across multiple nodes on the network.


I wish... I'm 100% SSD, have the same latency issue as the rest of us...



Probably someone sending some godawful amount of 'priority' transactions w/ zero fee that sometimes take a while to get into blocks.  Your bitcoind client will accept them all unless you've configured it differently..

I remember when all those horsestaplebattery (sp?) transactions were fubar'ing a lot of p2pools

Thanks zvs, that's not it, I tried setting mintxfee and minrelaytxfee and the spikes still occur...
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com

Hi phillipsjk,

I'm aware of the block size change in the 0.9.* clients; however, I'm not sure that would explain the latency spikes that are currently happening.  I've had 0.9.1 installed and running since release and it is only within the past few days that the latency has gone haywire.  My normal latency is ~0.4s, which by itself isn't that good since guys running the entire thing in a RAM drive get ~.02s.  In the past 2 days, the latency has spent many hours of ~1s latency.  It isn't only my node, either.  Nothing has changed on my end in terms of hardware/software/load/network traffic/etc.  Even if something had changed on my end, it wouldn't be reflected across multiple nodes on the network.


I wish... I'm 100% SSD, have the same latency issue as the rest of us...



Probably someone sending some godawful amount of 'priority' transactions w/ zero fee that sometimes take a while to get into blocks.  Your bitcoind client will accept them all unless you've configured it differently..

I remember when all those horsestaplebattery (sp?) transactions were fubar'ing a lot of p2pools
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
Can I trust a share (i.e. "01b6db51") to be unique for all time?

My suspicion is that I can not, but before I add the overhead of checking the share + share time rather than just the share I figured I'd ask Wink

Thanks.

If it is a 256bit hash, the answer is likely yes. It is unknown how long "all of time" is going to be.

Edit: checked the share explorer on one of the public nodes, and the hash appears to be 64 hex digits (256 bit). You would need ~2^128 trial hashes to get a 50% chance of a collision. (Insert sun picture here)
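For the curious, the 50% figure comes from the standard birthday bound (a general approximation, nothing p2pool-specific):

```latex
% Collision probability among k uniformly random n-bit hashes:
P_{\mathrm{coll}}(k) \approx 1 - e^{-k^2/2^{n+1}}
% Solving for P_{\mathrm{coll}} = 1/2 with n = 256:
k \approx \sqrt{2\ln 2 \cdot 2^{256}} \approx 1.18 \times 2^{128}
```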
jjw
newbie
Activity: 3
Merit: 0
Looking forward to a 2-4 block surge when our luck comes back Smiley

Seriously though, over time it does even out, and generally p2pool comes out on top.

Leaving the pool during a long downturn (like 3-5 days, we are at 3 now) is like throwing the baby out with the bath water.

Luck, like anything else in life, is bound to change.

After a long streak of bad luck we typically see a long streak of _ _ _ _ luck Smiley

Yeah, I know.  I've never been particularly troubled by the pool variance.  Really have just shrugged off the pool's long downturns and try not to get *too* excited when we find 2+ blocks per day.

My concerns are over share finding.  I'm finding that far more difficult to diagnose.  Not sure if my poor profitability over the past 20 days is just variance related or something more nefarious.  I think I'll stick around P2Pool solely for the challenge of figuring things out.  Smiley

Sadly my sole Antminer S1 does not generate shares fast enough to satisfy my data driven nature.  In the past week I've had an orphan rate of 14%, yet the previous week I had an orphan rate of 3.5%.  Unfortunately, by the time I get a statistically significant data set to work with the S1 will be relegated to the scrapheap.

The simple, and somewhat tempting, solution is to buy 9 more of them.  Smiley
legendary
Activity: 1258
Merit: 1027
Hey folks!

I'm working on code that moves a bunch of P2Pool's data into a MySQL DB so that node statistics can be tracked longer, persisted across process restarts, and displayed more efficiently.

Question about shares:

Can I trust a share (i.e. "01b6db51") to be unique for all time?

My suspicion is that I can not, but before I add the overhead of checking the share + share time rather than just the share I figured I'd ask Wink

Thanks.
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
I've uploaded my front end code to github. If you would like to use it please feel free to do so and if you really like the work I have done you can donate some Bitcoin to me 17HQeLJNp2r3WW99amE3hXcvEYR96jZoPJ

I have called it p2pool fancy front end https://github.com/norgan/p2pool_fancy_front_end

full member
Activity: 161
Merit: 100
digging in the bits... now ant powered!
I'll stick with it Smiley

Trying to convince a friend to throw over his additional 200GHS tonight...

Just need to be patient then, was worried it may have been purely my node...
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
Just added a new Sydney, Australia node to my pool. www.norgzpool.net.au for those aussie miners out there.
legendary
Activity: 1258
Merit: 1027
Where can I find documentation for the [log] file formats?

Would LOVE to find/see this....
newbie
Activity: 43
Merit: 0

P2Pool puts up a nice web page that you can view in a browser to monitor your pool performance.

However, my p2pool node are behind a firewall, and I only let ports 8333 and 9333 through that wall.  And 22, for SSH and SFTP.

So, how can I monitor my p2pool node remotely under these conditions?

There are a lot of interesting files in ~/p2pool/data/bitcoin with intriguing names like graph_db and log and stats and such like.

I could just grab these files with SFTP and extract the information I want once I have them here.

Where can I find documentation for the file formats?
legendary
Activity: 1258
Merit: 1027
Happy to say we will be ramping up our private-for-now node, should add back a good 20-30th starting soon Smiley

Good timing Smiley Glad you're coming over....
hero member
Activity: 546
Merit: 500
Owner, Minersource.net
Happy to say we will be ramping up our private-for-now node, should add back a good 20-30th starting soon Smiley
legendary
Activity: 1258
Merit: 1027
Looking forward to a 2-4 block surge when our luck comes back Smiley

Seriously though, over time it does even out, and generally p2pool comes out on top.

Leaving the pool during a long downturn (like 3-5 days, we are at 3 now) is like throwing the baby out with the bath water.

Luck, like anything else in life, is bound to change.

After a long streak of bad luck we typically see a long streak of _ _ _ _ luck Smiley
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
that's the price we pay for being distributed. it does all work out even in the end apparently, just need some patience.
My first 2 weeks on p2pool have paid much more than I had got from larger pools and I'm only mining with 36gh/s! (again variance but it shows the p2pool can pay well even for small miners).
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....

Anyone else having similar issues with blocks not been found for the last 3 days?


Erm.....everyone?

When p2pool finds a block, everyone gets a share of it. So when p2pool doesn't.......... Wink