Pages:

Topic: Flexible mining proxy - page 2. (Read 88832 times)

vip
Activity: 166
Merit: 100
August 13, 2011, 09:00:17 PM
The httpd.conf in /etc/apache2 is blank. Is there another file by that name somewhere else?

edit: config file is apache2.conf for Debian. Google is pretty useful.
member
Activity: 98
Merit: 11
August 13, 2011, 08:42:32 PM
Code:
[Sat Aug 13 18:32:06 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting

How do I increase this setting? When this error is thrown, Apache screeches to a halt and is unusable until restarted.

Google is your friend. Open the httpd.conf file and change the max clients value to something higher.
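For reference (not posted in the thread), the relevant directives live in the prefork MPM section of the Apache config; the values below are illustrative only, and note that raising MaxClients past 256 also requires raising ServerLimit:

```apache
# Prefork MPM tuning -- illustrative values; raise gradually and watch RAM,
# since each Apache child holds memory and long polling keeps clients busy.
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    ServerLimit         512
    MaxClients          512
    MaxRequestsPerChild   0
</IfModule>
```

Restart Apache after changing it for the new limit to take effect.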
vip
Activity: 166
Merit: 100
August 13, 2011, 06:48:02 PM
Code:
[Sat Aug 13 18:32:06 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting

How do I increase this setting? When this error is thrown, Apache screeches to a halt and is unusable until restarted.
member
Activity: 98
Merit: 11
July 30, 2011, 03:03:25 PM
I added correlation graphs so you can see Mhash values alongside your GPU temp in the same graph. Requires the Phoenix logfile modification to grab the mhash value, as the schema in Flexible doesn't provide an easy method to get periodic mhash values that correspond to the temp values. Maybe I'll figure that out later... but in the meantime if you use Phoenix the logfile mod is super simple. https://forum.bitcoin.org/index.php?topic=27761.0



member
Activity: 98
Merit: 11
July 26, 2011, 06:25:45 PM

I love what you're doing... that's awesome. For my 2 cents, I'd like to see the temps/load/etc. separated from the "worker" name, since multi-GPU boxes will generally run one worker per box... more than that gets unmanageable quickly.

Either that, or you poll all the GPUs/CPUs/etc. in the box and log the average..

Glad you enjoy it Smiley

In regard to separating the workers... well, that's not really possible with the current schema, and I'm not sure it makes sense to store multiple GPU device temperature stats for one worker. It would require a 1:N schema, which would mean a completely different approach for the monitoring script, the schema, and the reporting and graphing aspects. You can get around this by making dummy workers in the proxy and having the health monitor script upload its data for those dummy workers. So your single worker per box can connect to the proxy as usual, but you'd have workers like "box-name-GPU0", "box-name-GPU1", etc. and look at the graphs that way. But I can't store multiple GPU temperatures for a single worker.

As for the administrative hassle of multiple workers per box: my boxes all run 3-4 cards each, with one instance of Phoenix per GPU. Having an aggregate worker for all devices on a box makes it very difficult to see how an individual GPU is performing, and Phoenix can't use multiple devices per process instance. The only miner I know of that will use multiple devices is Diablo, which runs about 5-10% slower on my GPUs than Phoenix.

Managing multiple devices per box is actually very simple: I have a script per instance that gets started via the LXDE autostart script:

Code:
> cat /etc/xdg/lxsession/LXDE/autostart
@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@sleep 3
@/usr/bin/screen -dmS proxy0 /home/user/p0.sh
@/usr/bin/screen -dmS proxy1 /home/user/p1.sh
@/usr/bin/screen -dmS proxy2 /home/user/p2.sh
@/usr/bin/screen -dmS monitor /bin/ping 172.16.0.1
@lxterminal

Each of the /home/user/p*.sh scripts contains the following code to set the fan speed and core/mem clocks and connect to the flexible proxy. If I reboot the box, the miners start automatically without any input from me. If my UPS dies, the BIOS on each server is set to restore power to "LAST STATE", so the boxes turn on as soon as power is available, boot LinuxCoin into init level 5, and run the autostart script, which in turn runs the miner via this code:

Code:
#!/bin/sh
#6870 OC Settings
CLOCK_CPU=950
CLOCK_MEM=1050
FANSPEED=100
WORKSIZE=128
AGGRESSION=11
DEVICE="0"
LOG="/home/user/log.$DEVICE.out"
URL=""
PORT="80"
USER="box#gpu#" # Worker naming scheme: server name + GPU number, e.g. ultra0gpu2 for "Sun Ultra 40 workstation number zero, GPU device two"
PASS="password"

export DISPLAY=:0.$DEVICE
aticonfig --od-enable
echo "running: aticonfig --od-setclocks=$CLOCK_CPU,$CLOCK_MEM --adapter=$DEVICE"
aticonfig --od-setclocks=$CLOCK_CPU,$CLOCK_MEM --adapter=$DEVICE
aticonfig --odgt --adapter=$DEVICE
aticonfig --pplib-cmd "get fanspeed 0"
aticonfig --odgc --adapter=$DEVICE
aticonfig --pplib-cmd "set fanspeed 0 $FANSPEED"
aticonfig --pplib-cmd "get fanspeed 0"
echo "running: python ./phoenix.py --logtotext=$LOG --url=http://${USER}:${PASS}@$URL:$PORT -k phatk VECTORS BFI_INT FASTLOOP=false WORKSIZE=$WORKSIZE AGGRESSION=$AGGRESSION DEVICE=$DEVICE"
python ./phoenix.py --logtotext=$LOG --url=http://${USER}:${PASS}@$URL:$PORT -k phatk VECTORS BFI_INT FASTLOOP=false WORKSIZE=$WORKSIZE AGGRESSION=$AGGRESSION DEVICE=$DEVICE

member
Activity: 98
Merit: 11
July 26, 2011, 05:57:40 PM
I added support for graphing temperature. There's a link for each worker on the "temp" column which will show you the last 24 hours of temperatures for each card....

How tough would it be to add graphs for both "worker" and "aggregate" mining data, like Mhash, shares, rejects, etc?



I'll have to look at the schema tables that store that data. Once I have a query that gathers the data over time, pushing it to the graphing system is trivial. I'll update my findings soon.
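As a rough idea of the kind of query this needs (the column names worker_id, dt, and result are guesses, not the actual Flexible schema), bucketing submitted shares per worker per hour would look something like:

```sql
-- Column names (worker_id, dt, result) are hypothetical; check the real
-- submitted_work table definition before using this.
SELECT worker_id,
       DATE_FORMAT(dt, '%Y-%m-%d %H:00') AS hour_bucket,
       SUM(result = 'Y') AS accepted,
       SUM(result = 'N') AS rejected
FROM submitted_work
GROUP BY worker_id, hour_bucket
ORDER BY worker_id, hour_bucket;
```

Feeding rows shaped like that to the grapher would give per-worker share trends alongside the existing temperature graphs.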
sr. member
Activity: 467
Merit: 250
July 26, 2011, 03:49:23 PM
I added support for graphing temperature. There's a link for each worker on the "temp" column which will show you the last 24 hours of temperatures for each card....

How tough would it be to add graphs for both "worker" and "aggregate" mining data, like Mhash, shares, rejects, etc?

sr. member
Activity: 467
Merit: 250
July 26, 2011, 03:48:29 PM
I've created a health monitoring script and a new database table for the Proxy so that you can see your worker "clock speed, mem speed, fan speed, card temperature" from the Proxy Dashboard page.  I have the schema change (just an added table) and the health reporting script all finished. Just need to modify the dashboard to support a couple of new columns for that data.

I love what you're doing... that's awesome.. for my 2cents, I'd like to see the temps/load/etc separated from the "worker" name, since multiGPU boxes will generally run (1) worker per box... more than that gets unmanagable quickly.

Either that or you poll all the GPU's/CPU/etc in the box, and log the average..

member
Activity: 98
Merit: 11
July 25, 2011, 10:54:39 PM
I added support for graphing temperature. There's a link for each worker on the "temp" column which will show you the last 24 hours of temperatures for each card. You can get the changes via a git clone or update. Later on I'll add the ability to specify a date range for graphing as well as support for graphing the other health stats.

Here's an example. It uses DyGraphs.

member
Activity: 98
Merit: 11
July 25, 2011, 12:56:52 PM
So I finished the health monitor script and integrated the status display with the main admin dashboard. Here's a screenshot showing the added columns: "Temp, Fan Speed, Clock(core) Speed, Mem Speed". Anyone interested in this being added to the github code? I have two tables on the database "worker_health_current" and "worker_health_archive" - the current table tracks the most recent entry for each GPU, and the archive table tracks all entries so that (when I code this later) I can display graphs for trending of GPU temperature over time.

Let me know what you all think.

@Shotgun - I am VERY interested!  Please post a patch or a fork and let me know where to find it. 

Forked the project and added my files. The bin/worker_health.sh file has the instructions and pertinent info.
https://github.com/btc-shotgun/Bitcoin-mining-proxy

If you like it, feel free to send me some coins Wink
1BUV1p5Yr3xEtSGbixLSospmK6B8NCdqiW
sr. member
Activity: 278
Merit: 250
July 25, 2011, 11:11:44 AM
So I finished the health monitor script and integrated the status display with the main admin dashboard. Here's a screenshot showing the added columns: "Temp, Fan Speed, Clock(core) Speed, Mem Speed". Anyone interested in this being added to the github code? I have two tables on the database "worker_health_current" and "worker_health_archive" - the current table tracks the most recent entry for each GPU, and the archive table tracks all entries so that (when I code this later) I can display graphs for trending of GPU temperature over time.

Let me know what you all think.

@Shotgun - I am VERY interested!  Please post a patch or a fork and let me know where to find it. 
full member
Activity: 182
Merit: 100
July 25, 2011, 04:49:54 AM
Looks good, please fork and consider making it optional (via the conf file).


Is this being updated still?
member
Activity: 98
Merit: 11
July 23, 2011, 11:12:19 PM
I've created a health monitoring script and a new database table for the Proxy so that you can see your worker "clock speed, mem speed, fan speed, card temperature" from the Proxy Dashboard page.  I have the schema change (just an added table) and the health reporting script all finished. Just need to modify the dashboard to support a couple of new columns for that data.

I'll post the changes required when I'm done - hopefully people enjoy the improvements.

cdhowie - if you like my work with this can you include it in the github repo or should I branch the project? I really love the work you did with the proxy server, it's made my life a lot easier. Smiley


So I finished the health monitor script and integrated the status display with the main admin dashboard. Here's a screenshot showing the added columns: "Temp, Fan Speed, Clock(core) Speed, Mem Speed". Anyone interested in this being added to the github code? I have two tables on the database "worker_health_current" and "worker_health_archive" - the current table tracks the most recent entry for each GPU, and the archive table tracks all entries so that (when I code this later) I can display graphs for trending of GPU temperature over time.

Let me know what you all think.

newbie
Activity: 27
Merit: 0
July 23, 2011, 06:10:22 PM
I'm running this fine (so far) with nginx + php-fpm as the webserver.

One thing to note to get LP working is to set fastcgi_read_timeout in the nginx virtual host to something high (such as 3600) along with max_execution_time in the php.ini file.

You also need some fairly specific settings to get the php parameters such as path_info passed along correctly. This is what I have.

Code:
location ~ ^(.+\.php)(.*)$ {
  root    /web/root/here;

  fastcgi_index index.php;
  fastcgi_split_path_info ^(.+\.php)(.*)$;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  fastcgi_param PATH_INFO  $fastcgi_path_info;
  fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
  fastcgi_read_timeout 3600;
  fastcgi_pass    127.0.0.1:9000;
  include fastcgi_params;

}



Can you tell me why this stuff is required, and why I would want to implement it over Apache (which is what I'm currently using to run the proxy app)?

It's just the way Nginx works. It doesn't handle PHP directly the way Apache does; PHP runs in an external FastCGI process instead, and you have to tell Nginx how to pass the relevant parameters and forward the request to that FastCGI backend.

It's just an alternative web server, and some people like myself prefer it over Apache. Primarily it is a high-performance, lightweight static file server: it can easily handle 30000+ requests per second on static files and 10000+ concurrent downloads while using very little RAM (tens of megabytes), and it works extremely well in these high-load environments. It is not a fully featured beast like Apache, which is why you have to forward requests for things such as PHP to another backend process.

Why implement it over Apache? If it's just you using it, then no reason at all. I set it up on one of my web servers which runs Nginx anyway. It could also be advantageous if you have many (hundreds or even thousands of) miners connected, due to the lower memory usage compared to Apache.

Check out some benchmarks for it Smiley.
member
Activity: 98
Merit: 11
July 23, 2011, 02:17:22 PM
I'm running this fine (so far) with nginx + php-fpm as the webserver.

One thing to note to get LP working is to set fastcgi_read_timeout in the nginx virtual host to something high (such as 3600) along with max_execution_time in the php.ini file.

You also need some fairly specific settings to get the php parameters such as path_info passed along correctly. This is what I have.

Code:
location ~ ^(.+\.php)(.*)$ {
  root    /web/root/here;

  fastcgi_index index.php;
  fastcgi_split_path_info ^(.+\.php)(.*)$;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  fastcgi_param PATH_INFO  $fastcgi_path_info;
  fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
  fastcgi_read_timeout 3600;
  fastcgi_pass    127.0.0.1:9000;
  include fastcgi_params;

}



Can you tell me why this stuff is required, and why I would want to implement it over Apache (which is what I'm currently using to run the proxy app)?
member
Activity: 98
Merit: 11
July 23, 2011, 02:15:58 PM
I've created a health monitoring script and a new database table for the Proxy so that you can see your worker "clock speed, mem speed, fan speed, card temperature" from the Proxy Dashboard page.  I have the schema change (just an added table) and the health reporting script all finished. Just need to modify the dashboard to support a couple of new columns for that data.

I'll post the changes required when I'm done - hopefully people enjoy the improvements.

cdhowie - if you like my work with this can you include it in the github repo or should I branch the project? I really love the work you did with the proxy server, it's made my life a lot easier. Smiley
member
Activity: 98
Merit: 11
July 22, 2011, 01:55:37 PM
I've been having some table locking issues with the current database, since the schema creates all of the tables as MyISAM. As a MySQL DBA by profession, I would recommend that anyone running more than a handful of miners switch to InnoDB. You can run the following commands at the mysql command line to convert your tables.

Code:
alter table pool engine=innodb;
alter table settings engine=innodb;
alter table submitted_work engine=innodb;
alter table work_data engine=innodb;
alter table worker engine=innodb;
alter table worker_pool engine=innodb;

The schema file can be changed to create tables as InnoDB from the start by changing each line that says "ENGINE=MYISAM" to "ENGINE=INNODB".
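The same schema-file edit can be scripted. In the sketch below, "schema.sql" is a stand-in file created just for the demo; point sed at the proxy's actual schema file instead, and keep the .bak backup it leaves behind:

```shell
#!/bin/sh
# Demo: rewrite ENGINE=MYISAM to ENGINE=INNODB in a schema file.
# "schema.sql" here is a stand-in created for the demo; use the real
# schema file from the proxy repo instead.
cat > schema.sql <<'EOF'
CREATE TABLE worker (id INT) ENGINE=MYISAM;
CREATE TABLE pool (id INT) ENGINE=MYISAM;
EOF
# Edit in place, keeping a backup copy as schema.sql.bak
sed -i.bak 's/ENGINE=MYISAM/ENGINE=INNODB/g' schema.sql
grep -c 'ENGINE=INNODB' schema.sql   # prints 2
```

This only changes tables created from the schema file going forward; existing databases still need the ALTER TABLE commands above.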
full member
Activity: 182
Merit: 100
July 12, 2011, 07:22:41 AM
Every time a block is solved I get this after the long poll event:

Quote
[2011-07-12 22:22:02] JSON-RPC call failed: {
   "code": 0,
   "message": "No enabled pools responded to the work request."
}
newbie
Activity: 27
Merit: 0
July 11, 2011, 08:02:52 PM
I'm running this fine (so far) with nginx + php-fpm as the webserver.

One thing to note to get LP working is to set fastcgi_read_timeout in the nginx virtual host to something high (such as 3600) along with max_execution_time in the php.ini file.

You also need some fairly specific settings to get the php parameters such as path_info passed along correctly. This is what I have.

Code:
location ~ ^(.+\.php)(.*)$ {
  root    /web/root/here;

  fastcgi_index index.php;
  fastcgi_split_path_info ^(.+\.php)(.*)$;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  fastcgi_param PATH_INFO  $fastcgi_path_info;
  fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
  fastcgi_read_timeout 3600;
  fastcgi_pass    127.0.0.1:9000;
  include fastcgi_params;

}
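The max_execution_time change mentioned above goes in php.ini; a sketch, with the file path being illustrative since it varies by distro and PHP version:

```ini
; php.ini (e.g. /etc/php5/fpm/php.ini on Debian; path varies by distro)
; Allow long-poll requests to run up to an hour, matching the
; fastcgi_read_timeout value set in the nginx vhost above.
max_execution_time = 3600
```

Remember to restart php-fpm after changing it.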
nux
newbie
Activity: 24
Merit: 0
July 11, 2011, 02:36:49 PM
One feature I thought would be handy is something like a worker group.

Essentially a way to mass-manage a handful of workers. For example, create a pool, then update 9 workers at once to all use it with the same login, password, priority, etc.

I currently just use SQL to do it manually.  Add a worker_id/pool_id combo manually, and then an UPDATE .. WHERE pool_id = to add a pool to every one of those workers.
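For anyone doing the same thing by hand, the SQL might look like this. The column names on worker_pool (worker_id, pool_id, priority) are assumptions, so check the real schema and back up the database first:

```sql
-- Assumed worker_pool columns: worker_id, pool_id, priority.
-- Verify against the actual schema before running; back up first.

-- Attach a new pool (id 5) to every worker that already has pool 2:
INSERT INTO worker_pool (worker_id, pool_id, priority)
SELECT worker_id, 5, 1 FROM worker_pool WHERE pool_id = 2;

-- Or, alternatively, retarget an existing assignment for all of
-- those workers at once:
UPDATE worker_pool SET pool_id = 5 WHERE pool_id = 2;
```

A proper "worker group" feature in the dashboard would just wrap queries like these behind a form.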