Author

Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 589. (Read 5805740 times)

member
Activity: 107
Merit: 10
Is there a Windows binary with BitForce support turned on?
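(Whether any particular Windows binary has it depends on how it was built; building from source, BitForce support is a configure-time switch. Assuming a standard cgminer source tree:)
Code:
./configure --enable-bitforce && make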
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Some amazing results tonight with Xiangfu's Icarus mining farm running on a single cgminer instance.

It's 1 cgminer with 91 Icarus hashing away at ~33.75 GH/s BUT still only using 3% of the CPU on a 32-bit Ubuntu Linux!

Here's the main API output reformatted to fit here:
Code:
Date: 07:11:39 29-Apr-2012 UTC-07:00
Computer: cgminer 2.3.6       
Elapsed  MHS av    Found Blocks  Getworks  Accepted  Rejected  Hardware Errors       Utility 
19m 22s  33270.28  0             15091     9132      25        0                     471.46/m

  Discarded  Stale   Get Failures  Local Work  Remote Failures  Network Blocks  Total MH
  2740       0       1             18262       0                3               38665911.5942

Date: 07:11:39 29-Apr-2012 UTC-07:00
Computer: cgminer 2.3.6       
GPU Count  PGA Count  CPU Count  Pool Count  ADL  ADL in use  Strategy  Log Interval  Device Code  OS
0          91         0          3           N    N           Failover  5             ICA          Linux   
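For anyone wanting the same report: assuming cgminer was started with --api-listen, the summary and config output above can be fetched from the API with plain netcat, e.g.:
Code:
echo -n summary | nc -4 127.0.0.1 4028 ; echo
echo -n config | nc -4 127.0.0.1 4028 ; echo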

legendary
Activity: 3583
Merit: 1094
Think for yourself
Code:
 [2012-04-29 13:59:37] LONGPOLL from pool 3 detected new block
 [2012-04-29 13:59:39] LONGPOLL from pool 0 requested work restart

You now have a way of knowing which pool is LIKELY (though not certainly) to have found that block, if you have most of the major pools in your setup. This means you now have a way to *cough* choose who to hop to, if you're so inclined...


I'd donate a BTC or two for a new management strategy that acted on that info.
Sam
It wouldn't be hard to do now, but realistically, for hopping to be worthwhile you need to do it only on prop pools for the maximum benefit, know their hashrate, and do it for the magic percentage of the round duration. Then you have to factor in different pay schemes, how long to stay there, where to hop back to, and so on... I had no intention of developing and maintaining such a database, which would be very fluid and change day by day.

On the other hand, if all you wanted was one that would hop to the pool that found the latest block each time, and you plugged in which pools you wanted it to work on, that would be very easy to do.

That's exactly what came to my mind.  I have no desire to enter the nefarious world of complex pool hopping.  But if my miner always switched to the pool that last found a block, or seems to have, I would always have the confidence that I was mining on an active and properly functioning pool.

Several pools have been having problems in recent times where remote servers lose their connection to the database, and my miner is hashing away happily on a pool that is actually down but doesn't know it.  So when I travel for work I always leave my miner on the pool that has never had this problem, to my knowledge.
Thanks for the thoughts,
Sam
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Code:
 [2012-04-29 13:59:37] LONGPOLL from pool 3 detected new block
 [2012-04-29 13:59:39] LONGPOLL from pool 0 requested work restart

You now have a way of knowing which pool is LIKELY (though not certainly) to have found that block, if you have most of the major pools in your setup. This means you now have a way to *cough* choose who to hop to, if you're so inclined...


I'd donate a BTC or two for a new management strategy that acted on that info.
Sam
It wouldn't be hard to do now, but realistically, for hopping to be worthwhile you need to do it only on prop pools for the maximum benefit, know their hashrate, and do it for the magic percentage of the round duration. Then you have to factor in different pay schemes, how long to stay there, where to hop back to, and so on... I had no intention of developing and maintaining such a database, which would be very fluid and change day by day. On the other hand, if all you wanted was one that would hop to the pool that found the latest block each time, and you plugged in which pools you wanted it to work on, that would be very easy to do.
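As an illustration only (this is a sketch, not cgminer code, and every name in it is made up), the simple version of that strategy amounts to remembering which pool's longpoll announced a genuinely new block and handing out work from that pool from then on:
Code:
/* Sketch of a "hop to whichever pool found the last block" strategy.
 * Hypothetical names throughout; this only illustrates the idea above. */
#include <stdbool.h>
#include <stddef.h>

struct pool {
    const char *url;
    bool hop_candidate;      /* pools the user plugged in as hop targets */
};

static struct pool pools[4];
static size_t current_pool;  /* pool that work is currently drawn from */

/* Call when a longpoll from pool i announces a NEW block (as opposed to
 * merely requesting a work restart): that pool likely found the block. */
static void on_longpoll_new_block(size_t i)
{
    if (i < sizeof(pools) / sizeof(pools[0]) && pools[i].hop_candidate)
        current_pool = i;
}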
legendary
Activity: 3583
Merit: 1094
Think for yourself
Code:
 [2012-04-29 13:59:37] LONGPOLL from pool 3 detected new block
 [2012-04-29 13:59:39] LONGPOLL from pool 0 requested work restart

You now have a way of knowing which pool is LIKELY (though not certainly) to have found that block, if you have most of the major pools in your setup. This means you now have a way to *cough* choose who to hop to, if you're so inclined...


I'd donate a BTC or two for a new management strategy that acted on that info.
Sam
legendary
Activity: 3583
Merit: 1094
Think for yourself
I have two ASUS 5770s running on stock Ubuntu 11.04.  aticonfig can read the GPU temps just fine, but these stats don't appear in cgminer.  I've tried numerous versions of cgminer and AMD SDKs, all with the same results.  Sad

Looks like you may not have ADL support; temps don't show up, and it looks like you're using default clocks.
Sam
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...
Note that some pool operators don't really like this behavior, because it increases pool load during LPs (and thus stales for other pool users) without actually using the pool much. From what I know, some pools even ban you if you do this (submit less than some threshold of shares for a given amount of requested work / longpolls).
How would a 'large pool' differentiate that from simply being a 'backup' pool?
Would that really mean that no one should use 'large pools' as backups, because they will ban you or reject your shares ...

Actually, I do get unexpected middle-of-the-round shares rejected by DeepBit, which is my backup ...
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Note that some pool operators don't really like this behavior, because it increases pool load during LPs (and thus stales for other pool users) without actually using the pool much. From what I know, some pools even ban you if you do this (submit less than some threshold of shares for a given amount of requested work / longpolls).
Right, I didn't say it was good practice, sanctioned, moral, appropriate or anything else of the sort. Just making a purely objective observation about what it does.
hero member
Activity: 504
Merit: 500
FPGA Mining LLC
Sorry if this is a dumb question ckolivas:

Is there any advantage to having cgminer listen on multiple large pools for LPs even if you have no intention of mining there? For example:

Pool 0 - The real desired pool
Pool 1 -  Large pool number one
Pool 2 - Large pool number two
Pool 3 - Large pool number three
That is not a dumb question at all and in fact you are right in thinking there may be an advantage, but it is complicated.

The reason comes down to what happens at longpoll time. A longpoll occurs when the block on the network has changed. This is the time you are most likely to submit stale shares, because you are still working on the old block while the new block is already spreading throughout the network. Furthermore, once the longpoll hits, you have to throw out all work and ask your pool(s) for more work, so it is the time you are most likely to have a drop in hashrate while waiting for the new work (this is why older releases of cgminer used to say 'waiting for fresh work').

Because cgminer now checks longpolls from *any* pool you are connected to, it can tell when the block changes on the network, often faster than your primary pool finds out, because you may be connected to the pool which discovered the block as well. So cgminer then knows to stop working on anything from that old block, in anticipation that it would be wasted work. However, until your primary pool discovers that the block has changed, you cannot actually get useful work from it.

The beauty of longpoll, though, is that it is actually giving cgminer work as well, so you will be getting work from the backup pools to fill in the trough period until your primary pool finds out the block has changed. For this to be useful, though, you do actually have to be happy for the backup pools to get some of your shares over time, so that enough shares accumulate to eventually give you a payout from them. If you enable the --failover-only option, you lose this benefit: not only will cgminer stop working on the old block before your primary pool discovers the block has changed, but it won't accept any work your primary pool gives you until then (that work is still for the old block), so it will dip in hashrate for longer.

Note that some pool operators don't really like this behavior, because it increases pool load during LPs (and thus stales for other pool users) without actually using the pool much. From what I know, some pools even ban you if you do this (submit less than some threshold of shares for a given amount of requested work / longpolls).
hero member
Activity: 504
Merit: 500
FPGA Mining LLC
Can anyone tell me how to use the ZTEX board? I'm getting error -3 at the beginning. I've got the 1.15y board.
In my limited ZTEX knowledge (nelisky wrote that code and I don't have one of the boards):
Support for the 1.15y board was added with 2.3.6
(it was in 2.3.5 also, but best not to use 2.3.5)

But not in 2.3.4 or earlier

Hm, as of a week ago there was no code base that actually works on the 1.15y, IIUC. Nelisky had just committed some preparatory changes for that, but not the actual thing yet.
I'm not sure if this has changed in the meantime, but from what I know (and I'm not really involved deeply here), it could be that 2.3.6 simply doesn't support the 1.15y yet. Ask nelisky for details.
donator
Activity: 919
Merit: 1000
Con, Luke,

this is from another thread to sort out issues with a throttling BFL Single I have in my 5-unit setup:
[...]
While I am at it, I figured out a SW issue that could be quite relevant for all multi-Single setups driven by cgminer: when I removed the throttling device from the setup for further inspection, the average hashrate of the remaining 4 climbed noticeably. To double-check, I repeatedly ran it long enough to exclude variation, and this is what I get:
1) running all 5 devices, the hashrate for all of them starts at 828, and after running for a day the throttling one settles at a 705 all-time average, while the properly working ones settle at 790
2) running the 4 proper ones alone, all start at 828, and after a day they are still at ~825

In other words, the throttling one is not just reducing its own hashing power but also that of the proper ones. In my 5-unit setup the estimated loss is ~250 MH/s.

This could be caused by the communication between PC and Single being frozen during the throttling. From the SW design view there should theoretically be no inter-dependencies, since every device is handled by its own threads. But in practice, if the device throttles while communicating with the host and thereby stalls, the related thread will eat its scheduling quantum busy-looping on the serial port.
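If that busy-looping guess is right, the usual cure is a read guarded by select() with a timeout, so the thread sleeps while a throttling Single stalls instead of spinning. A minimal sketch (hypothetical; this is not cgminer's actual serial code):
Code:
/* Wait up to 100 ms for data before reading, so a stalled device makes
 * the thread sleep in select() rather than burn its quantum polling. */
#include <sys/select.h>
#include <unistd.h>

ssize_t serial_read_timeout(int fd, void *buf, size_t len)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = 0, .tv_usec = 100 * 1000 };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return 0;  /* timeout or error: no data, but no busy-loop */
    return read(fd, buf, len);
}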

Luckily ckolivas is not only the cgminer developer but also a Linux scheduler guru, so I'll sort this SW issue out in his thread. Meanwhile I will separate the throttling device from my setup and run it from a different host.

Tl;dr: if you have a setup with multiple BFL Singles and one or more throttling units (front LED blinking now and then), you should consider operating the throttling ones from a different host for a better overall hashrate.

In short, what I am observing is: the one throttling device reproducibly reduces the all-time-average hashrate that cgminer (Linux, 2.3.5) reports for the non-throttling devices. I am not sure whether this is just a measurement error, or whether my assumption that the stalling communication thread causes the lag is reasonable.

Con, you're da Linux scheduler guru. Any ideas? Since you have no Singles at hand (let alone reliably throttling ones), I'd try to implement and test potential fixes myself if you had some ideas. Or do you have a throttling one, Luke?


Thanks, zefir
sr. member
Activity: 274
Merit: 250
I'm using 2.3.6 now, that's why I know the error code Smiley
I'll ask nelisky in the ZTEX hardware thread.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Can anyone tell me how to use the ZTEX board? I'm getting error -3 at the beginning. I've got the 1.15y board.
In my limited ZTEX knowledge (nelisky wrote that code and I don't have one of the boards):
Support for the 1.15y board was added with 2.3.6
(it was in 2.3.5 also, but best not to use 2.3.5)

But not in 2.3.4 or earlier
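(Worth noting: ZTEX support, like BitForce support, is a configure-time option, so a binary built without it won't see the board at all. Building from source, assuming a standard tree:)
Code:
./configure --enable-ztex && make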
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
I have two ASUS 5770s running on stock Ubuntu 11.04.  aticonfig can read the GPU temps just fine, but these stats don't appear in cgminer.  I've tried numerous versions of cgminer and AMD SDKs, all with the same results.  Sad

...

Anyone have any ideas what's causing this?
First guess (from the README):
export DISPLAY=:0

(if not, then it becomes a question of how you didn't set up the GUI, or other such things)

I would also suggest you use the current version.
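A quick way to check is to confirm, in the very environment cgminer runs in, that the X display is reachable and the temps are readable there too (assuming a single local display; aticonfig's --odgt prints the temperatures):
Code:
export DISPLAY=:0
aticonfig --odgt --adapter=all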
newbie
Activity: 42
Merit: 0
I have two ASUS 5770s running on stock Ubuntu 11.04.  aticonfig can read the GPU temps just fine, but these stats don't appear in cgminer.  I've tried numerous versions of cgminer and AMD SDKs, all with the same results.  Sad

https://img.skitch.com/20120429-tgisxymgwsgdj83egw2xr7yue2.jpg

https://img.skitch.com/20120429-8rxk84cg6k3gxr4mb2d933kjpt.jpg

Anyone have any ideas what's causing this?
legendary
Activity: 1428
Merit: 1001
Okey Dokey Lokey
Someone help me via PM!!! I tried to update my video drivers and I ended up installing SDK 2.6. So I followed some advanced removal steps that told me to delete some /windows/inf .inf files, and when I went to install SDK 2.4 it was fine, but when trying to install the display drivers I ALWAYS get ".inf file not found blah blah cannot install".

I've even tried a system restore, and that device manager install thing doesn't work! Please help me! My hashing power and main computer are practically gone!!!
sr. member
Activity: 274
Merit: 250
Can anyone tell me how to use the ZTEX board? I'm getting error -3 at the beginning. I've got the 1.15y board.
sr. member
Activity: 457
Merit: 251
Hmm... the problem's gone now. Never mind then, sorry for bothering you. I think it has to do with my installing ntpd while cgminer was running.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There seems to be a problem with the share/time counts, as I only have a small 220 MH/s and cgminer is showing me a constantly growing 37.14 shares per minute.

Con - I should have quoted this in my last post...

The fact that he mentions 37.14 displayed means he is NOT using 2.3.6 and should update...

1 decimal is fine for U, as you said...

I am using 2.3.6... I just recently cloned it from the git repo and built it. The number's at 128.89 shares/min now.
What build options did you use? Or:

What is the output of one of:
1) java API config
or
2) echo -n config | nc -4 127.0.0.1 4028 ; echo

If you are on Windows and 1) doesn't work, or you have problems getting either to work, just ask.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Sorry if this is a dumb question ckolivas:

Is there any advantage to having cgminer listen on multiple large pools for LPs even if you have no intention of mining there? For example:

Pool 0 - The real desired pool
Pool 1 -  Large pool number one
Pool 2 - Large pool number two
Pool 3 - Large pool number three
That is not a dumb question at all and in fact you are right in thinking there may be an advantage, but it is complicated.

The reason comes down to what happens at longpoll time. A longpoll occurs when the block on the network has changed. This is the time you are most likely to submit stale shares, because you are still working on the old block while the new block is already spreading throughout the network. Furthermore, once the longpoll hits, you have to throw out all work and ask your pool(s) for more work, so it is the time you are most likely to have a drop in hashrate while waiting for the new work (this is why older releases of cgminer used to say 'waiting for fresh work').

Because cgminer now checks longpolls from *any* pool you are connected to, it can tell when the block changes on the network, often faster than your primary pool finds out, because you may be connected to the pool which discovered the block as well. So cgminer then knows to stop working on anything from that old block, in anticipation that it would be wasted work. However, until your primary pool discovers that the block has changed, you cannot actually get useful work from it.

The beauty of longpoll, though, is that it is actually giving cgminer work as well, so you will be getting work from the backup pools to fill in the trough period until your primary pool finds out the block has changed. For this to be useful, though, you do actually have to be happy for the backup pools to get some of your shares over time, so that enough shares accumulate to eventually give you a payout from them. If you enable the --failover-only option, you lose this benefit: not only will cgminer stop working on the old block before your primary pool discovers the block has changed, but it won't accept any work your primary pool gives you until then (that work is still for the old block), so it will dip in hashrate for longer.
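By way of illustration only (the URLs and credentials below are placeholders, not real pools), the four-pool setup above is just pools listed in priority order on the command line, and --failover-only is the switch that turns off the backup-work behavior described above:
Code:
cgminer -o http://realpool:8332 -u user -p pass \
        -o http://large1:8332 -u user -p pass \
        -o http://large2:8332 -u user -p pass \
        -o http://large3:8332 -u user -p pass
# add --failover-only to use the backups only when pool 0 cannot supply work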

There is also one other potential use that I'm not sure if I should point out or not Wink

If you see this:
Code:
 [2012-04-29 13:59:37] LONGPOLL from pool 3 detected new block
 [2012-04-29 13:59:39] LONGPOLL from pool 0 requested work restart

You now have a way of knowing which pool is LIKELY (though not certainly) to have found that block, if you have most of the major pools in your setup. This means you now have a way to *cough* choose who to hop to, if you're so inclined...