
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0

legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...

the imbalance between pools that support rolltime and those that don't will now be extreme in load balance strategy.

What exactly is "rolltime"?

Thanks,
Sam
The nonce size in bitcoin is too small - it is 32 bits (sorta like MS-DOS saying 640K will always be big enough)
Each piece of work you get from a pool allows you to attempt 2^32 (~4 billion) hashes - i.e. one for each nonce value.

For a single MiniRig, this means that you need to get over 350 pieces of work from the pool per minute.
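As a rough sanity check on that figure, assuming a MiniRig hashing around 25.2 GH/s (an assumed round number, not stated in the post), the getwork rate works out like this:

```python
# Back-of-the-envelope check of the getwork rate quoted above.
# Assumes ~25.2 GH/s for a MiniRig (assumption for illustration);
# each getwork covers the full 32-bit nonce range, i.e. 2^32 hashes.
hashrate = 25.2e9            # hashes per second (assumed)
nonce_space = 2 ** 32        # hashes per piece of work

getworks_per_minute = hashrate * 60 / nonce_space
print(round(getworks_per_minute))  # ~352, i.e. "over 350 per minute"
```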

The hack solution to this, that someone came up with in the past, was to instead edit the time field (roll it forward).
Thus if you add 1 second to the time field, you can then do another 4 billion hashes on the same piece of work you got from the pool (without having to ask for and wait for more work from the pool) since you are now hashing something different.
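A minimal sketch of what "rolling" the time field means, using the standard 80-byte block header layout (version, prev hash, merkle root, ntime, nbits, nonce); the dummy header bytes and the `roll_ntime` helper are illustrative only, not pool code:

```python
# Toy sketch of rolling ntime in an 80-byte Bitcoin block header.
# Header layout: version(4) + prev_hash(32) + merkle_root(32)
#                + ntime(4) + nbits(4) + nonce(4) = 80 bytes,
# so ntime is a little-endian uint32 at byte offset 68.
import hashlib
import struct

def block_hash(header: bytes) -> bytes:
    # Bitcoin hashes the 80-byte header with double SHA-256.
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def roll_ntime(header: bytes, seconds: int = 1) -> bytes:
    ntime = struct.unpack_from("<I", header, 68)[0]
    rolled = bytearray(header)
    struct.pack_into("<I", rolled, 68, ntime + seconds)
    return bytes(rolled)

work = bytes(80)                 # placeholder header, not real work
rolled = roll_ntime(work, 1)
# Same piece of work, time +1 second => a fresh 2^32 nonce space.
assert block_hash(work) != block_hash(rolled)
```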

Thus most pools now support this.
If they don't - get a new pool.

It means you get less work from the pool to do the same amount of hashing.
The E: % value will tell you how much less.
e.g. if you have an E: of 200% that means you are rolling each piece of work you get, on average, once
Thus you are getting half as much work from the pool to do the same amount of hashing.
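That relationship can be put in a toy formula (my own simplified reading of the E: figure, not cgminer's exact accounting):

```python
# Simplified model of the E: (efficiency) percentage described above:
# hashes-worth of work actually done vs getworks fetched from the pool.
def efficiency_pct(works_fetched: int, total_work_done: int) -> float:
    # total_work_done counts each roll as one extra piece of work.
    return 100.0 * total_work_done / works_fetched

print(efficiency_pct(100, 100))  # 100.0 -> no rolling
print(efficiency_pct(100, 200))  # 200.0 -> each work rolled once on average
```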

If a pool doesn't support roll-n-time then perfect "no rolling" E: is 100%

It doesn't reduce the number of shares you have to send back to the pool however.
The pool would have to support high difficulty shares to reduce that also.
But it does reduce the amount of work you request from the pool.

You can add up to 7200 seconds to the time field and bitcoind will still accept it as a valid block solution if you find a block
... thus why the timestamps in the bitcoin blockchain are not to be taken too seriously.

A larger nonce range in bitcoin would solve this correctly, rather than a time hack, however that would require a hard fork, and the bitcoin devs are scared of hard forks coz they do soft forks so badly.

https://bitcointalksearch.org/topic/handle-much-larger-mhs-rigs-simply-increase-the-nonce-size-89278

Edit:
The other solutions being bandied about (no doubt one of which will be used) are to move the pool's work to the miner.
Thus pretty much all the pool will be doing is counting shares.
The miner (or some other program run by the miner) will be doing the work of dealing with longpolls, transaction selection and everything related to that.

If this were to take place, then I would suggest anyone who implements such a solution, also put a fee in the coinbase to cover the work you are doing for the pool .........
sr. member
Activity: 362
Merit: 250
Error when compiling CGMINER, versions 2.3.4-2.3.6. Versions 2.3.1-2.3.3 compile fine.

...

Thanks for fix of my bug in Cgminer 2.4.0! Forgot to tell this before.
legendary
Activity: 1540
Merit: 1001
New release - 2.7.0, August 18th 2012

First time I've upgraded since 2.5.4 I think.

Everything is working as advertised.

Three things I noticed that I thought I'd mention:

- On both my miners, one with 4 GPUs, and one with 3, initially on startup the highest number one (lowest on the list) shows a massive amount of work done compared to the others, like 10x as much.  They eventually balance out, but on startup it is unbalanced.

- Failover is working much better.  I run two copies of p2pool on two different machines for backup purposes.  The backup would almost always get a very small amount of work from the miners.  Now the backup is getting zero, like it's supposed to.

- Just for fun I tried changing the intensity to dynamic.  I normally run it at 8 for my 7970s.  Every GPU I tried it on, cgminer would change it to 7 and it wouldn't move from there.  Wouldn't say D, it would say 7.  I ended up changing them all to 9, which before caused problems, but now it works and increases output.  10 just jacks up CPU usage with no perceptible increase in hash rate.

As usual, great work.  I just sent 2 long overdue coins your way.

Regards,

M.

newbie
Activity: 63
Merit: 0
What is a "magnitude rig"?

He means a rig of any magnitude (power) - any device which mines bitcoins, in our case - will be kept busy with work. In other words, it's trying to reduce the time spent with no work.
sr. member
Activity: 378
Merit: 250
Why is it so damn hot in here?
Still getting intensity pegging down to -10 on dynamic threads on win 7 x64.  At the same time, when it drops to -10, cgminer starts eating up a whole core of CPU power.   Undecided

This change happened sometime after 2.6.1.  Probably related to the change in how cgminer handles dynamic intensity that was in 2.6.5, not 100% sure though since my update path was 2.6.1, 2.6.5, 2.7.0.  First saw it in 2.6.5.  Does not happen in 2.6.1.

Any GPU set to dynamic intensity will eventually peg to -10.  Sometimes very shortly after starting, sometimes after running for a while.  I have not yet been able to figure out what the trigger is.

Tested with a clean copy of 2.7.0 with no prefs file after a cold system reboot.
full member
Activity: 373
Merit: 100
I shortened my list from 4 to 2 so now it's abcpool and btcguild and made abcpool the primary.  I seem to be mining on both roughly equally.
I switched the pools so that btcguild is primary and re-ran.  Same result: roughly equal mining on both pools.

On my setup (debian testing, cgminer compiled from git) my primary pool gets ~9/10 shares and the rest are distributed among the backup pools. This is while the heat is making the router I'm connected to act up, so the network is in generally bad shape from my point of view. IOW, even though some of the getworks get dropped, only about 10% of the shares go to backup pools.
It might be a good idea to investigate whether your setup isn't somehow problematic...
legendary
Activity: 3583
Merit: 1094
Think for yourself
any magnitude rig will be kept solidly busy

What is a "magnitude rig"?

New pool strategy: Balance.
--balance           Change multipool strategy from failover to even share balance
This is to differentiate itself from the existing pool strategy, Load balance:
--load-balance      Change multipool strategy from failover to efficiency based balance

Now I finally know what load balance means and understand why I never got balanced shares across pools/servers.  Thanks for the clarification.

the imbalance between pools that support rolltime and those that don't will now be extreme in load balance strategy.

What exactly is "rolltime"?

Thanks,
Sam
legendary
Activity: 916
Merit: 1003
I've heard some complaints on abcpool's thread about getting messages "pool not providing work fast enough".  I've seen this error myself occasionally.  I figured maybe abcpool is slower than btcguild so I set up an experiment:

I shortened my list from 4 to 2 so now it's abcpool and btcguild and made abcpool the primary.  I seem to be mining on both roughly equally.
I switched the pools so that btcguild is primary and re-ran.  Same result: roughly equal mining on both pools.
full member
Activity: 373
Merit: 100
You should probably search the thread on the difference between simple failover and --failover-only; I'll just quickly mention that simple failover gets work from backup pools when the queue runs low where --failover-only only gets work from backup pools when the primary pool fails. This means that --failover-only will probably stop mining for a short time when your primary pool goes down.
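The difference can be sketched as a toy pool-selection function (assumed behaviour based on the description above, not cgminer's real scheduler):

```python
# Toy model of simple failover vs --failover-only (NOT cgminer code):
# simple failover tops up from a backup pool when the work queue runs
# low, while --failover-only uses backups only when the primary fails.
def pick_pool(pools, queue_low, failover_only):
    """pools[0] is the primary; each pool is {'name': str, 'alive': bool}."""
    primary, backups = pools[0], pools[1:]
    live_backup = next((b for b in backups if b["alive"]), None)
    if primary["alive"]:
        # Simple failover "leaks" work to a backup on a low queue;
        # --failover-only sticks with the primary regardless.
        if queue_low and not failover_only and live_backup:
            return live_backup
        return primary
    # Primary down: both modes fall back to a live backup (with
    # --failover-only the miner may idle briefly until this happens).
    return live_backup

pools = [{"name": "primary", "alive": True}, {"name": "backup", "alive": True}]
print(pick_pool(pools, queue_low=True, failover_only=False)["name"])  # backup
print(pick_pool(pools, queue_low=True, failover_only=True)["name"])   # primary
```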
legendary
Activity: 916
Merit: 1003

Did you specify the --failover-only flag (or set that option some other way)? Default failover leaks work to backup pools and is supposed to.

I just copied my command line shortcut over from the previous version.  I never knew about "--failover-only" before.

**later**
That did the trick.  My pool list is abcpool.co, btcguild.com, api.bitcoin.cz, mining.eligius.st.  Without --failover-only it seems to just switch among all 4 as if it were load balancing.  I'm running 200 Mh/s on cable modem.
full member
Activity: 373
Merit: 100
Right off I noticed 2.7.0 is mining on my backup pools even though I'm using failover.  Version 2.6.5 says pool 0 (my primary) is online and it stays on it.  Version 2.7.0 seems to mine all the pools in my list in spite of my pool management strategy being failover.

Did you specify the --failover-only flag (or set that option some other way)? Default failover leaks work to backup pools and is supposed to.
newbie
Activity: 63
Merit: 0
New release - 2.7.0, August 18th 2012

Oh I am very excited about this release. I just fired it up, and immediately I can see a good difference. I think I will finally be able to drop 2.5.0, things are looking splendid. I'll keep you informed. I am so, so happy about this  Grin

Thank you for everything you do ck.
legendary
Activity: 916
Merit: 1003
Right off I noticed 2.7.0 is mining on my backup pools even though I'm using failover.  Version 2.6.5 says pool 0 (my primary) is online and it stays on it.  Version 2.7.0 seems to mine all the pools in my list in spite of my pool management strategy being failover.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
cut/paste ...

2.7.0
An Xubuntu 11.04 x86_64 executable is in my github downloads called cgminer-2.7.0a
https://github.com/kanoi/cgminer/downloads

For anyone who didn't realise, it's just the executable file to put in place of 'cgminer'
Nothing else needs changing
First get and extract the full binary release from ckolivas and then copy my file in place of 'cgminer'

No Problems so far on my BFL (my 2xGPU+2xIcarus is testing another code change at the moment)

The same configure options as ckolivas' binary version
In case anyone was wondering:
CFLAGS="-O2 -W -Wall" ./configure --enable-icarus --enable-bitforce --enable-ztex --enable-modminer --enable-scrypt

I have also added a WinXP cgminer-2.7.0a.exe

This ONLY has BFL + ICA (as below) thus it doesn't need a computer with OpenCL on it
CFLAGS="-O2 -W -Wall" ./configure --enable-icarus --enable-bitforce

You will most likely also need the windowsdlls.zip file there in my downloads since some of my *.dll might be slightly different to ckolivas
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
New release - 2.7.0, August 18th 2012

Normally a minor version (2.6->2.7) update brings with it instability, but in fact this release includes some very heavily tested code that was a long time in the making and because of its magnitude and impact it warranted a version update.


Human readable changelog:

The main change in this version is a complete rewrite of the getwork requesting mechanism. I've been slowly hacking away at it for some time, but finally gave up in disgust and have rewritten it almost entirely. Previously mining threads would occasionally throw out a request for more work, some arbitrary test would be done on whether more work should be requested, and it handed off the message to another thread which spawned another thread and that then sent the request and ... anyway the mechanism was so asynchronous that the arse didn't know what its head was doing by the time it was deciding on what to do for work. Worse yet it was hard to find the right place to reuse work and so it was never reused to its utmost potential. This is mostly my fault for gradually hacking on more and more asynchronous threaded components to cgminer, and the demands for getting work have been increasing sharply of late with new hardware.

The rewrite involves scheduling a new request based on the rate the old work items get used up, and is much better at predicting when it needs to leak work to backup pools and less likely to throw a "pool is not providing work fast enough" message. Overall you should now see much more Local Work (LW), the efficiency will be higher on pools that support rolltime, less work will be discarded, and any magnitude rig will be kept solidly busy - note this MAY mean your overclocks will become that much more stressed if you have set clocks very aggressively.

Thanks to numerous people who tested this on IRC during its development phase. For many of you, you'll be wondering what the fuss is about cause it will just appear as business as usual.

New pool strategy: Balance.
--balance           Change multipool strategy from failover to even share balance
This is to differentiate itself from the existing pool strategy, Load balance:
--load-balance      Change multipool strategy from failover to efficiency based balance

With the change to queueing and more roll work being possible than ever before, the imbalance between pools that support rolltime and those that don't will now be extreme in load balance strategy. To offset that, and since the number of people using load balance has been increasing, the new strategy was added to try and give roughly the same number of shares to each pool. This required some code to estimate a rolling average work completion rate that was not dependent on difficulty, and would cope with dips and peaks as pools are enabled/disabled/fail. Otherwise you could end up mining 100% on solo, since solo mining only ever submits possible block solutions, not shares.
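The rolling-average idea could be sketched like this (a hypothetical decaying average with an assumed smoothing constant, not the actual implementation):

```python
# Sketch of the BALANCE strategy idea described above (illustrative
# only): keep a decaying per-pool average of difficulty-1 solutions
# per minute, then direct the next work to the least-served pool.
def decay_avg(old: float, sample: float, alpha: float = 0.1) -> float:
    # Exponentially weighted rolling average; alpha is an assumed constant.
    return (1 - alpha) * old + alpha * sample

def pick_least_served(rates: dict) -> str:
    # rates maps pool name -> rolling diff1-solutions-per-minute.
    # Counting diff1 solutions keeps pools with different share
    # difficulties (and solo mining) comparable.
    return min(rates, key=rates.get)

rates = {"pool0": 50.0, "pool1": 20.0}
print(pick_least_served(rates))  # pool1 gets the next piece of work
```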

New statistic: Work Utility. With higher difficulty share supporting pools in the testing phase, it is going to be hard to monitor overall work performance based on successful share submission. To counter this, work utility is a value based on the amount of difficulty 1 shares solved, whether they're accepted or rejected by the pool. This value will always be higher than the current "utility". (Note that difficulty 1 share counting on scrypt is not supported since the work is compared to the target on the GPU itself, but the total shares solved will be displayed).
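A quick numeric illustration of the two statistics (the numbers are made up for the example):

```python
# "Utility" counts accepted shares per minute; "Work Utility" counts
# all difficulty-1 shares solved per minute, accepted or rejected,
# which is why it is always at least as high.
def utility(accepted_shares: int, minutes: float) -> float:
    return accepted_shares / minutes

def work_utility(diff1_solved: int, minutes: float) -> float:
    return diff1_solved / minutes

# e.g. over 60 minutes: 110 diff1 shares solved, 100 accepted, 10 rejected.
print(utility(100, 60))       # ~1.67 accepted shares/min
print(work_utility(110, 60))  # ~1.83 diff1 shares/min
```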

Other minor bugfixes.


Full changelog:

- Introduce a new statistic, Work Utility, which is the number of difficulty 1
shares solved per minute. This is useful for measuring a relative rate of work
that is independent of reject rate and target difficulty.
- Implement a new pool strategy, BALANCE, which monitors work performed per pool
as a rolling average every 10 minutes to try and distribute work evenly over all
the pools. Do this by monitoring diff1 solutions to allow different difficulty
target pools to be treated equally, along with solo mining. Update the
documentation to describe this strategy and more accurately describe the
load-balance one.
- Getwork fail was not being detected. Remove a vast amount of unused variables
and functions used in the old queue request mechanism and redefine the getfail
testing.
- Don't try to start devices that don't support scrypt when scrypt mining.
- 0 is a valid return value for read so only break out if read returns -1.
- Consider us lagging only once our queue is almost full and no staged work.
- Simplify the enough work algorithm dramatically.
- Only queue from backup pools once we have nothing staged.
- Don't keep queueing work indefinitely if we're in opt failover mode.
- Make sure we don't opt out of queueing more work if all the queued work is
from one pool.
- Set lagging flag if we're on the last of our staged items.
- Reinstate clone on grabbing work.
- Grab clones from hashlist wherever possible first.
- Cull all the early queue requests since we request every time work is popped
now.
- Keep track of staged rollable work item counts to speed up clone_available.
- Make expiry on should_roll to 2/3 time instead of share duration since some
hardware will have very fast share times.
- Do the cheaper comparison first.
- Check that we'll get 1 shares' worth of work time by rolling before saying we
should roll the work.
- Simplify all those total_secs usages by initialising it to 1 second.
- Overlap queued decrementing with staged incrementing.
- Artificially set the pool lagging flag on pool switch in failover only mode as
well.
- Artificially set the pool lagging flag on work restart to avoid messages about
slow pools after every longpoll.
- Factor in opt_queue value into enough work queued or staged.
- Roll work whenever we can on getwork.
- Queue requests for getwork regardless and test whether we should send for a
getwork from the getwork thread itself.
- Get rid of age_work().
- 0 is a valid return value for read so only break out if read returns -1.
- Offset libusb reads/writes by length written as well in ztex.
- Cope with timeouts and partial reads in ztex code.
- fpga serial I/O extra debug (disabled by default)
legendary
Activity: 1286
Merit: 1004
2.6.4 has been working well with 15 BFL Singles for a week!
legendary
Activity: 1428
Merit: 1001
Okey Dokey Lokey
sr. member
Activity: 344
Merit: 250
Flixxo - Watch, Share, Earn!
Does anyone have experience with CGminer and keeping its configuration on Dropbox without the Dropbox Windows client?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I would sometimes get as much as 2/3 at one pool and 1/3 at the other. Does that sound like a correct ratio that the rollntime could cause?
Yes, and it is going to get MUCH worse on the next release as I clean up the queueing to make it as efficient as possible.
sr. member
Activity: 271
Merit: 250
When using the load balance feature between 2 pools, I always have more accepted shares on pool 0 than pool 1. If I switch the pools on my .conf file I get the same result. Pool 0 always has a considerably higher amount of accepted shares than pool 1.

I have noticed a similar behavior too, but with me pool 3 gets almost all of the shares and pools 0, 1 and 2 get starved.
Sam
This is simply that one pool has rolltime and the others do not. It's such an efficient option that it's really a waste of resources to not use it on the one pool since the others have not moved out of the dark ages. But I guess I could add more black magic to cope with that....

I have been able to get a better balance between 2 pools by using the rotate option. I get a lot closer to having even shares on each pool. With load balance, at the times I had the biggest differences between pools, I would sometimes get as much as 2/3 at one pool and 1/3 at the other. Does that sound like a correct ratio that the rollntime could cause?

This is all an attempt to lower variance. At what cost to efficiency is my trying to split work evenly between 2 pools, versus just using the failover feature?