
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 663. (Read 5806103 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Hey Con,

I looked again through every kernel argument and compared line by line with my Python code. I found two small differences and two brackets that are not needed (see last commit https://github.com/Diapolo/cgminer/commit/68e36c657318fbe1e7714be470cf954a1d512333), but I guess they don't fix the persisting problem with false-positive nonces (perhaps you can give it a try - I have no compiler or IDE setup to test it myself). The argument order is exactly as DiaKGCN expects it, so that can't be the problem either.

It could be a problem with your changes to the output code in the kernel, a problem with the base nonces that are passed to the kernel, or something with the output buffer in the cgminer host code ... :-/. Where does the output-buffer processing reside? As I said, my kernel used ulong * natively, which I changed to uint * in one commit of my fork; I guess I need to look at it.

Edit: OMFG, I introduced a bug with one of my former commits, which changed the type of the output buffer from uint * to int * ... fixed that one! It's time for another try Con Cheesy.

Dia
Diapolo... I appreciate the effort you're putting in, and I realise you're new to collaborative coding and source-control management, but it's a good idea to make sure your code actually compiles before you ask someone else to test it for you.

Anyway... I fixed the !(find) in my local copy and it still produces hardware errors.

edit: It doesn't matter what vectors or worksize I try this with.
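The uint */int * mix-up Dia mentions is easy to reproduce with NumPy: reinterpreting a 32-bit nonce buffer as signed flips every value at or above 2^31 negative, which is one plausible way valid nonces could come back looking corrupt. This is only an illustrative sketch with made-up values, not cgminer's actual buffer code:

```python
import numpy as np

# A mock output buffer of candidate nonces, as a kernel might write them
# (these specific values are invented for illustration).
nonces = np.array([0x0000BEEF, 0x80000001, 0xFFFFFFFE], dtype=np.uint32)

# Reading the same bytes back through a signed int * pointer:
as_signed = nonces.view(np.int32)

print(as_signed)               # every value >= 2**31 comes back negative
print(int(as_signed[1]) < 0)   # True: 0x80000001 is now -2147483647
```

The bytes are identical either way; only the interpretation changes, so the bug is invisible until a nonce with the high bit set shows up.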
legendary
Activity: 1876
Merit: 1000
I had an Ubuntu box that was OK running phoenix,

but I like cgminer more, so I wanted to compile it (the box runs Ubuntu x86).

I tried, but failed, because the ncurses version was not correct.

I then upgraded from 10.04 to 10.11 (all through SSH).

After reinstalling the drivers (ssh -X), I was able to compile cgminer.

But I can't start mining except with CPU only (on 2.1.2; 2.2.3 has no CPU support, I think) - it tells me that there is no valid GPU available.

I tried automatically starting cgminer with screen and upstart (like I was told around pages 178 to 180).

It gives the same error.

I then tried the same trick I used on Debian, and nothing happens.

I'm not sure whether this Ubuntu PC has any trouble with not having a display connected.

Did you accept the SDK license?
newbie
Activity: 78
Merit: 0
I had an Ubuntu box that was OK running phoenix,

but I like cgminer more, so I wanted to compile it (the box runs Ubuntu x86).

I tried, but failed, because the ncurses version was not correct.

I then upgraded from 10.04 to 10.11 (all through SSH).

After reinstalling the drivers (ssh -X), I was able to compile cgminer.

But I can't start mining except with CPU only (on 2.1.2; 2.2.3 has no CPU support, I think) - it tells me that there is no valid GPU available.

I tried automatically starting cgminer with screen and upstart (like I was told around pages 178 to 180).

It gives the same error.

I then tried the same trick I used on Debian, and nothing happens.

I'm not sure whether this Ubuntu PC has any trouble with not having a display connected.
hero member
Activity: 772
Merit: 500
Hey Con,

I looked again through every kernel argument and compared line by line with my Python code. I found two small differences and two brackets that are not needed (see last commit https://github.com/Diapolo/cgminer/commit/68e36c657318fbe1e7714be470cf954a1d512333), but I guess they don't fix the persisting problem with false-positive nonces (perhaps you can give it a try - I have no compiler or IDE setup to test it myself). The argument order is exactly as DiaKGCN expects it, so that can't be the problem either.

It could be a problem with your changes to the output code in the kernel, a problem with the base nonces that are passed to the kernel, or something with the output buffer in the cgminer host code ... :-/. Where does the output-buffer processing reside? As I said, my kernel used ulong * natively, which I changed to uint * in one commit of my fork; I guess I need to look at it.

Edit: OMFG, I introduced a bug with one of my former commits, which changed the type of the output buffer from uint * to int * ... fixed that one! It's time for another try Con Cheesy.

Dia
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
This on what OS ?

Thanks !
Xubuntu Linux 11.04. I have not been able to recreate the problem since I posted, and the pool has also come back online now. When the problem was occurring it was easily recreated by disabling/enabling the pool. At first I thought it was a problem with my machine, so I shut it off and made sure the cards were all seated properly, then turned it back on. When cgminer ran again at startup, the problem recurred. I was able to fix it by disabling the dead pool again.
hero member
Activity: 772
Merit: 500
Wohoo, looks good so far ...

I forked cgminer, checked out diakgcn as my branch, and added a remote for the diakgcn branch in your repo. I can now edit files and make commits Cheesy.

Con, if you are now doing commits to your diakgcn branch, can I merge them via "git fetch upstream" and "git merge upstream/diakgcn" afterwards?

Can you have a look at https://github.com/Diapolo/cgminer/commits/diakgcn ... I now need to figure out how to create a pull request for the branch diakgcn.

Thanks,
Dia
I just tested it. Now instead of producing no shares at all, it is only producing hardware errors... Still needs work, I expect. Likely something in the API is broken. Check the code in findnonce.c in precalc_hash to see what variables are being used, and then the code in device-gpu.c for what parameters are being passed to your kernel in queue_diakgcn_kernel, and in what order. It should make sense.

I checked how you precompute the kernel parameters yesterday; every parameter looked good. I will investigate further - did you use vectors or no vectors? It would be best to get the non-vector code working first ...

Edit: I need some input. The values A to H in findnonce.c are "mixed" via R() into new values, so I guess A to H correspond to state2 in my Python code, and ctx_a - ctx_h would be state0 in my Python code. If this is the case, I have to recheck all kernel arguments ... I'm a bit confused right now Cheesy.

Code:
# assumes: import numpy as np; from struct import unpack
self.state  = np.array(unpack('IIIIIIII', nonceRange.unit.midstate), dtype=np.uint32)
self.state2 = np.array(unpack('IIIIIIII', calculateMidstate(nonceRange.unit.data[64:80] + '\x00\x00\x00\x80' + '\x00' * 40 + '\x80\x02\x00\x00', nonceRange.unit.midstate, 3)), dtype=np.uint32)
# rotate state2 left by three 32-bit words
self.state2 = np.array(list(self.state2)[3:] + list(self.state2)[:3], dtype=np.uint32)

Dia
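The last line of the Code snippet above rotates state2 left by three 32-bit words; the same operation can be written with np.roll, which may be easier to eyeball against the kernel-argument order. A small sketch with dummy values, not the real midstate:

```python
import numpy as np

# Eight dummy 32-bit words standing in for state2 (not real midstate data).
state2 = np.arange(8, dtype=np.uint32)

# The slicing form used in the Python miner code above ...
rotated = np.array(list(state2)[3:] + list(state2)[:3], dtype=np.uint32)

# ... is simply a left rotation by three words.
assert np.array_equal(rotated, np.roll(state2, -3))
print(rotated)  # [3 4 5 6 7 0 1 2]
```

So if A-H really do correspond to state2, the kernel arguments built from it are offset by three words relative to state0, which is worth keeping in mind when comparing the two argument lists.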
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Wohoo, looks good so far ...

I forked cgminer, checked out diakgcn as my branch, and added a remote for the diakgcn branch in your repo. I can now edit files and make commits Cheesy.

Con, if you are now doing commits to your diakgcn branch, can I merge them via "git fetch upstream" and "git merge upstream/diakgcn" afterwards?

Can you have a look at https://github.com/Diapolo/cgminer/commits/diakgcn ... I now need to figure out how to create a pull request for the branch diakgcn.

Thanks,
Dia
I just tested it. Now instead of producing no shares at all, it is only producing hardware errors... Still needs work, I expect. Likely something in the API is broken. Check the code in findnonce.c in precalc_hash to see what variables are being used, and then the code in device-gpu.c for what parameters are being passed to your kernel in queue_diakgcn_kernel, and in what order. It should make sense.
hero member
Activity: 518
Merit: 500
I just encountered a weird problem with the latest version 2.2.3. One of my backup pools went dead and it seemed to be interfering with cgminer's ability to update the statistics on top (mhs, gpu temp and fan rpm). Basically, these stats were frozen and cgminer was only acting on what they last said (so fan rpms and such were not being adjusted properly, a potentially dangerous situation). When I disabled the offending pool, stats began updating again normally. Enabled the pool again and right back to frozen stats. A couple of times tonight this caused the fans to go to 100% due to overheating because the rpms were being kept too low due to the last stat update being too long ago.

Not sure what else I can do to help track this down, pool management is set to failover, but failover only flag is not enabled.

This on what OS ?

Thanks !
hero member
Activity: 772
Merit: 500
Wohoo, looks good so far ...

I forked cgminer, checked out diakgcn as my branch, and added a remote for the diakgcn branch in your repo. I can now edit files and make commits Cheesy.

Con, if you are now doing commits to your diakgcn branch, can I merge them via "git fetch upstream" and "git merge upstream/diakgcn" afterwards?

Can you have a look at https://github.com/Diapolo/cgminer/commits/diakgcn ... I now need to figure out how to create a pull request for the branch diakgcn.

Thanks,
Dia
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
Just re-enabled the still dead pool, now stats are behaving normally. Maybe the problem is dependent on the type of network failure. When stats were frozen before, the "accepted/rejected" messages below were updating as normal. Stats were updating maybe once every 3 or 4 minutes.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Haha no chance. It just would have been waiting on a network response presumably.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
It looked almost like a thread starvation problem to me, but I've never looked at the code so take that with a lot of salt  Wink
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I just encountered a weird problem with the latest version 2.2.3. One of my backup pools went dead and it seemed to be interfering with cgminer's ability to update the statistics on top (mhs, gpu temp and fan rpm). Basically, these stats were frozen and cgminer was only acting on what they last said (so fan rpms and such were not being adjusted properly, a potentially dangerous situation). When I disabled the offending pool, stats began updating again normally. Enabled the pool again and right back to frozen stats. A couple of times tonight this caused the fans to go to 100% due to overheating because the rpms were being kept too low due to the last stat update being too long ago.

Not sure what else I can do to help track this down, pool management is set to failover, but failover only flag is not enabled.
Interesting find! I will investigate this. Thanks.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
I just encountered a weird problem with the latest version 2.2.3. One of my backup pools went dead and it seemed to be interfering with cgminer's ability to update the statistics on top (mhs, gpu temp and fan rpm). Basically, these stats were frozen and cgminer was only acting on what they last said (so fan rpms and such were not being adjusted properly, a potentially dangerous situation). When I disabled the offending pool, stats began updating again normally. Enabled the pool again and right back to frozen stats. A couple of times tonight this caused the fans to go to 100% due to overheating because the rpms were being kept too low due to the last stat update being too long ago.

Not sure what else I can do to help track this down, pool management is set to failover, but failover only flag is not enabled.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Anyway - the actual point of this is that there should be a set of steps required to add BAMT to BAMT's choice of OS.
So those technically minded do not need to trust someone else's copy of an OS.
Is there already?

I don't know all the technical stuff, but I know it's just a standard Debian live distro that runs a few config files at startup.  You really just dd the .img, run the fixer to install current fixes, edit one file, restart the mine service, and you're mining.  Don't get me wrong, I use and recommend your install guide and it's great, but with multiple headless rigs nothing is faster to set up than BAMT.
Heh I'm not out to get lots of people to use my script Smiley

In fact there are issues with using USB, and even low memory with an HDD install, that I mentioned (about 2 weeks ago?) when I had trouble with my own script Tongue
I need to update that soon, now that I think I've worked it all out ...
(main problem: if you ever forget to 'sync' before shutdown, it can trash it ...)

I just thought I'd mention the reasoning behind why I use a base OS that others may not think of.

And even then if the install is documented to go on top of an OS, that resolves that also.
hero member
Activity: 642
Merit: 500
Edit: no change at all Sad

I know this sounds very strange, but watching this thing it almost looks like it's trying to maintain a total hashrate of less than 500 MH/s or something.

The GPUs' hashrates fluctuate constantly, going from 20 to 100+ MH/s and back down - each one keeps changing, but the aggregate speed stays around 500 MH/s for all 5 cards...
Check your CPU usage, my friend.  I'll bet it's being bottlenecked.  This is exactly how it'll behave if your threads are starving for CPU time.
legendary
Activity: 1316
Merit: 1005
My reason for having my install script is coz I don't want to trust someone else to put together the entire OS with bitcoin as the target in mind.
I need only trust the applications I install.

This was part of my rationale for looking into switching over to Arch, the main one being rolling updates as opposed to major point releases. That alone makes it easier to keep current. Now all I need is the time to handle the first run.

And nice Accepted/Rejected ratio Smiley
donator
Activity: 798
Merit: 500
Anyway - the actual point of this is that there should be a set of steps required to add BAMT to BAMT's choice of OS.
So those technically minded do not need to trust someone else's copy of an OS.
Is there already?

I don't know all the technical stuff, but I know it's just a standard Debian live distro that runs a few config files at startup.  You really just dd the .img, run the fixer to install current fixes, edit one file, restart the mine service, and you're mining.  Don't get me wrong, I use and recommend your install guide and it's great, but with multiple headless rigs nothing is faster to set up than BAMT.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
That is nice, ck.  Good to see you're enjoying that new toy.  Any idea how many watts it's drawing for those 694 MH/s?
No idea I'm afraid.

GPU 0: 695.0 / 693.5 Mh/s | A:1246  R:3  HW:0  U:9.91/m  I:11
72.0 C  F: 71% (4139 RPM)  E: 1200 MHz  M: 1050 Mhz  V: 1.170V  A: 99% P: 5%

GPU 1: 428.2 / 427.3 Mh/s | A:729  R:2  HW:0  U:5.80/m  I:9
73.5 C  F: 62% (5111 RPM)  E: 960 MHz  M: 835 Mhz  V: 1.175V  A: 99% P: 5%

GPU 2: 428.2 / 427.4 Mh/s | A:781  R:1  HW:0  U:6.21/m  I:9
73.5 C  F: 48% (4292 RPM)  E: 960 MHz  M: 835 Mhz  V: 1.175V  A: 99% P: 5%

GPU 3: 437.7 / 436.7 Mh/s | A:810  R:1  HW:0  U:6.45/m  I:9
73.5 C  F: 64% (3894 RPM)  E: 1000 MHz  M: 875 Mhz  V: 1.175V  A: 99% P: 5%

They all have different airflow characteristics due to their place on the motherboard, GPU 3 is in the coolest spot followed by 0, 2, 1.

Heat generation should be proportional to energy usage, but the back of the 7970 is more open than the 6970's, so I think the airflow through it is better. Pulling a rough estimate out of my arse, I'd say it uses the same amount of power volt-for-volt, clockspeed-for-clockspeed as the 6970, and I happen to be running it at a higher clockspeed. The difference, of course, is that it produces a much higher hashrate than the 6970 at the same clocks and voltage.
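ck's volt-for-volt, clock-for-clock estimate matches the usual dynamic-power approximation for CMOS chips, P ∝ f · V². A quick back-of-the-envelope sketch using the clocks and voltages from the status lines above (purely illustrative, not a measurement):

```python
# Rough relative dynamic power: P is proportional to f * V^2
# (the standard CMOS dynamic-power approximation).
def relative_power(freq_mhz, volts, base_freq_mhz, base_volts):
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

# GPU 0 (7970 @ 1200 MHz, 1.170 V) relative to GPU 1 (6970 @ 960 MHz, 1.175 V),
# numbers taken from the status output quoted above.
ratio = relative_power(1200, 1.170, 960, 1.175)
print(round(ratio, 2))  # ~1.24
```

Under that assumption the 7970 would draw roughly 24% more power while delivering about 62% more hashrate (695 vs 428 MH/s), which is consistent with ck's point that it is simply far more efficient at the same clocks and voltage.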
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Slightly off topic, but adding RPC to cgminer was a major enhancement, and it allowed cgminer to be integrated into BAMT. For those of you looking to use cgminer as the engine for a more comprehensive set of monitoring and configuration tools, BAMT is the solution.
...
So BAMT is a set of programs with a web front end that he packaged an entire OS with.
Basically I'd guess it's web code and a background program with some database storage.

I've said this about linuxcoin before and I guess I'll say it about BAMT now too (for people with 1 or 2 rigs)
My reason for having my install script is coz I don't want to trust someone else to put together the entire OS with bitcoin as the target in mind.
I need only trust the applications I install.
More so, I'm sure that the developer of BAMT doesn't verify every package he puts in the OS, whereas with any original OS install you are simply trusting the OS supplier for the base OS, and then any extra non-OS-supplied apps as you choose.

Also, it's not just an issue of who the developer is - you have to trust his software anyway - but there's the other unfortunate issue with pre-installed OS images: 'other' downloads of them appear, and people learn the hard way not to trust those 'other' downloads.

Anyway - the actual point of this is that there should be a set of steps required to add BAMT to BAMT's choice of OS.
So those technically minded do not need to trust someone else's copy of an OS.
Is there already?

Of course, if you have lots (or hundreds) of mining rigs, any manual install procedure is going to take a lot of time ... but making a copy of a USB stick or an HDD that's already set up should make that a lot quicker, at least to start the process
(yeah you usually can just image a linux HDD and run it on other different hardware)