Topic: [ANN]: cpuminer-opt v3.8.8.1, open source optimized multi-algo CPU miner - page 24.

legendary
Activity: 1470
Merit: 1114
http://prntscr.com/ij3bkh
http://prntscr.com/ij3flq
Windows 10, local wallet
3.8.2.1 -> http://prntscr.com/ij3qkp

interzone/c11 simply crashed -> 3.8.2.1 http://prntscr.com/ij3pma
BWK/nist5 also crashed -> 3.8.2.1 http://prntscr.com/ij3ou2
Solaris/xevan http://prntscr.com/ij3lnl -> 3.8.2.1 http://prntscr.com/ij3nmc
UTC/scryptjane:14 http://prntscr.com/ij3zoo -> 3.8.2.1 http://prntscr.com/ij40du
on localhost / Win 10

Nist5 looks pretty good in 3.8.2.1 using GBT. Did you ever find a block? This would
indicate that GBT basically works as is.
newbie
Activity: 182
Merit: 0
But CPU performance is static; once it's benchmarked, why would you need/want continuous
monitoring?
See #3515
legendary
Activity: 1470
Merit: 1114
All algos back to normal, no problems at all.

Thanks!

Thanks for your help, and sorry for the mess.
legendary
Activity: 1470
Merit: 1114
But the bottom line is that hashrate is a function of how many hashes a processor can calculate.

Agree with your comments about pool stats; that is what I am aiming to replicate because
it represents actual earnings.

But CPU performance is static; once it's benchmarked, why would you need/want continuous
monitoring?



I'm using an automated script (a personal fork of Megaminer) for all benchmarking and mining.
For benchmarking, I prefer to mine to a real pool, earning a little along the way, without having to use "--benchmark".
For some algos, the time until the initial share can be very high (as in minutes), which makes for unnecessarily long benchmarks.
Sometimes the first share is submitted very fast (for me this often happens with Lyra2z), and the rate reported until the next share is submitted reflects only 1 or 2 threads, which is very low and thus skews the benchmark.
It is also used as a watchdog to verify that the miner really works as it should.

I'll consider that after I come up with a better way to report earnings. I'll have to figure out how pools convert a share
to a hashrate, then do the same calculation locally if I have all the variables. That would be a new data term, probably
measured as a hash rate and normalized by share submission rate and share difficulty.
The existing HS or KHS could then be used as you request, to represent CPU performance.
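
As a rough sketch of the calculation I have in mind (just an illustration, not committed code, and the names are made up):

Code:
/* Sketch only: derive an effective hashrate from accepted shares,
 * using the common convention that a share at difficulty D represents
 * about D * 2^32 hashes (the exact factor varies by algo and pool). */
double share_based_hashrate( double accepted_shares, double share_diff,
                             double elapsed_sec )
{
   if ( elapsed_sec <= 0.0 ) return 0.0;
   return accepted_shares * share_diff * 4294967296.0 / elapsed_sec;
}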
newbie
Activity: 15
Merit: 0
All algos back to normal, no problems at all.

Thanks!
member
Activity: 473
Merit: 18
But the bottom line is that hashrate is a function of how many hashes a processor can calculate.

Agree with your comments about pool stats; that is what I am aiming to replicate because
it represents actual earnings.

But CPU performance is static; once it's benchmarked, why would you need/want continuous
monitoring?



I'm using an automated script (a personal fork of Megaminer) for all benchmarking and mining.
For benchmarking, I prefer to mine to a real pool, earning a little along the way, without having to use "--benchmark".
For some algos, the time until the initial share can be very high (as in minutes), which makes for unnecessarily long benchmarks.
Sometimes the first share is submitted very fast (for me this often happens with Lyra2z), and the rate reported until the next share is submitted reflects only 1 or 2 threads, which is very low and thus skews the benchmark.
It is also used as a watchdog to verify that the miner really works as it should.
legendary
Activity: 1470
Merit: 1114
cpuminer-opt-3.8.3.2

https://github.com/JayDDee/cpuminer-opt/releases/tag/v3.8.3.2

Reverted gbt changes from v3.8.3 that broke getwork.
Reverted scaled hash rate for API, added HS term in addition to KHS.
Added blocks solved to console display and API.
legendary
Activity: 1470
Merit: 1114
But the bottom line is that hashrate is a function of how many hashes a processor can calculate.

Agree with your comments about pool stats; that is what I am aiming to replicate because
it represents actual earnings.

But CPU performance is static; once it's benchmarked, why would you need/want continuous
monitoring?

member
Activity: 473
Merit: 18

On a sidenote, the feature I would love to see is continuous hashrate reporting: not only when a share is found (which can take a while on some algos with the current version),
but continuously.


Wouldn't we all. The problem is that the locally calculated hashrate is completely artificial.

To put this in perspective: while testing blakecoin with a very high stratum diff, the miner was reporting
150 MH/s, but a single share submit resulted in the pool displaying 469 MH/s. It remained at that rate
until the share fell out of the sample window. So the pool-reported hashrate went from 0 to 469 and
back to zero every time a share was submitted.

So I have to ask: what exactly do you expect from a continuous hashrate display that has no real
connection to what the pool is seeing?

I would like to see a measure that uses share submission rate and share difficulty, so the miner can
make calculations based on real data.


Pools don't report a "more real" hashrate; they calculate an estimate based on the number of shares submitted and the current difficulty.
If you submit one share in 5 minutes, they don't have enough information to calculate the real rate.

When difficulty is low enough to supply the pool with enough shares, the pool will show a hashrate close to the one calculated in the miner.
The fewer shares submitted, the bigger the role "luck" plays in the hashrate calculated by the pool.

But the bottom line is that hashrate is a function of how many hashes a processor can calculate.
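
To put numbers on it, using the common diff-1 = 2^32 hashes convention (the exact factor varies by algo and pool): with a 10 minute window, a miner doing a true ~7 MH/s on diff-1 shares averages one share per window (1 * 2^32 / 600 ≈ 7.2 MH/s), so whether the window happens to catch 0, 1, or 2 shares swings the displayed rate between 0, ~7, and ~14 MH/s. That is the "luck" effect.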
full member
Activity: 392
Merit: 159
(Excuse my ignorance.)

What can you effectively mine on a CPU these days?
Would that be any CPU, or only mid/high-end ones?
newbie
Activity: 182
Merit: 0
Wouldn't we all. The problem is that the locally calculated hashrate is completely artificial.

To put this in perspective: while testing blakecoin with a very high stratum diff, the miner was reporting
150 MH/s, but a single share submit resulted in the pool displaying 469 MH/s. It remained at that rate
until the share fell out of the sample window. So the pool-reported hashrate went from 0 to 469 and
back to zero every time a share was submitted.

So I have to ask: what exactly do you expect from a continuous hashrate display that has no real
connection to what the pool is seeing?

I would like to see a measure that uses share submission rate and share difficulty, so the miner can
make calculations based on real data.
It gives you an idea of your CPU's performance. If you want to tweak it, you'll see in real time how O/C settings (for instance) affect the hashrate.
legendary
Activity: 1470
Merit: 1114

http://prntscr.com/ij3bkh
http://prntscr.com/ij3flq
Windows 10, local wallet
3.8.2.1 -> http://prntscr.com/ij3qkp

interzone/c11 simply crashed -> 3.8.2.1 http://prntscr.com/ij3pma
BWK/nist5 also crashed -> 3.8.2.1 http://prntscr.com/ij3ou2
Solaris/xevan http://prntscr.com/ij3lnl -> 3.8.2.1 http://prntscr.com/ij3nmc
UTC/scryptjane:14 http://prntscr.com/ij3zoo -> 3.8.2.1 http://prntscr.com/ij40du
on localhost / Win 10

Excellent data; it will take some time to analyze it.

Edit: I have an initial question about this data that will affect the bug-fix release.
You initially reported that yescryptr16 crashed, but this shows that it was hashing
and submitting rejects. Were both of these tests with the same code? I need to know
whether v3.8.3.1 can hash or whether it crashes before starting to hash.

1700X or E5-2640 + Manjaro + 3.8.3.1: segmentation fault, no hashing at all http://prntscr.com/ij6v61 <- original git, compiled without any changes
8700K + Win10 + 3.8.3.1: starts working, crashes after 1-2 minutes, normal hashrate 1 kH/s +/- 5%; the Windows build is also from git, not self-compiled
I'm using the -march=znver1 -DUSE_SPH_SHA tweaks for AMD and sandybridge for the E5, that's all

Thanks, I'll play it safe for now and skip the job_id check for getwork/gbt.
newbie
Activity: 15
Merit: 0

http://prntscr.com/ij3bkh
http://prntscr.com/ij3flq
Windows 10, local wallet
3.8.2.1 -> http://prntscr.com/ij3qkp

interzone/c11 simply crashed -> 3.8.2.1 http://prntscr.com/ij3pma
BWK/nist5 also crashed -> 3.8.2.1 http://prntscr.com/ij3ou2
Solaris/xevan http://prntscr.com/ij3lnl -> 3.8.2.1 http://prntscr.com/ij3nmc
UTC/scryptjane:14 http://prntscr.com/ij3zoo -> 3.8.2.1 http://prntscr.com/ij40du
on localhost / Win 10

Excellent data; it will take some time to analyze it.

Edit: I have an initial question about this data that will affect the bug-fix release.
You initially reported that yescryptr16 crashed, but this shows that it was hashing
and submitting rejects. Were both of these tests with the same code? I need to know
whether v3.8.3.1 can hash or whether it crashes before starting to hash.

1700X or E5-2640 + Manjaro + 3.8.3.1: segmentation fault, no hashing at all http://prntscr.com/ij6v61 <- original git, compiled without any changes
8700K + Win10 + 3.8.3.1: starts working, crashes after 1-2 minutes, normal hashrate 1 kH/s +/- 5%; the Windows build is also from git, not self-compiled
I'm using the -march=znver1 -DUSE_SPH_SHA tweaks for AMD and sandybridge for the E5, that's all
legendary
Activity: 1470
Merit: 1114
I hope this explains it.

Thanks, it's becoming clear. Considering the API output isn't intended to be human-readable, dynamic
rate scaling is inappropriate. What is most appropriate is just raw H/s, with the monitoring app
responsible for making it human-readable. The hitch is that there is a legacy of using kH/s, which
doesn't work for very low-hashrate algos.

So I will go back to the initial request and just add H/s alongside kH/s. Although kH/s would be deprecated,
it would be maintained indefinitely. Since the dual reporting of hashrate is not visible to the user, there is
no harm.
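
For illustration, the summary reply would simply carry both fields, something like this (a sketch in the existing API's field-name style, not the actual source):

Code:
#include <stdio.h>

/* Sketch only: report the hashrate twice in the API summary reply,
 * raw H/s as the new HS field alongside the legacy KHS field. */
static void api_report_hashrate( char *buf, size_t len, double hashrate )
{
   snprintf( buf, len, "HS=%.2f;KHS=%.3f;", hashrate, hashrate / 1000.0 );
}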

That resolves that issue.

I am also considering adding a solution count to the API and console output. A bug-fix release shouldn't normally
include a new feature, but in this case it makes sense to coordinate multiple API changes rather than spreading
them over multiple releases.

I will reverse all the GBT changes introduced in 3.8.3 and analyze the data provided by Nokedll in more detail
before making any changes to try to support GBT.

The final issue is the role of the get_new_work changes in the problems seen. The key question is whether they
contributed to the solo mining problems reported. The role of job_id is critical, as I discovered using benchmark:
I have to be sure a job_id exists before I test it, else I get a NULL pointer dereference. There may be 3 tracks through
this function:

1. benchmark, where there is no job_id to test for new work.

2. stratum, where job_id is tested and in some instances the work may also need regeneration.

3. getwork/gbt, where there may (or may not?) be a job_id to compare, but work should never be regenerated.

I'm still not sure whether testing the job_id is safe for getwork and gbt; I'm hoping for more clarification, which will
determine whether additional checking is required.
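
In other words, the job_id test needs a guard along these lines (just a sketch of the idea, combining the have_stratum condition with NULL checks; not final code):

Code:
   /* Sketch: compare job ids only when mining stratum and both ids
    * exist, avoiding the NULL pointer deref in benchmark mode and
    * skipping the check entirely for getwork/gbt for now. */
   if ( ( memcmp( work->data, g_work->data, algo_gate.work_cmp_size )
          && clean_job )
      || ( *nonceptr >= *end_nonce_ptr )
      || ( have_stratum && work->job_id && g_work->job_id
           && strcmp( work->job_id, g_work->job_id ) ) )
   {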
legendary
Activity: 1470
Merit: 1114

On a sidenote, the feature I would love to see is continuous hashrate reporting: not only when a share is found (which can take a while on some algos with the current version),
but continuously.


Wouldn't we all. The problem is that the locally calculated hashrate is completely artificial.

To put this in perspective: while testing blakecoin with a very high stratum diff, the miner was reporting
150 MH/s, but a single share submit resulted in the pool displaying 469 MH/s. It remained at that rate
until the share fell out of the sample window. So the pool-reported hashrate went from 0 to 469 and
back to zero every time a share was submitted.

So I have to ask: what exactly do you expect from a continuous hashrate display that has no real
connection to what the pool is seeing?

I would like to see a measure that uses share submission rate and share difficulty, so the miner can
make calculations based on real data.
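
For perspective on those numbers: a pool's estimate works out to roughly share_diff * 2^32 / window_seconds per share in the sample window (the exact factor varies by algo and pool). I didn't record the actual diff or window, but as an illustration, a single share at diff ~33 in a 5 minute window gives 33 * 2^32 / 300 ≈ 470 MH/s, roughly what the pool displayed, while at a true 150 MH/s such a share only arrives about every 16 minutes, longer than the window itself.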
legendary
Activity: 1470
Merit: 1114

http://prntscr.com/ij3bkh
http://prntscr.com/ij3flq
Windows 10, local wallet
3.8.2.1 -> http://prntscr.com/ij3qkp

interzone/c11 simply crashed -> 3.8.2.1 http://prntscr.com/ij3pma
BWK/nist5 also crashed -> 3.8.2.1 http://prntscr.com/ij3ou2
Solaris/xevan http://prntscr.com/ij3lnl -> 3.8.2.1 http://prntscr.com/ij3nmc
UTC/scryptjane:14 http://prntscr.com/ij3zoo -> 3.8.2.1 http://prntscr.com/ij40du
on localhost / Win 10

Excellent data; it will take some time to analyze it.

Edit: I have an initial question about this data that will affect the bug-fix release.
You initially reported that yescryptr16 crashed, but this shows that it was hashing
and submitting rejects. Were both of these tests with the same code? I need to know
whether v3.8.3.1 can hash or whether it crashes before starting to hash.
newbie
Activity: 182
Merit: 0
Is there a page of the thread presenting benchmarks?
(Wondering what a really shitty CPU would do - like a G4400)
newbie
Activity: 15
Merit: 0
Still segmentation fault.
Same error with another wallet.

If it crashes with BLOCK_VERSION_CURRENT 3 and std_longpoll_rpc_call from 3.8.2, I'm stumped.

git + 2 patches, no build modifications
http://prntscr.com/iixh31

OK, undo those changes, start fresh, and make the following change to std_get_new_work:

Code:
       if ( ( memcmp( work->data, g_work->data, algo_gate.work_cmp_size )
              && clean_job )
          || ( *nonceptr >= *end_nonce_ptr )
del:      || ( !opt_benchmark && strcmp( work->job_id, g_work->job_id ) ) )
add:      || ( have_stratum && strcmp( work->job_id, g_work->job_id ) ) )
       {




http://prntscr.com/iixtyv

One more shot in the dark: replace std_get_new_work with the old version.

If that doesn't work, apply all the patches above: replace std_get_new_work and
std_longpoll_rpc_call with the old versions, and #define BLOCK_VERSION_CURRENT 3
as per the old version.

After that I'm really stuck.

Edit: This is really strange. I need you to confirm the previous version still works.

I've reviewed the changes I made. There were none to yescrypt, but many other algos
were changed.

I made a few changes to common code:

Increasing the block version; reverting it did not help.

Removing getwork code from longpoll. This was my first suspect, given my assumption that
getwork doesn't use longpoll. But reversing that change did not help either.

I made a change to how new work is detected to fix an issue with super-fast algos. But
reversing that didn't fix it either.

I made a change to how shares are detected, but that only applies when a solution is found.

The last change was to the API, which also doesn't apply.

I'm at a loss to explain it.

This problem is bugging me; it defies logic. I'm beginning to suspect it may be an isolated issue.
If anyone else is solo mining with v3.8.3.1 using getwork or gbt, please post your results, success
or failure. Please include the algo, your CPU, OS, any deviation from defaults, and any relevant
console output.

http://prntscr.com/ij3bkh
http://prntscr.com/ij3flq
Windows 10, local wallet
3.8.2.1 -> http://prntscr.com/ij3qkp

interzone/c11 simply crashed -> 3.8.2.1 http://prntscr.com/ij3pma
BWK/nist5 also crashed -> 3.8.2.1 http://prntscr.com/ij3ou2
Solaris/xevan http://prntscr.com/ij3lnl -> 3.8.2.1 http://prntscr.com/ij3nmc
UTC/scryptjane:14 http://prntscr.com/ij3zoo -> 3.8.2.1 http://prntscr.com/ij40du
on localhost / Win 10
hero member
Activity: 700
Merit: 500

Any downside to this?


I use H/s as often as possible and just format it into the appropriate unit on display. A change where the value isn't always in H/s, with KH/s removed (after some time), would require some conversion to bring it down to H/s first, push it through the app, and finally convert it back to whatever format is appropriate. It won't be a large programming overhead, but it will be unnecessary. I don't know of any app that uses the API-returned hashrate just as-is; it wouldn't make sense.

This is how I currently extract data from the API:

Code:
  const result = {
    accepted: parseFloat(obj.ACC),
    acceptedPerMinute: parseFloat(obj.ACCMN),
    algorithm: obj.ALGO,
    difficulty: parseFloat(obj.DIFF),
    hashrate: parseFloat(obj.KHS) * 1000,
    miner: `${obj.NAME} ${obj.VER}`,
    rejected: parseFloat(obj.REJ),
    uptime: obj.UPTIME,
    cpus: parseFloat(obj.CPUS),
    temperature: parseFloat(obj.TEMP),
  };

This would need to change to this:

Code:
  const units = [
    {key: 'PH/s', factor: 5},
    {key: 'TH/s', factor: 4},
    {key: 'GH/s', factor: 3},
    {key: 'MH/s', factor: 2},
    {key: 'KH/s', factor: 1},
    {key: 'H/s', factor: 0},
  ];
  const unit = units.find(currUnit => obj[currUnit.key]);
  let hashrate = 0;
  if (unit) {
    hashrate = parseFloat(obj[unit.key]) * Math.pow(1000, unit.factor);
  }
  const result = {
    accepted: parseFloat(obj.ACC),
    acceptedPerMinute: parseFloat(obj.ACCMN),
    algorithm: obj.ALGO,
    difficulty: parseFloat(obj.DIFF),
    hashrate,
    miner: `${obj.NAME} ${obj.VER}`,
    rejected: parseFloat(obj.REJ),
    uptime: obj.UPTIME,
    cpus: parseFloat(obj.CPUS),
    temperature: parseFloat(obj.TEMP),
  };

With H/s always present, it would be just like the first example, except I wouldn't need the
Code:
* 1000

I hope this explains it.
member
Activity: 473
Merit: 18
It doesn't make sense to me to put both, but if that's what people want, I'll do it. I'd like some
opinions from other users.

keep KH/s for backwards compatibility, that is

Given that another API change is coming, is it worth reintroducing kH/s, or should I just take
the compatibility hit for both and get it over with?

The current API is compatible with ccminer, so I think changing it would break compatibility with many tools that people may use.

If you are reworking the API, then perhaps keeping the existing one and activating it with a switch would be the "safe" way,
but that would introduce additional code which I'm not sure you want to maintain.

Perhaps set a cutoff date for when the old API will be discontinued?

I personally mine on my own fork of MegaMiner (https://github.com/yuzi-co/Megaminer), and changing the API is easy for me.


On a sidenote, the feature I would love to see is continuous hashrate reporting: not only when a share is found (which can take a while on some algos with the current version),
but continuously.