
Topic: [POOL][Scrypt][Scrypt-N][X11] Profit switching pool - wafflepool.com - page 150. (Read 465769 times)

newbie
Activity: 42
Merit: 0
I was reading about the new Stratum server and I think it's causing issues with my cudaminer (750 Ti) rig.

I'm running steady at 1.8 MH/s, but since yesterday the pool has been reporting less (from 1.8 down to ~1.45).

On the graph below, you can see it drop down and stay well below where it usually is.  The first dip, I'm assuming, is when the server switched over?    After that, I did two reboots when I noticed the drop in khash, which you can see on the graph as well.   (Windows Updates slowed down my shutdown/restart, which is why the reboots show up on the graph.)

Can someone please tell me why I'm suddenly seeing the drop in khash, even though my rig is still running at 1.8 MH/s?    Is there something I need to re-configure in cudaminer.bat, like the intensity?

The biggest issue is that my BTC / day / 1 MH/s DROPPED as well!   So not only is my hashrate reporting lower, my payout rate IS lower.
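To put rough numbers on what that drop costs (a back-of-the-envelope sketch only, using the ~0.0059 BTC/MH figure mentioned elsewhere in the thread as a stand-in for the actual payout):

Code:
# Rough arithmetic for the drop described above (illustrative numbers only).
local_rate = 1.80              # MH/s, what the rig reports locally
pool_rate = 1.45               # MH/s, what the pool is now crediting
btc_per_mhd = 0.0059           # assumed payout: BTC per day at a steady 1 MH/s

shortfall = 1 - pool_rate / local_rate
daily_loss = (local_rate - pool_rate) * btc_per_mhd
print(f"~{shortfall:.0%} of work uncredited, ~{daily_loss:.8f} BTC/day lost")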

I just posted this up on /r/wafflepool and immediately, someone else said "I have the same issue!"

Help?

http://i.imgur.com/jX3dRRK.png
newbie
Activity: 28
Merit: 0
Now it's back to 0.00592860 BTC/MH  Embarrassed
sr. member
Activity: 411
Merit: 250
poolwaffle, will we be mining Vertcoin (VTC) on Scrypt-N?
hero member
Activity: 630
Merit: 500
If I'm not mistaken, you can bump up virtual memory to overcome any shortfall in memory.
hero member
Activity: 693
Merit: 500
I can see what you are saying. So, as LTC was the answer to BTC becoming too high in difficulty due to ASICs, we've got to wait and see what the answer to LTC is against the coming ASICs. If there is such a coin that stands a chance, and pools can adapt to it, that would be the new standard...

An easy way of making ASICs unprofitable is to design algorithms that require large memory buffers and whose performance is bound by memory bandwidth rather than arithmetic.  ASICs provide the greatest benefit for algorithms that are arithmetic-bound, and the least benefit for algorithms that are bound by memory bandwidth.  By combining a large memory buffer with random access patterns, we would get a level playing field that evolves very slowly.

GPUs today have 200-300 GB/s of memory bandwidth, which has only increased by a small margin generation to generation.  GPUs are expected to get a nice jump in bandwidth when memory technologies like die-stacked memory show up in a few years, but after that, bandwidth growth will be very slow again.  A large part of the complexity and cost in a GPU is the memory system, and it is only feasible to build because millions of GPUs are sold per week.

By developing an algorithm that requires a hardware capability that is only cost-feasible in commodity devices manufactured in quantities of several million or more, you would push ASICs out completely, and keep them out for a very long time, perhaps indefinitely.  It's one thing to fab an ASIC chip; it's another thing to couple it to a high-capacity, high-bandwidth memory system.  If you design an algorithm that uses the "memory wall" as a fundamental feature, ASICs will be no better than any other hardware approach.
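To make that concrete, here is a toy sketch of such a "fill, then chase random reads" hash (purely illustrative, not any real coin's algorithm; SHA-256 stands in for a real mixing function, and the buffer here is tiny compared to the 1 GB+ sizes discussed below):

Code:
import hashlib

# Toy sketch of a memory-bandwidth-bound hash.  A buffer is filled
# sequentially, then read back in a data-dependent random order, so with
# a large enough buffer the throughput is limited by memory bandwidth,
# not arithmetic.

def memory_hard_hash(seed: bytes, n_entries: int = 1 << 16) -> bytes:
    # Phase 1: fill the buffer; each entry depends on the previous one,
    # so the fill cannot be parallelized away.
    buf = []
    h = hashlib.sha256(seed).digest()
    for _ in range(n_entries):
        h = hashlib.sha256(h).digest()
        buf.append(h)

    # Phase 2: data-dependent random reads.  The next index comes from
    # the running hash, so accesses cannot be predicted or prefetched.
    acc = h
    for _ in range(n_entries):
        idx = int.from_bytes(acc[:8], "little") % n_entries
        acc = hashlib.sha256(acc + buf[idx]).digest()
    return acc

# 1 << 16 entries x 32 bytes = 2 MiB here; scale n_entries up toward the
# 1 GB+ range discussed in the thread to defeat on-chip caches.
print(memory_hard_hash(b"example").hex())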

Great post, and so true...

If they want a level playing field for mining, that should be the way...

Best Regards,

LPC

Yeah, there are already coins that do this.  YACoin was the first, and it currently takes 4 MB per thread to complete a calculation.  That will become 8 MB on May 31st.  All the other scrypt-chacha coins will get there eventually, but YAC is the trailblazer Cheesy

Sorry, but 4 MB isn't a lot of memory.  1 GB or more would start to be the size of memory I'm talking about.  Anything that's just a few megabytes is small enough that someone who wanted it badly enough could just put SRAM on-die.  CPUs and GPUs already have aggregate on-chip cache sizes 10 times that size, so 4 MB is nowhere near large enough.  The data size has to be large enough that the on-chip caches are useless, and remain useless over at least a 10-year period.  I would put that at something over 1 GB.

We'll have to disagree on what constitutes "a lot", but even in YACoin, the effects of 4 MB hashes are taking their toll.  You can't parallelize as many threads on today's GPUs as you can at lower N-factors.  A Radeon R9 290 with 2560 shaders would need 40 GB (no, not 4, 40!) to fully utilize the card.  Luckily, OpenCL is flexible, and we can adapt the code and recompile the OpenCL kernel to use lookup-gap, which trades extra computation for a smaller memory footprint per hash and thus lets us run more threads.  If we were unable to change the lookup-gap, performance would degrade MUCH faster than 50% for every N-factor change.

An ASIC is, by definition, an algorithm hard-coded into silicon.  If it were to use lookup-gap, the gap would have to be fixed in the design, striking a balance between computation speed and the amount of memory included.  But then it would only work for a given N-factor, so you'd have to switch to a different coin eventually.  How much DRAM can you fit on an ASIC die?  I would guess not enough to do more than a couple of hashes at a time, and unless the speed of the chip is significantly faster than today's GPU cores, I think we're still a long way off from ASICs for high-memory (even 4 MB, NF=14) coins.
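For anyone curious what lookup-gap actually does, here is a simplified sketch (SHA-256 stands in for scrypt's BlockMix; the names and structure are illustrative assumptions, not real kernel code):

Code:
import hashlib

def mix(x: bytes) -> bytes:
    # Stand-in for scrypt's BlockMix.
    return hashlib.sha256(x).digest()

def romix_lookup_gap(seed: bytes, n: int, gap: int) -> bytes:
    # Phase 1: store only every `gap`-th value, so memory use shrinks
    # by roughly a factor of `gap`.
    stored = []
    x = mix(seed)
    for i in range(n):
        if i % gap == 0:
            stored.append(x)
        x = mix(x)

    # Phase 2: when a random element is needed, recompute it from the
    # nearest stored value -- extra compute buys reduced memory.
    for _ in range(n):
        j = int.from_bytes(x[:4], "little") % n
        v = stored[j // gap]
        for _ in range(j % gap):
            v = mix(v)
        x = mix(x + v)
    return x

# gap=1 stores everything (max memory); gap=4 stores 1/4 of the values
# but does up to 3 extra mix() calls per lookup.
print(romix_lookup_gap(b"seed", n=1 << 10, gap=4).hex())

The gap parameter is exactly the speed-vs-memory dial described above: a GPU kernel can be recompiled with a different gap when the N-factor changes, while an ASIC would have it baked in.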
full member
Activity: 307
Merit: 102
Not near my rig right now; last I checked was 5 hrs ago. Maybe it's lower now...
member
Activity: 94
Merit: 10
I was expecting to see some more rejects since the stratum changes, but I am not.
Should I be worried?

0.3 % right now

Same here 0.3%

My long-term average is 2% (up from 0.2% on the old stratum); still not bad, since I changed nothing from my old config.

That's the thing, I didn't do anything either.  So why am I seeing no change whatsoever?
full member
Activity: 307
Merit: 102
I was expecting to see some more rejects since the stratum changes, but I am not.
Should I be worried?

0.3 % right now

Same here 0.3%

My long-term average is 2% (up from 0.2% on the old stratum); still not bad, since I changed nothing from my old config.
full member
Activity: 155
Merit: 100
I was expecting to see some more rejects since the stratum changes, but I am not.
Should I be worried?

0.3 % right now

Same here 0.3%
member
Activity: 94
Merit: 10
I was expecting to see some more rejects since the stratum changes, but I am not.
Should I be worried?

0.3 % right now
member
Activity: 100
Merit: 10
You can't simply assume the base unit of time is seconds; that's why you don't multiply scalars by a unit of time directly.
Simply put, 1 day is not the bare number 86,400; it is 86,400 seconds per day. But since we are doing (BTC/day) / (MH/s) = BTC·s/MHD = BTC/(MHD/s), with MHD standing for MH·day, the conversion is already carried in the units.
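Whatever name the unit ends up with, the arithmetic checks out the same way (example numbers; the payout figure is one quoted elsewhere in the thread):

Code:
# Quick sanity check of the unit algebra above (example numbers).
SECONDS_PER_DAY = 86_400

btc_per_day_per_mhs = 0.00742039   # pool stat: BTC per day at a steady 1 MH/s
hashrate_mhs = 1.8                 # example rig averaging 1.8 MH/s

mh_per_day = hashrate_mhs * SECONDS_PER_DAY   # million hashes actually computed
mhd = mh_per_day / 86_400                     # 1 MHD = 86,400 MH
daily_earnings = btc_per_day_per_mhs * hashrate_mhs

print(f"{mh_per_day:,.0f} MH (= {mhd:.1f} MHD) -> {daily_earnings:.8f} BTC earned")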
full member
Activity: 168
Merit: 100
I think MHD is a wrong unit: MH = million hashes, and you can't multiply a unit that has no relation to time by a day.
kWh is used to measure energy because kW = kJ/s.

I think it is more accurate to call it MHD/s, unless we know what else to call MH/s.

MHD/s is asinine.  If you mine all day at 1 MH/s, how many MH have you computed at the end of the day?

My entire point is that the unit name must match the value being stated before it, so yesterday we earned 0.00742039 BTC per what???

The correct answer is 86,400 MH -- or for simplicity's sake, 1 MHD.

"0.00742039 BTC/day @ 1 MH/s" is also accurate, but it is both a mouthful and incredibly awkward, since it contains two different time periods.
member
Activity: 100
Merit: 10
I think MHD is a wrong unit as MH = million hashes, you can't multiply a unit with no relation to time to day.
kWh is used to calculate energy used is because kW = kJ/s

I think it is more accurate to call it MHD/s unless we know what to call MH/s
full member
Activity: 168
Merit: 100
Quote from: comeonalready
And it is payout per MHD!

I see you insist... Ok. Let me show you my reasoning for why IT IS NOT MHD and then you can just slam down your irrefutable proof and I'll stand rebuked.

Let's say that your hashrate is 1 MHs, meaning that your computational power can "solve" 1 million hashes in a second. Analogous to active power being measured in kW, energy consumption is measured as power used over a unit of time, hence the kWh unit of measure. When you want to see how much energy you consumed in a DAY, you just count the kWh, which means you consumed 24 kWh/day (for a steady 1 kW).

When you use BTC/MHD, you are implying that your hash rate is 1 "MH per day", which is false. Your hash rate is an average of 1 MHs over a day's time, and you contributed 3600 × 1 MH in a day's time. WP states [0.01 BTC / (average MHs) in a day], and not the amount of hashes in a day. And lastly, the unit of measure is MHs. You can twist that figure to show a day or a year, but it's still MHs as reported by your miners.


1- No, 1 MHD is one Mega Hash Day, not 1 Mega Hash per Day.  There is no divisor in an MHD unit; you put that in there.  Why didn't you add it to your example of kWh too?
2- You've made an error in your explanation.  There are 86400 seconds in a day, not 3600.  That is how many seconds there are in an hour.
3- 1 MHD = 86400 MH, the number of hashes computed over the course of one day at an average rate of 1 MH/s.
4- The reason I chose MHD over MHH (hour) or MHW (week) is that a day is the length of time everyone naturally wants to know and compare, day over day and pool to pool.
5- In your last sentence, you conveniently removed the divisor from MH/s, which is a rate at which hashes are computed.

I have a perfect analogy for all this, but I refuse to dumb it down and spoon-feed it to the masses.  I happen to be one of those people who cannot help but roll his eyes every time a news reporter quantifies a distance or area in the number of football fields that could fit into that space -- as in, the average distance from the Earth to the Moon is 4,200,000 football fields laid out end to end.
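For what it's worth, the kWh analogy in the list above checks out mechanically (example numbers):

Code:
# Checking the analogy: MHD is to MH/s what kWh is to kW.
SECONDS_PER_DAY = 24 * 3600            # point 2: 86,400 seconds in a day

# Energy: a steady 1 kW load running for a day consumes 24 kWh.
kw = 1.0
kwh_in_a_day = kw * 24

# Work: a steady 1 MH/s rate running for a day computes 86,400 MH = 1 MHD.
mhs = 1.0
mh_in_a_day = mhs * SECONDS_PER_DAY    # point 3: 1 MHD = 86,400 MH

print(f"{kwh_in_a_day:.0f} kWh consumed; "
      f"{mh_in_a_day:,.0f} MH computed = {mh_in_a_day / 86400:.0f} MHD")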

full member
Activity: 217
Merit: 100
Hi,

A new version of http://stratehm.net has been deployed with many bug fixes and a notification system for Wafflepool news.



Enjoy!
full member
Activity: 196
Merit: 100
Quote
Hashrate: 2.63 GH/s
I think something is wrong.

Probably miners switching from ghash back to waffle.  Their pool speed went from 44.41 GH/s to 19.98 GH/s.
sr. member
Activity: 266
Merit: 250
Quote
Hashrate: 2.63 GH/s
I think something is wrong.
newbie
Activity: 28
Merit: 0
I can see what you are saying. So, as LTC was the answer to BTC becoming too high in difficulty due to ASICs, we've got to wait and see what the answer to LTC is against the coming ASICs. If there is such a coin that stands a chance, and pools can adapt to it, that would be the new standard...

An easy way of making ASICs unprofitable is to design algorithms that require large memory buffers and whose performance is bound by memory bandwidth rather than arithmetic.  ASICs provide the greatest benefit for algorithms that are arithmetic-bound, and the least benefit for algorithms that are bound by memory bandwidth.  By combining a large memory buffer with random access patterns, we would get a level playing field that evolves very slowly.

GPUs today have 200-300 GB/s of memory bandwidth, which has only increased by a small margin generation to generation.  GPUs are expected to get a nice jump in bandwidth when memory technologies like die-stacked memory show up in a few years, but after that, bandwidth growth will be very slow again.  A large part of the complexity and cost in a GPU is the memory system, and it is only feasible to build because millions of GPUs are sold per week.

By developing an algorithm that requires a hardware capability that is only cost-feasible in commodity devices manufactured in quantities of several million or more, you would push ASICs out completely, and keep them out for a very long time, perhaps indefinitely.  It's one thing to fab an ASIC chip; it's another thing to couple it to a high-capacity, high-bandwidth memory system.  If you design an algorithm that uses the "memory wall" as a fundamental feature, ASICs will be no better than any other hardware approach.

Great post, and so true...

If they want a level playing field for mining, that should be the way...

Best Regards,

LPC

Yeah, there are already coins that do this.  YACoin was the first, and it currently takes 4 MB per thread to complete a calculation.  That will become 8 MB on May 31st.  All the other scrypt-chacha coins will get there eventually, but YAC is the trailblazer Cheesy

Sorry, but 4 MB isn't a lot of memory.  1 GB or more would start to be the size of memory I'm talking about.  Anything that's just a few megabytes is small enough that someone who wanted it badly enough could just put SRAM on-die.  CPUs and GPUs already have aggregate on-chip cache sizes 10 times that size, so 4 MB is nowhere near large enough.  The data size has to be large enough that the on-chip caches are useless, and remain useless over at least a 10-year period.  I would put that at something over 1 GB.
newbie
Activity: 57
Merit: 0
Quote from: comeonalready
And it is payout per MHD!

I see you insist... Ok. Let me show you my reasoning for why IT IS NOT MHD and then you can just slam down your irrefutable proof and I'll stand rebuked.

Let's say that your hashrate is 1 MHs, meaning that your computational power can "solve" 1 million hashes in a second. Analogous to active power being measured in kW, energy consumption is measured as power used over a unit of time, hence the kWh unit of measure. When you want to see how much energy you consumed in a DAY, you just count the kWh, which means you consumed 24 kWh/day (for a steady 1 kW).

When you use BTC/MHD, you are implying that your hash rate is 1 "MH per day", which is false. Your hash rate is an average of 1 MHs over a day's time, and you contributed 3600 × 1 MH in a day's time. WP states [0.01 BTC / (average MHs) in a day], and not the amount of hashes in a day. And lastly, the unit of measure is MHs. You can twist that figure to show a day or a year, but it's still MHs as reported by your miners.
