
Topic: Hash Auger 2.9.7.5 Mining Manager and Switcher for NVIDIA GPUs - page 20. (Read 8755 times)

jr. member
Activity: 756
Merit: 2
I'm having a performance problem with Alexis. In the previous version its performance was as expected, but now it gives lower-than-normal values with x17.

I have tried activating the ccminer cache (or whatever that option is called that greatly increases CPU usage), and also with the option unchecked. Where I used to reach 18 MH/s in x17, I now do not exceed 16 MH/s. In the last few days I swapped the G4400 for an i3 and added better power supplies to the rig to push it to the maximum, and as a consequence I now see that Alexis in version 1.8.2 does not perform well.

Have you considered adding an option so that a rig mines one algorithm with all of its cards in a single process? You have commented that it is more efficient to do it per card, but in x17 with Alexis and a manual configuration I get 122 MH/s with all cards at once, which is more than 20 MH/s per card on average. With your program and one process per card, the performance is lower, so I am not fully convinced of the efficiency.

Also, in the zergpool ANN thread on bitcointalk, the pool's owner indicates that zergpool is not well suited for auto-exchange because they update prices very slowly. It is not that they post better prices; they simply do not update them frequently, which often fools switching bots like this one with stale prices that cannot be corrected with the price % option.

Perhaps it is not advisable to use it; I leave this as just one more comment.
copper member
Activity: 130
Merit: 0
I am using this personally on some of my miners. It is working excellently on 5-12 GPU rigs (NVIDIA), on our pool at www.QuantumMiningPool.com.

I am a single coin miner, I HATE when someone converts my work to BTC... (fees, not timed right, coin collector, etc...)

newbie
Activity: 481
Merit: 0
I will answer everyone in this post.

It can be complicated; I am just explaining an idea. It can be made simpler or lighter, but it should be something that helps find a better optimization. It could have fewer options, working algorithm by algorithm and letting the user choose which to optimize.

A good optimization is not just 3 or 4% more, of course not. That would be the case in another product where the OC is the same for all algorithms. The best thing about HA is the per-algorithm OC, but doing it by hand takes a lot of time, a lot. I would prefer the software to do it on its own while I do other things. I would rather lose a whole day and end up with a good OC, which would surely yield a benefit greater than 10%. Some algorithms have OC combinations very different from others.

In the same way, this would add stability, which is also what we are looking for. Leaving one general OC defined for everything is not optimal, and entering OC values more or less at random does not maximize revenue either.

As I said, I would vote for better optimization built on what is already there. It has a lot of potential, and it can be simpler than what I describe. Above all, it would give HA a comparative advantage over its competition, none of which have this.

In Spain there is a saying (loosely translated): "Skill beats strength." In this case it would not be about adding more miners or more pools; the software already has everything, and while it can be improved, that is not the most important thing from my point of view. The "skill" would be the software's ability to optimize itself to a better operating point; for those who want it, it could be optional.

Being able to get a good fit for each algorithm, in both intensity and OC, would be worth much more than 3 or 4%, apart from being more stable, as long as the auto-optimization discards the combinations that produce errors or restarts.

It is just an idea; at least I take the trouble to contribute ideas that add value to the product. I take my time, I report errors, and I give my ideas based on experience. As I say, it could be completely optional for whoever wants it, but highly recommended.

I thank the programmer for the effort and wish him the best. I would even be willing to pay a slightly higher dev fee if the product keeps modernizing and becomes a real alternative to expensive products like AW, which lack some functions that exist in HA.

I thank you for your suggestion.  I do not doubt that auto-tuning of overclocks would be a very useful feature.  As you noted, many miners choose a less than optimal general overclock because they do not want to invest the time in finding the optimal overclock for each algorithm. However, there is a saying that "the devil is in the details" -  a concept can appear to be simple, but on closer inspection it is much more complicated than it first appeared. A human can easily determine if a video card is becoming unstable (visual artifacts on the screen, for example), but the software cannot.  Also, as I mentioned before, the software would have to recognize system crashes so that it avoids using those settings again. There is also a big difference in a miner being stable for a few minutes versus hours or days.  Most users would want the auto-overclock tool to complete in a reasonable amount of time, but the tool would have to test multiple clock settings for many algorithms, limiting the duration of each individual stability test and increasing the potential that the software incorrectly assumes an overclock to be stable when it really is not.

Again, I'll use EVGA's auto-overclock tool as an example.  I have never been able to get it to work on EVGA cards and many of its users report similar issues.  If a graphics card manufacturer cannot reliably find  an optimal gaming overclock for their own products, think of how much more difficult it would be to calculate optimal overclocks for different types of cards using different algorithms and miners within a period of time that most users would tolerate. While it wouldn't be impossible, it would be a significant investment in development time that would come at the expense of most other improvements to the software.
jr. member
Activity: 756
Merit: 2
In the auto-exchange pool configuration, there is an option to exclude algorithms, but again it works one pool at a time. It would be good to have options to copy that algorithm selection from another pool, or even to copy it to all pools.

You may want them all the same, but on some pool I might make a specific, one-off change. It should be easy to copy that configuration to other pools, as you did in the graphics card section.

Regarding the difficulty of programming it, I think writing the code is more work than figuring out how to do it.

I am not proposing an OC tool like EVGA's. I have given you a very detailed idea; on that basis, weighing what is difficult or easy and other factors, you can build a totally different and simpler optimization in other steps. I am just saying what I need.

And what I need is also needed by many professional miners who know the shortcomings of other products. One of the main characteristics of your product is working card by card, that is, one process per card, and also being able to easily optimize the OC and intensity for each algorithm. That is a breakthrough, but I tell you it takes a long time.

If you can develop something that is not as complete as what I described, but that helps me optimize faster, I am sure many people with many machines will be very grateful.

Miners with only a few cards are less important; big mining operations will give you a very interesting dev fee. 1% of 1,000 is not the same as 1% of 100,000.

I am not demanding anything; I just suggest and explain the complete idea well. It is up to you to choose the best ideas from the other miners, though almost nobody contributes anything.

I hope to see the following improvements; I will keep testing it on the small test rig that I have.

I also suggested giving directly mined coins an option to mine only if the difficulty falls below X. That also seems like a good idea, because if the difficulty is very high I prefer to stay in auto-exchange, and if it falls below the X that I define, then mine that coin. That way I make better use of the time chasing the greatest profit.
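The rule being suggested is simple to state. As a sketch only (the names and numbers here are illustrative, not actual Hash Auger settings):

```python
# Sketch of the suggested option: mine the chosen coin directly only while its
# network difficulty is below a user-defined threshold; otherwise stay on
# auto-exchange. All names and numbers are illustrative.

def pick_mode(network_difficulty, threshold):
    return "direct" if network_difficulty < threshold else "auto-exchange"

print(pick_mode(1200.0, 1500.0))   # difficulty dropped -> "direct"
print(pick_mode(1800.0, 1500.0))   # too difficult -> "auto-exchange"
```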

Maybe some of my suggestions seem strange because they have not been seen in other programs, but that does not mean they are not good ideas.

Over time I would like HA to be able to manage rigs remotely, but I think that is far from the initial idea of a single programmer, and I understand that.

The Hash Auger software disables algorithms for pools for a variety of reasons.  Unlike some programs that use predefined lists, the software uses each pool's rate information to determine the coins and algorithms that each pool supports. If a pool adds a coin or an algorithm, Hash Auger can use it automatically without an update as long as the algorithm is supported by one of the included miners. Conversely, the software will not attempt to use an algorithm on a pool that doesn't support it. For example, neither MiningPoolHub nor NiceHash currently support Phi, so Hash Auger won't try to use Phi on either pool.

Next, Hash Auger won't use any algorithms that are disabled on all devices - those are the algorithms that cannot be enabled or disabled in each  pool's algorithm list. For example, both Qubit and Quark are disabled by default, so Hash Auger will not use either algorithm for any pool unless the user enables them on their devices and benchmarks these algorithms first.
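These two filters, taken together, can be sketched as follows. This is an illustrative reconstruction, not Hash Auger's actual code; the function name, data shapes, and prices are all hypothetical:

```python
# Hypothetical sketch of the two filters described above: an algorithm is a
# candidate for a pool only if the pool's rate feed lists it AND it is
# enabled on at least one device.

def candidate_algorithms(pool_rates, enabled_devices):
    """pool_rates: algorithm -> price taken from the pool's rate API.
    enabled_devices: algorithm -> set of devices with it enabled."""
    return {algo for algo in pool_rates if enabled_devices.get(algo)}

# Example: Phi is enabled on a device but NiceHash's feed doesn't list it,
# and Qubit is listed by Zergpool but disabled on every device.
rates = {
    "NiceHash": {"x17": 0.012, "lyra2v2": 0.010},
    "Zergpool": {"x17": 0.013, "phi": 0.011, "qubit": 0.009},
}
enabled = {"x17": {"GPU0", "GPU1"}, "phi": {"GPU0"}, "qubit": set()}

for pool, feed in rates.items():
    print(pool, sorted(candidate_algorithms(feed, enabled)))
# NiceHash ['x17']
# Zergpool ['phi', 'x17']
```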

By default, the software will compare prices for all device-enabled algorithms for all auto-exchange pools. Version 1.8.2 adds support to disable individual algorithms for auto-exchange pools in cases where the user does not want to mine specific algorithms on that pool. For instance, last night I disabled C11 on one pool because it always takes that pool a relatively long time to provide valid work for this algorithm.  I still want to use C11 on other pools, so I do not want to disable it completely, I just don't want the software to use it on this one pool regardless of what the earnings estimate may be.

I understand that being able to copy these preferences could save a little time in some circumstances, but the time savings will be considerably less than that from copying device settings. Unlike the device settings, which often apply to every GPU, the algorithm list is specific to each pool. Copying settings for Phi to MiningPoolHub or NiceHash wouldn't have any effect since those pools do not support that algorithm. Also, since the software automatically excludes algorithms that are disabled on all devices, the use case of disabling an algorithm on every pool is already handled. Finally, most users don't enable every auto-exchange pool, so the need to copy these settings to every pool is limited.

As for your suggestions about an auto-tuning feature, I agree that in theory it is an intriguing idea.  However, given the poor reputation that existing products such as EVGA's auto-overclocking software have, reliably implementing such a feature is a bit more challenging than the concept implies.  Trying to balance tuning time with overall reliability would be problematic as overclock settings may appear to be stable after a few minutes, but then fail after a few hours.  Also, if the overclock settings corrupt the device driver and require a restart, the whole OS would be frozen, preventing the software from recording the failure to avoid using those settings in the future. Thus, the tuning process would keep repeating the same tests that lead to system freezes. Due to issues such as this, if I ever decide to include any auto-tuning features, they will probably be focused on intensity settings only.
newbie
Activity: 481
Merit: 0
A bit unrelated: I was looking into a hardware wallet like Trezor. I am currently using Hash Auger with Blazepool, and I set Blazepool to pay into my Exodus wallet. I have both some Litecoin and some Bitcoin in there, but Blazepool only pays in Bitcoin. Is there a way to receive my payment from Blazepool directly to my hardware wallet? I ask because the fees charged on Exodus are a bit high. You guys have more experience than me: I don't know whether exchanging my Bitcoin for Litecoin in Exodus and then transferring the Litecoin balance to my Trezor is cost-efficient, or whether I should send both coins to my hardware wallet and find another exchange site to use later. BTW, I made more money today from the increase in altcoins than I mined. Yay.

I’m glad to hear that your earnings are getting better. Hopefully the worst of the slump is behind us. For security reasons, most hard wallets are not connected to the Internet all the time, but it looks like Trezor has some services that act like temporary wallets until you connect your hard wallet to the network. So you should be able to use the Trezor wallet address for your mining and the Trezor software should handle moving the funds between the software and hardware wallets. Maybe somebody using that setup can comment on whether there are any fees involved. I’m not sure if that transaction between the Trezor soft and hard wallets is on the network and subject to fees or not.

Concerning exchange fees, you may want to compare the rates Exodus charges with other exchanges and see if you can reduce costs by using a different exchange (Like CoinBase, Poloniex or Kraken) or by exchanging a larger amount less frequently. If you don't want to keep too much of your earnings in an exchange wallet for security reasons, you could use a software wallet such as Electrum to temporarily hold your BTC payments from the pool until you accumulate enough to do a more cost-efficient exchange or to time your trade when the price of LTC is low compared to BTC. Then you could have the exchange deposit your LTC into the Trezor wallet. Of course, you'd have to pay transaction fees every time you move your earnings, so it is usually best to minimize transfers/exchanges as much as possible. However, keeping a little in BTC might help you diversify your holdings in case both coins don't appreciate in value at the same rate.

If you do change wallet addresses on the pool, you will have to plan the switch around the pool’s payout schedule to make sure the earnings linked to the old wallet address are above the pool’s minimum payout.
newbie
Activity: 481
Merit: 0
If you go too far with super-optimization and the program becomes buggy, and I end up losing half a day because I can't get it back up and restarted, the little 3 or 4% gain is not going to be worth it to me. Don't get baited into making something too complicated.

You bring up a good point. Even a moderate increase in hash rates isn't going to offset lost earnings due to downtime if a system is pushed to the point of instability. My development priority will always be stability, so any settings that may boost performance at the risk of stability will be disabled by default. Similarly, I like to see how new mining software is received and evolves before I add it to the software. It took me a while to find a version of Alexis that appears to be stable on modern cards. Also, it is important to keep in mind that hash rates aren't the only factor that affects earnings. A 5% increase in hash rates means little if you are mining coins at a 10% lower price than what you could be. Therefore, I try to improve the software with a more balanced approach rather than just focusing on any one aspect.
jr. member
Activity: 756
Merit: 2
I liked the update. I am testing it on the small rig, only on x17; Alexis makes a difference.

I did not know that previously the disabled algorithms were used anyway on the auto-exchange pools. It is good to know that this condition is now respected.

I still think there are too few miners, but I see that your intention is to keep adding more.

I am going to tell you about something that no software does and that it should do: a super-optimization that would take many hours, but that would give the best performance for each algorithm and card, something this program promises.

You run the typical benchmark pass to learn the hash rate for each algorithm, but what do you do to maximize it? The manual optimization process is very heavy, so this would save time for experienced people and help those without experience. It should be optional. The default values, or what works for others, may not be the best for me or my brand of card.

They should be two separate steps, launched from two separate buttons, to avoid too many hours in one run.

Phase 1:
IMPORTANT: in this phase the OC values must be ignored completely; it must be a pure test.

I would only run the optimization on the enabled algorithms; the disabled ones should obviously be ignored. Or it could be done algorithm by algorithm, one at a time, but I am more interested in leaving it working for hours and having the best configuration at the end.

It would be based on optimizing the miner and the intensity. Basically, go to each algorithm and all the miners for that algorithm, and I mean all that can be chosen, whether or not they are currently selected.

Imagine we only have x17 enabled: go to x17 and try the four miners there. Try the first one, for example KlausT, and run the intensity test from 15 up to the maximum of 31, in jumps of 0.3. So it would try KlausT at 15, 15.3, 15.6, 15.9... until it fails or reaches the maximum of 31.

If it fails, repeat the test at that same failing intensity; if it fails twice, that is the limit, for example 19.6. At that point we step back two jumps, that is, 0.3 + 0.3 = 0.6, and the chosen intensity would be 19.

Then repeat the same procedure with miner 2, miner 3, miner 4...

At the end, for that algorithm (x17), it should automatically select only the fastest of the miners that were tested.

This first pass would choose the best miner for the selected algorithm.
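A minimal sketch of this phase-1 sweep, under stated assumptions: `benchmark` is a hypothetical stand-in for launching a miner at a given intensity and reading back its hash rate (raising an error on a crash), and the fake miner's numbers are made up purely for illustration:

```python
def sweep_intensity(benchmark, lo=15.0, hi=31.0, step=0.3):
    """Raise intensity from lo to hi in `step` jumps; a setting that fails
    twice in a row marks the limit, and the chosen intensity is two steps
    below it (on this grid, e.g. a failure at 19.8 selects 19.2)."""
    rates = {}
    i = lo
    while i <= hi + 1e-9:
        key = round(i, 1)
        ok = False
        for _attempt in range(2):          # repeat once at the same intensity
            try:
                rates[key] = benchmark(key)
                ok = True
                break
            except RuntimeError:
                pass
        if not ok:                         # failed twice: back off two steps
            chosen = round(key - 2 * step, 1)
            return chosen, rates.get(chosen, 0.0)
        i += step
    best = max(rates, key=rates.get)       # never failed: take the fastest
    return best, rates[best]

# Fake miner for illustration: rate grows with intensity, crashes at >= 19.6.
def fake_benchmark(intensity):
    if intensity >= 19.6:
        raise RuntimeError("miner crashed")
    return 10.0 + 0.5 * intensity          # MH/s, made-up curve

print(sweep_intensity(fake_benchmark)[0])  # -> 19.2
```

Running the sweep once per miner and keeping the highest returned rate would then give the "fastest miner" selection the poster describes.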

Phase 2:

Focus on intensity and OC at the same time, trying not to use too many steps, although there will inevitably be many.

There are many tests for a single algorithm. Following the previous example, it would have selected x17 and Alexis, probably above 19 (I know because I have tried it).

The variables I am going to give you I have thought about for a long time; it is like my own workflow, though of course I skip many steps because I have to do it by hand, one by one. With this, even if it takes a day, you get the best result.

Intensity: we have the value from before, in this case 19, so tests would be done at 18, 18.5 and 19.

Core: -20 to +200 in jumps of 10, that is, -20, -10, 0, +10, +20...

Memory: from -200 to +400 in jumps of 100; that is six jumps.

TDP (power limit): from 70 to 100 in jumps of 5; again six jumps.

To reduce the time, it would be done in two sub-processes. First, optimize the combination (intensity and core) across all steps; failures are tracked and discarded as invalid, but all steps must be run to know which yields more.
The second sub-process would also use the core, but starting from the number already obtained; say, for example, it gave a core of 90. We would use that number ±10, in this case 80, 90 and 100, and the formula would be:

(core as explained, memory, power limit): all the steps are run, the errors are discarded, and you keep only the results, choosing the one that provides the most hash.
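Put together, the two sub-processes could look like the following sketch. `benchmark(intensity, core, mem, power)` is again a hypothetical stand-in for a timed mining test, and the ranges are exactly the ones proposed above; the fake benchmark at the end is invented only to exercise the search:

```python
import itertools

def _try(benchmark, *settings):
    """Run one combination; a crash simply invalidates that combination."""
    try:
        return benchmark(*settings)
    except RuntimeError:
        return None

def phase2(benchmark, prev_intensity):
    # Sub-process 1: intensity x core offset, memory/power held at defaults.
    combos = itertools.product(
        (prev_intensity - 1.0, prev_intensity - 0.5, prev_intensity),
        range(-20, 201, 10),                      # core offset
    )
    tried = [(_try(benchmark, i, c, 0, 100), i, c) for i, c in combos]
    _, best_i, best_c = max(t for t in tried if t[0] is not None)
    # Sub-process 2: winning core +/- 10, sweeping memory offset and power.
    combos = itertools.product(
        (best_c - 10, best_c, best_c + 10),
        range(-200, 401, 100),                    # memory offset
        range(70, 101, 5),                        # power limit (%)
    )
    tried = [(_try(benchmark, best_i, c, m, p), c, m, p) for c, m, p in combos]
    rate, c, m, p = max(t for t in tried if t[0] is not None)
    return {"intensity": best_i, "core": c, "mem": m, "power": p, "rate": rate}

# Made-up benchmark: faster with more intensity/core/power, best at +200 mem,
# and "crashes" whenever the core offset exceeds +150.
def fake(intensity, core, mem, power):
    if core > 150:
        raise RuntimeError("driver crash")
    return (5.0 + 0.5 * intensity + 0.01 * core
            - 0.001 * abs(mem - 200) + 0.02 * (power - 70))

print(phase2(fake, 19.0))   # settles on core +150, mem +200, power 100
```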

And it would be optimized. It can be improved further by taking into account not only the hash rate but also the number of accepted shares. Intensity is what adds more or less load to the cores, and it is not the same to confirm 5 shares at 12 MH/s as 8 shares at 11 MH/s in the same time: although the hash rate is lower (11 MH/s), there were more shares. I think you may not be able to read this from every miner, but where you can, it would be interesting to have both figures, or rather three:

Test time, maximum hash rate, number of shares.

The idea is to choose the configuration where the combination of shares + hash rate is highest. This only applies to intensity tuning, so it may or may not be easy.
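One possible way to express that ranking, using the poster's own example. How exactly to weight shares against hash rate is the open question; this sketch simply ranks by accepted shares per minute and uses hash rate only as a tiebreaker, with purely illustrative numbers:

```python
# Rank (hashrate, shares, minutes) results by share rate first, then hash rate.

def rank_key(result):
    hashrate_mhs, shares, minutes = result
    return (shares / minutes, hashrate_mhs)

a = (12.0, 5, 10)   # 12 MH/s but only 5 shares in 10 minutes
b = (11.0, 8, 10)   # 11 MH/s with 8 shares in the same time
best = max([a, b], key=rank_key)
print(best)          # -> (11.0, 8, 10): more shares wins despite lower MH/s
```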


I said I was not going to try it, but I see so much potential, and I fully understand that, being a single programmer with limited time, you cannot develop as fast as we would like.
I mention this from the point of view of contributing ideas and suggestions, especially about things that none of the others have.

If I had to choose what to prioritize between more miners and the super-optimization idea, I would choose the super-optimization.

Nobody else has it: get the most out of each card and each configuration, save time for experts and help newbies. From a programming point of view, I think the super-optimization would be easier than it sounds; I really believe it is harder to explain than to program. The programming would build on what is already there, because nothing has to change; just add those extra steps for those willing to lose a whole day to get the perfect configuration, or at least one very close to it. At this point I would add a final control step, also optional but highly recommended: the optimized configuration must run for at least 10 minutes straight. If it passes, OK; if not, go back to the previous optimization point and repeat, discarding the combination if you get the same result.

With this, you will get the most out of your software before taking the next steps; it will stand out as the main quality against your competition and, most importantly, deliver the best daily profit. Having intensity and OC properly adjusted matters; it is the key to profitability.

Once done, it would be a great platform for scaling your software with new functions or miners. It is best to take advantage of what is already there and let the tests run as long as it takes to maximize profit.
I think that is exactly what it takes to be unique and really useful: do not enter configurations almost at random and settle for the result. I think that could silence the many voices saying that it does not pay.

NiceHash is hard to beat in earnings, but some programs do. Some very good ones like AW lack per-algorithm and per-card optimization, let alone a super-optimization, and you also have to pay for advanced functions. Others are harder to use, and data is lost with each update.

You already support many auto-exchange pools as well as direct coin mining, and you can mine with auto-exchange and a specific coin at the same time. The profit is good, but it must be tuned very carefully: there are intensity settings and, more importantly, per-algorithm and per-card OC settings. Basically it has everything, and it can be improved with more functions, pools, etc. But in my humble opinion, what is needed is software able to find the best possible configuration for each machine, whatever that machine is: plenty of memory or too little, more or less CPU, and certain intensities that choke it. For me, super-optimization would be the real takeoff of Hash Auger, which I like more every day.

Regards

Note:
I forgot: the tests should be done with all the cards at once. Not to save time, but because in real mining they will all be working at the same time, creating bottlenecks at certain points. It would not be appropriate to test cards independently or one at a time; the test should reflect more or less the actual conditions while mining, and not every rig has the same memory, motherboard and CPU.
newbie
Activity: 481
Merit: 0
HA - thanks for your work on copying settings over between cards.  Putting one additional request out there...can you make a master "switch" list where you add the ability to turn (anything you can) on and off in totality? I specifically care about the ability to turn on/off algorithms that apply all at once to all 8 cards, rather than having to do it individually as it is now, but the concept could apply to quite a few other "switches" you have throughout the settings that could all be put in one place vs. having to individually tick things off for every instance. Having individual switches is great, but master overrides can be useful too.


I haven't forgotten about this.  I was going to start working on it a few days ago, but then one of the big pools went offline for a few hours and I took that as an opportunity to improve the way the software handles that type of error and minimize downtime.  Also, recent discussions inspired me to look at ways to improve mining performance in general.  However, creating a centralized device manager is still one of my priorities for 1.9.  I want to accommodate users with mixed rigs (1070s, 1080s, etc.) that may not want to apply the same settings to every card, as well as users that run only one type of card per rig.  Basically, my design intent is to replace the current list of device panels with a display that doesn't require as much scrolling.

Yep sounds good. Thanks for addressing.

Cool.  Since it has taken me a little longer to implement this feature than I originally intended, and it makes sense to have all the device management enhancements grouped together in the versioning scheme, I will try to release 1.8.3 with this change in a few days.  Some of the other improvements I have planned for 1.9 may take a little longer to implement, so I don't want to hold this request back longer than necessary.
newbie
Activity: 34
Merit: 0
HA - thanks for your work on copying settings over between cards.  Putting one additional request out there...can you make a master "switch" list where you add the ability to turn (anything you can) on and off in totality? I specifically care about the ability to turn on/off algorithms that apply all at once to all 8 cards, rather than having to do it individually as it is now, but the concept could apply to quite a few other "switches" you have throughout the settings that could all be put in one place vs. having to individually tick things off for every instance. Having individual switches is great, but master overrides can be useful too.


I haven't forgotten about this.  I was going to start working on it a few days ago, but then one of the big pools went offline for a few hours and I took that as an opportunity to improve the way the software handles that type of error and minimize downtime.  Also, recent discussions inspired me to look at ways to improve mining performance in general.  However, creating a centralized device manager is still one of my priorities for 1.9.  I want to accommodate users with mixed rigs (1070s, 1080s, etc.) that may not want to apply the same settings to every card, as well as users that run only one type of card per rig.  Basically, my design intent is to replace the current list of device panels with a display that doesn't require as much scrolling.

Yep sounds good. Thanks for addressing.
newbie
Activity: 481
Merit: 0
HA - thanks for your work on copying settings over between cards.  Putting one additional request out there...can you make a master "switch" list where you add the ability to turn (anything you can) on and off in totality? I specifically care about the ability to turn on/off algorithms that apply all at once to all 8 cards, rather than having to do it individually as it is now, but the concept could apply to quite a few other "switches" you have throughout the settings that could all be put in one place vs. having to individually tick things off for every instance. Having individual switches is great, but master overrides can be useful too.


I haven't forgotten about this.  I was going to start working on it a few days ago, but then one of the big pools went offline for a few hours and I took that as an opportunity to improve the way the software handles that type of error and minimize downtime.  Also, recent discussions inspired me to look at ways to improve mining performance in general.  However, creating a centralized device manager is still one of my priorities for 1.9.  I want to accommodate users with mixed rigs (1070s, 1080s, etc.) that may not want to apply the same settings to every card, as well as users that run only one type of card per rig.  Basically, my design intent is to replace the current list of device panels with a display that doesn't require as much scrolling.
newbie
Activity: 481
Merit: 0
Hello again. I'm writing this for two reasons: to say goodbye for a while and to give you my impressions.

I have tried your software extensively, but for now it is not enough. My time is valuable and I cannot invest so much of it in software that is still green.

You have said that you want to make simple and easy software. That is a mistake: you can never beat the simplicity of NiceHash, which is just install, run the benchmark and you're done, besides being more powerful. People compare everything, and everyone I have recommended the program to has told me the same thing: poor performance. And I agree.

It doesn't matter if you add 1000 functions that we like; that's good, but not enough. You must fight to get the most out of the cards. You cannot say that Alexis is unstable when I use it manually every day, across several versions, with no stability problems; you just have to find the correct OC for each miner/algorithm.

I personally believe there will be no success if you cannot beat NiceHash's earnings. I cannot lose 30% of my hash rate on, for example, X17 (I use a version of Alexis). Losing 30% is like losing more than one whole card in terms of power.

I love the interface, I love mining for auto-exchange to BTC while saving coins I like, but I do not like losing almost 1/3 of my power because the right miners aren't included.

You have one very good and simple thing to build on, which is the per-algorithm OC: you can use any miner as long as those OCs are adjusted correctly, and forget auto-intensity.

In a month or two I'll come back and see how the work is going. Good luck.

As I said before, I encourage users to try other mining software and determine what works best for their needs. I appreciate the time you spent evaluating my software. However, I disagree with a number of your points. First, one can look at the NiceHash subreddit and see the many users who have fundamental problems with the NiceHash software. Most people would agree that the stability and usability problems those people are having with NiceHash are much more severe than the few issues with copying device settings you found in my software. I have already fixed the issues you found, but users of other products complain that bugs they reported months ago are still not fixed.  Also, earnings on NiceHash are limited to what buyers on its marketplace are willing to pay. Hash Auger allows users to switch between NiceHash and other pools to find the best earnings while still offering more configuration options than many other mining managers. I'm not sure how NiceHash helps users "get the most from their hardware" when it doesn't offer the ability to easily set algorithm-specific overclock settings or intensity values. Finally, Hash Auger includes algorithms such as Phi, x16r and x16s that NiceHash simply doesn't offer.

While you may not have stability issues with Alexis miner on your rigs, that miner consistently crashes before finding a single x17 result on several of my rigs, most likely due to its age and the fact that it is optimized for Cuda 7.5.  I have tested it with multiple cards using no overclock and different overclocks, and with nothing more than the miner and the command prompt. When I asked you for more information about the version of the Nvidia device driver and the version of Alexis miner you are using, you ignored my questions. Also, I find it interesting that you keep mentioning that the software includes fewer miners than you would like, but then suggest NiceHash is superior. NiceHash version 2 includes far fewer miners than NiceHash Legacy, and I'm pretty sure Alexis miner isn't one of them.

Thanks again for all your detailed feedback and suggestions. I wish you the best in all your endeavors.
jr. member
Activity: 756
Merit: 2
Hello again, well I write this for two reasons, say goodbye for a while, give you my impressions.

I have tried their software a lot but for now it is insufficient, my time is golden and I can not invest so much time in a software still green.

You have said that you want to make a simple and easy software. It is a mistake, you can never overcome the simplicity of nicehash, which is to install, make bencmark and that's it, besides being more powerful. People compare everything, and everyone that I have recommended the program has told me the same, poor performance and I agree.

It does not matter if you add 1000 functions that we like, that's good, but not enough. You must fight to get the most out of the cards. You can not say that Alexis is unstable when I use it every day in manual and several versions and there are no stability problems, just look for the correct OC for each miner / protocol

I personally believe that there will be no success if you can not overcome the benefits of Nicehash. I can not lose 30% power in for example X17 (I use a version of alexis). 30% less is to lose more than 1 card if we talk in power.

I love the interface, I love to mine to change to BTC and save currency that I like, but I do not like to lose almost 1/3 of power because I do not have the right miners.

You have a very good and simple thing to do, which is the OC for each protocol, you can use any miner as long as those OCs are adjusted correctly and forget self-intensity.

In a month or two, I'll see how the work is going. Luck
newbie
Activity: 481
Merit: 0
I don't mean estimating how much I will earn; I mean knowing how many satoshis I mined yesterday, and only in auto-exchange mode.

If I mine a single coin, it is VERY easy to know how much it earns me: I just look at that coin's wallet.

But when I use auto-exchange with zergpool, zpool, etc., it's not so easy to know whether I'm doing well or badly. I don't know whether the change I made to the pool refresh and profit settings produces more or less.

Mining blindly in auto-exchange mode is not a good idea. If I only mine on zpool it's easy; I just check zpool. But if I mine on 5 auto-exchange pools, it becomes complicated to know whether my configuration is right or not.

As I said, it is not an estimate of the future; it is knowing what I produced yesterday and the day before. As you can see, I don't mean every situation, only the mining mode with auto-exchange pools.

I didn't know you were the only programmer. Don't worry too much; I'm the kind of person who has a lot of ideas and tends to overload my own web programmers. When your product is better known, I hope you earn a lot of money.

I do enjoy working on this project, but I don't expect it to earn much money. The dev fee is just some extra motivation to keep me working on it when I should be sleeping. I just mentioned it because I do like a lot of your suggestions, but my schedule does limit how many feature requests I can include in each release. 

I now have a better understanding of your request.  I do want to include more historical information so that users can better evaluate their mining performance when auto-exchanging.  Unfortunately, even that information doesn't fall neatly into calendar days due to the time required for coins to mature and get traded.  For example, Verge typically requires a much higher number of network confirmations before some pools will trade it.  It may be well into the next day or even the following day before the pool pays out for that coin, while other coins have quicker payouts. Depending on how much Verge was mined in relation to other coins with shorter confirmation times, that may distort daily earnings. What also complicates things is that many of the auto-exchange pools have disabled the recent-earnings portion of their APIs, so that information is not as easy to access as it could be.

However, I do want to help users better see what their rigs are mining, so I will be including summaries of work data in the future. I expect that information and functionality will evolve over time.
jr. member
Activity: 756
Merit: 2
I don't mean estimating how much I will earn; I mean knowing how many satoshis I mined yesterday, and only in auto-exchange mode.

If I mine a single coin, it is VERY easy to know how much it earns me: I just look at that coin's wallet.

But when I use auto-exchange with zergpool, zpool, etc., it's not so easy to know whether I'm doing well or badly. I don't know whether the change I made to the pool refresh and profit settings produces more or less.

Mining blindly in auto-exchange mode is not a good idea. If I only mine on zpool it's easy; I just check zpool. But if I mine on 5 auto-exchange pools, it becomes complicated to know whether my configuration is right or not.

As I said, it is not an estimate of the future; it is knowing what I produced yesterday and the day before. As you can see, I don't mean every situation, only the mining mode with auto-exchange pools.

I didn't know you were the only programmer. Don't worry too much; I'm the kind of person who has a lot of ideas and tends to overload my own web programmers. When your product is better known, I hope you earn a lot of money.
newbie
Activity: 481
Merit: 0
A pity about Alexis, but I hope that over time you will find a more efficient miner for some algorithms.

Changing the subject a bit: allowing a second most-profitable coin is an option, as discussed before, but it also means that coin gets mined more. I recommend raising the price review interval from the 5 minutes I believe it is now to around 30 minutes, so the miner has time to warm up and deliver. What loses the most is switching algorithms too frequently, and that must be avoided; I have mine set to a 15% minimum profit and 25 minutes between price checks.

One thing I would like to ask the programmer for is some kind of statistics system: a simple panel showing how many satoshis I make per day. Right now it only shows the total sitting unpaid in the pools.

But to better evaluate my performance, and to see whether my changes are effective when I make them, I need to know the amount of satoshis produced over the last 7 days; even if it is not completely accurate, a good approximation would do.

Right now a day goes by and I have to keep notes while watching the program and the pools, trying to work out the satoshis earned. I think this can be very important for me and for anyone, because I could experiment with removing certain algorithms, or changing the profit-switch percentage or the review interval, and see day by day which configuration earns the most.
--------------

More ideas for the programmer to consider.


For example, if I want to mine a coin, say XVG, mine it only if the difficulty is below X (X being a variable defined by the user). That way, when the coin is easy to mine, it gets mined, and when it is hard, the rig goes back to auto-exchange mining. I don't think that is too difficult: you just have to check the coin's DIFF, which is the same across all pools. When mining a single coin this option doesn't matter, but in the mixed mode that can mine both auto-exchange and a chosen coin, only mine the coin when the difficulty drops below X (the price matters less, since it fluctuates with BTC and the dollar). This way, mixed auto-exchange plus chosen-coin mining only mines the coin when it is easy; the price may rise, but if the difficulty does not drop, I do not want to mine it. I hope the concept is understandable and interesting.
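The difficulty gate proposed above amounts to a one-line rule; a hypothetical sketch (names invented here, not Hash Auger's API):

```python
def choose_mode(coin_difficulty, max_difficulty):
    """Mine the chosen coin only while its network difficulty is below the
    user-defined threshold X; otherwise fall back to auto-exchange mining."""
    return "coin" if coin_difficulty < max_difficulty else "auto-exchange"
```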

Since you have added many new pools, and I thank you for that, it would be good to have an option where I pick a coin, for example XVG again, choose 8 or more pools, mine 2 or 3 hours on each, and then be told which one yields the most XVG. Several factors influence this: the number of miners, the pool's hash rate, stratum speed, stratum stability, even how far I am from the pool. But if the system can mine to the same wallet on different pools and track how much each one produced, that is already a start. Of course it cannot be exact, because with 8 pools at 2 hours each that is 16 hours and conditions can change. But at least I could find out which are the worst pools and remove them for that coin.
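The pool-rotation test could be sketched as below (an illustration only; `mine_on` is an assumed helper that mines the coin on one pool for the interval and reports the amount earned):

```python
def rank_pools(pools, mine_on, hours_each=2):
    """Rotate mining across the given pools and rank them by coins per hour,
    best first, so the worst pools can be removed for this coin."""
    yields = {}
    for pool in pools:
        earned = mine_on(pool, hours_each)   # coins mined during the window
        yields[pool] = earned / hours_each   # normalize to coins per hour
    return sorted(yields.items(), key=lambda kv: kv[1], reverse=True)
```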

I hope my ideas and suggestions are to your liking; I want the best for me, for you, and for everyone. I have been working with teams of programmers for years and am usually the group's idea generator, and I firmly believe in your project.

You always have interesting insights and quality feedback, which I sincerely appreciate.  I will continue to look into the various miners available and see if I can incorporate additional ones if they are stable and offer significant benefits.  I did notice that Nevermore miner tends to run x17 a little faster than Tpruvot, not as big a difference as you said is possible, but an improvement nonetheless. Based on that, I have enabled more algorithms to run on Nevermore in order to provide more opportunities for improved hash rates. Since the software automatically subtracts the miner's dev fee from the price estimates, it should only use the miner if the hash rate increase is greater than the cost of the fee.

Hash Auger does not have a 5-minute price review.  The default and fastest setting is 10 minutes, but it is based completely on the Pool Refresh Rate user setting.  If you set this to 30 minutes, then the software will only compare prices and possibly switch algorithms once every half hour. The software will only change algorithms for a device if the new estimated earnings exceed the Minimum Profit Switch percent, another user-adjustable setting.  Between the two settings, users have a lot of flexibility over how often the software will switch algorithms.
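The interaction of the two settings can be illustrated with a small sketch (my own simplification, not the actual Hash Auger code): on each Pool Refresh, an algorithm switch happens only when the new estimate beats the current one by at least the Minimum Profit Switch percent.

```python
def should_switch(current_estimate, new_estimate, min_profit_pct):
    """Return True when the new algorithm's estimated earnings beat the
    current algorithm's by at least min_profit_pct percent."""
    if current_estimate <= 0:
        return new_estimate > 0
    gain_pct = (new_estimate - current_estimate) / current_estimate * 100
    return gain_pct >= min_profit_pct
```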

I understand that revenue estimates are popular, but they are also among the most complained-about features because they are often inaccurate.  Consider NiceHash: many users complain that those price estimates are not correct. NiceHash revenues should be the simplest to calculate, because the service pays only for hash rate.  It does not have to wait for coins to mature and then be traded at values that are often different from what was estimated when the coins were mined. Trying to calculate revenue projections across different types of pools adds variables that make such calculations even more inaccurate. I would rather not show any earnings projections than potentially mislead users with incorrect estimates.  That being said, I am considering ways to show users a historical record of what was mined so that they can better compare their mining output to their actual payouts.

I recall that you would like the ability to mine a list of specific coins across all enabled pools.  I am still working on a design for that feature. I also understand that more advanced users such as yourself would like the ability to further customize the algorithm-switching parameters.  All of those ideas are interesting and worth considering.  However, as the sole programmer on this project with only a few hours a day to work on improvements, fix issues and provide support, most new enhancements will be introduced in an iterative manner over several releases rather than one big release that includes them all.