How do I solo mine with minerd?
I don't know of any publicly available pools, which is kind of a pain, as one would help with testing my version of the miner (verifying hash rates, etc.).
There is a GPU miner, however it crashes on me after a while every time I run it (compiled for Ubuntu) and I have not gotten around to figuring out why. There is a Windows 64-bit binary available for it, although that would mean trusting a complete stranger who has made only 2 posts on the forum; the source does at least do what the person claims: https://bitcointalksearch.org/topic/m.10012906
To solo mine with slimminer:
./minerd -o 127.0.0.1:41683 -O username:password
This will run it with one thread per core, while
./minerd -o 127.0.0.1:41683 -O user:pass -t 16
will run it with 16 threads. You will also need to configure your slimcoin.conf with a matching username ("rpcuser=username") and password ("rpcpassword=password"), and if you are mining from a different machine, add "rpcallowip=192.168.1.102" for example.
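A minimal slimcoin.conf along those lines would look something like this (the credentials and IP are just the placeholders from the commands above; if you run the Qt client rather than slimcoind you may also need server=1 to enable the RPC server):

rpcuser=username
rpcpassword=password
rpcallowip=192.168.1.102
server=1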
As for the best miner, I think the GPU miner would be it if it could be made stable. Other than that, I think there are several improvements that can be made to kryptoslab's optimized version, which I have in my own version, but it's not ready to be released: the more I optimize it, the more inaccurate the hashrate calculation seems to get, so I need a good way to profile my changes objectively.
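For what it's worth, the sanity check I trust more than the miner's own hashrate readout is just timing a fixed nonce range end to end. A rough sketch only: do_one_hash here is a stand-in workload (plain SHA-256 over a dummy 80-byte header), and you would drop your real dcrypt/scanhash call in its place.

#include <openssl/sha.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Stand-in workload; replace the body with a call into the real dcrypt path. */
static void do_one_hash(uint32_t nonce, uint8_t digest[SHA256_DIGEST_LENGTH])
{
    uint8_t header[80] = {0};                 /* dummy fixed header */
    memcpy(header + 76, &nonce, 4);           /* nonce position is just illustrative */
    SHA256(header, sizeof(header), digest);
}

int main(void)
{
    const uint32_t count = 1000000;           /* same nonce range every run, so runs are comparable */
    uint8_t digest[SHA256_DIGEST_LENGTH];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint32_t n = 0; n < count; n++)
        do_one_hash(n, digest);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f hashes/sec\n", count / secs);
    return 0;
}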
Some optimizations of the dcrypt function for mining:
- calculating the half state for the first hash once, then reusing it for each nonce. You only need 1 SHA-256 round instead of 2, at the cost of a little memory-copying overhead since I'm using OpenSSL (see the midstate sketch after this list)
- looping the index through the first hash/skip-list to the end to see if it aligns, and aborting the hash if it doesn't, i.e. if the end value does not match on the first pass. This gives me the biggest speed boost.
- not converting the skip-list to ASCII and then back again, since it will only be hashed once
- precalculating all of the possible hashes for the first 4 rounds of the buffer. This requires a lot of memory but gives a pretty good speedup. Right now I have a different version for my laptop, which has less RAM and so only precomputes the first 3 hashes; a computer with vast amounts of memory could push it to 5 (~3GB of RAM needed for that though, to store the resulting digests as well as the SHA-256 context of the result hash; for me that attempt slowed down or crashed randomly, and the table took 20 seconds to calculate, which made it faster to load from a file). This saves ~2 SHA-256 calls for however deep you go, but replaces them with a couple of memory copies to fetch the precalculated state. Not a bad speedup though.
- using a fixed-size buffer and limiting the count to 16. kryptoslab's version seems optimal at around 100 max iterations, however if you step through the skip-list to find out how many rounds are needed before actually hashing anything but the first hash, you can abort without wasting the hashing power. Since you know the count will never go over 16, you can just allocate the max-sized buffer instead of using the self-made expanding default one. I actually think a fixed buffer should go into the main client's dcrypt function, with the result hash being hashed progressively whenever the buffer fills (see the progressive-hashing sketch after this list). That way you can eliminate the large memory requirements of dcrypt. I really don't like the idea of randomly sized memory requirements for a hash function; some of the iteration counts I have seen go over 3K, which means 64*3K bytes being allocated and deallocated over and over. I wonder what the max iteration count is on the blockchain... It also bugs me that the original dev wrote this function with a custom memory manager that does not check whether the memory was in fact properly allocated: https://github.com/kryptoslab/slimcoin/blob/master/src/dcrypt.cpp (extend_array). That means if a client that is low on memory and can't page some out receives a block whose hash has a huge count, you could actually crash it remotely. Extremely unlikely, but annoying that it's possible.
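Here is roughly what the half-state trick from the first bullet looks like with OpenSSL. Sketch only: I'm assuming a Bitcoin-style 80-byte header whose first 64 bytes never change while scanning and whose nonce sits in the last 4 bytes, which may not match dcrypt's exact input layout, and scan_nonces is a made-up name.

#include <openssl/sha.h>
#include <stdint.h>
#include <string.h>

/* Sketch: assumes an 80-byte header with a constant first 64-byte block and the
   nonce in the tail. Adjust offsets for the real input layout. */
void scan_nonces(const uint8_t header[80], uint32_t start, uint32_t count)
{
    SHA256_CTX midstate;
    SHA256_Init(&midstate);
    SHA256_Update(&midstate, header, 64);    /* constant first block, compressed once */

    uint8_t tail[16];
    memcpy(tail, header + 64, 16);

    for (uint32_t n = start; n < start + count; n++) {
        memcpy(tail + 12, &n, 4);            /* assumed nonce position within the tail */

        SHA256_CTX ctx = midstate;           /* cheap struct copy instead of re-hashing block 1 */
        SHA256_Update(&ctx, tail, 16);

        uint8_t first_hash[SHA256_DIGEST_LENGTH];
        SHA256_Final(first_hash, &ctx);
        /* first_hash is what the rest of dcrypt would work from */
        (void)first_hash;
    }
}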
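And a sketch of what I mean by hashing progressively instead of using extend_array. The names are made up and this is not the real dcrypt code; the point is just that SHA-256 over a concatenation gives the same digest whether you hash one giant buffer at the end or feed the pieces into the context as they are produced, so memory stays constant no matter how high the iteration count gets.

#include <openssl/sha.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for whatever dcrypt appends each round
   (the 64-char ascii sub-hash in the real code). */
static void make_subhash(uint32_t round, uint8_t out[64])
{
    memset(out, 'a', 64);
    memcpy(out, &round, sizeof(round));
}

/* Constant-memory version of "hash everything that was appended": no expanding
   buffer, each round is folded straight into the running context. */
void progressive_result(uint32_t rounds, uint8_t digest[SHA256_DIGEST_LENGTH])
{
    SHA256_CTX result;
    SHA256_Init(&result);

    uint8_t tmp[64];                        /* fixed scratch, no 64*count allocation */
    for (uint32_t r = 0; r < rounds; r++) {
        make_subhash(r, tmp);
        SHA256_Update(&result, tmp, 64);    /* fold each round in immediately */
    }
    SHA256_Final(digest, &result);          /* same digest as hashing one big buffer */
}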