My client shouldn't have anything to do with the problem; I've been running it without issues for months (actually using it to play, not just leaving it open).
However, I've noticed a problem when calling the name_pending command often: sometimes the call hangs the daemon. I think this has something to do with a deadlock in the huntercoind code or something like that, which should be checked thoroughly (the code contains a lot of locks).
(My client doesn't use name_pending automatically, so it couldn't have triggered this problem either.)
Yes, there are probably some deadlocks in the code. If you spot a particular one, please open an issue on GitHub. It should be relatively easy to check for correct locking in a particular case (once I know where to look).
I just know that looping requests to name_pending cause some lock-ups, so that's the only thing I can say (the call is to the name_pending RPC procedure); I don't have control over the overall huntercoin process and can't debug it.
If you call name_pending in a loop every 5 seconds you should notice the problem; then maybe you'll have a better chance of debugging into the deadlocked code. Something like the loop sketched below should be enough to reproduce it.
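A minimal reproduction sketch, assuming a POSIX system and that huntercoind is on the PATH and accepts RPC commands on the command line like the old bitcoind did (both are assumptions on my part, not something from the original report):

#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    for (;;) {
        auto start = std::chrono::steady_clock::now();
        // popen blocks until the command returns; a deadlocked daemon never
        // answers, so a call that never prints the line below is the symptom.
        FILE* pipe = popen("huntercoind name_pending", "r");
        if (!pipe) { std::perror("popen"); return 1; }
        char buf[4096];
        while (fgets(buf, sizeof(buf), pipe) != nullptr) { /* discard output */ }
        pclose(pipe);
        auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("name_pending returned after %lld s\n", (long long)elapsed);
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
}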
Anyway, talking about name_pending and other parts of the code that use many locks: wouldn't it be possible to use a back buffer and flip the buffers when needed? This is what I do in my behaviours when parsing the game state and I need to do some time-consuming work on that data:
E.g. in your case, when you process a new block and generate a new game_state, you lock everything, preventing all "read-only commands" from being processed, while you could build the new game state into a back buffer and then, once it has been processed, take a lock just to swap the new game_state with the old one. This probably isn't the right example (I suppose game_state isn't actually processed this way), but I just wanted to point out the general problem.
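A minimal sketch of what I mean by the back-buffer/swap idea; GameState, g_state, GetStateSnapshot and AdvanceState are names I made up for illustration, not the real Huntercoin code:

#include <memory>
#include <mutex>

// Stand-in for the real game state structure.
struct GameState { int nHeight = 0; /* ... players, loot, etc. ... */ };

std::shared_ptr<const GameState> g_state = std::make_shared<GameState>();
std::mutex g_stateMutex;   // protects only the pointer swap, not the processing

// Readers (the "read-only commands") grab a snapshot and never wait for the
// block processing to finish.
std::shared_ptr<const GameState> GetStateSnapshot() {
    std::lock_guard<std::mutex> lock(g_stateMutex);
    return g_state;
}

// The writer builds the next state into a private back buffer with no lock
// held, and only locks for the cheap pointer swap at the end.
void AdvanceState() {
    auto next = std::make_shared<GameState>(*GetStateSnapshot());
    next->nHeight += 1;   // ... expensive processing of the new block goes here ...
    std::lock_guard<std::mutex> lock(g_stateMutex);
    g_state = std::move(next);
}

int main() {
    AdvanceState();
    return GetStateSnapshot()->nHeight == 1 ? 0 : 1;
}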
Maybe looking at the code of name_pending would be more useful:
I see that it applies these locks:
CRITICAL_BLOCK (cs_mapTransactions)
CRITICAL_BLOCK (cs_main)
and I see that those same locks are taken in many other places that handle transactions; why?
I mean, name_pending just reads data, and I don't think it matters whether the data is up to date with millisecond precision: since everything is multithreaded, by the time the returned value is used it could already have changed anyway. So the lock is only there to prevent memory faults from reading memory while it is being written. But why not just use a back buffer here too, where the writing threads write to a back buffer and take locks only to swap the new data with the old? That would drastically improve the daemon's performance and greatly reduce the deadlock problems.
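To make it concrete, here is a hypothetical version of a read-only RPC in that style: copy the data under a short lock, then build the reply with no lock held. pendingNames, cs_pendingNames and NamePendingReply are invented names, not the real Huntercoin identifiers:

#include <map>
#include <mutex>
#include <string>
#include <vector>

// Stand-ins for the pending-name data that name_pending reads.
std::map<std::string, std::string> pendingNames;
std::mutex cs_pendingNames;

// Copy under a short lock, then do the (potentially slow) formatting of the
// reply with no lock held at all.
std::vector<std::string> NamePendingReply() {
    std::map<std::string, std::string> snapshot;
    {
        std::lock_guard<std::mutex> lock(cs_pendingNames);
        snapshot = pendingNames;            // the only critical section: a copy
    }
    std::vector<std::string> reply;
    for (const auto& entry : snapshot)      // no lock held while formatting
        reply.push_back(entry.first + ": " + entry.second);
    return reply;
}

int main() {
    {
        std::lock_guard<std::mutex> lock(cs_pendingNames);
        pendingNames["example/name"] = "pending update";
    }
    return NamePendingReply().size() == 1 ? 0 : 1;
}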
I don't know Qt and I don't know how it handles multithreaded arrays, but maybe there is even a way to read from a thread-safe array without worrying about locks at all, with the locking handled internally and applied only on writes (but this is just a guess).
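For what it's worth, plain C++ already allows something like that with the C++11 atomic shared_ptr free functions: readers never take an explicit lock, and the synchronisation happens inside the standard library. This is just an illustration of the idea with made-up names, not anything taken from the Qt or Huntercoin code:

#include <atomic>
#include <memory>
#include <vector>

using Snapshot = std::vector<int>;   // stand-in for the shared, mostly-read data

std::shared_ptr<const Snapshot> g_snapshot = std::make_shared<Snapshot>();

// Readers: no explicit lock at all, just an atomic load of the pointer.
std::shared_ptr<const Snapshot> ReadSnapshot() {
    return std::atomic_load(&g_snapshot);
}

// Writer: build the new data privately, then publish it with an atomic store.
// (C++20 also offers std::atomic<std::shared_ptr> for the same purpose.)
void PublishSnapshot(Snapshot next) {
    std::atomic_store(&g_snapshot,
        std::shared_ptr<const Snapshot>(std::make_shared<Snapshot>(std::move(next))));
}

int main() {
    PublishSnapshot({1, 2, 3});
    return ReadSnapshot()->size() == 3 ? 0 : 1;
}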
I'm not sure I explained what I mean very well.