
Topic: [ANN] [QRK] Quark | Core 0.10 upgrade - page 396. (Read 1031025 times)

full member
Activity: 205
Merit: 100
August 29, 2013, 02:45:53 PM
Please add the officially compiled miner to the first message. Many people do not know how to compile the source code.

community in vk.com/quark_coin
member
Activity: 60
Merit: 10
August 29, 2013, 01:27:33 PM
Yeah, I saw that; maybe too many connections, or quarkcoind using too much CPU? Or the "too many open files" error; I get that on my yacpool sometimes if I forget ulimit -n 1000000.
Have you seen what p2pool reports during those times, if anything? My yacpool is sometimes not reachable from browsers for a short time (~5 min), but p2pool keeps running fine and miners still submit work when I go check; I think that happens when yacoind uses a lot of CPU.

I will dig into the log and hope it hasn't already been rotated out.

Also, there are still some broken miners that submit shares above the target; it looks like they report everything they calculate:

QP967T5PavPKNdFm4ctqxLdbLMhECU5uoZ

If I were better at Python I would implement a temporary IP ban for such clients, so they at least get some notice.
Or is there a way to pass messages to the miner via stratum and/or HTTP RPC?
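
If someone wants to pick that up, the idea itself is simple to sketch. The following is only a rough Python sketch of a temporary ban list, not p2pool's actual code; where report_bad_share() and is_banned() would be called inside the stratum or HTTP handler is an assumption about whatever pool software is used:

Code:
import time

class TempIPBan:
    """Track clients that keep submitting above-target shares and ban them briefly."""
    def __init__(self, max_bad_shares=50, ban_seconds=600):
        self.max_bad_shares = max_bad_shares
        self.ban_seconds = ban_seconds
        self.bad_counts = {}    # ip -> above-target shares seen since last ban
        self.banned_until = {}  # ip -> unix time when the ban expires

    def report_bad_share(self, ip):
        self.bad_counts[ip] = self.bad_counts.get(ip, 0) + 1
        if self.bad_counts[ip] >= self.max_bad_shares:
            self.banned_until[ip] = time.time() + self.ban_seconds
            self.bad_counts[ip] = 0

    def is_banned(self, ip):
        return time.time() < self.banned_until.get(ip, 0)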
sr. member
Activity: 359
Merit: 250
August 29, 2013, 01:17:24 PM
Anyone knows how to solo mine with the minerd?
Same as with a pool, but the connection information comes from quarkcoin.conf, and you must have the daemon running:
-o 127.0.0.1:rpcport
-u rpcusername
-p rpcpassword

Code:
./minerd -a quark -o 127.0.0.1:11973 -u  -p 

Throws connection error:
Code:
[2013-08-29 18:15:01] 1 miner threads started, using 'quark' algorithm.
[2013-08-29 18:15:01] HTTP request failed: Failed connect to 127.0.0.1:11973; Connection refused
[2013-08-29 18:15:01] json_rpc_call failed, retry after 30 seconds

My quarkcoin.conf file:

Code:
rpcuser=blabla
rpcpassword=bla
listen=1
gen=1
server=1
rpcport=9000
rpcconnect=127.0.0.1
addnode=...

and many other addnode
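
(For what it's worth, the conf above sets rpcport=9000 while the command connects to 11973. Assuming the daemon is actually running with server=1 and listening on the rpcport from the conf, a call matching that conf would look something like:)

Code:
./minerd -a quark -o 127.0.0.1:9000 -u blabla -p bla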
hero member
Activity: 756
Merit: 501
August 29, 2013, 01:16:44 PM
Yeah, I saw that; maybe too many connections, or quarkcoind using too much CPU? Or the "too many open files" error; I get that on my yacpool sometimes if I forget ulimit -n 1000000.
Have you seen what p2pool reports during those times, if anything? My yacpool is sometimes not reachable from browsers for a short time (~5 min), but p2pool keeps running fine and miners still submit work when I go check; I think that happens when yacoind uses a lot of CPU.
member
Activity: 60
Merit: 10
August 29, 2013, 12:37:29 PM
Great post, my thoughts exactly. Post your QRK address already so ppl can tip you!  Tongue

Yeah, when i'm back at work next week, need to get my main address:)



I have no idea what the blockexplorer problem is, so i will reset the database of the blockexplorer (not the pool) and let abe reimport all stuff.
Hopefully that will fix all issues. If not, maybe quarkcoind's database is out of sync
When I searched the error, it seems to relate to the network receiving blocks out of order, or to orphans, confusing Abe. It was supposedly fixed 7 months ago in bitcoin Abe; are you perhaps using an outdated fork?

No, that's relatively up to date (2 or 3 weeks old).
But it may have been that every FastCGI instance that started was trying to import, and the database became inconsistent.
Now FastCGI won't import; the data import is done by a cronjob that runs every minute (so the newest block may not always be visible).
(Hmm, it may also be because the quarkcoind used for the blockexplorer is the same one used by the p2pool instance. I will separate them later.)
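
(The "every minute" part is just a standard cron entry; a minimal sketch, with a hypothetical wrapper script that runs the Abe import:)

Code:
# crontab -e on the blockexplorer host; the script path is only an example
* * * * * /home/pool/bin/abe-import.sh >> /var/log/abe-import.log 2>&1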

@all:

The Blockexplorer is working again.


Edit:
@other p2pool operators:

On the stats page of the pool I regularly see spikes down to zero; if I read the forum correctly, there is also no connection to the miners at those moments. If you find a solution, please share it.
legendary
Activity: 2088
Merit: 1015
August 29, 2013, 12:14:10 PM
Great post, my thoughts exactly. Post your QRK address already so ppl can tip you!  Tongue

Yeah, when i'm back at work next week, need to get my main address:)



I have no idea what the blockexplorer problem is, so i will reset the database of the blockexplorer (not the pool) and let abe reimport all stuff.
Hopefully that will fix all issues. If not, maybe quarkcoind's database is out of sync
When I searched the error, it seems to relate to the network receiving blocks out of order, or to orphans, confusing Abe. It was supposedly fixed 7 months ago in bitcoin Abe; are you perhaps using an outdated fork?
member
Activity: 60
Merit: 10
August 29, 2013, 11:16:53 AM
Great post, my thoughts exactly. Post your QRK address already so ppl can tip you!  Tongue

Yeah, when i'm back at work next week, need to get my main address:)



I have no idea what the blockexplorer problem is, so i will reset the database of the blockexplorer (not the pool) and let abe reimport all stuff.
Hopefully that will fix all issues. If not, maybe quarkcoind's database is out of sync
hero member
Activity: 756
Merit: 501
August 29, 2013, 11:04:05 AM
Great post, my thoughts exactly. Post your QRK address already so ppl can tip you!  Tongue
member
Activity: 60
Merit: 10
August 29, 2013, 10:57:19 AM
Hi folks,

sorry for being around so rarely, but I have too much going on at the moment and over the next few weeks.

Cool, the more nodes the better!  Cheesy

Do you guys think I should put Neisklar's node as the bootstrap address? Maybe it is wiser to have a number of separate p2pools so that users get fewer but higher payouts, minimizing tx fees.

Just so you know: I blocked the peer port on my pool, so you can't use it as a bootstrap node.
At first that was just during setup and testing, to be on the safe side, but by now I'm of the opinion that it's better to have several separate p2pools.

Why?

1) Whether we have one big p2pool network or many "isolated" p2pool nodes makes no difference for decentralization.
The important part is to have pools operated by different people (even if that means fewer fees for me ;-) but that's no problem, I did this mainly for fun and to give the coin a push).

2) The wallet has problems with large transactions. Large doesn't mean a high amount of QRK; it means a high number of inputs.
If we have one big p2pool network, the block reward is distributed across all nodes, which means even more, and smaller, payouts to slower miners.
Then, when you send an amount, the wallet first tries to assemble it from the many smaller amounts (the inputs) you received earlier, resulting in large transactions (see the rough numbers in the sketch below).

If I'm wrong about this, please feel free to point out the errors in my thinking.
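
To put rough numbers on point 2 (the per-input and per-output byte sizes are the usual Bitcoin-style serialization sizes, which Quark should roughly inherit as a Bitcoin fork; the payout size is only illustrative):

Code:
# Rough transaction size when paying 100 QRK out of many small p2pool payouts
amount_to_send = 100.0   # QRK to send
avg_payout     = 0.5     # QRK per payout received earlier (illustrative)

n_inputs = int(amount_to_send / avg_payout)       # 200 inputs needed
tx_size  = n_inputs * 148 + 2 * 34 + 10           # ~148 B/input, 2 outputs, overhead
print(n_inputs, "inputs ->", tx_size, "bytes")    # 200 inputs -> 29678 bytes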



BTW, if you have high hashing power you shouldn't use a pool; you only make it harder for the slower miners.
Just take a litecoin calculator, take the QRK diff, DIVIDE it by 256, and use that diff in the calculator. If it says you would find a block every 2 or 3 hours, that's fine, don't use a pool. At least in my opinion.
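
As a worked version of that recipe (the divide-by-256 factor is from the paragraph above; the diff x 2^32 hashes-per-block relationship is what such calculators assume, and the input values are only placeholders):

Code:
# Rough solo-mining estimate following the "QRK diff / 256" recipe
qrk_diff = 300.0     # plug in the current network difficulty (placeholder)
hashrate = 500e3     # your hashrate in hashes/second (placeholder: 500 kH/s)

effective_diff  = qrk_diff / 256
expected_hashes = effective_diff * 2**32
expected_hours  = expected_hashes / hashrate / 3600
print("expected time per block: %.1f hours" % expected_hours)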


I'm too lazy to do proper quotations.
Next topic is miner optimization:

Please have a look at this site:
http://bench.cr.yp.to/primitives-sha3.html
http://bench.cr.yp.to/primitives-hash.html

There you will find DETAILED comparisons AND OPTIMIZED code for the hash functions used, for different platforms, CPU instruction sets and so on.
I'm sorry to say that I currently don't have the time to work on a quicker miner, and some people have already done some really good optimizations. My last attempts mostly produced errors, so I hope the Quark CPU miner is a good starting point from which people more experienced with optimization and assembler can produce some really great miners.

(Personally I would like to see a Win64 build with SSE4 and AES support ;-) )

Here is also a halfway decent list of the instruction sets supported by different CPUs:
http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86_002d64-Options.html
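
If you want to experiment with those instruction sets yourself, the easiest route is via CFLAGS. This assumes the Quark CPU miner follows the usual pooler-cpuminer autotools layout, so treat it as a sketch:

Code:
./autogen.sh
CFLAGS="-O3 -march=native" ./configure   # -march=native enables SSE4/AES-NI if the CPU supports them
make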


legendary
Activity: 1197
Merit: 1000
August 29, 2013, 10:26:16 AM
You can try ALPHA stratum pool here:

http://qrk.coinmine.pl
Stratum Port: 6010

fee is now 0% but will rise to 1% later on

feeleep
legendary
Activity: 2674
Merit: 3000
Terminated.
August 29, 2013, 10:10:42 AM
As the supply gets reduced, it should have a better price, hopefully.
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
August 29, 2013, 09:33:42 AM
QfJNcqsWbZ6hi1bxVhQEqCm3kPyHeR6juF just started and found a block:

QfJNcqsWbZ6hi1bxVhQEqCm3kPyHeR6juF: 5.12
QfD5gf3Xqnj1PWN5xFp4dbsehkwmcJ5vX5: 63.10611
Qb1VvQJorRQmJB41Rfzu82HUXrytKFNTPE: 955.77388

that's crazy...  so, after that matures, i'll send 500,  seems only fair, haha
legendary
Activity: 2674
Merit: 3000
Terminated.
August 29, 2013, 09:22:27 AM
Still undervalued  Cool

Reward is going to halve soon. Last chance to buy cheap quarks Grin
Probably it is Cheesy
full member
Activity: 193
Merit: 100
August 29, 2013, 08:49:09 AM
Still undervalued  Cool

Reward is going to halve soon. Last chance to buy cheap quarks Grin
legendary
Activity: 2674
Merit: 3000
Terminated.
August 29, 2013, 07:29:25 AM
Still undervalued  Cool
hero member
Activity: 756
Merit: 501
August 29, 2013, 07:04:37 AM
@ zvs: The 0% fee will surely attract some people. Latency is very important too especially with p2pools.
My server is in Frankfurt, Germany so should be good for EU people. Where's yours?  Smiley

@ ahmed: More pools, more network security!  Cheesy
hero member
Activity: 518
Merit: 500
Bitrated user: ahmedbodi.
August 29, 2013, 07:04:11 AM
I can fire up a pool like my SRC pool (crypto-expert.com/SRC) if needed; however, the fee will be 3% so I can recoup the costs of coding the fresh backend.
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
August 29, 2013, 06:59:34 AM
edit: though the pool I just put up will get more orphaned p2pool shares, since it's 500 kH/s vs 50 MH/s or whatever. Soon to be 0 since I'm turning off my cpuminer... oh, I'm getting the right number of shares, but it seems to be underreporting my hashrate (it should be that 500 kH/s, it says 50 kH/s, but going by shares found it's more like 350 kH/s).
Your miner is "outdated", a fresh git clone should help https://bitcointalksearch.org/topic/m.3021932

I'll think about the bootstrap address; slow miners on a 200 MH pool would receive <1 QRK payouts, not sure if that'd be worth it.
Ah, I'm using a windows build, too much work, heh..

I might mess w/ it later on my virtual machine w/ ubuntu.. 

I guess for now I'll disconnect my node and run a 'separate' pool if we aren't going to connect everyone up...  right now I'm just shooting myself in the foot (much higher chance of shares being orphaned).

Though I guess it's not much of a pool, if nobody switches...  I might run 1 or 2 threads on my CPU, I guess  Grin
hero member
Activity: 756
Merit: 501
August 29, 2013, 06:55:30 AM
edit: though the pool I just put up will get more orphaned p2pool shares, since it's 500 kH/s vs 50 MH/s or whatever. Soon to be 0 since I'm turning off my cpuminer... oh, I'm getting the right number of shares, but it seems to be underreporting my hashrate (it should be that 500 kH/s, it says 50 kH/s, but going by shares found it's more like 350 kH/s).
Your miner is "outdated", a fresh git clone should help https://bitcointalksearch.org/topic/m.3021932

I'll think about the bootstrap address; slow miners on a 200 MH pool would receive <1 QRK payouts, not sure if that'd be worth it.
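
Just to make the arithmetic behind that explicit (the block reward and hashrates here are illustrative placeholders, and real p2pool payouts also depend on the share window):

Code:
# Back-of-the-envelope p2pool payout: roughly proportional to your hashrate share
block_reward  = 1024.0   # QRK per block (illustrative)
pool_hashrate = 200e6    # 200 MH/s total on the pool
my_hashrate   = 150e3    # a slower CPU miner, 150 kH/s

payout_per_block = block_reward * my_hashrate / pool_hashrate
print("%.2f QRK per block the pool finds" % payout_per_block)   # ~0.77 QRK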
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
August 29, 2013, 06:38:02 AM
Cool, the more nodes the better!  Cheesy

Do you guys think I should put Neisklar's node as the bootstrap address? Maybe it is wiser to have a number of separate p2pools so that users get fewer but higher payouts, minimizing tx fees.
I dunno, it seems to me like it'd be better to have all the p2pool nodes connected... I noticed that your pool had a number of double-solved blocks (115761, 115750, 115742), so some orphans there.

But anyway, hooking up all the p2pool nodes would mean fewer orphans for p2pool-solved blocks. It should already be reduced a decent amount (compared to solo miners), since my quarkcoind has tons of connections.

edit: though the pool I just put up will get more orphaned p2pool shares, since it's 500 kH/s vs 50 MH/s or whatever. Soon to be 0 since I'm turning off my cpuminer... oh, I'm getting the right number of shares, but it seems to be underreporting my hashrate (it should be that 500 kH/s, it says 50 kH/s, but going by shares found it's more like 350 kH/s).