Author Topic: Bitcoind stopping along with complete VPS shutdown. (Read 1212 times)

hero member
Activity: 490
Merit: 500
For anyone else running on a memory-restricted system, perhaps this is useful.

From running just:
Code:
bitcoind -daemon
I changed it to:
Code:
bitcoind -daemon -dbcache=30 -par=4 -banscore=20 -disablewallet

That reduced memory usage from about 1.3GB to about 0.85GB, which helped my node a lot.

I'll see how this develops as more connections are made; I'm currently sitting at 10. I'd guess the dbcache flag makes the biggest difference. If anyone has more optimization techniques, I'm all ears.
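
For reference, the same settings can also live in ~/.bitcoin/bitcoin.conf, so they survive restarts. A minimal sketch; maxconnections is an extra knob I'm suggesting here, not something from the command line above:

Code:
# ~/.bitcoin/bitcoin.conf
daemon=1
dbcache=30          # database cache size in MiB
par=4               # script verification threads
banscore=20         # threshold for disconnecting misbehaving peers
disablewallet=1     # no wallet needed on a pure relay node
maxconnections=20   # optionally cap peers to bound memory further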
hero member
Activity: 490
Merit: 500
Just a thought, but another possibility might be that the Linux out-of-memory killer was tripped. Many VPS providers don't create a swap file by default, which might be a cause of this (but be careful, creating a swap file might cost you $ depending on the provider and on where the swap file is created).

I'm familiar with that killer from before; that's why I upgraded the RAM on the server. Memory usage has been constantly high, though, and I was also running testnet, which didn't make it any better. Bitcoind's debug.log gave no indication of memory failure this time, however, and I couldn't see any in the other logfiles either, but the first time, before the RAM upgrade, the logfiles clearly showed memory was an issue.
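
For anyone checking for the same thing: when the OOM killer fires it writes to the kernel ring buffer, so something along these lines should surface it (paths assume a stock Debian syslog setup; dmesg may be restricted inside an OpenVZ container):

Code:
dmesg | grep -i 'killed process'
grep -i 'out of memory\|oom' /var/log/syslog /var/log/kern.log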

I'm just surprised I did not get an e-mail from the host when the VPS did shut down.

As for top, it currently shows:

Code:
KiB Mem:   2097152 total,  2026652 used,    70500 free,        0 buffers
KiB Swap:  2097152 total,    48504 used,  2048648 free,  1216420 cached
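
Worth noting: 1216420 KiB of that "used" figure is page cache, which the kernel hands back under pressure, so actual application usage is closer to 0.8GB. The "-/+ buffers/cache" line of free shows the split directly, though OpenVZ containers sometimes report these numbers oddly:

Code:
free -m   # the -/+ buffers/cache row shows usage excluding cache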

The host says there's 2GB of vSwap, and this is what I found about it:

Code:
What is VSwap?

The new CentOS 6 OpenVZ kernel has a new memory management model, which supersedes User beancounters. It is called VSwap.

When the guaranteed RAM limit is reached, memory pages belonging to the container are pushed out to so-called virtual swap (vswap). The difference between normal swap and vswap is that with vswap no actual disk I/O usually occurs. Instead, a container is artificially slowed down, to emulate the effect of real swapping. Actual swap-out occurs only if there is a global memory shortage on the system.

So perhaps this is the problem?
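
One way to check, since this is OpenVZ: the container's resource limits and failure counters are exposed in /proc/user_beancounters (may require root to read). A nonzero failcnt in the last column means a limit was actually hit:

Code:
cat /proc/user_beancounters   # nonzero failcnt = that limit was hit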

Code:
ps -e -o pid,vsz,comm= | sort -n -k 2
previously gave:

Code:
1632 124348 rsyslogd
 7784 434028 bitcoind (testnet, I assume)
 7816 1246896 bitcoind

I've shut down the testnet node, I'm limiting connections on bitcoind, and I will try to clean up other processes I don't need.
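
A side note on those numbers: VSZ counts reserved address space, not resident memory, so it overstates real usage. Sorting by RSS gives a better picture; a minimal variant of the command above:

Code:
ps -e -o pid,rss,comm --sort=rss | tail -n 5   # top 5 by resident memory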



hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
Just a thought, but another possibility might be that the Linux out-of-memory killer was tripped. Many VPS providers don't create a swap file by default, which might be a cause of this (but be careful, creating a swap file might cost you $ depending on the provider and on where the swap file is created).
hero member
Activity: 490
Merit: 500

I would contact them now. Seems like they shut it down, but allowed bitcoind to shut down in time too. syslog will have some info.

The timestamps in bitcoind's debug.log and the system logs were 4 hours apart (presumably a timezone difference, since debug.log timestamps are UTC), but once I realized that and looked closer I found 'Received signal 15; terminating' for a few processes. Both shutdowns, which happened over the last 4 days, occurred at different times of the day. I see nothing that should've caused this, really.
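
If wtmp is intact inside the container, this should show exactly when the shutdowns were recorded (no guarantees on an OpenVZ container, but worth a try):

Code:
last -x shutdown reboot | head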

And if I'd done something wrong, I suspect the host would've notified me. There was a time when I ran a process hogging 100% CPU, and I was notified quite quickly then.

Anyway, filed a ticket with the host, and now we shall see.

legendary
Activity: 1358
Merit: 1001
https://gliph.me/hUF

I would contact them now. Seems like they shut it down, but allowed bitcoind to shut down in time too. syslog will have some info.
hero member
Activity: 490
Merit: 500
That's not a crash, that's a clean shutdown, which is exactly what's supposed to happen when the OS is shut down for whatever reason. You should probably ask the hosting company what that reason is.

Updated the title to reflect this; I'll put the blame for the poorly phrased thread title on lack of sleep. Wink I'll see if the problem persists after the upgrade; if it does, I'll contact the hosting company.
legendary
Activity: 4542
Merit: 3393
Vile Vixen and Miss Bitcointalk 2021-2023
That's not a crash, that's a clean shutdown, which is exactly what's supposed to happen when the OS is shut down for whatever reason. You should probably ask the hosting company what that reason is.
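
For anyone who wants to verify this: sending bitcoind a SIGTERM on a box you can safely stop produces exactly that sequence in debug.log. A quick check, assuming pidof is available:

Code:
kill -TERM $(pidof bitcoind)
tail ~/.bitcoin/debug.log   # should show the same clean shutdown lines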
hero member
Activity: 490
Merit: 500
Just thought I'd give a heads-up about this issue, in case it's something others have experienced.

System: OpenVZ VPS
OS: Debian GNU/Linux 7.6 (wheezy)
RAM: 2GB
HDD: 150GB
Bitcoind: Bitcoin Core Daemon version v0.9.2.1-g354c0f3-beta

From ~/.bitcoin/debug.log:

Code:
receive version message: /Satoshi:0.8.6/: version 70001, blocks=324395, us=xxx.xxx.xxx.xxx:8333, them=0.0.0.0:0,peer=xxx.xxx.xxx.xxx:59760
2 mins after this:
Code:
net thread interrupt
opencon thread interrupt
addcon thread interrupt
dumpaddr thread stop
msghand thread interrupt
Shutdown : In progress...
RPCAcceptHandler: Error: Operation canceled
RPCAcceptHandler: Error: Operation canceled
StopNode()
Shutdown : done

The entire VPS also went offline; I'm not sure if this happened immediately, or a while after bitcoind shut down. I didn't find any interesting info in the system log files. This happened twice, over the span of two days. Prior to this, the bitcoind daemon had been running fine for a long time without many problems (since the upgrade to 0.9.2.1). The VPS does not seem to be compromised, and nobody except me currently has credentials for accessing the server, apart from the staff of the hosting company of course. There are no bitcoins stored on the VPS; it's only acting as a helping node on the network.

There have been no e-mail notifications from the hosting company, and frankly I haven't bothered them with the issue yet; I thought I'd just upgrade to 0.9.3 and see whether the issue persists or is solved.

Previously I had some memory issues, but since upgrading the memory this has not been a problem. Any suggestions as to what may have caused this? If there's some exploit whereby an attacker can deny service, that's something that should be looked into. The core of the problem is probably something else entirely, though. Just wanted to see if anybody had some insight into this.