
Topic: Running out of RAM with the REST API /rest/block/ (Read 263 times)

staff
Activity: 3458
Merit: 6793
Just writing some code
Neither I nor any of the other developers are observing unbounded memory usage. It does go up a bit, but it always plateaus pretty quickly and doesn't seem to get into memory exhaustion.

Seeing some memory usage increase with this level of block verbosity is expected since the block has to be read into memory, decoded, and turned into JSON text with a ton of extra details that are also not explicitly in the block file itself, and text is not very efficient.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
What is extremely strange is that it only happens when the block is queried with full verbosity. When the block is queried without verbosity and I query all the txs of the block separately (with my Python script), I get no memory leaks; in fact, my memory remains stable.

I want to note, since you use Python, that in many cases the Python interpreter does not return freed memory to the operating system but instead holds onto it to reuse for other variables, even if you explicitly delete the variable or call Python's garbage-collection functions. It's not directly related to your problem, but if your script sees high memory usage that never goes down, this is why. The only way around that is to run your code in a subprocess that you create from your script.
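The subprocess workaround mentioned above can be sketched like this: do the heavy JSON-decoding step in a short-lived child interpreter so any memory it allocates is returned to the OS when the child exits. The snippet below is a toy stand-in for real block parsing, not the poster's actual script.

```python
# Sketch: run heavy parsing in a fresh interpreter so its memory dies
# with the child process instead of lingering in the long-running script.
import json
import subprocess
import sys

def run_isolated(code, payload):
    """Run a Python snippet in a fresh interpreter, feeding it JSON on stdin."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# Toy "block": count its transactions inside the child process.
snippet = "import json, sys; b = json.load(sys.stdin); print(len(b['tx']))"
tx_count = run_isolated(snippet, {"tx": ["a", "b", "c"]})  # -> "3"
```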

But anyway, I have also previously experienced this problem in Bitcoin Core that you are referring to. The large memory usage happens when you fetch many transactions at once (as happens when you fetch a block), because the node has to pull extra information for each of these transactions from other sources, i.e. other blocks. The higher the verbosity, the worse the memory usage becomes. It is also largely independent of the number of RPC threads you are running, but the memory usage becomes noticeably higher if you batch many RPC calls together.
newbie
Activity: 10
Merit: 10
Hello,

I'm encountering a persistent issue with RAM exhaustion while running Bitcoin Core on my system.
Here's a brief overview of the problem:

Problem Description:

I have a Bitcoin Core node set up on my system, and I query it every minute with a Python script that checks for new blocks.

Using the API /rest/block/<blockhash>.json, the response is almost immediate, but the daemon's memory grows a lot with each block query, and none of it is released afterwards.

To work around this I have to close Bitcoin Core and reopen it from time to time.

But if I use the API /rest/block/notxdetails/<blockhash>.json, process the JSON response, and query each tx separately, my memory always remains the same, but it is very slow.

Is there any solution to this problem?
Thanks!

PARTIAL SOLUTION:

It turns out that I downloaded bitcoind, installed it, created symbolic links to the folders where my blocks are located, ran bitcoind -daemon, and ran my scripts. To my surprise, my memory barely changed: I queried more than a thousand blocks and my memory increased by barely 150 MB, which is quite insignificant considering it was the last 1000 blocks.

With the Bitcoin Core desktop version (Flathub), querying just one block with full verbosity increased my memory by approximately 100 MB.
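The symlink step described in the partial solution above can be sketched with the standard library. The two paths are assumptions: a Flathub install keeps its data under ~/.var/app/org.bitcoincore.bitcoin-qt/..., while a plain bitcoind looks in ~/.bitcoin by default; verify both on your own system before linking.

```python
import os
from pathlib import Path

# Assumed locations; adjust for your own setup before running.
flathub_data = Path.home() / ".var/app/org.bitcoincore.bitcoin-qt/data/.bitcoin"
bitcoind_data = Path.home() / ".bitcoin"

def link_datadir(src, dst):
    """Create dst as a symlink to src unless something already exists there."""
    if dst.exists() or dst.is_symlink():
        return False
    os.symlink(src, dst, target_is_directory=True)
    return True

# link_datadir(flathub_data, bitcoind_data)  # run once, after checking paths
```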

newbie
Activity: 10
Merit: 10
Thank you for the actions taken, how do I follow up on these issues?
If you have a github account, you can subscribe to the issue to get notifications when comments are posted to the issue. If you have anything else to add, you can also comment on it yourself.

How long does this whole process take?
It depends entirely on if anyone is interested in debugging this, and how hard it actually is to debug. Either way, someone will have to do the work to figure out what is wrong, then open a PR to fix it. Then the PR will need to be reviewed, and that can take anywhere from a few days to several years, depending on how complicated it is. You won't get the benefits of it until it hits a release, and the next major release is scheduled for October. There may be a backport release containing this fix before then, but these are not guaranteed nor are they scheduled, so don't count on it.

Basically, it'll be at least several months before a release is made with this fixed.

Thank you for your time and your willingness to explain the workflow to me...
staff
Activity: 3458
Merit: 6793
Just writing some code
Thank you for the actions taken, how do I follow up on these issues?
If you have a github account, you can subscribe to the issue to get notifications when comments are posted to the issue. If you have anything else to add, you can also comment on it yourself.

How long does this whole process take?
It depends entirely on if anyone is interested in debugging this, and how hard it actually is to debug. Either way, someone will have to do the work to figure out what is wrong, then open a PR to fix it. Then the PR will need to be reviewed, and that can take anywhere from a few days to several years, depending on how complicated it is. You won't get the benefits of it until it hits a release, and the next major release is scheduled for October. There may be a backport release containing this fix before then, but these are not guaranteed nor are they scheduled, so don't count on it.

Basically, it'll be at least several months before a release is made with this fixed.
newbie
Activity: 10
Merit: 10
I seem to have reproduced this on my machine, at least calling getblock with verbosity 2 has increased the memory usage for most of the calls.

I've opened an issue on github so other developers can take a look and debug this: https://github.com/bitcoin/bitcoin/issues/30052

The behavior that I've observed is that it doesn't always increase the memory usage. This suggests to me that it's probably loading something into an unbounded cache, and the cache is growing too large causing the crash.

I think the error can be reproduced with any current block, i.e. blocks with a lot of data.

Thank you for the actions taken, how do I follow up on these issues?

How long does this whole process take?

Thank you
staff
Activity: 3458
Merit: 6793
Just writing some code
I seem to have reproduced this on my machine, at least calling getblock with verbosity 2 has increased the memory usage for most of the calls.

I've opened an issue on github so other developers can take a look and debug this: https://github.com/bitcoin/bitcoin/issues/30052

The behavior that I've observed is that it doesn't always increase the memory usage. This suggests to me that it's probably loading something into an unbounded cache, and the cache is growing too large causing the crash.
newbie
Activity: 10
Merit: 10
~
You have a very good processor, actually.
Let me ask:
Is it an additional SSD, or the one you installed your operating system on?

First of all, thanks to everyone who has taken the time to help me solve or detect the bug...

My Fedora distro is installed on my laptop's SSD, and my org.bitcoincore.bitcoin-qt directory is on my external SSD. I'm using the Bitcoin Core Flathub installation.
I'm going to try installing bitcoind and test whether the problem remains.
newbie
Activity: 10
Merit: 10
Seems like there's a memory leak somewhere. Do you see the same problem if you use the getblock RPC with verbosity 2?

e.g.
Code:
bitcoin-cli getblock <blockhash> 2

Both that REST API and the RPC use almost the same code, so this would help narrow down what's causing the problem.

First of all, thanks to everyone who has taken the time to help me solve or detect the bug...

I ran the command
Code:
bitcoin-cli getblock 00000000000000000002a7c4c1e48d76c5a37902165a270156b7a8d72728a054 2

and the memory leak problems continue: every query increases the memory allocated to the Bitcoin Core process by approximately 100 MB.

What is extremely strange is that it only happens when the block is queried with full verbosity. When the block is queried without verbosity and I query all the txs of the block separately (with my Python script), I get no memory leaks; in fact, my memory remains stable.

I'm curious: can you query that block and watch your memory? Is it just me with this problem?
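To make per-query comparisons like the one above reproducible, the daemon's resident memory can be sampled before and after each call. A Linux-only sketch reading VmRSS from /proc (the helper name is mine; pass bitcoind's PID):

```python
# Linux-only: sample a process's resident set size (VmRSS) from /proc,
# so memory growth per RPC/REST call can be logged rather than eyeballed.
def rss_kib(pid):
    """Return the process's resident memory in KiB, or None if not found."""
    with open(f"/proc/{pid}/status") as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # field is reported in KiB
    return None
```

Calling it before and after `bitcoin-cli getblock <hash> 2` and diffing the two samples gives the per-query growth directly.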
staff
Activity: 3458
Merit: 6793
Just writing some code
Seems like there's a memory leak somewhere. Do you see the same problem if you use the getblock RPC with verbosity 2?

e.g.
Code:
bitcoin-cli getblock <blockhash> 2

Both that REST API and the RPC use almost the same code, so this would help narrow down what's causing the problem.
sr. member
Activity: 476
Merit: 299
Learning never stops!
~
You have a very good processor, actually.
Let me ask:
Is it an additional SSD, or the one you installed your operating system on?
newbie
Activity: 10
Merit: 10
Using the API /rest/block/<blockhash>.json, the response is almost immediate, but the daemon's memory grows a lot with each block query, and none of it is released afterwards.
Googling for Bitcoin Core memory leaks, I found this post:
Quote
By default, since glibc 2.10, the C library will create up to two heap
arenas per core. This is known to cause excessive memory usage in some
scenarios. To avoid this we have to set

MALLOC_ARENA_MAX=1

without this the usage grows beyond 36GB, with this it stays under 2GB
normally.

ref: https://github.com/bitcoin/bitcoin/blob/master/doc/reduce-memory.md#linux-specific
I don't know if it's going to solve your problem, but it's worth a try.

If this approach is meant to go in the bitcoin.conf file, I already tried it, but the problem remains the same.
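One possible source of confusion here: MALLOC_ARENA_MAX is a glibc environment variable, not a bitcoin.conf option, so putting it in the config file would have no effect. It has to be set in the environment that launches bitcoind. A minimal sketch, assuming a Linux/glibc system (the helper name and launch line are mine):

```python
# MALLOC_ARENA_MAX must live in the launching environment, not bitcoin.conf.
import os

def bitcoind_env(max_arenas=1):
    """Copy of the current environment with the glibc arena limit applied."""
    env = dict(os.environ)
    env["MALLOC_ARENA_MAX"] = str(max_arenas)
    return env

# Hypothetical launch (path and flags depend on your installation):
# import subprocess
# subprocess.Popen(["bitcoind", "-daemon"], env=bitcoind_env())
```

The equivalent from a shell would be prefixing the command, e.g. `MALLOC_ARENA_MAX=1 bitcoind -daemon`.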
newbie
Activity: 10
Merit: 10

Okay, what's your PC's processor, if I may ask?

For the hardware side: try getting an additional SSD if you have a spare drive slot in your PC (the exFAT format is actually for Windows). Move your node's block directory to this new SSD and point your bitcoin.conf file at it. Note: don't delete your old block directory; you can just keep it. Then you can query from the new directory... I don't know if I'm on point with your setup.

For the software side, you could give the response given by @LoyceV a try.


My Bitcoin Core data is already on an SSD.

Please check my previous response (the one you quoted) again, because I made some modifications.

CPU:
Code:
CPU:
  Info: 6-core model: Intel Core i7-9750H bits: 64 type: MT MCP
    arch: Coffee Lake rev: A cache: L1: 384 KiB L2: 1.5 MiB L3: 12 MiB
  Speed (MHz): avg: 800 min/max: 800/4500 cores: 1: 800 2: 800 3: 800 4: 800
    5: 800 6: 800 7: 800 8: 800 9: 800 10: 800 11: 800 12: 800 bogomips: 62399
  Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
sr. member
Activity: 476
Merit: 299
Learning never stops!

It is not a scalable solution for me; the day will come when I hit the limit even with 500 GB of RAM.
It also doesn't seem very smart to keep closing and reopening Bitcoin Core every 2 days.

It is interesting to note that using the API:
Code:
curl http://localhost:8332/rest/block/notxdetails/00000000000000000002a7c4c1e48d76c5a37902165a270156b7a8d72728a054.json

And then processing each tx query with my python script:
Code:
curl http://localhost:8332/rest/tx/{tx_id}.json

The memory always remains the same! But it takes between 30 and 45 seconds for my script to query every tx inside each block (blocks with more than 3K txs).
Okay, what's your PC's processor, if I may ask?

For the hardware side: try getting an additional SSD if you have a spare drive slot in your PC (the exFAT format is actually for Windows). Move your node's block directory to this new SSD and point your bitcoin.conf file at it. Note: don't delete your old block directory; you can just keep it. Then you can query from the new directory... I don't know if I'm on point with your setup.

For the software side, you could give the response given by @LoyceV a try.


Googling for Bitcoin Core memory leaks, I found this post:
Quote
By default, since glibc 2.10, the C library will create up to two heap
arenas per core. This is known to cause excessive memory usage in some
scenarios. To avoid this we have to set

MALLOC_ARENA_MAX=1

without this the usage grows beyond 36GB, with this it stays under 2GB
normally.

ref: https://github.com/bitcoin/bitcoin/blob/master/doc/reduce-memory.md#linux-specific
I don't know if it's going to solve your problem, but it's worth a try.
newbie
Activity: 10
Merit: 10

Bitcoin Core uses a lot of RAM, and when RAM runs short your hard disk gets involved; if it reads at a low rate, you can end up getting slow responses while running your node.
There are a couple of things you could do to speed it up:
  • Get a new SSD with larger storage, since a disk needs free space to perform well.
  • If your PC has an additional RAM slot, add more RAM; if you have just a single slot and your RAM is low, replace it with a larger module.
  • You could also close some background applications to free up memory; it's less effective, but it contributes.
With these you should see a speed-up.

It is not a scalable solution for me; the day will come when I hit the limit even with 500 GB of RAM.
It also doesn't seem very smart to keep closing and reopening Bitcoin Core every 2 days.

It is interesting to note that using the API:
Code:
curl http://localhost:8332/rest/block/notxdetails/00000000000000000002a7c4c1e48d76c5a37902165a270156b7a8d72728a054.json

And then processing each tx query with my python script:
Code:
curl http://localhost:8332/rest/tx/{tx_id}.json

The memory always remains the same! But it takes between 30 and 45 seconds for my script to query every tx inside each block (blocks with more than 3K txs).

I made a Benchmark python script:

With API /rest/block/{blockhash}
Code:
myuser@server:$ python time_query_rest.py 800000 800000
Block Height: 800000
Number of lines in block data: 13666565
Total time to query all transactions in block 800000: 0.0 seconds
Number of transactions in block 800000: 3721

With API /rest/block/notxdetails/{blockhash}
Code:
myuser@server:$ python time_query_rest_notxdetails.py 800000 800000
Block Height: 800000
Number of lines in block data: 253731
Total time to query all transactions in block 800000: 38.34489599999994 seconds
Number of transactions in block 800000: 3721
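The two-step workflow benchmarked above can be sketched with only the standard library. The endpoint paths follow Bitcoin Core's REST interface; the host/port assume a default local node started with -rest enabled, and the helper names are my own.

```python
import json
from urllib.request import urlopen

BASE = "http://localhost:8332/rest"  # assumes a local node with -rest enabled

def block_url(blockhash, txdetails=False):
    # /rest/block/<hash>.json is full detail; notxdetails returns txids only
    prefix = "block" if txdetails else "block/notxdetails"
    return f"{BASE}/{prefix}/{blockhash}.json"

def tx_url(txid):
    return f"{BASE}/tx/{txid}.json"

def fetch_block_txs(blockhash):
    """One request for the block's txids, then one request per transaction."""
    block = json.load(urlopen(block_url(blockhash)))
    return [json.load(urlopen(tx_url(txid))) for txid in block["tx"]]
```

The per-tx loop is what makes the notxdetails path slow on big blocks: 3,721 sequential HTTP round-trips versus one, which matches the 38-second timing above.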
newbie
Activity: 10
Merit: 10
Quote
How much memory do you have?

Can you add more RAM if you are light on it?

Also can you try a 5 minute query vs a 1 minute  query?

How much memory do you have?
12 GB of RAM available

Also can you try a 5 minute query vs a 1 minute  query?
It's the same; with each query the memory starts to increase. If you want, you can try it yourself and see how it increases:
Code:
curl http://localhost:8332/rest/block/00000000000000000002a7c4c1e48d76c5a37902165a270156b7a8d72728a054.json

Thanks for your time.

 
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Using the API /rest/block/<blockhash>.json, the response is almost immediate, but the daemon's memory grows a lot with each block query, and none of it is released afterwards.
Googling for Bitcoin Core memory leaks, I found this post:
Quote
By default, since glibc 2.10, the C library will create up to two heap
arenas per core. This is known to cause excessive memory usage in some
scenarios. To avoid this we have to set

MALLOC_ARENA_MAX=1

without this the usage grows beyond 36GB, with this it stays under 2GB
normally.

ref: https://github.com/bitcoin/bitcoin/blob/master/doc/reduce-memory.md#linux-specific
I don't know if it's going to solve your problem, but it's worth a try.
sr. member
Activity: 476
Merit: 299
Learning never stops!
~


Bitcoin Core uses a lot of RAM, and when RAM runs short your hard disk gets involved; if it reads at a low rate, you can end up getting slow responses while running your node.
There are a couple of things you could do to speed it up:
  • Get a new SSD with larger storage, since a disk needs free space to perform well.
  • If your PC has an additional RAM slot, add more RAM; if you have just a single slot and your RAM is low, replace it with a larger module.
  • You could also close some background applications to free up memory; it's less effective, but it contributes.
With these you should see a speed-up.


Also can you try a 5 minute query vs a 1 minute  query?
Yes! Querying less often can also help speed things up.
legendary
Activity: 4326
Merit: 8950
'The right to privacy matters'
Hello,

I'm encountering a persistent issue with RAM exhaustion while running Bitcoin Core on my system.
Here's a brief overview of the problem:

Problem Description:

I have a Bitcoin Core node set up on my system, and I query it every minute with a Python script that checks for new blocks.

Using the API /rest/block/<blockhash>.json, the response is almost immediate, but the daemon's memory grows a lot with each block query, and none of it is released afterwards.

To work around this I have to close Bitcoin Core and reopen it from time to time.

But if I use the API /rest/block/notxdetails/<blockhash>.json, process the JSON response, and query each tx separately, my memory always remains the same, but it is very slow.

Is there any solution to this problem?
Thanks!



How much memory do you have?

Can you add more RAM if you are light on it?

Also can you try a 5 minute query vs a 1 minute  query?
newbie
Activity: 10
Merit: 10
Hello,

I'm encountering a persistent issue with RAM exhaustion while running the Bitcoin Core desktop version (Flathub) on my system.
Here's a brief overview of the problem:

Problem Description:

I have a Bitcoin Core node set up on my system, and I query it every minute with a Python script that checks for new blocks.

Using the API /rest/block/<blockhash>.json, the response is almost immediate, but the daemon's memory grows a lot with each block query, and none of it is released afterwards.

To work around this I have to close Bitcoin Core and reopen it from time to time.

But if I use the API /rest/block/notxdetails/<blockhash>.json, process the JSON response, and query each tx separately, my memory always remains the same, but it is very slow.

Is there any solution to this problem?
Thanks!

PARTIAL SOLUTION:

It turns out that I downloaded bitcoind, installed it, created symbolic links to the folders where my blocks are located, ran bitcoind -daemon, and ran my scripts. To my surprise, my memory barely changed: I queried more than a thousand blocks and my memory increased by barely 150 MB, which is quite insignificant considering it was the last 1000 blocks.

With the Bitcoin Core desktop version (Flathub), querying just one block with full verbosity increased my memory by approximately 100 MB.
