Before optimizing my plots, the miner uses up to 8192 MB of memory while reading through them. Is that normal?
I think I missed something somewhere. What determines how much memory Blago's miner and the GPU plotter use? I assumed the GPU plotter's memory usage is based on your globalworksize x number of cards, and that the miner's memory usage is based on your stagger size (when the plots are not optimized).
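To make the assumption in the question concrete, here is a back-of-the-envelope sketch of those two estimates. It takes the questioner's model at face value (it is not confirmed to be how Blago's miner or the GPU plotter actually allocate memory), and assumes a Burst nonce is 256 KiB (262144 bytes); the function names are illustrative.

```python
NONCE_SIZE = 262144  # bytes per Burst nonce (256 KiB) -- assumption

def gpu_plotter_memory(global_work_size, num_cards):
    # Questioner's model: each work item holds one nonce per card.
    return global_work_size * NONCE_SIZE * num_cards

def miner_read_memory(stagger_size):
    # Questioner's model: an unoptimized plot is read one stagger
    # at a time, so the read buffer is roughly stagger_size nonces.
    return stagger_size * NONCE_SIZE

# Under this model, a stagger of 32768 nonces would explain the
# figure in the question:
print(miner_read_memory(32768) // 2**20, "MiB")  # -> 8192 MiB
```

Note this only accounts for a working buffer; as the answers below suggest, the OS file cache can grow well beyond it.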
Also, it seems as though I'm making less after optimizing than I was before optimizing (by a large margin).
I assume you're mining on Windows? Windows caches everything it reads from large files and holds onto it in a 'standby list' of memory. For some reason Windows doesn't let go of this properly when reading large files, so you have to run something like runaurufu's memory cleaner, which purges the standby list once it hits a certain limit. However, it does this for all memory-mapped files on the standby list, so there is a system-wide decrease in I/O performance, which is fine if your Windows box is dedicated solely to HDD mining. Your best bet is to set up mining on a Linux machine, which doesn't do this insanely aggressive caching. As for your first question: yes, more RAM while plotting is better.
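For contrast with the Windows behavior above: on Linux a reader can explicitly tell the kernel that a file's cached pages are no longer needed, so a large sequential scan doesn't pin gigabytes in the page cache. A minimal sketch, assuming a Linux host; this is not part of any miner, and the function and path are hypothetical:

```python
import os

def scan_plot_and_drop_cache(path, chunk=8 * 2**20):
    """Read a file sequentially, then advise the kernel to drop
    its cached pages. Returns total bytes read."""
    total = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            data = os.read(fd, chunk)
            if not data:
                break
            total += len(data)
            # ... scan `data` for deadlines here ...
        # Hint that the whole file's pages can be evicted now
        # (Linux-specific; no Windows equivalent is exposed this way).
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
    return total
```

The advice is only a hint, but it is the kind of control that makes the standby-list workaround unnecessary on Linux.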
When using runaurufu's memory cleaner, you'll want to set your plot sizes to 200 GB or less, so they can be flushed more often and the peak memory usage stays lower. It seems that Windows won't allow a file to be flushed while it's still actively being read, so a single 4 TB plot will consume a huge amount of memory until the miner has finished reading it.