Now another question...
I know it's very risky to build a large RAID0...
1) Do you think it's better for performance to mine all Burst plots with the GPU over 4 separate HDDs (D:\, E:\, F:\, G:\)?
2) Or to mine the plots with the CPU over a RAID0 array of my 4x 8 TB HDDs?
I think the answer is a mix of both, but I can't decide...
So, which is better: config 1) or 2)?
Thank you.
Both configs will work.
I'd keep the disks separated.
With both GPU and CPU miners you can mine the disks either sequentially or in parallel.
Which is better depends on your objective:
GPU, parallel: fastest,
but you have to make sure the OpenCL code for Burst runs alongside your other mining code, or you "lose" a GPU
GPU, sequential: about 1/4 the speed,
and you still have to make sure the OpenCL code for Burst runs alongside your other mining code, or you "lose" a GPU
CPU, parallel:
ideally you have (up to) 4 spare cores for this; otherwise other processing takes a hit if you have 4 cores or fewer
CPU, sequential:
needs just one core; it is the slowest option, but the impact on the system is the lowest.
As outlined above, this would take ~70 s on a single 2.1 GHz AVX2 core for 32 TB (~142 MB/s sustained, roughly 60 s of reading on every 240 s block).
So even the slowest variant (CPU, sequential) would still finish mining a block in ~70 seconds.
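A quick sanity check of those numbers (my own arithmetic, not taken from any miner's source code): per block, a PoC miner reads one scoop, i.e. 1/4096 of the total plot size. The 142 MB/s rate is the assumption from above.

```python
# Per block a PoC miner reads one scoop: 1/4096 of the total plot size.
TOTAL_PLOT_BYTES = 32 * 2**40      # 32 TiB of plots
SCOOPS_PER_NONCE = 4096
READ_RATE_MB_S = 142               # assumed sustained read rate (from above)

scoop_bytes = TOTAL_PLOT_BYTES // SCOOPS_PER_NONCE    # 8 GiB read per block
scan_time_s = scoop_bytes / 1e6 / READ_RATE_MB_S      # note MB (1e6) vs MiB
print(f"{scoop_bytes / 2**30:.0f} GiB per block, ~{scan_time_s:.0f} s to read")
```

So the raw read takes about 60 s at that rate; hashing on top of it brings you near the ~70 s total.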
In general you have 240 s per block, but when a succession of very fast blocks hits the network (say, 3 blocks of ~30 s each), your 20 s deadline for the first block in that sequence will be discarded if your miner only publishes it after 240 s, when 3, 4, 5 other fast blocks are already on the chain - the longer chain wins.
Hence the optimization for fast scanning. The "60s rule" is just a fictional target; you choose your own depending on what your setup can do.
But there is one thing: your storage must be able to deliver this data rate. And that is much less a question of how many disks you stripe than of how the files are laid out physically. That is why I hinted at "large files, optimized";
example 1: 128 GiB file, plotted with a stagger size of 4096.
id_start_length_stagger
12345678901234567890_0_524288_4096
on every block, 128 GiB / 4096 = 32 MiB are read (from every 128 GiB plotfile).
The stagger indicates the inner layout of the plotfile, and a stagger of 4096 means just
4096 * 64 = 262,144 bytes (256 KiB) are sequential,
skip 4095 * 4096 * 64 bytes = 1023.75 MiB,
read another 256 KiB, …
So you'd have 128 unnecessary seeks.
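The stagger arithmetic from example 1 can be sketched like this (constants taken from the filename ..._0_524288_4096; 64 bytes per scoop per nonce):

```python
SCOOP_SIZE = 64        # bytes per nonce per scoop
NONCES = 524288        # 128 GiB plot (256 KiB per nonce)
STAGGER = 4096
SCOOPS = 4096          # scoops per nonce

seq_read = STAGGER * SCOOP_SIZE              # sequential chunk per stagger group
skip = (SCOOPS - 1) * STAGGER * SCOOP_SIZE   # gap to the same scoop in the next group
groups = NONCES // STAGGER                   # separate reads (seeks) per block

print(seq_read // 1024, "KiB read,", skip / 2**20, "MiB skipped,", groups, "seeks")
```

That reproduces the 256 KiB reads, the 1023.75 MiB skips, and the 128 seeks per plotfile per block.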
example 2: 128 GiB file, optimized
12345678901234567890_0_524288_524288
here the interesting scoop is laid out sequentially; 524288*64=32MiB
This *may* deliver >100 MB/s on a not-too-old SATA disk
example 3: 1 TiB file, optimized
12345678901234567890_0_4194304_4194304
here the interesting scoop is laid out sequentially; 4194304*64=256MiB
this *will* deliver >100 MB/s on a not-too-old SATA disk
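For the optimized cases (stagger equal to the nonce count), the per-block read collapses to one sequential chunk; a small sketch using the filenames from examples 2 and 3:

```python
SCOOP_SIZE = 64  # bytes per nonce per scoop

def scoop_read_mib(nonces: int) -> float:
    """Size of the one sequential scoop read per block, in MiB."""
    return nonces * SCOOP_SIZE / 2**20

print(scoop_read_mib(524288))    # example 2 (128 GiB plot): 32.0 MiB
print(scoop_read_mib(4194304))   # example 3 (1 TiB plot): 256.0 MiB
```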
But only if you create this file on a fresh, empty filesystem. If you optimize (reorganize, merge, …) from and to the same disk,
the file will be fragmented: the inner structure is sequential, but the physical sectors won't be. So plot to one disk, then optimize from there to the target.
Again, I have no idea of the inner workings of the all-in-one package mentioned above.
You should consult the forums; there is lots of info to be found, e.g. at forums.burst-team.us