The highest stagger I can generate at is 3000, and that's on a higher-end card.
The lower-end ones with 1GB constantly fail with the error posted above.
It's a 3GB R9 280x...
Hi guys,
I am aware of this issue. The fact is that I have to create two full-size buffers on the GPU side to reduce thread-local memory consumption, so the amount of memory needed on the CPU side has to be doubled to get an estimate of what is needed on the GPU side.
As an example, for a stagger size of 4000 you will need 1GB of RAM on the CPU side and more than 2GB on the GPU side (each of the two buffers is exactly (PLOT_SIZE + 16) x stagger bytes), and this doesn't include the local buffers or the kernel code itself.
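To make that concrete, here is a minimal sketch of the arithmetic, assuming PLOT_SIZE is one full nonce of 262144 bytes (4096 scoops x 64 bytes); that value is my assumption and isn't stated in this thread:

#include <cstdint>
#include <iostream>

int main() {
    // Assumed value: one full nonce = 4096 scoops x 64 bytes = 262144 bytes.
    const uint64_t PLOT_SIZE = 262144;
    const uint64_t stagger = 4000;

    // One staging buffer on the CPU side.
    const uint64_t cpuBytes = (PLOT_SIZE + 16) * stagger;
    // Two full-size buffers on the GPU side, hence roughly double the CPU estimate
    // (local buffers and the kernel code itself are not counted here).
    const uint64_t gpuBytes = 2 * cpuBytes;

    std::cout << "CPU side: ~" << cpuBytes / 1e9 << " GB\n";  // ~1.05 GB
    std::cout << "GPU side: ~" << gpuBytes / 1e9 << " GB\n";  // ~2.10 GB
    return 0;
}

Under that assumption, stagger 3000 needs roughly 1.6GB for the two GPU buffers and stagger 4000 roughly 2.1GB, so a card with only 1GB of usable global memory fails in both cases, which matches the reports above.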
Once I have a stable version (really soon), I will work on this particular problem.
Well, I've kind of figured out what the problem is... I have a 3GB XFX 280X (R9-280X-TDBD), but when I run the new gpuminer v2 and use "list devices", it shows my card with a max global memory of 2048 instead of 3072. Maybe they forgot to put in a third of my VRAM?
Any idea why the discrepancy? Not that it REALLY matters anymore... I'm almost done plotting, lol.
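For anyone wondering where that 2048 figure might come from: a "max global memory" number is normally just whatever the OpenCL runtime reports via clGetDeviceInfo, and drivers of that era sometimes exposed less global memory to OpenCL than the card physically carries. Here is a minimal sketch of that query (assuming the plotter uses the standard CL_DEVICE_GLOBAL_MEM_SIZE query; I haven't checked its source):

#include <CL/cl.h>
#include <iostream>
#include <vector>

int main() {
    // Enumerate OpenCL platforms/devices and print the memory figures the
    // runtime reports; a "2048 MB" value typically comes straight from here.
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        cl_uint numDevices = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices);
        if (numDevices == 0) continue;
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, numDevices, devices.data(), nullptr);

        for (cl_device_id device : devices) {
            char name[256] = {0};
            cl_ulong globalMem = 0, maxAlloc = 0;
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(globalMem), &globalMem, nullptr);
            clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE, sizeof(maxAlloc), &maxAlloc, nullptr);
            std::cout << name
                      << "  global mem: " << globalMem / (1024 * 1024) << " MB"
                      << "  max single alloc: " << maxAlloc / (1024 * 1024) << " MB\n";
        }
    }
    return 0;
}

Comparing CL_DEVICE_GLOBAL_MEM_SIZE against CL_DEVICE_MAX_MEM_ALLOC_SIZE is also worth doing, since the largest single buffer you can allocate is often only a fraction of the reported global memory.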