See https://bitcointalksearch.org/topic/m.10292524

Using single sha256 for the nonce as a reference, the ballpark for conventional hardware with this TMTO is 25/75 PoW/PoC: brute-force 8 bits of each 32-bit nonce during a readout. Assuming a disk can read 10M nonces/s (* 4 bytes = 40 MB/s), that translates to 2560M hashes/s for a top-of-the-line GPU. The shorter the nonces are, the easier the PoW part is.
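A quick back-of-envelope sketch of those numbers (Python; the disk rate and the 8-of-32 dropped bits are the same rough assumptions as above, not measurements):

Code:
# Back-of-envelope numbers for the 32-bit-nonce TMTO described above.
NONCE_BITS        = 32        # size of a stored nonce
DROPPED_BITS      = 8         # bits thrown away on disk, recomputed on read
DISK_NONCES_PER_S = 10e6      # assumed HDD read rate: 10M nonces/s (~40 MB/s at 4 B/nonce)

hashes_per_nonce = 2 ** DROPPED_BITS                    # 256 brute-force guesses per readout
gpu_hashes_per_s = DISK_NONCES_PER_S * hashes_per_nonce

print(f"disk space saved : {DROPPED_BITS / NONCE_BITS:.0%}")        # 25% -> the 25/75 split
print(f"GPU load needed  : {gpu_hashes_per_s / 1e6:.0f} Mhash/s")   # 2560 Mhash/s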
With dedicated hardware, it would be 50/50 or more in favour of PoW.
As for your energy argument - if you have a 1PB cluster eating 1kW, then with a tiny (compared to the HDDs) investment in a PoW "booster" you effectively have a 2PB cluster eating 2kW ... guess what most miners will opt for. The big appeal of current PoC is that most of the capital investment goes into equipment, not energy. PoC2 as it is will drastically shift that towards conventional PoW.
On the upside - this can effectively get rid of NaS.
Fixing PoC2 involves making the nonces large enough (at least 64 bits) so that PoW becomes impractical for all intents and purposes.
Currently each scoop is 64 bytes - or 512 bits - long. Let's assume you brute-force 16 bits of each; that's 65536 hashes per nonce the GPU needs to do, leaving 496 bits to store. You'd be able to store about 3% more nonces, at the cost of massive CPU/GPU load.
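The same arithmetic for a full 64-byte scoop, as a sketch (assuming one hash evaluation per brute-force guess):

Code:
# TMTO trade-off applied to a full 64-byte (512-bit) scoop.
SCOOP_BITS   = 64 * 8        # 512 bits per scoop
DROPPED_BITS = 16            # bits brute-forced instead of stored

hashes_per_nonce = 2 ** DROPPED_BITS                 # 65536 hashes per nonce read
stored_bits      = SCOOP_BITS - DROPPED_BITS         # 496 bits left on disk
extra_nonces     = SCOOP_BITS / stored_bits - 1      # fraction of additional nonces that fit

print(f"hashes per nonce read : {hashes_per_nonce}")        # 65536
print(f"extra nonces storable : {extra_nonces:.1%}")        # ~3.2%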
The long scoop length is an effective protection against this "hack"; I hope it's not going to be changed drastically with PoC2.
In my opinion the number of scoops should be increased (to 16 * 4096 maybe) and the scoop size reduced to 32 bytes - that would come to 2MB of disk space per nonce and 16MB/TB to be read each block, at a VERY LOW CPU load. Energy efficiency is the main selling point of BURST, don't give up on that.
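A small sketch comparing the current plot layout with this proposal (using 1 TB = 2^40 bytes; the plot_stats helper is just for illustration, the scoop counts and sizes are the ones discussed above):

Code:
# Disk space per nonce and read volume per block, current format vs. the proposal above.
def plot_stats(scoops, scoop_bytes, plot_bytes=2**40):
    nonce_bytes    = scoops * scoop_bytes    # size of one nonce on disk
    read_per_block = plot_bytes // scoops    # one scoop per nonce is read each block
    return nonce_bytes, read_per_block

for label, scoops, scoop_bytes in [("current  (4096 x 64 B)   ", 4096, 64),
                                   ("proposed (16*4096 x 32 B)", 16 * 4096, 32)]:
    nonce_bytes, read_per_block = plot_stats(scoops, scoop_bytes)
    print(f"{label}: {nonce_bytes / 2**20:.2f} MiB/nonce, "
          f"{read_per_block / 2**20:.0f} MiB read per TB per block")
# current : 0.25 MiB/nonce, 256 MiB read per TB per block
# proposed: 2.00 MiB/nonce,  16 MiB read per TB per block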
5. High blocktime variance
Due to the high amount of disk space used per nonce, the total number of nonces checked per block is very low compared to PoW coins. This leads to higher variance in block times.
In my opinion this is not the reason for the higher block time variance. Coins with "slow" PoW algorithms like scrypt-jane don't have this problem either.
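A toy model (my own illustration, not Burst code) of why I don't think the nonce count is the culprit: if every nonce yields an independent, exponentially distributed deadline, the best deadline per block is again exponential, so the relative spread of block times stays the same whether few or many nonces are checked. The simulate function, the block count and the ~240s target are just illustrative choices.

Code:
import random, statistics

def simulate(n_nonces, blocks=10000, target=240.0):
    # scale each block's best deadline so the expected block time stays ~target seconds
    times = [min(random.expovariate(1.0) for _ in range(n_nonces)) * n_nonces * target
             for _ in range(blocks)]
    return statistics.mean(times), statistics.stdev(times)

for n in (10, 100, 1000):
    mean, sd = simulate(n)
    # stdev/mean stays ~1.0 regardless of how many nonces are checked
    print(f"{n:5d} nonces: mean {mean:6.1f} s, stdev {sd:6.1f} s, stdev/mean {sd/mean:.2f}")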
In fact, basetarget may be adjusting too fast.