It's also, from what I've seen, unclear how much protection ECC actually provides: all data covered by the protection sits in the same domain for row aliasing, and typical protection only offers single-error correction, so...
Single Error Correction usually comes paired with Double Error Detection (SECDED), or with even stronger schemes (Chipkill).
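To make the SECDED behavior concrete, here is a minimal sketch of the idea using a toy Hamming(7,4) code plus an overall parity bit. This is illustrative only; real DRAM ECC uses much wider codes (e.g. 72 bits covering 64), but the single-correct/double-detect logic is the same.

```python
# Toy SECDED: Hamming(7,4) plus one overall parity bit.
# Illustrative sketch only -- real DRAM ECC uses wider codes.

def encode(data4):
    """Encode 4 data bits [d1,d2,d3,d4] into an 8-bit codeword."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    code = [p1, p2, d1, p3, d2, d3, d4]   # Hamming positions 1..7
    p0 = sum(code) % 2                    # overall parity enables DED
    return code + [p0]

def decode(code8):
    """Return (data4, status): status is 'ok', 'corrected', or 'uncorrectable'."""
    code, p0 = list(code8[:7]), code8[7]
    syndrome = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            syndrome ^= pos               # XOR of positions of set bits
    parity_ok = (sum(code) % 2) == p0
    if syndrome == 0 and parity_ok:
        status = 'ok'
    elif not parity_ok:
        # Overall parity mismatch => odd number of errors; assume one, fix it.
        if syndrome:
            code[syndrome - 1] ^= 1
        status = 'corrected'
    else:
        # Nonzero syndrome but overall parity matches => two errors: detect only.
        status = 'uncorrectable'
    return [code[2], code[4], code[5], code[6]], status
```

A single flipped bit is silently corrected; two flipped bits in the same word trip the detector (which is where the machine check / SIGBUS path discussed below comes in).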
For the intECC case I presume that the internally-erroring chip/module will return something obviously outrageous (all ones, 0xDEADBEEF, maybe something configurable?), so at least the crashes should be obvious and harder to exploit than silent bitflips.
For classic ECC, all hardware I know of will raise a Machine Check Fault/Non-Maskable Interrupt and then deliver SIGBUS or the equivalent.
I hope this exploit gets enough publicity that I can stop having to think about things like bitsquatting and similar nonsense.
Ever since the first paper on this subject was published, I've been running any host doing important work with over-spec memory underclocked, as an additional mitigation. A couple percent of performance isn't worth any risk of corruption.
I was fortunate enough to be bitten by a bitflip early on in school, and after that I never did any serious work without parity or ECC RAM. I always take care to verify that machine check exceptions actually fire and that memory scrubbers are properly configured.
The copy of yaccpar in my project lost one bit in this line ("!" became " ", 0x21 to 0x20):
for( yyxi=yyexca; (*yyxi!= (-1)) || (yyxi[1]!=yystate) ; yyxi += 2 ) ; /* VOID */
changing it to
for( yyxi=yyexca; (*yyxi!= (-1)) || (yyxi[1] =yystate) ; yyxi += 2 ) ; /* VOID */
which still compiles and even kinda-sometimes-runs. After that I was inoculated against ever trusting a computer without memory error detection or correction. The school computer that produced this error actually had parity DRAM chips, but the school's vendor had done a "leg lift" on it, i.e. bent a pin upwards to disable parity error interrupts.
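The "!" to " " corruption above really is a single-bit event, which is what makes it such a good cautionary tale: the two characters are adjacent in ASCII, so one flipped DRAM cell turns a `!=` comparison into an `=` assignment that still parses. A quick check:

```python
# '!' (0x21) and ' ' (0x20) differ in exactly the lowest bit,
# so one DRAM bitflip can silently turn "!=" into " =" in source text.
diff = ord('!') ^ ord(' ')
print(f"'!' = {ord('!'):#04x} = {ord('!'):06b}")
print(f"' ' = {ord(' '):#04x} = {ord(' '):06b}")
print(f"xor = {diff:#04x} -> {bin(diff).count('1')} bit(s) differ")
```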