Regarding attacker bots tricking individuals: this is hardly a refutation of such a system. It is similar to saying that e-mail is not viable due to phishing attacks, or that bitcoin is not viable because of software that mimics the bitcoin software. One could say that even ethereum, litecoin, and monero have these problems, because when one goes to any of those websites, it could be a malicious copycat website.
You are erroneously equating situations that are not at all analogous. An apt analogy is the veracity of a CAPTCHA ensuring that a bot is not accessing the web resource. And it is a fact that bots trick (or pay) humans into fooling millions of CAPTCHAs.
Phishing for email or some other resource the user wants is not going to work in most cases, because the user will realize at some point that the resource he/she wanted was not actually obtained. Whereas having a user complete a CAPTCHA that is really for another website, before giving the user what he/she wants, is not going to dissuade most users.
Regarding the HumanIQ project, those are specific criticisms of that system.
Yes, I did make some specific criticisms of the HumanIQ design, but those are orthogonal to our discussion.
A proof of person does not have to be permanently tied to a device, nor to the hacker getting access to funds by impersonating the individual. E.g., imagine that after the one-time proof of identity occurs, a set of private keys is created. If someone impersonated that individual in the future (e.g., with your malicious CAPTCHA example, which is also less likely if the proof of identity is a one-time affair), they would not gain access to the private keys.
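The idea in that quote can be sketched as a toy enrollment flow (a hypothetical illustration; the key format, the hash-based public identifier, and the registry are all assumptions, not a real PoP protocol):

```python
import secrets
import hashlib

def one_time_enrollment():
    """Hypothetical one-time proof-of-person enrollment.

    After identity is verified once (by whatever mechanism), the user
    generates a private key locally; only a public commitment to that
    key is registered. The verifier never sees the private key.
    """
    private_key = secrets.token_bytes(32)                 # stays with the user
    public_id = hashlib.sha256(private_key).hexdigest()   # registered publicly
    return private_key, public_id

private_key, public_id = one_time_enrollment()

# The registry only ever holds public_id, so an attacker who later
# impersonates the person (e.g. via a malicious CAPTCHA) learns nothing
# that reveals private_key.
assert hashlib.sha256(private_key).hexdigest() == public_id
```

The point of the sketch is only that impersonation after enrollment does not expose the key; it says nothing about what happens when the key is lost, which is the objection raised next.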
Then if they've lost their private keys, they can't regain access by employing their PoP.
And the masses will lose their private keys. Sheesh, are you not aware that most people create a new Facebook account when they can't remember their password? This happens quite often, which is why they end up using a password that is easy to crack with a dictionary attack, such as "1234mydogSpot".
You mention the false positive rate of voice ID. But if voice ID were combined with another method, the 1% or 0.1% rates of theft would fall astronomically.
All of the biometrics can be fooled, with an unacceptably high EER against synthetic attacks, even when combined. Did you not see the example I cited where tailor-made eyeglasses could fool face recognition? That is, if we are using biometrics as a proof to reset private keys, which is what HumanIQ proposed.
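The disagreement here can be made concrete with a toy calculation (the rates below are illustrative assumptions, not measured figures):

```python
# Naive view: combining two *independent* biometrics multiplies their
# false-accept rates, so the combined rate "falls astronomically".
far_voice = 0.01   # assumed false-accept rate for voice ID
far_face = 0.01    # assumed false-accept rate for face ID
naive_combined = far_voice * far_face   # ~1e-4 under independence

# Counterpoint: a synthetic (spoofing) attack defeats each modality
# deliberately, so the failures are correlated rather than independent.
# Against a targeted attacker, each modality's chance of being fooled
# is not its population FAR but the spoof's success rate.
p_spoof_voice = 0.9   # assumed success of a tailored voice clone
p_spoof_face = 0.9    # assumed success of tailor-made eyeglasses
targeted_combined = p_spoof_voice * p_spoof_face   # ~0.81: barely helps

print(naive_combined, targeted_combined)
```

The numbers are made up, but the structure of the argument is real: multiplying error rates is only valid when the failure modes are independent, and a targeted synthetic attack breaks that assumption.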
Automated PoP has too high an EER. The only way to do PoP is with a trusted human interrogator. But then you throw decentralization out the window and introduce human subjectivity and error.