Your description of arbitrary code execution vulnerabilities is reasonably accurate. What you're overlooking, however, is that it is the software already on the computer that takes the data supplied by the hacker and reinterprets it as code. The code came from the hacker, but the software already on your computer ran it. This happens all the time without any harm or fault attached: most web pages include executable code that your computer downloads and runs, for example, and even Bitcoin transactions include executable scripts. Most of the time the computer's owner is not even aware of the code, and in many cases (e.g. ads and tracking code in web pages) the owner would not approve of running it if he or she were made aware of it.
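To make the mechanism concrete, here is a deliberately toy sketch (the function name and message format are invented for illustration) of a "message handler" that treats part of an incoming message as code, the way a script-enabled web browser does. Note that it is the receiving software, already installed on the machine, that actually performs the execution of the sender's payload:

```python
def handle_message(message: str) -> str:
    """Naively run the 'script' portion of a message (unsafe by design).

    Toy illustration only: the message format "header;script" is made up.
    """
    header, _, script = message.partition(";")
    # The software on the receiving machine, not the sender, executes this.
    # A benign script and a maliciously crafted one are treated identically,
    # and the owner typically never sees either one run.
    result = eval(script)  # never do this with untrusted input
    return f"{header} -> {result}"

print(handle_message("greeting;1 + 1"))
```

The point of the sketch is that nothing "breaks in": the handler does exactly what it was built to do, and the only question is whether the owner would approve of the particular script it was handed.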
The question is whether it is sufficient that the computer accepted the code and ran it, or if the owner must be expected to approve of running it given the choice. I would argue that when you own a machine that is designed to receive and process messages, and connect it to the Internet, it is your responsibility to make sure it processes them safely (or accept the consequences), even in the case of malformed or maliciously crafted messages. If that places an unacceptable burden on the participants, I've already suggested a system of contracts which would suffice to enforce some basic etiquette while remaining consistent with the natural rights of everyone involved.
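What "processing messages safely" might look like in practice can be sketched as follows. This is a minimal, hypothetical example (the function, the size limit, and the checks are all my own choices, not anything prescribed above): a receiver that validates input and rejects malformed messages instead of letting them reach code that might misbehave.

```python
MAX_LEN = 1024  # arbitrary size limit chosen for the sketch

def process_safely(raw: bytes) -> str:
    """Validate an incoming message before processing it.

    Malformed or oversized input is rejected with an error rather than
    being passed along to code that might reinterpret it in unsafe ways.
    """
    if len(raw) > MAX_LEN:
        raise ValueError("message too long")
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("malformed encoding")
    if not text.isprintable():
        raise ValueError("unexpected control characters")
    return text.upper()  # stand-in for whatever the real processing would be
```

On this view, a machine that drops hostile input on the floor like this is the owner discharging exactly the responsibility described above.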
Consider this: What we have here is basically a case where you have some information you don't want to give to anyone else. Forget the computers; if this was simply information in your head, and someone else, by asking the right questions, managed to get you to reveal it despite your attempts at concealment (by observing your involuntary body language, for example), that would not make them an aggressor. Like I said before, of course, all analogies are false to some extent, including this one. I'm not basing any conclusions on it. However, I think it makes a decent illustration.
I don't agree with your definition of hacking. There are many ambiguous forms of hacking. A hacker could have gained access to the boot sector of the computer (again, through various means). Hacking is about gaining some form of control, not about getting your own code to run per se. You could be running code that was already there, or you might not run any code at all.
Why do you expect to be able to take responsibility for everything happening on your computer all the time?
Do you also think that if you own a car you are responsible for what someone else does with that car all the time?
Do you stand by your car all night to make sure no one steals it and commits a crime with it?
A modern PC performs tens of billions of operations per second, and a mobile device not that many fewer.
Because of this speed, no one has direct oversight of what it does (not even you).
I don't see most people inspecting every packet going in and out. They simply trust a tool or a service. They have to because otherwise everyone has to be a computer technician to protect their computer well. It's not practical given the nature of computers.
But then again, as a software developer you must know that it's incredibly hard to make anything reasonably complex bug-free. Bugs that lead to security issues can occur at multiple levels, from the conceptual to the implementation and everything in between.
The tools used for protection are themselves flawed, so how much responsibility can you realistically expect people to take?
So although I agree that the owner of the device has some responsibility, this responsibility is shared with the manufacturer and with society in general (laws and opinions).
And I'm not sure what you mean by your analogy.
If the person asking the questions does so with criminal intent, then it is certainly not legal.