I think pugman covered your first question without going into too much speculation.
Bots are easy to spot, and banning them by keywords is the best solution.
False positives should be rare; however, I think unbanning those accounts later isn't difficult (and they somewhat deserve it anyway, especially if they are posting like a bot).
It's not just about false positives though. It's very easy for people to change their replies to avoid an automatic ban. For example, if you banned keywords such as "Great project", "Good luck" or "To the moon", then they would just change their answers to "Great Projekt", "Goood luck" or "To the mooooooon".
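To illustrate the point, here is a minimal sketch of a naive keyword blacklist (the phrase list and function name are hypothetical, not any forum's actual filter). Even trivial misspellings or stretched vowels slip straight past a substring match:

```python
# Hypothetical keyword-based spam check; illustrative only,
# not the forum's real moderation system.
BANNED_PHRASES = ["great project", "good luck", "to the moon"]

def is_spam(reply: str) -> bool:
    """Flag a reply if it contains any blacklisted phrase verbatim."""
    text = reply.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

print(is_spam("Great project!"))      # True  - caught by the blacklist
print(is_spam("Great Projekt!"))      # False - one changed letter slips through
print(is_spam("To the mooooooon!"))   # False - stretched vowels slip through
```

Keeping the list up to date quickly becomes an arms race: every variant you add is one letter-swap away from being evaded again.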
Let's be honest: if you look at most bots, they are likely run by people involved in the development side of the project. I've seen numerous high-level accounts communicating back and forth just to bump a topic, as well as thousands of accounts registering just to bump a topic and never logging in again. So, if their replies are being deleted by an automated system, they'll just change their replies ever so slightly.
They even do this now. The new trend is making your reply look big and constructive, but it's either a copy of multiple earlier replies in the thread or just yet another way of praising the thread.
Update: I've explained my stance on this a little more via PM to Invoking. I'll give an example here that is more specific to the OP, rather than a generic example with the "Good luck" sort of posts. My opinion is that automated bots bring up too many false positives, especially in the case of applying for bounties in the above format. Again, applicants could just change the #join or any other part of their application to game the system, but I think my previous explanation covers this and I don't need to elaborate any more. Secondly, users who are legitimately applying for a bounty may well use the same format that the bots use. We see this quite regularly in signature campaign threads: a user copies and pastes one of the above replies and fills in their own details. From time to time you see them forget, or make an editing error, and leave part of it unedited.
In my opinion, this is the scenario that would produce a lot of false positives: people copying the format of these bots and getting punished for it. In my PM to Invoking I suggested a system in which a bot only reports these posts automatically and does not hand out punishments. The moderator who reviews the report can then decide whether it's a bot or not. I think this would result in fewer false positives, although it would likely increase a moderator's workload by quite a bit.
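The report-only workflow above can be sketched roughly as follows (all names here are hypothetical; this is just the shape of the idea, with the key property that the bot can only file reports and never issues a punishment itself):

```python
# Hypothetical sketch of a report-only moderation queue:
# the bot flags suspected spam, a human moderator decides the outcome.
from dataclasses import dataclass, field

@dataclass
class Report:
    user: str
    reply: str
    reason: str
    verdict: str = "pending"   # set only by a moderator, never by the bot

@dataclass
class ReviewQueue:
    reports: list = field(default_factory=list)

    def flag(self, user: str, reply: str, reason: str) -> None:
        # Bot side: record the suspicion; take no action against the account.
        self.reports.append(Report(user, reply, reason))

    def moderate(self, report: Report, is_bot: bool) -> None:
        # Human side: only the moderator's decision leads to a ban.
        report.verdict = "banned" if is_bot else "cleared"

queue = ReviewQueue()
queue.flag("user123", "Great Projekt!", "matches known spam pattern")
# Moderator reviews and judges it a false positive:
queue.moderate(queue.reports[0], is_bot=False)
print(queue.reports[0].verdict)   # cleared
```

The trade-off is exactly the one noted above: every flagged post lands in a human queue, so the moderators' workload grows with the bot's sensitivity.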