Two different wtxids do not necessarily represent two different txns, but two different txids definitely do.
It does for an upgraded (Segwit aware) client.
A segwit client is vulnerable to a flooding attack, by an attacker who intentionally generates multiple signatures for the same txn, if and only if the relay network references transactions by wtxid instead of txid.
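A toy sketch of why the reference key matters for relay de-duplication (the hash strings are hypothetical placeholders, not real transaction data): keyed by wtxid, each re-signed variant of the same transaction looks new and gets relayed again; keyed by txid, the duplicates are dropped.

```python
# Toy relay de-duplication: a node remembers what it has already announced.
def announce(seen: set, tx: dict, key: str) -> bool:
    h = tx[key]
    if h in seen:
        return False          # duplicate, dropped
    seen.add(h)
    return True               # relayed

# One economic transaction, three witness variants (hypothetical hashes):
# same txid, distinct wtxids, as produced by re-signing the same inputs.
variants = [{"txid": "aa", "wtxid": w} for w in ("w1", "w2", "w3")]

seen_by_wtxid, seen_by_txid = set(), set()
relayed_wtxid = sum(announce(seen_by_wtxid, t, "wtxid") for t in variants)
relayed_txid  = sum(announce(seen_by_txid,  t, "txid")  for t in variants)
print(relayed_wtxid, relayed_txid)  # 3 1
```

With wtxid keys the attacker gets every variant relayed; with txid keys the network forwards the transaction once.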
Considering your comment, I'm afraid there are some misunderstandings regarding the terms wtxid and txid.
In a legacy bitcoin transaction, the scriptSig is part of the txn body, i.e. it is not segregated, and the double-SHA256 hash of the whole txn (which obviously yields different hash values for different scriptSigs) serves as its identifier; in the new terminology, that full-serialization hash is the wtxid.
After segwit we have another option as well: hashing the main txn body (no witness data) and using that as the main reference; this is called the txid.
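A minimal sketch of the two hashes, assuming toy byte strings in place of real Bitcoin serializations (the `[::-1]` byte reversal is only the conventional display order):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    # Bitcoin identifiers are double-SHA256 hashes
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Toy stand-ins for the two serializations of the *same* segwit
# transaction (illustrative bytes, not a valid Bitcoin encoding):
body  = b"version|inputs|outputs|locktime"     # witness-stripped body
sig_a = b"|witness:signature_variant_A"
sig_b = b"|witness:signature_variant_B"

txid    = dsha256(body)[::-1].hex()            # witness excluded
wtxid_a = dsha256(body + sig_a)[::-1].hex()    # witness included
wtxid_b = dsha256(body + sig_b)[::-1].hex()

# Re-signing changes the wtxid but leaves the txid untouched:
print(wtxid_a != wtxid_b)  # True
```

The two witness variants spend the same coins and produce the same txid, yet hash to different wtxids.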
Obviously, using wtxids as references opens the door to transaction malleability, whether you are segwit-aware or not.
To be more specific: I think that even in the bootstrap process we could have segwit witness data pruned, provided enough blocks have been built on top of the containing block.
??
Without the witness data, an input (of an up-to-date transaction) cannot be validated.
And that is the most sensitive point.
As long as you are thinking inside the box, you are right: you need witness data to verify. Thinking outside the box, though, you may find it reasonable to adopt a better, incremental verification strategy. Suppose I start pruning witness data before removing the actual blocks from history; I am still able to help with blockchain reconciliation for nodes that are satisfied with a medium level of verification, given that a threshold of confirmations (more blocks on top) has been reached.
Again, in-the-box thinking gives us no clue as to why one would prune one's blockchain incrementally; I mean, you either want a block or you don't, yes?
No! Outside the box, things look a bit different: I can imagine a fast-sync strategy in which bootstrapping nodes no longer need signatures once a threshold of strongly verified blocks has been reached, but still want to verify the integrity of the blocks and their consistency with the claimed (probably committed) UTXO set; call it a
moderate verification strategy.
So, we could have a UTXO set committed to by, say, 1000 blocks, where the older 500 blocks carry no witness data and the 500 most recent blocks are fully maintained. A bootstrapping node then verifies that there are at least 1000 blocks committing to a UTXO hash, 500 of which are strongly verified and stacked on top of another, moderately verified, half.
Such a node would have a very fast boot process, would fit on a multi-gigabyte HDD, and would still be, practically, a full node for most common use-cases, just like a pruned node. To be more precise: it would be up to 2 times more compact than a comparable pruned node, thanks to the incremental moderate pruning strategy described above.
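The acceptance rule sketched above can be written out as a small policy check. Everything here is hypothetical (the `Block` record, the `utxo_commitment` field, the `accept_fast_sync` name); Bitcoin has no consensus-level UTXO commitment today, so this is a sketch of the proposal, not of existing behavior:

```python
from dataclasses import dataclass

@dataclass
class Block:
    has_witness: bool        # witness data still stored on disk?
    utxo_commitment: str     # hypothetical per-block UTXO-set hash

def accept_fast_sync(chain, utxo_hash, strong=500, moderate=500):
    # Require `strong + moderate` consecutive tip blocks committing to
    # `utxo_hash`. Only the most recent `strong` blocks must still carry
    # witness data for full signature verification; the older `moderate`
    # blocks may already be witness-pruned.
    if len(chain) < strong + moderate:
        return False
    window = chain[-(strong + moderate):]
    if any(b.utxo_commitment != utxo_hash for b in window):
        return False
    return all(b.has_witness for b in window[-strong:])

# Scaled-down example: 2 witness-pruned blocks under 2 full blocks.
chain = [Block(False, "h"), Block(False, "h"), Block(True, "h"), Block(True, "h")]
ok  = accept_fast_sync(chain, "h", strong=2, moderate=2)
bad = accept_fast_sync(chain[:-1] + [Block(False, "h")], "h", strong=2, moderate=2)
print(ok, bad)  # True False
```

The second call fails because a block inside the "strong" window has had its witness data pruned, so full verification of the recent half is no longer possible.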