Maybe they want to install a separate Windows application that has nothing to do with Bitcoin (e.g., a game).
Even then, that game could run under Wine while the Core client runs natively on Linux. Wine Is Not an Emulator. It is in the name! But even if it were, port forwarding from a VM to the native system would solve it. With Wine you don't even need that, because Wine is just a WinAPI implementation for Linux, so if you open some TCP port, using Wine's WinSock wrapper is the same as using raw Unix sockets.
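As a purely illustrative example (port 8333 here is just Bitcoin Core's default P2P port), a minimal WinSock TCP listener looks like this; run under Wine, these WinSock calls are implemented on top of ordinary Linux sockets, so the resulting listening port behaves exactly like one opened with the native socket API:

```c
/* Minimal WinSock TCP listener. Under Wine these calls are backed by the
 * native Linux socket implementation, so the open port is indistinguishable
 * from one created with plain BSD sockets. Link with ws2_32 on Windows. */
#include <winsock2.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(8333);   /* Bitcoin Core's default P2P port */

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) == SOCKET_ERROR ||
        listen(s, SOMAXCONN) == SOCKET_ERROR) {
        printf("failed: %d\n", WSAGetLastError());
        WSACleanup();
        return 1;
    }

    printf("listening on 8333\n");
    /* ... accept() connections as usual ... */
    closesocket(s);
    WSACleanup();
    return 0;
}
```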
Another reason could be that Wine has a nice UI, similar to Windows 98 or systems of that era, and some people prefer that to a Linux GUI. But then, changing the UI of a native Linux installation is a better option than running everything through Wine.
And if the reason is something else, I have no idea what it could be, because those two are probably the most obvious ones. It is definitely not something like "I want to run x86 applications on an M1 processor", because Wine is not about that: if Wine can run the application, both systems are using the same architecture.
Why use Wine at all instead of just running Windows virtually?
Because virtualization is slow. And if the architecture is the same, there is no reason to use a VM when the application works in Wine. I would understand it in cases like "I want to run x86 Windows applications on a Mac M1 processor", but that is a different story. And even then, if some binary translator were available, I would use it instead, to translate the x86 instructions into ARM ones and then execute them natively. Why? Because of performance. And if you talk about games, that is a place where performance matters.
Edit: I can add one more thing. Emulators are slow, and I hope that in the future they will be much faster than today. How could they do that? For example, by translating binaries and storing them in the native architecture before use. Emulators are great at dynamic analysis: you start from some processor state, execute things instruction by instruction, and then... then emulators often make a huge mistake: they drop the translated native instructions, which are never stored anywhere.
If emulators performed opcode translation during the first execution, and then reused the native executables on the physical architecture, they would be much faster than today. Then only the startup would take a lot of time. If some huge loop were translated once, for example from x86 into ARM, it could stay in ARM form and be executed natively there. That would bring a lot of speedup to the whole emulation.
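A rough sketch of that idea in C (purely illustrative; the names like translate_block and tb_entry are my own, not any emulator's actual API, and the translator is just a stub standing in for a real code generator): a cache keyed by the guest program counter stores each translated block the first time it runs, so a hot loop is translated once and every later iteration jumps straight to the cached native code.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative translation cache: translate each guest block once,
 * then reuse the cached native code on every later execution. */

typedef void (*native_block_fn)(void);

typedef struct {
    uint64_t        guest_pc;  /* address of the guest (e.g. x86) block */
    native_block_fn native;    /* translated host code for that block   */
    int             valid;
} tb_entry;

#define CACHE_SIZE 1024
static tb_entry cache[CACHE_SIZE];
static int translations = 0;

static void dummy_block(void) { /* pretend this is generated host code */ }

static native_block_fn translate_block(uint64_t guest_pc)
{
    translations++;            /* the expensive work happens only here */
    (void)guest_pc;
    return dummy_block;        /* a real translator would emit ARM code */
}

static native_block_fn lookup_or_translate(uint64_t guest_pc)
{
    tb_entry *e = &cache[guest_pc % CACHE_SIZE];
    if (!e->valid || e->guest_pc != guest_pc) {   /* miss: translate once */
        e->guest_pc = guest_pc;
        e->native   = translate_block(guest_pc);
        e->valid    = 1;
    }
    return e->native;                             /* hit: run cached native code */
}

int main(void)
{
    /* A "hot loop" executing the same guest block a million times
     * triggers exactly one translation. */
    for (int i = 0; i < 1000000; i++)
        lookup_or_translate(0x401000)();
    printf("translations: %d\n", translations);   /* prints 1 */
    return 0;
}
```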
Also, after the initial translation there should be more passes that clean up the generated opcodes, but saving the executed opcodes is a good start, and I hope emulators will go in that direction. Some of them, like QEMU, can already dump the input and output assembly (if I recall correctly, via its -d in_asm,out_asm logging); the only missing piece is capturing all of that and producing well-structured executables, instead of, for example, loops expanded into all their iterations (which means emulators could get brilliant performance, but that naive approach consumes too much space, because there are no loops left). A small sketch of how a recorder could keep loops as loops follows.
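This is a hedged, illustrative sketch only (record, find_in_trace and the addresses are made-up names, not any emulator's real interface): instead of dumping every executed block, the recorder stops when control flow jumps back to an address it has already recorded and marks that back-edge explicitly, so the saved trace stays the size of the loop body rather than growing with the number of iterations.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative trace recorder: detect a back-edge to an already-recorded
 * guest block and close the trace as a structured loop, instead of
 * appending another copy of the body for every iteration. */

#define MAX_TRACE 64

static uint64_t trace[MAX_TRACE];
static int      trace_len = 0;

/* Returns the index of pc inside the current trace, or -1 if unseen. */
static int find_in_trace(uint64_t pc)
{
    for (int i = 0; i < trace_len; i++)
        if (trace[i] == pc)
            return i;
    return -1;
}

/* Returns 1 when a back-edge closes the trace, 0 otherwise. */
static int record(uint64_t pc)
{
    int back = find_in_trace(pc);
    if (back >= 0) {
        printf("back-edge to block %d: emit a loop over %d blocks\n",
               back, trace_len);
        return 1;                        /* trace saved as a structured loop */
    }
    if (trace_len < MAX_TRACE)
        trace[trace_len++] = pc;         /* new block: append exactly once */
    return 0;
}

int main(void)
{
    /* Simulated guest execution of a 3-block loop body, run many times. */
    uint64_t body[] = { 0x401000, 0x401020, 0x401040 };
    for (int iter = 0; iter < 1000; iter++)
        for (int b = 0; b < 3; b++)
            if (record(body[b]))         /* back-edge seen on iteration 2 */
                return 0;                /* trace holds 3 blocks, not 3000 */
    return 0;
}
```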