So, by "breaking" a curve you mean brute-forcing it, expecting to finish in a reasonable time?
Yes. You can always take some O(2^n) problem and increase your constants, and it will turn into an O(n) problem. Or maybe even an O(1) problem, if you use even bigger constants. It is just abusing the fact that O-notation ignores constants: if you can say "for all N" instead of "for this particular N", you can keep simplifying, and go on, until you reach a practically useless, but mathematically correct, O(1).
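The trick can be sketched in Python. Subset sum is just a hypothetical stand-in here for any exponential problem, and the fixed MAX_ITEMS cap is my assumption; the point is that capping the input size by a constant makes the whole exponential search "constant" on paper:

```python
from itertools import combinations

MAX_ITEMS = 20  # a fixed bound: we promise to only ever accept <= 20 items

def subset_sum(nums, target):
    """Classic O(2^n) brute force over all subsets of nums."""
    if len(nums) > MAX_ITEMS:
        raise ValueError("input exceeds the fixed bound")
    # With n capped at MAX_ITEMS, the loops run at most 2**20 times in total,
    # which is a constant -- so "for all inputs we accept" this is O(1),
    # however useless that classification is in practice.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None
```

The complexity statement is technically true only because the "for all n" quantifier was silently replaced by "for all n up to a constant".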
I suppose that includes finding an algorithm that can do it in better-than-linear time for large curves?
Better than linear? It is constant! Funny thing: you can always measure the execution time and put "sleep(maxTime - time)" at the end of your algorithm; then you will always reach "constant execution time". But of course, it will be useless, even if correct.
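The sleep-padding trick looks like this as a Python wrapper (the function names are mine, just a sketch):

```python
import time

def constant_time(fn, max_time):
    """Wrap fn so every call takes at least max_time seconds of wall clock."""
    def wrapped(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)
        remaining = max_time - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)  # pad out the difference with idle waiting
        return result
    return wrapped
```

Incidentally, padding execution time like this is also used seriously, as a crude defence against timing side-channels, where the running time must not leak secrets; but it does nothing to make the underlying computation faster.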
But what about non-deterministic things, like quantum phenomena?
If we don't know something, we can perceive it as "random". But if we knew all of it, then maybe we could look at it in a fully deterministic way? Who knows. You can always model some kind of game where you, as the programmer, know all the rules, and they are deterministic, but all the NPCs inside can still perceive some changes as "quantum" or "random". The world is full of mysteries, and we will probably never know, because even if we come to understand quantum things, we may just reach the next stage of knowledge, and find another problem that we will classify as "random".
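A tiny sketch of that game idea: the "world" below draws its events from a seeded PRNG, so the programmer who knows the seed can replay everything deterministically, while an observer inside only sees an unpredictable stream (the class and event names are invented for illustration):

```python
import random

class World:
    def __init__(self, seed):
        self._rng = random.Random(seed)  # the programmer knows this seed

    def next_event(self):
        # to an NPC this looks "random"; to us it is fully determined
        return self._rng.choice(["rain", "sun", "storm", "fog"])

# two worlds with the same seed replay exactly the same "random" history
a, b = World(42), World(42)
assert [a.next_event() for _ in range(10)] == [b.next_event() for _ in range(10)]
```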
Couldn't there be problems that cannot be solved in constant time, because their complete randomness changes them fundamentally?
They could exist. But as long as our machines are quite deterministic, we don't have good enough tools to play with that. For example, some people thought that bitaddress could generate random enough keys based on user input, like moving the mouse or typing some characters. But it turned out that if you do a "spider" on the keyboard, or if you move your mouse "randomly", it can still be predicted to some extent. For example, "34hr807wyr9awehgt978w43qr87wyer7" looks random, but is definitely non-random. You can easily notice which characters are close to each other, and from that input you can determine that it was typed on a QWERTY keyboard, and build a "spider generator" that tries to guess such patterns without brute-forcing. The same is true for mouse movements.
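You can even measure that claim. A rough sketch (the row layout and the distance metric are my own simplifications): map each character to its position on a QWERTY board and average how far consecutive keys are from each other. A "spider" string hops mostly between nearby keys, so its average distance stays well below what arbitrary key pairs would give:

```python
# simplified QWERTY layout, ignoring stagger and shifted symbols
ROWS = ["1234567890", "qwertyuiop", "asdfghjkl", "zxcvbnm"]
POS = {ch: (r, c) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}

def mean_key_distance(s):
    """Average Chebyshev distance between consecutive keys of s."""
    pairs = [(POS[a], POS[b]) for a, b in zip(s, s[1:]) if a in POS and b in POS]
    dists = [max(abs(r1 - r2), abs(c1 - c2)) for (r1, c1), (r2, c2) in pairs]
    return sum(dists) / len(dists)

spider = "34hr807wyr9awehgt978w43qr87wyer7"
# runs of physically adjacent keys pull this average down,
# which is exactly the statistical fingerprint a "spider generator" exploits
print(mean_key_distance(spider))
```

A real attack would of course model full key-transition probabilities rather than one average, but even this toy metric already separates keyboard-walks from uniform sampling over the keys.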
But isn't philosophy pretty much suggesting that you can't know everything?
It is true to some extent. Yes, you cannot know absolutely everything and answer all questions about the world. But if you narrow the scope, then you can reach a point of full knowledge. For example, you can say: "I want to find any SHA-1 collision". Then you can complete that goal and think: "so, now people will move to SHA-256 and other hash functions". And it is true, some people moved there. But because of backward compatibility, we still have "hardened SHA-1". And you can say, again: "I want to find any hardened SHA-1 collision". It is a never-ending story of asking questions and finding answers. Life is full of mysteries, and many people have a lot of fun playing such games. And many of these problems carry rewards for the effort; for example, there was a reward for generating a SHA-1 collision. The story never ends, because you can still say: "well, that solution is good, but now I want exactly a two-block SHA-1 collision, not that five-block collision from the famous PDF". And you can use OP_SIZE to enforce that.
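Verifying a claimed collision is the easy part; a minimal sketch with hashlib (the function name is mine):

```python
import hashlib

def sha1_collision(a: bytes, b: bytes) -> bool:
    """True iff a and b are different messages with the same SHA-1 digest."""
    return a != b and hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

# the two SHAttered PDFs would make this return True;
# for ordinary distinct inputs it is (practically) always False
assert sha1_collision(b"hello", b"hello") is False
assert sha1_collision(b"hello", b"world") is False
```

On-chain, the size constraint works because OP_SIZE pushes the byte length of the preimage onto the stack, so a script can insist on, say, a preimage of exactly two SHA-1 blocks before paying out; the exact script is beyond this sketch.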
So, even if you don't know everything, the knowledge you have can still provide you a lot of fun. And you can probably still reach full knowledge, if you define your problem correctly, and narrow the question to some achievable state.