The confidence interval would not be a measure of how sure I am the function looks like that, because this data does not exist. Confidence intervals are always used to show confidence for predictions of days that don't exist yet.
I just don't understand how you can argue that this way of measuring error is meaningless and then have absolutely no better way of doing it yourself. You seem to know nothing about neural networks and how they calculate predictions like this, and your suggestions have been either impossible or just much worse and more confusing than what I have now. My point was just that the vast majority of people understand the error easily. It's not an oversimplification, and it makes a lot of sense to almost everyone. It's far from meaningless, and you have yet to give any reason at all to back your claim other than the fact that there will be slight differences in accuracy between the shorter- and longer-term data points on the same chart (which is a very weak argument).
If you are going to come here and tell me the data produced by my project (which I've spent months and months and a huge amount of effort working on) is "meaningless", then you should have a better-than-tenuous argument as to why, and you should have some kind of better way that you would have done it. You seem to have neither of these things, and I would be much more inclined to take your advice if you did.
And I mentioned this before, but trying to understand the functions that a neural network finds in data would be, in this case, kind of like trying to understand a function with 60 * 200 * 24 = 288,000 different variables. This is far from simple stuff. This is how computers recognize voices and words, it is how they identify images, it is how Google Brain can differentiate between humans and cats, it is how stock market prices can be predicted, and there are dozens of other very complicated tasks that they perform. This is much more complicated than just finding some pattern in a time series. This is artificial intelligence and machine learning.
Confidence intervals don't work like that. If the data exists, you don't state a confidence interval. You never know the data when you state a confidence interval.
Here's why your error is a poor stat. First, your error forms a time series itself. Condensing it like that and keeping all the historic results in the current error is not useful: your error tells the user very little about recent applicability. Second, you make no effort to tell the user how your mean varies. The minimum is zero and the mean is 1.3%, okay, but what about the variance? What does the tail look like? That's a very important piece of information that anyone who is more than slightly curious will miss from your site. Listing a mean with no variance or other information is meaningless when the underlying distribution is largely unknown.
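To make the point concrete, here's a rough sketch (in Python, with made-up lognormal errors standing in for your backtest results) of the extra distribution information I mean: variance and tail percentiles reported alongside the mean.

```python
import numpy as np

# Hypothetical per-prediction percentage errors, one per day.
# In the real project these would come from the model's backtest,
# not from a random generator.
rng = np.random.default_rng(0)
errors = rng.lognormal(mean=0.0, sigma=0.75, size=1000)  # heavy right tail

mean = errors.mean()
var = errors.var()
# Tail percentiles tell the reader what a bad day looks like --
# information a lone mean hides entirely.
p95, p99 = np.percentile(errors, [95, 99])

print(f"mean={mean:.2f}  var={var:.2f}  p95={p95:.2f}  p99={p99:.2f}")
```

A mean of this distribution on its own looks reassuring; the 95th and 99th percentiles are several times larger, and that's exactly what a curious user would want to see.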
I know very little about the method used to generate your model. I know nothing about AI. I'm a mathematician; I spend all my time modelling. You shouldn't condense 24 error time series into one number with equal weighting like that; it isn't a relevant statistic to anyone but you. It means your prediction of the price two years ago counts for as much, in your error figure, as your ability to predict yesterday's price.
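If it helps, here is the kind of alternative I mean, as a toy sketch (invented numbers, half-life chosen arbitrarily): weight recent errors more heavily instead of averaging everything equally.

```python
import numpy as np

# Hypothetical error series, oldest first -- a model that has been
# improving over time.
errors = np.array([5.0, 4.0, 3.0, 2.0, 1.0])

equal_weight = errors.mean()  # treats a 2-year-old miss like yesterday's

# Exponential decay: half_life controls how fast old errors fade.
half_life = 2.0
ages = np.arange(len(errors))[::-1]        # 4, 3, 2, 1, 0 periods old
weights = 0.5 ** (ages / half_life)
recent_weighted = np.average(errors, weights=weights)

print(equal_weight, round(recent_weighted, 3))
```

The equal-weighted figure says 3.0; the decayed figure is noticeably lower because the recent errors are smaller, which is the story a user actually cares about.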
The vast majority don't understand the error at all. They just don't look at it long enough to understand it, or simply don't care or know enough to see why it's flawed.
PS: there are confidence interval methods for neural network models. Your method of calculation is an inappropriate statistic, no matter how many people 'understand' it.
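For instance, a bootstrap/ensemble interval. Here is a minimal sketch on toy data, with a trivial polynomial fit standing in for the network (the resample-refit-predict loop is the part that carries over):

```python
import numpy as np

# Toy training data: y = 2x plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + rng.normal(0, 1.0, size=x.size)

preds = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)   # resample the training data
    coef = np.polyfit(x[idx], y[idx], 1)    # refit the model on the resample
    preds.append(np.polyval(coef, 11.0))    # predict one step ahead

lo, hi = np.percentile(preds, [2.5, 97.5])  # 95% interval for the forecast
print(f"95% CI for x=11: [{lo:.2f}, {hi:.2f}]")
```

The same loop works with a neural network in place of `np.polyfit`: retrain on each bootstrap resample (or use an ensemble of independently trained nets) and read the interval off the spread of the predictions.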