
Topic: Prediction: Breaking $500 within 24 hours - page 7. (Read 13228 times)

member
Activity: 109
Merit: 10

The confidence interval would not be a measure of how sure I am the function looks like that because this data does not exist. Confidence intervals are always used to show confidence for predictions of days that don't exist yet.

I just don't understand how you can argue that this way of measuring error is meaningless and then have absolutely no better way of doing it yourself. You seem to know nothing about neural networks and how they calculate predictions like this and your suggestions have been either impossible or just much worse and more confusing than what I have now. My point was just that the vast majority of people understand the error easily. It's not an oversimplification and it makes a lot of sense to almost everyone. It's far from meaningless, and you have yet to give any reason at all to back your claim other than the fact that there will be slight differences in accuracy between the shorter and longer term data points on the same chart (which is a very weak argument).

If you are going to come here and tell me the data produced by my project (which I've spent months and months and a huge amount of effort working on) is "meaningless" then you should have a better than tenuous argument as to why, and you should have some kind of better way that you would have done it. You seem to have neither of these things, and I would be much more inclined to take your advice if you did.

And I mentioned this before, but trying to understand the functions that a neural network finds in data would be kind of like, in this case, trying to understand a function with 60 * 200 * 24 = 288,000 different variables. This is far from simple stuff. This is how computers recognize voices and words, it is how they identify images, it is how Google Brain can differentiate between humans and cats, it is how stock market prices can be predicted, and there are dozens of other very complicated tasks that they perform. This is much more complicated than just finding some pattern in a time series. This is artificial intelligence and machine learning.

Confidence intervals don't work like that. If the data exists then you don't state a confidence interval. You never know the data when you state confidence intervals.

Here's why your error is a poor statistic. First, your error forms a time series itself. Condensing it like that and keeping all the historic results in the current error is not useful: it tells the user very little about recent applicability. Second, you make no effort to tell the user how your mean varies. The minimum is zero and the mean is 1.3%, fine, but what about the variance? What does the tail look like? That's a very important piece of information that anyone who is more than slightly curious will miss on your site. Listing a mean with no variance or other information is meaningless when the underlying distribution is largely unknown.
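For readers wondering what "variance and tail" would mean in practice, here is a minimal sketch. The error array is made up purely for illustration; a real version would load the site's logged per-prediction percent errors instead.

```python
import numpy as np

# Hypothetical per-prediction percent errors; a real version would load
# the site's logged (prediction, actual) history instead.
rng = np.random.default_rng(0)
errors = np.abs(rng.normal(loc=1.3, scale=0.9, size=10_000))

mean_err = errors.mean()                    # the single number currently published
std_err = errors.std(ddof=1)                # spread around that mean
p90, p99 = np.percentile(errors, [90, 99])  # tail behaviour
recent = errors[-24 * 7:].mean()            # last week only: recent applicability

print(f"mean {mean_err:.2f}%  std {std_err:.2f}%  "
      f"p90 {p90:.2f}%  p99 {p99:.2f}%  recent mean {recent:.2f}%")
```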

I know very little about the method used to generate your model, and I know nothing about AI. I'm a mathematician; I spend all my time modelling. You shouldn't condense 24 error time series into one number with equal weighting like that; it isn't a relevant statistic to anyone but you. It means your predictions for the price two years ago count for as much as your ability to predict yesterday's price, as far as your error goes.

The vast majority don't understand the error at all. They just don't look at it long enough to understand it, or simply don't care or know enough to see why it's flawed.

PS: there are confidence interval methods for neural network models. Your method of calculation gives an inappropriate statistic, no matter how many people 'understand it'.
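For context, one common family of such methods is bootstrap (ensemble) prediction intervals. The sketch below is a generic illustration on made-up data using scikit-learn's MLPRegressor; it is not a description of the model discussed in this thread.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.utils import resample

# Made-up regression data standing in for (lagged prices -> future price).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)

# Train an ensemble of small networks, each on a bootstrap resample of the history.
ensemble_preds = []
for seed in range(30):
    Xb, yb = resample(X, y, random_state=seed)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    net.fit(Xb, yb)
    ensemble_preds.append(net.predict(X[-1:])[0])  # predict the most recent point

lo, hi = np.percentile(ensemble_preds, [5, 95])    # rough 90% interval from ensemble spread
print(f"point estimate {np.mean(ensemble_preds):.3f}, 90% interval [{lo:.3f}, {hi:.3f}]")
```

This only captures model variance; fuller interval methods also add a term for the noise in the series itself.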
member
Activity: 84
Merit: 10
In your example, your confidence interval would be a measure of how sure you are the function really looks like that over the next 60mins, based on historic info.

I'm fed up with arguing about your definition of error, your inappropriate simplification, and presenting only a mean. It's a bad measure of error for the data you are presenting, and an inappropriate statistic on its own. As it stands, the data you present is meaningless to everyone but you. It doesn't matter how many people visit the page; if they don't have any mathematical background then they won't understand the flaw in your 'average error' statement.

Lastly, thanks for the caps, but I'm not a child. I never mentioned humans being able to 'see' patterns in datasets. But humans can 'understand' patterns, in many cases. Bitcoin prices make a timeseries, and there are many, many, many ways to find patterns in time series (& judge how likely they are to 'really' exist) and understand exactly what they mean.

The confidence interval would not be a measure of how sure I am the function looks like that because this data does not exist.

I just don't understand how you can argue that this way of measuring error is meaningless and then have absolutely no better way of doing it yourself. You seem to know nothing about neural networks and how they calculate predictions like this and your suggestions have been either impossible or just much worse and more confusing than what I have now. My point was just that the vast majority of people understand the error easily. It's not an oversimplification and it makes a lot of sense to almost everyone. It's far from meaningless, and you have yet to give any reason at all to back your claim other than the fact that there will be slight differences in accuracy between the shorter and longer term data points on the same chart (which is a very weak argument).

If you are going to come here and tell me the data produced by my project (which I've spent months and months and a huge amount of effort working on) is "meaningless" then you should have a better than tenuous argument as to why, and you should have some kind of better way that you would have done it. You seem to have neither of these things, and I would be much more inclined to take your advice if you did.

And I mentioned this before, but trying to understand the functions that a neural network finds in data would be kind of like, in this case, trying to understand a function with 60 * 200 * 24 = 288,000 different variables. This is far from simple stuff. This is how computers recognize voices and words, it is how they identify images, it is how Google Brain can differentiate between humans and cats, it is how stock market prices can be predicted, and there are dozens of other very complicated tasks that they perform. This is much more complicated than just finding some pattern in a time series. This is artificial intelligence and machine learning.
hero member
Activity: 518
Merit: 500
Quantum mechanics tells us that things are random. Even if you could chart every particle in the universe, you could not predict everything. You can, however, find likely outcomes.

Haha I was waiting for someone to bring up quantum mechanics. I feel like that's the only legitimate argument you can make against what I said. I don't understand enough about quantum mechanics to really discuss this, but from what I've heard from people who actually study it and know a decent amount about it (nobody knows "a lot" about quantum mechanics) all of the things about randomness are still pretty theoretical or just not well understood.

It's a very interesting subject, but regardless it doesn't conflict with my point that bitcoin prices are NOT random.

You should check out "Through the Wormhole"; they spend a lot of time on the subject. Basically, if you shoot an electron through a solid flat surface it would leave more than one hole, unless you were observing it. It may as well be magic, to our generation at least. We might just be living inside the Matrix, and that's just a glitch in the program.

Um, what? Pretty sure the original experiment was that they had two slits and shot electrons through them, causing a wave pattern to appear, much like spectrum lines from light passing through slits cause banding. When "observed", they stopped behaving like a wave and went back to behaving like a particle, just making two piles on either side, much the way sand pouring through two holes would make two piles of sand on the other side.

I can tell you it is not "observation" that causes this. The secret to figuring out how this oddity occurs is knowing HOW they were "observing" the action, since it was not with a human eye, and no human ever had to look at what was recorded to affect the outcome, which can be seen as an after-effect without looking at it; the effect on the outcome was the same either way, lol. It amazes me how little people really understand about these things, how they hear Chinese whispers about them and then just take it all literally, instead of going to the source and looking hard at the data.


The market is not random, that is for sure; there are events that cause things to occur. But knowing what to expect is going to take more than looking at the numbers each time. You need to look at spikes and then look at real events that shape people's lives, see if there is a correlation, and then see if the event has a pattern. Take the Fourth of July here in the USA, a holiday where people blow lots of money on food, fireworks, and partying: people may cash out to do so, causing a crash, and afterwards the market can repair itself as people go back to having just as much burden on their wallets as they did before. Sudden windfalls that happen, say, yearly will have a significant effect on things, but not always, so you have to account for all new changes and how they affect people negatively or positively. From what I gather, your neural network cannot do that at this point, but when it can, we will not see an unstable market anymore, unless you withhold it and use it to make millions, lol.

It works basically the way I said it. They shoot an electron through two slits, which gives a wave pattern because the electron goes through both slits at the same time. When they observe each slit to actually see it happen, it changes and the electron only goes through one slit at a time. It has been tested with the detector on but not collecting data: if any data is recorded telling us one way or the other, the wave pattern disappears. Look up the double-slit experiment if you need a lesson. I was trying to keep it simple because it can be hard to understand. Clearly you don't get it, but I guess you've already debunked quantum mechanics, and without a single shred of evidence. Obviously looking at the data doesn't affect the test; it's the act of measurement that forces a particle to choose a specific location.
member
Activity: 109
Merit: 10
Xell - A confidence interval just wouldn't make sense with this model of prediction. It would be like if you said that you are 90% confident that the value of the function f(x) = x + 5 is 7 given 2 for x, and 10% sure that it is 8 or 6. So that's not an option. Also error calculation HAS to be historic. There is no other way it could ever possibly be done - you can't measure the error of a prediction that has yet to play itself out.

The 24 hour prediction is probably a small amount higher than average, and the 1 hour prediction is probably a small amount lower - yes, that is true. I could possibly somehow inform viewers of the error of each hour individually, but I am 100% positive that this would be much more confusing to the vast majority of people. It would provide a lot of information, cluttering the charts and very few people would understand it without having to ask for an explanation. The way it is now, the vast majority of the 13,000 unique people who have viewed it so far have not needed an explanation, and the few who I have explained it to understood it pretty quickly. I would really like for everyone to be able to understand it pretty easily, but from what you guys are saying here it seems like, unfortunately, that may not be possible.

Also I don't think anybody is assuming that the average error quoted above the 24 hour chart refers ONLY to the last prediction. I'm pretty positive of this lol...

I would be curious to see what study you are referring to that showed that bitcoin prices correlate more with media patterns than historic patterns, but regardless this is irrelevant. As I've explained before, the cause of a recurring pattern in price does not matter. As long as some pattern exists, it can be recognized and used to successfully predict future prices. It can be the media causing the pattern, group psychology, or flying spaghetti monsters. Neural networks don't need to know this information.

And remember, this is finding patterns that are WAAYYYY too complicated for any human to see or understand. Just because something looks like it is the case doesn't mean that it is. It may look like price movements are arbitrary and they don't repeat themselves, but that just means that you cannot see the underlying patterns because our brains are just incapable of this.

In your example, your confidence interval would be a measure of how sure you are the function really looks like that over the next 60mins, based on historic info.

I'm fed up with arguing about your definition of error, your inappropriate simplification, and presenting only a mean. It's a bad measure of error for the data you are presenting, and an inappropriate statistic on its own. As it stands, the data you present is meaningless to everyone but you. It doesn't matter how many people visit the page; if they don't have any mathematical background then they won't understand the flaw in your 'average error' statement.

Lastly, thanks for the caps, but I'm not a child. I never mentioned humans being able to 'see' patterns in datasets. But humans can 'understand' patterns, in many cases. Bitcoin prices make a timeseries, and there are many, many, many ways to find patterns in time series (& judge how likely they are to 'really' exist) and understand exactly what they mean.
member
Activity: 84
Merit: 10
Xell - A confidence interval just wouldn't make sense with this model of prediction. It would be like if you said that you are 90% confident that the value of the function f(x) = x + 5 is 7 given 2 for x, and 10% sure that it is 8 or 6. So that's not an option. Also error calculation HAS to be historic. There is no other way it could ever possibly be done - you can't measure the error of a prediction that has yet to play itself out.

The 24 hour prediction is probably a small amount higher than average, and the 1 hour prediction is probably a small amount lower - yes, that is true. I could possibly somehow inform viewers of the error of each hour individually, but I am 100% positive that this would be much more confusing to the vast majority of people. It would provide a lot of information, cluttering the charts and very few people would understand it without having to ask for an explanation. The way it is now, the vast majority of the 13,000 unique people who have viewed it so far have not needed an explanation, and the few who I have explained it to understood it pretty quickly. I would really like for everyone to be able to understand it pretty easily, but from what you guys are saying here it seems like, unfortunately, that may not be possible.

Also I don't think anybody is assuming that the average error quoted above the 24 hour chart refers ONLY to the last prediction. I'm pretty positive of this lol...

I would be curious to see what study you are referring to that showed that bitcoin prices correlate more with media patterns than historic patterns, but regardless this is irrelevant. As I've explained before, the cause of a recurring pattern in price does not matter. As long as some pattern exists, it can be recognized and used to successfully predict future prices. It can be the media causing the pattern, group psychology, or flying spaghetti monsters. Neural networks don't need to know this information.

And remember, this is finding patterns that are WAAYYYY too complicated for any human to see or understand. Just because something looks like it is the case doesn't mean that it is. It may look like price movements are arbitrary and they don't repeat themselves, but that just means that you cannot see the underlying patterns because our brains are just incapable of this.
member
Activity: 109
Merit: 10
Xell - Of course the error is historic. I can't compare my predictions to future prices...

In your example of a prediction being $495 or $505 and the actual price being $500, those two options would give two slightly different percent errors. If you predict $505 and the price is $500, that is closer than if you predict $495 and it ends up being $500, because the difference ($5) is a smaller percentage of $505 than it is of $495 (about 0.99% versus about 1.01%).

You were asking about whether the error was based on the 24 hour prediction or the 1 hr prediction or something in between - it is based on all of them. I still think you're making it more complicated than it is. I look at every predicted price, calculate how far off it is from the actual price at that time, and then take the average of those errors. So in reality if you were to average all of the predictions made 24 hours out, the error would probably be a bit higher. However, if you were to average the predictions that are 1 hour out, it would be lower. Really the best way to look at it though is just that the predicted prices (individually) are each off by an average of 1.3%.
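As described, the published figure is just a mean absolute percent error over every (prediction, actual) pair, pooled across all horizons. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical logged pairs: each prediction and the price that actually
# occurred at the predicted time, pooled across all horizons.
predicted = np.array([500.0, 510.0, 495.0, 505.0])
actual    = np.array([505.0, 500.0, 500.0, 500.0])

# Percent error of each prediction, measured against the predicted price
# (the convention used in this thread), then a plain average.
per_prediction = np.abs(predicted - actual) / predicted * 100
print(per_prediction.round(2), f"-> average error {per_prediction.mean():.2f}%")
```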

twiifm (and I originally) was suggesting you had a statistical model (maybe time-series based), in which case you can give confidence bounds on your predictions based on your model. I'm not a computer scientist, but I'd be surprised if this wasn't possible in your case too, rather than keeping your error purely historic.

Maybe to some people the error is confusing, and I don't want that. It is definitely not misleading though. To say it is misleading implies that the error makes it seem like it's more accurate than it is, which is false. The error effectively describes the neural network's accuracy.

For those of you who find the average error confusing, I would be very open to your ideas on how to make it more intuitive. Given a set of predicted prices, how would you describe their accuracy? To me the most intuitive thing was just to calculate the error of each prediction and take the average of those errors, so that's what I've done. I just don't see a simpler, more obvious way to do it than that. But if you guys have ideas I would love to hear them.

It is misleading, because you are quoting an average error for a 24h period. Your error increases the further from 'now' you are, and you imply that this is the error for the whole 24h period when in fact the error for now+24h is greater than what you state. It's also misleading because you state "[the errors] never rise more than 0.1% above these numbers", so 1.4% for the 24h case; but as the title of this thread shows, that's clearly nonsense.

As for ideas, here is mine for accuracy: instead of one number for the 24h period, have an 'average error' for each predicted price (that is, an average error for prices predicted 1, 2, 3, ..., 24h into the future) and plot a band rather than a line to show them. You might also consider a toggle to only count errors since the start of this year. My reason for suggesting that is that there has been a different trend since January, which seems different from the overall trend since Bitcoin began.
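A minimal sketch of that per-horizon suggestion, assuming a hypothetical log of (horizon, predicted, actual) triples; the data below is simulated purely to show the shape of the calculation and the band plot.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated log: horizon in hours ahead (1..24), predicted and actual prices.
rng = np.random.default_rng(2)
horizon = np.repeat(np.arange(1, 25), 200)
actual = 500 + rng.normal(scale=5, size=horizon.size)
predicted = actual * (1 + rng.normal(scale=0.005 * np.sqrt(horizon)))

pct_err = np.abs(predicted - actual) / predicted * 100

# One average error per horizon, instead of a single pooled number.
mean_by_h = np.array([pct_err[horizon == h].mean() for h in range(1, 25)])

# Plot the forecast as a band whose width grows with the horizon.
forecast = np.full(24, 500.0)            # stand-in forecast line
hours = np.arange(1, 25)
plt.plot(hours, forecast, label="prediction")
plt.fill_between(hours,
                 forecast * (1 - mean_by_h / 100),
                 forecast * (1 + mean_by_h / 100),
                 alpha=0.3, label="± average error at that horizon")
plt.xlabel("hours ahead"); plt.ylabel("price (USD)"); plt.legend(); plt.show()
```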

Also, anyone else reading should note that bitcoin prices have been shown to correlate most strongly with media cycles, rather than with historic price cycles. Of course, you can show anything if you try hard enough with poor statistical tests, but it's worth bearing in mind when considering the predictive power of a model that doesn't take media coverage into account.
member
Activity: 84
Merit: 10
Xell - Of course the error is historic. I can't compare my predictions to future prices...

In your example of a prediction being $495 or $505 and the actual price being $500, those two options would give two slightly different percent errors. If you predict $505 and the price is $500, that is closer than if you predict $495 and it ends up being $500, because the difference ($5) is a smaller percentage of $505 than it is of $495 (about 0.99% versus about 1.01%).

You were asking about whether the error was based on the 24 hour prediction or the 1 hr prediction or something in between - it is based on all of them. I still think you're making it more complicated than it is. I look at every predicted price, calculate how far off it is from the actual price at that time, and then take the average of those errors. So in reality if you were to average all of the predictions made 24 hours out, the error would probably be a bit higher. However, if you were to average the predictions that are 1 hour out, it would be lower. Really the best way to look at it though is just that the predicted prices (individually) are each off by an average of 1.3%.

morphtrust - It uses bitstamp. Are you saying that the patterns in price fluctuation are different now because people have seen the older patterns before? Because if so, I would definitely have to disagree. I'm still not exactly sure if that's what you are saying though.

Costanza1 - Yes, it averages all of the predictions' errors for each chart. The 24 hour chart averages the 1 hr, 2 hr, etc up to 24 hour errors. I have not calculated the error at individual hours in advance, but I imagine it would show lower errors for the shorter predictions and larger errors for the longer ones. I don't think the difference between the shorter and longer ones would be particularly large though.

----------

Maybe to some people the error is confusing, and I don't want that. It is definitely not misleading though. To say it is misleading implies that the error makes it seem like it's more accurate than it is, which is false. The error effectively describes the neural network's accuracy.

For those of you who find the average error confusing, I would be very open to your ideas on how to make it more intuitive. Given a set of predicted prices, how would you describe their accuracy? To me the most intuitive thing was just to calculate the error of each prediction and take the average of those errors, so that's what I've done. I just don't see a simpler, more obvious way to do it than that. But if you guys have ideas I would love to hear them.
full member
Activity: 201
Merit: 100
Just wanted to say this is a very interesting project/thread and I'm enjoying the discussion.  I too agree the error % is confusing.  If you have 24 different predictions of the price at a specific time, do you average all 24 of those predictions and then compare that average to the actual price to find your error?  As your 24hr predictions are more accurate than your 5 day predictions, wouldn't that also mean that predictions for an hour ahead are far more accurate than predictions 23 hours ahead?  Have you calculated the error based on x hours in advance as opposed to the whole 24 hour chart?

Thanks!
sr. member
Activity: 259
Merit: 250
morphtrust - the possibility of people trading based on this information, and therefore causing higher errors, is of course inevitable. However, at this point in the site's life I tend to doubt that enough people are actually making decisions based on this data to significantly affect the market. This is very hard to measure though, especially given that I'm far from an expert in economics or markets and I don't really know how many people are needed to affect the market to what degree.

Xell - I mean, I thought the average error was pretty self explanatory. If it has an average error of 1.3%, that means that on average, it is off by 1.3%. That's about it. If you take all of its predictions and compare them against the actual prices, you'll find that the average of all the errors is 1.3%. A few people have asked about this though, so I'll give a little bit more detail.

The neural network, once it is done training, does some "test runs" where it attempts to make predictions at a few tens of thousands of points on the historic data. It compares its predictions to the actual prices that occurred and calculates the errors. The average errors that show up on the charts on the home page are the averages of all of these errors from the test runs.

With neural networks, there is no data that you would get which corresponds with something like a confidence level. It tries to approximate a function, but there is no probability involved in this.

I think you misread what I said. The price variance is lower from one year to the next because people are looking at "CHARTS", not "YOUR CHARTS". Basically, the same causes behind spikes and dips are still there, but they are not as severe, because people try to use them to make money and press the event they know BECAUSE IT HAPPENED BEFORE (not because they even know about your site or have ever been on it). Your site could possibly be affecting that if there were enough people who know about it, yes, but right now I doubt that is the case as well. I have not been able to figure out which market your chart is so accurate against, because I trade on BTC38 and it was not very accurate when I checked it. That is fine, because I doubt the people there are on the same pulses as the people in general reflected in your chart, since their market often does not mirror what other places are doing, ever.
hero member
Activity: 784
Merit: 500
What he is giving is the mean of the variance.  Not % of error.  So what he is saying is the variance of his prediction is +/- 1.5% from the mean.

For standard deviation you have to take the square root, or you might get a negative number

No he isn't. The error is purely historic. It has nothing to do with the current prediction.

OK, that's what it sounds like he's saying.  In any case I agree with you that it's misleading
member
Activity: 109
Merit: 10
What he is giving is the mean of the variance.  Not % of error.  So what he is saying is the variance of his prediction is +/- 1.5% from the mean.

For standard deviation you have to take the square root, or you might get a negative number

No he isn't. The error is purely historic. It has nothing to do with the current prediction.
hero member
Activity: 784
Merit: 500
What he is giving is the mean of the variance.  Not % of error.  So what he is saying is the variance of his prediction is +/- 1.5% from the mean.

For standard deviation you have to take the square root, or you might get a negative number
member
Activity: 109
Merit: 10
Just reread your post. That method of expressing error is hopeless when your predictions update hourly. It just doesn't make sense to take an average of errors like that. You need to explain better exactly how you calculate the error when (in the 24 hour case) you are really making 24 predictions per hour. You look at the error from all 24 predictions, square them (or take absolute value) and add them up? So you're adding 24 positive values up?

Your errors are also meaningless in terms of current prediction, because the errors you list are purely historic and are independent of the current prediction.

I think you are making it a lot more complicated than it is. I really calculate the error in the most basic way possible. Also I don't know what you are talking about as far as squaring anything or taking absolute values.

If I predict the price to be $500, and it turns out to be $505, that's a 1% error: it was off by $5, which is 1% of the predicted price. So I just take all of the predictions, calculate those errors, and take the average. And by taking the average, I mean the plain average, as in the average of 2, 2, 3, and 5 is (2 + 2 + 3 + 5) / 4 = 12 / 4 = 3. I just add them all together and divide by the number of predictions that were made to figure out what the average error is.

Average error really just means exactly what it says - on average, predictions are off by that percent.

I think you're making it a lot more simple than it really is. You have 24 published predictions for every hourly price. So 24h ago it predicted the price now, and 23h ago it predicted a different price for now. Is the error based on the 24 hour prediction, the 1h prediction, or a combination of those and all the predictions in between?

As for making sure the values are positive: that just means that if one prediction is $505 and the other is $495 while the price turns out to be $500, the average error is not zero.
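In code, the point about keeping the values positive, using the $505/$495 numbers from the post (a sketch only):

```python
import numpy as np

predicted = np.array([505.0, 495.0])
actual = 500.0

signed = (predicted - actual) / predicted * 100      # +0.99% and -1.01%
absolute = np.abs(predicted - actual) / predicted * 100

print(f"signed mean {signed.mean():+.2f}%")          # roughly -0.01%: misleadingly near zero
print(f"absolute mean {absolute.mean():.2f}%")       # roughly 1.00%: the intended 'average error'
```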
hero member
Activity: 784
Merit: 500
Yes very misleading.  1.5% error sounds like out of 100 predictions only 1.5 are wrong and 98.5% are correct
member
Activity: 84
Merit: 10
Then you need to make that clear, because right now it appears that you mean 1.3% over 24 hours, when in effect it is that much between data points. Over the 24 data points, it could be off by 30% or so...

No lol... it will never be off by 30% unless something ridiculous happens. This is the average error of all of the predictions in that time frame, it's not aggregate or anything. If you look at all of those points, on average each one is off by around 1.3%. Like I said, I just calculate how far off each one is and take the average of all of them.
legendary
Activity: 1680
Merit: 1001
CEO Bitpanda.com
Just reread your post. That method of expressing error is hopeless when your predictions update hourly. It just doesn't make sense to take an average of errors like that. You need to explain better exactly how you calculate the error when (in the 24 hour case) you are really making 24 predictions per hour. You look at the error from all 24 predictions, square them (or take absolute value) and add them up? So you're adding 24 positive values up?

Your errors are also meaningless in terms of current prediction, because the errors you list are purely historic and are independent of the current prediction.

I think you are making it a lot more complicated than it is. I really calculate the error in the most basic way possible. Also I don't know what you are talking about as far as squaring anything or taking absolute values.

If I predict the price to be $500, and it turns out to be $505, that's a 1% error: it was off by $5, which is 1% of the predicted price. So I just take all of the predictions, calculate those errors, and take the average. And by taking the average, I mean the plain average, as in the average of 2, 2, 3, and 5 is (2 + 2 + 3 + 5) / 4 = 12 / 4 = 3. I just add them all together and divide by the number of predictions that were made to figure out what the average error is.

Average error really just means exactly what it says - on average, predictions are off by that percent.

Then you need to make that clear, because right now it appears that you mean 1.3% over 24 hours, when in effect it is that much between data points. Over the 24 data points, it could be off by 30% or so...
member
Activity: 84
Merit: 10
Just reread your post. That method of expressing error is hopeless when your predictions update hourly. It just doesn't make sense to take an average of errors like that. You need to explain better exactly how you calculate the error when (in the 24 hour case) you are really making 24 predictions per hour. You look at the error from all 24 predictions, square them (or take absolute value) and add them up? So you're adding 24 positive values up?

Your errors are also meaningless in terms of current prediction, because the errors you list are purely historic and are independent of the current prediction.

I think you are making it a lot more complicated than it is. I really calculate the error in the most basic way possible. Also I don't know what you are talking about as far as squaring anything or taking absolute values.

If I predict the price to be $500, and it turns out to be $505, that's a 1% error: it was off by $5, which is 1% of the predicted price. So I just take all of the predictions, calculate those errors, and take the average. And by taking the average, I mean the plain average, as in the average of 2, 2, 3, and 5 is (2 + 2 + 3 + 5) / 4 = 12 / 4 = 3. I just add them all together and divide by the number of predictions that were made to figure out what the average error is.

Average error really just means exactly what it says - on average, predictions are off by that percent.
member
Activity: 109
Merit: 10
morphtrust - the possibility of people trading based on this information, and therefore causing higher errors, is of course inevitable. However, at this point in the site's life I tend to doubt that enough people are actually making decisions based on this data to significantly affect the market. This is very hard to measure though, especially given that I'm far from an expert in economics or markets and I don't really know how many people are needed to affect the market to what degree.

Xell - I mean, I thought the average error was pretty self explanatory. If it has an average error of 1.3%, that means that on average, it is off by 1.3%. That's about it. If you take all of its predictions and compare them against the actual prices, you'll find that the average of all the errors is 1.3%. A few people have asked about this though, so I'll give a little bit more detail.

The neural network, once it is done training, does some "test runs" where it attempts to make predictions at a few tens of thousands of points on the historic data. It compares its predictions to the actual prices that occurred and calculates the errors. The average errors that show up on the charts on the home page are the averages of all of these errors from the test runs.

With neural networks, there is no data that you would get which corresponds with something like a confidence level. It tries to approximate a function, but there is no probability involved in this.
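A minimal sketch of that kind of held-out "test run", with a stand-in price series and a small scikit-learn network in place of the site's actual model; every name and number here is an assumption for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in hourly price series; the real site trains on exchange data.
rng = np.random.default_rng(3)
prices = 500 + np.cumsum(rng.normal(scale=1, size=5_000))

# Build (last 60 prices -> price 24 hours later) examples.
window, horizon = 60, 24
X = np.array([prices[i:i + window] for i in range(len(prices) - window - horizon)])
y = prices[window + horizon - 1:-1]

split = int(0.8 * len(X))                 # train only on the older part of history
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X[:split], y[:split])

# "Test runs": predict the held-out points and compare with what actually happened.
pred = net.predict(X[split:])
avg_error = np.mean(np.abs(pred - y[split:]) / pred) * 100
print(f"average error on held-out points: {avg_error:.2f}%")
```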

Just reread your post. That method of expressing error is hopeless when your predictions update hourly. It just doesn't make sense to take an average of errors like that. You need to explain better exactly how you calculate the error when (in the 24 hour case) you are really making 24 predictions per hour. You look at the error from all 24 predictions, square them (or take absolute value) and add them up? So you're adding 24 positive values up?

Your errors are also meaningless in terms of current prediction, because the errors you list are purely historic and are independent of the current prediction.
legendary
Activity: 1680
Merit: 1001
CEO Bitpanda.com

Xell - I mean, I thought the average error was pretty self explanatory. If it has an average error of 1.3%, that means that on average, it is off by 1.3%. That's about it. If you take all of its predictions and compare them against the actual prices, you'll find that the average of all the errors is 1.3%. A few people have asked about this though, so I'll give a little bit more detail.



1.3% of what? Absolute value? Price movement between two data points? Over the whole period?
member
Activity: 84
Merit: 10
morphtrust - the possibility of people trading based on this information, and therefore causing higher errors, is of course inevitable. However, at this point in the site's life I tend to doubt that enough people are actually making decisions based on this data to significantly affect the market. This is very hard to measure though, especially given that I'm far from an expert in economics or markets and I don't really know how many people are needed to affect the market to what degree.

Xell - I mean, I thought the average error was pretty self explanatory. If it has an average error of 1.3%, that means that on average, it is off by 1.3%. That's about it. If you take all of its predictions and compare them against the actual prices, you'll find that the average of all the errors is 1.3%. A few people have asked about this though, so I'll give a little bit more detail.

The neural network, once it is done training, does some "test runs" where it attempts to make predictions at a few tens of thousands of points on the historic data. It compares its predictions to the actual prices that occurred and calculates the errors. The average errors that show up on the charts on the home page are the averages of all of these errors from the test runs.

With neural networks, there is no data that you would get which corresponds with something like a confidence level. It tries to approximate a function, but there is no probability involved in this.