
Topic: Non-spreadsheet long-term predictions (Read 8936 times)

sr. member
Activity: 364
Merit: 250
February 22, 2015, 11:58:33 PM
#46
Any plans to update the model in the near future? Interested to see how it looks now that it's been 3 months since the last update.
hero member
Activity: 560
Merit: 500
December 16, 2014, 09:56:34 AM
#45
exocytosis: "This user is currently ignored."  Cheesy
sr. member
Activity: 476
Merit: 250
December 16, 2014, 02:36:48 AM
#44
* Just for fun -- projected price on 2014-12-31 is 438.


Quoted for future reference. We'll see in a couple of weeks.
legendary
Activity: 2338
Merit: 2106
December 16, 2014, 02:29:19 AM
#43
interesting... unfortunately, price is still behind  Undecided
sr. member
Activity: 1078
Merit: 254
December 15, 2014, 05:51:54 PM
#42
following
legendary
Activity: 1442
Merit: 1186
November 26, 2014, 05:02:58 PM
#41
How have I never noticed this post before?  Very interesting; I'll save it and check back frequently.
sr. member
Activity: 433
Merit: 250
November 26, 2014, 04:40:14 PM
#40
Pretty accurate.

We're floating around 366 and it's predicted 366 for P50
legendary
Activity: 1470
Merit: 1007
November 25, 2014, 12:42:18 PM
#39
What technical analysis tools do you use? I am curious.

No easy answer to that question. Probably makes more sense to describe my two big meta-strategies, and then fill in some of the details: trend following based on momentum, and mean reversion based on clearly identifiable outliers. Not very original, I know.

For the momentum trades, the basis is usually price (close, high, low, sometimes hl2) in relation to some averages (either simple moving averages, which shouldn't work, but does, or more commonly, averages with some form of exponential smoothing). Plus divergences on the conventional indicators (like MACD or RSI). And finally: volume analysis (in relation to price, and over time). Extremely important factor in this market, in my opinion.

The mean reversion decisions are a bit more complicated, but price in relation to some average is the basis for that as well, though picking up additional clues (like the divergences mentioned above) becomes more important unless I want to enter deeply unprofitable trades.

Trading frequency I aim for is once every few weeks, possibly months. In summary, discretionary swing/position trading, with some amount of algorithmic input and backtesting from experimental methods or indicators.
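The momentum setup described above -- price relative to some exponentially smoothed average -- can be sketched roughly like this. To be clear, this is my illustration of the general idea, not the poster's actual system; the function names and the span of 20 days are my own assumptions.

```python
# Sketch of a price-vs-EMA momentum signal on daily closes.
# Illustrative only: span=20 and the +1/-1 convention are assumptions,
# not details taken from the post above.

def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def momentum_signal(prices, span=20):
    """+1 while the close sits above its EMA (uptrend), -1 below."""
    avg = ema(prices, span)
    return [1 if p > a else -1 for p, a in zip(prices, avg)]
```

A real system would of course layer on the divergence and volume filters mentioned above before taking a trade.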
sr. member
Activity: 317
Merit: 252
November 25, 2014, 12:07:18 PM
#38
What technical analysis tools do you use? I am curious.
legendary
Activity: 1470
Merit: 1007
November 23, 2014, 08:44:03 AM
#37
@Joe200

Good decision, imo (to "fix" the model as using the entire history, and see what you get doing so).

I've said it before (somewhere on page 1, iirc), but your model is one of the more, maybe the most, mathematically mature of the long-term models, if for no other reason than that you provide interval estimates. This model, like the others, still suffers from the obvious problem of changing parameters, but then again, the more accurate you'd like your market predictions to be, the narrower your predictive time frame presumably has to get, so maybe that's the trade-off we have to accept: your model does make predictions into the far future, but will either have a high error rate at times (using the "median scenario" as a point estimate) or be imprecise (using the entire interval range). There are probably more precise, less error-prone ways to look at the market, but you'd be happy if you could predict 2 days into the future with them. So, apples and oranges.

Not sure I agree with your assessment that what technical analysis does is all voodoo/cargo cult, but happy to hear we're not disagreeing on the basic premise itself, which is surprisingly often disputed in here: whether prediction (based on past market data) is possible in principle, or whether some absolute interpretation of fundamentals aka "the news" determines price. (I say "surprisingly", because I always wonder what people who don't believe prediction is possible even do in a speculation forum. Complain about the unpredictability of the markets, I guess Cheesy)
sr. member
Activity: 317
Merit: 252
November 22, 2014, 11:01:33 PM
#36
Posted an update on the main page.
sr. member
Activity: 317
Merit: 252
November 22, 2014, 10:55:27 PM
#35
This is the post from 2014-11-19.

(I am copying the original message to here. This page will have the latest.)

The spreadsheet extrapolation. Many people plot log price versus time, find the best fitting line, and extrapolate it. This is what they get:

Code:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.6131065  0.0434221  -37.15   <2e-16 ***
day          0.0056570  0.0000477  118.60   <2e-16 ***

R-squared:  0.8994

R2 = 90%! This model MUST be correct. Add in the bias -- if this model is correct, we will be making 0.57% per day forever. Who doesn't like that?

What others point out in response is that the trend cannot go on forever. Although, even if log price vs. time is an S-shaped curve, we really don't know where we are in that curve. If we are near the beginning of that curve, then the linear extrapolation is fine. Or are we further along?

Even if we are near the beginning of the S-curve, there is another issue. OLS assumes that the residuals are independent of each other, which, in a time series, is clearly and completely false. And we should talk about confidence intervals as well, not just a point estimate.
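For concreteness, the log-linear extrapolation everyone does boils down to an OLS fit of log(price) on day. A minimal sketch, using synthetic prices that grow exactly 0.57% per day (the thread's fit used real Bitstamp data, of course):

```python
# OLS fit of log(price) = a + b*day, written out by hand.
# The price series here is synthetic, chosen to grow 0.57%/day,
# so the fit recovers b = 0.00567 exactly.
import math

def fit_log_linear(days, prices):
    """Least-squares fit of log(price) vs. day; returns (intercept, slope)."""
    n = len(days)
    ys = [math.log(p) for p in prices]
    mx = sum(days) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(days, ys)) / \
            sum((x - mx) ** 2 for x in days)
    intercept = my - slope * mx
    return intercept, slope

days = list(range(100))
prices = [0.2 * math.exp(0.00567 * d) for d in days]
a, b = fit_log_linear(days, prices)   # b comes out 0.00567
```

On real price data this is exactly the regression whose coefficient table is shown above; the problem is not the arithmetic but the violated independence assumption discussed next.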

A better model. Here is a very basic model, the assumptions for which are actually not violated.

Code:
diff log price ~ Normal(mu, sigma)


The difference in the log price, which is approximately the daily percent return, has a Normal distribution with mean mu and standard deviation sigma. Am I saying that this is the "true" correct model? No. This is a useful simplification.

Mu and sigma could change over time, and this model does not take that into account. We would have to decide, outside of the model, how much data to use to estimate the current mu and sigma.

Let's look at estimates of mu and sigma based on half year and one year chunks of data.

Code:
  n.data       from         to        mu  sigma   z.stat   diff.mu  diff.sd  diff.z
1    115 2010-07-17 2010-11-08    0.0161 0.0854    0.189        NA       NA      NA
2    183 2010-11-09 2011-05-10    0.0147 0.0722    0.204  -0.00141  -0.0132  0.0149
3    182 2011-05-11 2011-11-14  -0.00349 0.0876  -0.0399   -0.0182   0.0154  -0.244
4    183 2011-11-15 2012-05-15   0.00378 0.0392   0.0963   0.00727  -0.0484   0.136
5    183 2012-05-16 2012-11-14   0.00427 0.0323    0.132  0.000496 -0.00695  0.0361
6    182 2012-11-15 2013-05-15    0.0127 0.0799    0.159   0.00842   0.0476  0.0265
7    183 2013-05-16 2013-11-16   0.00735 0.0372    0.198  -0.00534  -0.0427   0.039
8    182 2013-11-17 2014-05-17  0.000315 0.0507  0.00622  -0.00704   0.0135  -0.192
9    183 2014-05-18 2014-11-19 -0.000951 0.0242  -0.0393  -0.00127  -0.0265 -0.0455
Code:
  n.data       from         to       mu  sigma   z.stat  diff.mu  diff.sd  diff.z
1    115 2010-07-17 2010-11-08   0.0161 0.0854    0.189       NA       NA      NA
2    365 2010-11-09 2011-11-14  0.00564 0.0808   0.0698  -0.0105 -0.00461  -0.119
3    366 2011-11-15 2012-11-14  0.00402 0.0359    0.112 -0.00162  -0.0449  0.0422
4    365 2012-11-15 2013-11-16     0.01 0.0623    0.161  0.00599   0.0264  0.0487
5    365 2013-11-17 2014-11-19 -0.00032 0.0397 -0.00805  -0.0103  -0.0226  -0.169

A lot of information here. What do you see? Here is what jumps out at me.
* Yearly chunks of data. This past year had a negative return, for the first time ever. This is disturbing for HODLers.
* Half year chunks of data. This past half year had a negative return. The only other period with a negative return was 2011-05-11 to 2011-11-14. That's when the biggest crash so far happened -- from 29.6 on 2011-06-09 to 2.14 on 2011-11-18, a loss of 92.8%.
* Interestingly, with the current negative return, sigma (volatility) is also small. Also interestingly, during the 2011-05-11 to 2011-11-14 period, volatility did not decrease. What do you think this all means?

We have to think a lot more carefully about how much data to use to make the estimates. The projections into the future are pretty straightforward, if we have good estimates of the parameters. Ideas welcome.
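For reference, tables like the ones above can be produced along these lines: split the daily log returns into fixed-size windows and estimate mu and sigma per window. This is my sketch, not the author's script; the z column appears to be mu/sigma, and the function names are mine.

```python
# Per-chunk estimates of mu and sigma from daily log returns.
# Mirrors the half-year table above (size=183); the diff.* columns
# would just be differences between consecutive rows.
import math
import statistics

def log_returns(prices):
    """Daily differences of log price, approx. daily percent returns."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def chunk_estimates(returns, size=183):
    """List of (n, mu, sigma, z) per chunk, where z = mu / sigma."""
    rows = []
    for i in range(0, len(returns), size):
        chunk = returns[i:i + size]
        if len(chunk) < 2:
            continue
        mu = statistics.fmean(chunk)
        sigma = statistics.stdev(chunk)   # sample standard deviation
        rows.append((len(chunk), mu, sigma, mu / sigma))
    return rows
```

Varying `size` is exactly the "how much data to use" question raised above; nothing in the model itself picks the window for you.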

History.
* 2014-05-08
* 2014-06-26

sr. member
Activity: 317
Merit: 252
November 22, 2014, 10:26:56 PM
#34
Maybe I didn't phrase it very clearly. What I tried to say wasn't that his analysis is biased or subjective, but that the way he is adjusting/refining his model reminds me of the path one takes when attempting to trade the market based on ("traditional") technical analysis. 'Pattern finding', as opposed to the more rigorous, but maybe also rather futile, attempt to do purely quantitative forecasting. It's not a black/white distinction, just what his updated model suddenly made me think of.

I am a big believer in the basic axiom of technical analysis, though I think that the actual tools of technical analysis are all voodoo. The axiom is that what people think is "fundamentals" is actually driven by price. "News X just happened, the price should go up, and here it is going up." WRONG. This is your interpretation of News X in a bull market. In a bear market, you would probably not even notice News X. And if you do, you interpret its influence on price differently.

This same thing started happening to me in this thread. My biases came into it. When I believed that the price would recover quickly, I said, let's just use all the data to estimate the parameters. Now, with the longest bear market to date, I am scared. My mind is playing tricks on me. My model seems objective. But how I choose to construct the model is subjective. And my views are influenced by the price. Price action is driving something that's seemingly objective.

As I explained in the last post, I will just use all the data in this thread.
sr. member
Activity: 317
Merit: 252
November 22, 2014, 10:17:47 PM
#33
Thanks for the update, Joe200. Pleasure to read, as before.

I notice you're changing the approach a bit: the first time, you used all available data for the estimates; now you're using a half-year/one-year period preceding the time of calculation and seeing how the parameters change over time. Correct so far?

Don't get this the wrong way, it's not meant to be a criticism of your attempt to model price, but I do find it interesting that it looks to me like you are getting closer to some of the considerations that drive technical analysts.

What I mean is this: you started (some months ago) by looking at the entire available data for your estimates. Turns out, at any point in time, this can lead to wildly unreliable predictions (especially if the preceding period saw a huge price increase or decrease). Now you're breaking down the data into smaller chunks and trying to see how the parameters change over time. The next logical step seems to be to start varying the history size and see if the parameters changed in some systematic way in the past, i.e. if there's some pattern to how they change over time. Sorry to say it, but you're one of us now Cheesy

Oda,

You are right. Cheesy  Roll Eyes  Cry

I always knew that the parameters changing over time could become an issue, I just didn't know how to deal with it, and still don't. When I made my original post, I got lucky -- the parameter estimates using all of the data were very close to the parameter estimates using just the most recent 2 years of data. I waved my hands and pretended like the parameters don't change over time, and gave my predictions.

This time around, we are in the longest bear market ever (https://bitcointalksearch.org/topic/previous-crashes-and-recoveries-676333). The parameter estimates using all the data are quite different from the parameter estimates using the most recent data. This gave me pause.

After thinking about this for a couple of days, here is what I decided. I am not going to develop the One True Model of Bitcoin Price(TM) in one simple post. The original purpose of this thread was simply to explain that the linear extrapolation on the log-price chart that so many people do is complete nonsense, and to give a model that's similar in spirit but more sound. The name of the model is the Basic Long-Term Model. Nothing more. This model is indeed an improvement on the log-price chart extrapolation. It might not be a very good model, but that, in a sense, is not the point.

So I've decided that, in this thread, I will just keep using all available data. It's easy to do, easy to explain, and easy to understand. This is not the One True Model of Bitcoin Price(TM). I have other threads with other models. I hope that each one contributes something.
legendary
Activity: 1470
Merit: 1007
November 22, 2014, 09:50:37 AM
#32
...
A lot of the analysis around here is shaped by personal opinions.

Still, I hope he is right!

Maybe I didn't phrase it very clearly. What I tried to say wasn't that his analysis is biased or subjective, but that the way he is adjusting/refining his model reminds me of the path one takes when attempting to trade the market based on ("traditional") technical analysis. 'Pattern finding', as opposed to the more rigorous, but maybe also rather futile, attempt to do purely quantitative forecasting. It's not a black/white distinction, just what his updated model suddenly made me think of.
sr. member
Activity: 952
Merit: 281
November 22, 2014, 09:13:11 AM
#31
Thanks for the update, Joe200. Pleasure to read, as before.

I notice you're changing the approach a bit: the first time, you used all available data for the estimates; now you're using a half-year/one-year period preceding the time of calculation and seeing how the parameters change over time. Correct so far?

Don't get this the wrong way, it's not meant to be a criticism of your attempt to model price, but I do find it interesting that it looks to me like you are getting closer to some of the considerations that drive technical analysts.

What I mean is this: you started (some months ago) by looking at the entire available data for your estimates. Turns out, at any point in time, this can lead to wildly unreliable predictions (especially if the preceding period saw a huge price increase or decrease). Now you're breaking down the data into smaller chunks and trying to see how the parameters change over time. The next logical step seems to be to start varying the history size and see if the parameters changed in some systematic way in the past, i.e. if there's some pattern to how they change over time. Sorry to say it, but you're one of us now Cheesy
A lot of the analysis around here is shaped by personal opinions.

Still, I hope he is right!
legendary
Activity: 1470
Merit: 1007
November 21, 2014, 07:53:05 PM
#30
Thanks for the update, Joe200. Pleasure to read, as before.

I notice you're changing the approach a bit: the first time, you used all available data for the estimates; now you're using a half-year/one-year period preceding the time of calculation and seeing how the parameters change over time. Correct so far?

Don't get this the wrong way, it's not meant to be a criticism of your attempt to model price, but I do find it interesting that it looks to me like you are getting closer to some of the considerations that drive technical analysts.

What I mean is this: you started (some months ago) by looking at the entire available data for your estimates. Turns out, at any point in time, this can lead to wildly unreliable predictions (especially if the preceding period saw a huge price increase or decrease). Now you're breaking down the data into smaller chunks and trying to see how the parameters change over time. The next logical step seems to be to start varying the history size and see if the parameters changed in some systematic way in the past, i.e. if there's some pattern to how they change over time. Sorry to say it, but you're one of us now Cheesy
sr. member
Activity: 263
Merit: 280
November 20, 2014, 09:49:23 PM
#29
Interesting approach and data.

Congratulations!
sr. member
Activity: 317
Merit: 252
November 19, 2014, 11:49:12 AM
#27
Just putting the original message here. I will now put the latest on the front page.

Tl;dr. Here are the price predictions for Bitstamp BTC/USD.

Code:
  n.fut       date   p_5   p_50      p_95
1     0 2014-05-08    NA    446        NA
2     1 2014-05-09   403    448       499
3     7 2014-05-15   351    466       620
4    30 2014-06-07   300    542       982
5    91 2014-08-07   282    810     2,330
6   365 2015-05-08   489  4,890    48,800
7   731 2016-05-08 1,500 54,000 1,940,000

p_50 is the median prediction, p_5 is the "pessimistic" prediction, and p_95 is the "optimistic" prediction. In a year, we could reasonably be at 10x where we are right now. Or, we could be still at where we are now. Read on for a discussion.

The spreadsheet extrapolation. Many people plot log price versus time, find the best fitting line, and extrapolate it. This is what they get:

Code:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.817e+00  4.592e-02  -39.57   <2e-16 ***
day          6.062e-03  5.735e-05  105.70   <2e-16 ***

R-squared:  0.8898

Look at this -- R2 = 90%! We are making 0.6% per day. No problem.

What others point out in response is that the trend cannot go on forever. While the long-term curve could in fact be S-shaped, we could still be in the early part of it. The trend does appear to be increasing (log) linearly. So, in my view, that's not the real issue.

The real issue is that OLS assumes that the residuals are independent of each other, which, in a time series, is clearly and completely false. And we should talk about confidence intervals as well, not just a point estimate.

A better model. Here is a very simple / basic model, the assumptions for which are actually not violated.
Code:
diff log price ~ Normal(mu, sigma)
The difference in the log price, which is approximately the daily percent return, has a Normal distribution with mean mu and standard deviation sigma.

Using all available data, we have

Code:
  n.data       from         to      mu  sigma   cv
1  1,380 2010-07-17 2014-05-08 0.00656 0.0652 9.93

So yes, there has actually been a growth of 0.66% per day. But also look at the standard deviation. CV (the ratio of sigma / mu) is around 10 -- there is a lot of variation.

More estimates. Are these parameters changing with time? Is mu perhaps drifting? Here are the historical values:

Code:
  n.data       from         to      mu  sigma    cv
1    183 2010-07-17 2011-01-16 0.01120 0.0810  7.25
2    365 2010-07-17 2011-07-23 0.01530 0.0834  5.44
3    548 2010-07-17 2012-01-22 0.00881 0.0788  8.95
4    731 2010-07-17 2012-07-23 0.00701 0.0697  9.95
5    913 2010-07-17 2013-01-21 0.00634 0.0638 10.10
6  1,100 2010-07-17 2013-07-23 0.00688 0.0675  9.81
7  1,280 2010-07-17 2014-01-21 0.00759 0.0664  8.75
8  1,380 2010-07-17 2014-05-08 0.00656 0.0652  9.93

After the initial explosion (1.5% per day!), mu has been relatively stable, though it is now in its lower range. Sigma / volatility has declined too. Anyone have any other insights on this?

But maybe the market is changing, which would mean that we should ignore old data. Here are estimates using only a year's worth of data.

Code:
  n.data       from         to      mu  sigma    cv
1    289 2010-07-17 2011-05-02 0.01440 0.0777  5.40
2    365 2010-11-01 2011-11-07 0.00754 0.0820 10.90
3    365 2011-05-03 2012-05-08 0.00120 0.0689 57.30
4    365 2011-11-07 2012-11-06 0.00354 0.0374 10.60
5    365 2012-05-08 2013-05-08 0.00851 0.0608  7.15
6    365 2012-11-06 2013-11-06 0.00866 0.0618  7.14
7    365 2013-05-08 2014-05-08 0.00390 0.0527 13.50

In the past year, mu has been really low. CV is the second highest it's ever been. -- But mu can't go any lower for now, right? Cuz we've bottomed out? Anyone have any thoughts on this table?

Predictions. Predictions using all of the available data are at the top of the post. A year from now, the median prediction is $4,890 per coin. Meaning we could be at 10x of where we are. The pessimistic prediction is $489. I don't see how we just sit here for a year, but I guess it's possible. And the "to the moon" prediction? In 2 years, we could be at $$$ 2 million dollars $$$ per coin. :-) Ladies and gentlemen, hold on to your coinz. :-)
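The p_5/p_50/p_95 columns follow mechanically from the model: over n days the log price moves by Normal(mu*n, sigma*sqrt(n)). A minimal sketch of the projection, under my assumption that the quantiles are plain normal quantiles; note the thread's intervals are somewhat wider (p_5 of 489 at one year vs. this formula's higher value), so the author's calculation likely also accounts for parameter-estimation uncertainty.

```python
# Price quantiles n days out under diff log price ~ Normal(mu, sigma),
# ignoring parameter uncertainty (so slightly narrower than the table).
import math

Z95 = 1.6449  # standard normal 95th percentile

def project(p0, mu, sigma, n):
    """Return (p_5, p_50, p_95) price quantiles n days ahead."""
    drift = mu * n
    spread = Z95 * sigma * math.sqrt(n)
    return (p0 * math.exp(drift - spread),
            p0 * math.exp(drift),
            p0 * math.exp(drift + spread))

# With the all-data estimates above (mu=0.00656, sigma=0.0652) and a
# $446 start, the 365-day median lands near $4,890, matching the table.
```

The median is just p0 * exp(mu*n), which is why doubling the horizon squares the growth factor: hence the eye-watering two-year numbers.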

What if the market has changed and we should be ignoring the early days? Interestingly enough, the parameters in the past 2 years are almost the same as the parameters that use all of the data.

Code:
  n.data       from         to      mu  sigma  cv
1    731 2012-05-07 2014-05-08 0.00612 0.0569 9.3

And so the prediction using the past 2 years of data is very similar to the prediction that uses all the data.

Code:
  n.fut       date   p_5   p_50      p_95
1     0 2014-05-08    NA    446        NA
2     1 2014-05-09   408    448       492
3     7 2014-05-15   363    465       596
4    30 2014-06-07   317    535       903
5    91 2014-08-07   302    778     2,000
6   365 2015-05-08   466  4,160    37,100
7   731 2016-05-08 1,090 39,100 1,400,000

In a year, we are still at either 10x or, if things go bad, at the same price as now. In 2 years, "to the moon" is $1.4 mil instead of $1.9 mil.

Break-even point. If you buy today and things go bad, how long will you have to hold to get your fiat back? Here is a prediction using all of the data:

Code:
  n.fut       date p_5  p_50   p_95
1   331 2015-04-04 446 3,910 34,300

331 days -- less than a year. So keep buying. :-) Buy coins, come back a year later, and you will very likely have made a profit.
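The break-even horizon can be read straight off the model: it is the smallest n where even the pessimistic (5th-percentile) path is back above today's price, i.e. where mu*n >= z*sigma*sqrt(n). A sketch, with the same caveat as before: without parameter uncertainty this comes in somewhat under the thread's 331 days.

```python
# Smallest horizon n (days) where the 5th-percentile projection is
# at or above the starting price, i.e. mu*n - z*sigma*sqrt(n) >= 0.
# Solving mu*n = z*sigma*sqrt(n) gives sqrt(n) = z*sigma/mu.
import math

def breakeven_days(mu, sigma, z=1.6449):
    """Days until p_5 recovers the entry price; None if mu <= 0."""
    if mu <= 0:
        return None  # no drift upward: the 5th percentile never recovers
    return math.ceil((z * sigma / mu) ** 2)

# e.g. breakeven_days(0.00656, 0.0652) comes out a bit under a year.
```

Note how sensitive this is to mu: halving the drift quadruples the break-even horizon, which is why the window choice debated above matters so much.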

Thoughts, comments, suggestions, criticisms are very much appreciated.
