
Topic: in defense of technical analysis (Read 3743 times)

hero member
Activity: 728
Merit: 500
March 10, 2013, 09:17:04 AM
#47


Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences, that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out: changing-criterion, reversal, and others that are very well known in medical research, as well as double-blind testing and control groups. We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.


I would suggest you stop using that. ANOVAs rely on assumptions (e.g. normality) that are either known to be false when describing human behaviour or impossible to prove. Do permutation testing instead.

Well that will depend on the type and nature of the experiments that are performed, but thanks for the suggestions Wink
Are you a statistician?

No, just someone who deals with similar situations and realized ANOVAs, etc. are not the correct tool for the job. People just use them because it's the only thing they were ever taught.

Let's stop this OT convo tho.
newbie
Activity: 49
Merit: 0
March 09, 2013, 10:36:03 PM
#46


Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences, that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out: changing-criterion, reversal, and others that are very well known in medical research, as well as double-blind testing and control groups. We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.


I would suggest you stop using that. ANOVAs rely on assumptions (e.g. normality) that are either known to be false when describing human behaviour or impossible to prove. Do permutation testing instead.

Well that will depend on the type and nature of the experiments that are performed, but thanks for the suggestions Wink
Are you a statistician?
hero member
Activity: 728
Merit: 500
March 09, 2013, 03:22:32 PM
#45


Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences, that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out: changing-criterion, reversal, and others that are very well known in medical research, as well as double-blind testing and control groups. We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.


I would suggest you stop using that. ANOVAs rely on assumptions (e.g. normality) that are either known to be false when describing human behaviour or impossible to prove. Do permutation testing instead.
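To make the suggestion concrete, here is a minimal sketch of a two-sample permutation test; the groups and reaction-time numbers are invented purely for illustration:

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Instead of assuming normality (as ANOVA does), we ask: if the group
    labels were arbitrary, how often would a random relabelling produce
    a mean difference at least as extreme as the observed one?
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations  # the p-value

# Hypothetical measurements for two groups of subjects:
p = permutation_test([1.2, 1.4, 1.1, 1.5, 1.3], [1.6, 1.8, 1.7, 1.9, 1.6])
```

The only assumption left is exchangeability of the labels under the null, which is much weaker than the normality and equal-variance assumptions behind an ANOVA.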
full member
Activity: 133
Merit: 100
March 09, 2013, 12:30:53 PM
#44
So, in order to be credible, a TA analyst (I won't judge the TA science per se; I'll keep it to the people) should make correct predictions most of the time (or at least, correct often enough to break even).

So here is my suggestion: every TA on Btctalk keeps a note of his predictions in a text file (sure, you'll post about them here, but I don't want to go through hundreds of posts) and then presents them to the Btctalk community to judge his TA skills.

The file format is quite simple

dd/mm/yyyy: (predicted price for the day) | (predicted future trend)

Who's in?

We should do better than that, and create a speculation game/competition. Everyone starts with 100 virtual bitcoins (is that an oxymoron too?) and trades them using mtgox prices. After a few months, we see who comes out ahead.

That's basically the same thing (one should just add the quantity invested out of the 100 BTC, maybe as a %).

Quote
dd/mm/yyyy: (predicted price for the day) | (predicted future trend) | (% of your investment*)

* 100 is the base. You start with 100.
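For what it's worth, scoring such a log could be automated. A rough sketch; the two trailing "actual" fields and the 5% price tolerance are my own assumptions, since the proposed format only records the predictions themselves:

```python
def score_predictions(lines, tolerance=0.05):
    """Parse 'dd/mm/yyyy: price | trend | actual_price | actual_trend' lines.

    A prediction counts as a hit when the predicted price is within
    `tolerance` (as a fraction) of the actual price AND the trend call
    matches. The 'actual' fields are an assumption added so the log can
    be scored automatically.
    """
    hits = 0
    total = 0
    for line in lines:
        date, rest = line.split(":", 1)
        pred_price, pred_trend, actual_price, actual_trend = [
            field.strip() for field in rest.split("|")
        ]
        total += 1
        price_ok = abs(float(pred_price) - float(actual_price)) <= tolerance * float(actual_price)
        trend_ok = pred_trend.lower() == actual_trend.lower()
        if price_ok and trend_ok:
            hits += 1
    return hits, total

# Hypothetical two-day log: one hit, one miss.
hits, total = score_predictions([
    "10/03/2013: 46.0 | up | 45.5 | up",
    "11/03/2013: 47.0 | up | 44.0 | down",
])
```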

So where are all these TA analysts  Cool
newbie
Activity: 49
Merit: 0
March 08, 2013, 09:19:47 PM
#43


First of all, we must establish that science is, first and foremost, the pursuit of objective understanding of the natural world.
Then IF this understanding is true, it should be testable and reproducible, and therefore it should have predictive power.
And IF this understanding is false, it should be provable as false.
This is the key requirement for being scientific.
Science + time refine this knowledge. High accuracy is not a requirement for being scientific (or for being discredited); rather, it is a refinement produced by this process of testing and retesting over time to confirm, improve, or reject a hypothesis or a theory. It is really not the result but the process that matters.



Yes, please show me a paper from any of the fields you listed that falsifies a real prediction. As I said, as practiced, the things getting falsified are worthless because we know them to be false before trying to falsify them. It would be the same as if TA was deemed accurate because it predicted the price would not be exactly the same tomorrow at this second as it is now. So it is just a 50-50 chance of guessing right up or down.

That is a grave statement and a deep misunderstanding of the scientific process.
First of all, what is your profession? I just want to know what kind of audience I am responding to.
Are you a graduate student in hard sciences, an academician or just an average joe fanatic of the sciences?
If you are one of the first, I am appalled at the lack of epistemological understanding. (this is something I also realized among the grad students in my school)

You can never know what is false unless it is tested. If your mentality were widespread, all counterintuitive hypotheses would be rejected from the get-go. That kind of prejudgement is very harmful to scientific discovery, which makes me think that you are not actually involved in any scientific discipline; either that, or you are too new to this.

If you want to know about research in the social sciences, simply subscribe to social-science journals yourself. If you work in academia, you should have free access to all of them. There are plenty of social disciplines that use quantitative research.

You are severely misunderstanding me. Take any two groups of people and compare them along any measure and you will find they are different if you look close enough. I "know" this as much as I can know anything. This is what occurs in the social sciences. Here is a good description of the problem from back in 1967:

http://www.psych.ucsb.edu/~janusonis/meehl1967.pdf

Funny, I needed this paper to troll my professors Smiley
To be honest, I have my own criticisms as well, but I wouldn't go as far as saying that the soft sciences are not real science.
It is science: not pseudo, not fringe, it IS science. At least psychology is a field that has been serious about it, and within psychology, neuropsychology, behavioral neuroscience, comparative psychology, and neurobiology are the hardest in the spectrum of psychological subdisciplines. In fact, there is nothing really "soft" about them; they are all very well versed in NHST (well, they should be), as it is a requirement for research in that line of study.
I will read that paper with more time to dissect it carefully later, I love it, thank you.

Now, responding to your remark: "compare them along any measure and you will find they are different if you look close enough". There are always differences, that's granted. What matters is whether the difference is statistically significant. To be sure, there are different experimental designs to filter that out: changing-criterion, reversal, and others that are very well known in medical research, as well as double-blind testing and control groups. We use Analysis of Variance to tackle that problem, which is the same statistical tool used in any other "hard" science.

The only difference from the hard sciences is that we are much younger and still growing.
Btw, we are way off-topic.

PS: Let me share this paper: http://www.statpower.net/Steiger%20Biblio/Steiger04b.pdf
hero member
Activity: 728
Merit: 500
March 08, 2013, 08:48:03 PM
#42
You can start with SSRN and then the Journal of experimental social psychology (ISSN: 0022-1031), Personality and social psychology review (ISSN:1088-8683), Journal of personality and social psychology (ISSN: 0022-3514), Experimental Economics (ISSN: 1386-4157).

Finally, I invite you to read this article published in Science:
http://www.ucd.ie/geary/static/publications/workingpapers/gearywp200935.pdf
Have fun.

I just noticed this edit. From that paper:

Quote
For example, if a firm pays a higher wage or a subject provides higher effort, costs are higher and final earnings are lower.

This type of thinking is exactly what I am talking about. "Higher" this, "lower" that. It's 50-50 all the way. The falsified hypothesis is one of zero effect of A on B. In social science A always affects B in some way, so that hypothesis is worthless. This is what makes it not really "science": the data may be good, but the way it is interpreted is not scientific. No matter how many confounds there are, it should be possible to estimate some interval for the effect of A on B that can be narrowed over time as more data is collected. This does not seem common in the social sciences, for whatever reason.
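The interval-narrowing point can be made concrete with a toy simulation (the data are simulated, not taken from any study): estimate a 95% confidence interval for a mean effect and watch its width shrink roughly like 1/sqrt(n) as observations accumulate:

```python
import math
import random

def mean_ci(sample, z=1.96):
    """Normal-approximation 95% confidence interval for a sample mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean - half, mean + half

rng = random.Random(42)
# Simulated "effect of A on B" observations with a true mean of 0.3:
data = [0.3 + rng.gauss(0, 1) for _ in range(10_000)]

widths = []
for n in (100, 1_000, 10_000):
    lo, hi = mean_ci(data[:n])
    widths.append(hi - lo)
# Each tenfold increase in n shrinks the interval by about sqrt(10),
# so the estimate genuinely sharpens rather than merely "rejecting zero".
```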

Have you ever seen a social science experiment replicated exactly?

Note: all of this goes for biomedical science as well. The only difference is that it is often easier to control confounds.
hero member
Activity: 728
Merit: 500
March 08, 2013, 06:18:09 PM
#41


First of all, we must establish that science is, first and foremost, the pursuit of objective understanding of the natural world.
Then IF this understanding is true, it should be testable and reproducible, and therefore it should have predictive power.
And IF this understanding is false, it should be provable as false.
This is the key requirement for being scientific.
Science + time refine this knowledge. High accuracy is not a requirement for being scientific (or for being discredited); rather, it is a refinement produced by this process of testing and retesting over time to confirm, improve, or reject a hypothesis or a theory. It is really not the result but the process that matters.



Yes, please show me a paper from any of the fields you listed that falsifies a real prediction. As I said, as practiced, the things getting falsified are worthless because we know them to be false before trying to falsify them. It would be the same as if TA was deemed accurate because it predicted the price would not be exactly the same tomorrow at this second as it is now. So it is just a 50-50 chance of guessing right up or down.

That is a grave statement and a deep misunderstanding of the scientific process.
First of all, what is your profession? I just want to know what kind of audience I am responding to.
Are you a graduate student in hard sciences, an academician or just an average joe fanatic of the sciences?
If you are one of the first, I am appalled at the lack of epistemological understanding. (this is something I also realized among the grad students in my school)

You can never know what is false unless it is tested. If your mentality were widespread, all counterintuitive hypotheses would be rejected from the get-go. That kind of prejudgement is very harmful to scientific discovery, which makes me think that you are not actually involved in any scientific discipline; either that, or you are too new to this.

If you want to know about research in the social sciences, simply subscribe to social-science journals yourself. If you work in academia, you should have free access to all of them. There are plenty of social disciplines that use quantitative research.

You are severely misunderstanding me. Take any two groups of people and compare them along any measure and you will find they are different if you look close enough. I "know" this as much as I can know anything. This is what occurs in the social sciences. Here is a good description of the problem from back in 1967:

http://www.psych.ucsb.edu/~janusonis/meehl1967.pdf
newbie
Activity: 49
Merit: 0
March 08, 2013, 06:09:18 PM
#40


First of all, we must establish that science is, first and foremost, the pursuit of objective understanding of the natural world.
Then IF this understanding is true, it should be testable and reproducible, and therefore it should have predictive power.
And IF this understanding is false, it should be provable as false.
This is the key requirement for being scientific.
Science + time refine this knowledge. High accuracy is not a requirement for being scientific (or for being discredited); rather, it is a refinement produced by this process of testing and retesting over time to confirm, improve, or reject a hypothesis or a theory. It is really not the result but the process that matters.



Yes, please show me a paper from any of the fields you listed that falsifies a real prediction. As I said, as practiced, the things getting falsified are worthless because we know them to be false before trying to falsify them. It would be the same as if TA was deemed accurate because it predicted the price would not be exactly the same tomorrow at this second as it is now. So it is just a 50-50 chance of guessing right up or down.

That is a grave statement and a deep misunderstanding of the scientific process.
First of all, what is your profession? I just want to know what kind of audience I am responding to.
Are you a graduate student in hard sciences, an academician or just an average joe fanatic of the sciences?
If you are one of the first, I am appalled at the lack of epistemological understanding. (this is something I also realized among the grad students in my school)

You can never know what is false unless it is tested. If your mentality were widespread, all counterintuitive hypotheses would be rejected from the get-go. That kind of prejudgement is very harmful to scientific discovery, which makes me think that you are not actually involved in any scientific discipline; either that, or you are too new to this.

If you want to know about research in the social sciences, simply subscribe to social-science journals yourself. If you work in academia, you should have free access to all of them. There are plenty of social disciplines that use quantitative research.
You can start with SSRN and then the Journal of experimental social psychology (ISSN: 0022-1031), Personality and social psychology review (ISSN:1088-8683), Journal of personality and social psychology (ISSN: 0022-3514), Experimental Economics (ISSN: 1386-4157).

Finally, I invite you to read this article published in Science:
http://www.ucd.ie/geary/static/publications/workingpapers/gearywp200935.pdf
Have fun.
hero member
Activity: 728
Merit: 500
March 08, 2013, 05:34:20 PM
#39


First of all, we must establish that science is, first and foremost, the pursuit of objective understanding of the natural world.
Then IF this understanding is true, it should be testable and reproducible, and therefore it should have predictive power.
And IF this understanding is false, it should be provable as false.
This is the key requirement for being scientific.
Science + time refine this knowledge. High accuracy is not a requirement for being scientific (or for being discredited); rather, it is a refinement produced by this process of testing and retesting over time to confirm, improve, or reject a hypothesis or a theory. It is really not the result but the process that matters.



Yes, please show me a paper from any of the fields you listed that falsifies a real prediction. As I said, as practiced, the things getting falsified are worthless because we know them to be false before trying to falsify them. It would be the same as if TA was deemed accurate because it predicted the price would not be exactly the same tomorrow at this second as it is now. So it is just a 50-50 chance of guessing right up or down.
newbie
Activity: 49
Merit: 0
March 08, 2013, 04:42:27 PM
#38
So, in order to be credible, a TA analyst (I won't judge the TA science per se; I'll keep it to the people) should make correct predictions most of the time (or at least, correct often enough to break even).

So here is my suggestion: every TA on Btctalk keeps a note of his predictions in a text file (sure, you'll post about them here, but I don't want to go through hundreds of posts) and then presents them to the Btctalk community to judge his TA skills.

The file format is quite simple

dd/mm/yyyy: (predicted price for the day) | (predicted future trend)

Who's in?

Doubtful.
Chartists are limited in two ways:
1) All the techniques they learn are basic and known by millions of wannabes who study the same MACD indicators. EVEN if it worked, there would be no edge left to profit from. And because everyone shares the same knowledge, the market reacts based on that preconception; it works only because everyone agrees that it will happen, and thus it happens. This is known in psychology as a self-fulfilling prophecy.

2) They are human. What I mean by this is that they can't process all the indicators in real time; they must simplify, and most of them rely solely on graphical patterns, neither mathematics nor market analysis.
Those who rely on math end up generating algorithms, which are obviously kept secret if they work; therefore they have an edge they can profit from. If everyone knew THE successful strategy, the edge would be lost.
Chartists (TAs) don't research the market for indicators; they apply tools that have existed for centuries. And because every average guy knows what you know, you can't beat them unless they make a huge mistake. In the end, without solid math, it becomes a cheap martingale (the betting system so popular among amateur gamblers).

The way I see it, Technical Analysis is to Algorithmic Trading what alchemy is to chemical engineering.
With algos you go beyond the drawn pattern; you gather waaaaaaaaay more data in search of patterns that may anticipate the market.

Seeing the lack of rebuttals, I think he just created this thread as a sandbox for all our bitching.
Genius move, haha

moral sciences

Oxymoron. The guy who invented the term "Social science" should have had his arse kicked all the way down the road and back.

Social science is real science.
Its experiments follow the scientific method: they ARE falsifiable, they are quantifiable, they can be empirical, and they are peer reviewed.
It also relies on observation, and there is a very eclectic range of subdisciplines.
Social Psychology and Behavioral Economics are my favorites because of their counterintuitive discoveries, and I exploit them effectively every day to push sales, to influence people, and to seduce women.

Please, don't ridicule things simply because you don't understand. You'll end up ridiculing yourself.

It could be real science, but as currently practiced it is not. In social science everything is related to everything else, so you know the null hypothesis of "not related" or "A has no effect on B" is false to begin with. I'm sure there is some good work out there, so please show me a paper that attempts to falsify a real prediction similar to "the speed of light is between 298,000,000 and 300,000,000 meters per second".

This is similar to TA. To judge a particular method's effectiveness, people need to make predictions of the form "the price will be within this range at this time and date".

I would like to know what you consider to be a Social Science; you are encompassing a whole spectrum of disciplines.
Are you talking about Political Science, Sociology, Social Anthropology, Social Psychology, Social Neuroscience, or Economics?
The way people generalize about the "Social Sciences" is not only rude but very ignorant, especially when they have a distorted concept of science, even among those who claim to love the hard sciences. And I blame the schools and colleges for not teaching the history of science.
I would understand it from high schoolers, but seeing this kind of behavior from adults is very frustrating, to say the least.

First of all, we must establish that science is, first and foremost, the pursuit of objective understanding of the natural world.
Then IF this understanding is true, it should be testable and reproducible, and therefore it should have predictive power.
And IF this understanding is false, it should be provable as false.
This is the key requirement for being scientific.
Science + time refine this knowledge. High accuracy is not a requirement for being scientific (or for being discredited); rather, it is a refinement produced by this process of testing and retesting over time to confirm, improve, or reject a hypothesis or a theory. It is really not the result but the process that matters.

Currently, all the social sciences (except political science) have predictive power. Some disciplines have more, others less.
Obviously, the "harder" it gets (as in the neuropsychological branch), the more accurate it will be.
And yet the corpus of knowledge from all these disciplines is extremely useful and relevant to our everyday lives, and it is actively exploited commercially. It is so pervasive that it is transparent in our daily lives, but right now you are being manipulated by the biggest marketing firms.

You should ask yourself why you bought the brand you bought. You might think you exercised free will; think again.
legendary
Activity: 980
Merit: 1040
March 08, 2013, 04:35:17 PM
#37
We should do better than that, and create a speculation game/competition. Everyone starts with 100 virtual bitcoins (is that an oxymoron too?) and trades them using mtgox prices. After a few months, we see who comes out ahead.
That sounds like a fun idea, but also sounds like a lot of work and would be hard to keep everyone "honest".

In any case, I'd play and just sit on my 100 coins Tongue

Why would it be hard to keep them honest?
And yeah, it would be a bit of work, especially if you want to do it right, but a basic implementation might not be that hard. I think I will look into it anyway.
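A basic implementation could be as small as a ledger per player. A sketch, with all prices and trades hypothetical:

```python
class PaperAccount:
    """Minimal paper-trading ledger for the proposed game: everyone
    starts with 100 virtual BTC and trades against a quoted USD price."""

    def __init__(self, btc=100.0, usd=0.0):
        self.btc = btc
        self.usd = usd

    def sell_btc(self, amount, price):
        assert amount <= self.btc, "cannot sell more BTC than you hold"
        self.btc -= amount
        self.usd += amount * price

    def buy_btc(self, amount, price):
        assert amount * price <= self.usd, "insufficient USD"
        self.btc += amount
        self.usd -= amount * price

    def value_in_btc(self, price):
        """Score in BTC terms, since the game is denominated in coins."""
        return self.btc + self.usd / price

acct = PaperAccount()
acct.sell_btc(50, price=47.0)  # sell half at a hypothetical $47
acct.buy_btc(50, price=40.0)   # buy it back on a hypothetical dip
```

Keeping players honest would then mostly mean timestamping each trade against the public price feed, so nobody can claim a fill that never happened.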
hero member
Activity: 518
Merit: 500
March 08, 2013, 04:28:50 PM
#36
We should do better than that, and create a speculation game/competition. Everyone starts with 100 virtual bitcoins (is that an oxymoron too?) and trades them using mtgox prices. After a few months, we see who comes out ahead.
That sounds like a fun idea, but also sounds like a lot of work and would be hard to keep everyone "honest".

In any case, I'd play and just sit on my 100 coins Tongue
legendary
Activity: 980
Merit: 1040
March 08, 2013, 04:24:01 PM
#35
So, in order to be credible, a TA analyst (I won't judge the TA science per se; I'll keep it to the people) should make correct predictions most of the time (or at least, correct often enough to break even).

So here is my suggestion: every TA on Btctalk keeps a note of his predictions in a text file (sure, you'll post about them here, but I don't want to go through hundreds of posts) and then presents them to the Btctalk community to judge his TA skills.

The file format is quite simple

dd/mm/yyyy: (predicted price for the day) | (predicted future trend)

Who's in?

We should do better than that, and create a speculation game/competition. Everyone starts with 100 virtual bitcoins (is that an oxymoron too?) and trades them using mtgox prices. After a few months, we see who comes out ahead.
hero member
Activity: 728
Merit: 500
March 08, 2013, 04:03:05 PM
#34
Seeing the lack of rebuttals, I think he just created this thread as a sandbox for all our bitching.
Genius move, haha

moral sciences

Oxymoron. The guy who invented the term "Social science" should have had his arse kicked all the way down the road and back.

Social science is real science.
Its experiments follow the scientific method: they ARE falsifiable, they are quantifiable, they can be empirical, and they are peer reviewed.
It also relies on observation, and there is a very eclectic range of subdisciplines.
Social Psychology and Behavioral Economics are my favorites because of their counterintuitive discoveries, and I exploit them effectively every day to push sales, to influence people, and to seduce women.

Please, don't ridicule things simply because you don't understand. You'll end up ridiculing yourself.

It could be real science, but as currently practiced it is not. In social science everything is related to everything else, so you know the null hypothesis of "not related" or "A has no effect on B" is false to begin with. I'm sure there is some good work out there, so please show me a paper that attempts to falsify a real prediction similar to "the speed of light is between 298,000,000 and 300,000,000 meters per second".

This is similar to TA. To judge a particular method's effectiveness, people need to make predictions of the form "the price will be within this range at this time and date".
full member
Activity: 133
Merit: 100
March 08, 2013, 03:55:14 PM
#33
So, in order to be credible, a TA analyst (I won't judge the TA science per se; I'll keep it to the people) should make correct predictions most of the time (or at least, correct often enough to break even).

So here is my suggestion: every TA on Btctalk keeps a note of his predictions in a text file (sure, you'll post about them here, but I don't want to go through hundreds of posts) and then presents them to the Btctalk community to judge his TA skills.

The file format is quite simple

dd/mm/yyyy: (predicted price for the day) | (predicted future trend)

Who's in?
newbie
Activity: 49
Merit: 0
March 08, 2013, 03:52:50 PM
#32
Seeing the lack of rebuttals, I think he just created this thread as a sandbox for all our bitching.
Genius move, haha

moral sciences

Oxymoron. The guy who invented the term "Social science" should have had his arse kicked all the way down the road and back.

Social science is real science.
Its experiments follow the scientific method: they ARE falsifiable, they are quantifiable, they can be empirical, and they are peer reviewed.
It also relies on observation, and there is a very eclectic range of subdisciplines.
Social Psychology and Behavioral Economics are my favorites because of their counterintuitive discoveries, and I exploit them effectively every day to push sales, to influence people, and to seduce women.

Please, don't ridicule things simply because you don't understand. You'll end up ridiculing yourself.
legendary
Activity: 1820
Merit: 1000
March 08, 2013, 03:42:07 PM
#31
Well, that "crazy drop" bounced exactly off the 20 day EMA, which has been tested several times during this run, so by this measure the uptrend is still intact.

This is a good example of why TA "analysis" constantly raises all kinds of red warning signs.
The statement quoted above uses an impression, assumption, or conclusion as its basis, not an unbiased fact. Then, on top of such shaky foundations, TA folks typically execute a lot of math formulas and graph voodoo.

What is wrong with the statement above?

First, the subject in question has not been constituted properly (anyone observing that "crazy drop" should have noticed that the reversal and spike up coincided with the effective drop-out of the Mt.Gox data feed, leaving most of the Bitcoin economy blindfolded. You should also have noted that the down move ended in a cluster of existing orders, which would have been noticeable resistance to break. Yet the claimant doesn't take that into account or even mention it.)

Secondly, the hypothesis to be verified by this "observation" might have been chosen to fit, and there are no safeguards against self-deception. (Why a 20-day EMA, why not 10 or 30 days? And what exactly is the scope of "several times during this run"?)

Third, there is no actual statement or claim, but rather a conditional, self-referential claim ("so by this measure the uptrend is still intact"). To a person not trained in logical dissection or scientific reasoning, though, this looks like a real statement. This especially is what raises the suspicion of manipulation.



The reason for the 20 EMA and not the 10 or 30 is that the lowest dips during this run since January have been bouncing off of it; it's a pattern price has been following. That is a plain fact, not a biased one. I couldn't have noted the cluster of orders because I don't have access to that information, since I'm holding long-term and don't have an account on any exchange. I'm not surprised, though, that there was a cluster of orders right at the 20 EMA, since that is a good spot to go long on a dip given the established pattern.

What it does is give one a low-risk entry, because one can sell quickly if the dip doesn't turn at the EMA, thus limiting one's loss. This is how technical analysis is used by traders: they use TA to find "low risk" set-ups, where low risk means that if things don't go as expected, one can stop out quickly, but if they do go as expected, one will have a significantly larger gain than the potential loss. It basically gives one a "line in the sand," and TA may work largely because traders tend to use the same lines in the sand, but it doesn't matter why, so long as it can be used effectively to find low-risk trades.

Also, a moving average is mathematically a way to gauge an uptrend relative to a time-frame. This is a technical point about the price action that can be used by a trader who is trying to stay with the trend and not sell too early for emotional reasons ("Gosh, price is too high! Sell!" or "OMG this is a big dip! Price must be crashing! Sell!"). If there is a hypothesis here, it is only a trading hypothesis. A reasonable strategy for a trader to take here is to say, "I'm going to stay with this run so long as we don't breach the 20 EMA, since this has held so far." So, one might place a stop on one's trade a little below the 20 EMA, and keep raising the stop as the EMA rises. This is just a way to make a reasonable line in the sand for oneself in order to stay with the trade.

There is no hypothesis here to the effect that BTC (or any other security) will always bounce off of a rising 20 EMA, nor is there a hypothesis that we crash into oblivion once it is breached. But a rising 20-day EMA is a support level, so it is a spot to potentially go long, or a spot to set a stop below, since breaking support represents weakness in price momentum.
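For readers unfamiliar with the mechanics described above, the 20-day EMA and the trailing stop can be sketched as follows; the price series is made up and the 2% cushion below the EMA is an arbitrary illustrative choice, not a recommendation:

```python
def ema(prices, period=20):
    """Exponential moving average with the conventional smoothing
    factor alpha = 2 / (period + 1), seeded with the first price."""
    alpha = 2 / (period + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def trailing_stop(prices, period=20, cushion=0.02):
    """Raise a stop to just below the rising EMA; return the index of
    the first close below the stop, or None if it is never hit."""
    stop = 0.0
    for i, (p, e) in enumerate(zip(prices, ema(prices, period))):
        stop = max(stop, e * (1 - cushion))  # only ever raise the stop
        if p < stop:
            return i
    return None

# Hypothetical steady uptrend that breaks down on the last three bars:
prices = [30 + 0.5 * i for i in range(40)] + [48, 46, 42]
exit_index = trailing_stop(prices)
```

The stop never triggers during the steady rise (price stays above the lagging EMA) and fires only on the final breakdown bar, which is the "line in the sand" behaviour described above.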
legendary
Activity: 2576
Merit: 2267
1RichyTrEwPYjZSeAYxeiFBNnKC9UjC5k
March 08, 2013, 02:25:19 PM
#30
moral sciences

Oxymoron. The guy who invented the term "Social science" should have had his arse kicked all the way down the road and back.
hero member
Activity: 602
Merit: 500
March 08, 2013, 01:17:22 PM
#29
Well, that "crazy drop" bounced exactly off the 20 day EMA, which has been tested several times during this run, so by this measure the uptrend is still intact.

This is a good example of why TA "analysis" constantly raises all kinds of red warning signs.
The statement quoted above uses an impression, assumption, or conclusion as its basis, not an unbiased fact. Then, on top of such shaky foundations, TA folks typically execute a lot of math formulas and graph voodoo.

What is wrong with the statement above?

First, the subject in question has not been constituted properly (anyone observing that "crazy drop" should have noticed that the reversal and spike up coincided with the effective drop-out of the Mt.Gox data feed, leaving most of the Bitcoin economy blindfolded. You should also have noted that the down move ended in a cluster of existing orders, which would have been noticeable resistance to break. Yet the claimant doesn't take that into account or even mention it.)

Secondly, the hypothesis to be verified by this "observation" might have been chosen to fit, and there are no safeguards against self-deception. (Why a 20-day EMA, why not 10 or 30 days? And what exactly is the scope of "several times during this run"?)

Third, there is no actual statement or claim, but rather a conditional, self-referential claim ("so by this measure the uptrend is still intact"). To a person not trained in logical dissection or scientific reasoning, though, this looks like a real statement. This especially is what raises the suspicion of manipulation.

hero member
Activity: 602
Merit: 500
March 08, 2013, 12:58:04 PM
#28
And (2) we need to execute that method under controlled and verifiable conditions and yield a significant score of prediction.

To show that TA works, I don't think "controlled and verified" conditions would be a feasible experiment. You'd have to round up every Bitcoin trader in the world and somehow control them, because they are the "conditions".

You have a point here, and it raises the question of whether this is a matter of science at all, specifically of exact mathematical science. But there isn't just math and physics; there are also the moral sciences, and there are technology and engineering. Each of those has its own standard of precision.

Indeed, personally I'd rather see TA as a proposed (or alleged) method of technology or engineering. And it is quite common in technology to run field tests of various kinds to verify methods and procedures. In the context I had in mind, "controlled and verified" means that the conditions are controlled to an extent that the experiment could be repeated, which of course includes some management of what the subjects involved in the test know.

In situations where self-suggestion by the test subject matters, some kind of blind test, or even a double-blind test, is often applied. Transferred to our situation, this would mean conjuring up some trading strategies that look superficially correct but are wrong or ineffective according to the methodology of TA. Then, in the practical test, the traders executing these placebo strategies would need to do significantly worse than those employing the real TA-based strategies. And, as a point of reference, some strategies based on random choices should be included as well.
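The proposed comparison might be sketched like this; the price series and all three strategies (a momentum stand-in for the "real" TA, an inverted placebo, and a random baseline) are deliberately simplistic placeholders, not actual TA methods:

```python
import random

def run_strategy(prices, decide, cash=100.0, coins=0.0):
    """Walk the price series; decide(window) returns 'buy', 'sell',
    or 'hold'. Returns the final portfolio value in cash terms."""
    for i in range(1, len(prices)):
        action = decide(prices[: i + 1])
        if action == "buy" and cash > 0:
            coins, cash = coins + cash / prices[i], 0.0
        elif action == "sell" and coins > 0:
            cash, coins = cash + coins * prices[i], 0.0
    return cash + coins * prices[-1]

def momentum(window):   # stand-in for a "real" strategy: chase the move
    return "buy" if window[-1] > window[-2] else "sell"

def placebo(window):    # superficially plausible but inverted rule
    return "sell" if window[-1] > window[-2] else "buy"

def coin_flip(rng):     # random baseline
    return lambda window: rng.choice(["buy", "sell", "hold"])

prices = [10, 11, 13, 12, 15, 17, 16, 20, 23, 22]  # invented series
real = run_strategy(prices, momentum)
fake = run_strategy(prices, placebo)
base = run_strategy(prices, coin_flip(random.Random(7)))
```

Notably, on this single trending path the inverted "placebo" can actually come out ahead of the momentum rule, which is exactly the point: one price series proves nothing, and a real test needs many independent runs plus a significance test against the random baseline.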