Report 200404: The Profitability of Technical Analysis: A Review
October 2004
CheolHo Park and
Scott H. Irwin
[1]
Copyright
2004 by CheolHo Park and Scott H. Irwin. All rights reserved. Readers
may make verbatim copies of this document for noncommercial purposes
by any means, provided that this copyright notice appears on all
such copies.
Introduction
Technical analysis is a method of forecasting price movements
using past prices, volume, and open interest.[2]
Pring (2002), a leading technical analyst, provides a more specific
definition:
"The technical approach to investment is essentially a reflection
of the idea that prices move in trends that are determined by the
changing attitudes of investors toward a variety of economic, monetary,
political, and psychological forces. The art of technical analysis,
for it is an art, is to identify a trend reversal at a relatively
early stage and ride on that trend until the weight of the evidence
shows or proves that the trend has reversed." (p. 2)
Technical analysis includes a variety of forecasting techniques
such as chart analysis, pattern recognition analysis, seasonality
and cycle analysis, and computerized technical trading systems.
However, academic research on technical analysis is generally limited
to techniques that can be expressed in mathematical forms, namely
technical trading systems, although some recent studies attempt
to test visual chart patterns using pattern recognition algorithms.
A technical trading system consists of a set of trading rules derived
from a parameterization, and each trading rule generates trading
signals (long, short, or out of the market) according to its parameter
values. Popular technical trading systems include moving averages,
channels, and momentum oscillators.
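As a concrete illustration, a dual moving average crossover system is parameterized by two window lengths, and each choice of parameters defines one trading rule. The sketch below (with hypothetical parameter values; real systems add filters, exits, and position sizing) shows how such a rule maps a price history into signals:

```python
# Minimal sketch of a dual moving average crossover rule.
# Window lengths are hypothetical parameter values.

def crossover_signals(prices, short_window=5, long_window=20):
    """Return (index, signal) pairs: 'long' when the short-window moving
    average is above the long-window one, 'short' otherwise."""
    signals = []
    for i in range(long_window - 1, len(prices)):
        short_ma = sum(prices[i - short_window + 1 : i + 1]) / short_window
        long_ma = sum(prices[i - long_window + 1 : i + 1]) / long_window
        signals.append((i, "long" if short_ma > long_ma else "short"))
    return signals

# A steadily rising price series keeps the short average above the long
# one, so every generated signal is long.
prices = [100 + 0.5 * t for t in range(40)]
signals = crossover_signals(prices)
assert all(s == "long" for _, s in signals)
```

Varying (short_window, long_window) over a grid yields the set of trading rules that empirical studies typically simulate.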
Since Charles H. Dow first introduced the Dow theory in the late
1800s, technical analysis has been extensively used among market
participants such as brokers, dealers, fund managers, speculators,
and individual investors in the financial industry.[3] Numerous surveys
indicate that practitioners attribute a significant role to technical
analysis. For example, futures fund managers rely heavily on computer-guided
technical trading systems (Irwin and Brorsen 1985; Brorsen and Irwin
1987; Billingsley and Chance 1996), and about 30% to 40% of foreign
exchange traders around the world believe that technical analysis
is the major factor determining exchange rates in the short run
up to six months (e.g., Menkhoff 1997; Cheung and Wong 2000; Cheung,
Chinn, and Marsh 2000; Cheung and Chinn 2001).
In contrast to the views of many practitioners, most academics
are skeptical about technical analysis. Rather, they tend to believe
that markets are informationally efficient and hence all available
information is impounded in current prices (Fama 1970). In efficient
markets, therefore, any attempts to make profits by exploiting currently
available information are futile. In a famous passage, Samuelson
(1965) argues that:
"…there is no way of making an expected profit by extrapolating
past changes in the futures price, by chart or any other esoteric
devices of magic or mathematics. The market quotation already contains
in itself all that can be known about the future and in that sense
has discounted future contingencies as much as is humanly possible."
(p. 44)
Nevertheless, in recent decades rigorous theoretical explanations
for the widespread use of technical analysis have been developed
based on noisy rational expectation models (Treynor and Ferguson
1985; Brown and Jennings 1989; Grundy and McNichols 1989; Blume,
Easley, and O'Hara 1994), behavioral (or feedback) models (De Long
et al. 1990a, 1991; Shleifer and Summers 1990), disequilibrium models
(Beja and Goldman 1980), herding models (Froot, Scharfstein, and
Stein 1992), agent-based models (Schmidt 2002), and chaos theory
(Clyde and Osler 1997). For example, Brown and Jennings (1989) demonstrated
that under a noisy rational expectations model in which current
prices do not fully reveal private information (signals) because
of noise (unobserved current supply of a risky asset) in the current
equilibrium price, historical prices (i.e., technical analysis)
together with current prices help traders make more precise inferences
about past and present signals than do current prices alone (p.
527).
Since Donchian (1960), numerous empirical studies have tested the
profitability of technical trading rules in a variety of markets
for the purpose of either uncovering profitable trading rules or
testing market efficiency, or both. Most studies have concentrated
on stock markets, both in the US and outside the US, and foreign
exchange markets, while a smaller number of studies have analyzed
futures markets. Before the mid-1980s, the majority of the technical
trading studies simulated only one or two trading systems. In these
studies, although transaction costs were deducted to compute net
returns of technical trading strategies, risk was not adequately
handled, statistical tests of trading profits and data snooping
problems were often disregarded, and out-of-sample verification
along with parameter (trading rule) optimization were not considered
in the testing procedure. After the mid-1980s, however, technical
trading studies greatly improved upon the drawbacks of early studies
and typically included some of the following features in their testing
procedures: (1) the number of trading systems tested increased relative
to early studies; (2) returns were adjusted for transaction costs
and risk; (3) parameter (trading rule) optimization and out-of-sample
verification were conducted; and (4) statistical tests were performed
with either conventional statistical tests or more sophisticated
bootstrap methods, or both.
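Feature (2), deducting transaction costs, can be sketched as follows; the proportional round-trip cost rate is a hypothetical value, and actual studies must also adjust the resulting net returns for risk:

```python
# Sketch of net trading returns after proportional transaction costs
# (the 0.1% round-trip cost rate is a hypothetical illustration).

def net_compound_return(trade_returns, cost_per_round_trip=0.001):
    """Compound per-trade gross returns, deducting a proportional
    cost on each round-trip trade."""
    wealth = 1.0
    for r in trade_returns:
        wealth *= (1.0 + r) * (1.0 - cost_per_round_trip)
    return wealth - 1.0

gross_trades = [0.02, -0.01, 0.015]      # hypothetical per-trade returns
gross = (1.02 * 0.99 * 1.015) - 1.0      # compound return before costs
net = net_compound_return(gross_trades)
assert net < gross                        # costs strictly reduce returns
```

Early studies often stopped at this net-return calculation; later studies added risk adjustment, optimization, out-of-sample checks, and statistical or bootstrap tests.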
The purpose of this report is to review the evidence on the profitability
of technical analysis. To achieve this purpose, the report comprehensively
reviews survey, theoretical and empirical studies regarding technical
analysis and discusses the consistency and reliability of technical
trading profits across markets and over time. Despite a recent explosion
in the literature on technical analysis, no study has surveyed the
literature systematically and comprehensively. The report will pay
special attention to testing procedures used in empirical studies
and identify their salient features and weaknesses. This will improve
general understanding of the profitability of technical trading
strategies and suggest directions for future research. Empirical
studies surveyed include those that tested technical trading systems,
trading rules formulated by genetic algorithms or some statistical
models (e.g., ARIMA), and chart patterns that can be represented
algebraically. The majority of the studies were collected from academic
journals published from 1960 to the present and recent working papers.
Only a few studies were obtained from books or magazines.
Survey Studies
Survey studies attempt to directly
investigate market participants' behavior and experiences, and document
their views on how a market works. These features cannot be easily
observed in typical data sets. The oldest survey study regarding
technical analysis dates back to Stewart (1949), who analyzed the
trading behavior of customers of a large Chicago futures commission
firm over the 1924-1932 period. The results indicated that in general
traders were unsuccessful in their grain futures trading, regardless
of their scale and knowledge of the commodity traded. Amateur speculators
were more likely to be long than short in futures markets. Long
positions generally were taken on days of price declines, while
short positions were initiated on days of price rises. Thus, trading
against the current movement of prices appeared to be dominant.
However, a representative successful speculator showed a tendency
to buy on reversals in price movement during upward price swings
and sell on upswings that followed declines in prices, suggesting
that successful speculators followed market trends.
Smidt (1965a) surveyed trading activities of amateur traders in
the US commodity futures markets in 1961.[4] In this survey, about
53% of respondents claimed that they used charts either exclusively
or moderately in order to identify trends. The chartists, whose
jobs were seldom related to commodity information, tended to trade
more commodities than the other traders (non-chartists).
Only 24% of the chartists had been trading for six or more years,
while 42% of non-chartists belonged to the same category. There
was a slight tendency for chartists to pyramid more frequently than
other traders.[5] It is interesting to note that only 10% of the chartists,
compared to 29% of the non-chartists, nearly always took long positions.
The Group of Thirty (1985) surveyed the views of market participants
on the functioning of the foreign exchange market in 1985. The respondents
were composed of 40 large banks and 15 securities houses in 12 countries.
The survey results indicated that 97% of bank respondents and 87%
of the securities houses believed that the use of technical analysis
had a significant impact on the market. The Group of Thirty reported
that "Technical trading systems, involving computer models
and charts, have become the vogue, so that the market reacts more
sharply to short term trends and less attention is given to basic
factors (p. 14)."
Brorsen and Irwin (1987) carried out a survey of large public futures
funds' advisory groups in 1986. In their survey, more than half
of the advisors responded that they relied heavily on computer-guided
technical trading systems. Most fund advisors appeared to use technical
trading rules by optimizing parameters of their trading systems
over historical data whose amounts varied by advisors, with two
years being the smallest amount. Because of liquidity costs, futures
funds held 80% of their positions in the nearby contract, and the
average number of commodities they traded had been quite constant
through time. Since technically traded public and private futures
funds were estimated to control an average of 23% of the open interest
in ten important futures markets, the funds seemed large enough
to move prices if they traded in unison (p. 133).
Frankel and Froot (1990) showed
that switching from one forecasting method to another over time may explain
changes in the demand for dollars in foreign exchange markets. The
evidence provided was the survey results of Euromoney magazine
for foreign exchange forecasting firms. According to the magazine,
in 1978, nineteen forecasting firms exclusively used fundamental
analysis and only three firms used technical analysis. By 1983, however,
the distribution had reversed: only one firm reported using fundamental
analysis and eight used technical analysis. In 1988,
seven firms appeared to rely on fundamental analysis while eighteen
firms employed technical analysis.
Taylor and Allen (1992) conducted a survey on the use of technical
analysis among chief foreign exchange dealers in the London market
in 1988. The results indicated that 64% of respondents reported
using moving averages and/or other trend-following systems and 40%
reported using other trading systems such as momentum indicators
or oscillators. In addition, approximately 90% of respondents reported
that they were using some technical analysis when forming their
exchange rate expectations at the shortest horizons (intraday to
one week), with 60% viewing technical analysis to be at least as
important as fundamental analysis.
Menkhoff (1997) investigated the behavior of foreign exchange professionals
such as dealers or fund managers in Germany in 1992. His survey
revealed that 87% of the dealers placed a weight of over 10% on
technical analysis in their decision making. The mean value of the
importance of technical analysis appeared to be 35% and other professionals
also showed similar responses. Respondents believed that technical
analysis influenced their decisions from intraday to 2-6 months by
giving a weight of between 34% and 40%. Other interesting findings
were: (1) professionals preferring technical analysis were younger
than other participants; (2) there was no relationship between institutional
size and the preferred use of technical analysis; and (3) chartists
and fundamentalists both indicated no significant differences in
their educational level.
Lui and Mole (1998) surveyed the use of technical and fundamental
analysis by foreign exchange dealers in Hong Kong in 1995. The dealers
believed that technical analysis was more useful than fundamental
analysis in forecasting both trends and turning points. Similar
to previous survey results, technical analysis appeared to be important
to dealers at the shorter time horizons up to 6 months. Respondents
considered moving averages and/or other trend-following systems
the most useful technical analysis. The typical length of historical
period used by the dealers was 12 months and the most popular data
frequency was daily data.
Cheung and Wong (2000) investigated practitioners in the interbank
foreign exchange markets in Hong Kong, Tokyo, and Singapore in 1995.
Their survey results indicated that about 40% of the dealers believed
that technical trading is the major factor determining exchange
rates in the medium run (within 6 months), and even in the long
run about 17% believed technical trading is the most important determining
factor.
Cheung, Chinn, and Marsh (2000) surveyed the views of UK-based
foreign exchange dealers on technical analysis in 1998. In this survey,
33% of the respondents described themselves as technical analysts
and the proportion had increased by approximately 20% compared to
five years earlier. Moreover, 26% of the dealers responded that technical
trading is the most important factor that determines exchange rate
movements over the medium run.
Cheung and Chinn (2001) published survey results for US-based foreign
exchange traders conducted in 1998. In the survey, about 30% of
the traders indicated that technical trading best describes their
trading strategy. Five years earlier, only 19% of traders had judged
technical trading as their trading practice. About 31% of the traders
responded that technical trading was the primary factor determining
exchange rate movements up to 6 months.
Oberlechner (2001) reported findings from a survey on the importance
of technical and fundamental analysis among foreign exchange traders
and financial journalists in Frankfurt, London, Vienna, and Zurich
in 1996. For foreign exchange traders, technical analysis seemed
to be a more important forecasting tool than fundamental analysis
up to a 3-month forecasting horizon, while for financial journalists
it seemed to be more important up to 1 month. However, forecasting
techniques differed in trading locations on shorter forecasting
horizons. From intraday to a 3-month forecasting horizon, traders
in smaller trading locations (Vienna and Zurich) placed more weight
on technical analysis than did traders in larger trading locations
(London and Frankfurt). Traders generally used a mixture of both
technical and fundamental analysis in their trading practices. Only
3% of the traders exclusively used one of the two forecasting techniques.
Finally, comparing the survey results for foreign exchange traders
in London to the previous results of Taylor and Allen (1992), the
importance of technical analysis appeared to increase across all
trading horizons relative to 1988 (the year when Taylor and Allen
conducted a survey).
In sum, survey studies indicate that technical analysis has been
widely used by practitioners in futures markets and foreign exchange
markets, and regarded as an important factor in determining price
movements at shorter time horizons. However, no survey evidence
for stock market traders was found.
Theory
The Efficient Markets Hypothesis
The efficient markets hypothesis
has long been a dominant paradigm in describing the behavior of
prices in speculative markets. Working (1949, p. 160) provided an
early version of the hypothesis:
If it is possible under any given combination of circumstances
to predict future price changes and have the predictions fulfilled,
it follows that the market expectations must have been defective;
ideal market expectations would have taken full account of the information
which permitted successful prediction of the price changes.
In later work, he revised his definition of a perfect futures market
to "… one in which the market price would constitute at
all times the best estimate that could be made, from currently available
information, of what the price would be at the delivery date of
the futures contracts (Working, 1962, p. 446)." This definition
of a perfect futures market is in essence identical to the famous
definition of an efficient market given by Fama (1970, p. 383):
"A market in which prices always 'fully reflect' available
information is called 'efficient'." Since Fama's survey study
was published, this definition of an efficient market has long served
as the standard definition in the financial economics literature.
A more practical definition of
an efficient market is given by Jensen (1978, p. 96) who wrote:
"A market is efficient with respect to information set θ_{t} if it
is impossible to make economic profits by trading on the basis of
information set θ_{t}." Since economic profits are risk-adjusted
returns after deducting transaction costs, Jensen's definition implies
that market efficiency may be tested by considering the net profits
and risk of trading strategies based on information set θ_{t}. Timmermann
and Granger (2004, p. 25) extended Jensen's definition by specifying
how the information variables in θ_{t} are used in actual forecasting.
Their definition is as follows:
A market is efficient with respect to the information set θ_{t}, search
technologies S_{t}, and forecasting models M_{t}, if it is impossible to make
economic profits by trading on the basis of signals produced from
a forecasting model in M_{t} defined over predictor variables in the information
set θ_{t} and selected using a search technology in S_{t}.[6]
On the other hand, Jensen (1978,
p. 97) grouped the various versions of the efficient markets hypothesis
into the following three testable forms based on the definition
of the information set θ_{t}:
(1) the weak form of the efficient markets hypothesis, in which
the information set θ_{t} is taken to be solely the information contained
in the past price history of the market as of time t.
(2) the semi-strong form of the efficient markets hypothesis, in
which θ_{t} is taken to be all information that is publicly available
at time t. (This includes, of course, the past history of prices
so the weak form is just a restricted version of this.)
(3) the strong form of the efficient markets hypothesis, in which
θ_{t} is taken to be all information known to anyone at time t.
Thus, technical analysis provides a weak form test of market efficiency
because it heavily uses past price history. Testing the efficient
markets hypothesis empirically requires more specific models that
can describe the process of price formation when prices fully reflect
available information. In this context, two specific models of efficient
markets, the martingale model and the random walk model, are explained
next.
The Martingale Model
In the mid1960s, Samuelson (1965)
and Mandelbrot (1966) independently demonstrated that a sequence
of prices of an asset is a martingale (or a fair game) if it has
unbiased price changes. A martingale stochastic process {P_{t}} is expressed
as:
E(P_{t+1} | P_{t}, P_{t-1}, ...) = P_{t} (1)
or equivalently,
E(P_{t+1} - P_{t} | P_{t}, P_{t-1}, ...) = 0 (2)
where P_{t} is the price of an asset at time t. Equation (1) states that
tomorrow's price is expected to be equal to today's price, given
knowledge of today's price and of past prices of the asset. Equivalently,
(2) states that the asset's expected price change (or return) is
zero when conditioned on the asset's price history. The martingale
process does not imply that successive price changes are independent.
It just suggests that the correlation coefficient between these
successive price changes will be zero, given information about today's
price and past prices. Campbell, Lo, and MacKinlay (1997, p. 30)
stated that:
In fact, the martingale was long considered to be a necessary
condition for an efficient asset market, one in which the information
contained in past prices is instantly, fully, and perpetually reflected
in the asset's current price. If the market is efficient, then it
should not be possible to profit by trading on the information contained
in the asset's price history; hence the conditional expectation
of future price changes, conditional on the price history, cannot
be either positive or negative (if short sales are feasible) and
therefore must be zero.
Thus, the assumptions of the martingale model eliminate the possibility
of technical trading rules based only on price history that have
expected returns in excess of equilibrium expected returns. Another
aspect of the martingale model is that it implicitly assumes risk
neutrality. However, since investors are generally risk-averse,
in practice it is necessary to properly incorporate risk factors
into the model.
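The fair-game property in equation (2) can be checked numerically: a driftless Gaussian random walk is a martingale, so its simulated increments should show a sample mean and first-order autocorrelation near zero. This is an illustrative simulation, not a proof:

```python
# Numerical check of the fair-game property: the increments of a
# driftless random walk have (sample) mean and autocorrelation near zero.
import random

random.seed(0)
increments = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = sum(increments) / len(increments)

def autocorr(x, lag=1):
    """Sample autocorrelation of the sequence x at the given lag."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i - lag] - m) for i in range(lag, len(x)))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

assert abs(mean) < 0.02                 # expected price change is zero
assert abs(autocorr(increments)) < 0.02 # successive changes uncorrelated
```

Note that zero autocorrelation of increments is all the martingale delivers; as the text emphasizes, it says nothing about dependence in higher moments.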
As a special case of the fair game model, Fama (1970) suggested
the submartingale model, which can be expressed as:
E(P_{j,t+1} | θ_{t}) ≥ P_{j,t}, or equivalently, E(r_{j,t+1} | θ_{t}) ≥ 0 (3)
where P_{j,t} is the price of security j at time t; P_{j,t+1} is its
price at t+1; r_{j,t+1} is the one-period percentage return (P_{j,t+1}
- P_{j,t})/P_{j,t}; θ_{t} is a general symbol for whatever set of
information is assumed to be "fully reflected" in the price at t;
and P_{j,t+1} and r_{j,t+1} are random variables at t.
This states that the expected value of next period's price based
on the information available at time t, θ_{t}, is equal
to or greater than the current price. Equivalently, it says that
the expected returns and price changes are equal to or greater than
zero. If (3) holds as an equality, then the price sequence {P_{j,t}}
for security j follows a martingale with respect to the information
sequence {θ_{t}}. An important empirical implication
of the submartingale model is that no trading rules based only
on the information set θ_{t} can have greater expected
returns than ones obtained by following a buy-and-hold strategy
in a future period. Fama (1970, p. 386) emphasized that "Tests
of such rules will be an important part of the empirical evidence
on the efficient markets model."
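This empirical implication can be made concrete: a weak-form test compares a rule's compound return with that of buy-and-hold over the same period. The schematic below uses a hypothetical placeholder rule (long only after an up-day) and illustrative prices:

```python
# Schematic weak-form comparison: a (hypothetical) rule that holds the
# asset in period t+1 only after an up-day, versus buy-and-hold.

def buy_and_hold_return(prices):
    """Compound return from holding over the whole sample."""
    return prices[-1] / prices[0] - 1.0

def rule_return(prices):
    """Hold the asset in period t+1 only if price rose from t-1 to t."""
    wealth = 1.0
    for t in range(1, len(prices) - 1):
        if prices[t] > prices[t - 1]:            # long signal
            wealth *= prices[t + 1] / prices[t]
    return wealth - 1.0

prices = [100, 101, 100, 102, 103, 101, 104]     # illustrative data
bh = buy_and_hold_return(prices)
tr = rule_return(prices)
# Under the submartingale model, no such rule has greater *expected*
# return than buy-and-hold; any single sample can go either way.
```

Actual studies repeat this comparison over many rules and long samples, with costs and risk adjustment as discussed earlier.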
Random Walk Models
The idea of the random walk model
goes back to Bachelier (1900) who developed several models of price
behavior for security and commodity markets.[7] One of his models is
the simplest form of the random walk model: if P_{t} is the unit price
of an asset at the end of time t, then the increment P_{t+T} - P_{t}
is assumed to be an independent and normally distributed random variable
with zero mean and variance proportional to T. The random walk model may
be regarded as an extension of the martingale model in the sense
that it provides more details about the economic environment. The
martingale model implies that the conditions of market equilibrium
can be stated in terms of the first moment, and thus it tells us
little about the details of the stochastic process generating returns.
Campbell, Lo, and MacKinlay (1997) summarize various versions of
random walk models as the following three models, based on the distributional
characteristics of increments. Random walk model 1 (RW1) is the
simplest version of the random walk hypothesis in which the dynamics
of {P_{t}} are given by the following equation:
P_{t} = μ + P_{t-1} + e_{t}, e_{t} ~ IID(0, σ^{2}) (4)
where μ is the expected price change or drift, and IID(0, σ^{2}) denotes
that e_{t} is independently and identically distributed with mean 0 and
variance σ^{2}. The independence
of increments e_{t} implies that the random walk process is also a fair
game, but in a much stronger sense than the martingale process:
independence implies not only that increments are uncorrelated,
but that any nonlinear functions of the increments are also uncorrelated.
Fama (1970, p. 386) stated that "In the early treatments of
the efficient markets model, the statement that the current price
of a security 'fully reflects' available information was assumed
to imply that successive price changes (or more usually, successive
oneperiod returns) are independent. In addition, it was usually
assumed that successive changes (or returns) are identically distributed."
However, the assumption of identically distributed increments has
been questioned for financial asset prices over long time spans
because of frequent changes in the economic, technological, institutional,
and regulatory environment surrounding the asset prices.
Random walk model 2 (RW2) relaxes the assumptions of RW1 to include
processes with independent but not identically distributed increments (e_{t}):
P_{t} = μ + P_{t-1} + e_{t}, e_{t} ~ INID(0, σ_{t}^{2}) (5)
RW2 can be regarded as a more general price process in that, for
example, it allows for unconditional heteroskedasticity in the e_{t}'s,
a particularly useful feature given the time-variation in volatility
of many financial asset return series.
Random walk model 3 (RW3) is
an even more general version of the random walk hypothesis, which
is obtained by relaxing the independence assumption of RW2 to include
processes with dependent but uncorrelated increments. For example,
a process that has the following properties satisfies the assumptions
of RW3 but not of RW1 and RW2: Cov[e_{t}, e_{t-k}] = 0 for all k ≠ 0,
but Cov[e^{2}_{t}, e^{2}_{t-k}] ≠ 0 for some k ≠ 0.
This process has uncorrelated increments but is evidently not independent
because its squared increments are correlated.
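This distinction can be illustrated with an ARCH-style simulation (the coefficients are hypothetical): the increments are serially uncorrelated, satisfying RW3, but their squares are autocorrelated because the conditional variance depends on the previous increment, violating RW1 and RW2:

```python
# ARCH(1)-style increments: uncorrelated levels, correlated squares
# (hypothetical coefficients; an illustration of RW3 vs RW1/RW2).
import math
import random

random.seed(1)
n = 100_000
e = [0.0] * n
for t in range(1, n):
    sigma2 = 0.5 + 0.5 * e[t - 1] ** 2   # conditional variance depends on past
    e[t] = random.gauss(0.0, math.sqrt(sigma2))

def autocorr(x, lag=1):
    """Sample autocorrelation of the sequence x at the given lag."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i - lag] - m) for i in range(lag, len(x)))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

assert abs(autocorr(e)) < 0.05             # levels: no linear correlation
assert autocorr([v * v for v in e]) > 0.1  # squares: clearly correlated
```

Such a process is a fair game in the martingale sense even though volatility is forecastable, which is exactly why the martingale model tolerates dependence in higher moments.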
Fama and Blume (1966) argued that, in most cases, the martingale
model and the random walk model are indistinguishable because the
martingale's degree of dependence is so small, and hence for all
practical purposes they are the same. Nevertheless, Fama (1970)
emphasized that market efficiency does not require the random walk
model. From the viewpoint of the submartingale model, the market
is still efficient unless returns of technical trading rules exceed
those of the buy-and-hold strategy, even though price changes (increments)
in a market indicate small dependence. In fact, the martingale model
does not preclude any significant effects in higher order conditional
moments since it assumes the existence of the first moment (expected
return) only.
Noisy Rational Expectations Models
The efficient markets model
implies instantaneous adjustment of price to new information by
assuming that the current equilibrium price fully impounds all available
information. It implicitly assumes that market participants are
rational and they have homogeneous beliefs about information. In
contrast, noisy rational expectations equilibrium models assume
that the current price does not fully reveal all available information
because of noise (unobserved current supply of a risky asset or
information quality) in the current equilibrium price. Thus, price
shows a pattern of systematic slow adjustment to new information
and this implies the existence of profitable trading opportunities.
Noisy rational expectations equilibrium models were developed
on the basis of asymmetric information among market participants.
Working (1958) first developed a model in which traders are divided
into two groups: a large group of well-informed and skillful traders
and a small group of ill-informed and unskillful traders. In his
model, some traders seek to get pertinent market information ahead
of the rest, while others seek information that gives advance indication
of future events. Since there exist many different pieces of information
that influence prices, price tends to change gradually and frequently.
The tendency of gradual price changes results in very shortterm
predictability. In the process, traders who make their decision
on the basis of new information may seek quick profits or take their
losses quickly, because they may regard an adverse price movement
as a signal that the price is reflecting other information which
they do not possess. Meanwhile, ill-qualified traders, who have little
opportunity to acquire valuable information early and little ability
to interpret it when they do, may choose to "go with
the market."
Smidt (1965b) developed another early model in this area and provided
the first theoretical foundation for the possibility of profitable
technical trading rules by taking account of the speed and efficiency
with which a speculative market responds to new information. He
hypothesized two futures markets. The first market is an ideal one
where all traders are immediately and simultaneously aware of any
new information pertaining to the price of futures contracts. The
second market has two types of traders, "insiders" and
"outsiders." While insiders are traders who learn about
new information relatively early, outsiders are traders who only
hear about the new information after insiders have heard about it.
According to Smidt, if all traders are equally well informed as
in the ideal market or if insiders perfectly predict subsequent
outsiders' behavior, there exists only a limited possibility of
profits for technical traders. Even if insiders do not always perfectly
anticipate outsiders, technical analysis may have no value if insiders
are as likely to underestimate as to overestimate the outsiders'
response to new information. However, if insiders do not perfectly
predict outsiders' behavior and hence there is a systematic tendency
for a price rise or fall to be followed by a subsequent further
rise or fall, then technical traders may earn longrun profits in
a market, even in the absence of price trends. Thus, Smidt argued
that "evidence that a trading system generates positive profits
that are not simply the results of following a trend also constitutes
evidence of market imperfections" (p. 130).
Grossman and Stiglitz (1976, 1980) developed a formal noisy rational
expectations model in which there is an equilibrium degree of disequilibrium.
They demonstrated that, in a competitive market, no one has an incentive
to obtain costly information if the marketclearing price reflects
all available information, and thus the competitive market breaks
down. Like Smidt's framework, Grossman and Stiglitz's model also
assumes two types of traders, "informed" and "uninformed,"
depending on whether they paid a cost to obtain information. When
price reflects all available information, each informed trader in
a competitive market feels they could stop paying for information
and do as well as uninformed traders. But all informed traders feel
this way. Therefore, if a market is informationally efficient, then
having any positive fraction informed is not an equilibrium. Conversely,
having no one informed is also not an equilibrium since each trader
feels that they could make profits from becoming informed.
Grossman and Stiglitz further demonstrated that if information
is very inexpensive, or if informed traders have very precise information,
then equilibrium exists and the speculative market price will reveal
most of the informed traders' information. However, such a market
will be very thin because it can be made of traders with almost
homogeneous beliefs. Grossman and Stiglitz's model supports the
weak form of the efficient markets hypothesis in which no profits
are made from looking at price history because their model assumes
uninformed traders have rational expectations. What is not supported
by their model is the strong form of the efficient markets hypothesis
because prices are unable to fully reflect all private information
and thus the informed do a better job in allocating their portfolio
than the uninformed.
In contrast to Grossman and Stiglitz, Hellwig (1982) showed that
if the time span between successive market transactions is short,
the market can approximate full informational efficiency closely,
but the returns to the informed traders can be greater than zero.
The Grossman-Stiglitz conclusion resulted from the assumption that
traders learn from current prices before any transactions at these
prices take place, while Hellwig assumes that traders draw information
only from past equilibrium prices at which transactions have actually
been completed. Thus, the informed have time to use their information
before other traders have inferred it from the market price and
can make positive returns, which in turn provide an incentive to
spend resources on information.
In Hellwig's model, the market cannot be informationally efficient
if traders learn from past prices rather than current prices, because
the information contained in the current price is not yet 'correctly
evaluated' by uninformed traders. However, the deviation from informational
efficiency is small if the period is short, since the underlying
stochastic processes are continuous and have only small increments
in a short time interval. That is, the news of any one period is
insignificant and thus the informational advantage of informed traders
is small. This implies that the equilibrium price in any period
must be close to an informationally efficient market level. Despite
their small informational advantage, however, informed traders can
make positive returns by taking very large positions in their transactions.
Therefore, the return to being informed in one period is prevented
from being zero and the market approaches full informational efficiency.
Treynor and Ferguson (1985) showed that if technical analysis is
combined with nonpublic information that may change the price of
an asset, then it could be useful in achieving unusual profit in
a speculative market. In their model, an investor obtaining nonpublic
information privately must decide how to act. If the investor receives
the information before the market does and establishes an appropriate
position, then they can expect a profit from the change in price
that is forthcoming when the market receives the information. If
the investor receives the information after the market does, then
they do not take the position. The investor uses past prices to
compute the probability that the market has already incorporated
the information. Treynor and Ferguson measured such profitability
using Bayes' theorem conditioned on past prices. However, they pointed
out that the investor's profit opportunity is created by the non-price
information, not by the past prices. Past prices only help exploit
the information efficiently.
Brown and Jennings (1989) proposed a two-period noisy rational
expectations model in which a current (second-period) price is dominated
as an informative source by a weighted average of past (first-period)
and current prices. According to these authors, if the current price
depends on noise (i.e., unobserved current supply of a risky asset)
as well as private information of market participants, it cannot
be a sufficient statistic for private information. Moreover, noise
in the current equilibrium price does not allow for price to fully
reveal all publicly available information provided by price histories.
Therefore, past prices together with current prices enable investors
to make more accurate inferences about past and present signals
than do current prices alone. Brown and Jennings demonstrated that
technical analysis based on past prices has value in every myopic-investor
economy in which current prices are not fully revealing of private
information and traders have rational conjectures about the relation
between prices and signals.
Grundy and McNichols (1989) independently introduced a multi-period
noisy rational expectations model analogous to that in Brown and
Jennings (1989). Their model is also similar to the model in Hellwig
(1982) in that a sequence of prices fully reveals average private
signals (Ybar) as the number of rounds of trade becomes infinite,
although Hellwig assumed that per capita supply is observable but
traders cannot condition their demand on the current price. In Grundy
and McNichols' model, supply is unobservable but traders are able
to condition their demand on the current price. In particular, they
conjectured that when supply is perfectly correlated across rounds,
Ybar can be revealed with just two rounds of trade. In the first
round of trade, an exogenous supply shock keeps price from fully
revealing the average private signal Ybar. Allowing a second round
of trade leads to one of two types of equilibria: non-Ybar-revealing
and Ybar-revealing. In the non-Ybar-revealing equilibrium, traders
have homogeneous beliefs concerning the second-round price. Thus,
traders do not learn about Ybar from the second round of trade and
continue to hold their Pareto-optimal allocations from the first
round. The market will again clear at the price of the first round
and no trade takes place in the second round. In the Ybar-revealing
equilibrium, Pareto-optimal allocations are not achieved in the
first round and traders do not have concordant beliefs concerning
the second-round price, since the sequence of prices, i.e., prices
of the first and second rounds, reveals Ybar. Traders do not learn
Ybar from the second-round price alone but do learn it from the
price sequence. Trade thus takes place at both the first and second
rounds even without new public (or private) information. In the
Ybar-revealing equilibrium, rational traders are chartists and their
risksharing behavior leads to trade.
Blume, Easley, and O'Hara (1994) developed an equilibrium model
that emphasizes the informational roles of volume and technical
analysis. Unlike previous equilibrium models that considered the
aggregate supply of a risky asset as the source of noise, their
model assumes that the source of noise is the quality of information.
They showed that volume provides "information about the quality
of traders' information" that cannot be conveyed by prices,
and thus, observing the price and the volume statistics together
can be more informative than observing the price statistic alone.
In their model, technical analysis is valuable because current market
statistics may be insufficient to reveal all information. They argued
that "Because the underlying uncertainty in the economy is
not resolved in one period, sequences of market statistics can provide
information that is not impounded in a single market price"
(p. 177). The value of technical analysis depends on the quality
of information. Technical analysis can be more valuable if past
price and volume data possess higher-quality information, and be
less valuable if there is less to be learned from the data. In any
case, technical analysis helps traders to correctly update their
views on the market.
Noise Traders and Feedback Models
In the early 1990s, several financial
economists developed the field of behavioral finance, which is "finance
from a broader social science perspective including psychology and
sociology" (Shiller 2003, p. 83). In the behavioral finance
model, there are two types of investors: arbitrageurs (also called
sophisticated investors or smart money) and noise traders (feedback
traders or liquidity traders). Arbitrageurs are defined as investors
who form fully rational expectations about security returns, while
noise traders are investors who irrationally trade on noise as if
it were information (Black 1986). Noise traders may obtain their
pseudo-signals from technical analysts, brokers, or economic consultants
and irrationally believe that these signals impound information.
The behavioralists' approach, also known as feedback models, is
then based on two assumptions. First, noise traders' demand for
risky assets is affected by their irrational beliefs or sentiments
that are not fully justified by news or fundamental factors. Second,
since arbitrageurs are likely to be risk averse, arbitrage, defined
as trading by fully rational investors not subject to such sentiment,
is risky and therefore limited (Shleifer and Summers 1990, p. 19).
In feedback models, noise traders buy when prices rise and sell
when prices fall, like trend chasers. For example, when noise traders
follow positive feedback strategies (buy when prices rise), this
increases aggregate demand for an asset they purchased and thus
results in a further price increase. Arbitrageurs having short horizons
may think that the asset is mispriced above its fundamental value,
and sell it short. However, their arbitrage is limited because it
is always possible that the market will perform very well (fundamental
risk) and that the asset will be even more overpriced by noise traders
in the near future because they can be even more optimistic ("noise
trader risk," De Long et al. 1990a). As long as there exists
risk created by the unpredictability of noise traders' opinions,
sophisticated investors' arbitrage will be reduced even in the absence
of fundamental risk and thus they do not fully counter the effects
of the noise traders. Rather, it may be optimal for arbitrageurs
to jump on the "bandwagon" themselves. Arbitrageurs optimally
buy the asset that noise traders have purchased and sell it
much later, when its price has risen high enough. Therefore, although
ultimately arbitrageurs make prices return to their fundamental
levels, in the short run they amplify the effect of noise traders
(De Long et al. 1990b). On the other hand, when noise traders are
pessimistic and thus follow negative feedback strategies, downward
price movement drives further price decreases and over time this
process eventually creates a negative bubble. In the feedback models,
since noise traders may be more aggressive than arbitrageurs due
to their over-optimistic (or over-pessimistic) or overconfident views
on markets, they bear more risk with higher expected returns. As
long as risk-return trade-offs exist, noise traders may earn higher
returns than arbitrageurs. De Long et al. (1991) further showed
that even in the long run noise traders as a group survive and dominate
the market in terms of wealth despite their excessive risk taking
and excessive consumption. Hence, the feedback models suggest that
technical trading profits may be available even in the long run
if technical trading strategies (buy when prices rise and sell when
prices fall) are based on noise or "popular models" and
not on information such as news or fundamental factors (Shleifer
and Summers 1990).
Other Models
Additional models provide support for the use of technical analysis.
Beja and Goldman (1980) introduced a simple disequilibrium model
that explained the dynamic behavior of prices in the short run.
The rationale behind their model was, "When price movements
are forced by supply and demand imbalances which may take time to
clear, a nonstationary economy must experience at least some transient
moments of disequilibrium. Observed prices will then depend not
only on the state of the environment, but also on the state of the
market" (p. 236). The state of the economic environment represents
agents' endowments, preferences, and information generally changing
with time. In the disequilibrium model, therefore, the investor's
excess demand function for a security includes two components: (1)
fundamental demand which is the aggregate demand that the auctioneer
would face if at time t one were to conduct a Walrasian auction
in the economy; and (2) the difference between actual excess demand
and corresponding fundamental demand. With non-equilibrium trading,
the demands should reflect the potential for direct speculation
on price changes, including the price's adjustment towards equilibrium.
In general, this is a function of both speculators' average assessment
of the current trend in the security's price and the opportunity
growth rate of alternative investments in non-equilibrium trading
with comparable securities. The process of trend estimation is adaptive
because the price changes include some randomness. Beja and Goldman
showed that when trend followers have some market power, an increase
in fundamental demand might generate oscillations, although the
economy dominated by fundamental demand is stable and non-oscillatory.
Furthermore, increasing the market impact of the trend followers
causes oscillations and makes the system unstable. These situations
imply poor signaling quality of prices. On the other hand, they
also demonstrated that moderate speculation might improve the quality
of price signal and thus accelerate the convergence to equilibrium.
This happens when the speculators' response to changes in price
movements is relatively faster than the impact of fundamental demand
on price adjustment.
Froot, Scharfstein, and Stein (1992) demonstrated that herding
behavior of short-horizon traders can lead to informational inefficiency.
Their model showed that an informed trader who wants to buy or sell
in the near future could benefit from their information only if
it is subsequently impounded into the price by the trades of similarly
informed speculators. Thus, short-horizon traders would make profits
when they can coordinate their research efforts on the same information.
This kind of positive informational spillover can be so powerful
that herding traders may even analyze information that is not closely
related to the asset's longrun value. Technical analysis is one
example. Froot, Scharfstein, and Stein stated, "the very fact
that a large number of traders use chartist models may be enough
to generate positive profits for those traders who already know
how to chart. Even stronger, when such methods are popular, it is
optimal for speculators to choose to chart" (p. 1480). In their
model, such an equilibrium is possible even in the condition in
which prices follow a random walk and hence publicly available information
has no value in forecasting future price changes.
Clyde and Osler (1997) provide another theoretical foundation for
technical analysis as a method for nonlinear prediction in a high-dimension
(or chaotic) system. They showed that graphical technical
analysis methods might be equivalent to nonlinear forecasting methods
using Takens' (1981) method of phase space reconstruction combined
with local polynomial mapping techniques for nonlinear prediction.
In Takens' method, the true phase space of a dynamic system with
n state variables can be reconstructed by plotting an observable
variable associated with the system against at least 2n of its own
lagged values (p. 494). The objective of the phase space reconstruction
is to discover an attractor, and if an attractor is found, nonlinear
prediction can be performed using local polynomial mapping techniques.
Forecasting using local polynomial mapping is related to identifying
the current position on the attractor and then observing the evolution
over time of points near the current point. If points near the current
point evolve to points that are near each other on the attractor,
forecasting can be made with some confidence that the current point
will evolve to the same region.
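The delay-embedding and nearest-neighbor forecasting procedure described above can be sketched in Python. This is an illustrative implementation only, not the authors' code: the embedding dimension, lag, and neighbor count are arbitrary choices, and the "local polynomial" here is the simplest possible case, a degree-zero map that averages the one-step evolution of nearby points.

```python
import numpy as np

def delay_embed(x, dim, lag=1):
    """Reconstruct a phase space from a scalar series by stacking the
    series against its own lagged values (Takens-style delay embedding).
    Returns an array of shape (len(x) - (dim - 1) * lag, dim)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

def local_forecast(x, dim=3, k=5):
    """Forecast the next value: find the k embedded points nearest the
    current point on the reconstructed attractor and average where each
    of them evolved one step later (a degree-0 local polynomial map)."""
    emb = delay_embed(x, dim)
    current, history = emb[-1], emb[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]
    # each history point emb[i] evolves to the observation x[i + dim]
    return np.mean([x[i + dim] for i in nearest])
```

On a noiseless periodic series the nearest neighbors of the current state lie at nearly the same phase in earlier cycles, so their successors cluster tightly around the true next value.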
The above process was tested by applying the identification algorithm
of a "headandshoulders" pattern to a simulated highdimension
nonlinear price series to see if technical analysis has any predictive
power. More specifically, the following two hypotheses were tested:
(1) technical analysis has no more predictive power on nonlinear
data than it does on random data; and (2) when applied to nonlinear
data, technical analysis earns no more hypothetical profits than
those generated by a random trading rule. For the first hypothesis,
the fraction of total positions that are profitable (the hit ratio)
was investigated. The result indicated that the hit ratios exceeded
0.5 in almost all cases when the head-and-shoulders pattern was
applied to the nonlinear series. Moreover, profits from applying
the head-and-shoulders pattern to the nonlinear series exceeded
the median of those from the bootstrap simulated data in almost
all cases, even at the longer horizons. Thus, the first hypothesis
was rejected. Similarly, the hit ratio tests for 100 nonlinear series
also rejected the second hypothesis. As a result, technical analysis
seemed to work better on nonlinear data than on random data and
generated more profits than random buying and selling when applied
to a known nonlinear system. This led Clyde and Osler to conclude
that "Technical methods may generally be crude but useful methods
of doing nonlinear analysis" (p. 511).
Introducing a simple agent-based model for market price dynamics,
Schmidt (1999, 2000, 2002) showed that if technical traders are
capable of affecting market liquidity, their concerted actions can
move the market price in the direction favorable to their strategy.
The model assumes a constant total number of traders that consists
of "regular" traders and "technical" traders.
In turn, the regular traders are partitioned into buyers and sellers,
and have two dynamic patterns in their behavior: a "fundamentalist"
component and a "chartist" component. The former motivates
traders to buy an asset if the current price is lower than the fundamental
value, and to sell it otherwise, while the latter leads traders
to buy if the price increases and sell when price falls. In the
model, price moves linearly with the excess demand, which in turn
is proportional to the excess number of buyers from both regular
and technical traders.
The result is similar to those of Beja and Goldman (1980) and
Froot, Scharfstein, and Stein (1992). In the absence of technical
traders, price dynamics formed slowly decaying oscillations around
an asymptotic value. However, inclusion of technical traders in
the model increased the price oscillation amplitude. The logic is
simple: if technical traders believe price will fall, they sell,
and thus, excess demand decreases. As a result, price decreases,
and the chartist component of regular traders forces them to sell.
This leads price to decrease further until the fundamentalist priorities
of regular traders become overwhelming. The opposite situation occurs
if technical traders make a buy decision based on their analysis.
Hence, Schmidt concluded that if technical traders are powerful
enough in terms of trading volume, they can move price in the direction
favorable to their technical trading strategy.
Summary of Theory
In efficient market models, such
as the martingale model and random walk models, technical trading
profits are not feasible because, by definition, in efficient markets
current prices reflect all available information (Working 1949,
1962; Fama 1970) or it is impossible to make risk-adjusted profits
net of all transaction costs by trading on the basis of past price
history (Jensen 1978). The martingale model suggests that an asset's
expected price change (or return) is zero when conditioned on the
asset's price history. In particular, the submartingale model (Fama
1970) implies that no trading rules based only on past price information
can have greater expected returns than buy-and-hold returns in a
future period. The simplest random walk model assumes that successive
price changes are independently and identically distributed with
zero mean. Thus, the random walk model has much stronger assumptions
than the martingale model.
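In symbols, writing P_t for the asset price and conditioning on the price history, the three models just described take the following standard textbook forms (these are conventional statements, not equations reproduced from the report):

```latex
% Martingale: expected price change conditioned on price history is zero
E\left[P_{t+1} - P_t \mid P_t, P_{t-1}, \ldots\right] = 0

% Submartingale: expected price change is nonnegative, so no rule based
% only on past prices has greater expected return than buy-and-hold
E\left[P_{t+1} - P_t \mid P_t, P_{t-1}, \ldots\right] \ge 0

% Simplest random walk: successive changes are i.i.d. with zero mean
P_{t+1} = P_t + \varepsilon_{t+1}, \qquad
\varepsilon_{t+1} \sim \text{i.i.d.}(0, \sigma^2)
```

The random walk's i.i.d. requirement restricts the entire distribution of price changes, whereas the martingale restricts only the conditional mean, which is why the random walk model embodies much stronger assumptions.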
In contrast, other models, such as noisy rational expectations
models, feedback models, disequilibrium models, herding models,
agent-based models, and chaos theory, postulate that price adjusts
sluggishly to new information due to noise, market frictions, market
power, investors' sentiments or herding behavior, or chaos. In these
models, therefore, there exist profitable trading opportunities
that are not being exploited. For example, Brown and Jennings's
noisy rational expectations model assumes that the current price
does not fully reveal private information because of noise (unobserved
current supply of a risky asset) in the current equilibrium price,
so that historical prices (i.e., technical analysis) together with
the current price help traders make more precise inferences about
past and present signals than does the current price alone. As another
example, behavioral finance models posit that noise traders, who
misperceive noise as if it were information (news or fundamental
factors) and irrationally act on their belief or sentiments, bear
a large amount of risk relative to rational investors and thus may
earn higher expected returns. Since noise trader risk (future resale
price risk) limits rational investors' arbitrage even when there
is no fundamental risk, noise traders on average can earn higher
returns than rational investors in the short run, and even in the
long run they can survive and dominate the market (De Long et al.
1990a, 1991). The behavioral models suggest that technical trading
may be profitable in the long run even if technical trading strategies
(buy when prices rise and sell when prices fall) are based on noise
or "popular models" and not on information (Shleifer and
Summers 1990).
Nevertheless, the efficient markets hypothesis still seems to
be a dominant paradigm in the sense that financial economists have
not yet reached a consensus on a better model of price formation.
Over the last two decades, however, the efficient markets paradigm
has been increasingly challenged by a growing number of alternative
theories such as noisy rational expectations models and behavioral
models. Hence, sharp disagreement in theoretical models makes empirical
evidence a key consideration in determining the profitability of
technical trading strategies. Empirical findings regarding technical
analysis are reviewed next.
Empirical Studies
Numerous empirical studies have tested
the profitability of various technical trading systems, and many
of them included implications about market efficiency. In this report,
previous empirical studies are categorized into two groups, "early"
studies and "modern" studies, based on an overall evaluation
of each study in terms of the number of technical trading systems
considered, treatments of transaction costs, risk, data snooping
problems, parameter optimization and out-of-sample verification,
and statistical tests adopted. Most early studies generally examined
one or two trading systems and considered transaction costs to compute
net returns of trading rules. However, risk was not adequately handled,
statistical tests of trading profits and data snooping problems
were often disregarded, and out-of-sample verification along with
parameter optimization were omitted, with a few exceptions. In contrast,
modern studies simulate up to thousands of technical trading rules
with the growing power of computers, incorporate transaction costs
and risk, evaluate out-of-sample performance of optimized trading
rules, and test statistical significance of trading profits with
conventional statistical tests or various bootstrap methods.
Although the boundary between
early and modern studies is blurred, this report regards Lukac,
Brorsen, and Irwin's (1988) work as the first modern study since
it was among the first technical trading studies to substantially
improve upon early studies in many aspects. They considered 12 technical
trading systems, conducted out-of-sample testing for optimized trading
rules with a statistical significance test, and measured performance
of trading rules after adjusting for transaction costs and risk.
Thus, early studies commence with Donchian's (1960) study and include
42 studies through 1987, while modern studies cover the 1988-2004
period with 92 studies.[8]
Figure 1 presents the number of technical
trading studies over several decades. It is noteworthy that during
the last decade academics' interest in technical trading rules has
increased dramatically, particularly in stock markets and foreign
exchange markets. The number of technical trading studies over the
1995-2004 period amounts to about half of all empirical studies
conducted since 1960. In this report, representative studies that
contain unique characteristics of each group are reviewed and discussed.
The report also includes tables that summarize each empirical study
with regard to markets, data frequencies, in- and out-of-sample
periods, trading systems, benchmark strategies, transaction costs,
optimization, and conclusions.
Technical Trading Systems
Before reviewing historical research,
it is useful to first introduce and explicitly define major types
of technical trading systems. A technical trading system comprises
a set of trading rules that can be used to generate trading signals.
In general, a simple trading system has one or two parameters that
determine the timing of trading signals. Each rule contained in
a trading system is the result of a particular parameterization. For example,
the Dual Moving Average Crossover system with two parameters (a
short moving average and a long moving average) may be composed
of hundreds of trading rules that can be generated by altering combinations
of the two parameters. Among technical trading systems, the most
well-known types of systems are moving averages, channels (support
and resistance), momentum oscillators, and filters. These systems
have been widely used by academics, market participants or both,
and, with the exception of filter rules, have been prominently featured
in well-known books on technical analysis, such as Schwager (1996),
Kaufman (1998), and Pring (2002). Filter rules were exhaustively
tested by academics for several decades (the early 1960s through
the early 1990s) before moving average systems gained popularity
in academic research. This section describes representative trading
systems for each major category: Dual Moving Average Crossover,
Outside Price Channel (Support and Resistance), Relative Strength
Index, and Alexander's Filter Rule.
Dual Moving Average Crossover
Moving average based trading systems
are the simplest and most popular trend-following systems among
practitioners (Taylor and Allen 1992; Lui and Mole 1998). According
to Neftci (1991), the (dual) moving average method is one of the
few technical trading procedures that is statistically well defined.
The Dual Moving Average Crossover system generates trading signals
by identifying when the short-term trend rises above or falls below
the long-term trend. Specifications of the system are as follows:
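As an illustration, a minimal Python sketch of a dual moving average crossover rule might look as follows; the window lengths of 5 and 20 days are arbitrary choices for demonstration, not parameters taken from the report.

```python
import numpy as np

def dual_moving_average_signals(prices, short_n=5, long_n=20):
    """Return +1 (long) or -1 (short) at each day once enough history
    exists: long when the short-term moving average is above the
    long-term moving average, short when it is below."""
    prices = np.asarray(prices, dtype=float)
    signals = np.zeros(len(prices), dtype=int)
    for t in range(long_n, len(prices)):
        short_ma = prices[t - short_n + 1 : t + 1].mean()
        long_ma = prices[t - long_n + 1 : t + 1].mean()
        signals[t] = 1 if short_ma > long_ma else -1
    return signals
```

In a steadily rising market the short average stays above the long average, so the rule holds a long position; in a falling market it holds a short position.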
Outside Price Channel
Next to moving averages, price channels
are also extensively used technical trading methods. The price channel
is sometimes referred to as "trading range breakout" or
"support and resistance." The fundamental characteristic
underlying price channel systems is that market movement to a new
high or low suggests a continued trend in the direction established.
Thus, all price channels generate trading signals based on a comparison
between today's price level and price levels of some specified
number of days in the past. The Outside Price Channel system is
analogous to a trading system introduced by Donchian (1960), who
used only the two preceding calendar weeks' ranges as a channel length.
More specifically, this system generates a buy signal anytime the
closing price is outside (greater than) the highest price in a channel
length (specified time interval), and generates a sell signal anytime
the closing price breaks outside (lower than) the lowest price in
the price channel. Specifications of the system are as follows:
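A hedged Python sketch of such a breakout rule follows; the 10-day channel length is an arbitrary illustrative choice, not a parameter from the report.

```python
import numpy as np

def price_channel_signals(prices, channel_n=10):
    """+1 when today's close breaks above the highest close of the
    previous channel_n days; -1 when it breaks below the lowest;
    otherwise the prior position is maintained (0 before any signal)."""
    prices = np.asarray(prices, dtype=float)
    signals = np.zeros(len(prices), dtype=int)
    for t in range(channel_n, len(prices)):
        window = prices[t - channel_n : t]      # the channel
        if prices[t] > window.max():
            signals[t] = 1                      # breakout above
        elif prices[t] < window.min():
            signals[t] = -1                     # breakout below
        else:
            signals[t] = signals[t - 1]         # stay in position
    return signals
```

A new high triggers and sustains a long position, reflecting the premise that movement to a new extreme suggests a continued trend in that direction.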
Relative Strength Index
The Relative Strength Index, introduced
by Wilder (1978), is one of the most well-known momentum oscillator
systems. Momentum oscillator techniques derive their name from the
fact that trading signals are obtained from values which "oscillate"
above and below a neutral point, usually given a zero value. In
a simple form, the momentum oscillator compares today's price with
the price of n days ago. Wilder (1978, p. 63) explains the momentum
oscillator as follows:
The momentum oscillator measures the velocity of directional price
movement. When the price moves up very rapidly, at some point it
is considered to be overbought; when it moves down very rapidly,
at some point it is considered to be oversold. In either case, a
reaction or reversal is imminent.
Momentum values are similar to standard moving averages, in that
they can be regarded as smoothed price movements. However, since
the momentum values generally decrease before a reversal in trend
has taken place, momentum oscillators may identify a change in trend
in advance, while moving averages usually cannot. The Relative Strength
Index was designed to overcome two problems encountered in developing
meaningful momentum oscillators: (1) erroneous erratic movement,
and (2) the need for an objective scale for the amplitude of oscillators.[9]
Specifications of the system are as follows:
Parameters: n, ET.[10]
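The index itself can be computed with Wilder's standard formula, RSI = 100 - 100/(1 + RS), where RS is the ratio of the average gain to the average loss over the last n price changes. The sketch below uses simple n-day averages for illustration; it is not the report's specification, and trading rules built on it would compare the index to an entry threshold (the ET parameter above), conventionally around 70 (overbought) and 30 (oversold).

```python
import numpy as np

def rsi(prices, n=14):
    """Relative Strength Index over the last n price changes:
    RSI = 100 - 100 / (1 + RS), RS = average gain / average loss."""
    prices = np.asarray(prices, dtype=float)
    changes = np.diff(prices)[-n:]
    gains = changes[changes > 0].sum() / n      # average up move
    losses = -changes[changes < 0].sum() / n    # average down move
    if losses == 0:
        return 100.0                            # no losses: maximally overbought
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)
```

The index oscillates between 0 and 100: a series of uninterrupted gains pushes it to 100, while perfectly balanced gains and losses leave it at the neutral value of 50.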
Alexander's Filter Rule
This system was first introduced
by Alexander (1961, 1964) and exhaustively tested by numerous academics
until the early 1990s. Since then, its popularity among academics
has been replaced by moving average methods. This system generates
a buy (sell) signal when today's closing price rises (falls) by
x% above (below) its most recent low (high). Moves less than x%
in either direction are ignored. Thus, all price movements smaller
than a specified size are filtered out and the remaining movements
are examined. Alexander (1961, p. 23) argued that "If stock
price movements were generated by a trendless random walk, these
filters could be expected to yield zero profits, or to vary from
zero profits, both positively and negatively, in a random manner."
Specifications of the system are as follows:
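The filter logic just described can be sketched in Python as follows. This is an illustration of the x% filter idea, not the report's specification: the 5% default and the bookkeeping of the running extremes are assumptions made for the example.

```python
def filter_rule_signals(prices, x=0.05):
    """Alexander's x% filter: buy when price rises x% above its most
    recent low; sell short when it falls x% below its most recent high.
    Moves smaller than x% in either direction are ignored."""
    signals = [0] * len(prices)
    position = 0
    extreme = prices[0]   # running low while flat/short, high while long
    for t, p in enumerate(prices[1:], start=1):
        if position == 1:
            extreme = max(extreme, p)       # track the high while long
            if p <= extreme * (1 - x):      # fell x% from the high
                position = -1
                extreme = p                 # start tracking the low
        else:
            extreme = min(extreme, p)       # track the low while flat/short
            if p >= extreme * (1 + x):      # rose x% above the low
                position = 1
                extreme = p                 # start tracking the high
        signals[t] = position
    return signals
```

For example, with a 5% filter the sequence 100, 96, 107, 108, 100 produces no position until the rise from 96 to 107 (over 5% above the recent low) triggers a buy, and the fall from 108 to 100 (over 5% below the recent high) then triggers a sell.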
These are only four examples of the very large number of technical
trading systems that have been proposed. For other examples, readers
should see Wilder (1978), Barker (1981), or other books on technical
analysis. In addition, the above examples do not cover other forms
of technical analysis such as charting. Most books on technical
analysis explain a broad category of visual chart patterns, and
some recent academic papers (e.g., Chang and Osler 1999; Lo, Mamaysky,
and Wang 2000) have also investigated the forecasting ability of
various chart patterns by developing pattern recognition algorithms.
Early Empirical Studies (1960-1987)
Overview
In most early studies, technical
trading rules are applied to examine price behavior in various speculative
markets, along with standard statistical analyses. Before technical
trading rules came into widespread use for testing market efficiency,
empirical studies had relied on statistical analyses such as
serial correlation, runs analysis, and spectral analysis. However,
these statistical analyses revealed several limitations. As Fama
and Blume (1966) pointed out, the simple linear relationships that
underlay the serial correlation model were not able to detect the
complicated patterns that chartists perceived in market prices.
Runs analysis was too inflexible in that a run was terminated whenever
a reverse sign occurred in the sequence of successive price changes,
regardless of the size of the price change (p. 227). Moreover, it
was difficult to incorporate the elements of risk and transaction
costs into statistical analyses. Fama (1970) argued that "there
are types of nonlinear dependence that imply the existence of profitable
trading systems, and yet do not imply nonzero serial covariances.
Thus, for many reasons it is desirable to directly test the profitability
of various trading rules" (p. 394). As a result, in early studies
technical trading rules are considered as an alternative to avoid
such weaknesses of statistical analyses, and are often used together
with statistical analyses.
To detect the dependence of price changes or to test the profitability
of technical trading rules, early studies used diverse technical
trading systems such as filters, stoploss orders, moving averages,
momentum oscillators, relative strength, and channels. Filter rules
were the most popular trading system. Although many early studies
considered transaction costs to compute net returns of trading rules,
few studies considered risk, conducted parameter optimization and
out-of-sample tests, or performed statistical tests of the significance
of trading profits. Moreover, even after Jensen (1967) highlighted
the danger of data snooping in technical trading research, none
of the early studies except Jensen and Benington (1970) explicitly
dealt with the problem. Technical trading profits were often compared
to one of several benchmarks, such as buy-and-hold returns,
geometric mean returns, or zero mean profits, to derive implications
for market efficiency.
Among the early studies, three
representative studies, Fama and Blume (1966), Stevenson and Bear
(1970), and Sweeney (1986), were selected for in-depth reviews.
These studies had significant effects on later studies. In addition,
these studies contain the aforementioned typical characteristics
of early work, but are also relatively comprehensive compared to
other studies in the same period. Table 1
presents summaries of each early study in terms of various criteria
such as markets studied, data frequencies, sample periods, trading
systems, benchmark strategies, transaction costs, optimization,
and conclusions.
Representative Early Studies
Fama and Blume (1966), in the
best-known and most influential work on technical trading rules
in the early period, exhaustively tested Alexander's filter rules
on daily closing prices of 30 individual securities in the Dow Jones
Industrial Average (DJIA) during the 1956-1962 period. They simulated
24 filters ranging from 0.5% to 50%. Previously, Alexander (1961,
1964) applied his famous filter rules to identify nonlinear patterns
in security prices (S&P Industrials, Dow Jones Industrials).
He found that the small filter rules generated larger gross profits
than the buy-and-hold strategy, and these profits were not likely
to be eliminated by commissions. This led him to conclude that there
were trends in stock market prices. However, Mandelbrot (1963) pointed
out that Alexander's computations of empirical returns included
serious biases that exaggerated filter rule profits. Alexander assumed
that traders could always buy at a price exactly equal to the subsequent
low plus x% and sell at the subsequent high minus x%. Because of
the frequency of large price jumps, however, the purchase would
occur at a price a little higher than the low plus x%, while the sale
would occur at a price somewhat lower than the high minus x%. By accommodating
this criticism, Alexander (1964) retested S&P Industrials using
the closing prices of the confirmation day as transaction prices.
The results indicated that after commissions, only the largest filter
(45.6%) beat the buy-and-hold strategy by a substantial margin.
Fama and Blume also argued that Alexander's (1961, 1964) results
were biased because he did not incorporate dividend payments into
data. In general, adjusting for dividends reduces the profitability
of short sales and thus decreases the profitability of the filter
rules. Thus, Fama and Blume's tests were performed after taking
account of the shortcomings of Alexander's works. Their results
showed that, when commissions (brokerage fees) were taken into account,
only four out of 30 securities had positive average returns per
filter. Even ignoring commissions, the filter rules were inferior
to a simple buy-and-hold strategy for all but two securities. Fama
and Blume split the filter rule returns before commissions into
the returns for long and short transactions, respectively. On short
transactions, only one security had positive average returns per
filter, while on long transactions thirteen securities had higher
average returns per filter than buy-and-hold returns. Hence, they
argued that even long transactions did not consistently outperform
the buyandhold strategy.
Fama and Blume went on to examine average returns of individual
filters across the 30 securities. When commissions were included,
none of the filter rules consistently produced large returns. Although
filters between 12% and 25% produced positive average net returns,
these were not substantial when compared to buy-and-hold returns.
However, when trading positions were broken down into long and short
positions, three small filters (0.5%, 1.0%, and 1.5%) generated
greater average returns on long positions than those on the buy-and-hold
strategy.[11] For example, the 0.5% filter rule generated an average
gross return of 20.9% and an average net return of 12.5% after a 0.1%
clearing house fee per round-trip transaction. The average net return
was about 2.5 percentage points higher than the average return (9.86%)
of the buy-and-hold strategy. Fama and Blume, however, claimed that
the profitable long transactions would not have been better than
a simple buy-and-hold strategy in practice if the idle time of
invested funds, the operating expenses of the filter rules, and the
brokerage fees of specialists had been considered. Hence, Fama and Blume
concluded that for practical purposes the filter technique could not
be used to increase the expected profits of investors.
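To make the mechanics concrete, an x% filter rule of the kind Alexander and Fama and Blume tested can be sketched in Python. This is a hypothetical simplification (positions only; no dividends, commissions, or execution slippage), not the exact procedure used in those studies:

```python
def filter_rule_signals(prices, x):
    """Sketch of an Alexander-style x% filter rule: go long after the
    price rises x% above the most recent trough, go short after it
    falls x% below the most recent peak.
    Returns a list of positions (+1 long, -1 short, 0 flat)."""
    position = 0
    peak = trough = prices[0]
    positions = []
    for p in prices:
        peak = max(peak, p)
        trough = min(trough, p)
        if position <= 0 and p >= trough * (1 + x):
            position = 1
            peak = p          # reset the reference peak after entering long
        elif position >= 0 and p <= peak * (1 - x):
            position = -1
            trough = p        # reset the reference trough after entering short
        positions.append(position)
    return positions
```

Note that signals here use closing prices, in the spirit of Alexander's (1964) correction; his original simulations assumed fills exactly at the trough plus x% or peak minus x%, which Mandelbrot showed overstates profits.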
Stevenson and Bear (1970) conducted a similar study on July corn
and soybean futures from 1957 through 1968. They tested three trading
systems related to the filter technique: stop-loss orders attributed
to Houthakker (1961), filter rules by Alexander and Fama and Blume,
and combinations of both rules. The stop-loss order works as follows:
an investor buys a futures contract at the opening on the first
day of trading and places a stop-loss order x% below the purchase
price. If the order is not executed, the investor holds the contract
until the last possible date prior to delivery. If the order is
executed, no further position is assumed until the opening day of
trading of the next contract. For each system, three filter sizes
(1.5%, 3%, and 5%) were selected and commissions charged were 0.5
cents per bushel for both corn and soybeans. The results indicated
that for soybeans the stop-loss order with a 5% filter outperformed
a buy-and-hold strategy by a large amount, while for corn it greatly
reduced losses relative to the benchmark across all filters. The
pure filter systems appeared to have relatively poor performance.
For corn, all filters generated negative net returns, although 3%
and 5% filters performed better than the buy-and-hold strategy.
For soybeans, the 1.5% and 3% filters were inferior to the buy-and-hold
strategy because they had losses, while the 5% filter rule outperformed
the benchmark with positive net returns. The combination system
was the best performer among the systems. For soybeans, all filters
beat the buy-and-hold strategy, and in particular the 3% and 5% filters
generated large net returns. The 3% and 5% filters also outperformed
the buy-and-hold strategy for corn. On the other hand, the combination
system traded against the market (a counter-trend system) produced nearly
opposite results. Overall, stop-loss orders and combination rules were
profitable in an absolute sense and outperformed the buy-and-hold strategy.
These profits led Stevenson and Bear to cast considerable doubt on the
applicability of the random walk hypothesis to the price behavior of
commodity futures markets.
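The stop-loss mechanics just described can be sketched as a small function. This is a minimal illustration under the simplifying assumption that the exit fills exactly at the stop price; the function name and inputs are hypothetical, not taken from Stevenson and Bear:

```python
def stop_loss_return(open_price, daily_lows, closes, x):
    """Stop-loss rule sketch: buy at the first day's open and place a
    stop x% below the purchase price. If a daily low touches the stop,
    exit there; otherwise hold until the final close.
    Returns the per-unit trade return (commissions ignored)."""
    stop = open_price * (1 - x)
    for low, close in zip(daily_lows, closes):
        if low <= stop:
            return stop / open_price - 1   # stopped out at the stop price
    return closes[-1] / open_price - 1     # order never executed; held to the end
```

In Stevenson and Bear's setup, once the stop is hit no new position is taken until the next contract's opening day.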
Sweeney (1986) carried out comprehensive tests on various foreign
exchange rates by considering risk, transaction costs, post-sample
performance, and statistical tests. Based on the assumption that
the Capital Asset Pricing Model (CAPM) can explain excess returns
to both filter rules and the buyandhold strategy and that risk
premia are constant over time, Sweeney developed a risk-adjusted
performance measure, the so-called X-statistic, in terms of filter
returns in excess of buy-and-hold returns. The X-statistic is defined
as technical trading returns in excess of buy-and-hold returns plus
an adjustment factor that takes account of the different risk premia
of the two trading strategies. Using the X-statistic as a risk-adjusted
performance measure, Sweeney tested daily data on the dollar-German
mark ($/DM) exchange rate from 1975 through 1980, with filters ranging
from 0.5% to 10%. The results indicated that all filters except the
10% filter beat the buy-and-hold strategy and that the X-statistic was
statistically significant for the 0.5% and 1% filters. The results
were largely unchanged even after transaction costs of 0.125% per
round trip were considered, with slight reductions in returns (annual
mean excess returns of 1.6%-3.7% over the buy-and-hold strategy).
Moreover, even when the interest-rate differentials in the X-statistic
were neglected, the results were similar to those of the full
X-statistic. Indeed, this makes filter tests for foreign exchange
rates quite convenient, because daily interest-rate differentials
are hard to collect. As
a result, Sweeney additionally tested 10 foreign currencies over
the 1973-1980 period, without considering the interest-rate differentials.
The time period was divided into two parts, the first 610 days and
the remaining 1,220 days. For the first period, the filter rules
statistically significantly outperformed the buy-and-hold strategy
in 22 out of 70 cases (7 rules for 10 countries). Results for the
second period were similar, indicating 21 significant cases. In
general, smaller filters (0.5% to 3%) showed better performance
than larger filters. Transaction costs affected the results to about
the same degree as in the case of the dollar-DM rate.
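Stated compactly (a sketch of the construction described above, not Sweeney's exact notation): letting $R_F$ be the filter rule's return, $R_{BH}$ the buy-and-hold return, and $f$ the fraction of days the rule is in the market, the adjustment for the rule holding the currency only part of the time gives

$$X = R_F - f\,R_{BH},$$

with $E[X] = 0$ under the null hypothesis that the CAPM holds with constant risk premia.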
In Sweeney's model, the CAPM explains returns to the buy-and-hold
strategy and the filter rules, and implies that expected excess
returns to the filter rule over the buy-and-hold strategy should
be equal to zero. Thus, the significant returns of the filter rules
suggest that the CAPM cannot explain price behavior in foreign exchange
markets. Sweeney concluded that major currency markets indicated
serious signs of inefficiency over the first eight years of the
generalized managed floating beginning in March 1973. However, he
also pointed out that the results could be consistent with the efficient
markets hypothesis if risk premia vary over time. In this case,
the filter rule on average puts investors into the foreign currency
market when the risk premia or the expected returns are larger than
average. Then, positive returns on the filter rule may not be true
profits but just a reflection of higher average risk borne.
Summary of Early Studies
As summarized in Table
1, early empirical studies examined the profitability of technical
trading rules in various markets. The results varied greatly from
market to market as the three representative studies indicated.
For 30 individual stock markets, Fama and Blume (1966) found that
filter rules could not outperform the simple buy-and-hold strategy
after transaction costs. For July corn and soybean futures contracts,
Stevenson and Bear's (1970) results indicated that stop-loss orders
and combinations of filters and stop-loss orders generated
substantial net returns and beat the buy-and-hold strategy. For
10 foreign exchange rates, Sweeney (1986) found that small (long)
filter rules generated statistically significant risk-adjusted net
returns. Overall, in the early studies, very limited evidence of
the profitability of technical trading rules was found in stock
markets (e.g., Fama and Blume 1966; Van Horne and Parker 1967; Jensen
and Benington 1970), while technical trading rules often realized
sizable net profits in futures markets and foreign exchange markets
(e.g., for futures markets, Stevenson and Bear 1970; Irwin and Uhrig
1984; Taylor 1986; for foreign exchange markets, Poole 1967; Cornell
and Dietrich 1978; Sweeney 1986). Thus, stock markets appeared to
be efficient relative to futures markets or foreign exchange markets
during the time periods examined.
Nonetheless, the early studies exhibited several important limitations
in testing procedures. First, most early studies exhaustively tested
one or two popular trading systems, such as the filter or moving
average. This implies that the successful results in the early studies
may be subject to data snooping (or model selection) problems. Jensen
and Benington (1970) argued that "given enough computer time,
we are sure that we can find a mechanical trading rule which works
on a table of random numbers - provided of course that we are allowed
to test the rule on the same table of numbers which we used to discover
the rule. We realize of course that the rule would prove useless
on any other table of random numbers, and this is exactly the issue
with Levy's[12] results" (p. 470). Indeed, Dooley and Shafer (1983)
and Tomek and Querin (1984) demonstrated this point by showing that
when technical trading rules were applied to randomly generated
price series, some of the series could occasionally be profitable
by chance. Moreover, popular trading systems may be ones that have
survivorship biases.[13] Although Jensen (1967) suggested replicating
the successful results on additional bodies of data and for other
time periods to judge the impact of data snooping, none of the early
studies except Jensen and Benington (1970) followed this suggestion.
Second, the riskiness of technical trading rules was often ignored.
If investors are risk averse, they will always consider the risk-return
trade-offs of trading rules in their investments. Thus, large trading
rule returns do not necessarily refute market efficiency since returns
may be improved by taking greater risks. For the same reason, when
comparing trading rule returns with benchmark returns, it
is necessary to make explicit allowance for differences in returns
due to different degrees of risk. Only a few studies (Jensen and
Benington 1970; Cornell and Dietrich 1978; Sweeney 1986) adopted
such a procedure.
Third, most early studies lacked statistical tests of technical
trading profits. Only four studies (James 1968; Peterson and Leuthold
1982; Bird 1985; Sweeney 1986) measured statistical significance
of returns on technical trading rules using Z- or t-tests under
the assumption that trading rule returns are normally distributed.
However, applying conventional statistical tests to trading rule
returns may be invalid since a sequence of trading rule returns
generally does not follow the normal distribution. Taylor (1985)
argued that "the distribution of the return from a filter strategy
under the null hypothesis of an efficient market is not known, so
that proper significance tests are impossible" (p. 727). In
fact, Lukac and Brorsen (1990) found that technical trading returns
were positively skewed and leptokurtic, and thus argued that past
applications of t-tests to technical trading returns might be biased.
Moreover, in the presence of data snooping, significance levels
of conventional hypothesis tests are exaggerated (Lovell 1983; Denton
1985).
Fourth, Taylor (1986, p. 201) argued that "Most published
studies contain a dubious optimization. Traders could not guess
the best filter size (g) in advance and it is unlikely an optimized
filter will be optimal in the future. The correct procedure is,
of course, to split the prices. Then choose g using the first part
and evaluate this g upon the remaining prices." If the optimal
parameter performs well over both in- and out-of-sample data, then the
researcher may have more confidence in the results. Only three studies
(Irwin and Uhrig 1984; Taylor 1983, 1986) used this procedure.
Fifth, technical trading profits were often compared to the performance
of a benchmark strategy to derive implications for market efficiency.
Benchmarks used in early studies were buyandhold returns, geometric
mean returns, interest rates on bank deposits, or zero mean profits.
However, there was no consensus on which benchmark should be used
for a specific market.
Finally, the results of the technical trading studies in the earlier
period seem to be difficult to interpret because the performance
of trading rules was often reported in terms of an "average"
across all trading rules or all assets (i.e., stocks, currencies,
or futures contracts) considered, rather than best-performing rules
or individual securities (or exchange rates or contracts). For example,
in interpreting their results, Fama and Blume (1966) relied on average
returns across all filters for a given stock or across all stocks
for a given filter. If they had evaluated the performance of the best
rules or of each individual stock, their conclusion might have
been different. Sweeney (1988) pointed out that "The averaging
presumably reduces the importance of aberrations where a particular
filter works for a given stock as a statistical fluke. The averaging
can, however, serve to obscure filters that genuinely work for some
but not all stocks" (p. 296).
Modern Empirical Studies (1988-2004)
Overview
As noted previously, "modern"
empirical studies are assumed to commence with Lukac, Brorsen, and
Irwin (1988), who provide a more comprehensive analysis than any
early study. Although modern studies generally have improved upon
the limitations of early studies in their testing procedures, treatment
of transaction costs, risk, parameter optimization, outofsample
tests, statistical tests, and data snooping problems still differ
considerably among them. Thus, this report categorizes all modern
studies into seven groups by reflecting the differences in testing
procedures. Table 2 provides general information
about each group. "Standard" refers to studies that included
parameter optimization and out-of-sample tests, adjustment for transaction
costs and risk, and statistical tests. "Model-based bootstrap"
studies are ones that conducted statistical tests for trading returns
using a modelbased bootstrap approach introduced by Brock, Lakonishok,
and LeBaron (1992). "Genetic programming" and "Reality
Check" indicate studies that attempted to solve data snooping
problems using the genetic programming technique introduced by Koza
(1992) and the Bootstrap Reality Check methodology developed by
White (2000), respectively. "Chart patterns" refers to
studies that developed and applied recognition algorithms for chart
patterns. "Nonlinear" studies are those that applied nonlinear
methods such as artificial neural networks or feedforward regressions
to recognize patterns in prices or estimate the profitability of
technical trading rules. Finally, "Others" indicates studies
that do not belong to any categories mentioned above.
Modern studies, which are summarized
in Tables 3 to 9, include 92 studies dating
from Lukac, Brorsen, and Irwin (1988) through Sapp (2004). As with
the early studies, a representative study from each of the seven
categories is reviewed in detail. They are Lukac, Brorsen, and Irwin
(1988), Brock, Lakonishok, and LeBaron (1992), Allen and Karjalainen
(1999), Sullivan, Timmermann, and White (1999), Chang and Osler
(1999), Gençay (1998a), and Neely (1997).
Representative Modern Studies
Standard Studies
Studies in this category incorporate
transaction costs and risk into testing procedures while considering
various trading systems. Trading rules are optimized in each system
based on a specific performance criterion and out-of-sample tests
are conducted for the optimal trading rules. In particular, the
parameter optimization and out-of-sample tests are significant improvements
over early studies, because these procedures are close to actual
traders' behavior and may partially address data snooping problems
(Jensen 1967; Taylor 1986).
A representative study among the standard studies is Lukac, Brorsen,
and Irwin (1988). Based on the efficient markets hypothesis and
the disequilibrium pricing model suggested by Beja and Goldman (1980),
they proposed three testable hypotheses: the random walk model,
the traditional test of efficient markets, and the Jensen test of
efficient markets. Each test was performed to check whether the
trading systems could produce positive gross returns, returns above
transaction costs, and returns above transaction costs plus returns
to risk. Over the 1975-1984 period, twelve technical trading systems
were simulated on price series from 12 futures markets across commodities,
metals, and financials. The 12 trading systems consisted of channels,
moving averages, momentum oscillators, filters (or trailing stops),
and a combination system, some of which were known to be widely
used by fund managers and traders. The nearby contracts were used
to overcome the discontinuity problem of futures price series. That
is, the current contract is rolled over to the next contract prior
to the first notice date and a new trading signal is generated using
the past data of the new contract. Technical trading was simulated
over the previous three years and parameters generating the largest
profit over the period were used for the next year's trading. At
the end of the next year, new parameters were again optimized, and
so on.[14] Therefore, the optimal parameters were adaptive and the simulation
results were out-of-sample. Two-tailed t-tests were performed to
test the null hypothesis that gross returns generated from technical
trading are zero, while one-tailed t-tests were conducted to test
the statistical significance of net returns after transaction costs.
In addition, Jensen's α was measured by using the capital asset pricing
model (CAPM) to determine whether net returns exist above returns
to risk. Results of normality tests indicated that, for aggregate
monthly returns from all twelve systems, normality was not rejected
and the returns showed negative autocorrelation. Thus, t-tests for
portfolio returns were regarded as an appropriate procedure.
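The adaptive re-optimization scheme described above (optimize over the preceding three years, trade the next year with the winning parameter, then re-optimize) can be sketched in Python. The names (`returns_fn`, `param_grid`) are hypothetical stand-ins for illustration, not the authors' code:

```python
def walk_forward(years, returns_fn, param_grid, window=3):
    """Walk-forward re-optimization sketch: for each trading year,
    pick the parameter value with the largest total profit over the
    preceding `window` years, then record it for out-of-sample use.
    `returns_fn(year, param)` returns the simulated profit of `param`
    in `year`; `param_grid` is the set of candidate parameter values."""
    chosen = {}
    for i in range(window, len(years)):
        in_sample = years[i - window:i]
        best = max(param_grid,
                   key=lambda p: sum(returns_fn(y, p) for y in in_sample))
        chosen[years[i]] = best   # parameter used for the next year's trading
    return chosen
```

Because each year's parameter is fixed before that year's data are seen, the resulting returns are out-of-sample in the sense Lukac, Brorsen, and Irwin describe.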
The results of trading simulations showed that seven of twelve
systems generated statistically significant monthly gross returns.
In particular, four trading systems, the close channel, directional
parabolic, MII price channel, and dual moving average crossover,
yielded statistically significant monthly portfolio net returns
ranging from 1.89% to 2.78% after deducting transaction costs.[15] The
corresponding return of a buy-and-hold strategy was -2.31%. Deutschmark,
sugar, and corn markets appeared to be inefficient because in these
markets significant net returns across various trading systems were
observed. Moreover, estimated results of the CAPM indicated that
the aforementioned four trading systems had statistically significant
intercepts (Jensen's α) and thus implied that trading profits from
the four systems were not a compensation for bearing systematic
risk during the sample period. Thus, Lukac, Brorsen, and Irwin construed
that there might be additional causes of market disequilibrium beyond
transaction costs and risk. They concluded that the disequilibrium
model could be considered a more appropriate model to describe the
price movements in the futures markets for the 1978-1984 period.
Other studies in this category
are summarized in Table 3. Lukac and Brorsen
(1990) used similar procedures to those in Lukac, Brorsen, and Irwin
(1988), but extended the number of systems, commodities, and test
periods. They investigated 30 futures markets with 23 technical
trading systems over the 1975-1986 period. They also used dominant
contracts as in Lukac, Brorsen, and Irwin (1988), but skipped trading
in months in which a more distant contract was consistently dominant
in order to reduce liquidity costs. Parameters were reoptimized
by cumulative methods. That is, in each year optimal parameters
were selected by simulating data from 1975 to the current year.
The parameter producing the largest profit over the period was used
for the next year's trading. They found that aggregate portfolio
returns of the trading systems were normally distributed, but market
level returns were positively skewed and leptokurtic. Thus, they
argued that past research that used t-tests on individual commodity
returns might be biased. The results indicated that 7 out of 23
trading systems generated monthly net returns above zero at a 10
percent significance level after transaction costs were taken into
account. However, most of the profits from the technical trading
rules appeared to be made during the 1979-1980 period. Among the individual
futures markets, exchange rate futures earned the highest returns, while
livestock futures had the lowest returns.
Most studies in this category, with a few exceptions, investigated
foreign exchange markets. Taylor and Tari (1989), Taylor (1992,
1994), Silber (1994), and Szakmary and Mathur (1997) all showed
that technical trading rules could yield annual net returns of 2%-10%[16]
for major currency futures markets from the late 1970s to the early
1990s. Similarly, Menkhoff and Schlumberger (1995), Lee and Mathur
(1996a, 1996b), Maillet and Michel (2000), Lee, Gleason, and Mathur
(2001), Lee, Pan, and Liu (2001), and Martin (2001) found that technical
trading rules were profitable for some spot currencies in each sample
period they considered. However, technical trading profits in currency
markets seem to gradually decrease over time. For example, Olson
(2004) reported that risk-adjusted profits of moving average crossover
rules for an 18currency portfolio declined from over 3% between
the late 1970s and early 1980s to about zero percent in the late
1990s. Kidd and Brorsen (2004) provide some evidence that the reduction
in returns to managed futures funds in the 1990s, which predominantly
use technical analysis, may have been caused by structural changes
in markets, such as a decrease in price volatility and an increase
in large price changes occurring while markets are closed. For the
stock market, Taylor (2000) investigated a wide variety of US and
UK stock indices and individual stock prices, finding an average
break-even one-way transaction cost of 0.35% across all data series.
In particular, for the DJIA index, an optimal trading rule (a 5/200
moving average rule) estimated over the 1897-1968 period produced
a break-even one-way transaction cost of 1.07% during the 1968-1988
period. Overall, standard studies indicate that technical trading
rules generated statistically significant economic profits in various
speculative markets, especially in foreign exchange markets and
futures markets. Despite the successful results of standard studies,
there still exists a possibility that they were spurious because
of data snooping problems. Although standard studies optimized trading
rules and traced the outofsample performance of the optimal trading
rules, a researcher can obtain a successful result by deliberately
searching for profitable choice variables, such as profitable "families"
of trading systems, markets, in-sample estimation periods, out-of-sample
periods, and trading model assumptions including performance criteria
and transaction costs.
Model-based Bootstrap Studies
Studies in this category apply
a model-based bootstrap methodology to test the statistical significance
of trading profits. Although some other recent studies of technical
analysis use the bootstrap procedure, model-based bootstrap studies
differ from other studies in that they usually analyzed the same
trading rules (the moving average and the trading range breakout)
that Brock, Lakonishok, and LeBaron investigated, without conducting
trading rule optimization and outofsample verification. Among
modern studies, one of the most influential works on technical trading
rules is therefore Brock, Lakonishok, and LeBaron (1992). The reason
appears to be their use of a very long price history and, for the
first time, model-based bootstrap methods for making statistical
inferences about technical trading profits. Brock, Lakonishok, and
LeBaron recognized data snooping biases in technical trading studies
and attempted to mitigate the problems by (1) selecting technical
trading rules that had been popular over a very long time; (2) reporting
results from all their trading strategies; (3) utilizing a very
long data series; and (4) emphasizing the robustness of results
across various non-overlapping subperiods for statistical inference
(p. 1734).
According to Brock, Lakonishok, and LeBaron, there are several
advantages of using the bootstrap methodology. First, the bootstrap
procedure makes it possible to perform a joint test of significance
for different trading rules by constructing bootstrap distributions.
Second, the traditional t-test assumes normal, stationary, and time-independent
distributions of data series. However, it is well known that the
return distributions of financial assets are generally leptokurtic,
autocorrelated, conditionally heteroskedastic, and time varying.
Since the bootstrap procedure can accommodate these characteristics
of the data using distributions generated from a simulated null
model, it can provide more powerful inference than the t-test. Third,
the bootstrap method also allows estimation of confidence intervals
for the standard deviations of technical trading returns. Thus,
the riskiness of trading rules can be examined more rigorously.
The basic approach in a bootstrap procedure is to compare returns
conditional on buy (or sell) signals from the original series to
conditional returns from simulated comparison series generated by
widely used models for stock prices. The popular models used by
Brock, Lakonishok, and LeBaron were a random walk with drift, an
autoregressive process of order one (AR(1)), a generalized autoregressive
conditional heteroskedasticity in-mean model (GARCH-M), and an exponential
GARCH (EGARCH). The random walk model with drift was simulated by
taking returns (logarithmic price changes) from the original series
and then randomly resampling them with replacement. In the other models
(AR(1), GARCH-M, EGARCH), parameters and residuals were estimated
using OLS or maximum likelihood, and then the residuals were randomly
resampled with replacement. The resampled residuals coupled with
the estimated parameters were then used to generate a simulated
return series. By constraining the starting price level of the simulated
return series to be exactly the same as its value in the original series,
the simulated return series could be transformed into price levels.
In this manner, 500 bootstrap samples were generated for each null
model, and each technical trading rule was applied to each of the
500 bootstrap samples. From these calculations, the empirical distribution
for trading returns under each null model was estimated. The null
hypothesis was rejected at the α percent level if trading returns
from the original series were greater than the α percent cutoff level
of the simulated trading returns under the null model.
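For the simplest null model, the random walk with drift, the bootstrap procedure above amounts to resampling the original returns with replacement and recomputing the trading-rule statistic on each pseudo-series. The following is a minimal sketch of that one case (the function name and interface are illustrative, not Brock, Lakonishok, and LeBaron's code, and it omits the AR(1)/GARCH variants):

```python
import random

def bootstrap_pvalue(returns, statistic, n_boot=500, seed=0):
    """Model-based bootstrap p-value sketch under a random-walk null:
    resample the observed returns with replacement `n_boot` times,
    apply the trading-rule `statistic` to each pseudo-series, and
    report the fraction of simulated values exceeding the observed one."""
    rng = random.Random(seed)
    observed = statistic(returns)
    count = 0
    for _ in range(n_boot):
        # one bootstrap pseudo-series of the same length as the original
        sample = [rng.choice(returns) for _ in returns]
        if statistic(sample) > observed:
            count += 1
    return count / n_boot
```

Brock, Lakonishok, and LeBaron used 500 such samples per null model; a small p-value means the observed rule returns are unlikely under that null.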
Brock, Lakonishok, and LeBaron tested two simple technical trading
systems, a moving average-oscillator and a trading range breakout
(resistance and support levels), on the Dow Jones Industrial Average
(DJIA) from 1897 through 1986. In moving average rules, buy and
sell signals are generated by two moving averages: a short-period
average and a long-period average. More specifically, a buy (sell)
position is taken when the short-period average rises above (falls
below) the long-period average. Five popular combinations of moving
averages (1/50, 1/150, 5/150, 1/200, and 2/200, where the first
figure represents the short period and the second the long period)
were selected with and without a 1% band, and these
rules were tested with and without a 10-day holding period for a
position. A band around the moving average is designed to eliminate
"whipsaws" that occur when the short and long moving averages
move close to each other. In general, introducing a band reduces the number
of trades and therefore transaction costs. Moving average rules
were divided into two groups depending on the presence of the 10-day
holding period: variable-length moving average (VMA) and fixed-length
moving average (FMA). FMA rules have a fixed 10-day holding period
after a crossing of the two moving averages, while VMA rules do
not. Trading range breakout (TRB) rules generate a buy (sell) signal
when the current price penetrates a resistance (support) level,
which is a local maximum (minimum) price. The local maximums and
minimums were computed over the past 50, 150, and 200 days, and
each rule was tested with and without a 1% band. With a 1% band,
trading signals were generated when the price level moved above
(below) the local maximum (minimum) by 1%. For trading range breakout
rules, 10day holding period returns following trading signals were
computed. Transaction costs were not taken into account.
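The VMA signal logic described above can be sketched as follows. This is our own simplified implementation for illustration, not the authors' code; the function names and defaults are assumptions.

```python
import numpy as np

def vma_signals(prices, short=1, long=50, band=0.01):
    """Variable-length moving average (VMA) signals: long (+1) when the
    short MA exceeds the long MA by more than the band, short (-1) when
    it falls below by more than the band, and neutral (0) inside the
    band (the band suppresses whipsaw trades)."""
    prices = np.asarray(prices, dtype=float)

    def ma(n):
        # Trailing n-day moving average, padded with NaN before day n
        c = np.convolve(prices, np.ones(n) / n, mode="valid")
        return np.concatenate([np.full(n - 1, np.nan), c])

    s, l = ma(short), ma(long)
    sig = np.zeros(len(prices))
    sig[s > l * (1 + band)] = 1    # buy: short MA above long MA + band
    sig[s < l * (1 - band)] = -1   # sell: short MA below long MA - band
    return sig
```

An FMA variant would instead freeze the signal for 10 days after each crossing, and a TRB rule would compare the current price to trailing maxima and minima rather than to moving averages.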
Results for the VMA rules indicated that buy returns were all
positive with an average daily return of 0.042% (about 12% per year),
while sell returns were all negative with an average daily return
of -0.025% (about -7% per year). For buy returns, six of the ten
rules rejected the null hypothesis that the returns equal the unconditional
return (0.017% daily) at the 5% significance level using two-tailed
t-tests, and the other four rules were marginally significant. For
sell returns, t-statistics were all highly significant. All the
buy-sell spreads were positive with an average of 0.067%, and the
t-statistics for these differences were highly significant, rejecting
the null hypothesis of equality with zero. The 1% band increased
the spread in every case. For the FMA rules, all buy returns were
greater than the unconditional 10-day return, with an average of
0.53%. Sell returns were all negative with an average of -0.40%.
The buy-sell differences were positive for all trading rules with
an average of 0.93%, and seven of the ten rules rejected the null
hypothesis that the difference equals zero at the 5% significance
level. For the trading range breakout rules, buy returns were positive
across all the rules with an average of 0.63%, while sell returns
were all negative with an average of -0.24%. The average buy-sell
return was 0.86%, and all six rules rejected the null hypothesis
that the buy-sell spread equals zero.
The bootstrap results showed that none of the null models could
explain the differences between the buy and sell returns generated
by the technical trading rules. For example, among the null models
the GARCH-M generated the largest buy-sell spread (0.018%) for the
VMA rules, but the spread was still smaller than that (0.067%) from
the original Dow series. Similar results were obtained for the FMA
and TRB rules. Standard deviations for buys and sells from the original
Dow series were 0.89% and 1.34%, respectively; thus the market was
less volatile during buy periods than during sell periods. Since
the buy signals also earned higher mean returns than the sell signals,
these results could not be explained by the risk-return tradeoff.
Brock, Lakonishok, and LeBaron concluded their study by writing,
"the returns-generating process of stocks is probably more complicated
than suggested by the various studies using linear models. It is
quite possible that technical rules pick up some of the hidden patterns"
(p. 1758).
Despite its contribution to statistical testing in the technical
trading literature, Brock, Lakonishok, and LeBaron's study has several
shortcomings in its testing procedures. First, only gross returns
of each trading rule were calculated, without incorporating transaction
costs, so that no evidence about economic profits was presented.
Second, trading rule optimization and out-of-sample tests were not
conducted. As discussed in the previous section, these procedures
may be important ingredients in determining the genuine profitability
of technical trading rules. Finally, results may have been "contaminated"
by data snooping. Since moving average and trading range breakout
rules have remained popular over a very long history, they are likely
subject to survivorship bias. If a large number of trading rules
are tested over time, some rules may work by pure chance even though
they do not possess real predictive power for returns. Of course,
inference based on the subset of surviving trading rules may be
misleading because it does not account for the full set of initial
trading rules (Sullivan, Timmermann, and White 1999, p. 1649).[17]
Table 4 presents summaries of other model-based bootstrap studies.
As indicated in the table, a number of studies in this category
either tested the same trading rules as in Brock, Lakonishok, and
LeBaron (1992) or followed their testing procedures. For example,
Levich and Thomas (1993) tested two popular technical trading systems,
filter rules and moving average crossover systems, on five currency
futures markets (the Deutsche mark, Japanese yen, British pound,
Canadian dollar, and Swiss franc) during the period 1976-1990. To
measure the significance level of profits obtained from the trading
rules, they constructed the empirical distribution of trading rule
profits by randomly resampling price changes in the original series
10,000 times and then applying the trading rules to each simulated
series. They found that, across trading rules from both trading
systems, average profits for all currencies except the Canadian
dollar were substantial (about 6% to 9%) and statistically significant,
even after deducting transaction costs of 0.04% per one-way transaction.
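This resampling scheme differs from the model-based bootstrap above: rather than fitting a null model, it shuffles the observed price changes to destroy any temporal dependence. A minimal sketch in the spirit of Levich and Thomas (our own illustrative code; the rule-profit function is a placeholder supplied by the caller):

```python
import numpy as np

def shuffle_test(price_changes, rule_profit, n_sim=1000, seed=0):
    """Permutation-style significance test: shuffle the original price
    changes, rebuild a price path from each shuffle, and compare the
    rule's profit on the actual series with its simulated distribution.
    Returns the observed profit and the share of shuffled series that
    do at least as well."""
    rng = np.random.default_rng(seed)
    observed = rule_profit(np.cumsum(price_changes))
    sims = np.array([rule_profit(np.cumsum(rng.permutation(price_changes)))
                     for _ in range(n_sim)])
    return observed, float(np.mean(sims >= observed))
```

Because each shuffled series has exactly the same distribution of price changes as the original, any excess of the observed profit over the simulated distribution is attributable to the ordering of the data, i.e., to serial dependence the rule exploits.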
Bessembinder and Chan (1998) evaluated the same 26 technical trading
rules as in Brock, Lakonishok, and LeBaron (1992) on dividend-adjusted
DJIA data over the period 1926-1991. As Fama and Blume (1966) pointed
out, incorporating dividend payments into the data tends to reduce
the profitability of short sales and thus may decrease the profitability
of technical trading rules. Bessembinder and Chan also argued that
"Brock et al. do not report any statistical tests that pertain
to the full set of rules. Focusing on those rules that are ex post
most (or least) successful would also amount to a form of data snooping
bias" (p. 8). This led them to evaluate the profitability and
statistical significance of returns on portfolios of the trading
rules as well as returns on individual trading rules. For the full
sample period, the average buy-sell differential across all 26 trading
rules was 4.4% per year (an average break-even one-way transaction
cost[18] of 0.39%) with a bootstrap p-value of zero. Allowing for
nonsynchronous trading with a one-day lag reduced the differential
to 3.2% (a break-even one-way transaction cost of 0.29%) with a
significant bootstrap p-value of 0.002. However, the average break-even
one-way transaction cost declined over time; for the most recent
subsample period (1976-1991) it was 0.22%, compared to estimated
one-way transaction costs of 0.24%-0.26%.[19] Hence, Bessembinder and
Chan concluded that, although the technical trading rules used by
Brock, Lakonishok, and LeBaron revealed some forecasting ability,
it was unlikely that traders could have used them to improve returns
net of transaction costs.
The results of the model-based bootstrap studies varied enormously
across the markets and sample periods tested. In general, for (spot
or futures) stock indices in emerging markets, technical trading
rules were profitable even after transaction costs (Bessembinder
and Chan 1995; Raj and Thurston 1996; Ito 1999; Ratner and Leal
1999; Coutts and Cheung 2000; Gunasekarage and Power 2001), while
technical trading profits on stock indices in developed markets
were negligible after transaction costs or have decreased over time
(Hudson, Dempsey, and Keasey 1996; Mills 1997; Bessembinder and
Chan 1998; Ito 1999; Day and Wang 2002). For example, Ratner and
Leal (1999) documented that Brock, Lakonishok, and LeBaron's moving
average rules generated statistically significant net returns in
four equity markets (Mexico, Taiwan, Thailand, and the Philippines)
over the 1982-1995 period. For the FT30 index on the London Stock
Exchange, Mills (1997) showed that mean daily returns produced by
moving average rules were much higher (0.081% and 0.097%) than buy-and-hold
returns for the 1935-1954 and 1955-1974 periods, respectively, although
the returns were insignificantly different from the buy-and-hold
return for the 1975-1994 period. On the other hand, LeBaron (1999),
Neely (2002), and Saacke (2002) reported the profitability of moving
average rules in currency markets. For example, LeBaron (1999) found
that for the mark and yen, a 150-day moving average rule generated
Sharpe ratios of 0.60-0.98 after a transaction cost of 0.1% per
round-trip over the 1979-1992 period. These Sharpe ratios were much
greater than those (0.3-0.4) for buy-and-hold strategies on aggregate
US stock portfolios. However, Kho (1996) and Sapp (2004) showed
that trading rule profits in currency markets could be explained
by time-varying risk premia using some version of the conditional
CAPM. In addition, there has been serious disagreement about the
source of technical trading profits in the foreign exchange market.
LeBaron (1999) and Sapp (2004) reported that technical trading returns
were greatly reduced after active intervention periods of the Federal
Reserve were eliminated, while Neely (2002) and Saacke (2002) showed
that trading returns were uncorrelated with the foreign exchange
interventions of central banks. Most studies in this category have
problems similar to those in Brock, Lakonishok, and LeBaron (1992):
trading rule optimization, out-of-sample verification, and data
snooping were not seriously considered, although several recent
studies incorporated parameter optimization and transaction costs
into their testing procedures.
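Several of the comparisons above (e.g., LeBaron's 0.60-0.98 versus 0.3-0.4 for buy-and-hold) rest on the annualized Sharpe ratio. As a reminder, for daily excess returns it is conventionally computed as follows (a standard formula, not code from any of the studies; the 252-day convention is an assumption):

```python
import numpy as np

def annualized_sharpe(daily_excess_returns, periods=252):
    """Annualized Sharpe ratio: mean daily excess return over its
    standard deviation, scaled by the square root of the number of
    trading days per year."""
    r = np.asarray(daily_excess_returns, dtype=float)
    return float(np.sqrt(periods) * r.mean() / r.std(ddof=1))
```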
Genetic Programming Studies
Genetic programming, introduced
by Koza (1992), is a computer-intensive search procedure based on
the Darwinian principle of survival of the fittest. In this procedure,
a computer randomly generates a set of potential solutions for a
specific problem and then allows them to evolve over many successive
generations under a given fitness (performance) criterion. Solution
candidates (e.g., technical trading rules) that satisfy the fitness
criterion are likely to reproduce, while ones that fail to meet
the criterion are likely to be replaced. The solution candidates
are represented as hierarchical compositions of functions in tree
structures, in which the successors of each node provide the arguments
for the function identified with the node. The terminal nodes without
successors hold the input data, and the entire tree structure is
evaluated as a function in a recursive manner starting from the
root node. The structure of the solution candidates, which is not
prespecified, can be regarded as a set of building blocks to be
recombined by genetic programming. When applied to technical trading
rules, the building blocks consist of various functions of past
prices, numerical and logical constants, and logical functions that
construct more complicated building blocks by combining simple ones.
The function set can be divided into two groups: real-valued and
Boolean. The real-valued functions are arithmetic operators (plus,
minus, times, divide), average, maximum, minimum, lag, norm, and
so on, while the Boolean functions include logical operators (and,
or, not, if-then, if-then-else) and comparisons (greater than, less
than). There are also real constants and Boolean constants (true
or false). These functions ensure that the trading rules generated
are well defined.
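The tree representation can be made concrete with a toy evaluator. The node set below is our own minimal illustration; actual genetic programming systems use the much richer function set described above.

```python
# Each node is a tuple (function_name, children...). Leaves are
# ("price", lag) or ("const", value); evaluation recurses from the root.
def evaluate(node, prices, t):
    op = node[0]
    if op == "price":           # leaf: price lagged by node[1] days
        return prices[t - node[1]]
    if op == "const":           # leaf: numeric or Boolean constant
        return node[1]
    if op == "avg":             # moving average over the last n days
        n = int(evaluate(node[1], prices, t))
        window = prices[t - n + 1 : t + 1]
        return sum(window) / len(window)
    if op == ">":               # comparison: a Boolean building block
        return evaluate(node[1], prices, t) > evaluate(node[2], prices, t)
    if op == "and":
        return evaluate(node[1], prices, t) and evaluate(node[2], prices, t)
    raise ValueError(f"unknown node type: {op}")

# Example rule tree: "today's price is above its 3-day moving average"
rule = (">", ("price", 0), ("avg", ("const", 3)))
```

Crossover in genetic programming swaps randomly chosen subtrees between two such parent trees, which is why the evolved rules can grow structures far more complex than hand-built trading rules.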
The aforementioned features of genetic programming may provide some
advantages over traditional studies in testing technical trading
rules. Traditional technical trading studies investigate a predetermined
parameter space of trading systems, whereas the genetic programming
approach examines a search space composed of logical combinations
of trading systems or rules. Thus, the fittest (optimized) rule
identified by genetic programming can be regarded as an ex ante
rule in the sense that its parameters are not determined by the
researcher before the test. Since the procedure allows researchers
to avoid much of the arbitrariness involved in selecting parameters,
it can substantially reduce the risk of data snooping bias. Of course,
it cannot completely eliminate all potential bias, because in practice
its search domain (i.e., the set of trading systems) is still constrained
to some degree (Neely, Weller, and Dittmar 1997).
Allen and Karjalainen (1999) applied the genetic programming approach
to the daily S&P 500 index from 1928 through 1995 to test the
profitability of technical trading rules. They built the following
algorithm to find the fittest trading rules (p. 256):
Step 1. Create a random rule. Compute the fitness of the rule
as the excess return in the training period above the buy-and-hold
strategy. Do this 500 times (this is the initial population).
Step 2. Apply the fittest rule in the population to the selection
period and compute the excess return. Save this rule as the initial
best rule.
Step 3. Pick two parent rules at random, using a probability distribution
skewed towards the best rule. Create a new rule by breaking the
parents apart randomly and recombining the pieces (this is a crossover).
Compute the fitness of the new rule in the training period. And
then replace one of the old rules by the new rule, using a probability
distribution skewed towards the worst rule. Do this 500 times to
create a new generation.
Step 4. Apply the fittest (best) rule in the new generation to
the selection period and compute the excess return. If the excess
return improves upon the previous best rule, save as the new best
rule. Stop if there is no improvement for 25 generations or after
a total of 50 generations. Otherwise, go back to Step 3.
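The four steps can be sketched as a loop. This is heavily simplified for illustration: rules here are (short, long) moving average parameter pairs rather than full rule trees, the separate selection period is collapsed into the training fitness, and all names and defaults are our own assumptions.

```python
import random

def evolve(fitness, n_pop=50, n_gen=20, patience=5, seed=0):
    """Sketch of Allen and Karjalainen's evolutionary loop. 'fitness'
    scores a rule on the training data; parent selection is skewed
    toward fit rules and replacement toward unfit incumbents."""
    rng = random.Random(seed)
    # Step 1: random initial population of (short, long) parameter pairs
    pop = [(rng.randint(1, 10), rng.randint(11, 200)) for _ in range(n_pop)]
    best = max(pop, key=fitness)          # Step 2: initial best rule
    stale = 0
    for _ in range(n_gen):
        ranked = sorted(pop, key=fitness, reverse=True)
        for _ in range(n_pop):
            # Step 3: parents drawn with probability skewed toward fitter rules
            p1, p2 = rng.choices(ranked, weights=range(n_pop, 0, -1), k=2)
            child = (p1[0], p2[1])        # crossover: recombine components
            # Replacement skewed toward the least fit incumbents
            victim = rng.choices(range(n_pop), weights=range(1, n_pop + 1))[0]
            ranked[victim] = child
        pop = ranked
        # Step 4: keep the best rule seen; stop when improvement stalls
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best, stale = gen_best, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best
```

In the actual study each such trial yields one "best" rule, and 100 trials are run so that the average out-of-sample performance of the saved rules can be evaluated.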
This procedure describes one trial, and each trial, starting from
a different random population, generates one best rule. The best
rule is then tested in the validation (out-of-sample) period immediately
following the selection period. If no rule better than the buy-and-hold
strategy in the training period is produced within the maximum number
of generations, the trial is discarded. In Allen and Karjalainen's
study, the size of the genetic structures was bounded to 100 nodes
and a maximum of ten levels of nodes. The search space of building
blocks was also constrained to logical combinations of simple rules,
namely moving averages and maxima and minima of past prices. To
identify optimal trading rules, 100 independent trials were conducted,
saving one rule from each trial. The fitness criterion was maximum
excess return over the buy-and-hold strategy after taking account
of transaction costs. The excess returns were calculated only on
buy positions with several one-way transaction costs (0.1%, 0.25%,
and 0.5%). To avoid potential data snooping in the selection of
time periods, ten successive training periods were employed. The
5-year training and 2-year selection periods began in 1929 and were
repeated every five years until 1974, with each out-of-sample test
beginning in 1936, 1941, and so on, up to 1981. For example, the
first training period was 1929-1933, the selection period 1934-1935,
and the test period 1936-1995. For each of the ten training periods,
ten trials were executed. The out-of-sample results indicated that
trading rules optimized by genetic programming failed to generate
consistent excess returns after transaction costs. With the most
reasonable transaction costs of 0.25%, average excess returns were
negative for nine of the ten periods; even with transaction costs
of 0.1%, the average excess returns were negative for six of the
ten periods. For most test periods, only a few trading rules indicated
positive excess returns. However, in most of the training periods,
the optimized trading rules showed some forecasting ability, because
the difference between average daily returns during days in the
market and days out of the market was positive, and volatility during
'in' days was generally lower than during 'out' days. Allen and
Karjalainen tried to explain the volatility results by the negative
relationship between ex post stock market returns and unexpected
changes in volatility. For example, when volatility is higher than
expected, investors revise their volatility forecasts upwards, requiring
higher expected returns in the future, and hence lower stock prices
and lower realized returns at present. It is interesting that these
results are analogous to Brock, Lakonishok, and LeBaron's (1992)
findings.
The structure of the optimal trading rules identified by genetic
programming varied across trials and transaction cost levels. For
instance, with 0.25% transaction costs most of the optimal rules
were similar to a 250-day moving average rule, while with 0.1% transaction
costs approximately half of the rules resembled a rule comparing
the normalized price to a constant, and the rest were similar to
either 10- to 40-day moving average rules or a trading range breakout
rule comparing today's price to a 3-day minimum price. However,
the optimal trading rules in several training periods were too complex
to be matched with simple technical trading rules. Overall, throughout
the out-of-sample simulations, the genetically optimized trading
rules did not realize excess returns over a simple buy-and-hold
strategy after transaction costs. Hence, Allen and Karjalainen concluded
that their results were generally consistent with market efficiency.
Table 5 presents summaries of other genetic programming studies.
Using procedures similar to those in Allen and Karjalainen (1999),
Neely, Weller, and Dittmar (1997) investigated six foreign exchange
rates (mark, yen, pound, Swiss franc, mark/yen, and pound/Swiss
franc) over the 1974-1995 period. For all exchange rates, they used
1975-1977 as the training period, 1978-1980 as the selection period,
and 1981-1995 as the validation period. They set transaction costs
of 0.1% per round-trip in the training and selection periods, and
0.05% in the validation period. Results indicated that average annual
net returns from each portfolio of 100 optimal trading rules for
each exchange rate ranged from 1.0% to 6.0%. Trading rules for all
currencies earned statistically significant positive net returns
that exceeded the buy-and-hold returns. In addition, when returns
were measured using a median portfolio rule, in which a long position
was taken if more than 50 rules signaled long and a short position
was taken otherwise, net returns for the dollar/mark, dollar/yen,
and mark/yen were substantially increased. Similar results were
obtained for the Sharpe ratio criterion. However, in many cases
the optimal trading rules were too complex for their structures
to be interpreted simply. The trading rule profits did not seem
to be compensation for bearing systematic risk, since most of the
betas estimated against four benchmarks (the Morgan Stanley Capital
International (MSCI) world equity market index, the S&P 500,
the Commerzbank index of German equity, and the Nikkei) were negative.
In only one case (dollar/yen on the MSCI world index) was beta significantly
positive, with a value of 0.17. To determine whether the performance
of the trading rules could be explained by a given model for the
data-generating process, Brock, Lakonishok, and LeBaron's bootstrap
procedures were used with three null models (a random walk, ARMA,
and ARMA-GARCH(1,1)). The best-performing ARMA model could explain
only about 11% of the net returns to the dollar/mark rate yielded
by 10 representative trading rules.
Ready (2002) compared the performance of technical trading rules
developed by genetic programming to that of the moving average rules
examined by Brock, Lakonishok, and LeBaron (1992) on dividend-adjusted
DJIA data. Brock, Lakonishok, and LeBaron's best trading rule (the
1/150 moving average without a band) for the 1963-1986 period generated
substantially higher excess returns after transaction costs than
the average of the trading rules formed by genetic programming.
For the 1957-1962 period, however, the moving average rule underperformed
every one of the genetic trading rules. Thus, it seemed unlikely
that Brock, Lakonishok, and LeBaron's moving average rules would
have been chosen by a hypothetical trader at the end of 1962. This
led Ready to conclude that "the apparent success (after transaction
costs) of the Brock, Lakonishok, and LeBaron (1992) moving average
rules is a spurious result of data snooping" (p. 43). He further
found that the genetic trading rules performed poorly in each out-of-sample
period, i.e., 1963-1986 and 1987-2000.
Similarly, Wang (2000) and Neely (2003) reported that genetically
optimized trading rules failed to outperform the buy-and-hold strategy
in both S&P 500 spot and futures markets. For example, Neely
(2003) showed that genetic trading rules generated negative mean
excess returns over the buy-and-hold strategy during the entire
out-of-sample period, 1936-1995. On the other hand, Neely and Weller
(1999, 2001) documented the profitability of genetic trading rules
in various foreign exchange markets, although trading profits appeared
to decline gradually over time. Neely and Weller's (2001) findings
indicated that technical trading profits for four major currencies
were 1.7%-8.3% per year over the 1981-1992 period, but near zero
or negative, except for the yen, over the 1993-1998 period. Testing
intraday data for 1996, Neely and Weller (2003) also found that
genetic trading rules realized break-even transaction costs of less
than 0.02% for most major currencies under realistic trading hours
and transaction costs. Roberts (2003) documented that during the
1978-1998 period genetic trading rules generated a statistically
significant mean net return (a daily mean profit of $1.07 per contract)
in comparison to a buy-and-hold return of -$3.30 in the wheat futures
market. For the corn and soybean futures markets, however, genetic
trading rules produced both negative mean returns and negative ratios
of profit to maximum drawdown. In sum, technical trading rules formulated
by genetic programming appeared to be unprofitable in stock markets,
particularly in recent periods. In contrast, genetic trading rules
performed well in foreign exchange markets, although their performance
decreased over time. In grain futures markets, the results were
mixed.
The genetic programming approach may avoid data snooping problems
caused by ex post selection of technical trading rules in the sense
that the rules are chosen using price data available before the
beginning of the test period, so that all results are out-of-sample.
However, the results of genetic programming studies may be confronted
with a similar problem. That is, "it would be inappropriate
to use a computer intensive genetic algorithm to uncover evidence
of predictability before the algorithm or computer was available"
(Cooper and Gulen 2003, p. 9). In addition, it is questionable whether
trading rules formed by genetic programming have been used by real
traders. A genetically trained trading rule is a "fit solution"
rather than a "best solution" because it depends on the
evolution of the initially chosen random rules. Thus, numerous "fit"
trading rules may be identified on the same in-sample data. For
this reason, most researchers using the genetic programming technique
have evaluated the "average" performance of 10 to 100
genetic trading rules. More importantly, trading rules formulated
by a genetic program generally have a more complex structure than
typical technical trading rules used by technical analysts. This
implies that the rules identified by genetic programming may not
approximate real technical trading rules applied in practice. Hence,
studies applying genetic programming to sample periods before its
discovery violate the first two conditions suggested by Timmermann
and Granger (2004), which indicate that forecasting experiments
need to specify: (1) the set of forecasting models available at
any given point in time, including estimation methods; and (2) the
search technology used to select the best (or a combination of the
best) forecasting model(s).
Reality Check Studies
According to White (2000), "Data
snooping occurs when a given set of data is used more than once
for purposes of inference or model selection" (p. 1097). He
argued that when such data reuse occurs, any satisfactory results
obtained may simply be due to chance rather than to any merit inherent
in the method yielding the results. Lo and MacKinlay (1990) similarly
argued that "the more scrutiny a collection of data is subjected
to, the more likely will interesting (spurious) patterns emerge"
(p. 432). Indeed, in empirical studies of prediction, when there
is little theoretical guidance regarding the proper selection of
choice variables such as explanatory variables, assets, in-sample
estimation periods, and others, researchers may select the choice
variables "in either (1) an ad-hoc fashion, (2) to make the
out-of-sample forecast work, or (3) by conditioning on the collective
knowledge built up to that point (which may emanate from (1) and/or
(2)), or some combination of the three" (Cooper and Gulen 2003,
p. 3). Such data snooping practices inevitably overstate the significance
levels (e.g., t-statistics or R²) of conventional hypothesis tests
(Lovell 1983; Denton 1985; Lo and MacKinlay 1990; Sullivan, Timmermann,
and White 1999; Cooper and Gulen 2003).
In the literature on technical trading strategies, a fairly blatant
form of data snooping is an ex post and "in-sample" search
for profitable trading rules. Jensen (1967) argued that "if
we begin to test various mechanical trading rules on the data we
can be virtually certain that if we try enough rules with enough
variants we will eventually find one or more which would have yielded
profits (even adjusted for any risk differentials) superior to a
buy-and-hold policy. But, and this is the crucial question, does
this mean the same trading rule will yield superior profits when
actually put into practice?" (p. 81). More subtle forms of
data snooping are suggested by Cooper and Gulen (2003). Specifically,
a set of data in technical trading research can be repeatedly used
to search for profitable "families" of trading systems,
markets, in-sample estimation periods, out-of-sample periods, and
trading model assumptions, including performance criteria and transaction
costs. As an example, a researcher may deliberately investigate
a number of in-sample optimization periods (or methods) on the same
data to select the one that provides maximum profits. Even if a
researcher selects only one in-sample period in an ad-hoc fashion,
the choice is likely to be strongly affected by similar previous
research. Moreover, if many researchers each choose one in-sample
optimization method on the same data, they are collectively snooping
the data. Collective data snooping is potentially the most dangerous
form because it is not easily recognized by individual researchers
(Denton 1985).
White (2000) developed a statistical procedure that, unlike the
genetic programming approach, can assess the effects of data snooping
in the traditional framework of predetermined trading rules. The
procedure, called the Bootstrap Reality Check methodology, tests
the null hypothesis that the best trading rule performs no better
than a benchmark strategy. In this approach, the best rule is found
by applying a performance measure to the full set of trading rules,
and a data-snooping-adjusted p-value is obtained by comparing the
performance of the best trading rule to an approximation of the
asymptotic distribution of the performance measure. In this way,
White's approach takes account of the dependencies across the trading
rules tested.
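The essence of the procedure can be sketched as follows. This is a simplified illustration of the idea, using mean excess return as the performance measure and a stationary bootstrap over days; it is not White's actual implementation, and the names and defaults are ours.

```python
import numpy as np

def reality_check_pvalue(rule_excess_returns, n_boot=500, block_p=0.1, seed=0):
    """Sketch of White's Bootstrap Reality Check. Input: a (rules x days)
    array of each rule's daily returns in excess of the benchmark. The
    statistic is the best rule's mean excess return; its null distribution
    is approximated by resampling days and taking the maximum over ALL
    rules in each replication, which is what corrects for snooping."""
    rng = np.random.default_rng(seed)
    f = np.asarray(rule_excess_returns, dtype=float)
    k, n = f.shape
    fbar = f.mean(axis=1)
    stat = np.sqrt(n) * fbar.max()        # performance of the best rule
    count = 0
    for _ in range(n_boot):
        # Stationary bootstrap: resample day indices in geometric blocks
        idx, t = [], rng.integers(n)
        while len(idx) < n:
            idx.append(t)
            t = rng.integers(n) if rng.random() < block_p else (t + 1) % n
        # Center each rule at its own sample mean, then take the max
        boot_stat = np.sqrt(n) * (f[:, idx].mean(axis=1) - fbar).max()
        count += boot_stat >= stat
    return count / n_boot
```

Comparing the observed maximum against the bootstrapped distribution of maxima over the whole universe of rules is what distinguishes this from a naive per-rule test: a rule that looks significant in isolation can easily be the best of thousands of lucky draws.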
Sullivan, Timmermann, and White (1999) applied White's Bootstrap
Reality Check methodology to 100 years of the Dow Jones Industrial
Average (DJIA), from 1897 through 1996. They used the sample period
(1897-1986) studied by Brock, Lakonishok, and LeBaron (1992) for
in-sample tests and an additional 10 years, 1987-1996, for out-of-sample
tests. S&P 500 index futures from 1984 through 1996 were also
used to test the performance of trading rules. For the full set
of technical trading rules, Sullivan, Timmermann, and White considered
about 8,000 trading rules drawn from five simple technical trading
systems: filters, moving averages, support and resistance, channel
breakouts, and on-balance volume averages. Two performance measures,
the mean return and the Sharpe ratio, were employed. The benchmark
for the mean return criterion was the "null" system, i.e.,
being out of the market. For the Sharpe ratio criterion, the risk-free
rate was used as the benchmark, implying that technical trading
rules earn the risk-free rate on days when a neutral signal is generated.
Transaction costs were not incorporated directly.
The results for the mean return criterion indicated that during
the 1897-1996 period the best rule was a 5-day moving average that
produced an annual mean return of 17.2% with a Bootstrap Reality
Check p-value of zero, indicating that the return was not the result
of data snooping. Since the average return was obtained from 6,310
trades (63.1 per year), the break-even transaction cost level was
0.27% per trade. The universe of 26 trading rules used by Brock,
Lakonishok, and LeBaron (1992) was also examined. Among these rules,
the best was a 50-day variable moving average rule with a 1% band,
generating an annualized return of 9.4% with a Bootstrap Reality
Check p-value of zero. Thus, the results of Brock, Lakonishok, and
LeBaron (1992) were robust to data snooping biases.[20] These returns
were compared with the average annual return of 4.3% on the buy-and-hold
strategy during the same sample period. Similar results were obtained
for the Sharpe ratio criterion. Over the full 100-year period, the
buy-and-hold strategy generated a Sharpe ratio of 0.034, while the
Sharpe ratios for the best rules in Brock, Lakonishok, and LeBaron's
universe and in the full universe were 0.39 and 0.82, respectively.
Although the Bootstrap Reality Check p-values were zero in both
cases, the best rules in Brock, Lakonishok, and LeBaron's study
appeared to have insignificant p-values in several subperiods. Out-of-sample
results were relatively disappointing. Over the 10-year (1987-1996)
sample on the DJIA, the 5-day moving average rule selected as the
best rule from the full universe over the 1897-1986 period yielded
a mean return of 2.8% per year with a nominal p-value[21] of 0.32, indicating
that the best rule did not continue to generate valuable economic
signals in the subsequent period. For S&P 500 index futures
over the period 1984-1996, the best rule generated a mean return
of 9.4% per year with a nominal p-value of 0.04. At first glance,
then, the rule seemed to produce a statistically significant return.
However, the p-value adjusted for data snooping was 0.90, suggesting
that the return was a result of data snooping. Sullivan, Timmermann,
and White conjectured that the poor out-of-sample performance relative
to the significant in-sample performance of technical trading rules
might be related to a recent improvement in market efficiency due
to cheaper computing power, lower transaction costs, and increased
liquidity in the stock market.
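The break-even transaction cost reported above is simple arithmetic: the annual mean return divided by the number of trades per year. Using the figures from the study:

```python
# Break-even one-way transaction cost: the per-trade cost that would
# exactly consume the rule's mean annual return.
annual_return = 17.2         # percent per year, best rule over 1897-1996
trades_per_year = 63.1       # 6,310 trades over 100 years
breakeven = annual_return / trades_per_year
print(round(breakeven, 2))   # 0.27 percent per trade, matching the text
```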
Table 6 presents summaries of the Reality Check studies. Sullivan,
Timmermann, and White (2003) expanded the universe of trading rules
by combining the technical trading rules and calendar frequency
trading rules tested in their previous works (Sullivan, Timmermann,
and White 1999, 2001). The augmented universe comprised 17,298 trading
rules.[22] The results indicated that for the full sample period
(1897-1998), the best of the augmented universe of trading rules,
a 2-day on-balance volume strategy, generated a mean return of 17.1%
on DJIA data with a data-snooping-adjusted p-value of zero and outperformed
the buy-and-hold strategy (a mean return of 4.8%). For a recent
period (1987-1996), the best rule was a week-of-the-month strategy
with a mean return of 17.3%, slightly higher than the buy-and-hold
return (13.6%). However, the return was not statistically significant,
with a data-snooping-adjusted p-value of 0.98. Similar results were
found for the S&P 500 futures data. The best rule (a mean return
of 10.7%) outperformed the benchmark (a mean return of 8.0%) during
the 1984-1996 period, but the data-snooping-adjusted p-value was
0.99. Hence, they argued that it might be premature to conclude
that technical trading rules and calendar rules outperformed the
benchmark in the stock market.
Qi and Wu (2002) applied White's Bootstrap Reality Check methodology
to seven foreign exchange rates over the 1973-1998 period. They
created the full set of rules from four trading systems (filters,
moving averages, support and resistance, and channel breakouts)
among the five technical trading systems employed in Sullivan, Timmermann,
and White (1999). Results indicated that the best trading rules,
which were mostly moving average rules and channel breakout rules,
produced positive mean excess returns over the buy-and-hold benchmark
across all currencies and had significant data-snooping-adjusted
p-values for the Canadian dollar, the Italian lira, the French franc,
the British pound, and the Japanese yen. The mean excess returns
were economically substantial (7.2% to 12.2%) for all five currencies
except the Canadian dollar (3.6%), even after adjustment for
transaction costs of 0.04% per one-way transaction. In addition,
the excess returns could not be explained by systematic risk. Similar
results were found for the Sharpe ratio criterion, and the overall
results appeared robust to incorporating transaction costs into
the general trading model, changes in the vehicle currency, and changes
in the smoothing parameter of the stationary bootstrap procedure.
Hence, Qi and Wu concluded that certain technical trading rules
were genuinely profitable in foreign exchange markets during the
sample period.
By using White's Bootstrap Reality Check methodology, Sullivan,
Timmermann, and White (1999, 2003) corroborated academics' skepticism
regarding technical trading rules in their out-of-sample tests.
However, several problems are found in their work. First, the universe
of trading rules considered by Sullivan, Timmermann, and White (1999,
2003) may not represent the true universe of trading rules. For
example, their first study assumed that rules from five simple technical
trading systems represented the full set of technical trading rules.
However, there may be numerous other technical trading systems,
such as various combination systems, that were not included in their
full set of technical trading rules. If the set of trading rules tested
is a subset of an even larger universe of rules, White's Bootstrap
Reality Check methodology delivers a p-value biased toward zero,
given that the rules included in the "universe"
performed quite well during the historical sample period. This can
be illustrated by comparing the results of Sullivan, Timmermann,
and White's studies. When only technical trading rules were tested
on DJIA data over the 1987-1996 period, the best rule (a 200-day
channel rule with 0.150 width and a 50-day holding period) generated
an annual mean return of 14.41% with a p-value of 0.341. However,
the best of the augmented universe of trading rules (a week-of-the-month
rule) yielded an annual mean return of 17.27% with a p-value
of 0.98 for the same data. Obviously, the former has a downward-biased
p-value. Second, transaction costs were not directly incorporated
into the trading model. Transaction costs may have a significant
effect on the selection of the optimal trading rules. If Sullivan, Timmermann,
and White had considered mean net return as the performance measure, their
best trading rules for the full in-sample period might have changed,
because incorporating transaction costs into a performance measure
tends to penalize trading rules that generate more frequent transactions.
In fact, Qi and Wu (2002) found that when they changed the performance
measure from mean returns to mean net returns, the best trading
rules selected were rules that generated less frequent trading signals
than in the case of the mean return criterion. Third, the data snooping
effects of the best trading rule, measured in terms of the Bootstrap
Reality Check p-value in one sample period, cannot be assessed in a
different sample period (e.g., an out-of-sample period), because
the best trading rule usually differs across the sample periods
considered.
A final problem arises from White's (2000) procedure itself. In
a testing procedure for superior predictive ability (SPA) such
as White's, the null hypothesis typically consists of
multiple inequalities, which leads to a composite null hypothesis.
One of the complications of testing a composite hypothesis is that
the asymptotic distribution of the test statistic is not unique
under the null hypothesis. The typical solution to this ambiguity
in the null distribution is to apply the least favorable configuration
(LFC), i.e., the points least favorable to the alternative
hypothesis. This is exactly what White (2000) did. However,
Hansen (2003) showed that such an LFC-based test has limitations
because it does not ordinarily meet an "asymptotic similarity
condition" that is necessary for a test to be unbiased, and
as a result it may be sensitive to the inclusion of poor forecasting
models. In fact, the simulation and empirical results in Hansen
(2003, 2004) indicated that the inclusion of a few poor-performing
models severely reduces the rejection probabilities of White's Reality
Check test under the null, causing the test to be less powerful
under the alternative. In research on technical trading systems,
researchers generally search over a large number of parameter values
in each trading system tested, because there is no theoretical guidance
on the proper selection of parameters. Thus, poor-performing
trading rules are inevitably included in the analysis, and testing
these trading rules with the Reality Check procedure may produce
biased results.[23] Despite these limitations, the Reality Check studies
can be regarded as a substantial improvement over previous technical
trading studies in that they attempted to explicitly quantify data
snooping biases in the selection of technical trading rules.
Chart Pattern Studies
Chart pattern studies test the
profitability or forecasting ability of visual chart patterns widely
used by technical analysts. Well-known chart patterns, whose names
are usually derived from their shapes in bar charts, include gaps, spikes,
flags, pennants, wedges, saucers, triangles, head-and-shoulders,
and various tops and bottoms (see, e.g., Schwager (1996) for a detailed
discussion of charting). Earlier, Levy (1971) documented the profitability
of 32 five-point chart formations for NYSE securities. He found
that none of the 32 patterns for any holding period generated profits
greater than average purchase or short-sale opportunities. However,
a more rigorous study of chart patterns was provided by Chang
and Osler (1999).[24]
Chang and Osler evaluated the performance of the head-and-shoulders
pattern using daily spot rates for six currencies (the mark, yen, pound,
French franc, Swiss franc, and Canadian dollar) during the entire floating
rate period, 1973-1994. The head-and-shoulders pattern can be described
as a sequence of three peaks with the highest in the middle. The
center peak is referred to as the 'head', the left and right peaks around
the head as the 'shoulders', and the straight line connecting the troughs
separating the head from the right and left shoulders as the 'neckline'.
The pattern is considered 'confirmed' when the price path penetrates
the neckline after forming the right shoulder. Head-and-shoulders patterns
can occur both at peaks and at troughs, where they are called 'tops'
and 'bottoms', respectively. After developing a head-and-shoulders
identification and profit-taking algorithm, Chang and Osler established
a strategy for entering and exiting positions based on such recognition.
The entry position is taken when the current price breaks the neckline,
while the timing of exit can be determined in various ways. They set
up two kinds of exit rules: an endogenous rule and an exogenous
rule. The endogenous rule includes both a stop-loss and a bounce. The
stop-loss is triggered at 1% of the entry price to limit losses
whenever the price moves in the direction opposite to that predicted
by the head-and-shoulders. The bounce possibility is captured by
the following strategy: if the downtrend in prices following a
confirmed head-and-shoulders top turns upward before falling by
at least 25% of the vertical distance from the head to the neckline,
then investors hold their current positions until either prices
cross back over the neckline by at least 1% (stop-loss) or a second
trough (of any size) is reached in the zigzag. The exogenous rule
closes an open position after an exogenously specified number
of days from the entry point. Holding periods from 1 to 60 days
(1, 3, 5, 10, 20, 30, and 60) were considered.
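The identification step above can be sketched in code: find alternating local extrema, look for a peak-trough-peak-trough-peak sequence whose middle peak dominates, and confirm the pattern when the price penetrates the neckline drawn through the two troughs. The extrema window and shoulder tolerance below are illustrative assumptions, not Chang and Osler's actual parameters.

```python
def find_extrema(prices, window=2):
    """Local peaks and troughs within a rolling window (illustrative zigzag)."""
    ext = []
    for i in range(window, len(prices) - window):
        seg = prices[i - window:i + window + 1]
        if prices[i] == max(seg):
            ext.append((i, 'peak'))
        elif prices[i] == min(seg):
            ext.append((i, 'trough'))
    return ext

def confirmed_hs_top(prices, window=2, shoulder_tol=0.1):
    """Return the index of the neckline break confirming a head-and-shoulders
    top, or None if no confirmed pattern is found."""
    ext = find_extrema(prices, window)
    for k in range(len(ext) - 4):
        (p1, k1), (t1, k2), (p2, k3), (t2, k4), (p3, k5) = ext[k:k + 5]
        if [k1, k2, k3, k4, k5] != ['peak', 'trough', 'peak', 'trough', 'peak']:
            continue
        ls, head, rs = prices[p1], prices[p2], prices[p3]
        if not (head > ls and head > rs):        # middle peak must be the head
            continue
        if abs(ls - rs) > shoulder_tol * head:   # shoulders roughly level
            continue
        # neckline through the two troughs; confirm on penetration after p3
        slope = (prices[t2] - prices[t1]) / (t2 - t1)
        for j in range(p3 + 1, len(prices)):
            neckline = prices[t1] + slope * (j - t1)
            if prices[j] < neckline:
                return j
    return None
```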
For the endogenous exit rule, head-and-shoulders rules generated
statistically significant returns of about 13% and 19% per year
for the mark and yen, respectively, but not for the other exchange
rates. Returns from the exogenous exit rule appeared to be insignificant
in most cases. The trading profits from the endogenous exit rules
were substantially higher than either the annual buy-and-hold returns
of 2.5% for the mark and 4.4% for the yen or the annual average stock
yield of 6.8% measured on the S&P 500 index. The head-and-shoulders
returns for the mark and yen were also significantly greater than
those derived from 10,000 simulated random walk data series obtained
from a bootstrap method and were substantial even after adjusting
for transaction costs of 0.05% per round trip, interest differentials,
and risk. For example, the Sharpe ratios for the mark and yen were 1.00
and 1.47, respectively, while the Sharpe ratio for the S&P 500
was 0.32. Moreover, it turned out that the returns were not compensation
for bearing systematic risk, since none of the estimated betas were
statistically significantly different from zero, with the largest
beta being 0.03. Profits for the mark and yen were also robust to
changes in the parameters of the head-and-shoulders recognition
algorithm, changes in the sample period, and the assumption that
exchange rates follow a GARCH(1,1) process rather than the random
walk model. Over the sample period, a portfolio that consisted of
all six currencies earned total returns of 69.9%, which were significantly
higher than the returns produced in the simulated data.
Chang and Osler further investigated the performance of moving
average rules and momentum rules and compared the results with the
observed performance of the head-and-shoulders rule. Returns from
the simple technical trading systems appeared statistically significant
for all six currencies, and the simpler rules easily outperformed
the head-and-shoulders rules in terms of total profits and Sharpe
ratios. To evaluate the incremental contribution of the head-and-shoulders
rule when combined with each of the simpler rules, combination rules
of both strategies were simulated on the mark and yen. Results indicated
that each combination rule generated slightly higher returns than
the simple rule alone, but significantly increased risk (daily variation
of returns). Hence, Chang and Osler concluded that, although the
head-and-shoulders pattern had some predictive power for the mark
and yen during the period of floating exchange rates, use of
the head-and-shoulders rule did not seem to be rational, because
it was easily dominated by simple moving average rules and momentum
rules and increased risk without adding significant profits when
used in combination with the simpler rules.
Table 7
summarizes other chart pattern studies. Lo, Mamaysky, and Wang (2000)
examined more chart patterns. They evaluated the usefulness of 10
chart patterns: the head-and-shoulders (HS) and inverse
head-and-shoulders (IHS), broadening tops (BTOP) and bottoms (BBOT),
triangle tops (TTOP) and bottoms (TBOT), rectangle tops (RTOP) and
bottoms (RBOT), and double tops (DTOP) and bottoms (DBOT). To see
whether these technical patterns are informative, goodness-of-fit
and Kolmogorov-Smirnov tests were applied to the daily data of individual
NYSE/AMEX stocks and Nasdaq stocks during the 1962-1996 period.
The goodness-of-fit test compares the quantiles of returns conditioned
on technical patterns with those of unconditional returns. If the
technical patterns provide no incremental information, conditional
and unconditional returns should be similar. The Kolmogorov-Smirnov
statistic was designed to test the null hypothesis that the conditional
and unconditional empirical cumulative distribution functions of
returns are identical. In addition, to evaluate the role of volume,
Lo, Mamaysky, and Wang constructed three return distributions conditioned
on (1) technical patterns; (2) technical patterns and increasing
volume; and (3) technical patterns and decreasing volume.
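The Kolmogorov-Smirnov comparison of conditional and unconditional return distributions can be sketched directly from empirical CDFs; the simulated "post-pattern" mean shift of 0.3 below is purely illustrative, not a number from the study.

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical cumulative distribution functions."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side='right') / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side='right') / len(y)
    return np.abs(Fx - Fy).max()

rng = np.random.default_rng(1)
unconditional = rng.normal(0.0, 1.0, 5000)   # all daily returns
conditional = rng.normal(0.3, 1.0, 1000)     # returns following a pattern (assumed shift)

d = ks_statistic(conditional, unconditional)
# asymptotic 5% critical value for the two-sample KS test
crit = 1.36 * np.sqrt((len(conditional) + len(unconditional)) /
                      (len(conditional) * len(unconditional)))
# d > crit rejects, at the 5% level, that the two distributions are identical,
# i.e. the pattern carries incremental information
```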
The results of the goodness-of-fit test indicated that, for the
NYSE/AMEX stocks, the relative frequencies of the conditional returns
differed significantly from those of the unconditional returns
for all but 3 patterns (BBOT, TTOP, and DBOT). On the
other hand, Nasdaq stocks showed overwhelming significance for all
10 patterns. The results of the Kolmogorov-Smirnov test showed
that, for the NYSE/AMEX stocks, 5 of the 10 patterns (HS, BBOT,
RTOP, RBOT, and DTOP) rejected the null hypothesis, implying that
the conditional distributions of returns for these 5 patterns were
significantly different from the unconditional distributions of
returns. For the Nasdaq stocks, in contrast, all the patterns were
statistically significant at the 5% level. However, volume trends
appeared to provide little incremental information in either stock
market, with a few exceptions. The difference between the conditional
distributions for increasing and decreasing volume trends was statistically
insignificant for most patterns in both the NYSE/AMEX and Nasdaq markets.
Hence, Lo, Mamaysky, and Wang concluded that technical patterns
did provide some incremental information, especially for the Nasdaq
stocks. They argued that "Although this does not necessarily
imply that technical analysis can be used to generate 'excess' trading
profits, it does raise the possibility that technical analysis can
add value to the investment process" (p. 1753). In terms of
trading profits, Dawson and Steeley (2003) confirmed this argument
by applying the same technical patterns as in Lo, Mamaysky, and
Wang (2000) to UK data. Although they found that return distributions
conditioned on technical patterns were significantly different from
the unconditional distributions, the average market-adjusted return
turned out to be negative across all technical patterns and sample
periods they considered.
Caginalp and Laurent (1998) reported that candlestick reversal
patterns generated substantial profits in comparison to the average
gain for the same holding period. For the S&P 500 stocks over
the 1992-1996 period, down-to-up reversal patterns produced an average
return of 0.9% during a two-day holding period (annually 309% of
the initial investment). The profit per trade ranged from 0.56% to 0.76%
even after adjustment for commissions and bid-ask spreads on a $100,000
trade, so that the initial investment was compounded into 202%-259%
annually. Leigh, Paz, and Purvis (2002) and Leigh et al. (2002)
also noted that bull flag patterns for the NYSE Composite Index
generated positive excess returns over a buy-and-hold strategy before
transaction costs. However, Curcio et al. (1997), Guillaume (2000),
and Lucke (2003) all showed limited evidence of the profitability
of technical patterns in foreign exchange markets, with trading
profits from the patterns declining over time (Guillaume 2000).
In general, the results of chart pattern studies varied depending
on the patterns, markets, and sample periods tested, but suggested that
some chart patterns might have been profitable in stock markets
and foreign exchange markets. Nevertheless, all studies in this
category, except for Leigh, Paz, and Purvis (2002), neither conducted
parameter optimization and out-of-sample tests nor paid much attention
to data snooping problems.
Nonlinear Studies
Nonlinear studies attempted either to
directly measure the profitability of a trading rule derived from
a nonlinear model, such as feedforward networks or nearest
neighbors regressions, or to evaluate the nonlinear predictability
of asset returns by incorporating past trading signals from simple
technical trading rules (e.g., moving average rules) or lagged returns
into a nonlinear model. A single-hidden-layer feedforward network regression
model with d hidden units and p lagged returns is typically
given by

y_t = F( b_0 + Σ_{j=1}^{d} b_j G( g_{0j} + Σ_{i=1}^{p} g_{ij} r_{t-i} ) ),

where y_t is an indicator
variable that takes either a value of 1 (for a long position) or
-1 (for a short position) and r_{t-i} = log(P_{t-i}/P_{t-i-1}) is the return at time t-i. Sometimes the
lagged returns are replaced with trading signals generated by a
simple technical trading rule such as a moving average rule. Each
hidden unit receives the weighted sum of all inputs and a
bias term and generates an output signal through the hidden transfer
function (G), where g_{ij} is the weight of the connection from the ith
input unit to the jth hidden unit. In a similar manner,
the output unit receives the weighted sum of the output signals
of the hidden layer and generates a signal through the output transfer
function (F), where b_j is the weight of the connection from the jth
hidden unit. For example, in Gençay (1998a), the number
of hidden units was selected from {1, 2, ..., 15} and
p was set to 9. Gençay argued that "under general regularity
conditions, a sufficiently complex single hidden layer feedforward
network can approximate any member of a class of functions to any
desired degree of accuracy where the complexity of a single hidden
layer feedforward network is measured by the number of hidden units
in the hidden layer" (p. 252).
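A forward pass of such a network can be sketched as follows, with tanh assumed for both transfer functions G and F (the actual choices are not specified in the text above):

```python
import numpy as np

def network_signal(lagged_returns, g, g0, b, b0):
    """Forward pass of the single-hidden-layer network described above.

    lagged_returns : length-p vector (r_{t-1}, ..., r_{t-p})
    g  : (p, d) input-to-hidden weights g_{ij};  g0 : (d,) hidden biases
    b  : (d,) hidden-to-output weights b_j;      b0 : scalar output bias
    tanh is an assumed choice for both transfer functions G and F.
    """
    hidden = np.tanh(lagged_returns @ g + g0)   # hidden transfer G
    out = np.tanh(hidden @ b + b0)              # output transfer F
    return 1 if out >= 0 else -1                # long (+1) or short (-1) position
```

In a study like Gençay (1998a), the weights would be estimated on in-sample data and the resulting +1/-1 signals traded out of sample.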
Gençay (1998a) tested the profitability of simple technical
trading rules based on a feedforward network using DJIA data for
1963-1988. Across six subsample periods, the technical trading rules
generated annual net returns of 7%-35% after transaction costs and
easily dominated a buy-and-hold strategy. The results for the Sharpe
ratio were similar. Hence, the technical trading rules outperformed
the buy-and-hold strategy after transaction costs and risk were
taken into account. In addition, correct sign predictions for the
recommended positions ranged from 57% to 61%.
Other nonlinear studies are summarized
in Table 8. Gençay (1998b, 1999)
further investigated the nonlinear predictability of asset returns
by incorporating past trading signals from simple technical trading
rules (i.e., moving average rules) or lagged returns into a nonlinear
model, either the feedforward network or the nearest neighbors regression.
Out-of-sample results regarding correct sign predictions and the
mean square prediction error (MSPE) indicated that, in general,
both the feedforward network model and the nearest neighbors model
yielded substantial forecast improvements and outperformed the random
walk model or GARCH(1,1) model in both stock and foreign exchange
markets. In particular, the nonlinear models based on past buy-sell
signals of the simple moving average rules provided more accurate
predictions than those based on past returns. Gençay and
Stengos (1998) extended previous nonlinear studies by incorporating
a 10-day volume average indicator into a feedforward network model
as an additional regressor. For the same DJIA data as used in Gençay
(1998a), the nonlinear model produced an average 12% forecast
gain over the benchmark (an OLS model with lagged returns as regressors)
and provided much higher correct sign predictions (an average of
62%) than other linear and nonlinear models. Fernández-Rodríguez,
González-Martel, and Sosvilla-Rivero (2000) applied the feedforward
network regression to the Madrid Stock Index, finding that their
technical trading rule outperformed the buy-and-hold strategy before
transaction costs. Sosvilla-Rivero, Andrada-Félix, and Fernández-Rodríguez
(2002) also showed that a trading rule based on the nearest neighbors
regression earned net returns of 35% and 28% for the mark and yen,
respectively, during the 1982-1996 period, and substantially outperformed
buy-and-hold strategies. They further showed that when days of US
intervention were eliminated, net returns from the trading strategy
declined substantially to 10% for the mark, while remaining at 28%
for the yen. Fernández-Rodríguez,
Sosvilla-Rivero, and Andrada-Félix (2003) found that simple
trading rules based on the nearest neighbors model were superior
to moving average rules in European exchange markets for 1978-1994.
Their nonlinear trading rules generated statistically significant
annual net returns of 1.5%-20.1% for the Danish krone, French franc,
Dutch guilder, and Italian lira. In general, technical trading rules
based on nonlinear models appeared to have either profitability
or predictability in both stock and foreign exchange markets. However,
nonlinear studies have a problem similar to that of genetic programming
studies. That is, as suggested by Timmermann and Granger (2004),
it may be improper to apply a nonlinear approach that was not
available until recent years in order to reveal the profitability
of technical trading rules. Furthermore, these studies typically
ignored statistical tests of trading profits, and might be subject
to data snooping problems because they incorporated trading signals
from only one or two popular technical trading rules into the models.
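The nearest-neighbors forecasts used in these studies can be sketched as follows: embed the return history in p-dimensional vectors, find the k historical embeddings closest to the most recent one, and average the returns that followed them. The embedding dimension and neighbor count below are illustrative, not the values used in any particular study.

```python
import numpy as np

def nn_forecast(returns, p=3, k=10):
    """Nearest-neighbors forecast of the next return from the last p returns."""
    r = np.asarray(returns, dtype=float)
    # all p-length histories excluding the most recent one (no look-ahead)
    hist = np.lib.stride_tricks.sliding_window_view(r[:-1], p)
    targets = r[p:]                       # the return that followed each history
    query = r[-p:]                        # the most recent p returns
    dist = np.linalg.norm(hist - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return targets[nearest].mean()        # average outcome of similar episodes
```

A trading rule would then go long when the forecast is positive and short when it is negative.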
Other Studies
Other studies are those that do
not belong to any of the categories reviewed so far. In general, these
studies are similar to the early studies in that they did not conduct
trading rule optimization or out-of-sample verification, nor did they
address data snooping problems, although several studies (Sweeney 1988;
Farrell and Olszewski 1993; Irwin et al. 1997) performed out-of-sample
tests.
Neely (1997) tested the profitability of filter rules and moving
average rules on four major exchange rates (the mark, yen, pound
sterling, and Swiss franc) over the 1974-1997 period.
Filter rules included six filters from 0.5% to 3%, with window lengths
of 5 business days used to identify local extremes, and moving average
rules consisted of four dual moving averages (1/10, 1/50, 5/10,
5/50). The results indicated that the trading rules yielded positive
net returns in 38 of the 40 cases after deducting transaction costs
of 0.05% per round trip. Specifically, for the mark, 9 of the 10
trading rules generated positive net returns, with an annual mean
net return of 4.4%. These trading profits did not seem to be compensation
for bearing risk. In terms of Sharpe ratios, every moving average
rule (an average of 0.6) and two filter rules outperformed a buy-and-hold
strategy in the S&P 500 Index (0.3) over the same sample period.
The CAPM betas estimated for the 10 trading rules also generally
indicated zero or negative correlation with the S&P 500 monthly
returns. The results for the other exchange rates were similar. Hence,
the trading rules, especially the moving average rules, appeared to
be profitable beyond transaction costs and risk. However, Neely
argued that the apparent success of the technical trading rules
might not necessarily implicate market inefficiency because of problems
in the testing procedure, such as difficulties in obtaining actual prices
and interest rates, the absence of a proper measure of risk, and
data snooping. In particular, he emphasized data snooping problems
in studies of technical analysis by noting that "the rules
tested here are certainly subject to a data-mining bias, since many
of them had been shown to be profitable on these exchange rates
over at least some of the subsample" (p. 32).
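A dual moving average crossover of the kind Neely tested (e.g., the 5/50 rule, going long when the short average is above the long average and short otherwise) can be sketched as:

```python
import numpy as np

def dual_ma_signals(prices, short=5, long=50):
    """Dual moving average crossover: +1 (long) when the short MA is above
    the long MA, -1 (short) otherwise; one signal per day once both MAs exist."""
    prices = np.asarray(prices, dtype=float)

    def ma(x, n):
        # rolling mean via cumulative sums; length len(x) - n + 1
        c = np.cumsum(np.insert(x, 0, 0.0))
        return (c[n:] - c[:-n]) / n

    s = ma(prices, short)[long - short:]   # align short MA with long MA dates
    l = ma(prices, long)
    return np.where(s > l, 1, -1)
```

The filter rules in the same study are analogous: a +x% rise from a local minimum triggers a long position and a -x% fall from a local maximum triggers a short one.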
Table 9 summarizes other studies in this
category. As an exceptional case among these studies, Neftci's (1991)
work is close to a theoretical study. Using the notion of Markov
times, he demonstrated that the moving average rule is one of the
few mathematically well-defined technical analysis rules. Markov
times are defined as random time periods whose value can be determined
by looking only at the current information set (p. 553). Therefore, Markov
times do not rely on future information. If a trading rule generates
a sequence of trading signals that fail to be Markov times, it would
be using future information to emit such signals. However, various
patterns or trend crossings in technical analysis, such as "head-and-shoulders"
and "triangles," did not appear to generate Markov times.
To verify whether a 150-day moving average rule has predictive value,
Neftci incorporated trading signals of the moving average rule as
a dummy variable in an autoregression equation. For the Dow Jones
Industrials, F-test results on the variable were insignificant over
the 1795-1910 period but highly significant over the 1911-1976 period.
Hence, the moving average rule seemed to have some predictive power
beyond the own lags of the Dow Jones Industrials.
Pruitt and White (1988) and Pruitt, Tse, and White (1992) documented
that a combination system consisting of cumulative volume, relative
strength, and moving averages (CRISMA) was profitable in stock markets.
For example, Pruitt, Tse, and White (1992) obtained annual excess
returns of 1.0%-5.2% after transaction costs of 2% over the 1986-1990
period and found that the CRISMA system outperformed the buy-and-hold
or market index strategy. Sweeney (1988) and Corrado and Lee (1992)
also found that filter-based rules outperformed buy-and-hold strategies
after transaction costs in stock markets. Schulmeister (1988) and
Dewachter (2001) reported the profitability of various technical
trading rules in foreign exchange markets, but Marsh (2000) showed
that technical trading profits in foreign exchange markets decreased
in the recent period. Irwin et al. (1997) compared the performance
of the channel trading system to ARIMA models in soybean-related
futures markets. During their out-of-sample period (1984-1988),
the channel system generated statistically significant mean returns
ranging from 5.1% to 26.6% across the markets and beat the ARIMA models
in every market. Overall, studies in this category indicated that
technical trading rules performed quite well in stock markets, foreign
exchange markets, and grain futures markets. As noted above, however,
these studies typically omitted trading rule optimization and out-of-sample
verification and did not address data snooping problems.
Summary of Modern Studies
Modern studies greatly improved
on the analytic techniques of early studies, with more
advanced theories and statistical methods spurred on by the rapid growth
of computing power. Modern studies were categorized into seven groups
based on their testing procedures. "Standard" studies
(Lukac, Brorsen, and Irwin 1988; Lukac and Brorsen 1990; and others)
comprehensively tested the profitability of technical trading rules
using parameter optimization, out-of-sample verification, and statistical
tests for trading profits. In addition, transaction costs and risk
were incorporated into the general trading model. Standard studies,
in general, found that technical trading profits were available
in speculative markets. Taylor (2000) obtained a break-even one-way
transaction cost of 1.07% for the DJIA data during the 1968-1988
period using an optimized moving average rule. Szakmary and Mathur
(1997) showed that moving average rules produced annual net returns
of 3.5%-5.4% in major foreign exchange markets for 1978-1991, although
the profits of moving average rules in foreign exchange markets
tend to dissipate over time (Olson 2004). Lukac, Brorsen, and Irwin
(1988) also found that four technical trading systems, the dual
moving average crossover, close channel, MII price channel, and
directional parabolic systems, yielded statistically significant
annual portfolio net returns ranging from 3.8% to 5.6% in 12 futures markets
during the 1978-1984 period. Nevertheless, since these studies did
not explicitly address data snooping problems, there is a possibility
that the successful results were caused by chance.
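The break-even transaction cost reported by several of these studies is simply the per-trade cost at which a rule's gross return is fully absorbed; a minimal sketch of the arithmetic, with illustrative numbers:

```python
def net_annual_return(gross, one_way_cost, one_way_trades):
    """Annual return after deducting proportional transaction costs."""
    return gross - one_way_cost * one_way_trades

def breakeven_one_way_cost(gross, one_way_trades):
    """Cost per one-way transaction at which the net return falls to zero.
    Each round trip counts as two one-way transactions."""
    return gross / one_way_trades
```

For example, a rule earning 10% gross per year over 50 one-way trades breaks even at a 0.2% one-way cost, which is why frequently trading rules are penalized most once costs are deducted.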
"Model-based bootstrap" studies (Brock, Lakonishok, and
LeBaron 1992; Levich and Thomas 1993; Bessembinder and Chan 1998;
and others) conducted statistical tests of trading returns using
model-based bootstrap approaches pioneered by Brock, Lakonishok,
and LeBaron (1992). In these studies, popular technical trading
rules, such as moving average rules and trading range breakout rules,
were tested in an effort to reduce data snooping problems. The results
of the model-based bootstrap studies differed across the markets and
sample periods tested. In general, technical trading strategies
were profitable in several emerging (stock) markets and foreign
exchange markets, while they were unprofitable in developed stock
markets (e.g., US markets). Ratner and Leal (1999) found that moving
average rules generated statistically significant annual net returns
of 18.2%-32.1% in the stock markets of Mexico, Taiwan, Thailand, and
the Philippines during the 1982-1995 period. LeBaron (1999) also
showed that a 150-day moving average rule for the mark and yen generated
Sharpe ratios of 0.60-0.98 after a transaction cost of 0.1% per
round trip over the 1979-1992 period, which were much greater than
those (0.3-0.4) for buy-and-hold strategies on aggregate US stock
portfolios. However, Bessembinder and Chan (1998) noted that profits
from Brock, Lakonishok, and LeBaron's (1992) trading rules for the
DJIA index declined substantially over time. In particular, the average
break-even one-way transaction cost across the trading rules in
a recent period (1976-1991) was 0.22%, which compares to estimated
one-way transaction costs of 0.24%-0.26%. On the other hand, as pointed
out by Sullivan, Timmermann, and White (1999), popular trading
rules may suffer survivorship bias, which implies that they may have
been profitable over a long historical period by chance. Moreover,
model-based bootstrap studies often omitted trading rule optimization
and out-of-sample verification.
"Genetic programming" studies (Neely, Weller, and Dittmar
1997; Allen and Karjalainen 1999; Ready 2002; and others) attempted
to avoid data snooping problems by testing ex ante trading rules
optimized by genetic programming techniques. In these studies, out-of-sample
verification of the optimal trading rules was conducted together
with statistical tests, and transaction costs and risk were incorporated
into the testing procedure. Genetic programming studies generally
indicated that technical trading rules formulated by genetic programming
might be successful in foreign exchange markets but not in stock
markets. For example, Allen and Karjalainen (1999), Ready (2002),
and Neely (2003) all documented that, over a long time period, genetic
trading rules underperformed buy-and-hold strategies for the S&P
500 index or the DJIA index. In contrast, Neely and Weller (2001)
obtained annual net profits of 1.7%-8.3% for four major currencies
over the 1981-1992 period, although profits decreased to around
zero or became negative, except for the yen, over the 1993-1998 period.
The results for futures markets varied depending on the markets tested.
Roberts (2003) obtained a statistically significant daily mean net
profit of $1.07 per contract in the wheat futures market for 1978-1998,
which exceeded a buy-and-hold return of -$3.30 per contract, but
found negative mean net returns for the corn and soybean futures markets.
The genetic programming technique may become an alternative approach
to testing technical trading rules because it provides a sophisticated
search procedure. However, it was not applied to technical analysis
until the mid-1990s, and moreover, the majority of the optimal trading
rules identified by a genetic program appeared to have more complex
structures than those of typical technical trading rules. Hence,
there is strong doubt as to whether actual traders could have
used these trading rules. Cooper and Gulen (2003) and Timmermann
and Granger (2004) cautioned that the genetic programming method
should not be applied to sample periods before its discovery.
"Reality Check" studies (Sullivan, Timmermann, and White
1999, 2003; Qi and Wu 2002) use White's Bootstrap Reality Check
methodology to directly quantify the effects of data snooping. White's
methodology delivers a data-snooping-adjusted p-value by testing
the performance of the best rule in the context of the full universe
of trading rules. Thus, the approach accounts for dependencies across
trading rules tested. Reality Check studies by Sullivan, Timmermann,
and White (1999, 2003) provide some evidence that technical trading
rules might be profitable in the stock market until the mid-1980s
but not thereafter. For example, Sullivan, Timmermann, and White
(1999) obtained an annual mean return of 17.2% (a break-even transaction
cost of 0.27% per trade) from the best rule for the DJIA index over
the 1897-1996 period, with a data-snooping-adjusted p-value of zero.
However, in an out-of-sample period (1987-1996), the best rule optimized
over the 1897-1986 period yielded an annual mean return of only
2.8%, with a nominal p-value of 0.32. For the foreign exchange market,
on the other hand, Qi and Wu (2002) obtained economically and statistically
significant technical trading profits over the 1973-1998 period.
They found mean excess returns of 7.2% to 12.2% against the buy-and-hold
strategy for major currencies except for the Canadian dollar (3.63%)
after adjustment for transaction costs and risk. Despite the fact
that Reality Check studies use a statistical procedure that can
account for data snooping effects, they also have some problems.
For example, there is difficulty in constructing the full universe
of technical trading rules. Furthermore, if a set of trading rules
tested is selected from an even larger universe of rules, a p-value
calculated by the methodology could be biased toward zero under
the assumption that the included rules in the "universe"
performed quite well during the sample period.
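As a concrete illustration, the logic behind White's data-snooping-adjusted p-value can be sketched in a few lines. This is a simplified sketch, not White's exact procedure: it uses a plain i.i.d. bootstrap, whereas White (2000) employs the stationary bootstrap to preserve serial dependence in returns, and the function and variable names are our own.

```python
import numpy as np

def reality_check_pvalue(perf, n_boot=2000, seed=0):
    """Data-snooping-adjusted p-value in the spirit of White's (2000)
    Bootstrap Reality Check.

    perf : (T, K) array of daily performance differentials
           (rule return minus benchmark return) for K candidate rules.
    """
    rng = np.random.default_rng(seed)
    T, K = perf.shape
    f_bar = perf.mean(axis=0)            # mean outperformance of each rule
    v_obs = np.sqrt(T) * f_bar.max()     # test statistic: the best rule
    v_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)           # resample days
        # centre at the sample means so the null of no outperformance holds
        v_boot[b] = np.sqrt(T) * (perf[idx].mean(axis=0) - f_bar).max()
    # adjusted p-value: how often the bootstrap maximum beats the observed one
    return float((v_boot >= v_obs).mean())

# Demo: 100 candidate rules of pure noise over 500 days -- no rule
# has a true edge, so picking the best one is pure data snooping.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.01, size=(500, 100))
p = reality_check_pvalue(noise)
```

Because the test statistic is the maximum over all K rules, the adjusted p-value automatically penalizes the size of the search universe; a nominal p-value computed for the best rule in isolation would overstate its significance.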
"Chart patterns" studies (Chang and Osler 1999; Lo,
Mamaysky, and Wang 2000; and others) developed and simulated algorithms
that can recognize visible chart patterns used by technical analysts.
In general, the results of chart pattern studies varied depending
on patterns, markets, and sample periods tested, but suggested that
some chart patterns might have been profitable in stock markets
and foreign exchange markets. For example, Chang and Osler (1999)
showed that the head-and-shoulders pattern generated statistically
significant returns of about 13% and 19% per year for the mark and
yen, respectively, for 1973-1994. These returns appeared to be substantially
higher than either buy-and-hold returns or average stock yields
on the S&P 500 index, and remained even after accounting
for transaction costs, interest differentials, and risk. Similarly,
Caginalp and Laurent (1998) found that for the S&P 500 stocks,
down-to-up candlestick reversal patterns earned mean net returns
of 0.56% to 0.76% during a two-day holding period (202% to 259% of
the initial investment annually) after transaction costs over the 1992-1996
period. Nevertheless, most studies in this category neither conducted
parameter optimization and out-of-sample tests, nor paid much attention
to data snooping problems.
"Nonlinear" studies (Gençay 1998a; Gençay
and Stengos 1998; Fernández-Rodríguez, González-Martel,
and Sosvilla-Rivero 2000; and others) investigated either the informational
usefulness or the profitability of technical trading rules based
on nonlinear methods, such as the nearest neighbor or the feedforward
network regressions. Nonlinear studies showed that technical trading
rules based on nonlinear models possessed profitability or predictability
in both stock and foreign exchange markets. Gençay (1998a)
found that simple technical trading rules based on a feedforward
network for the DJIA index generated annual net returns of 7% to 35%
across 6 subsample periods over the 1963-1988 period and easily
dominated a buy-and-hold strategy. Sosvilla-Rivero, Andrada-Félix,
and Fernández-Rodríguez (2002) also showed that a
trading rule based on the nearest neighbor regression earned net
returns of 35% and 28% for the mark and yen, respectively, during
the 1982-1996 period, and substantially outperformed buy-and-hold
strategies. However, nonlinear studies have a similar problem to
that of genetic programming studies. That is, it may be inappropriate
to apply nonlinear methods, which were unavailable until recent
years, to earlier sample periods when assessing the profitability
of technical trading rules. Furthermore, these studies typically
ignored statistical tests of trading profits, and may be subject to
data snooping problems because they incorporated trading signals
from only one or two popular technical trading rules into the models.
"Other studies" include all studies that do not belong
to any of the categories described above. Testing procedures of
these studies are similar to those of the early studies, in that
they did not conduct trading rule optimization and out-of-sample
verification, with a few exceptions. Studies in this category suggested
that technical trading rules performed quite well in stock markets,
foreign exchange markets, and grain futures markets. Neely (1997)
tested filter rules and moving average rules on four major exchange
rates over the 1974-1997 period and obtained positive net returns
in 38 of the 40 cases after adjusting for transaction costs. Pruitt,
Tse, and White (1992) found that the CRISMA (combination of cumulative
volume, relative strength, and moving average) system earned annual
mean excess returns of 1.0% to 5.2% after transaction costs in stock
markets for 1986-1990 and outperformed the B&H or market index
strategy. For soybean-related futures markets, Irwin et al. (1997)
reported that channel rules generated statistically significant
mean returns ranging from 5.1% to 26.6% over the 1984-1988 period and beat
the ARIMA models in every market they tested. However, it is highly
likely that these successful findings were attainable due to data
snooping.
Table 10 summarizes the results of modern
studies. As shown in the table, the number of studies that identified
profitable technical trading strategies is far greater than the
number of studies that found negative results. Among a total of
92 modern studies, 58 studies found profitability (or predictability)
in technical trading strategies, while 24 studies reported negative
results. The rest (10 studies) indicated mixed results. In every
market, the number of profitable studies is twice that of unprofitable
studies. However, modern studies also indicated that technical trading
strategies had been able to yield economic profits in US stock markets
until the late 1980s, but not thereafter (Bessembinder and Chan
1998; Sullivan, Timmermann, and White 1999; Ready 2002). Several
studies found economic profits in emerging (stock) markets, regardless
of sample periods considered (Bessembinder and Chan 1995; Ito 1999;
Ratner and Leal 1999). For foreign exchange markets, it seems evident
that technical trading strategies have made economic profits over
the last few decades, although some studies suggested that technical
trading profits have declined or disappeared in recent years (Marsh
2000; Neely and Weller 2001; Olson 2004). For futures markets, technical
trading strategies appeared to be profitable between the mid-1970s
and the mid-1980s. No study has yet comprehensively documented the
profitability of technical trading strategies in futures markets
after that period.
Summary and Conclusion
This report reviewed survey
studies, theories and empirical work regarding technical trading
strategies. Most survey studies indicate that technical analysis
has been widely used by market participants in futures markets and
foreign exchange markets, and that at least 30% to 40% of practitioners
regard technical analysis as an important factor in determining
price movement at shorter time horizons up to 6 months.
In the theoretical literature, the conventional efficient markets
models, such as the martingale and random walk models, rule out
the existence of profitable technical trading rules because both
models assume that current prices fully reflect all available information.
On the other hand, several other models, such as noisy rational
expectations models, feedback models, disequilibrium models, herding
models, agentbased models, and chaos theory, suggest that technical
trading strategies may be profitable because they presume that price
adjusts sluggishly to new information due to noise, market power,
traders' irrational behavior, and chaos. Thus, in these models,
there exist profitable trading opportunities that are not being
exploited. Such sharp disagreement in theoretical models makes empirical
evidence a key consideration in determining the profitability of
technical trading strategies.
More than 130 empirical studies have examined the profitability
of technical trading rules over the last four decades. In this report,
empirical studies were categorized into two groups, "early"
studies and "modern" studies depending on the characteristics
of testing procedures. In general, the majority of early studies
examined one or two technical trading systems, and deducted transaction
costs to compute net returns of trading rules. In these studies,
however, risk was not adequately handled, statistical tests of trading
profits and data snooping problems were often ignored, and out-of-sample
tests along with parameter optimization were not conducted, with
a few exceptions. The results of early studies varied from market
to market. Overall, studies of stock markets found very limited
evidence of the profitability of technical trading strategies, while
studies of foreign exchange markets and futures markets frequently
obtained sizable net profits. For example, Fama and Blume (1966)
reported that for 30 individual securities of the Dow Jones Industrial
Average (DJIA) over the 1956-1962 period, long signals of a 0.5%
filter rule generated an average annual net return of 12.5% that
was not much different from the buy-and-hold returns. In contrast,
Sweeney (1986) found that for the majority of 10 major currencies
small filter rules produced economically and statistically significant
mean excess returns (3% to 7%) over the buy-and-hold returns during
the 1973-1980 period. Irwin and Uhrig (1984) also reported that
several technical trading systems such as channel, moving average,
and momentum oscillator systems generated substantial net returns
in corn, cocoa, sugar, and soybean futures markets over the 1973-1981
period.
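The x% filter rule tested by Fama and Blume has a simple algorithmic statement: go long when the price has risen x% above its most recent trough, and reverse to short when it has fallen x% below its most recent peak. A minimal sketch of this logic (our own illustrative implementation, not code from the original study):

```python
def filter_rule_signals(prices, x=0.005):
    """Positions for an x% filter rule (Alexander 1961; Fama and Blume 1966).

    Returns a list of positions: 0 before the first signal, +1 while long,
    -1 while short. Long entries occur when price rises x% above the running
    trough; short entries when it falls x% below the running peak.
    """
    signals = []
    position = 0
    extreme = prices[0]  # running trough while flat/short, running peak while long
    for p in prices:
        if position <= 0:
            extreme = min(extreme, p)         # track the trough
            if p >= extreme * (1 + x):
                position, extreme = 1, p      # rose x% off the low: go long
        else:
            extreme = max(extreme, p)         # track the peak
            if p <= extreme * (1 - x):
                position, extreme = -1, p     # fell x% off the high: go short
        signals.append(position)
    return signals

# e.g. with a 0.5% filter: the rise from 100 to 100.6 triggers a long,
# and the later fall from 101 to 100 triggers a reversal to short
positions = filter_rule_signals([100, 100.6, 101, 100.0], x=0.005)  # [0, 1, 1, -1]
```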
Modern studies improved upon the drawbacks of early studies and
typically included some of the following features in their testing
procedures: (1) the number of trading systems tested increased relative
to early studies; (2) transaction costs and risk were incorporated;
(3) parameter (trading rule) optimization and out-of-sample
verification were conducted; and (4) statistical tests were performed
with either conventional statistical tests or more sophisticated
bootstrap methods, or both. In this report, modern studies were
divided into seven groups based on their testing procedures: standard,
model-based bootstrap, genetic programming, Reality Check, chart
patterns, nonlinear, and others. Modern studies indicated that technical
trading strategies had been able to yield economic profits in US
stock markets until the late 1980s, but not thereafter (Bessembinder
and Chan 1998; Sullivan, Timmermann, and White 1999; Ready 2002).
For example, Taylor (2000) obtained a break-even one-way transaction
cost of 1.07% per transaction for the DJIA data over the 1968-1988
period using a 5/200-day moving average rule optimized over the
1897-1968 period,[25] while Sullivan, Timmermann, and White (1999) showed
that the best rule (a 1/5-day moving average rule) optimized over
the 1897-1986 period yielded a statistically insignificant annual
mean return of only 2.8% for 1987-1996. Several studies found economic
profits in emerging (stock) markets, regardless of the sample periods
tested (Bessembinder and Chan 1995; Ito 1999; Ratner and Leal 1999).
For foreign exchange markets, it seems evident that technical trading
strategies had been profitable at least until the early 1990s, because
many modern studies found net profits of around 5% to 10% for major
currencies (the mark, yen, pound, and Swiss franc) in their out-of-sample
tests (Taylor 1992, 1994; Silber 1994; Szakmary and Mathur 1997;
Olson 2004). However, a few studies suggested that technical trading
profits in foreign exchange markets have declined in recent years
(Marsh 2000; Neely and Weller 2001; Olson 2004).[26] For example, Olson
(2004) reported that risk-adjusted profits of moving average rules
for an 18-currency portfolio declined from over 3% between the late
1970s and early 1980s to about zero percent in the late 1990s. For
futures markets, technical trading strategies appeared to be profitable
between the mid-1970s and the mid-1980s. For example, Lukac, Brorsen,
and Irwin (1988) found that several technical trading systems, such
as the dual moving average crossover, close channel, MII price channel,
and directional parabolic systems, yielded statistically significant
portfolio annual net returns ranging from 3.8% to 5.6% in 12 futures
markets during the 1978-1984 period. However, no study has yet comprehensively
documented the profitability of technical trading strategies after
that period.
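The moving average rules cited above (e.g., the 5/200-day rule) can be stated precisely: hold a long position while the short-horizon moving average is above the long-horizon one, and a short position otherwise. A minimal sketch under that convention (an illustration only; the studies reviewed differ in details such as bands, signal delays, and neutral positions):

```python
import numpy as np

def ma_crossover_positions(prices, short=5, long=200):
    """Dual moving average crossover positions: +1 when the short-horizon
    moving average exceeds the long-horizon one, -1 otherwise. Positions
    are defined only once `long` observations are available.
    """
    prices = np.asarray(prices, dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(prices)))
    rolling_mean = lambda n: (csum[n:] - csum[:-n]) / n   # rolling means via cumsum
    ma_s, ma_l = rolling_mean(short), rolling_mean(long)
    ma_s = ma_s[len(ma_s) - len(ma_l):]   # align both series on the same dates
    return np.where(ma_s > ma_l, 1, -1)

# e.g. a 2/3-day rule on a rise-then-fall price path flips from long to short
pos = ma_crossover_positions([1, 2, 3, 4, 3, 2, 1], short=2, long=3)  # [1, 1, 1, -1, -1]
```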
Despite positive evidence about profitability and improved procedures
for testing technical trading strategies, skepticism about technical
trading profits remains widespread among academics. For example,
in a recent and highly regarded textbook on asset pricing, Cochrane
(2001) argues: "Despite decades of dredging the data,
and the popularity of media reports that purport to explain where
markets are going, trading rules that reliably survive transactions
costs and do not implicitly expose the investor to risk have not
yet been reliably demonstrated (p. 25)." As Cochrane points
out, the skepticism seems to be based on data snooping problems
and potentially insignificant economic profits after appropriate
adjustment for transaction costs and risk. In this context, Timmermann
and Granger (2004, p. 16) provide a detailed guide to the key issues
that future studies of the profitability of technical trading systems
must address:
1. The set of forecasting models available at any given point in
time, including estimation methods.
2. The search technology used to select the best (or a combination
of best) forecasting model(s).
3. The available 'real time' information set, including public versus
private information and ideally the cost of acquiring such information.
4. An economic model for the risk premium reflecting economic agents'
tradeoff between current and future payoffs.
5. The size of transaction costs and the available trading technologies
and any restrictions on holdings of the asset in question.
The first two issues above focus squarely on the question of data
snooping. In many previous studies, technical trading rules that
produced significant returns were selected for investigation ex
post. These profitable trading rules may have been selected because
they were popular or widely used over time. However, there is no
guarantee that the trading rules were chosen by actual investors
at the beginning of the sample period. Similarly, studies using
genetic algorithms or artificial neural networks often apply these
relatively new techniques to sample periods before their discovery.
Results of these studies are likely to be spurious because the search
technologies were hardly available during the sample period. Therefore,
the set of trading models including trading rules and other assumptions
and the search technologies need to be specified.
Two possible approaches to handle data snooping problems in studies
of technical trading strategies have been proposed. The first is
to simply replicate previous results on a new set of data (e.g.,
Lovell 1983; Lakonishok and Smidt 1988; Lo and MacKinlay 1990; Schwert
2003; Sullivan, Timmermann, and White 2003). If another successful
result is obtained from a new dataset by using the same procedure
as used in an original study, we can be more confident in the profitability
(or predictability) of the original procedure. For a study to be
replicated, however, the following three conditions should be satisfied:
(1) the markets and trading systems tested in the original study
should be comprehensive, in the sense that results can be considered
broadly representative of the actual use of technical systems; (2)
testing procedures must be carefully documented, so they can be
"frozen" at the point in time the study was published,
and (3) the original work should be published long enough ago that
a follow-up study can have a sufficient sample size. Thus, if there
is insufficient new data or a lack of rigorous and comprehensive
documentation of trading model assumptions and procedures, this
approach may not be valid. Another approach is to apply White's
(2000) Bootstrap Reality Check methodology, in which the effect
of data snooping is directly quantified by testing the null hypothesis
that the performance of the best rule in the full universe of technical
trading rules is no better than the performance of a benchmark.
This approach thus accounts for dependencies across all technical
trading rules tested. However, a problem with White's bootstrap
methodology is that it is difficult to construct the full universe
of technical trading rules. Moreover, there still remain the effects
of data snooping from other choice variables, such as markets, in-sample
estimation periods, out-of-sample periods, and trading model assumptions
including performance criteria and transaction costs, because White's
procedure only captures data snooping biases caused by the selection
of technical trading rules.
The third issue raised by Timmermann and Granger may not be a critical
factor in technical trading studies because the information set
used typically consists of prices and volume that are easily obtainable
in real time, with low costs. The fourth and the fifth issues have
the potential to be major factors. It is well known that risk is
difficult to estimate because there is no generally accepted measure
or model. Timmermann and Granger (2004) argue that "most models
of the risk premium generate insufficient variation in economic
risk premia to explain existing asset pricing puzzles" (p.
18). In studies of technical analysis, the Sharpe ratio and the
CAPM beta may be the most widely used risk measures. However, these
measures have some well-known limitations. For example, the Sharpe
ratio penalizes the variability of profitable returns exactly the
same as the variability of losses, despite the fact that investors
are more concerned about downside volatility in returns than about
total volatility (i.e., the standard deviation). This leads Schwager
(1985) and Dacorogna et al. (2001) to propose different riskadjusted
performance measures that take into account drawbacks of the Sharpe
ratio. These measures may be used as alternatives or in conjunction
with the Sharpe ratio. The CAPM beta is also known to have the joint-hypothesis
problem. Namely, when abnormal returns (a positive intercept) are
found, researchers cannot determine whether they arose
because markets were truly inefficient or because the CAPM was a
misspecified model. It is well-known that the CAPM and other multifactor
asset pricing models such as the Fama-French three-factor model
are subject to "bad model" problems (Fama 1998). The CAPM
failed to explain average returns on small stocks (Banz 1981), and
the Fama-French three-factor model does not seem to fully explain
average returns on portfolios built on size and book-to-market equity
(Fama and French 1993). Cochrane (2001, p. 465) suggests that some
version of the consumption-based model, such as Constantinides and
Duffie's (1996) model with uninsured idiosyncratic risks and Campbell
and Cochrane's (1999) habit persistence model, may be an answer
to the bad model problems in the stock market and even explain the
predictability of returns in other markets (like bond and foreign
currency markets).
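The criticism of the Sharpe ratio discussed above can be made concrete by comparing it with a downside-deviation analogue such as the Sortino ratio; the code below is a generic illustration of the idea, not the specific measures proposed by Schwager (1985) or Dacorogna et al. (2001).

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0):
    """Sharpe ratio: mean excess return divided by total volatility,
    so upside and downside swings are penalized symmetrically."""
    ex = np.asarray(returns, dtype=float) - rf
    return ex.mean() / ex.std(ddof=1)

def sortino_ratio(returns, rf=0.0):
    """Sortino ratio: mean excess return divided by downside deviation,
    so only returns below the target contribute to the risk term."""
    ex = np.asarray(returns, dtype=float) - rf
    downside = np.minimum(ex, 0.0)
    return ex.mean() / np.sqrt((downside ** 2).mean())

# A strategy with large gains and small losses: the Sharpe ratio is dragged
# down by the upside swings, while the Sortino ratio is not.
r = [0.10, -0.01, 0.10, -0.01]
```

For a return series with this kind of positive skew, the downside-based measure ranks the strategy much more favorably than the Sharpe ratio does, which is precisely the asymmetry the text describes.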
The last issue is associated with market microstructure. Transaction
costs generally consist of two components: (1) brokerage commissions
and fees and (2) bid-ask spreads. Commissions and fees are readily
observable, although they may vary according to investors (individuals,
institutions, or market makers) and trade size. Data for bid-ask
spreads (also known as execution costs, liquidity costs, or slippage
costs), however, have not been widely available until recent years.
To account for the impact of the bid-ask spread on asset returns,
various bid-ask spread estimators were introduced by Roll (1984),
Thompson and Waller (1987), and Smith and Whaley (1994). However,
these estimators may not work particularly well in approximating
the actual ex post bid-ask spreads if the assumptions underlying
the estimators do not correspond to the actual market microstructure
(Locke and Venkatesh 1997).[27] Although data for calculating actual
bid-ask spreads are generally not publicly available, obtaining the
relevant dataset seems to be of particular importance for the accurate
estimation of bid-ask spreads. It is especially important because
such data would reflect market-impact effects, or the effect of
trade size on market price. Market impact arises in the form of
price concessions for large trades (Fleming, Ostdiek, and Whaley
1996). A larger trade tends to move the bid price downward and move
the ask price upward. The magnitude of market impact depends on
the liquidity and depth of a market.[28] The more liquid and deeper
a market is, the smaller the magnitude of the market impact. In addition
to obtaining appropriate data sources regarding bid-ask spreads,
either using transaction costs much greater than actual historical
commissions (Schwager 1996) or assuming several possible scenarios
for transaction costs may be considered plausible alternatives.
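Of these estimators, Roll's (1984) is the simplest to state: bid-ask bounce induces negative first-order serial covariance in price changes, and the implied effective spread is s = 2 * sqrt(-cov(dp_t, dp_{t-1})). A sketch (illustrative code; as noted in the text, the estimator breaks down when its assumptions fail, and it is undefined when the sample covariance is non-negative):

```python
import numpy as np

def roll_spread(prices):
    """Roll's (1984) implicit bid-ask spread estimator:
    s = 2 * sqrt(-cov(dp_t, dp_{t-1})) for successive price changes dp.
    Returns nan when the serial covariance is non-negative, in which
    case the estimator is undefined.
    """
    dp = np.diff(np.asarray(prices, dtype=float))
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-cov) if cov < 0 else float("nan")

# Demo under the model's own assumptions: trades bounce randomly between
# bid and ask around a constant value of 100 with a true spread of 0.10,
# so the estimate should come out near 0.10.
rng = np.random.default_rng(0)
trades = 100.0 + 0.05 * rng.choice([-1.0, 1.0], size=5000)
est = roll_spread(trades)
```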
Other aspects of market microstructure that may affect technical
trading returns are nonsynchronous trading and daily price limits,
if any. Many technical trading studies assume that trades can be
executed at closing prices on the day when trading signals are generated.
However, Day and Wang (2002), who investigated the impact of nonsynchronous
trading on technical trading returns estimated from the DJIA data,
argued that "… if buy signals tend to occur when the closing
level of the DJIA is less than the true index level, estimated profits
will be overstated by the convergence of closing prices to their
true values at the market open" (p. 433). This problem may
be mitigated by using either the estimated 'true' closing levels
for any asset prices (Day and Wang 2002) or the next day's closing
prices (Bessembinder and Chan 1998). On the other hand, price movements
are occasionally locked at the daily allowable limits, particularly
in futures markets. Since trendfollowing trading rules typically
generate buy (sell) signals in up (down) trends, the daily price
limits force buy (sell) trades to be executed at higher (lower)
prices than those at which trading signals were generated. This
may result in seriously overstated trading returns. Thus, researchers
should incorporate accurate daily price limits into the trading
model. Many issues with respect to market microstructure, including
the ones mentioned above, are now being resolved with the advent of detailed
transactions databases including transaction price, time of trade,
volume, bid-ask quotes and depths, and various codes describing
the trade (Campbell, Lo, and MacKinlay 1997, p. 107).
In conclusion, we found consistent evidence that simple technical
trading strategies were profitable in a variety of speculative markets
at least until the early 1990s. As discussed above, however, most
previous studies are subject to various problems in their testing
procedures. Future research must address these problems in testing
before conclusive evidence on the profitability of technical trading
strategies is provided.
References
Akemann, C. A., and W. E. Keller. "Relative Strength Does Persist." Journal of Portfolio Management, (Fall 1977):38-45.
Alexander, S. S. "Price Movements in Speculative Markets: Trends or Random Walks." Industrial Management Review, 2(1961):7-26.
Alexander, S. S. "Price Movements in Speculative Markets: Trends or Random Walks No. 2." Industrial Management Review, 5(1964):25-46.
Allen, F., and R. Karjalainen. "Using Genetic Algorithms to Find Technical Trading Rules." Journal of Financial Economics, 51(1999):245-271.
Andreou, E., N. Pittis, and A. Spanos. "On Modeling Speculative Prices: The Empirical Literature." Journal of Economic Surveys, 15(2001):187-220.
Antoniou, A., N. Ergul, P. Holmes, and R. Priestley. "Technical Analysis, Trading Volume and Market Efficiency: Evidence from an Emerging Market." Applied Financial Economics, 7(1997):361-365.
Arnott, R. D. "Relative Strength Revisited." Journal of Portfolio Management, (Spring 1979):19-23.
Bachelier, L. Théorie de la Spéculation. Doctoral Dissertation in Mathematics, University of Paris (1900). Translated into English in Cootner, P. H. (ed.) (1964):17-78.
Banz, R. W. "The Relationship Between Return and Market Value of Common Stocks." Journal of Financial Economics, 9(1981):3-18.
Beja, A., and M. B. Goldman. "On the Dynamic Behavior of Prices in Disequilibrium." Journal of Finance, 35(1980):235-248.
Bessembinder, H., and K. Chan. "The Profitability of Technical Trading Rules in the Asian Stock Markets." Pacific-Basin Finance Journal, 3(1995):257-284.
Bessembinder, H., and K. Chan. "Market Efficiency and the Returns to Technical Analysis." Financial Management, 27(1998):5-17.
Billingsley, R. S., and D. M. Chance. "Benefits and Limitations of Diversification Among Commodity Trading Advisors." Journal of Portfolio Management, (Fall 1996):65-80.
Bird, P. J. W. N. "The Weak Form Efficiency of the London Metal Exchange." Applied Economics, 17(1985):571-587.
Black, F. "Noise." Journal of Finance, 41(1986):529-543.
Blume, L., D. Easley, and M. O'Hara. "Market Statistics and Technical Analysis: The Role of Volume." Journal of Finance, 49(1994):153-181.
Bohan, J. "Relative Strength: Further Positive Evidence." Journal of Portfolio Management, (Fall 1981):36-39.
Brock, W., J. Lakonishok, and B. LeBaron. "Simple Technical Trading Rules and the Stochastic Properties of Stock Returns." Journal of Finance, 47(1992):1731-1764.
Brorsen, B. W., and S. H. Irwin. "Futures Funds and Price Volatility." The Review of Futures Markets, 6(1987):118-135.
Brown, D. P., and R. H. Jennings. "On Technical Analysis." Review of Financial Studies, 2(1989):527-551.
Brunnermeier, M. K. Asset Pricing under Asymmetric Information - Bubbles, Crashes, Technical Analysis, and Herding. Oxford, UK: Oxford University Press, 2001.
Brush, J. S. "Eight Relative Strength Models Compared." Journal of Portfolio Management, (Fall 1986):21-28.
Brush, J. S., and K. E. Boles. "The Predictive Power in Relative Strength and CAPM." Journal of Portfolio Management, (Summer 1983):20-23.
Caginalp, G., and H. Laurent. "The Predictive Power of Price Patterns." Applied Mathematical Finance, 5(1998):181-205.
Campbell, J. Y., and J. H. Cochrane. "By Force of Habit: A Consumption-Based Explanation of Aggregate Stock Market Behavior." Journal of Political Economy, 107(1999):205-251.
Campbell, J. Y., A. W. Lo, and A. C. MacKinlay. The Econometrics of Financial Markets. Princeton, NJ: Princeton University Press, 1997.
Chang, P. H. K., and C. L. Osler. "Methodical Madness: Technical Analysis and the Irrationality of Exchange-Rate Forecasts." Economic Journal, 109(1999):636-661.
Cheung, Y. W., and M. D. Chinn. "Currency Traders and Exchange Rate Dynamics: A Survey of the US Market." Journal of International Money and Finance, 20(2001):439-471.
Cheung, Y. W., and C. Y. P. Wong. "The Performance of Trading Rules on Four Asian Currency Exchange Rates." Multinational Finance Journal, 1(1997):1-22.
Cheung, Y. W., and C. Y. P. Wong. "A Survey of Market Practitioners' Views on Exchange Rate Dynamics." Journal of International Economics, 51(2000):401-419.
Cheung, Y. W., M. D. Chinn, and I. W. Marsh. "How Do UK-Based Foreign Exchange Dealers Think Their Market Operates?" NBER Working Paper, No. 7524, 2000.
Clyde, W. C., and C. L. Osler. "Charting: Chaos Theory in Disguise?" Journal of Futures Markets, 17(1997):489-514.
Cochrane, J. H. Asset Pricing. Princeton, NJ: Princeton University
Press, 2001.
Constantinides, G. M., and D. Duffie. "Asset Pricing with Heterogeneous Consumers." Journal of Political Economy, 104(1996):219-240.
Cooper, M., and H. Gulen. "Is Time-Series Based Predictability Evident in Real-Time?" Working Paper, Krannert Graduate School of Management, Purdue University, 2003.
Cootner, P. H. "Stock Prices: Random vs. Systematic Changes." Industrial Management Review, 3(1962):24-45.
Cootner, P. H. (ed.) The Random Character of Stock Market Prices. Cambridge, MA: MIT Press, 1964.
Corrado, C. J., and S. H. Lee. "Filter Rule Tests of the Economic Significance of Serial Dependencies in Daily Stock Returns." Journal of Financial Research, 15(1992):369-387.
Cornell, W. B., and J. K. Dietrich. "The Efficiency of the Market for Foreign Exchange under Floating Exchange Rates." Review of Economics and Statistics, 60(1978):111-120.
Coutts, J. A., and K. Cheung. "Trading Rules and Stock Returns: Some Preliminary Short Run Evidence from the Hang Seng 1985-1997." Applied Financial Economics, 10(2000):579-586.
Curcio, R., C. Goodhart, D. Guillaume, and R. Payne. "Do Technical Trading Rules Generate Profits? Conclusions from the Intra-Day Foreign Exchange Market." International Journal of Finance and Economics, 2(1997):267-280.
Dacorogna, M. M., R. Gençay, U. A. Müller, and O. V. Pictet. "Effective Return, Risk Aversion and Drawdowns." Physica A, 289(2001):229-248.
Dale, C., and R. Workman. "The Arc Sine Law and the Treasury Bill Futures Market." Financial Analysts Journal, 36(1980):71-74.
Dale, C., and R. Workman. "Measuring Patterns of Price Movements in the Treasury Bill Futures Market." Journal of Economics and Business, 33(1981):81-87.
Davutyan, N., and J. Pippenger. "Excess Returns and Official Intervention: Canada 1952-1960." Economic Inquiry, 27(1989):489-500.
Dawson, E. R., and J. Steeley. "On the Existence of Visual Technical Patterns in the UK Stock Market." Journal of Business Finance & Accounting, 30(January/March 2003):263-293.
Day, T. E., and P. Wang. "Dividends, Nonsynchronous Prices, and the Returns from Trading the Dow Jones Industrial Average." Journal of Empirical Finance, 9(2002):431-454.
De Long, J. B., A. Shleifer, L. H. Summers, and R. J. Waldmann. "Noise Trader Risk in Financial Markets." Journal of Political Economy, 98(1990a):703-738.
De Long, J. B., A. Shleifer, L. H. Summers, and R. J. Waldmann. "Positive Feedback Investment Strategies and Destabilizing Rational Speculation." Journal of Finance, 45(1990b):379-395.
De Long, J. B., A. Shleifer, L. H. Summers, and R. J. Waldmann. "The Survival of Noise Traders in Financial Markets." Journal of Business, 64(1991):1-19.
Denton, F. T. "Data Mining as an Industry." Review of Economics and Statistics, 67(1985):124-127.
Dewachter, H. "Can Markov Switching Models Replicate Chartist Profits in the Foreign Exchange Market?" Journal of International Money and Finance, 20(2001):25-41.
Donchian, R. D. "Trend-Following Methods in Commodity Price Analysis." Commodity Year Book, (1957):35-47.
Donchian, R. D. "High Finance in Copper." Financial Analysts Journal, (Nov./Dec. 1960):133-142.
Dooley, M. P., and J. R. Shafer. "Analysis of Short-Run Exchange Rate Behavior: March 1973 to November 1981." In D. Bigman and T. Taya (eds.), Exchange Rate and Trade Instability: Causes, Consequences, and Remedies, Cambridge, MA: Ballinger, 1983.
Dryden, M. "A Source of Bias in Filter Tests of Share Prices." Journal of Business, 42(1969):321-325.
Dryden, M. "Filter Tests of U.K. Share Prices." Applied Economics, 1(1970a):261-275.
Dryden, M. "A Statistical Study of U.K. Share Prices." Scottish Journal of Political Economy, 17(1970b):369-389.
Fama, E. F. "Efficient Capital Markets: A Review of Theory and Empirical Work." Journal of Finance, 25(1970):383-417.
Fama, E. F. "Efficient Capital Markets: II." Journal of Finance, 46(1991):1575-1617.
Fama, E. F. "Market Efficiency, Long-Term Returns, and Behavioral Finance." Journal of Financial Economics, 49(1998):283-306.
Fama, E. F., and M. E. Blume. "Filter Rules and Stock Market Trading." Journal of Business, 39(1966):226-241.
Fama, E. F., and K. R. French. "Common Risk Factors in the Returns on Stocks and Bonds." Journal of Financial Economics, 33(1993):3-56.
Fang, Y., and D. Xu. "The Predictability of Asset Returns: An Approach Combining Technical Analysis and Time Series Forecasts." International Journal of Forecasting, 19(2003):369-385.
Farrell, C. H., and E. A. Olszewski. "Assessing Inefficiency in the S&P 500 Futures Market." Journal of Forecasting, 12(1993):395-420.
Fernández-Rodríguez, F., C. González-Martel, and S. Sosvilla-Rivero. "On the Profitability of Technical Trading Rules Based on Artificial Neural Networks: Evidence from the Madrid Stock Market." Economics Letters, 69(2000):89-94.
Fernández-Rodríguez, F., S. Sosvilla-Rivero, and J. Andrada-Félix. "Technical Analysis in Foreign Exchange Markets: Evidence from the EMS." Applied Financial Economics, 13(2003):113-122.
Fleming, J., B. Ostdiek, and R. E. Whaley. "Trading Costs and the Relative Rates of Price Discovery in Stock, Futures, and Option Markets." Journal of Futures Markets, 16(1996):353-387.
Frankel, J. A., and K. A. Froot. "Chartists, Fundamentalists, and Trading in the Foreign Exchange Market." American Economic Review, 80(1990):181-185.
Froot, K. A., D. S. Scharfstein, and J. C. Stein. "Herd on the Street: Informational Inefficiencies in a Market with Short-Term Speculation." Journal of Finance, 47(1992):1461-1484.
Fyfe, C., J. P. Marney, and H. F. E. Tarbert. "Technical Analysis versus Market Efficiency: A Genetic Programming Approach." Applied Financial Economics, 9(1999):183-191.
Gençay, R. "Optimization of Technical Trading Strategies and the Profitability in Security Markets." Economics Letters, 59(1998a):249-254.
Gençay, R. "The Predictability of Security Returns with Simple Technical Trading Rules." Journal of Empirical Finance, 5(1998b):347-359.
Gençay, R. "Linear, Nonlinear and Essential Foreign Exchange Rate Prediction with Simple Technical Trading Rules." Journal of International Economics, 47(1999):91-107.
Gençay, R., and T. Stengos. "Moving Average Rules, Volume and the Predictability of Security Returns with Feedforward Networks." Journal of Forecasting, 17(1998):401-414.
Goldbaum, D. "A Nonparametric Examination of Market Information: Application to Technical Trading Rules." Journal of Empirical Finance, 6(1999):59-85.
Goodacre, A., and T. Kohn-Speyer. "CRISMA Revisited." Applied Financial Economics, 11(2001):221-230.
Goodacre, A., J. Bosher, and A. Dove. "Testing the CRISMA Trading System: Evidence from the UK Market." Applied Financial Economics, 9(1999):455-468.
Gray, R. W., and S. T. Nielsen. "Rediscovery of Some Fundamental Price Behavior Characteristics." Paper Presented at the Meeting of the Econometric Society, Cleveland, Ohio, September 1963.
Group of Thirty. The Foreign Exchange Market in the 1980s. New York, NY: Group of Thirty, 1985.
Grossman, S. J., and J. E. Stiglitz. "Information and Competitive Price Systems." American Economic Review, 66(1976):246-253.
Grossman, S. J., and J. E. Stiglitz. "On the Impossibility of Informationally Efficient Markets." American Economic Review, 70(1980):393-408.
Grundy, B. D., and M. McNichols. "Trade and the Revelation of Information through Prices and Direct Disclosure." Review of Financial Studies, 2(1989):495-526.
Guillaume, D. M. Intradaily Exchange Rate Movements. Boston, MA: Kluwer Academic Publishers, 2000.
Gunasekarage, A., and D. M. Power. "The Profitability of Moving Average Trading Rules in South Asian Stock Markets." Emerging Markets Review, 2(2001):17-33.
Hansen, P. R. "Asymptotic Tests of Composite Hypotheses." Working Paper, Department of Economics, Brown University, 2003.
Hansen, P. R. "A Test for Superior Predictive Ability." Working Paper, Department of Economics, Brown University, 2004.
Hausman, J. A., A. W. Lo, and A. C. MacKinlay. "An Ordered Probit Analysis of Transaction Stock Prices." Journal of Financial Economics, 31(1992):319-379.
Hellwig, M. "Rational Expectations Equilibrium with Conditioning on Past Prices: A Mean-Variance Example." Journal of Economic Theory, 26(1982):279-312.
Houthakker, H. "Systematic and Random Elements in Short-Term Price Movements." American Economic Review, 51(1961):164-172.
Hudson, R., M. Dempsey, and K. Keasey. "A Note on the Weak Form Efficiency of Capital Markets: The Application of Simple Technical Trading Rules to UK Stock Prices, 1935 to 1994." Journal of Banking & Finance, 20(1996):1121-1132.
Irwin, S. H., and B. W. Brorsen. "A Note on the Factors Affecting Technical Trading System Returns." Journal of Futures Markets, 7(1987):591-595.
Irwin, S. H., and J. W. Uhrig. "Do Technical Analysts Have Holes in Their Shoes?" Review of Research in Futures Markets, 3(1984):264-277.
Irwin, S. H., C. R. Zulauf, M. E. Gerlow, and J. N. Tinker. "A Performance Comparison of a Technical Trading System with ARIMA Models for Soybean Complex Prices." Advances in Investment Analysis and Portfolio Management, 4(1997):193-203.
James, F. E. Jr. "Monthly Moving Averages: An Effective Investment Tool?" Journal of Financial and Quantitative Analysis, (September 1968):315-326.
Jegadeesh, N. "Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation - Discussion." Journal of Finance, 55(2000):1765-1770.
Jensen, M. C. "Random Walks: Reality or Myth - Comment." Financial Analysts Journal, 23(1967):77-85.
Jensen, M. C. "Some Anomalous Evidence Regarding Market Efficiency." Journal of Financial Economics, 6(1978):95-101.
Jensen, M. C., and G. A. Benington. "Random Walks and Technical Theories: Some Additional Evidence." Journal of Finance, 25(1970):469-482.
Kaufman, P. J. Trading Systems and Methods. New York, NY: John Wiley & Sons, 1998.
Kho, B. "Time-Varying Risk Premia, Volatility, and Technical Trading Rule Profits: Evidence from Foreign Currency Futures Markets." Journal of Financial Economics, 41(1996):249-290.
Kidd, W. V., and B. W. Brorsen. "Why Have the Returns to Technical Analysis Decreased?" Journal of Economics and Business, 56(2004):159-176.
Korczak, J., and P. Roger. "Stock Timing Using Genetic Algorithms." Applied Stochastic Models in Business and Industry, 18(2002):121-134.
Koza, J. Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, MA: MIT Press, 1992.
Kwan, J. W. C., K. Lam, M. K. P. So, and P. L. H. Yu. "Forecasting and Trading Strategies Based on a Price Trend Model." Journal of Forecasting, 19(2000):485-498.
Kwon, K., and R. J. Kish. "Technical Trading Strategies and Return Predictability: NYSE." Applied Financial Economics, 12(2002):639-653.
Lakonishok, J., and S. Smidt. "Are Seasonal Anomalies Real? A Ninety-Year Perspective." Review of Financial Studies, 1(1988):403-425.
LeBaron, B. "Technical Trading Rule Profitability and Foreign Exchange Intervention." Journal of International Economics, 49(1999):125-143.
Lee, C. I., and I. Mathur. "Trading Rule Profits in European Currency Spot Cross-Rates." Journal of Banking & Finance, 20(1996a):949-962.
Lee, C. I., and I. Mathur. "A Comprehensive Look at the Efficiency of Technical Trading Rules Applied to Cross-Rates." European Journal of Finance, 2(1996b):389-411.
Lee, C. I., K. C. Gleason, and I. Mathur. "Trading Rule Profits in Latin American Currency Spot Rates." International Review of Financial Analysis, 10(2001):135-156.
Lee, C. I., M. Pan, and Y. A. Liu. "On Market Efficiency of Asian Foreign Exchange Rates: Evidence from a Joint Variance Ratio Test and Technical Trading Rules." Journal of International Financial Markets, Institutions and Money, 11(2001):199-214.
Leigh, W., N. Modani, R. Purvis, and T. Roberts. "Stock Market Trading Rule Discovery Using Technical Charting Heuristics." Expert Systems with Applications, 23(2002):155-159.
Leigh, W., N. Paz, and R. Purvis. "Market Timing: A Test of a Charting Heuristic." Economics Letters, 77(2002):55-63.
LeRoy, S. F. "Efficient Capital Markets and Martingales." Journal of Economic Literature, 27(1989):1583-1621.
Leuthold, R. M. "Random Walk and Price Trends: The Live Cattle Futures Market." Journal of Finance, 27(1972):879-889.
Leuthold, R. M., J. C. Junkus, and J. E. Cordier. The Theory and Practice of Futures Markets. Lexington, MA: Lexington Books, 1989.
Levich, R. M., and L. R. Thomas. "The Significance of Technical Trading Rule Profits in the Foreign Exchange Market: A Bootstrap Approach." Journal of International Money and Finance, 12(1993):451-474.
Levy, R. A. "Random Walks: Reality or Myth." Financial Analysts Journal, 23(1967a):69-77.
Levy, R. A. "Relative Strength as a Criterion for Investment Selection." Journal of Finance, 22(1967b):595-610.
Levy, R. A. "The Predictive Significance of Five-Point Chart Patterns." Journal of Business, 44(1971):316-323.
Lo, A., and A. C. MacKinlay. "Data Snooping Biases in Tests of Financial Asset Pricing Models." Review of Financial Studies, 3(1990):431-467.
Lo, A., and A. C. MacKinlay. A Non-Random Walk Down Wall Street. Princeton, NJ: Princeton University Press, 1999.
Lo, A., H. Mamaysky, and J. Wang. "Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation." Journal of Finance, 55(2000):1705-1765.
Locke, P. R., and P. C. Venkatesh. "Futures Market Transaction Costs." Journal of Futures Markets, 17(1997):229-245.
Logue, D. E., and R. J. Sweeney. "'White-Noise' in Imperfect Markets: The Case of the Franc/Dollar Exchange Rate." Journal of Finance, 32(1977):761-768.
Logue, D. E., R. J. Sweeney, and T. D. Willett. "The Speculative Behavior of Foreign Exchange Rates during the Current Float." Journal of Business Research, 6(1978):159-174.
Lovell, M. C. "Data Mining." Review of Economics and Statistics, 65(1983):1-12.
Lui, Y. H., and D. Mole. "The Use of Fundamental and Technical Analyses by Foreign Exchange Dealers: Hong Kong Evidence." Journal of International Money and Finance, 17(1998):535-545.
Lukac, L. P., and B. W. Brorsen. "The Usefulness of Historical Data in Selecting Parameters for Technical Trading Systems." Journal of Futures Markets, 9(1989):55-65.
Lukac, L. P., and B. W. Brorsen. "A Comprehensive Test of Futures Market Disequilibrium." Financial Review, 25(1990):593-622.
Lukac, L. P., B. W. Brorsen, and S. H. Irwin. "A Test of Futures Market Disequilibrium Using Twelve Different Technical Trading Systems." Applied Economics, 20(1988):623-639.
Lukac, L. P., B. W. Brorsen, and S. H. Irwin. "Similarity of Computer Guided Technical Trading Systems." Journal of Futures Markets, 8(1988):1-13.
Lukac, L. P., B. W. Brorsen, and S. H. Irwin. A Comparison of Twelve Technical Trading Systems. Greenville, SC: Traders Press, Inc., 1990.
Maillet, B., and T. Michel. "Further Insights on the Puzzle of Technical Analysis Profitability." European Journal of Finance, 6(2000):196-224.
Malkiel, B. G. A Random Walk Down Wall Street. New York: W. W. Norton, 1996.
Mandelbrot, B. "The Variation of Certain Speculative Prices." Journal of Business, 36(1963):394-419.
Mandelbrot, B. "Forecasts of Future Prices, Unbiased Markets, and 'Martingale' Models." Journal of Business, 39(1966):242-255.
Marsh, I. W. "High-Frequency Markov Switching Models in the Foreign Exchange Market." Journal of Forecasting, 19(2000):123-134.
Martell, T. F. "Adaptive Trading Rules for Commodity Futures." Omega, 4(1976):407-416.
Martell, T. F., and G. C. Phillipatos. "Adaptation, Information, and Dependence in Commodity Markets." Journal of Finance, 29(1974):493-498.
Martin, A. D. "Technical Trading Rules in the Spot Foreign Exchange Markets of Developing Countries." Journal of Multinational Financial Management, 11(2001):59-68.
Menkhoff, L. "Examining the Use of Technical Currency Analysis." International Journal of Finance and Economics, 2(1997):307-318.
Menkhoff, L., and M. Schlumberger. "Persistent Profitability of Technical Analysis on Foreign Exchange Markets?" Banca Nazionale del Lavoro Quarterly Review, No. 193(1997):189-216.
Merton, R. C. "On the Current State of the Stock Market Rationality Hypothesis." In R. Dornbusch, S. Fischer, and J. Bossons, (ed.), Macroeconomics and Finance: Essays in Honor of Franco Modigliani, Cambridge, MA: MIT Press, 1987.
Mills, T. C. "Technical Analysis and the London Stock Exchange: Testing Trading Rules Using the FT30." International Journal of Finance and Economics, 2(1997):319-331.
Neely, C. J. "Technical Analysis in the Foreign Exchange Market: A Layman's Guide." Review, Federal Reserve Bank of St. Louis, (September/October 1997):23-38.
Neely, C. J. "The Temporal Pattern of Trading Rule Returns and Exchange Rate Intervention: Intervention Does Not Generate Technical Trading Profits." Journal of International Economics, 58(2002):211-232.
Neely, C. J. "Risk-Adjusted, Ex Ante, Optimal Technical Trading Rules in Equity Markets." International Review of Economics and Finance, 12(2003):69-87.
Neely, C. J., and P. A. Weller. "Technical Trading Rules in the European Monetary System." Journal of International Money and Finance, 18(1999):429-458.
Neely, C. J., and P. A. Weller. "Technical Analysis and Central Bank Intervention." Journal of International Money and Finance, 20(2001):949-970.
Neely, C. J., and P. A. Weller. "Intraday Technical Trading in the Foreign Exchange Market." Journal of International Money and Finance, 22(2003):223-237.
Neely, C. J., P. A. Weller, and R. Dittmar. "Is Technical Analysis Profitable in the Foreign Exchange Market? A Genetic Programming Approach." Journal of Financial and Quantitative Analysis, 32(1997):405-426.
Neftci, S. N. "Naïve Trading Rules in Financial Markets and Wiener-Kolmogorov Prediction Theory: A Study of 'Technical Analysis.'" Journal of Business, 64(1991):549-571.
Neftci, S. N., and A. J. Policano. "Can Chartists Outperform the Market? Market Efficiency Tests for 'Technical Analysis.'" Journal of Futures Markets, 4(1984):465-478.
Nison, S. Japanese Candlestick Charting Techniques. New York, NY: New York Institute of Finance, 1991.
Oberlechner, T. "Importance of Technical and Fundamental Analysis in the European Foreign Exchange Market." International Journal of Finance and Economics, 6(2001):81-93.
Olson, D. "Have Trading Rule Profits in the Currency Markets Declined over Time?" Journal of Banking and Finance, 28(2004):85-105.
Osler, C. L. "Support for Resistance: Technical Analysis and Intraday Exchange Rates." Economic Policy Review, Federal Reserve Bank of New York, 6(2000):53-65.
Osler, C. L. "Currency Orders and Exchange Rate Dynamics: An Explanation for the Predictive Success of Technical Analysis." Journal of Finance, 58(2003):1791-1819.
Osler, C. L., and P. H. K. Chang. "Head and Shoulders: Not Just a Flaky Pattern." Federal Reserve Bank of New York, Staff Reports, 4, 1995.
Parisi, F., and A. Vasquez. "Simple Technical Trading Rules of Stock Returns: Evidence from 1987 to 1998 in Chile." Emerging Markets Review, 1(2000):152-164.
Peterson, P. E., and R. M. Leuthold. "Using Mechanical Trading Systems to Evaluate the Weak Form Efficiency of Futures Markets." Southern Journal of Agricultural Economics, 14(1982):147-152.
Poole, W. "Speculative Prices as Random Walks: An Analysis of Ten Time Series of Flexible Exchange Rates." Southern Economic Journal, 33(1967):468-478.
Praetz, P. D. "Testing the Efficient Markets Theory on the Sydney Wool Futures Exchange." Australian Economic Papers, 14(1975):240-249.
Pring, M. J. Technical Analysis Explained. New York, NY: McGraw-Hill, 2002.
Pruitt, S. W., and R. E. White. "The CRISMA Trading System: Who Says Technical Analysis Can't Beat the Market?" Journal of Portfolio Management, (Spring 1988):55-58.
Pruitt, S. W., and R. E. White. "Exchange-Traded Options and CRISMA Trading." Journal of Portfolio Management, (Summer 1989):55-56.
Pruitt, S. W., K. S. Maurice Tse, and R. E. White. "The CRISMA Trading System: The Next Five Years." Journal of Portfolio Management, (Spring 1992):22-25.
Qi, M., and Y. Wu. "Technical Trading-Rule Profitability, Data Snooping, and Reality Check: Evidence from the Foreign Exchange Market." Working Paper, 2002.
Raj, M. "Transactions Data Tests of Efficiency: An Investigation in the Singapore Futures Markets." Journal of Futures Markets, 20(2000):687-704.
Raj, M., and D. Thurston. "Effectiveness of Simple Technical Trading Rules in the Hong Kong Futures Markets." Applied Economics Letters, 3(1996):33-36.
Ratner, M., and R. P. C. Leal. "Tests of Technical Trading Strategies in the Emerging Equity Markets of Latin America and Asia." Journal of Banking & Finance, 23(1999):1887-1905.
Ready, M. J. "Profits from Technical Trading Rules." Financial Management, 31(2002):43-61.
Roberts, M. C. "Technical Analysis in Commodity Markets: Risk, Returns, and Value." Paper Presented at the NCR-134 Conference, St. Louis, Missouri, 2003.
Roll, R. "A Simple Implicit Measure of the Effective Bid-Ask Spread in an Efficient Market." Journal of Finance, 39(1984):1127-1139.
Saacke, P. "Technical Analysis and the Effectiveness of Central Bank Intervention." Journal of International Money and Finance, 21(2002):459-479.
Samuelson, P. A. "Proof That Properly Anticipated Prices Fluctuate Randomly." Industrial Management Review, 6(1965):41-49.
Sapp, S. "Are All Central Bank Interventions Created Equal? An Empirical Investigation." Journal of Banking & Finance, 28(2004):443-474.
Schmidt, A. B. "Modeling the Demand-Price Relations in a High-Frequency Foreign Exchange Market." Physica A, 271(1999):507-514.
Schmidt, A. B. "Modeling the Birth of a Liquid Market." Physica A, 283(2000):479-485.
Schmidt, A. B. "Why Technical Trading May Be Successful? A Lesson from the Agent-Based Modeling." Physica A, 303(2002):185-188.
Schulmeister, S. "Currency Speculation and Dollar Fluctuations." Banca Nazionale del Lavoro Quarterly Review, No. 167(1988):343-365.
Schwager, J. D. "Alternative to Sharpe Ratio: Better Measure of Performance." Futures, (March 1985):56-58.
Schwager, J. D. Schwager on Futures: Technical Analysis. New York, NY: John Wiley & Sons, 1996.
Schwert, G. W. "Anomalies and Market Efficiency." In G. Constantinides, M. Harris, and R. Stulz, (ed.), Handbook of the Economics of Finance, North-Holland, (2003):937-972.
Shiller, R. J. "Speculative Prices and Popular Models." Journal of Economic Perspectives, 4(1990):55-65.
Shiller, R. J. "From Efficient Markets Theory to Behavioral Finance." Journal of Economic Perspectives, 17(2003):83-104.
Shleifer, A., and L. H. Summers. "The Noise Trader Approach to Finance." Journal of Economic Perspectives, 4(1990):19-33.
Silber, W. L. "Technical Trading: When It Works and When It Doesn't." Journal of Derivatives, 1(1994):39-44.
Skouras, S. "Financial Returns and Efficiency as Seen by an Artificial Technical Analyst." Journal of Economic Dynamics & Control, 25(2001):213-244.
Smidt, S. Amateur Speculators. Ithaca, NY: Graduate School of Business and Public Administration, Cornell University, 1965a.
Smidt, S. "A Test of Serial Independence of Price Changes in Soybean Futures." Food Research Institute Studies, 5(1965b):117-136.
Smith, T., and R. E. Whaley. "Estimating the Effective Bid/Ask Spread from Time and Sales Data." Journal of Futures Markets, 14(1994):437-455.
Solt, M. E., and P. J. Swanson. "On the Efficiency of the Markets for Gold and Silver." Journal of Business, 54(1981):453-478.
Sosvilla-Rivero, S., J. Andrada-Félix, and F. Fernández-Rodríguez. "Further Evidence on Technical Trade Profitability and Foreign Exchange Intervention." Applied Economics Letters, 9(2002):827-832.
Stevenson, R. A., and R. M. Bear. "Commodity Futures: Trends or Random Walks?" Journal of Finance, 25(1970):65-81.
Stewart, B. "An Analysis of Speculative Trading in Grain Futures." Technical Bulletin, No. 1001, U.S. Department of Agriculture, Washington, D.C., 1949.
Sullivan, R., A. Timmermann, and H. White. "Data Snooping, Technical Trading Rule Performance, and the Bootstrap." Journal of Finance, 54(1999):1647-1691.
Sullivan, R., A. Timmermann, and H. White. "Dangers of Data-Mining: The Case of Calendar Effects in Stock Returns." Journal of Econometrics, 105(2001):249-286.
Sullivan, R., A. Timmermann, and H. White. "Forecast Evaluation with Shared Data Sets." International Journal of Forecasting, 19(2003):217-227.
Sweeney, R. J. "Beating the Foreign Exchange Market." Journal of Finance, 41(1986):163-182.
Sweeney, R. J. "Some New Filter Rule Tests: Methods and Results." Journal of Financial and Quantitative Analysis, 23(1988):285-300.
Sweeney, R. J., and P. Surajaras. "The Stability of Speculative Profits in the Foreign Exchanges." In R. M. C. Guimaraes et al., (ed.), A Reappraisal of the Efficiency of Financial Markets, Heidelberg: Springer-Verlag, 1989.
Szakmary, A. C., and I. Mathur. "Central Bank Intervention and Trading Rule Profits in Foreign Exchange Markets." Journal of International Money and Finance, 16(1997):513-535.
Takens, F. "Detecting Strange Attractors in Turbulence." In D. A. Rand and L. S. Young, (ed.), Dynamical Systems and Turbulence, Berlin: Springer, 1981.
Taylor, M. P., and H. Allen. "The Use of Technical Analysis in the Foreign Exchange Market." Journal of International Money and Finance, 11(1992):304-314.
Taylor, S. J. "Conjectured Models for Trends in Financial Prices, Tests and Forecasts." Journal of the Royal Statistical Society, A 143(1980):338-362.
Taylor, S. J. "Trading Rules for Investors in Apparently Inefficient Futures Markets." In Futures Markets: Modeling, Managing and Monitoring Futures Trading, Oxford, UK: Basil Blackwell, 1983.
Taylor, S. J. "The Behaviour of Futures Prices over Time." Applied Economics, 17(1985):713-734.
Taylor, S. J. Modelling Financial Time Series. Chichester, England: John Wiley & Sons, 1986.
Taylor, S. J. "How Efficient Are the Most Liquid Futures Contracts? A Study of Treasury Bond Futures." Review of Futures Markets, 7(1988):574-592.
Taylor, S. J. "Rewards Available to Currency Futures Speculators: Compensation for Risk or Evidence of Inefficient Pricing?" Economic Record, 68(1992):105-116.
Taylor, S. J. "Trading Futures Using a Channel Rule: A Study of the Predictive Power of Technical Analysis with Currency Examples." Journal of Futures Markets, 14(1994):215-235.
Taylor, S. J. "Stock Index and Price Dynamics in the UK and the US: New Evidence from a Trading Rule and Statistical Analysis." European Journal of Finance, 6(2000):39-69.
Taylor, S. J., and A. Tari. "Further Evidence against the Efficiency of Futures Markets." In R. M. C. Guimaraes et al., (ed.), A Reappraisal of the Efficiency of Financial Markets, Heidelberg: Springer-Verlag, 1989.
Thompson, S. R., and M. L. Waller. "The Execution Cost of Trading in Commodity Futures Markets." Food Research Institute Studies, 20(1987):141-163.
Timmermann, A., and C. W. J. Granger. "Efficient Market Hypothesis and Forecasting." International Journal of Forecasting, 20(2004):15-27.
Tomek, W. G., and S. F. Querin. "Random Processes in Prices and Technical Analysis." Journal of Futures Markets, 4(1984):15-23.
Treynor, J. L., and R. Ferguson. "In Defense of Technical Analysis." Journal of Finance, 40(1985):757-773.
Van Horne, J. C., and G. G. C. Parker. "The Random-Walk Theory: An Empirical Test." Financial Analysts Journal, 23(1967):87-92.
Van Horne, J. C., and G. G. C. Parker. "Technical Trading Rules: A Comment." Financial Analysts Journal, 24(1968):128-132.
Wang, J. "Trading and Hedging in S&P 500 Spot and Futures Markets Using Genetic Programming." Journal of Futures Markets, 20(2000):911-942.
White, H. "A Reality Check for Data Snooping." Econometrica, 68(2000):1097-1126.
Wilder, J. W. Jr. New Concepts in Technical Trading Systems. Trend Research, Greensboro, NC: Hunter Publishing Company, 1978.
Wong, M. C. S. "Market Reactions to Several Popular Trend-Chasing Technical Signals." Applied Economics Letters, 2(1995):449-456.
Wong, W. K., M. Manzur, and B. K. Chew. "How Rewarding Is Technical Analysis? Evidence from Singapore Stock Market." Applied Financial Economics, 13(2003):543-551.
Working, H. "A Random-Difference Series for Use in the Analysis of Time Series." Journal of the American Statistical Association, 29(1934):11-24.
Working, H. "The Investigation of Economic Expectations." American Economic Review, 39(1949):150-166.
Working, H. "A Theory of Anticipatory Prices." American Economic Review, 48(1958):188-199.
Working, H. "New Concepts Concerning Futures Markets and Prices." American Economic Review, 52(1962):431-459.
Zhou, X., and M. Dong. "Can Fuzzy Logic Make Technical Analysis 20/20?" Financial Analysts Journal, 60(2004):54-75.
Endnotes
[1] Cheol-Ho Park is a Graduate Research Assistant in the Department of Agricultural and Consumer Economics at the University of Illinois at Urbana-Champaign. Scott H. Irwin is the Laurence J. Norton Professor of Agricultural Marketing in the Department of Agricultural and Consumer Economics at the University of Illinois at Urbana-Champaign. Funding support from the Aurene T. Norton Trust is gratefully acknowledged.
[2] In futures markets, open interest is defined as "the total number of open transactions" (Leuthold, Junkus, and Cordier 1989).
[3] In fact, the history of technical analysis dates
back to at least the 18th century when the Japanese developed
a form of technical analysis known as candlestick charting
techniques. This technique was not introduced to the West
until the 1970s (Nison 1991).
[4] In this survey, an amateur trader was defined as "a trader who was not a hedger, who did not earn most of his income from commodity trading, and who did not spend most of his time in commodity trading" (p. 7).
[5] Pyramiding occurs when a trader adds to the size of his/her open position after a price has moved in the direction he/she had predicted.
[6] Timmermann and Granger used a different symbol to denote the information set; the notation has been changed here for consistency.
[7] Working (1934) independently developed the idea of a random walk model for price movements. Although he never used the term "random walk model," Working suggested that many economic time series resemble a "random-difference series," which is simply a different label for the same statistical model. He emphasized that "in the statistical analysis of time series showing the characteristics of the random-difference series in important degree, it is essential for certain purposes to have such a standard series to provide a basis for statistical tests" (p. 16), and found that wheat price changes resembled a random-difference series.
[8] Modern studies were surveyed through August 2004.
[9] See Wilder (1978) for a detailed discussion.
[10] Wilder (1978) originally set the parameter values at n = 14 and ET = 30.
[11] Dryden (1969) argued that Fama and Blume's results were biased because they assumed that the short rate-of-return for a transaction is simply the negative of the corresponding long rate-of-return. Dryden illustrated this problem with a simple example: "If a transaction is initiated at a price of 100 and concluded at a price of 121, assuming the duration of the transaction is two days, the rate of return is 10% if the filter rule signaled a long transaction, and 11.1% if the transaction is a short one" (p. 322). Thus, the long rate-of-return is always less (in absolute value) than the short rate-of-return, except when either the total number of days for which the filter had open positions equals one or an opening price equals a closing price. As a result, the rate of return of the buy-and-hold strategy may be overestimated. Dryden argued that a reduction of about 20% in Fama and Blume's buy-and-hold rate was appropriate, in which case six additional filters would have long rates of return in excess of the buy-and-hold rate.
[12] Levy (1967a) showed that some relative strength rules outperformed a benchmark of the geometric average.
[13] Problems caused by survivorship bias will be discussed in the next section.
[14] Because of this three-year re-optimization method, the out-of-sample period in Lukac, Brorsen, and Irwin's work was 1978 to 1984.
[15] These returns are based on the total investment method, in which total investment was composed of a 30% initial investment in margins plus a 70% reserve for potential margin calls. The percentage returns can be converted into simple annual returns (about 3.8%-5.6%) by a straightforward arithmetic manipulation.
[16] These are unlevered returns.
[17] The following parable on the testing of coin-flipping abilities, provided by Merton (1987, p. 104), clarifies this problem. "Some three thousand students have taken my finance courses over the years, and suppose that each had been asked to keep flipping a coin until tails comes up. At the end of the experiment, the winner, call her A, is the person with the longest string of heads. Assuming no talent, the probability is greater than a half that A will have flipped 12 or more straight heads. As the story goes, there is a widely believed theory that no one has coin-flipping ability, and, hence, a researcher is collecting data to investigate this hypothesis. Because one would not expect everyone to have coin-flipping ability, he is not surprised to find that a number of tests failed to reject the null hypothesis. Upon hearing of A's feat (but not of the entire environment in which she achieved it), the researcher comes to MIT where I certify that she did, indeed, flip 12 straight heads. Upon computing that the probability of such an event occurring by chance alone is 2^-12, or about .00025, the researcher concludes that the widely believed theory of no coin-flipping ability can be rejected at almost any confidence level."
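The probabilities in Merton's parable are easy to verify: a single student opens with 12 straight heads with probability 2^-12, and across 3,000 independent students the chance that at least one does is 1 - (1 - 2^-12)^3000, which indeed exceeds one half. A quick sketch:

```python
# Check the arithmetic in Merton's coin-flipping parable.
p_single = 2 ** -12                 # P(one student flips 12+ straight heads)
p_any = 1 - (1 - p_single) ** 3000  # P(at least one of 3,000 students does)

print(round(p_single, 5))  # 0.00024
print(round(p_any, 3))     # 0.519 -> greater than one half, as Merton states
```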
[18] Break-even one-way transaction costs are defined as the percentage one-way trading costs that eliminate the additional return from technical trading (Bessembinder and Chan, 1995, p. 277). They can be calculated by dividing the difference between portfolio buy and sell means by twice the average number of portfolio trades.
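The calculation just described can be sketched as follows; the input figures are purely illustrative and not taken from any study discussed in the text:

```python
def breakeven_one_way_cost(buy_mean, sell_mean, avg_trades):
    """Break-even one-way transaction cost (Bessembinder and Chan, 1995):
    the difference between portfolio buy and sell mean returns divided by
    twice the average number of portfolio trades."""
    return (buy_mean - sell_mean) / (2 * avg_trades)

# Hypothetical inputs: an 8% mean buy return, a -4% mean sell return,
# and an average of 15 trades give a 0.4% break-even one-way cost.
print(round(breakeven_one_way_cost(0.08, -0.04, 15), 4))  # 0.004
```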
[19] This result contrasts sharply with that of Taylor (2000), who found a break-even one-way transaction cost of 1.07% for the DJIA data during the 1968-1988 period using an optimized moving average rule.
[20] This result contrasts sharply with the result of Ready (2002), who argued that Brock, Lakonishok, and LeBaron's results were spurious because of the data snooping problem.
[21] The nominal p-value was obtained by applying the Bootstrap Reality Check methodology only to the best rule, thereby ignoring the effect of data snooping.
[22] These calendar frequency trading rules are based on calendar effects documented in finance studies. Several famous calendar effects are the Monday effect, the holiday effect, the January effect, and the turn-of-the-month effect. See Schwert (2003) for further details.
[23] See Hansen (2003, 2004) for a detailed discussion.
[24] In fact, Brock, Lakonishok, and LeBaron's trading range breakout rules (support and resistance levels) can be regarded as chart patterns.
[25] Readers should interpret this result carefully. A break-even one-way transaction cost indicates the gross return per trade. For instance, if the trading rule generates ten trades per year, the corresponding annual mean return would be 10.7%.
[26] One notable exception is the Japanese yen market, in which the three studies found net profits even in recent periods.
[27] Using the Commodity Futures Trading Commission (CFTC) audit trail transaction records (complete trade history), Locke and Venkatesh (1997) estimated the actual transaction costs of 12 futures contracts, measured by the difference between the average purchase price and the average sale price for all customers, including market makers and floor brokers, with prices weighted by trade size. They found that the actual transaction costs were generally lower than the minimum price changes (ticks) or customer-market maker spreads, with the exception of several currency futures.
[28] Hausman, Lo, and MacKinlay (1992) quantified the magnitude of market impact in the stock market by applying the ordered probit model to transactions data from the Institute for the Study of Security Markets (ISSM).