The informational impact of electronic trading systems on the FTSE 100 stock index and its futures contracts

This article examines the partial adjustment factors of the Financial Times Stock Exchange (FTSE) 100 stock index and stock index futures. Using high frequency data from 15 January 1997 to 17 March 2000, it assesses the informational impact of the electronic trading systems implemented at the London Stock Exchange and the London International Financial Futures Exchange (LIFFE). The results suggest that information runs mainly from the futures market to the spot market. We find that the introduction of the stock exchange electronic trading system (SETS) in October 1997 increased the FTSE 100 index's absolute efficiency but reduced the informational feedback to the futures market. The implementation of LIFFE CONNECT at LIFFE, in May 1999, reduced the absolute and relative efficiency of FTSE 100 futures. These findings seem to imply that, during the period under scrutiny, electronic trading increased the level of microstructural noise, probably due to the bid-ask bounce and order flow imbalances.


Introduction
This article aims to assess the impact of the electronic trading systems, implemented at the London Stock Exchange and the London International Financial Futures Exchange (LIFFE), on the absolute and relative efficiency of the Financial Times Stock Exchange (FTSE) 100 index and futures contracts using partial adjustment factors.
The informational linkage between stock indices and stock index futures has been extensively analysed, both theoretically and empirically, in the finance literature (see, e.g. Stoll and Whaley 1990; Chan 1992; Abhyankar 1995; Pizzi, Economopoulos, and O'Neil 1998; Booth, So, and Tse 1999; Frino, Walter, and West 2000). However, few papers have attempted to characterize the adjustment process to new information in a high frequency framework.
There are several economic reasons to support the claim that in highly liquid markets for individual assets, such as the futures market, it takes just a few minutes or even seconds for the complete adjustment of prices to new information. This claim is empirically sustained by event studies designed to examine the effect of firm-specific information disclosure, such as earnings, dividends and takeover announcements, and, most particularly, by those designed to explore the effect of macroeconomic announcements on index futures contracts (Ederington and Lee 1993, 1995).
The greater transparency of electronic trading systems implies that more endogenous information and public exogenous information is available to a greater number of potential investors, which reduces the overall opportunity trading costs (Locke and Sarkar 2001). Bloomfield and O'Hara (1999) present a laboratory experiment where, ultimately, post-trade transparency increases liquidity costs but also increases informational efficiency.
Usually, in electronic systems, traders communicate anonymously via a screen; adverse selection problems are therefore allegedly weaker on the floor. Because a limited number of traders is physically concentrated in the 'pit', traders can observe each other's behaviour and infer valuable information. The lower anonymity of quote-driven markets, in conjunction with the ability to select a counterpart, may reduce the overall trading costs but can also disincentivize trading on information (Benveniste, Marcus, and Wilhelm 1992; Pirrong 1996). Martens (1998) and Massimb and Phelps (1994) argue that the disadvantage of the limit-order book lies in the time delay between the withdrawal of old quotes and the submission of new orders. In outcry markets, a single hand signal is sufficient to change the price quotes. However, given the advances in telecommunications, computer technology and trading software, electronic trading can be quite flexible, with important savings in execution time and other trading costs. Moreover, the high pre- and post-trade transparency of electronic systems implies that they provide a more timely response to exogenous information. Against the superior resiliency of open-outcry systems, Pirrong (1996) states that, during 'fast market' situations, order flow on the floor may be fragmented and different prices may coexist in different areas of the 'pit'. Following this argument, in 'fast markets' open outcry may introduce noise into the pricing process.
The most obvious effect of electronic trading is that it probably disrupts the liquidity and stability services provided by human intermediation. Intermediaries in the trading process, such as market makers and other 'locals', can anticipate future order imbalances, and their intervention can consequently reduce transitory volatility. These services are particularly valuable for less liquid stocks (Madhavan and Sofianos 1998). From a similar perspective, it is argued that human intermediation provides market stability in the presence of severe asymmetric information problems (Glosten 1989; Tse and Zabotina 2004). Conversely, Bloomfield, O'Hara, and Saar (2003) argue that in electronic systems informed traders can also provide liquidity to the order book through the submission of limit orders when the value of their information is low. According to these authors, this would explain why electronic markets can endogenously create liquidity even in the presence of information asymmetry.
It is worth noticing that stability and price discovery (i.e. dynamic price efficiency) may be conflicting market functionalities. If there is no human intermediation, the increase in information asymmetry may have a more pronounced negative impact on liquidity: the bid-ask spread widens and market depth declines. Informed trades will have a bigger price impact and market prices will incorporate information more rapidly (Easley and O'Hara 1992). But this market is less stable, in the sense that it exhibits large price volatility.
The ambiguity of the theoretical arguments about the impact of electronic trading on price discovery is also present in the empirical evidence, and it seems that the conflicting results are very sensitive to the intrinsic liquidity of the market under study. Grünbichler, Longstaff, and Schwartz (1994) use data for the German stock index (DAX), whose component stocks are traded on the floor at the Frankfurt Stock Exchange, and for the DAX index futures, which are screen-traded on the German Futures and Options Exchange (DTB), and conclude that screen trading accelerates the price discovery process. Martens (1998) focuses on the relative price discovery during the two extremes of 'fast markets' and very quiet periods for the Bund futures traded simultaneously at LIFFE, an open-outcry market, and the DTB, an automated exchange, and shows that the information share of LIFFE drops from 57.8% in high volatility periods to 33.8% in low volatility periods. Franke and Hess (2000) find that the DTB's market share is inversely related to price volatility and trading volume. Theissen (2002) examines the mid-quotes of the 30 stocks traded simultaneously for 3 h every day on the floor of the Frankfurt Stock Exchange and on the electronic system IBIS, and finds that the results favour the electronic market, with an estimated Gonzalo-Granger common factor weight of 0.65. Taylor et al. (2000) find that in the pre-SETS period the adjustment in the FTSE 100 futures market is faster than the adjustment in the spot market; however, this asymmetry disappears in the electronic trading period. Tse and Zabotina (2001) examine the impact of LIFFE CONNECT and find that the variance of the pricing error is about five times higher in the electronic period than in the open-outcry period.
Frino and McKenzie (2002) also study the impact of LIFFE CONNECT and conclude that the strengthening of the simultaneity of price discovery probably reflects the fact that the cash market is also screen traded, which enhances program trading and index arbitrage. Chng (2004) finds that electronic trades at LIFFE CONNECT are more than twice as informative as the previous floor trades and concludes that this originates from the increase in order flow visibility. Hasbrouck (2003) examines the relative price discovery of exchange-traded funds, regular futures contracts, traded through open outcry at the CME, and E-mini futures contracts, screen-traded on GLOBEX, on the S&P 500 and NASDAQ 100 indices, and estimates information shares of around 85% for the E-mini contracts. Kurov and Lasser (2004) revisit the work of Hasbrouck (2003), excluding the exchange-traded funds, and report that almost all price discovery is attributed to the electronic market, with information shares of about 98% and 96% for the E-mini S&P 500 and NASDAQ 100 futures, respectively. Ates and Wang (2005) also study the regular futures contracts and E-mini futures contracts on the S&P 500 and NASDAQ 100 indices, and find that, after the learning period and once the electronic system achieves a sufficient level of liquidity, this market presents an increasing informational superiority, with information shares of 84.2-89.5% in the maturity period. Brailsford et al. (1999) examine the impact of automated trading, introduced in September 1990 on the Australian stock market, on the information transmission between the stock and the futures markets, and find that automated trading provides a richer and timelier information set, which accelerates the price discovery process. Fung et al. (2005) study the switch, on 6 June 2000, of the Hang Seng Index futures (Hong Kong) from floor trading to electronic trading, and show that the futures information share increases from 56% to 66%, while the futures common factor weight increases from 0.602 to 0.664.

The 'partial adjustment with noise' model
The theoretical background for the derivation of several partial adjustment estimators is the 'partial adjustment with noise' model of Amihud and Mendelson (1987) and Damodaran (1993). This model is easily adapted to a bivariate price process of fundamentally related assets (Theobald and Yallup 1998).
Let us suppose that the futures (log) price process, denoted by f, and the underlying asset (log) price process, denoted by s, follow an adjustment process with noise:

m_t = m_t-1 + u_t,
p_i,t = p_i,t-1 + δ_i (m_t − p_i,t-1) + η_i,t,   i = s, f    (1)

In this structural model, the price process p_t = [p_s,t p_f,t]' depends on the information, specified by the latent fundamental price, m_t, and on a bivariate process η_t = [η_s,t η_f,t]' that we will, with a slight abuse of language, call noise. The existence of only one efficient price for the underlying asset and the futures contract is not a restrictive assumption in a high frequency setup because the fundamental determinants of the futures basis do not accrue intraday (Miller, Muthuswamy, and Whaley 1994).
The fundamental (log) price follows a random walk, and its innovations, u_t, have a permanent impact on both prices. The dynamics generated by information are completely described by the convergence rates δ_i, with i = s, f. In the hypothetical situation where δ_i = 1, prices are fully adjusted to information, in the sense that information is completely and immediately impounded into prices, but still not completely revealing due to the existence of noise. Accordingly, the 'life' of the adjustment process diminishes as δ_i approaches 1. To guarantee a finite adjustment process it is assumed that 0 < δ_i < 2. If 0 < δ_i < 1, then prices undershoot the efficient price, that is, in the presence of a permanent impact, evaluation errors always have the same sign and the sequence of prices provides valuable information. If 1 < δ_i < 2, then there are overshooting effects, as prices alternately overestimate and underestimate the efficient price; most particularly, there is contemporaneous overreaction. 1 Since the work of De Bondt and Thaler (1985), overshooting effects have been commonly explained by cognitive misperceptions of market participants. For example, overreaction can result from rational anticipation of positive feedback trading, characterized by buying (selling) pressure when prices rise (fall) (De Long et al. 1990), or can be the observable result of investors' overconfidence about the precision of their information signal, especially in the face of unreliable information pieces (Gervais and Odean 2001). Alternatively, Lehmann (1990) argues that observable returns overreaction can be originated by imbalances in the market's short-run (weekly) liquidity.
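The adjustment dynamics can be illustrated with a short simulation of the model above. The sketch below uses illustrative parameter values (not taken from the article): the slowly adjusting market (δ_s = 0.3) strays further from the latent efficient price than the fast one (δ_f = 0.9).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
sigma_u, sigma_eta = 1.0, 0.3       # illustrative information and noise scales
delta_s, delta_f = 0.3, 0.9         # spot adjusts slowly, futures quickly

u = rng.normal(0.0, sigma_u, T)
m = np.cumsum(u)                    # latent efficient (log) price: random walk

def partial_adjustment(delta, noise_sd):
    """Price path p_t = p_{t-1} + delta*(m_t - p_{t-1}) + eta_t."""
    p = np.empty(T)
    eta = rng.normal(0.0, noise_sd, T)
    prev = 0.0
    for t in range(T):
        prev = prev + delta * (m[t] - prev) + eta[t]
        p[t] = prev
    return p

p_s = partial_adjustment(delta_s, sigma_eta)
p_f = partial_adjustment(delta_f, sigma_eta)

# With delta < 1, pricing errors are persistent: the mean absolute deviation
# from the efficient price is larger in the slowly adjusting market.
mad_s = np.mean(np.abs(p_s - m))
mad_f = np.mean(np.abs(p_f - m))
print(mad_s, mad_f)
```

Raising δ towards 2 instead produces the overshooting (alternating over- and underestimation) described above.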
The second source of uncertainty, η, is modelled as two idiosyncratic noise processes with zero mean and variances σ²_ηs and σ²_ηf for the stock index and the futures contract, respectively. The noise variances aim to capture indistinguishably all microstructural imperfections, such as bid-ask bounce, inventory adjustments, short selling restrictions, taxes, price discreteness, etc., as well as temporary order imbalances caused by liquidity or noise trading.
In this model, there are information-induced price movements and there is 'everything else' (i.e. the second source of uncertainty is left unspecified). This does not mean that all information-induced price movements are in fact information, as they can be the result of overshooting effects and therefore are, in fact, noise. 2 There is no distinction between public and private information; all sources of information are condensed in the latent efficient price, and markets are distinct in the way they adapt to the information arrival process. This implies that the efficient price is resolved in the information market and thus is exogenous to the order flow. Differences between markets due to asymmetric information, different strategic behaviour of informed traders, different numbers of informed traders, different expectations, opinions or interpretations of public news are all assumed to be expressed by measurable differences between convergence speeds. If a specific market experiences an adjustment coefficient closer to unity, that means it incorporates new information more rapidly and, assuming that all the rest is the same, informationally dominates the other market. However, it remains to be known whether this is due to differences in gathering and interpreting public information or private information.
Considering the instantaneous return process R_i,t = p_i,t − p_i,t-1 (the first difference of the logarithmic price process), model (1) can be expressed as a moving average process in permanent and transitory shocks:

R_i,t = δ_i Σ_k≥0 (1 − δ_i)^k u_t-k + η_i,t − δ_i Σ_k≥1 (1 − δ_i)^(k-1) η_i,t-k    (2)

or, using the lag operator L,

[1 − (1 − δ_i)L] R_i,t = δ_i u_t + (1 − L) η_i,t    (3)

The return process in each market, defined by Equations (2) and (3), is therefore a function of two unrelated processes, {u_t} and {η_i,t}, but the permanent and transitory components of returns are contemporaneously correlated as long as δ_i ≠ 1; consequently, transaction prices are not strong-form efficient. In other words, information is exogenously produced but some part of it, depending on the value of |1 − δ_i|, is endogenously revealed through the trading process itself.
Equation (3) basically states that the return process is 'observationally' equivalent to an ARMA(1,1) process. Considering that (a) u_t is i.i.d. with mean zero and variance σ²_u and (b) the noise terms are i.i.d. with mean zero and variance σ²_ηi, the partial adjustment coefficient for each market can be estimated by δ̂_i = 1 − φ̂_i, where φ̂_i is the auto-regressive parameter estimate obtained from the return series.
Additionally, let us assume that (c) information innovations and noise processes are contemporaneously and serially uncorrelated, that is, Cov(u_t, η_i,τ) = 0, ∀t, τ, i. Then, from Equation (2), the variance of the individual return process is defined as follows:

Var(R_i) = (δ_i σ²_u + 2σ²_ηi) / (2 − δ_i)    (4)

Hence, the variance of returns depends both on noise and information, with the contribution of information-induced volatility being a function of the adjustment speed. One market can experience a higher volatility because it reacts faster or overreacts to new information, or simply because it is noisier.
The auto-covariances have the following functional form:

Cov(R_i,t, R_i,t-k) = δ_i (1 − δ_i)^(k-1) [(1 − δ_i)σ²_u − σ²_ηi] / (2 − δ_i),   k ≥ 1    (5)

Therefore, a second type of partial adjustment coefficient estimator is derived naturally from the auto-covariance ratios:

δ̂_i = 1 − Cov(R_i,t, R_i,t-k-1) / Cov(R_i,t, R_i,t-k),   k ≥ 1    (6)

A third adjustment coefficient estimator can be obtained from the cross-covariance ratios. Considering that (d) the noise processes are contemporaneously and serially uncorrelated, that is, Cov(η_i,t, η_j,τ) = 0, ∀t, τ, i ≠ j, the cross-covariances are given by

Cov(R_i,t, R_j,t-k) = δ_i δ_j (1 − δ_i)^k σ²_u / [1 − (1 − δ_i)(1 − δ_j)],   k ≥ 0    (7)

From the contemporaneous and first-order cross-covariances,

δ̂_i = 1 − Cov(R_i,t, R_j,t-1) / Cov(R_i,t, R_j,t)    (8)

In real markets, information and noise interact and prevent markets from being continuous and completely arbitraged away. In this simple partial adjustment model, markets are different to the extent in which noise is 'local' to each market but also because they may have different dynamic reactions to information. If the prices converge to the efficient price at different rates, the observed returns will exhibit asymmetrical lead-lag structures. From Equation (7), the ratio of the first-order cross-covariances,

Cov(R_f,t, R_s,t-1) / Cov(R_s,t, R_f,t-1) = (1 − δ_f) / (1 − δ_s),

is lower than unity when the futures market adjusts faster (δ_f closer to unity than δ_s); the futures market then exhibits a lead over the underlying stock market. Another recurring issue in the study of partial adjustment coefficients is the behaviour of the estimators with the sampling frequency. Let us suppose that the sampling interval t is divided into n equal subintervals l (i.e. n = t/l). Theobald and Yallup (2004) show that the auto-correlation coefficient at sampling frequency t, Corr{R(t), R(t − 1)}, can be expressed in terms of auto-covariances and variances at the differencing interval l. Denoting the auto-covariance of order k at differencing interval l as Cov(k) (the variance is denoted as Cov(0)), then

Corr{R(t), R(t − 1)} = Σ_{k=1..2n-1} (n − |n − k|) Cov(k) / [n Cov(0) + 2 Σ_{k=1..n-1} (n − k) Cov(k)]    (9)

Hence, the 'intervalling' properties of the auto-correlations ratio estimator depend on n and higher order auto-covariances.
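As a check, both ratio estimators can be applied to data simulated from the partial adjustment model. The sketch below uses illustrative parameters (σ²_u = 4, σ²_ηs = σ²_ηf = 1, δ_s = δ_f = 0.5); both the auto-covariance ratio and the cross-covariance ratio should recover δ ≈ 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
sigma_u, sigma_eta = 2.0, 1.0       # sigma_u**2 = 4, sigma_eta**2 = 1
delta_s, delta_f = 0.5, 0.5

u = rng.normal(0, sigma_u, T)
m = np.cumsum(u)                    # common efficient price

def prices(delta):
    """Partial adjustment price path driven by the common factor m."""
    p = np.empty(T)
    eta = rng.normal(0, sigma_eta, T)
    prev = 0.0
    for t in range(T):
        prev = prev + delta * (m[t] - prev) + eta[t]
        p[t] = prev
    return p

r_s = np.diff(prices(delta_s))
r_f = np.diff(prices(delta_f))

def cov_lag(x, y, k):
    """Sample Cov(x_t, y_{t-k})."""
    if k == 0:
        return np.cov(x, y)[0, 1]
    return np.cov(x[k:], y[:-k])[0, 1]

# Auto-covariance ratio estimator (lags 1 and 2):
delta_auto = 1 - cov_lag(r_s, r_s, 2) / cov_lag(r_s, r_s, 1)
# Cross-covariance ratio estimator (lags 0 and 1):
delta_cross = 1 - cov_lag(r_s, r_f, 1) / cov_lag(r_s, r_f, 0)
print(delta_auto, delta_cross)
```

Both estimates should be close to the true δ = 0.5, the cross-covariance version typically being the less noisy of the two here because the common factor variance dominates.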
If the higher order auto-covariances (in the summations of Equation (9)) are sufficiently small, then

Corr{R(t), R(t − 1)} ≈ Cov(1) / [n Cov(0)] → 0 as n increases    (10)

So, as the differencing interval increases, the adjustment coefficient estimates tend to unity. If the higher order auto-correlations are nonzero, over- and underreactions may occur at longer differencing intervals. Theobald and Yallup (1998) demonstrate that if the returns processes are completely described by the partial adjustment model, then the adjustment factor for a differencing interval of n periods, δ(n), is related to the partial adjustment factor for a single differencing interval, δ(1), according to

1 − δ(n) = [1 − δ(1)]^n    (11)

Therefore, when δ(1) = 1, δ(n) = 1; when δ(1) < 1, δ(1) < δ(n) < 1 and when δ(1) > 1, δ(1) > δ(n) > 1.
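The intervalling effect can be verified numerically by temporally aggregating simulated returns and re-applying the auto-covariance ratio estimator. The sketch below assumes illustrative parameters with δ(1) = 0.3; the estimates should climb towards unity as the aggregation level n grows.

```python
import numpy as np

rng = np.random.default_rng(2)
T, delta, s_u, s_e = 400_000, 0.3, 2.0, 1.0   # illustrative parameters
u = rng.normal(0, s_u, T)
m = np.cumsum(u)
p = np.empty(T)
eta = rng.normal(0, s_e, T)
prev = 0.0
for t in range(T):
    prev = prev + delta * (m[t] - prev) + eta[t]
    p[t] = prev
r = np.diff(p)

def delta_hat(x):
    """Auto-covariance ratio estimator at lags 1 and 2."""
    c1 = np.cov(x[1:], x[:-1])[0, 1]
    c2 = np.cov(x[2:], x[:-2])[0, 1]
    return 1 - c2 / c1

def aggregate(x, n):
    """Sum returns over non-overlapping blocks of n subintervals."""
    k = (len(x) // n) * n
    return x[:k].reshape(-1, n).sum(axis=1)

est = {n: delta_hat(aggregate(r, n)) for n in (1, 2, 5)}
print(est)   # estimates should increase towards 1 with n
```

With δ(1) = 0.3, the population values implied by Equation (11) are δ(2) = 0.51 and δ(5) ≈ 0.83.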

Bid-ask bounce, order flow dependencies and nonsynchronous trading
There are several economic factors that induce serial correlation in asset returns, with its effect varying according to the sampling frequency. Besides the lagged impact of information, which is integrally captured by the partial adjustment model, there are other microstructural factors that have been identified in the literature, such as the bid-ask bounce, temporal dependences in the order flow and nonsynchronous trading.
To account for the impact of the bid-ask bounce, Roll (1984) develops a simple model where buy and sell orders occur with the same probability. Assuming that the only source of noise is the bid-ask spread, defined as 2s, so that the feasible values for the ask and bid errors are s and −s with probability 1/2, the noise variance in the model stated by Equations (1) is σ²_η = s². If the prices fully adjust to information, δ = 1, and using the convention that 0⁰ = 1 (Hasbrouck and Ho 1987), then from Equations (4) and (5)

Var(R) = σ²_u + 2s²,   Cov(R_t, R_t-1) = −s²,   Cov(R_t, R_t-k) = 0, k ≥ 2    (12)

Thus, if the prices fully adjust to information, the bid-ask bounce increases the returns volatility and induces negative first-order auto-correlation. If the prices undershoot the efficient price, the bid-ask bounce produces negative auto-correlation of order higher than one, whereas if there are overshooting effects the auto-correlations have alternating signs. Independently of the adjustment process, the bid-ask spread influences the auto-covariances only via σ²_η, and therefore its impact cancels out in the auto-covariance estimator of δ_i. On the other hand, because the bid-ask bounce produces a fluctuation of transaction prices around the fundamental price, it is commonly modelled as an MA(1) process (see, e.g. Stoll and Whaley 1990). Therefore, when estimating the partial adjustment coefficient with the ARMA procedure, the bid-ask bounce is isolated by the MA component. Furthermore, in the case of a well diversified stock index, bid-ask errors in component stocks tend to compensate each other and their overall effect is probably trivial. Hasbrouck and Ho (1987) extend Roll's model and allow the order flow to follow an auto-regressive structure of order one. The authors assume that the mid-quote is subjected to a lagged adjustment process, and that the transaction price is equal to the mid-quote plus an error ε_t.
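Roll's result is easy to reproduce numerically. In the sketch below (illustrative values σ²_u = 1, s = 0.5), the efficient price fully adjusts (δ = 1) and the only noise is an equiprobable bid-ask bounce, so the return variance and first-order auto-covariance should approach σ²_u + 2s² = 1.5 and −s² = −0.25, respectively.

```python
import numpy as np

rng = np.random.default_rng(3)
T, s, sigma_u = 500_000, 0.5, 1.0
m = np.cumsum(rng.normal(0, sigma_u, T))       # efficient price (delta = 1)
bounce = s * rng.choice([-1.0, 1.0], size=T)   # equiprobable bid/ask executions
r = np.diff(m + bounce)

var_r = r.var()
cov1 = np.cov(r[1:], r[:-1])[0, 1]
print(var_r, cov1)   # approx sigma_u**2 + 2*s**2 and -s**2
```

The familiar Roll spread estimator, 2 * sqrt(-cov1), recovers the full spread 2s from the negative first-order auto-covariance.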
This error defines the bid-ask process conditionally on its previous occurrence, that is, ε_t ∈ {s, −s}, with a probability function given as

P(ε_t = ε_t-1) = (1 + ϑ)/2,   P(ε_t = −ε_t-1) = (1 − ϑ)/2    (13)

Because of the bid-ask error structure, transaction prices are now 'observationally' equivalent to an ARMA(2,2), with the auto-regressive parameters being jointly defined by the partial adjustment coefficient δ and the probability determinant ϑ. More precisely, φ_1 = (1 − δ) + ϑ and φ_2 = −(1 − δ)ϑ. Hence, the first auto-regressive parameter no longer provides an unequivocal estimate of the partial adjustment coefficient. Moreover, because the auto-covariances of order k of the {ε_t} process are proportional to ϑ^k, the auto-covariance ratios do not purge the order flow dependence, implying that they provide a biased estimator of the partial adjustment factors.
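The bias induced by order-flow persistence can be illustrated with a simulation. The sketch assumes the Markov parameterization above, in which a trade occurs on the same side as the previous one with probability (1 + ϑ)/2; with δ = 1 the auto-covariance ratio then converges to ϑ rather than zero, so the ratio estimator is pulled towards 1 − ϑ instead of 1.

```python
import numpy as np

rng = np.random.default_rng(4)
T, s, theta, sigma_u = 500_000, 0.5, 0.4, 0.1
# Trade side follows a Markov chain: same side as the previous trade with
# probability (1 + theta)/2 (the assumed order-flow persistence).
flips = rng.choice([1.0, -1.0], size=T, p=[(1 + theta) / 2, (1 - theta) / 2])
eps = s * np.cumprod(flips)                  # persistent bid-ask error
m = np.cumsum(rng.normal(0, sigma_u, T))     # mid-quote fully adjusted (delta = 1)
r = np.diff(m + eps)

def cov_lag(x, k):
    return np.cov(x[k:], x[:-k])[0, 1]

# The auto-covariance ratio no longer recovers delta = 1: the ratio of
# successive auto-covariances converges to theta, biasing the estimator.
delta_biased = 1 - cov_lag(r, 2) / cov_lag(r, 1)
print(delta_biased)   # near 1 - theta = 0.6, not the true delta = 1
```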
The Hasbrouck and Ho (1987) analysis puts into perspective the typical limitations of any attempt to estimate the partial adjustment coefficients. In general, if noise is a subordinated stochastic process, then the structure of this process is transferred into the returns auto-covariances and it is impossible to isolate noise from fundamental information innovations. In fact, any violation of assumptions (c) and (d) introduces an error in the estimation of δ_i. For example, in the presence of contemporaneous correlation between the information innovations and the noise processes and between the noise processes themselves, the multiplicative error of the cross-covariance estimator, derived in the Appendix as Equation (14), is a nonlinear function of δ_s, δ_f, σ²_u, σ_u,ηi and σ_ηi,ηj, implying that the estimated parameter will most probably be biased upwards, given that the two assets are related not only via the information innovations but also via the noise processes in both markets.
In order to purge the contemporaneous correlation effect, instead of using the estimator given by Equation (8), one could simply increase the order of both cross-covariances, and the consistent estimator would now be given as follows:

δ̂_i = 1 − Cov(R_i,t, R_j,t-k-1) / Cov(R_i,t, R_j,t-k),   k ≥ 1    (15)

However, there are no a priori reasons to believe that the fundamental processes are contemporaneously but not serially correlated, especially in a high frequency framework. In sum, the estimator in Equation (15) reduces the contemporaneous correlation effects of noise but increases the error produced by the serial correlations, while the opposite happens with the estimator in Equation (8).
Probably, the most important issue when estimating the partial adjustment coefficients for stock indices is the presence of stale prices in the index computations. Stale prices decrease the variance of well diversified portfolios and induce serial auto-correlation, positive for well diversified portfolios and negative for individual securities such as futures contracts; they decrease the contemporaneous cross-correlation between index and futures returns and increase the cross-correlation with lagged futures returns (Campbell, Lo, and MacKinlay 1997, 85-98). Theobald and Yallup (1998, 2004) argue that stale prices introduce a MA(q) component in the returns processes, where q is the maximum lagged 'true' return to have an impact on the observed current return. Campbell, Lo, and MacKinlay (1997, 92-94) show that, in the presence of nonsynchronous trading, observed portfolio returns follow a first-order auto-regressive process. Stoll and Whaley (1990) claim that stale prices should be modelled as an ARMA(p,q) process. In sum, it appears that nonsynchronous trading tends to contaminate the ARMA estimator and surely biases the auto-covariance and cross-covariance estimators.
The basic procedure developed by Theobald and Yallup (1998, 2004) to adjust the covariance ratio estimators for nonsynchronous trading consists in lagging the covariances by q periods; that is, the consistent auto-covariance and cross-covariance estimators in the presence of stale prices up to q_i lags in market i are, respectively,

δ̂_i = 1 − Cov(R_i,t, R_i,t-q_i-2) / Cov(R_i,t, R_i,t-q_i-1)    (16)

and

δ̂_i = 1 − Cov(R_i,t, R_j,t-q_i-1) / Cov(R_i,t, R_j,t-q_i)    (17)

Although these estimators remove the nonsynchronous trading effect, this is quite an ad hoc procedure because it simply implies that there is a priori knowledge about q_i. While there are theoretical and empirical reasons supporting the belief that in a highly liquid futures market q_f ≈ 0 even at higher frequencies, for the underlying index q_s is most probably different from zero and is a function of the type of variable used to compute the value of the index (mid-quotes or transaction prices) and the liquidity of each component stock.
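The lagged-covariance correction can be sketched in a simulation. The staleness mechanism below is an assumed one (each observed index return spreads the current and q lagged 'true' returns equally); the naive lag-1/lag-2 ratio is then contaminated, while the ratio taken past lag q recovers the true δ.

```python
import numpy as np

rng = np.random.default_rng(5)
T, delta, s_u, s_e, q = 400_000, 0.4, 2.0, 0.5, 2   # illustrative parameters
u = rng.normal(0, s_u, T)
m = np.cumsum(u)
p = np.empty(T)
eta = rng.normal(0, s_e, T)
prev = 0.0
for t in range(T):
    prev = prev + delta * (m[t] - prev) + eta[t]
    p[t] = prev
r_true = np.diff(p)

# Staleness: the observed index return averages the current and q lagged
# 'true' returns (an assumed smoothing scheme for illustration).
w = np.ones(q + 1) / (q + 1)
r_obs = np.convolve(r_true, w, mode="valid")

def cov_lag(x, k):
    return np.cov(x[k:], x[:-k])[0, 1]

naive = 1 - cov_lag(r_obs, 2) / cov_lag(r_obs, 1)           # contaminated
lagged = 1 - cov_lag(r_obs, q + 2) / cov_lag(r_obs, q + 1)  # lag past q
print(naive, lagged)   # lagged version should be near the true delta = 0.4
```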
The previous discussion suggests a two-step procedure when computing the partial adjustment coefficients from bivariate return processes (i.e. when using the cross-covariances estimator). First, an ARMA(p,q) is fitted to the univariate time series and then the cross-covariance estimator is applied to the model residuals (see Theobald and Yallup 1998, 2005). There is an important drawback in this procedure: the ARMA model not only mitigates the effect of noise-induced volatility and auto-correlation but also tends to normalize the adjustment processes across markets. Hence, without additional information, one cannot generally declare that the resulting estimates correspond to the partial adjustment coefficients for the common fundamental factor; they are in fact estimates of the relative adjustment process and accordingly should be interpreted in relative terms.
To support the above statement, we generate 100,000 pseudo-observations from the model given by Equations (1) with parameters {σ²_u = 4, σ²_ηs = σ²_ηf = 1, δ_s = δ_f = 0.5}. The ARMA(1,1) estimates are δ̂_s = 0.5026 and δ̂_f = 0.5044, the cross-covariance estimates obtained from unfiltered returns are δ̂_s = 0.4922 and δ̂_f = 0.4969 and the cross-covariance estimator applied to the residuals of an ARMA(1,1) gives δ̂_s = 0.6143 and δ̂_f = 0.6175. So, even when the data satisfy all the basic assumptions (a) through (d) stated in the previous section, the two-step procedure outlined previously only gives the correct adjustment coefficient estimates if σ²_u ≪ 2σ²_ηi, with i = s, f. In the above simulation, the information-induced variance is higher than the noise-induced variance; consequently, the estimates from the ARMA residuals are biased upwards, but they correctly assign a similar adjustment process to both markets.
The above discussion motivates the use of multiple adjustment factor estimators, in order to obtain a more reliable picture about the adjustment processes in both markets.

Data and descriptive statistics
The empirical analysis is performed on the FTSE 100 index and FTSE 100 futures contracts over a sample of more than three years, from 15 January 1997 to 17 March 2000.
The futures data were extracted from the LIFFEstyle CD-ROM. These files contain seven columns corresponding to date, time stamp to the nearest second, trade price indicator, delivery month, price, volume and trading platform. Besides the usual indicators for transaction and bid and ask prices, there are also special indicators for transactions embedded in declared spread and delta neutral strategies. However, the special indicators for wholesale transactions ('J' for basis trading and 'K' for block trading) are not available during the period under scrutiny. Futures prices are expressed in index points multiplied by 10. The trading platform column indicates the price origin: 'floor' or electronic platform, designated by 'APT'. The index data have three columns for dates, time stamps and index values. The index was computed at regular time intervals, at a reporting frequency of 1 min before 23 February 1998, and 15 s afterwards.
The overall sample was partitioned into three subsamples according to the existing trading mechanisms. The main concern when designing these subsamples was to compare similar stages of maturation of the trading systems; this was particularly relevant when defining the period used to measure the effects of the introduction of SETS. This new electronic platform met some hostile reactions, particularly from large UK-based investors who were comfortable with the opaqueness of SEAQ and with their network of close relationships with market makers. The abandonment of the computerized settlement system TAURUS on 11 March 1993, widely covered in the specialized press, generated a lack of confidence in the ability of the LSE to implement and manage electronic projects. Furthermore, the first months of trading, characterized by low liquidity and erratic pricing, provoked fierce criticism of the new trading system. One year after the introduction of SETS, the system was considered to be quite successful (Naik and Yadav 1999).
To discard the transition period of SETS, our study only considers for the second subsample (denoted hereafter as P2), when shares were traded on SETS and futures were traded on the floor, the data from 20 July 1998 to 7 May 1999. 3 The other two subsamples were obtained by considering, after filtering and sampling, the number of observations closest to that of P2 for an integer number of days. The first subsample corresponds to the period from 15 January 1997 to 17 October 1997, when UK shares were traded on SEAQ and FTSE 100 futures were traded on the floor (subsample named P1). The third subsample corresponds to the period from 10 June 1999 to 17 March 2000, when SETS was in place at the LSE and the FTSE 100 futures were already trading on LIFFE CONNECT (denoted as P3).
In order to obtain a synchronized time series for the FTSE 100 index and futures contract, several filtering rules were applied to the raw data: (1) only futures transaction prices, with the marker 'Trd', that is, not embedded in spread or delta trades, were considered; (2) the rollover procedure for the futures contract was based on the trading activity measured by the number of trades per day; (3) for each trading day, the time series include only those prices when the stock market and the futures markets were open; (4) days missing completely or in part (with a gap of more than 30 min) for at least one series were removed from the sample 4 ; (5) for each day, the first 5 min of common trading were discarded and (6) for the LIFFE CONNECT period all prices with a corresponding volume equal to 750 or more contracts were removed from the series. 5 Finally, the futures series were sampled minute-by-minute, using the last price before the sampling point. The final price series have a total of 87,043, 86,970 and 87,300 observations, which corresponds to 193, 195 and 180 trading days for the three periods, respectively. The number of minute-by-minute prices considered in each trading day depends on the common normal trading hours in the two markets (excluding trading in the electronic facility APT when the central market at LIFFE was conducted on the floor). Therefore, excluding the first five common trading minutes, a particular day can have a total of 451, 446 or 506 minute-by-minute prices.
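The final minute-by-minute sampling step (last price at or before each sampling point, carried forward through empty minutes) can be sketched in pandas with a hypothetical handful of ticks; the timestamps and prices below are invented for illustration.

```python
import pandas as pd

# Hypothetical tick data: timestamped futures trade prices (index points x 10).
ticks = pd.DataFrame(
    {"price": [62105, 62110, 62100, 62120, 62115]},
    index=pd.to_datetime(
        ["2000-01-05 09:01:12", "2000-01-05 09:01:48",
         "2000-01-05 09:03:05", "2000-01-05 09:03:59",
         "2000-01-05 09:05:30"]
    ),
)

# Minute-by-minute series: the last trade in each minute, carried forward
# through minutes with no transactions (stale-price convention).
sampled = ticks["price"].resample("1min").last().ffill()
print(sampled)
```

In the actual data the same idea would be applied per trading day, after the filters listed above, so that a gap minute carries the last observed futures price.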
The statistical properties of FTSE 100 index and FTSE 100 futures minute-by-minute percentage logarithmic returns are reported in Table 1.
For both markets the mean is almost equal to zero. The standard deviation of futures returns is higher than that of index returns, particularly in the first period, where spot volatility is about one-quarter of the futures volatility. The distributions of returns are leptokurtic, exhibiting negative skewness and fat tails, especially for the index during P2. The excess kurtosis of the index relative to the futures contract during P2 and P3 most probably reflects the persistence of some of the problems affecting the implementation of SETS. The auto-correlations of the index returns are all positive and decrease markedly through the overall sample, implying that the effect of nonsynchronous trading on the observed index returns has most probably diminished over time, particularly with the introduction of SETS. It seems, then, that actual transaction prices in SETS are revised more frequently than the market makers' mid-quotes in SEAQ.6 Assuming that markets are completely efficient and that the FTSE 100 index measures a perfectly diversified stock portfolio, the first-order auto-correlation coefficient of the index returns provides a simple estimate of the nontrading probability (Campbell, Lo, and MacKinlay 1997, 93). According to this simple procedure, during P1 the nontrading probability is 57.66% (i.e. on average, less than half of the index value is updated in each minute). For P2 and P3, the nontrading probabilities are just 12.42% and 7.52%, respectively.
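The nontrading estimate above can be sketched as follows. Under the Campbell, Lo, and MacKinlay framework, observed returns on a well-diversified index behave approximately as an AR(1) with coefficient equal to the nontrading probability, so the lag-1 auto-correlation estimates that probability; the simulation and function name below are illustrative.

```python
import numpy as np

def lag1_autocorr(r):
    """First-order auto-correlation coefficient of a return series."""
    r = np.asarray(r, dtype=float)
    d = r - r.mean()
    return float((d[1:] * d[:-1]).sum() / (d * d).sum())

# Simulate observed index returns as an AR(1) in the 'true' returns,
# with coefficient equal to the nontrading probability pi.
# The value 0.5766 is the P1 estimate reported in the text.
rng = np.random.default_rng(0)
pi = 0.5766
x = rng.normal(0, 1, 50000)     # 'true' (virtual) index returns
obs = np.empty_like(x)
obs[0] = x[0]
for t in range(1, len(x)):
    obs[t] = pi * obs[t - 1] + (1 - pi) * x[t]
pi_hat = lag1_autocorr(obs)
print(round(pi_hat, 3))  # close to the nontrading probability 0.5766
```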
The auto-correlations of the FTSE 100 futures returns are clearly lower than those of the index, and arguably are economically insignificant. The number of zero futures returns is about 40% in P1, 24% in P2 and 16% in P3. However, the percentages of minutes with no transactions are only 14.89%, 6.55% and 3.30% in the three periods, respectively, meaning that the existence of stale prices in the futures series is not a severe problem, especially in P2 and P3.
Finally, the auto-correlations and the Ljung-Box portmanteau tests indicate that both the index and the futures squared returns present high linear dependence which is typical in high frequency data (see, e.g. Areal and Taylor 2002).

Empirical results
As a preliminary study of the FTSE 100 index and futures partial adjustment coefficients, Table 2 presents the estimates of these factors using the auto- and cross-covariance ratios for 1-min returns.
When no allowance is made for the existence of stale prices (Lag 0), the auto-covariance ratio estimator yields a factor of 0.2379 for the index during P1 and slightly higher factors during P2 and P3, these being 0.3582 and 0.3356, respectively. The estimated factors for the FTSE 100 futures contract have similar magnitude except for P3, where the point estimate of 1.3557 is significantly higher than one. Although the futures estimate in P1 is relatively small, the t-statistic is only 0.99, so the null hypothesis that this coefficient equals one cannot be rejected.
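A sketch of the auto-covariance ratio estimator follows, under the standard partial adjustment assumption that observed returns follow an AR(1) with coefficient δ = 1 − π, so that the ratio γ(2)/γ(1) of auto-covariances estimates δ; the simulation and function names are ours, not the authors'.

```python
import numpy as np

def autocov(r, lag):
    """Sample auto-covariance of a return series at the given lag."""
    r = np.asarray(r, dtype=float) - np.mean(r)
    return float(np.mean(r[lag:] * r[:-lag])) if lag else float(np.mean(r * r))

def adjustment_factor_autocov(r):
    """Partial adjustment factor pi = 1 - delta, with delta estimated
    by the auto-covariance ratio gamma(2) / gamma(1)."""
    delta = autocov(r, 2) / autocov(r, 1)
    return 1.0 - delta

# Simulate observed returns from a partial adjustment process:
# an AR(1) with coefficient delta = 1 - pi (illustrative model).
rng = np.random.default_rng(1)
pi_true = 0.65
delta = 1 - pi_true
n = 200000
eps = rng.normal(0, 1, n)
r = np.empty(n)
r[0] = eps[0]
for t in range(1, n):
    r[t] = delta * r[t - 1] + eps[t]
factor = adjustment_factor_autocov(r)
print(factor)  # close to the true adjustment factor 0.65
```

The cross-covariance version replaces the own-series covariances with covariances against the other market's returns, as detailed in the notes to Table 2.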
The cross-covariance ratios offer a completely different ordering of the partial adjustment coefficients. Although we have already suggested that the FTSE 100 index may have been largely computed with stale prices during the period when the stocks were traded on SEAQ, a negative cross-covariance estimate of −0.3175 is not a reasonable figure for the adjustment factor. In P2, the index cross-covariance factor increases to 0.0341 and in P3 it increases to 0.2037. This provides some evidence that the implementation of SETS has reduced the effect of stale prices in the reported index. This decrease may be due to higher operational or informational efficiency in the stock market, or may simply be the result of new computing and/or reporting procedures. The cross-covariance factors for the futures contract are remarkably stable, with estimates of 0.6139, 0.6873 and 0.6711 for the three subsamples, respectively.
The index results seem quite unreliable. Conceivably, the existence of stale prices in the index is biasing the auto-covariance estimator upwards and the cross-covariance estimator downwards.
For i = s, f and i ≠ j. The rows 'Lag q' represent the additional lag introduced in the covariances, that is, q_i, to account for the stale price effect in the index. The t-statistics for π_i = 1 − δ_i are presented in parentheses. For the auto-covariance ratio estimator, the t-statistic uses σ²_{ε_i}, the estimated variance of the disturbance in the regression R_{i,t−2−q_i} = a + b R_{i,t−1−q_i} + ε_{i,t}. For the cross-covariance ratio estimator, the t-statistic uses σ²_{ε_j}, the estimated variance of the disturbance in the regression R_{j,t−1−q_i} = a + b R_{j,t−q_i} + ε_{j,t} (Theobald and Yallup 1998, 2004).
For the futures contract, other microstructural processes might also be nontrivially affecting the auto-covariance estimates, particularly during P3. When lags are introduced into the auto-covariance and cross-covariance ratios for the index, an interesting pattern emerges in P1: the auto-covariance estimates remain at around 0.13-0.14 until lag 9, while the cross-covariance estimates tend to increase until lag 7. This suggests that during P1 the FTSE 100 index may have been computed with stock prices lagged by at least up to 7 min. The same reasoning implies that during the other subsamples the effect of stale prices is lower, with the index being computed with stock prices lagged by up to 3 min during P2 and up to 2 min during P3. Theobald and Yallup (2004) present some evidence on the superiority of the ARMA estimator over the auto-covariance estimator. In order to obtain more precise insights into the partial adjustment coefficients, Panels A, B and C of Table 3 report the ARMA estimates and some diagnostic statistics for different sampling frequencies, using the sampled returns without any previous filtering.
For the index during P1, at 1-min frequency, the moving average order chosen by AIC is seven, the estimated factor is only 0.1499, the R-squared is 36.13% and the Ljung-Box portmanteau tests indicate a highly significant correlation structure in the ARMA innovations and squared innovations. Once again, all these results support the hypothesis of severe stale price effects in the index during P1. At a sampling frequency of 1 min, the other subsamples exhibit ARMA factors not statistically different from unity, except for the futures series during P3, where the estimated coefficient is only 0.2054.
As predicted by the theory, decreasing the sampling frequency has only trivial effects on the estimated factors when these factors are not statistically different from unity at higher frequencies.
Although this is the general tendency for the futures returns during P1 and for the index and futures returns during P2 and P3, the estimates of the first-order auto-regressive parameter are somewhat unstable. This instability reflects the existence of significant higher-order auto-correlations (at 1-min frequency) and the interaction between the first-order moving average parameter and the first-order auto-regressive parameter. When the auto-regressive estimates for each sampling frequency are averaged, part of this instability disappears, lending further support to the previous claims. Once again, results differ for the index during P1, where the average ARMA factor estimates increase almost monotonically towards unity with the differencing interval, providing some evidence that it takes at least 30 min for the index to completely adjust to new information.
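The averaging across sampling phases used above (at a 3-min frequency, three interleaved series are estimated and the coefficients averaged) can be sketched as follows; for brevity the per-phase coefficient is estimated here by OLS rather than a full ARMA fit, and all names are illustrative.

```python
import numpy as np

def ar1_coeff(r):
    """OLS estimate of the first-order auto-regressive coefficient."""
    r = np.asarray(r, dtype=float) - np.mean(r)
    return float((r[1:] @ r[:-1]) / (r[:-1] @ r[:-1]))

def averaged_adjustment_factor(prices, m):
    """Average the AR(1)-based adjustment factor 1 - phi_1 over the m
    interleaved series obtained by sampling every m-th price with
    offsets 0, 1, ..., m-1 (phase averaging)."""
    factors = []
    for offset in range(m):
        p = prices[offset::m]            # one sampling phase
        r = np.diff(np.log(p))           # log returns at frequency m
        factors.append(1.0 - ar1_coeff(r))
    return float(np.mean(factors))

# Illustrative random-walk prices: every phase should give a factor
# near 1, so the averaged estimate should also be near 1.
rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 1e-3, 30000)))
avg = averaged_adjustment_factor(prices, 3)
print(avg)  # near 1 for a random walk
```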
If the order flow process is an important determinant of the returns auto-correlation structure as predicted by Hasbrouck and Ho (1987), one would expect a significant second-order auto-regressive parameter, as long as the adjustment coefficient is sufficiently different from unity. Furthermore, the order flow effect would be more visible in the futures returns at higher frequencies, because the effects in the constituent stocks are probably diversified away in the FTSE 100 index. The results on the significance of the second-order auto-regressive parameter for the futures market are not conclusive, most probably due to the fact that adjustment coefficients are approximately equal to unity.
The significant auto-correlation in the residuals, and most particularly in the squared residuals, of the ARMA(1,q) casts some doubt on the adequacy of this model. Table 4 reports the ARMA estimators applied to the returns divided by the conditional volatility estimated by a GARCH(1,1).7 Although some significant auto-correlation structure still remains at higher frequencies, the two-step procedure is generally successful in removing most of the auto-correlation in the returns and squared returns. The index during P1 is once again the exception.
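The volatility-standardization step can be sketched as follows. In the article the GARCH(1,1) parameters are estimated; here they are fixed at assumed values purely to illustrate the filtering, and the function name is ours.

```python
import numpy as np

def garch11_volatility(r, omega, alpha, beta):
    """Conditional volatility from a GARCH(1,1) recursion:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    r = np.asarray(r, dtype=float)
    h = np.empty_like(r)
    h[0] = omega / (1 - alpha - beta)  # start at the unconditional variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return np.sqrt(h)

# Two-step illustration: simulate GARCH(1,1) returns, then divide by
# the conditional volatility; the standardized returns should have
# variance near 1 and no remaining volatility clustering.
rng = np.random.default_rng(4)
omega, alpha, beta = 0.05, 0.08, 0.90   # assumed parameter values
n = 20000
z = rng.normal(0, 1, n)
r = np.empty(n)
h = omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(h) * z[t]
    h = omega + alpha * r[t] ** 2 + beta * h
std = r / garch11_volatility(r, omega, alpha, beta)
print(round(float(np.var(std)), 2))
```

The ARMA estimators of Table 4 would then be applied to `std` instead of the raw returns.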
The European Journal of Finance 629

Table 3. ARMA factors at different frequencies for sampled returns.

Notes to Table 3: In Panels A, B and C, partial adjustment coefficients are estimated for different sampling frequencies, shown in the first column (in minutes); the number of observations for each regression ('Nobs') is displayed in the second column. The estimating procedure is the following: first, the auto-regressive component is fixed at order one and the moving average order is chosen according to the Akaike information criterion among competing models of order 0 to 7; then the auto-regressive coefficient is used to compute 1 − φ_1 and test H_0: φ_1 = 0, which is equivalent to a t-test on δ = 1. For each estimation, the table also shows the moving average order, q, the R-squared, R² (negative values are rounded to zero), and the Ljung-Box portmanteau tests for up to 10th-order serial correlation in the return innovations and up to 5th-order serial correlation in the squared return innovations, denoted Q(10) and Q²(5), respectively. The significance of the second-order auto-regressive parameter (column φ_2) is tested using the t-statistic on an ARMA(2,q). The last column, (1 − φ̄_1), is the average of the adjustment coefficients estimated from ARMA(1,q) models over every possible time series obtained from different sampling points: for example, at a 3-min frequency, three ARMA models are estimated, the first using returns computed from prices (observations) 1, 4, 7, …, the second from prices 2, 5, 8, …, and so on; the estimates are then averaged. **, * Statistics significant at the 1% and 5% levels.

Notes to Table 4: The only difference from Table 3 is that, before the estimation of the ARMA models, returns are filtered for the conditional volatility estimated by a GARCH(1,1). Tests are conducted on R̃_{i,t} = R_{i,t}/√h_{i,t}, where R_{i,t} is the sampled return and h_{i,t} is the conditional variance of asset i estimated by the GARCH(1,1). Partial adjustment coefficients are estimated for different sampling frequencies, shown in the first column (in minutes).
See Table 3 for the construction of the other columns. **, * Statistics significant at the 1% and 5% levels.

Results in Table 4, in conjunction with the previous remarks drawn from Tables 2 and 3, lead to the following conclusions: (1) At 1-min frequency, the absolute partial adjustment coefficient for the index market during P1 was relatively low, somewhere between 0.15 and 0.20. During P1 it took more than 30 min for the reported index to complete the adjustment process to new information. The introduction of SETS decreased the importance of stale prices in the index computation and strengthened the index adjustment process to new information. The increased absolute and relative efficiency of the FTSE 100 index is even more visible during P3.
(2) During the three subsamples the relative partial adjustment coefficient of the futures contract at 1-min frequency was around 0.60-0.70. This factor increased during P2 and decreased with the introduction of LIFFE CONNECT. During P3 it takes, on average, 1 or 2 min more for the futures prices to be fully adjusted to new information than in P1.
Arguably, in a high frequency framework, the auto-correlation ratio and the auto-regressive parameter in the ARMA model are quite noisy estimators of the partial adjustment factor. First, the difference between the ARMA point estimates and the ARMA averaged estimates and, second, the existence of estimates economically different from unity coupled with statistical inference in the opposite direction suggest that the use of univariate series provides very poor results. Ultimately, this is probably due to the presence of several structured microstructural variables, the existence of stale prices in the index and the fact that the data are not regularly spaced.
Cross-covariance adjustment factors estimated at different sampling frequencies, using unfiltered (sampled) returns (CCU), ARMA innovations (CCA) and ARMA-GARCH(1,1) standardized innovations filtered for conditional volatility (CCG), are shown in Panels A, B and C of Table 5. The reported results are strikingly coherent and economically meaningful.
At 1-min, the CCG estimates support the previous claim that the partial adjustment factor for the index during P1 was situated between 0.15 and 0.20. At a 30-min frequency, all index estimates are still significantly different from unity and there is some evidence that the 'true' factor is approximately equal to 0.80.
On the one hand, it appears that the implementation of SETS introduced more noise into the index price formation process at 1-min frequency; on the other hand, it also diminished the presence of stale prices in the index. This is discernible from the fact that during P2 the estimates for the index at 1-min frequency are all positive, although the CCG estimate is lower than the corresponding figure in P1.
Index estimates from the three procedures present a tendency to be closer to each other, not only through the decrease in the sampling frequency for a given subsample, but also as time elapses. For P2, at a 5-min frequency, the estimates are 0.4853 (CCU), 0.5946 (CCA) and 0.5703 (CCG); at half-hour frequency, the index price formation is completely dominated by information and the estimators are around 0.87. In P3, even for higher frequency returns the three estimates are very close; at 1-min, the estimated factors for the index are 0.2037, 0.2668 and 0.2760. Hence, from P2 to P3, partial adjustment factors have increased on average by about 0.20 at higher frequencies and by approximately 0.05 at lower frequencies. The regularity of this increment highlights the existence of pronounced learning and market maturation effects in SETS, which continued even 2 years after the implementation of this electronic trading system.
The FTSE 100 futures market is more efficient than the underlying asset through all the subsamples. On average, at 1-min frequency, the futures factor is about three times higher than the corresponding index factors. Also at this frequency, the futures partial adjustment factors remain relatively stable through the overall sample; however, there is a slight increase from around 0.61 in P1 to 0.69 in P2 and a small decrease to 0.65 during P3. During P1, it takes at least 10-15 min for futures prices to be relatively fully adjusted; during P2 this figure is only 7 min and in P3 it is 8 min.

Notes to Table 5: These coefficients are computed using the cross-covariances from unfiltered returns (column 'CCU'), the cross-covariances from ARMA innovations (column 'CCA') and the cross-covariances from innovations adjusted for the conditional volatility using ARMA-GARCH(1,1) (column 'CCG'). The moving average order for the ARMA process is chosen according to the AIC criterion. The t-statistics, reported in parentheses, use σ²_{ε_j}, the estimated variance of the disturbance in the regression R_{j,t−1−q_i} = a + b R_{j,t−q_i} + ε_{j,t} (Theobald and Yallup 1998, 2004). **, * Significantly different from unity at the 1% and 5% levels.
At 30-min sampling frequency, the difference between the index and the futures factors for the unfiltered returns are about 0.41 in P1, 0.10 during P2 and 0.04 during P3. When filtering for nonsynchronous trading and heteroscedasticity (CCG), the differences remain almost unchanged for P2 and P3, but suffer a pronounced decrease (about 46%) during P1. Hence, this provides some evidence that the differences in the estimated adjustment processes in the index and futures markets are not solely explained by nonsynchronous trading.

Conclusions
The main empirical findings can be summarized as follows:
(1) In all subsamples, the FTSE 100 futures market reacts more rapidly to information than the underlying index and accordingly converges more quickly to the full-information equilibrium.
(2) The implementation of SETS has substantially reduced the nonsynchronous trading effects found in the index. With the new electronic system, the FTSE 100 index provides more reliable and timely information on stock market movements.
(3) The introduction of SETS has enhanced the informational efficiency of the UK stock market. This is discernible from (a) the increase in the number of ARMA estimates at different frequencies not statistically different from one and (b) the rate at which all cross-covariance estimates increase with the sampling interval.
(4) SETS has collaterally increased the level of noise, at least during P2. This is supported by (a) the reduction of the cross-covariance estimate for the ARMA-GARCH adjusted innovations at 1-min frequency and (b) the increase of the futures cross-covariance estimates at frequencies higher than 15 min.
(5) There is some evidence that LIFFE CONNECT has not only slowed down the adjustment process but also increased the relative level of noise in the futures price formation process. Several results point in this direction: (a) the number of ARMA estimates statistically different from unity increases in P3, (b) all estimates at 1-min frequency (except the one resulting from the auto-covariance ratio) have decreased substantially and (c) the unfiltered, ARMA and ARMA-GARCH cross-covariance estimates decrease, on average, at all frequencies.
In accordance with the mainstream of the literature, the results highlight that the new trading systems have increased the level of market informational integration. However, it seems that the new trading systems have also increased the level of the market's endemic noise, which supports the claim that electronic trading disrupts the liquidity and stability services provided by human intermediation and implies that electronic trading is quite sensitive to order imbalances and noise-induced volatility. Hence, normatively, this provides some evidence of the need for trading mechanisms that mitigate those problems, such as price limits, trading halts and auction procedures. The microstructure history of the UK stock and futures markets shows that these trading protocols have indeed received a great deal of attention since the implementation of electronic trading systems.

Acknowledgement
The present work is part of my PhD thesis in Financial Markets at the Management School, Lancaster University, and was supported by the POCTI program of the 'III Quadro Comunitário de Apoio' sponsored by the Portuguese Foundation for Science and Technology and the European Social Fund. Thanks are also due to two anonymous referees, Gary Xu, Nelson Areal, Michael Theobald, Pradeep Yadav, Erik Theissen and, most especially, to my supervisor, Stephen J. Taylor. The usual disclaimer applies.

Notes
1. Although the term 'overreaction' has normally been associated with long-horizon investments, where 'short-run' refers to a period of weeks or even months (see, e.g. Jegadeesh and Titman 1995), several papers, among which stands out Lee (1993, 1995), provide empirical evidence that high frequency returns may also overreact to news events. Accordingly, the 'intraday overreaction hypothesis' has already been empirically tested (see, e.g. Grant, Wolf, and Yu 2005).
2. One theoretical example of noise associated with information is found in the noisy rational expectations model of Brown and Jennings (1990). Here, traders are endowed with an informative signal; however, this signal also contains noise, which is carried through trading to the return process.
3. There are some arguments that support the choice of 20 July for the beginning of this subsample: first, the minimum order size was removed from SETS and the maximum order size was increased from 10 NMS to 20 NMS one month before; second, the futures tick value diminished from £12.5 to £5 after the contract of June 1998; finally, and more importantly, on 20 July 1998 SETS began to open half an hour later, at 9:00, in response to the lack of liquidity and disturbed price formation at the opening.
4. This rule not only implies the synchronization of the two time series, but also guarantees the continuity of the same trading mechanism for each trading day (exclusion of trading halts and related auctions) and excludes atypical days such as 24 December and 31 December, when both markets are open only until 12:30.
5. Undoubtedly, this is the most controversial filtering rule, but the economic reasons underlying the exclusion of big transactions during P3 are quite compelling. In this period, the average volume per trade was only 3.74 contracts and 90% of all trades had a volume of seven contracts or fewer; therefore, it is highly probable that trades with a volume of 750 contracts or more originated outside the order book through the wholesale facilities. Because block prices can be quite different from the prevailing prices in the order book at the time when a big trade is reported, the recorded prices show a jump and immediately afterwards a reversion to the main stream of prices, inducing spurious volatility and especially higher kurtosis. During floor trading, the 259 reported transactions with a volume equal to or higher than 750 contracts have no discernible impact on the returns series in transaction time; in LIFFE CONNECT, however, the 310 reported big trades increase the kurtosis from 54 to 1414 (i.e. by a factor of 26).
6. The unusually high index auto-correlations during P1 are probably the result of the filtering and sampling procedures used when constructing the price series. The exclusion of data when one market is closed, the removal of the first 5 min of each trading day and the extraction of days with gaps over 30 min may have led to an overestimation of nontrading intervals for the individual stocks. For instance, when overnight returns are included, the index presents a first-order auto-correlation of only 0.0997, 0.1101 and 0.0636 for the three subsamples, respectively.
7. Other model specifications for the conditional variance were also tested, namely increasing the GARCH order, using an EGARCH, or alternative error distributions such as the standardized Student t distribution and the GED. The properties of the adjusted residuals were similar to those of the GARCH(1,1) standardized innovations.