This article by Glattfelder, Dupuis and Olsen, brought to my attention by a reader, proposes an empirical set of scaling laws that apply to FX markets.

After considering them with a view to devising an interesting trading indicator, the problem appears to be that these laws mostly concern averages taken over 5 years, which is a serious limitation for their applicability over short periods of time.

Nonetheless, I identified one, law (12), that may be of interest, provided some further work:

This law (applied to the total move, *=tm) gives the length of the coastline for a given pair over a year of activity (250 days), as a percentage, relative to a resolution defined as the directional-change threshold (cf. section 2.3 of the article).
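If I recall the paper's generic presentation correctly, all its scaling laws share a single power-law form; here is a minimal sketch of it. The parameters C and E below are placeholders of my own, not the EUR/USD values from Table A19:

```python
# A minimal sketch of the generic scaling-law form used throughout the
# article, y(dx) = (dx / C) ** E. The parameters C and E here are
# placeholders; the actual EUR/USD values for law (12) come from
# Table A19 of the article and are not reproduced.
def scaling_law(dx: float, C: float, E: float) -> float:
    """Evaluate a power-law scaling relation at threshold dx."""
    return (dx / C) ** E

print(scaling_law(4.0, 2.0, 3.0))  # (4/2)**3 = 8.0
```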

Considering the case without transaction costs (an assumption, I think, justified by the small scale considered), I then look at Table A19 for the parameters of the law for the currency pair I am interested in. In the following I will consider EUR/USD, which is the pair I trade most often; the law therefore becomes:

As I am interested in moves of around 10 PIPs, I shall consider a resolution of 0.001 for EUR/USD, so:

Which gives me a resolution between 12 and 14 PIPs (for the current value of EUR/USD), since the 0.001 threshold is expressed relative to the price.
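The 12-14 PIP range follows directly from treating the 0.001 threshold as a fraction of the quoted price; a small sketch of that arithmetic, assuming the standard 1 pip = 0.0001 for EUR/USD (which the post does not state explicitly):

```python
# Convert a relative directional-change threshold into pips.
# Assumption: 1 pip = 0.0001 for EUR/USD (standard convention).
PIP = 0.0001

def threshold_in_pips(threshold: float, price: float) -> float:
    """Absolute size, in pips, of a relative directional-change threshold."""
    return threshold * price / PIP

# EUR/USD quoted between 1.20 and 1.40 gives a 12-14 pip resolution:
for price in (1.20, 1.30, 1.40):
    print(price, threshold_in_pips(0.001, price))
```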

As a result, I get:

This is the annualised length of the coastline. I am more interested in this length over 15 minutes, so I have to divide it by 250 × 24 × 4 (the number of 15-minute intervals in 250 trading days), for a result of:

Which is equal to about 520 PIPs (taking 1.35 for EUR/USD) as the length of the coastline over 15 minutes.
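The unit conversion can be sketched in a few lines. Note that the annualised coastline value used below (roughly 92,400% of the price) is back-computed from the 520-pip figure, not taken from Table A19, so treat it as purely illustrative:

```python
# Unit conversion behind the "about 520 pips per 15 minutes" figure.
# The annualised coastline below (~92,400% of the price) is back-computed
# from the post's result, NOT taken from the paper's Table A19.
INTERVALS_PER_YEAR = 250 * 24 * 4   # 15-minute intervals in 250 trading days
PIP = 0.0001                        # assumed pip size for EUR/USD

def coastline_15min_pips(annual_coastline_pct: float, price: float = 1.35) -> float:
    """Annualised coastline (in percent of the price) -> pips per 15 minutes."""
    annual_price_units = annual_coastline_pct / 100.0 * price
    return annual_price_units / INTERVALS_PER_YEAR / PIP

print(round(coastline_15min_pips(92_400)))  # about 520
```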

This information is the best I can extract so far from the scaling laws described in the article. It may be used to determine the width of a channel (volatility), though even for this it needs to be included in further calculations (that will likely use the Graph Dimension, or the Hurst exponent). I am currently thinking of ways to do that, and will publish any success I may have with this line of thought in the future.

## 15 comments:

JP,

Possibly I'm missing the point here, but how is the *average* going to help you?

The whole point behind fractals is to highlight the fact that probabilities do not diminish exponentially as the size of the move increases.

Therefore, using the average is simply a move back towards a methodology that more resembles using a Gaussian distribution, which totally defeats the object.

jog on

duc

Duc,

I am not sure; maybe the average won't help me. I am still trying to figure out a way.

So far, on this blog, I have placed myself in the model of an FBM, which sometimes, when the Hurst exponent happens to be 0.5, becomes a WBM (Wiener Brownian Motion), which is a Gaussian process. What the fractal dimension (whichever one we consider) really measures is how far from the Gaussian model we are at a given time.
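For concreteness, here is one standard way to estimate H from a price path (a dispersion-scaling estimator of my own choosing, not the indicator used on this blog), together with a sanity check of the relation D = 2 − H between the graph dimension and the Hurst exponent:

```python
import numpy as np

# For an FBM, the dispersion of increments scales as
# std(x[t+k] - x[t]) ~ k**H, so H is the slope of a log-log fit.
# This estimator is an illustrative choice, not the blog's own indicator.
def hurst_estimate(path, lags=range(2, 64)):
    x = np.asarray(path, dtype=float)
    tau = [np.std(x[k:] - x[:-k]) for k in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(tau), 1)
    return slope

# Sanity check on an ordinary (Wiener) Brownian motion: H should be near
# 0.5, i.e. a graph dimension D = 2 - H near 1.5.
rng = np.random.default_rng(0)
wbm = np.cumsum(rng.standard_normal(100_000))
H = hurst_estimate(wbm)
print(round(H, 2), round(2 - H, 2))
```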

What I am hoping to do with the coastline length is to "distribute" it around the MA, so as to get an idea of the likely volatility, and I want to do this in relation to the current fractal dimension.

So actually, when this one is 1.5 (for the Graph Dimension, i.e. H=0.5), we should simply get something akin to the classic Bollinger Bands.

When H>0.5, we will be in a trend, and I would then like the bands to be non-symmetric relative to the MA, wider in the direction of the trend.

But for H<0.5, the MA should again be in the middle but with wider bands than in the WBM case.

This is the idea, but I am still facing problems in relating H numerically to the width of the bands in a way that is mathematically sensible.
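To make the three regimes concrete, here is a deliberately naive sketch; the specific mapping from H to band widths is my own illustrative placeholder, precisely the part I said is still unresolved:

```python
# A naive sketch of the band idea: split a total width `width` around a
# moving average `ma` according to H. These formulas are illustrative
# placeholders -- the proper mapping from H to band width is still open.
def fractal_bands(ma, width, H, trend_up=True):
    """Return (lower, upper) bands around `ma` for Hurst exponent H."""
    if H > 0.5:
        # Trending: same total width, skewed in the trend's direction.
        upper_share = 0.5 + (H - 0.5) if trend_up else 0.5 - (H - 0.5)
        return ma - width * (1 - upper_share), ma + width * upper_share
    if H < 0.5:
        # Anti-persistent: symmetric around the MA, but wider than WBM.
        half = width * (1 + (0.5 - H)) / 2
        return ma - half, ma + half
    # H == 0.5, the WBM case: classic symmetric bands.
    return ma - width / 2, ma + width / 2

print(fractal_bands(100.0, 2.0, 0.5))  # symmetric, Bollinger-like
print(fractal_bands(100.0, 2.0, 0.7))  # up-trend: wider above the MA
```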

As for the general remark you make on fractals: yes, it is one point behind FBM (not behind fractals though, because WBM is also a fractal), but it's not the only one, especially in our case, where H varies with time. This is an empirical fact for us, but can be formally expressed via the concept of a multifractal. This variation tells us whether we are in a trend or not, and also about the strength of the trend or of the sideways market.

Your remark on averages is right though; it is a limitation. I may be able to overcome this partially by adapting the resolution to the value of H, thereby increasing or decreasing the length of the coastline, but still, it is clear some prices will fall outside the bands.

Thanks for your input,

Cheers

JP

JP,

With regards to volatility, and future volatility, simply looking at say the $VIX will give you current and historical volatility.

To estimate future volatility, surely, higher, lower, or either, is all that you really need to know?

Correlations, are due to the non-independence of financial markets, linked increasingly through a variety of derivatives and strategies, which have created complexity that is not modelled well, or possibly at all.

Prediction...after all is not a function of fractals.

Gaussian distributions have erroneously been used to *predict* due to the exponential problem in high sigma events.

A trend, or strength of a trend, can due to future events, change rather quickly. Therefore, how will this differ from MA's, VWAP's, Money Flows, MACD's and any other trend following indicator currently on offer?

jog on

duc

Duc,

"With regards to volatility, and future volatility, simply looking at say the $VIX will give you current and historical volatility.

To estimate future volatility, surely, higher, lower, or either, is all that you really need to know?"

I am placing myself in the context of the FBM model, and am interested in having the volatility within this model.

"Correlations, are due to the non-independence of financial markets, linked increasingly through a variety of derivatives and strategies, which have created complexity that is not modelled well, or possibly at all.

Prediction...after all is not a function of fractals."

I am not correlating different instruments, I am considering a random process modeling one currency pair, and try to use the information about the current state of this pair in order to make decisions, not to make predictions.

"Gaussian distributions have erroneously been used to *predict* due to the exponential problem in high sigma events."

I have said so many times on the blog, and explained my position on this matter. I read your post about the copula, though I did not reply to it, as that would just be to repeat myself (I already replied in some detail about this on the Ars Mathematica blog).

"A trend, or strength of a trend, can due to future events, change rather quickly. Therefore, how will this differ from MA's, VWAP's, Money Flows, MACD's and any other trend following indicator currently on offer?"

The essential difference is that it assumes that prices are modelled by a Fractional Brownian Motion (FBM), that's all. It may therefore give signals ahead of other indicators, since it assumes something beyond the historical data. Which does not mean that those signals will always be correct.

JP,

Yes, I appreciate the fact that your model refers to only the currency pair mentioned.

However this is my point, an event in an alternative financial instrument, that until now had exhibited minor correlation, moves to 1.0 due to linking of positions via derivatives or possibly only sentiment.

How can you model for one currency pair, relationships that currently may not exist, but, appear nonetheless in the future?

Is not a decision, a prediction?

jog on

duc

JP,

Ok, I've just read your response to Ars Mathematica.

Merton & Scholes, were involved intimately with Long Term Capital Management, that utilised amongst other things Gaussian distributions and equilibrium.

Their own money was at risk. They indeed lost their investments when LTCM blew up.

Were they greedy, or just did they not understand their own mathematics?

jog on

duc

Duc,

"However this is my point, an event in an alternative financial instrument, that until now had exhibited minor correlation, moves to 1.0 due to linking of positions via derivatives or possibly only sentiment.

How can you model for one currency pair, relationships that currently may not exist, but, appear nonetheless in the future?"

I am not totally sure I understand your point here, but if I do, there may be a confusion about the nature of the model I am using.

There are 2 kinds of model (I simplify a bit, and neglect hybrid models) to account for a given phenomenon:

- A causal model, which identifies the causal chains that determine the phenomenon. It examines this piece of reality via experiments and measurements to come up with a quantitative model accounting for the phenomenon.

- A non-causal, or statistical, model, which is purely built out of measurements of the said phenomenon, while the causal chains that lead to changes in these measurements are completely ignored. This model is much weaker in terms of determination than the first one, but in many cases it is the best we can get.

The FBM model for price variations belongs to the second category. The model doesn't posit any relationships, it is simply a relevant model in an historical, statistical sense. It does not consider the causal mechanisms that lead to this or that move in price; the causal chains are therefore not part of the model, but this absence, that is perfectly acknowledged, does not undermine its relevance, it simply is a different approach.

"Is not a decision, a prediction?"

Not necessarily; decisions can be made under uncertainty, and in acknowledgment of it.

Prediction is a term usually reserved for causal models; non-causal models only give forecasts, probabilities. They don't give predictions in the sense of ephemerides, for instance.

"Merton & Scholes, were involved intimately with Long Term Capital Management, that utilised amongst other things Gaussian distributions and equilibrium.

Their own money was at risk. They indeed lost their investments when LTCM blew up.

Were they greedy, or just did they not understand their own mathematics?"

As far as I know, neither Merton nor Scholes were mathematicians; Black was, and was not part of LTCM. But even if they understood the mathematics of their article, that would not have prevented them from ignoring its limitations, in terms of application, and eventually misapplying it; many decent mathematicians have done so, from LTCM until now.

But, well, this is pure speculation; I have no idea how Merton and Scholes were involved in LTCM.

Cheers

JP

JP,

Let me try to clarify my point.

Monetary units, currency, are currently free floating; they are not exchangeable at a fixed value, viz. Gold.

Thus, their values will fluctuate against each other, reflecting their purchasing power.

What variables determine their exchange values?

I won't bore you with a list, that would almost certainly be incomplete anyway.

Prices [which are the exchange values] fluctuate daily, minute-by-minute, reflecting all variables as determined by market participants.

Thus when you state:

"The FBM model for price variations belongs to the second category. The model doesn't posit any relationships, it is simply a relevant model in an historical, statistical sense. It does not consider the causal mechanisms that lead to this or that move in price;"

That is exactly what it is doing, viz. examining causal mechanisms, through the analysis of price.

Therefore:

"the causal chains are therefore not part of the model, but this absence, that is perfectly acknowledged, does not undermine its relevance, it simply is a different approach."

they most certainly are part of the model. Thus, causality, reflected in the price data, is highly relevant.

The problem being, that cause is so mired in complexity and opaqueness, that it ceases to be noticed as a potential risk...until it blows up.

Sure, all decisions are made under uncertainty...this almost by definition makes them a prediction.

I make a decision not to carry an umbrella...I am predicting that I will not need it, for either rain, excessive sunshine, or to fight off the dog that leaps over the gate.

In the same way...when I enter a trade, I am predicting an expectation...whether that expectation, or prediction, eventuates drives my next decision.

Merton & Scholes were definitely mathematicians, of a very high calibre.

They were both principals in LTCM; Scholes was far more involved day-to-day than Merton, by all accounts. Scholes was directly responsible for designing a Warrant to further leverage the Partners' capital, and was instrumental in selling the Warrant to the Swiss [bankers].

Which rather begs the question of how well mathematics translates out of academia, into financial markets.

jog on

duc

"'The FBM model for price variations belongs to the second category. The model doesn't posit any relationships, it is simply a relevant model in an historical, statistical sense. It does not consider the causal mechanisms that lead to this or that move in price;'

That is exactly what it is doing, viz. examining causal mechanisms, through the analysis of price."

No, I am not examining causal mechanisms, since I don't even identify them. You don't seem to understand what a statistical model is about. Let me try to take an example:

If I measure the temperature at a fixed point near the equator, by plotting my points I will get something like a sinusoid; by extending this sinusoid into the future, I will get a forecast of the temperature at a given point of time in the future, and my model can be just that. Obviously the temperature variation has a simple cause, the presence and position of the sun at the time of measurement, but my model does not account for it. From the model's point of view, somebody could come and light a fire next to the measurement point at periodic times, and the model would just look at the data and say the exact same thing.
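The analogy can be made concrete in a few lines of Python: fit a sinusoid to past "measurements" by least squares and extrapolate, with no reference to the sun at all. The data, period, and basis below are invented purely for illustration:

```python
import numpy as np

# A toy version of the temperature analogy: fit a sinusoid to past
# observations by least squares and extrapolate it forward. The model
# knows nothing about the sun (or any fire someone lights) -- it only
# extends the measured pattern.
t = np.arange(0, 48, 0.5)                      # past observation times (hours)
temp = 25 + 5 * np.sin(2 * np.pi * t / 24)     # synthetic daily cycle

# Linear least squares on a [1, sin, cos] basis with a known 24 h period.
A = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / 24),
                     np.cos(2 * np.pi * t / 24)])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)

def forecast(t_future):
    """Extrapolate the fitted sinusoid to a future time (hours)."""
    return (coef[0]
            + coef[1] * np.sin(2 * np.pi * t_future / 24)
            + coef[2] * np.cos(2 * np.pi * t_future / 24))

print(round(float(forecast(72.0)), 2))  # back at the cycle's mean, 25.0
```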

It's the same for the model I use, it does not account or care for the causal relationships.

"The problem being, that cause is so mired in complexity and opaqueness, that it ceases to be noticed as a potential risk...until it blows up."

That might be a problem that bothers you; it is not one for me in the context of the FBM model, since it takes this risk into account (while not considering its causes). This is what distinguishes it from WBM.

"Sure, all decisions are made under uncertainty...this almost by definition makes them a prediction."

Sure, you may define "prediction" at your convenience. Most people, though, make a distinction between prediction and forecast, as I do, as my dictionary does, and as Wikipedia does (http://en.wikipedia.org/wiki/Prediction).

As for Merton and Scholes, I am not extremely interested in discussing their biography. The fact that some people (including some mathematicians) used mathematics to justify their speculative activities does not entail the irrelevance of mathematics in finance; it just questions the intellectual honesty, or understanding, of these people. I have already discussed these problems in some detail in other posts.

Cheers

JP

JP,

You state that causation is not a factor in your model.

That you are modelling price data.

The point that I am trying to convey is this: even though your model does not seek to identify causation, simply model the data...the data being modelled is subject to fluctuations due to multiple variables.

These variables change. They are not consistent. Therefore, the data generated can...under extreme, or new conditions, completely change its historical distributions.

This is the concept that you used in your example of someone lighting a fire. The data will change its nature due to the fire being lit...but we won't know that a fire has been lit.

In the future, 3 fires may be lit. This will change the data again. What if however, next week, someone comes and lights 100 fires?

Whether the model seeks to identify causality or not, the data may reflect changes in causality via statistical changes in the data.

You state that:

"non-causal models only give forecast, probability, they don't give predictions in the sense of ephemerides for instance."

Which from Wikipedia:

"Forecasting is the process of estimation in unknown situations. Prediction is a similar, but more general term. Both can refer to estimation of time series, cross-sectional or longitudinal data."

The key word being "estimation." Forecasting a range implies that a decision [under uncertainty] may be made.

I used prediction. From Wikipedia:

"A prediction is a statement or claim that a particular event will occur in the future in more certain terms than a forecast."

Again the key word being event, which in mathematics [also from Wikipedia] is defined as:

"Event (probability theory), a set of outcomes to which a probability is assigned"

So either way, the model is forecasting, or predicting, a range of values [based on price data] that may drive a decision.

1 fire may be captured in the forecast range, possibly 2...how about 100?

Causality is irrelevant. As traders, we are interested in making money; thus, the model drives a decision tree that will attempt to make us money.

Regulatory bodies, Federal Reserve, FDIC etc might be interested in causality, as, they are interested in preventing financial crises in the future.

Thus the models employed may well have causation underlying their design.

With regard to Merton & Scholes, that's fine. I simply used them as examples of Nobel Prize winning mathematicians...as, if they don't understand their math, what chance for the rest of us?

jog on

duc

Duc,

"These variables change. They are not consistent. Therefore, the data generated can...under extreme, or new conditions, completely change its historical distributions."

And I completely agree with that, and have already made the same remark in this blog.

I never claimed that FBM was all there was to know about trading any instrument. It would be silly of me to forecast the price of an instrument, let's say, 2 years from now.

My point is simply to explore how the FBM model can improve a little on existing technical analysis tools (which, by the way, have the same limitation you are mentioning) in the context of short-term (intraday) trading, that's all.

About forecast and prediction:

I know they can be taken as perfectly synonymous, but I also know that one can make a distinction between them, and I believe that this second approach is more productive in linguistic terms, as it allows me to express my point of view more clearly.

But language and linguistic practice are not written in stone, so we can certainly argue ad infinitum about the proper use of this or that word.

As for Merton and Scholes, I make a distinction between understanding one given mathematical formulation and understanding the scope of real phenomena this formulation can be applied to.

I don't consider that having received the Nobel Prize is absolutely relevant to the quality of their work; the prize is essentially a popularity contest within the population working in the domain considered, not a guarantee of truth, especially in a social science like economics, whose academia clearly suffers from dogmatism.

You may want to read the article referenced as [2] in my post about "Flapping butterflies...", which specifically discusses the case of the equation they are famous for.

Cheers

JP

Hi from a guy from Japan.

I'm looking for a way to mix quantitative measures and so-called technical rules to trade on the FX market, and have read Olser's original paper.

I found your application of the scaling law and other blog pages via Google. Your math explanation of fractal processes is pretty impressive.

by the way, did you try any traditional time-series models with classic technical rules?

for example, assume an ARIMA series as the 'true' price dynamics at time t, and calculate the volatility by GARCH, or use realized-volatility ideas, to build volatility bands?

do you think such strategies would work?

(I can call a DLL to do the calculation, but don't know how effective the idea would be)

---

xsnowing

Hi xsnowing,

Thanks for your comment.

No, I haven't tried any ARIMA/GARCH-style strategies. In his "Fractals and Scaling in Finance", Mandelbrot dismissed them as rather useless in terms of prediction. He wrote:

"the ARMA or ARIMA parameters obtained from successive samples are near-invariably mutually contradictory, and have no intrinsic meaning."

Such a lack of intrinsic meaning would be a problem for me, as I would use them to fine-tune my indicators.

From a more theoretical point of view, these models (if I am to believe Mandelbrot) seem to assume that the process describing price variations is Markovian, and can therefore only account for short-term dependence, which is not verified in empirical data.
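For anyone wanting to check Mandelbrot's complaint on their own data, here is a minimal sketch: estimate an AR(1) coefficient on successive, non-overlapping windows and compare the results. The i.i.d. synthetic returns below are purely illustrative, not a claim about any particular series:

```python
import numpy as np

# Estimate the AR(1) coefficient via the lag-1 autocorrelation on
# successive non-overlapping windows and compare -- a simple way to see
# whether fitted parameters are stable across samples, as Mandelbrot
# claims they are not. Synthetic i.i.d. returns are used for illustration.
def ar1_coef(returns):
    """Lag-1 autocorrelation of a return series (AR(1) estimate)."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    return float(r[1:] @ r[:-1] / (r @ r))

rng = np.random.default_rng(1)
returns = rng.standard_normal(4000)        # true AR(1) coefficient: 0
coefs = [ar1_coef(w) for w in np.split(returns, 4)]
print([round(c, 3) for c in coefs])        # all should hover near 0
```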

That's all I can say about this matter.

Cheers

JP

Hi, JP,

Thanks for your comments, which sound reasonable even though I haven't read "Fractals and Scaling in Finance"...

I'm trying to apply your fractal bands to make a kind of EA; let me report on its performance later.

By the way, what do you think about the "ARFIMA" model? It uses fractional integration on ARIMA.

Here is a paper about ARFIMA-GARCH for forecasting realized volatility.

http://gcoe.ier.hit-u.ac.jp/research/discussion/2008/gd08-032.html

---

Hi xsnowing,

I was also thinking of developing an EA with the fractal bands, but I still feel uneasy about the "alpha", and am still trying to improve on it. I've been using it for a few weeks, and even though I am quite happy with it, I have difficulties really modelling a decision procedure that works well.

Anyway, please, do let me know any of your insights on this, and if you get any good results.

As for the ARFIMA paper, it seems interesting, though I will need to print it out and go through it before giving you my feedback. I shall try to do that over the next week, and will come back to you as soon as possible. I might even create a specific post if I feel good about it.

Thanks anyway for your feedback; it is what makes writing a blog worthwhile.

Cheers

JP
