Monday, April 16, 2012

The absent signal

Technical Analysis is, at bottom, an attempt to reveal the underlying signal nested in the time series. In a narrow sense (for a specific kind of time series), TA is a form of signal processing. There is, however, a fundamental difference with classical signal processing: when we process a signal, we know beforehand that the signal exists, and our goal is to exhibit it free of noise, to clean it so as to enjoy its meaningful content thoroughly, and possibly to alter it. A contradiction then becomes apparent in TA, since we assume the existence of such a signal while all the signal processing we perform seems to reveal its absence.

This obviously applies to the fractal-derived tools we may use to analyse the time series, as indeed, what this analysis is telling us is that the time series keeps evading its supposed nature as a fractal, since the Hurst exponent (and therefore the fractal dimension) keeps changing. These perpetual changes mean that the time series cannot be identified as a fractal, at least not through the methodology we are employing; we have basically failed to identify (or even to detect) the signal we intended to study. Nevertheless, we ignore this conclusion and carry on with our assumption of an existing signal in order to decide whether to enter, modify or exit a trade.
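
To make the point concrete, here is a minimal sketch (in Python, with arbitrary window and step sizes, all of them hypothetical choices) of the kind of rolling estimate I have in mind. The rescaled-range method used below is only one possible estimator of the Hurst exponent; the precise numbers matter less than the fact that, on a real price series, the estimate keeps drifting (and with it the fractal dimension D = 2 - H).

    import numpy as np

    def hurst_rs(returns):
        """Crude rescaled-range (R/S) estimate of the Hurst exponent on one window."""
        n = len(returns)
        sizes = [s for s in (8, 16, 32, 64, 128) if s <= n]
        log_size, log_rs = [], []
        for s in sizes:
            rs_values = []
            for start in range(0, n - s + 1, s):
                chunk = returns[start:start + s]
                dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
                spread = dev.max() - dev.min()          # their range
                sd = chunk.std(ddof=1)
                if sd > 0:
                    rs_values.append(spread / sd)
            if rs_values:
                log_size.append(np.log(s))
                log_rs.append(np.log(np.mean(rs_values)))
        # R/S scales roughly like c * s**H, so H is the slope in log-log coordinates
        return np.polyfit(log_size, log_rs, 1)[0]

    def rolling_hurst(prices, window=256, step=16):
        """Hurst estimate recomputed over a sliding window, to watch it drift."""
        returns = np.diff(np.log(prices))
        return [hurst_rs(returns[i:i + window])
                for i in range(0, len(returns) - window + 1, step)]

    # toy random walk: the estimates hover around 0.5; a genuine fractal signal
    # would hover around some other constant H, a real series just keeps moving.
    prices = np.exp(np.cumsum(np.random.normal(0, 0.01, 5000)))
    print(rolling_hurst(prices)[:5])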

That the time series is not a fractal is not a surprise, since fractals, after all, are mathematical objects that are not to be found in nature; they are merely models that may help describe natural phenomena. Even the most classical examples of fractals are not genuine fractals: the coast of Brittany, for instance, is clearly not a fractal, since its “fractalisation” stops at best at the molecular level; it does not continue infinitely, as it does for a mathematical fractal. But even in this weaker sense, a financial time series does not qualify as a phenomenon that can usefully be modelled as a fractal, since its parameters keep changing.
So is there any sense in which the market could be said to be fractal, given that the time series does not seem to be satisfyingly modelled by a fractal?

In “The Blank Swan”, Elie Ayache provides a very interesting element of an answer to this question, on page 295:

“If it were known what the price of some traded asset would be in the next instant, and different from the present price, it would immediately become the present price because buyers or sellers would immediately want to hold the asset at that price. The price function catches up with its virtual before it becomes actualized. Another, more regressive virtual is therefore needed.

The result of this doubly-composed breaking and differentiating is the infinitely broken line, whose simplest instance is Brownian motion. It is essentially fractal. There is no scale at which the differentiating can settle and the function look smooth. Let us call it the line of the market. From it, the whole world of derivatives unfolds, which will give us, retro-actively, the point of the market.

The point of the market does not just stop at noting that the underlying must follow Brownian motion (or some other, more complex and jagged trajectory), in order that its next movement may always be unpredictable. The abyss of differentiation, opening at every point, must not concern the price of the underlying alone but the price of any 'virtual' derivative that might be written, right there and right then, in that pit (for there is only one pit), on that underlying. That means that all the potential coefficients, not only of the 'initial' Brownian motion (or any other more complex process) but of all the following processes that will complicate that Brownian motion, differentiating it even further, should themselves never settle but perpetually differentiate. If there were a stage at which the coefficients settled, the price of the corresponding derivative would become a deterministic function of the preceding prices and would no longer admit a market, that is to say, an unsettlement, an unpredictability and a differentiation of its own.” [EA10, p. 295]

From this extract, we can see that the fractal behaviour of the market is not to be identified with the time series itself as a fractal; rather, the fractality applies to the existence of the market as a whole, including the infinitely many virtual derivatives. At best, therefore, the time series would appear as a truncated version of the 'fractal of the market'. What this fractality implies cannot be detected at the level of the time series. It may nevertheless be possible to account for this fractality, but not by considering the process of price formation; rather, I believe its implications must be sought at a topological level.

Without presuming what a topological study, which I am not yet equipped mathematically to conduct, would reveal, I can try to make a bit more precise what I expect to find from it. For that, I will use an analogy (like all analogies, this one is limited and should not be pushed too far): it seems to me that the classical approaches (TA or mathematical finance), by assuming the existence of a signal or a stochastic process, assume that we are exploring an unknown land by following an already built road. We don't know where the road is going, and we can only detect it by walking it, so we don't even see where it is leading us; we're just happy to follow it. Unfortunately, the more we walk along the supposed road, the more we are told that there is no road at all (this is the absence of signal or of process). A topological study would not tell us anything about a road; rather, it may inform us of the topography of the land we are travelling through and help us, in some limited way, to choose the direction in which to take the next step.

Comments:

Elizabeth Taylor said...

Technical indicators such as MACD and RSI, when a price is trading in a range, give details about overbought and oversold levels within that range and hint at an impending reversal. The 123 technical signal is probably the last essential signal on the list, as it actually indicates the point at which to take a position. It is a reversal pattern which shows that the market has peaked in an uptrend or bottomed in a downtrend.

Jean-Philippe said...

I am not sure how genuine this reply is, it looks a bit like advertising, but anyway, it allows me to make a bit more precise what I meant in the post.

MACD or RSI say something about overbought or oversold levels relative to the assumed normality of a moving average (MA). But there is actually no reason to believe that prices should follow any historical MA (mathematically, it is the MA that follows prices, and this is not equivalent to the opposite), and therefore no reason to measure any convergence or divergence (or relative strength, in the case of RSI) around this MA.
The mathematical fact that prices oscillate around an MA (given the way MAs are constructed) says nothing about what will happen in the near future. A sudden move may happen at any time without any warning signal from the MACD or the RSI. Conversely, the MACD or the RSI may at times be interpreted as signalling a large move, and nothing will eventually happen.
And even statistically, the MACD or RSI indications will not be significantly meaningful. What will eventually decide the profitability of a trader is his money management, his knowledge (fundamentals), his experience (and the intuition he derives from it) and, in some cases, his luck.
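
For reference, a minimal sketch of the standard MACD and RSI constructions (Python/pandas, with the usual default periods; the simplified rolling-mean RSI below stands in for Wilder's smoothing). It makes the point above visible in the code itself: every term is a lagging function of the price history, so the indicator follows the prices and not the other way round.

    import pandas as pd

    def macd(close, fast=12, slow=26, signal=9):
        """MACD: difference of two EMAs of past prices, plus a signal-line EMA."""
        ema_fast = close.ewm(span=fast, adjust=False).mean()
        ema_slow = close.ewm(span=slow, adjust=False).mean()
        macd_line = ema_fast - ema_slow
        signal_line = macd_line.ewm(span=signal, adjust=False).mean()
        return macd_line, signal_line, macd_line - signal_line

    def rsi(close, period=14):
        """RSI: ratio of average gains to average losses over a trailing window."""
        delta = close.diff()
        gain = delta.clip(lower=0).rolling(period).mean()
        loss = (-delta.clip(upper=0)).rolling(period).mean()
        return 100 - 100 / (1 + gain / loss)

    # usage sketch: close = pd.Series(list_of_prices)
    # macd_line, signal_line, histogram = macd(close); oscillator = rsi(close)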

Tony Pong said...

I am increasingly becoming convinced that it is impossible for a single bot (i.e. one using signal processing) to continuously beat a single market. It might be possible for a bot run across multiple charts to be profitable, where the winnings on one can hedge out the losses/drawdown on another; of course, the losses could also add up and be far worse.

As the market creates more derivatives upon derivatives, I think these derivatives can be more exploitable because of the mathematical filters and algorithms they go through, leading them further away from the "natural" market.

If a single chart is not fractal, and one can see the finite complexity of the market, is it possible to use the 1-tick chart to interpret the entire market?

As you said, one cannot predict the road one is walking on; instead, another approach is needed: the terrain, or the obstacles that interfere with your path. In market terms, I think this terrain/pattern is the jumping and ranging behaviour, and the obstacles could be support/resistance points where there are high levels of volume.

Jean-Philippe said...

Hi Tony,

Just a few remarks.

As for your second paragraph, I seem to disagree with your view. To me, the market for a derivative product is as "natural" as the one for an underlying. That being said, derivatives are exploitable for a certain purpose, but they are just as unpredictable as any other product that admits a market.
This is due to the fact that each underlying product admits an infinity of derivatives and each one creates its own market; however mathematical the design of a product may be, its market will necessarily be at variance with this mathematics (if it were not so, no market would exist at all, as the product would be exchangeable at only one price).
Therefore it does not really matter how many derivatives we actually have at our disposal; this number will always fall infinitely short of infinity. So predictability will never be improved, as such an improvement is incompatible with the existence of all the markets (the one for the underlying as well as the ones for the derivatives).

As for the question in your third paragraph, in my view the market has infinite complexity; the non-fractality of the chart is merely a truncation, due to the fact that trading, in practice, is not continuous. The "finite complexity of the market" is therefore an erroneous notion, and cannot be used to justify any truthful interpretation of the market (in the sense of a scientific, objective interpretation). Furthermore, no market can be interpreted in an objective way, since its nature is speculative, and any interpretation acts upon it, thereby modifying it and somehow invalidating itself.

Tony Pong said...

Jean-Philippe,

Can you also add remarks on the other paragraphs? I would like to know what you think.

Jean-Philippe said...

Hi Tony,

For the rest, there is really not much to say. As you say in your first paragraph, I also believe that a profitable automatic trading algorithm is impossible to build, in the same way that an algorithm imitating human thought does not seem a real possibility to me (in that, I agree with Roger Penrose's point in "The Emperor's New Mind", even though his demonstration is not all that convincing at times). This relates to my belief that the human factor cannot be wholly cancelled out of the workings of the market.

As for your last remark about ranging, jumping and resistance, these are indeed the most likely candidates, but as I have not yet conducted a thorough topological analysis, I cannot say for sure whether these are the features that may lead to some limited level of forecasting.
My current idea is to look at reconstructing the market (the full meaning of this term being still unknown: is it a specific market, or the market as a whole, including all financial products?) from a topological continuum that would correspond to the Virtual, in the sense of Deleuze or Ayache (see http://fractalfinance.blogspot.com/2011/02/eternally-returning-to-virtual.html); this continuum would be of infinite dimension, like the Hilbert cube (http://en.wikipedia.org/wiki/Hilbert_cube). Even though it may be helpful to think of this Virtual as a potential, it should not be confused with a range of possibilities in a probabilistic sense, as the latter is already a truncation of the former.
The reconstruction would be done, in my current analysis, by considering the inverse limit (http://en.wikipedia.org/wiki/Inverse_limit). The difficulty is then to identify which bonding maps are at work to actualise the virtual and engender the topology of the market.
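
To fix ideas, here is a small sketch (Python) of what an inverse limit with a single bonding map looks like computationally. The tent map below is only the textbook example, not a claim about which maps actually generate the topology of the market: a point of the inverse limit is an entire history of preimages, and sequences are compared with a Hilbert-cube style metric.

    import random

    def tent(x):
        """Textbook bonding map on [0, 1]; purely illustrative."""
        return 2 * x if x <= 0.5 else 2 * (1 - x)

    def tent_preimages(y):
        """The two preimages of y under the tent map."""
        return [y / 2, 1 - y / 2]

    def inverse_limit_point(x0, depth=20):
        """Finite approximation of a point of lim<-([0,1], tent): a sequence
        (x0, x1, x2, ...) with tent(x_{n+1}) = x_n, built by choosing one
        preimage at each step."""
        point = [x0]
        for _ in range(depth):
            point.append(random.choice(tent_preimages(point[-1])))
        return point

    def product_metric(p, q):
        """Hilbert-cube style metric: d = sum 2**-i * |p_i - q_i|, so the
        far-away coordinates of the history matter less and less."""
        return sum(abs(a - b) / 2 ** i for i, (a, b) in enumerate(zip(p, q)))

    # two points sharing the same present value x0 can still be far apart in the
    # inverse limit, because their histories of preimages differ.
    p, q = inverse_limit_point(0.3), inverse_limit_point(0.3)
    print(product_metric(p, q))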

Tony Pong said...

Jean-Philippe,

It seems to me that the main difference between computers and humans is the ability to accept and understand new ideas. Most human thought comes from past experiences and our ability to relate them to each other and draw connections. If a computer were able to take in information at the most basic byte-to-byte level and make its own connections between them, instead of relying on human-input parameters, then I argue that, given the same information on that particular subject, the computer is "smarter". A human becomes more advantageous when it comes to out-of-the-box thinking and creating new ideas. The question then is whether these new ideas are actually new, or whether the human has simply made connections between past ideas. This extends to ideas as a whole: are all ideas based on other ideas? If that were the case, then computers should always be better than humans, as they would be able to make connections between pieces of information much faster, and in ways unforeseen by humans. But if ideas are created from the new experiences that humans encounter, then computers are limited to what they can experience at a physical level and are confined to the information that humans put into them. Most likely, ideas are a combination of past ideas and new experiences. But because computers can make connections much faster than humans, the sole limitation of computers is new experiences. In practical terms, there is a physical limitation for computers, as they cannot simply get up and walk around to gather new information.

I believe that all markets have jumping, ranging, resistance, and natural growth and decay ratios, but each market has its own individual characteristics and behaviour, with its derivatives having similar properties. Much like how all trees have similar properties of branching, leaves, bark, roots, etc., there are trees in the same family/class that have more similar properties. What other topological characteristics have you been testing? The inverse limit is very interesting and I will look into it further. But doesn't looking at inverse limits also go back to a signal processing approach, the exact thing you are trying to get away from?

Jean-Philippe said...

Hi Tony,

Your analysis of the relationship between the human mind and a computer rests on the assumption that there is no fundamental difference in the way these two systems work.
While we clearly understand the workings of a computer (since we design them), there is more doubt as to how well we understand the workings of the human mind. Penrose exploits this unknown to develop the thesis that, while computers are known to work algorithmically (as Turing machines), there is no reason to think this is the case for human minds. It may seem daring at first, since we don't have a clear idea of what a non-algorithmic decision procedure would be, but it is quite a commitment to deny such a possibility. Penrose goes on to develop over many pages some of the clues that may be interpreted as pointing to a possibly non-algorithmic procedure, from the limitations implied by the halting problem to Gödel's undecidability theorems. I can't possibly do justice to his argument in a few sentences here, but the main idea is that the indeterminacy found in quantum mechanics could give rise to a non-algorithmic decision procedure.

Your view of markets as trees might be helpful at an intuitive level. As for me, I am not testing any topological features yet, since such an approach would be to fall back into signal analysis; rather, my objective nowadays is to devise a "model" for the generation of the market (or, more precisely, of its topology). In doing so, I must not assume I will find any precise features (and therefore I cannot test them, as testing them would already be assuming that I may find them); it is still too early in the process.
My most immediate goal is then to devise bonding maps (i.e. a family of continuous maps) that reflect the way the market forms. The problems I am then facing are the relationships between markets, as none can be said to be independent of the others, and also the way a derivative market relates to its underlying, ...
The ranging, jumping, ... will appear later, and I may have to deal with them with completely different tools (possibly probability), but again I don't want to presume anything at this point.
So inverse limits merely allow me to construct the topology of the market in a very flexible way, without starting from the signal, but rather from assumptions I make about the way the market is constructed. Such assumptions still remain to be finalised; they could be supply/demand, the influence of derivatives on the underlying, ...; there is still a lot of work to be done here, and I am just starting to think about it, while I continue developing my grasp of inverse limits.

Tony Pong said...

Jean-Philippe,

As computers get more advanced, is it possible that they will eventually work non-algorithmically as well? By advancing the design of our computers, I think we gain a better understanding of our own minds.

It seems that humans are able to recognise that induction may be false, but machines always interpret induction as true. If one could change the behaviour/algorithm of that aspect, it would seem to be one step closer to replicating a human mind.

Doesn't topology only apply to continuous objects? As the market has gaps, can we still use it? Gaps seem to be a huge problem for both modelling and trading. It is much more troublesome to model something that is discontinuous; and, trading-wise, one will not get filled on a trade even if one knows which way the market will gap. Of course, in Forex the gaps are less frequent, but they are still there. I have always wondered whether it is possible to take advantage of gaps. As for modelling the market, do we simply ignore that fact?

When you talk about the relationship between markets, how far do you think this relationship goes? Trivially, the underlying is related to its derivative. But are Forex markets tied together with equity and futures markets? Advances in technology and rapid communication speeds have made it a global market. So, to model the "complete" fractal of the market, would you need to incorporate every possible chart?

Mandelbrot talks about how algebra is the measure of things binding together, which is inherently the opposite of fractals, the measure of things falling apart. This makes me skeptical about inverse limits, but I am not familiar enough with them to fully understand. I am also still confused about what I can use to measure the divergence of things.

Jean-Philippe said...

Hi Tony,

The advancement of computer science may certainly lead to new ideas, but it is important to see that the ones involved in "non-algorithmic thinking" are qualitatively different from what computers are doing now. So, for such progress to happen, a qualitative leap (a new paradigm) needs to be found.

The recognition that induction may be false is actually, more generally, the recognition that some assumptions cannot be proven (in such a case, the assertion that induction is false or true already refers to a commanding logic, the existence of which is pure conjecture); whatever logic we decide to use, it is unreasonable to expect it to prove itself.
The decision to use this or that logic is a mere question of convenience relative to the specific problem we intend to answer; this decision cannot be based on an inference taking place within the logic itself.

Topology does not only apply to continuous objects; disconnectedness is a fundamental property of objects that topology studies. Topology is actually at the forefront of mathematics in studying such disconnected objects, and inverse limits are one method of constructing disconnected objects starting from a continuum.
As for now, market models indeed ignore the physical gaps (when trading stops, over the weekend for FOREX) but also the gap between two consecutive ticks; the market is actually completely disconnected. Its connectedness is merely a consequence of assuming the existence of a signal: we talk of a market rising or falling, but it actually jumps up or down.
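
A trivial illustration of that last point (the tick values below are made up): the raw object is a finite, totally disconnected set of points, and the familiar "line" of the chart only appears once we interpolate, that is, once we assume the connectedness the data itself never exhibits.

    # hypothetical tick data: (time in seconds, price)
    ticks = [(0.00, 1.3012), (0.45, 1.3013), (0.47, 1.3010), (3.20, 1.3021)]

    # what we actually have: gaps in time and jumps in price between isolated points
    time_gaps = [t2 - t1 for (t1, _), (t2, _) in zip(ticks, ticks[1:])]
    price_jumps = [p2 - p1 for (_, p1), (_, p2) in zip(ticks, ticks[1:])]

    # what the chart shows: a connected line that exists only by interpolation
    def interpolated_price(t):
        for (t1, p1), (t2, p2) in zip(ticks, ticks[1:]):
            if t1 <= t <= t2:
                return p1 + (p2 - p1) * (t - t1) / (t2 - t1)
        raise ValueError("t lies outside the recorded ticks")

    print(time_gaps, price_jumps, interpolated_price(1.0))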

As for the extent of the interrelations between markets that need to be considered, this is still an open question for me. At some point I believe some simplifying assumptions will have to be made, maybe not with regard to the number of markets to be considered, but at least regarding the way they interfere with the market we aim to analyse.

Mandelbrot is right in that mathematics, for most of its history, has been concerned with continuous processes and objects. The problem I see with fractals, though, is that they are mostly descriptive (at least when it comes to finance); they do not provide an understanding of the processes at play in the formation of the objects under examination, and therefore do not help clarify their future evolution either.
As for measuring the divergence of things, again, measuring such a divergence only makes sense if you have made some assumptions about the way these things are supposed to behave. You wish to measure the divergence between the experimental behaviour and the expected behaviour, but then any measure of this divergence will have value only insofar as the original assumptions have value. So we are back to the original question of this post: does assuming a signal (fractal or otherwise) make any sense at all? Or have we just been doing it because we have nothing better to do?

Tony Pong said...

Is the "non-algorithmic thinking" a computer limitation or a programming one? I'm sure you have heard about quantum computers, where not only does it recognize 0 and 1, but also "quantum superposition of the two states." This gives much more flexibility than the binary system (assuming we learn how to actually program in it). Even so, I do not know if this computer will be able to be involved in "non-algorithmic thinking".

Certain assumptions are traditionally needed for computer programming, at least for human-written programs. Recognising that some assumptions cannot be proven seems counter-intuitive to programming. This looks to be a software problem and not a hardware one, and a new style of programming is needed, one with much more lenient rules. I have always wondered whether a program that could make other programs (without human interference) would follow a different set of assumptions. This seems to lead into the field of artificial intelligence.

I am very interested in topology and inverse limits, but do not know where to start looking. I have tried reading randomly online, but the material is too abstract and vast for me to see how to apply it. Can you point me in the right direction to start more research?

I had never considered the market being completely discontinuous; it is a striking point. When modelling with gaps, do you simply "connect the dots" and assume continuity, as with the jumps between ticks? I assume it is much harder to model something that is both continuous and discontinuous than something that is entirely one or the other. Based on this, I wonder what a model with its main focus on the discontinuity could imply. A discontinuous model would not only drive you away from a signal-processing approach, it would also be a more accurate representation of the market.

I agree that the extent of interrelations between markets to consider may need to be limited, mainly because I suspect that unlimited market derivatives can be created.

From what I understand, the whole point of fractals is that accurate prediction is impossible, but some overall pattern can be determined. Based on those patterns, I think you can get better than a 50/50 chance, provided you have a proper betting scheme. As with your question, it seems that until we find a better approach than a signal, we have nothing better to do.

Jean-Philippe said...

Hi Tony,

As for “non-algorithmic thinking”, nobody really knows what it is; it is just a hypothesis to explain the gap that still exists between human intelligence and artificial intelligence. So nobody, as of now, can answer the questions you're raising here; those are still open questions. Quantum computers may indeed provide some elements of an answer, but, as you say, we are still at the beginning of investigating the range of possibilities they open up.

Artificial intelligence is of course central to this questioning, and all these questions are investigated by the philosophy of AI (http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence).

As for inverse limits, several resources can be found on the net, but my main reference is a book by W.T. Ingram and William S. Mahavier called “Inverse Limits: From Continua to Chaos” (http://www.amazon.com/Inverse-Limits-Continua-Developments-Mathematics/dp/1461417961). I find it very accessible and well written. It only assumes a basic knowledge of topology from the reader, and even that is covered in the appendix via an introduction to the Hilbert cube, so it is a rather self-contained book. The focus of the book is also much more on pure topology and dynamical systems than, as is often the case with inverse limits, on abstract algebra; so it appears somewhat more concrete and readier to apply to the resolution of real problems.

The subtitle of the book, “From Continua to Chaos”, is also very telling and seems to offer some hope of understanding how a completely disconnected reality (chaos) can come about from a continuum (which, I assume, the Virtual is). So my hope with inverse limits is to get a mathematical model able to account for the generation of chaos in a realistic enough way.
On the other hand, fractals provide a model for reality as it is, but say very little about the way this reality came about, except in the very simple case of Iterated Function Systems, which is not very useful as a model for real processes of formation (except maybe in the case of crystallisation).
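
For completeness, here is the standard "chaos game" rendering of an Iterated Function System (Python, Sierpinski triangle): whatever the starting point, the orbit settles onto the same fixed attractor, which is precisely the kind of settling the market, as described below, never exhibits.

    import random

    # three contractions, each pulling a point halfway toward one corner of a triangle
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

    def chaos_game(n=10_000):
        """Approximate the IFS attractor by iterating randomly chosen contractions."""
        x, y = random.random(), random.random()
        points = []
        for _ in range(n):
            cx, cy = random.choice(corners)
            x, y = (x + cx) / 2, (y + cy) / 2   # contraction by 1/2 toward a corner
            points.append((x, y))
        return points

    # after a few iterations every orbit traces out the same Sierpinski triangle,
    # i.e. the IFS "crystallises" onto an equilibrium set.
    attractor = chaos_game()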

My point, therefore, is really to find a model of the dynamical process at play in forming the market, rather than of the market itself. The philosophical assumption behind this is that there is no market as such, but only a dynamical process at play, that is, at variance with itself; in other words, the market never settles (contrary to a crystal, which finds an equilibrium), it is always in becoming, always a negation of what it isn't (i.e. settled), following in that the logic of place, A = not-not-A.

Tony Pong said...

The philosophical arguments about artificial intelligence seem to be at a standstill. Only time will tell what potential computers may hold.

I found a copy of the book on inverse limits and will be reading it extensively. I will study it and hope to give detailed comments soon.

I agree that there is only the continuous formation of the market, and I also think there is no ultimate equilibrium of the market. But I do think there are small clusters of equilibrium, namely ranging times. Doesn't ranging inherently mean some sort of equilibrium? Similarly, there is no universal average of the clouds in the sky, but there are clearly clusters of clouds over which you can compute certain averages.

Jean-Philippe said...

Hi Tony,

To me, the ranging times seem more like a market in waiting (waiting for news, I believe), so in a sense we can consider them as an equilibrium, but at best an unstable one. So one must expect to be thrown away (and possibly far away) from this equilibrium at any time.
Besides, from a trading point of view, the brokers know this, and it is relative to these ranging times that they set their commission (the width of the spread); it is therefore extremely difficult for an individual trader to make a significant profit from the ranging periods.

Tony Pong said...

Jean-Philippe,

I have noticed that brokers have made it extremely hard to trade during ranging times through commission and spread. But from a predictability point of view, I think ranging is more predictable. I think this is where a different style of trading comes in, namely options, where you can win even if the market just ranges.
