
I am optimizing algorithmic trading strategies. In the process of choosing from a pool of many optimized strategies, I am in the phase of evaluating the robustness of each strategy.

Following the guidelines of Dr. Pardo's book "The Evaluation and Optimization of Trading Strategies", on page 231, item 3, Dr. Pardo recommends applying the following ratio to the optimized data:

" 3. The ratio of the total profit of all profitable simulations divided by the total profit of all simulationsis significantly positive"

The question: from the optimization results, I am not able to properly understand what Mr. Pardo means by "...all simulations is significantly positive". What does he mean by 'significantly positive'?

a.) with a 95% confidence level?

b.) with a certain p-value?

c.) the relation of the average net profit of each simulation minus its standard deviation?

Even though the sentence might seem 'simple', I would REALLY like to understand what Mr. Pardo means by the statement and HOW to calculate it, in order to select the most robust algorithmic strategies.
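
For reference, this is how I am computing the ratio itself so far: a minimal Python sketch of my own reading of the formula (the profit figures in the example are made up):

    # A minimal sketch of how I currently read the formula (my own reading, not a
    # quote from the book): total profit of the profitable simulations divided by
    # the total net profit of all simulations in the optimization run.

    def pardo_ratio(net_profits):
        """net_profits: one net-profit figure per optimization simulation."""
        total_all = sum(net_profits)                              # net profit of ALL simulations
        total_profitable = sum(p for p in net_profits if p > 0)   # profit of the winners only
        if total_all <= 0:
            return None                                           # ratio is meaningless if the whole run loses money
        return total_profitable / total_all

    # hypothetical profits of five parameter sets from one optimization run
    print(pardo_ratio([1200.0, -300.0, 450.0, -150.0, 800.0]))    # -> 1.225

What I still do not understand is which statistical test turns this number into 'significantly positive'.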

  • This question appears to be off-topic because it is about model validation rather than programming. – tmyklebu Aug 09 '14 at 19:12

2 Answers


The aim of analyzing the optimization profile of an algorithmic simulation is to be able to select the robust strategies.

Therefore the ratio should help us uncover whether the simulation results are on the right track or not.

So we would like to impose some 'penalties' on our results, so that we can separate the robust cases from the doubtful (not robust) ones.

I came up with the following penalizing measures (found in Mr. Pardo's book and other sources).

a.) we can use a market return (a yearly value) as a benchmark, so that all simulations whose result falls below that level can be excluded from our analysis,

b.) some other benchmark to separate the 'robust' results from the more 'doubtful' ones (for example, deducting one standard deviation from each result).

From (a) and (b), we can create the ratio:

the total profit of all profitable simulations divided by the total profit of the profitable results considered robust

The ratio should be greater than or equal to 1.

If the ratio is equal to 1, it means that our simulation has given interesting results (we are analyzing only the positive values in this ratio, but the profitable results should always be compared against the negative results as well).

If the ratio is greater than 1, then we have not reached the best possible scenario, and the result should be compared with the other optimization tests.
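
To make the construction above concrete, here is a minimal Python sketch of the filter and the ratio. The benchmark value and the way the one-standard-deviation penalty of point (b) is applied per result are my own assumptions, not prescriptions from Mr. Pardo's book:

    # A sketch of the robustness filter described above: penalty (a) compares each
    # profitable result against a market benchmark, penalty (b) deducts one
    # standard deviation from it; the ratio is then the total profitable profit
    # over the profit of the results that survive both penalties.
    import statistics

    def robustness_ratio(net_profits, benchmark):
        """net_profits: net profit per simulation; benchmark: a yearly market
        return expressed in the same units as the simulation profits."""
        profitable = [p for p in net_profits if p > 0]
        if not profitable:
            return None                                   # nothing profitable to evaluate

        penalty = statistics.pstdev(profitable)           # penalty (b): one standard deviation
        robust = [p for p in profitable
                  if p >= benchmark                       # penalty (a): at least the market benchmark
                  and p - penalty > 0]                    # penalty (b): still profitable after deduction

        if not robust:
            return float("inf")                           # no robust results: the strategy fails the filter
        return sum(profitable) / sum(robust)              # always >= 1; equal to 1 is the best case

    # hypothetical profits of six optimization runs against a 500-unit benchmark
    print(robustness_ratio([1200.0, -300.0, 450.0, -150.0, 800.0, 2500.0], benchmark=500.0))

The closer the result is to 1, the larger the share of the profitable simulations that also pass the robustness penalties.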

While simulating trading algorithms, no result is absolute but partial, and its value is taken in relation to what we expect from the algorithm.

If someone has a better explanation, idea, or concept you might find interesting, please share it; I would gladly read it.

Best regards to all.


Remark on the subject

With all due respect to the subject ( published in 2008 ), the term robustness has a meaning if-and-only-if the statement also clarifies in which particular respect the robustness is measured and against what phenomena the Model-under-review's response is to be exposed and tested ( against what perturbances -- of what type and scale -- the Model-under-test shall hold its robust behaviour, the measures of which were both defined and quantified prior to the test ).

In any case where such a context for the robustness is not defined, the material, be it printed under any bold name, sounds -- and forgive me for speaking in plain English -- just like a PR story, an over-hyped e-zine headline or a paid advertorial.

Serious quantitative model evaluation, the more so if one strives to perform an optimisation ( with respect to some defined quantitative goal ), requires a more thorough insight into the subject than to axiomatically post a trivial "must-have" imperative of

large-average && small-HiLo-range && small StDev.

Any serious Quant-Modelling effort, if it is not to simply spoil the hundreds of thousands of CPU-core hours consumed by deep parametric-space scans, shall incorporate a serious parametrisation decision in each dimension of the main TruTrading Strategy sub-spaces --

{ aSelectPOLICY, aDetectPOLICY, anActPOLICY, anAllocatePOLICY, aTerminatePOLICY }

A failure to do so either cripples the model or leads to a blind belief, and it is hard to guess which of the two is the greater Quant-sin.

Remark on the cited hypothesis

The book states, without any effort to prove the construction, that:

The more robust trading strategy will have an optimization profile with a:

1. Large average profit

2. Small maximum-minimum range

3. Small standard deviation

Is it correct?
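
Just for clarity about what is being measured, the three cited figures can be computed from the per-simulation net profits of a parametric scan as trivially as this ( a plain sketch, with made-up numbers, that says nothing about the robustness context discussed above ):

    # The three optimization-profile figures the book cites: average profit,
    # maximum-minimum range and standard deviation, over one parametric scan.
    import statistics

    def optimization_profile(net_profits):
        return {
            "average_profit": statistics.mean(net_profits),
            "max_min_range":  max(net_profits) - min(net_profits),
            "std_dev":        statistics.pstdev(net_profits),
        }

    # hypothetical net profits from a small parametric scan
    print(optimization_profile([1200.0, -300.0, 450.0, -150.0, 800.0]))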

Now kindly spend a few moments and review this 4D-animated view of a Model-under-test ( the visualisation of which is reduced to just four dimensions for easier visual perception ), where none of the above holds true.

<aMouseRightCLICK>.openPictureOnAnotherTab to see full HiRes picture details

[ 4D-animated visualisation of a Model-under-test's optimization profile ]

Based on contemporary state-of-the-art adaptive money-management practice, the cited postulate fails to hold, be it due to a poor parametrisation ( artificially driving the model into a rather "flat-profits" sub-space of aParamSetVectorSPACE ), due to a principal mis-concept, or due to a poor practice ( including the lack thereof ) in implementing the most powerful profit-booster ever -- the money-management model sub-space itself.

Item 1 becomes insignificant altogether.

Item 2 works exactly contrary to the stated postulate.

Item 3 cannot yield anything but the opposite, due to items 1 and 2 above.
