
OK, I realise that the question is vague, so I will supply some context and perhaps receive some context-related answers.

I am conducting a final year project as part of my BSc Computer Science with Maths degree, and my chosen project is to evaluate pitch tracking algorithms running on a mobile device. There are a few standard algorithms that I am likely to implement on an Android-based device.

I will evaluate the frequency estimators on performance, reliability, and accuracy, so I need to produce some quantitative measures to compare against.

My concern is that my conclusions are going to be heavily related to my own implementation of these algorithms. How would I go about detecting or minimising inefficiencies that I've introduced?

Furthermore, are there any performance issues related to mathematical calculations on mobile devices in general that I should be aware of? I read that integer arithmetic is favoured because floating-point arithmetic isn't always supported in hardware?

I read some of the related questions and they point towards books with standard algorithms, but it's not so easy when a number of pitch tracking algorithms exist only as descriptions in academic papers.

I am also directed towards performance evaluation software, but not towards any specific applications. Are there any popular choices?

Matt Esch

1 Answer


As to detecting and minimizing inefficiencies...are there evaluations of pitch tracking algorithms on other types of systems? Perhaps you can use them as a reference to evaluate your evaluation (so to speak). That is, there might be some baseline problem(s) that will let you see if your implementations introduce an obvious bias.
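
As a concrete example of such a baseline check, here is a minimal sketch in Java. It runs a tracker on a synthetic sine wave of known frequency and computes the gross pitch error (GPE), a standard accuracy measure in the pitch tracking literature: the fraction of frames whose estimate deviates from the true pitch by more than 20%. The PitchTracker interface is hypothetical, standing in for whatever per-frame API your implementations expose:

    // Hypothetical per-frame interface for the trackers under test.
    interface PitchTracker {
        double estimateHz(short[] frame, int sampleRate);
    }

    class BaselineCheck {
        // Fraction of frames whose estimate misses the true pitch by > 20%.
        static double grossPitchError(PitchTracker tracker, double trueHz,
                                      int sampleRate, int frameSize, int frames) {
            int errors = 0;
            for (int i = 0; i < frames; i++) {
                short[] frame = sineFrame(trueHz, sampleRate, frameSize, i * frameSize);
                double est = tracker.estimateHz(frame, sampleRate);
                if (Math.abs(est - trueHz) / trueHz > 0.20) {
                    errors++;
                }
            }
            return (double) errors / frames;
        }

        // Generate one frame of a sine wave at half full-scale amplitude.
        static short[] sineFrame(double hz, int sampleRate, int size, int offset) {
            short[] frame = new short[size];
            for (int n = 0; n < size; n++) {
                double t = (offset + n) / (double) sampleRate;
                frame[n] = (short) (Math.sin(2 * Math.PI * hz * t) * 16384);
            }
            return frame;
        }
    }

If two implementations give wildly different GPE on something this easy, the problem is almost certainly in an implementation rather than in either algorithm.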

It might be better to stick to integer or fixed-point arithmetic, as some (most?) devices will not have floating-point processors. (Floating-point calculations are done in software on those platforms.) Of course, it's also fair to investigate the trade-offs between floating-point and not. You might even implement the same algorithm two ways, just to study that issue.
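
To make the fixed-point option concrete, here is a minimal sketch of Q15 arithmetic (a common audio DSP convention: signed 16-bit values with 15 fractional bits, so the scale factor is 2^15 = 32768). The class name is mine; nothing here is Android-specific:

    class FixedPoint {
        static final int Q = 15;

        // Convert a double in roughly [-1, 1) to Q15, saturating at the limits.
        static short toQ15(double x) {
            long scaled = Math.round(x * (1 << Q));
            return (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
        }

        static double fromQ15(short x) {
            return x / (double) (1 << Q);
        }

        // Fixed-point multiply: widen to int, multiply, shift the scale back out.
        static short mulQ15(short a, short b) {
            return (short) ((a * b) >> Q);
        }
    }

An autocorrelation inner loop, for instance, can then accumulate products of Q15 samples in a long accumulator with no floating-point at all, which is exactly the case where software-emulated floats hurt most.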

Generic performance evaluation software may or may not be appropriate to your assignment. You need to define exactly what "performance, reliability, and accuracy" mean in terms of specific metrics. Then you can ask about the best way to instrument your implementations to measure or estimate those metrics.
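
For what it's worth, the Android SDK does ship with a method profiler (Traceview, driven by Debug.startMethodTracing()), but for a metric you have defined yourself, a hand-rolled probe is often easier to relate to your numbers. Here is a minimal sketch of per-frame latency instrumentation, with all names my own:

    // Times repeated calls and reports a simple summary statistic.
    class LatencyProbe {
        private final long[] samples;
        private int count;

        LatencyProbe(int capacity) {
            samples = new long[capacity];
        }

        // "work" stands for one frame of whichever algorithm is under test.
        void measure(Runnable work) {
            long start = System.nanoTime();
            work.run();
            long elapsed = System.nanoTime() - start;
            if (count < samples.length) {
                samples[count++] = elapsed;
            }
        }

        double meanMicros() {
            long total = 0;
            for (int i = 0; i < count; i++) total += samples[i];
            return count == 0 ? 0 : total / (1000.0 * count);
        }
    }

Reporting a distribution (median, worst case) rather than just the mean is usually worth the extra few lines, since garbage collection pauses can make mobile timings very noisy.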

Ted Hopp
  • The idea of investigating the integer/floating-point tradeoffs is an interesting prospect, though I suspect I don't have enough hardware to test it on. However, I think there is still merit in doing so. Of course, I wonder about reference implementations: there are many that will run on my i7 machine without trouble at all, but can latency, for example, easily be compared? I am not so sure that I would expect to see exactly the same latency ratios on a PC as I would on a mobile phone. I would have to factor in the "expense" of operations on the processors I am using. – Matt Esch Jan 26 '11 at 11:39
  • Actually, I had in mind a slightly different use for reference implementations. If algorithms A and B are roughly equivalent according to some existing test suite, but your implementation has A significantly outclassing B, that would suggest that you should explore the possibility that you screwed up the implementation of B. (Of course, there's always the possibility that the existing test suite screwed up A, which was a superior algorithm all along.) With things like latency, ratios can be dangerous when there are fixed as well as proportional components to the performance. – Ted Hopp Jan 26 '11 at 17:25
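
To make the fixed-versus-proportional point concrete (all numbers below are invented for illustration): model per-frame latency as t = c0 + c1·n, where c0 is fixed overhead and c1·n grows with the frame size n. Suppose on a desktop algorithm A costs t = 1 + 0.10n ms and B costs t = 2 + 0.05n ms; at n = 100, that is 11 ms versus 7 ms, a ratio of about 1.6. On a phone where fixed costs are 10× higher but per-sample costs are 20× higher, A costs 210 ms and B costs 120 ms, a ratio of 1.75. Here the ranking survives but the ratio does not, and with different constants the ranking itself can flip.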