I believe the answer is that one does not exist, and I'm going to try to outline a proof of why that is so.
Consider a 'useful' online algorithm to be defined by two criteria:
1. It must have fixed memory requirements during processing.
2. Each update must take a fixed amount of time.
This is stricter than the literal definition of a sequential/incremental/online algorithm, which really just requires that data can be passed in one piece at a time. However, if either 1) or 2) did not hold, then after processing a large enough number of elements the memory or time required to run the algorithm would eventually become infeasible. One of the reasons online algorithms are used in the first place is that they can run continuously without fear of their performance slowly degrading. Also, note that there are online algorithms for calculating the mean and variance that satisfy both 1) and 2), and I think that is what we are aiming to achieve here.
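For concreteness, here is a minimal sketch (in Python, my own illustration rather than anything from the question) of Welford's online algorithm for the mean and variance, which satisfies both criteria: each update takes O(1) time and the state is O(1) memory.

```python
class RunningMeanVar:
    """Welford's online algorithm: O(1) memory and O(1) time per update."""

    def __init__(self):
        self.n = 0        # number of observations seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # sample variance; needs at least two observations
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")
```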
Now to the problem posed. During processing, the mean will change with every new observation. That in turn means the set of observations that fall below the mean will change. When this happens, we need to adjust our running semivariance according to the set "delta", defined as the symmetric difference between the set of elements below the old mean and the set of elements below the new mean; in other words, the elements that lie between the old mean and the new mean. We will have to calculate this delta in the process of adjusting the old semivariance to the new semivariance in the presence of new data.
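To make the membership problem concrete, here is a naive, non-online sketch in Python; the "below the mean" convention and the normalisation by n are my assumptions, since semivariance conventions vary. The point is just that which elements contribute depends on the current mean, so the contributing set can change with every update.

```python
def semivariance(xs):
    """Naive batch semivariance (assumed convention: below-mean, normalized by n)."""
    mean = sum(xs) / len(xs)
    below = [x for x in xs if x < mean]                  # membership depends on the mean
    return sum((x - mean) ** 2 for x in below) / len(xs)

# Adding one observation can shift the mean and change which elements are "below":
xs = [1.0, 2.0, 10.0]   # mean = 4.33..., below-mean set = {1.0, 2.0}
xs.append(100.0)        # mean = 28.25,   below-mean set = {1.0, 2.0, 10.0}
```

Here the delta set is {10.0}: it was above the old mean but is below the new one, so its squared deviation must now be folded into the semivariance.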
Now let's consider the complexity of calculating this delta set. We need to find all elements that fall between the old mean and the new mean. We can always keep track of the old mean, and the new mean can be calculated incrementally in fixed time, so neither poses a problem. However, to compute the delta itself there is no way around keeping track of all the previous elements in our set, which immediately breaks the memory condition of an online algorithm. Secondly, even if we keep the previous elements sorted, the best we can do to locate those that lie between the old and new means is a binary search costing O(log(number of elements)), which is worse than fixed time. So eventually, with enough elements, the algorithm will not only require more memory than we have, it will also require more time per update.
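To illustrate that cost argument, here is a hedged sketch (names and structure are my own, not from the question) of roughly the best we can do: keep every observation in a sorted list and use binary search to locate the delta elements between the old and new means. Memory grows as O(n), insertion into a plain list costs O(n), and the boundary searches cost O(log n), so neither fixed memory nor fixed update time is achieved.

```python
import bisect

class DeltaTracker:
    """Locates the elements that cross the mean on each update (not an online algorithm)."""

    def __init__(self):
        self.xs = []       # every observation so far, kept sorted -> O(n) memory
        self.total = 0.0

    def update(self, x):
        old_mean = self.total / len(self.xs) if self.xs else x
        bisect.insort(self.xs, x)              # O(n) worst-case shift in a plain list
        self.total += x
        new_mean = self.total / len(self.xs)
        lo, hi = sorted((old_mean, new_mean))
        i = bisect.bisect_left(self.xs, lo)    # O(log n) search for the lower bound
        j = bisect.bisect_left(self.xs, hi)    # O(log n) search for the upper bound
        return new_mean, self.xs[i:j]          # the "delta" elements between the two means
```

The sketch only shows the part that forces us to store everything; the new observation itself would still have to be folded into the running semivariance separately.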