I briefly reviewed the literature on line intersection and line arrangement problems in computational geometry. Most algorithms are based on the plane sweep technique. From the angle of computational complexity, it seems to me that the asymptotic bounds are a function of the number of line segments and a term "k", where "k" is the number of intersections among the segments. For example, one of the best-known algorithms has time complexity O(n log n + k), which is output-sensitive. My difficulty is in understanding why we cannot get rid of the term "k" when stating the time complexity. If we look at algorithms for other problems, e.g. sorting, the complexity is not expressed as a function of how many swaps or comparisons are performed; it is simply a function of the number of inputs. Any insights will be helpful.
1 Answer
If you'd like to express the worst-case complexity strictly in terms of the number of line segments in the input, then you would have to assume for K the largest possible number of intersections, namely Θ(N²), since every pair of segments can cross. So an algorithm with O(N log N + K) time complexity (such as Balaban's) could also be called O(N² + N log N), which simplifies to O(N²). The output-sensitive bound is simply more informative: it tells you the algorithm is fast when the actual number of intersections is small.
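To see why K really can be quadratic in N, here is a small sketch (the helper names `grid_segments` and `count_intersections` are mine, for illustration): a grid of N/2 horizontal and N/2 vertical segments, where every horizontal segment crosses every vertical one, giving exactly (N/2)² intersections. The brute-force pairwise check below is O(N²) and is only meant to verify the count, not to stand in for a plane-sweep algorithm.

```python
def grid_segments(m):
    """Build m horizontal and m vertical segments arranged as a grid.

    Each segment is a pair of endpoints ((x1, y1), (x2, y2)).
    Every horizontal segment crosses every vertical one, so the
    2m segments have exactly m * m intersection points.
    """
    horiz = [((0, i), (m + 1, i)) for i in range(1, m + 1)]
    vert = [((j, 0), (j, m + 1)) for j in range(1, m + 1)]
    return horiz + vert

def count_intersections(segments):
    """Brute-force O(n^2) intersection count, specialized to
    axis-aligned segments for simplicity."""
    def crosses(a, b):
        (ax1, ay1), (ax2, ay2) = a
        (bx1, by1), (bx2, by2) = b
        if ay1 == ay2 and bx1 == bx2:   # a horizontal, b vertical
            return ax1 <= bx1 <= ax2 and by1 <= ay1 <= by2
        if ax1 == ax2 and by1 == by2:   # a vertical, b horizontal
            return crosses(b, a)
        return False                    # parallel pairs never cross here

    n = len(segments)
    return sum(crosses(segments[i], segments[j])
               for i in range(n) for j in range(i + 1, n))

if __name__ == "__main__":
    for m in (2, 4, 8):
        segs = grid_segments(m)
        n, k = len(segs), count_intersections(segs)
        print(f"n = {n}, k = {k}")   # k == (n/2)^2 at every scale
```

Doubling the number of segments quadruples k here, so any algorithm that must report every intersection point is forced to spend Ω(k) time in this family of inputs, no matter how clever it is otherwise.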

Andrew Durward