
I could not make my title very descriptive, apologies!

Is it the case that for every data structure supporting some operations with certain amortized running times, there exists another data structure supporting the same operations with the same running times in the worst case? I am interested in both iterative, ephemeral data structures and functional ones.

I am certain that this question must have been asked before, but I cannot seem to find the right search keywords (on Google, SO, TCS). I am looking for a cited answer in {yes, no, open}.

aelguindy
  • This is a really interesting question! Every data structure that I know of for which there's a nice amortized bound has some other data structure with the same worst-case time bounds, but I'm not sure if it's always possible to guarantee this. – templatetypedef Jan 31 '12 at 22:01
  • Yes, usually a much uglier, more complex one! I was interested in asking this because every possible answer to this will be very surprising to me :), except perhaps "open". – aelguindy Jan 31 '12 at 22:03
  • I am skeptical. I think amortization really buys you something. I don't see how you could manage to make an O(1) worst-case add, O(1) worst-case access resizable array, for example. – Rob Neuhaus Jan 31 '12 at 22:03
  • You may want to look at http://cstheory.stackexchange.com/ — it sounds like you're looking for proofs, not engineering. – derobert Jan 31 '12 at 22:09
  • @derobert I wanted to do that but I thought it might not be a research level problem. – aelguindy Jan 31 '12 at 22:12
  • What about vector? It gives amortized expandable-array semantics. –  Jan 31 '12 at 22:22
  • There is a data structure called the extendible array that supports worst-case O(1) random access and append, just like the dynamic array. I have an implementation of it here on my personal site: http://keithschwarz.com/interesting/code/?dir=extendible-array – templatetypedef Jan 31 '12 at 22:25
  • @rrenaud- Such a data structure exists! It's called the extendible array. I have a Java implementation available here: http://keithschwarz.com/interesting/code/?dir=extendible-array – templatetypedef Jan 31 '12 at 22:26
  • @templatetypedef Your solution does not guarantee elements to be located in contiguous memory. – Bazinga Jan 31 '12 at 22:31
  • @Bazinga- Your comment that the values are not stored in contiguous memory is partially correct but can easily be fixed. The idea is that since we have the new array space for the result, every time someone wants to look at one of those values, we can just copy it down from the old array. This means that the values will indeed be stored in contiguous memory (a sketch of this idea follows these comments). – templatetypedef Jan 31 '12 at 22:35
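
To ground the comments above: a minimal Python sketch of the incremental-copying idea templatetypedef describes, assuming we migrate two old elements per append. The ExtendibleArray class and its method names are illustrative, not the API of the linked Java implementation.

class ExtendibleArray:
    def __init__(self):
        self.old = []          # previous, full backing store being drained
        self.new = [None]      # current backing store
        self.copied = 0        # how many slots of self.old were migrated
        self.size = 0          # total number of elements stored

    def append(self, x):
        if self.size == len(self.new):           # current store is full:
            self.old = self.new                  # keep it and allocate a
            self.new = [None] * (2 * self.size)  # twice-as-big new store
            self.copied = 0
        self.new[self.size] = x
        self.size += 1
        for _ in range(2):                   # migrate up to two old elements
            if self.copied < len(self.old):
                self.new[self.copied] = self.old[self.copied]
                self.copied += 1

    def get(self, i):
        if not 0 <= i < self.size:
            raise IndexError(i)
        # Slot i is in the new store once it was migrated or appended there.
        if i < self.copied or i >= len(self.old):
            return self.new[i]
        return self.old[i]

Each append writes the new element and moves at most two old ones, and the migration is finished well before the new store can fill, so append and get are both worst-case O(1).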

1 Answer


No, at least in models where element distinctness of n elements requires time Ω(n log n).

Consider the following data structure, which I describe using Python.

class SpecialList:
    def __init__(self):
        self.lst = []

    def append(self, x):
        # Worst-case O(1).
        self.lst.append(x)

    def rotationalsimilarity(self, k):
        # Count the positions where the list agrees with its rotation
        # by k, then destroy the list. The O(n) work is paid for by the
        # n preceding appends, so the amortized cost is O(1).
        rotatedlst = self.lst[k:] + self.lst[:k]
        count = sum(1 for x, y in zip(self.lst, rotatedlst) if x == y)
        self.lst = []
        return count

Clearly append and rotationalsimilarity (since it clears the list) are amortized O(1): every append deposits a credit that later pays for handling that element inside rotationalsimilarity. If rotationalsimilarity were worst-case O(1), it could write to only O(1) memory locations, so by recording the previous contents of those locations we could also provide an O(1) undo operation that restores the data structure to its previous state. It would follow that we could implement element distinctness in O(n) time, contradicting the assumed Ω(n log n) lower bound:

def distinct(lst):
    slst = SpecialList()
    for x in lst:
        slst.append(x)
    for k in range(1, len(lst)):  # 1 <= k < len(lst)
        # Two equal elements at cyclic distance k make the list agree
        # with its rotation by k in at least one position.
        if slst.rotationalsimilarity(k) > 0:
            return False
        slst.undo()  # the hypothetical worst-case O(1) undo
    return True
swen
  • Can you justify why rotationalSimilarity is amortized O(1)? This doesn't seem correct, since the very first thing it does is an O(n) operation to build the rotated list. – templatetypedef Feb 01 '12 at 03:33
  • x appends and y rotsim calls together are worst-case O(x + y): I prepay for the eventual big deletion with one coin for every append, and then the rotsim call only costs me those coins' worth of operations. – Rob Neuhaus Feb 01 '12 at 04:13
  • I don't see how rotsim and undo are O(1) amortized together. Can't I force you to do a lot of work with a sequence of rotsim and undos? – Rob Neuhaus Feb 01 '12 at 04:24
  • @rrenaud This is under the hypothesis (leading to a contradiction) that rotsim were reimplemented to be O(1) *worst-case*. If so, all I have to do is copy-on-write the O(1) memory locations that get written during the rotsim call. – swen Feb 01 '12 at 13:02
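
A minimal Python sketch of the copy-on-write journaling swen describes, assuming we can instrument every memory write made by the hypothetical worst-case O(1) rotationalsimilarity; JournaledMemory and its method names are illustrative.

class JournaledMemory:
    def __init__(self, size):
        self.words = [0] * size
        self.journal = []            # (address, previous value) pairs

    def begin(self):
        # Start journaling a new operation.
        self.journal.clear()

    def read(self, addr):
        return self.words[addr]

    def write(self, addr, value):
        # Remember the old contents before overwriting.
        self.journal.append((addr, self.words[addr]))
        self.words[addr] = value

    def undo(self):
        # Roll back the last operation, newest write first.
        while self.journal:
            addr, old = self.journal.pop()
            self.words[addr] = old

An operation that performs only O(1) writes leaves only O(1) journal entries, so undo runs in worst-case O(1) time, which is exactly the undo operation the reduction in the answer requires.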