
I'm trying to implement a unique stack. If I extend the `Stack` class and use `contains`, it scans the entire stack, so the duplicate check becomes O(n). So I tried extending `LinkedHashSet`, which removes duplicates by itself and maintains insertion order. I'm able to implement all the methods except `pop`.

Can anyone share thoughts here?

import java.util.LinkedHashSet;
import java.util.stream.Stream;

public class UniqueStack<E> extends LinkedHashSet<E> {

    private int size = 0;

    @Override
    public Stream<E> stream() {
        return super.stream();
    }

    public boolean push(E e) {
        if (!contains(e)) {
            ++size;
            return super.add(e);
        }
        return false;
    }

    public E pop() {
        // stuck here
        return null;
    }
}
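For context, the only way to pop from a `LinkedHashSet` by itself is to walk the iterator to the last element, which makes every `pop` O(n). A standalone sketch of that approach (class and method names are illustrative):

```java
import java.util.Iterator;
import java.util.LinkedHashSet;

class IterPopDemo {
    // O(n) pop: walk the iterator to the last element, then remove it
    static <E> E popLast(LinkedHashSet<E> set) {
        E last = null;
        Iterator<E> it = set.iterator();
        while (it.hasNext()) {
            last = it.next();
        }
        if (last != null) {
            set.remove(last);
        }
        return last;
    }

    public static void main(String[] args) {
        LinkedHashSet<Integer> set = new LinkedHashSet<>();
        set.add(1);
        set.add(2);
        set.add(3);
        System.out.println(popLast(set)); // prints 3
        System.out.println(set);          // prints [1, 2]
    }
}
```

This works, but an O(n) `pop` defeats the purpose of using a hash-based structure in the first place.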

Siva
  • Why not use both a `Stack` and `HashSet` as part of `UniqueStack`? Also, `Stack` should be replaced with `ArrayDeque`. – Jacob G. Feb 05 '20 at 20:34
  • @JacobG. If I use a Stack and a HashSet, I have to store the same data in both, so I end up maintaining the same data twice and it will consume more memory. – Siva Feb 05 '20 at 20:38
  • Is memory *really* an issue here? How many elements would you be storing at most? – Jacob G. Feb 05 '20 at 20:40
  • I need to crawl a site which has millions of URLs. I will be using DFS, so I need a stack. – Siva Feb 05 '20 at 20:43
  • The amount of memory required is identical whether you use a Stack/HashSet combo or a LinkedHashSet, so long as you're storing object references. If you want O(1) lookups, you need a set, and if you want LIFO order, you need a stack or a linked list. The LinkedHashSet combines the two by storing each entry as part of both a hash table and a linked list. – Compass Feb 05 '20 at 20:50
  • @Compass yeah, you're correct. Actually, using a HashSet together with a Stack (or an ArrayDeque used as a stack) saves memory by removing duplicates. While crawling we may get many duplicates, and adding them to the stack would consume more memory, whereas maintaining a HashSet prevents adding the duplicates and saves a lot of memory. It also reduces the number of calls we make to the DB to check whether a URL is already processed. – Siva Feb 05 '20 at 21:01
  • @ArvindKumarAvinash Actually, the comment added by @Compass resolved the issue. – Siva May 28 '20 at 20:48
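A sketch of the composition the comments converge on: an `ArrayDeque` for LIFO order plus a `HashSet` for O(1) duplicate checks. Both structures hold references to the same objects, so the overhead is one extra reference per element, not a copy of the data. Class and field names here are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

class DedupStack<E> {
    // LIFO order lives in the deque; the set only answers "seen before?"
    private final Deque<E> deque = new ArrayDeque<>();
    private final Set<E> seen = new HashSet<>();

    // Push only if the element was never pushed before; O(1) on average
    public boolean push(E e) {
        if (seen.add(e)) {   // Set.add returns false for duplicates
            deque.push(e);
            return true;
        }
        return false;
    }

    // Pop the most recently pushed element, or null if empty
    public E pop() {
        E e = deque.poll();  // poll returns null instead of throwing
        if (e != null) {
            seen.remove(e);
        }
        return e;
    }

    public boolean isEmpty() {
        return deque.isEmpty();
    }
}
```

One design choice to note: whether `pop` should also remove the element from `seen` depends on the use case. For crawling, you would likely keep popped URLs in `seen` so they are never re-queued.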

1 Answer


I believe it would be best to use an instance of a regular stack for the stack operations, while using the O(1) lookups of a hash-based set to track duplicates. Additional methods could be added using the same idea.

    UniqueStack<Integer> stack = new UniqueStack<>();

    stack.push(10);
    stack.push(20);
    stack.push(30);
    stack.push(40);
    stack.push(30); // ignored
    stack.push(50);

    System.out.println(stack);
    int v = stack.pop();
    System.out.println(v);
    System.out.println(stack);
    stack.push(1);
    stack.push(2);
    System.out.println(stack);
    v = stack.pop();
    System.out.println(v);
    System.out.println(stack);

The output of this is

[10, 20, 30, 40, 50]
50
[10, 20, 30, 40]
[10, 20, 30, 40, 1, 2]
2
[10, 20, 30, 40, 1]

import java.util.LinkedHashSet;
import java.util.Stack;
import java.util.stream.Stream;

class UniqueStack<E> extends LinkedHashSet<E> {

    // the Stack supplies LIFO order; the inherited set supplies O(1) contains
    private final Stack<E> stack = new Stack<>();

    @Override
    public Stream<E> stream() {
        return stack.stream();
    }

    public boolean push(E e) {
        if (!contains(e)) {       // O(1) duplicate check via the set
            stack.push(e);
            return add(e);
        }
        return false;
    }

    public E pop() {
        E val = null;
        if (!stack.isEmpty()) {
            val = stack.pop();    // remove from both structures
            remove(val);
        }
        return val;
    }
}
WJS