1

Are there any methods available to remove every nth element from a Scala List?

I hope we can do this with the filter method by writing some logic and returning another list. But is that an efficient way to do it?

Shankar
  • Possible duplicate http://stackoverflow.com/questions/18847249/how-to-remove-an-item-from-a-list-in-scala-having-only-its-index ? – GuiSim Aug 23 '16 at 15:29

8 Answers

7

Simplest so far, I think

def removeNth[A](myList: List[A], n: Int): List[A] = 
  myList.zipWithIndex collect { case (x,i) if (i + 1) % n != 0 => x }

collect is an oft-forgotten gem that takes a partial function as its argument, maps the elements that function is defined for, and ignores those that are not in its domain.
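A quick check of this approach (the definition is repeated so the snippet runs standalone):

```scala
def removeNth[A](myList: List[A], n: Int): List[A] =
  myList.zipWithIndex collect { case (x, i) if (i + 1) % n != 0 => x }

// Drops the elements at 1-based positions 3 and 6
removeNth(List(1, 2, 3, 4, 5, 6), 3) // List(1, 2, 4, 5)
```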

Alec
  • This will remove everything but the `nth` element. Needs to switch to `!=`. – Alvaro Carrasco Aug 23 '16 at 15:37
  • It removed the first element from the collection, for example if I have `List(1,2,3,4,5,6)` , the output it returns is `List(2, 3, 5, 6)` when I replace n with 3 – Shankar Aug 23 '16 at 15:47
  • 1
    @Ramesh That is pretty straightforward to fix... This is more efficient than `filter`. And you get only two traversals of the list instead of three. – Alec Aug 23 '16 at 15:49
  • @Alec: Thanks, I liked your answer very much, +1 for the `collect` method taking a partial function. – Shankar Aug 23 '16 at 15:55
2

Simply:

list.zipWithIndex
    .filter { case (_, i) => (i + 1) % n != 0 }
    .map { case (e, _) => e }
Jean Logeart
  • That will be OK if he really needs a `List` here, as lists provide sequential access. But if that is not the case and he just wants to use any `collection` type for his elements, then the better approach would be to use an `ArrayBuffer` and just `null`ify the corresponding indexes. – sarveshseri Aug 23 '16 at 16:05
  • @SarveshKumarSingh In most cases it wouldn't be an adequate replacement: 1. it doesn't work if you have `List[Int]` or other `AnyVal` (or a `List[A]` without a bound on `A`); 2. you now have to handle `null` downstream; 3. you may need indices of the resulting list; etc. – Alexey Romanov Aug 23 '16 at 18:52
  • For any list of length `m` and for any `n` such that `n < m`, this list approach takes about `3m` operations, whereas if we were using `Array` or `ArrayBuffer` with nullifying we can do it in `m/n` operations. And `nullifying` does not have to involve `null`. – sarveshseri Aug 24 '16 at 04:36
1

An approach without indexing: chop the list into chunks of nth elements each,

xs.grouped(nth).flatMap(_.take(nth-1)).toList

From each chunk delivered by grouped we take up to nth-1 items.
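For example, with nth = 3 the trailing partial chunk is kept whole:

```scala
val xs = List(1, 2, 3, 4, 5, 6, 7)
val nth = 3
// grouped(3) yields List(1,2,3), List(4,5,6), List(7); take(2) trims each chunk
xs.grouped(nth).flatMap(_.take(nth - 1)).toList // List(1, 2, 4, 5, 7)
```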

This other approach is not efficient (note the comment by @Alexey Romanov): a for comprehension desugars into a flatMap and a withFilter (a lazy filter), but each xs(i) lookup is linear on a List, so the whole thing is quadratic,

for (i <- 0 until xs.size if i % nth != nth-1) yield xs(i)
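Roughly, that comprehension desugars to something like the following (a sketch, not the exact compiler output), which makes the cost visible:

```scala
val xs = List(1, 2, 3, 4, 5, 6)
val nth = 3
// withFilter is the lazy filter; each xs(i) on a List costs O(i),
// so the loop as a whole is quadratic in xs.size
val ys = (0 until xs.size).withFilter(i => i % nth != nth - 1).map(i => xs(i))
// ys == Vector(1, 2, 4, 5)
```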
elm
1

Here is a recursive implementation without indexing.

  def drop[A](n: Int, lst: List[A]): List[A] = {
    // Count down from n; when the counter reaches 1 we are at an nth element,
    // so skip it and reset the counter. (Not tail-recursive, so it can
    // overflow the stack on very long lists.)
    def dropN(i: Int, lst: List[A]): List[A] = (i, lst) match {
      case (1, _ :: xs) => dropN(n, xs)
      case (_, x :: xs) => x :: dropN(i - 1, xs)
      case (_, x) => x // Nil
    }
    dropN(n, lst)
  }
Jegan
1

One more alternative, close to @elm's answer, but taking into account that drop(1) is much faster for lists than take-ing nearly the entire chunk:

def remove[A](xs: List[A], n: Int) = {
  val (firstPart, rest) = xs.splitAt(n - 1)
  firstPart ++ rest.grouped(n).flatMap(_.drop(1))
}
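A quick check, removing every 3rd element (the definition is repeated so the snippet runs standalone):

```scala
def remove[A](xs: List[A], n: Int): List[A] = {
  val (firstPart, rest) = xs.splitAt(n - 1)
  // splitAt(2) gives (List(1,2), List(3,4,5,6,7)); each chunk of 3 then loses its head
  firstPart ++ rest.grouped(n).flatMap(_.drop(1))
}

remove(List(1, 2, 3, 4, 5, 6, 7), 3) // List(1, 2, 4, 5, 7)
```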
Alexey Romanov
1

Here is a tail-recursive implementation for List using an accumulator:

  import scala.annotation.tailrec
  def dropNth[A](lst: List[A], n: Int): List[A] = {
    @tailrec
    def dropRec(i: Int, lst: List[A], acc: List[A]): List[A] = (i, lst) match {
      case (_, Nil) => acc
      case (1, x :: xs) => dropRec(n, xs, acc)
      case (i, x :: xs) => dropRec(i - 1, xs, x :: acc)
    }
    dropRec(n, lst, Nil).reverse
  }
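For example, dropping every 3rd element (the definition is repeated so the snippet runs standalone):

```scala
import scala.annotation.tailrec

def dropNth[A](lst: List[A], n: Int): List[A] = {
  @tailrec
  def dropRec(i: Int, lst: List[A], acc: List[A]): List[A] = (i, lst) match {
    case (_, Nil)     => acc
    case (1, _ :: xs) => dropRec(n, xs, acc) // skip the nth element, reset counter
    case (i, x :: xs) => dropRec(i - 1, xs, x :: acc)
  }
  dropRec(n, lst, Nil).reverse
}

dropNth(List(1, 2, 3, 4, 5, 6, 7), 3) // List(1, 2, 4, 5, 7)
```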

Update: As noted in the comments, I have tried the other solutions here on a large (1 to 5000000).toList input. Those with zipWithIndex filter/collect fail with an OutOfMemoryError and the (non-tail) recursive one fails with a StackOverflowError. Mine, using List cons (::) and @tailrec, works well.

That is because zipping with index builds a new ListBuffer and appends all the tuples to it, which leads to the OOM. And the plain recursive solution simply has 5 million levels of recursion, which is too much for the stack.

The tail-recursive version creates no unnecessary objects and effectively builds two copies of the input (that is, 2 × 5 million :: instances), both in O(n). The first holds the filtered elements in reverse order, because the output is prepended with x :: acc (O(1), whereas appending to a List is O(n)). The second is simply the reverse of that intermediate result.

  • @SarveshKumarSingh: it does. `scala> dropNth((1 to 20).toList, 3) res0: List[Int] = List(1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20)` as you see 3rd, 6th etc. are not there. – Martin Milichovsky Aug 24 '16 at 05:21
  • Yes. My bad for not noticing. I think this is a better solution compared to all others. The only problem with this is that `GC` will be troublesome for huge lists. But why do you need that reverse? Why not replace `x :: acc` by `acc + x`? – sarveshseri Aug 24 '16 at 05:55
  • If you don't see how `GC` will be a problem, then try this simple-looking thing: `(1 to 5000000).toList`. `GC` is the Achilles heel of immutable programming in Scala. – sarveshseri Aug 24 '16 at 05:58
  • @SarveshKumarSingh I do not append to the list because that is `O(n)`, so the overall complexity would be quadratic. Prepending to a list, on the other hand, is `O(1)`, and the reverse is `O(n)` but is done just once. The `(1 to 5000000)` input was computed in a few seconds. I think GC is not an issue because this implementation actually creates just two copies of the input. This computation could be done efficiently on Streams, without the need to store the whole sequence in memory. But that was not the OP's question. – Martin Milichovsky Aug 24 '16 at 07:12
  • 1
    I have tried the other solutions here on large `(1 to 5000000).toList`. Those with `zipWithIndex filter`fail on `OutOfMemoryError` and the (non-tail) recurcive fails on `StackOverflowError`. Mine using List cons (`::`) and tailrec works well. – Martin Milichovsky Aug 24 '16 at 07:23
  • @MartinMilichovský : Thanks for the detailed analysis. – Shankar Aug 24 '16 at 17:48
1

Simplest solution

scala> def dropNth[T](list:List[T], n:Int) :List[T] = {
     | list.take(n-1):::list.drop(n)
     | }
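Note that this removes only the single element at position n, not every nth element; a quick check (same code in plain-def form):

```scala
def dropNth[T](list: List[T], n: Int): List[T] =
  list.take(n - 1) ::: list.drop(n)

dropNth(List(1, 2, 3, 4, 5, 6), 3) // List(1, 2, 4, 5, 6): only the 3rd element is gone
```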
igx
0

Yet another approach: make a function for List that does exactly what you need. This does the same as Martin's dropNth function, but doesn't need the O(n) reverse:

    import scala.collection.mutable.ListBuffer

    implicit class improvedList[A](xs: List[A]) {
      def filterAllWhereIndex(n: Int): List[A] = {
        var i = 1
        var these = xs
        val b = new ListBuffer[A]
        while (these.nonEmpty) {
          if (i != n) {
            b += these.head
            i += 1
          } else i = 1
          these = these.tail
        }
        b.result
      }
    }

    (1 to 5000000).toList filterAllWhereIndex 3

If you want efficiency, this does the trick. Plus, it can be used as an infix operator, as shown above. This is a good pattern to know in order to avoid zipWithIndex, which is a bit heavy-handed in both time and space.

TRuhland