
Could you guys explain how to use the new groupBy in akka-streams? The documentation seems quite unhelpful. groupBy used to return (T, Source), but it doesn't anymore. Here is my example (I mimicked the one from the docs):

Source(List(
  1 -> "1a", 1 -> "1b", 1 -> "1c",
  2 -> "2a", 2 -> "2b",
  3 -> "3a", 3 -> "3b", 3 -> "3c",
  4 -> "4a",
  5 -> "5a", 5 -> "5b", 5 -> "5c",
  6 -> "6a", 6 -> "6b",
  7 -> "7a",
  8 -> "8a", 8 -> "8b",
  9 -> "9a", 9 -> "9b",
))
  .groupBy(3, _._1)
  .map { case (aid, raw) =>
    aid -> List(raw)
  }
  .reduce[(Int, List[String])] { case ((key, lItems), (_, rItems)) =>
    (key, lItems ::: rItems)
  }
  .mergeSubstreams
  .runForeach { case (aid, items) =>
    println(s"$aid - ${items.length}")
  }

This simply hangs. Perhaps it hangs because the number of substreams is lower than the number of unique keys. But what should I do if I have an infinite stream? I'd like to group until the key changes.

In my real stream the data is always sorted by the value I'm grouping by. Perhaps I don't need groupBy at all?
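To be concrete, the semantics I'm after are consecutive grouping. On plain Scala collections (no streams) it would look like this; groupConsecutive is just an illustrative name:

```scala
// Illustrative only: "group until the key changes" on plain collections.
def groupConsecutive[K, V](items: List[(K, V)]): List[(K, List[V])] =
  items.foldRight(List.empty[(K, List[V])]) {
    // current element has the same key as the head group: prepend to it
    case ((k, v), (k2, vs) :: rest) if k == k2 => (k, v :: vs) :: rest
    // key changed (or no groups yet): start a new group
    case ((k, v), acc)                         => (k, List(v)) :: acc
  }

val grouped = groupConsecutive(List(1 -> "1a", 1 -> "1b", 2 -> "2a"))
// grouped == List(1 -> List("1a", "1b"), 2 -> List("2a"))
```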

expert

4 Answers


A year later, Akka Stream Contrib has an AccumulateWhileUnchanged class that does exactly this:

libraryDependencies += "com.typesafe.akka" %% "akka-stream-contrib" % "0.9"

and:

import akka.stream.contrib.AccumulateWhileUnchanged
source.via(new AccumulateWhileUnchanged(_._1))
Yossi

You could also achieve it using statefulMapConcat which will be a bit less expensive given that it does not do any sub-materialisations (but you have to live with the shame of using vars):

source.statefulMapConcat { () =>
  var prevKey: Option[Int] = None
  var acc: List[String] = Nil

  { case (newKey, str) =>
    prevKey match {
      case Some(`newKey`) | None =>
        prevKey = Some(newKey)
        acc = str :: acc
        Nil
      case Some(oldKey) =>
        val accForOldKey = acc.reverse
        prevKey = Some(newKey)
        acc = str :: Nil
        (oldKey -> accForOldKey) :: Nil
    }
  }
}.runForeach(println)
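Since the function passed to statefulMapConcat is plain Scala, you can exercise the same logic without materializing a stream. This sketch (grouper is just a hypothetical wrapper around the same state handling) also shows the one caveat of this approach: the final group stays buffered when the input ends, so in the streams version you would have to concat a sentinel element upstream if you need it flushed on completion:

```scala
// The same stateful function, pulled out so it can be tested without Akka.
def grouper(): ((Int, String)) => List[(Int, List[String])] = {
  var prevKey: Option[Int] = None
  var acc: List[String] = Nil

  { case (newKey, str) =>
    prevKey match {
      case Some(`newKey`) | None =>
        prevKey = Some(newKey)
        acc = str :: acc
        Nil
      case Some(oldKey) =>
        val accForOldKey = acc.reverse
        prevKey = Some(newKey)
        acc = str :: Nil
        (oldKey -> accForOldKey) :: Nil
    }
  }
}

val f = grouper()
val out = List(1 -> "1a", 1 -> "1b", 2 -> "2a").flatMap(f)
// out == List(1 -> List("1a", "1b"))  -- the trailing group for key 2
// is still buffered and is never emitted once the input runs out
```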
johanandren

If your stream data is always sorted, you can leverage it for grouping this way:

val source = Source(List(
  1 -> "1a", 1 -> "1b", 1 -> "1c",
  2 -> "2a", 2 -> "2b",
  3 -> "3a", 3 -> "3b", 3 -> "3c",
  4 -> "4a",
  5 -> "5a", 5 -> "5b", 5 -> "5c",
  6 -> "6a", 6 -> "6b",
  7 -> "7a",
  8 -> "8a", 8 -> "8b",
  9 -> "9a", 9 -> "9b",
))

source
  // group elements by pairs
  // the last one will be not a pair, but a single element
  .sliding(2,1)
  // when both keys in a pair are different, we split the group into a subflow
  .splitAfter(pair => (pair.headOption, pair.lastOption) match {
    case (Some((key1, _)), Some((key2, _))) => key1 != key2
    case _                                  => false // truncated final window never splits
  })
  // then we cut only the first element of the pair 
  // to reconstruct the original stream, but grouped by sorted key
  .mapConcat(_.headOption.toList)
  // then we fold the substream into a single element
  .fold(0 -> List.empty[String]) {
    case ((_, values), (key, value)) => key -> (value +: values)
  }
  // merge it back and dump the results
  .mergeSubstreams
  .runWith(Sink.foreach(println))

At the end you'll get these results:

(1,List(1c, 1b, 1a))
(2,List(2b, 2a))
(3,List(3c, 3b, 3a))
(4,List(4a))
(5,List(5c, 5b, 5a))
(6,List(6b, 6a))
(7,List(7a))
(8,List(8b, 8a))
(9,List(9a))

But compared to groupBy, you're not limited by the number of distinct keys.
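The pairing trick can also be checked on plain Scala collections, which have the same sliding operation (though, unlike the Akka operator, collections only produce a shorter window when the whole input is shorter than the window size). A window whose two keys differ marks a group boundary:

```scala
val items = List(1 -> "1a", 1 -> "1b", 2 -> "2a", 3 -> "3a")

// sliding(2, 1) yields windows of two consecutive elements;
// a window whose two keys differ marks the end of a group.
val boundaries = items.sliding(2, 1).map {
  case List((k1, _), (k2, _)) => k1 != k2
  case _                      => false // singleton window: nothing to compare
}.toList
// boundaries == List(false, true, true)
```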

shutty

I ended up implementing a custom stage:

import akka.stream.{Attributes, FlowShape, Inlet, Outlet}
import akka.stream.stage.{GraphStage, GraphStageLogic, InHandler, OutHandler}

import scala.collection.mutable.ListBuffer

class GroupAfterKeyChangeStage[K, T](keyForItem: T ⇒ K, maxBufferSize: Int) extends GraphStage[FlowShape[T, List[T]]] {

  private val in = Inlet[T]("GroupAfterKeyChangeStage.in")
  private val out = Outlet[List[T]]("GroupAfterKeyChangeStage.out")

  override val shape: FlowShape[T, List[T]] =
    FlowShape(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) with InHandler with OutHandler {

    private val buffer = new ListBuffer[T]
    private var currentKey: Option[K] = None

    // InHandler
    override def onPush(): Unit = {
      val nextItem = grab(in)
      val nextItemKey = keyForItem(nextItem)

      if (currentKey.forall(_ == nextItemKey)) {
        if (currentKey.isEmpty)
          currentKey = Some(nextItemKey)

        if (buffer.size == maxBufferSize)
          failStage(new RuntimeException(s"Maximum buffer size is exceeded on key $nextItemKey"))
        else {
          buffer += nextItem
          pull(in)
        }
      } else {
        val result = buffer.result()
        buffer.clear()
        buffer += nextItem
        currentKey = Some(nextItemKey)
        push(out, result)
      }
    }

    // OutHandler
    override def onPull(): Unit = {
      if (isClosed(in))
        failStage(new RuntimeException("Upstream finished but there was a truncated final frame in the buffer"))
      else
        pull(in)
    }

    // InHandler
    override def onUpstreamFinish(): Unit = {
      // emit installs a temporary handler if `out` has not been pulled yet,
      // so the final buffered group is not lost on completion
      val result = buffer.result()
      if (result.nonEmpty)
        emit(out, result)
      completeStage()
    }

    override def postStop(): Unit = {
      buffer.clear()
    }

    setHandlers(in, out, this)
  }
}

If you don't want to copy-paste it, I've added it to a helper library that I maintain. In order to use it you need to add

Resolver.bintrayRepo("cppexpert", "maven")

to your resolvers and add the following to your dependencies:

"com.walkmind" %% "scala-tricks" % "2.15"

It's implemented in com.walkmind.akkastream.FlowExt as the flow

def groupSortedByKey[K, T](keyForItem: T ⇒ K, maxBufferSize: Int): Flow[T, List[T], NotUsed]

For my example it would be:

source
  .via(FlowExt.groupSortedByKey(_._1, 128))
expert
    Emit already handles the case where out hasn't been pulled when emit is called, by switching to a new behavior, so there is no need to fail the stage for that. – johanandren Jul 06 '18 at 09:12