Spark is to data processing what Akka is to managing data and instruction flow in an application.
TL;DR
Spark and Akka are two different frameworks with different purposes and use cases.
When building applications, distributed or otherwise, you may need to schedule and manage tasks in parallel, for example by using threads. Now imagine a huge application with lots of threads. How complicated would that be?
Typesafe's (now called Lightbend) Akka toolkit lets you build actor systems (a model popularized by Erlang) that give you an abstraction layer over threads.
These actors communicate with each other by passing anything and everything as messages, and can do their work in parallel without blocking other code.
Akka puts a cherry on top by giving you ways to run those actors in a distributed environment.
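To make that concrete, here is a minimal sketch using Akka's classic actor API; the `Greeter` actor and the message it receives are invented for illustration:

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A minimal actor that reacts to String messages.
class Greeter extends Actor {
  def receive = {
    case name: String => println(s"Hello, $name!")
  }
}

object Main extends App {
  val system = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter], "greeter")

  // Messages are sent asynchronously; the sender never blocks.
  greeter ! "Akka"

  system.terminate()
}
```

Notice that the code never touches a thread directly: you send a message and move on, and Akka decides which thread runs the actor and when.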
Apache Spark, on the other hand, is a data processing framework for massive datasets that are too big to handle on a single machine. Spark is built around the RDD (Resilient Distributed Dataset), a distributed, list-like abstraction layer over your traditional data structures, so that operations can be performed on different nodes in parallel.
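Here is a minimal Spark sketch running in local mode; the dataset and app name are made up for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordLengths extends App {
  val conf = new SparkConf().setAppName("rdd-demo").setMaster("local[*]")
  val sc = new SparkContext(conf)

  // Parallelize a local collection into an RDD; Spark partitions it
  // across the available cores (or, on a cluster, across nodes).
  val words = sc.parallelize(Seq("spark", "akka", "actors", "rdd"))

  // Transformations like map run on each partition in parallel;
  // actions like reduce bring a result back to the driver.
  val totalLetters = words.map(_.length).reduce(_ + _)
  println(s"Total letters: $totalLetters")

  sc.stop()
}
```

The same `map`/`reduce` code works unchanged whether the RDD lives on your laptop or is spread across a hundred nodes.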
Fittingly, Spark itself used the Akka toolkit for messaging and job scheduling between nodes (a dependency that was removed in Spark 2.0 in favor of Spark's own RPC layer).