In short: it is to stop the two data-bearing nodes of the replica set from getting into a split-brain situation if they lose contact with each other.
MongoDB replica sets are designed so that, if one or more members go down or lose contact, the remaining members can keep going as long as they hold a majority between them. The majority clause is important: without it, you could end up with the network split in two, the nodes on each side of the partition each believing they were still running the replica set, and the two sides accumulating different data.
So, to avoid the split-brain problem, a node will not act as primary unless it can see an absolute majority of the set's members. For example, suppose you have a replica set of two nodes, like this:

If they lose communication, the outcome is symmetrical:

Each one will reason the same way:
- realise it has lost communication with the other
- assess whether it is possible to keep the replica set going
- realise that 1 node (out of 2) does not constitute a majority
- revert to Secondary mode
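The reasoning above boils down to one check. Here is a minimal sketch of it in Python; the function and variable names are illustrative, not MongoDB internals:

```python
def has_majority(reachable_voters: int, total_voters: int) -> bool:
    """A node may act as primary only if it can see a strict majority of voters."""
    return reachable_voters > total_voters / 2

# Two-node set, network partition: each node can reach only itself.
total = 2
for node in ("A", "B"):
    reachable = 1  # itself only
    print(node, "can stay primary:", has_majority(reachable, total))
# Both nodes get False: 1 out of 2 is not a majority, so both end up secondary.
```

Note that a strict majority is required: exactly half (1 of 2) is not enough, which is precisely why a two-node set cannot survive any partition.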
The difference an Arbiter makes
If there is a third node, then even if the two data-bearing nodes lose contact with each other, one of them will usually still be in contact with the arbiter. That lets the two nodes reach different decisions, keeping the replica set going while avoiding the split-brain problem.
Consider the following example of a 3-node replica set:

Whichever way the network partition goes, one node will still be in contact with the arbiter; for example like this:

Node A will:
- realise it can contact neither node B nor the arbiter
- assess whether it is possible to keep the replica set going
- realise that 1 node (out of 3) does not constitute a majority
- revert to Secondary mode
Whereas node B is able to react differently:
- realise it cannot contact node A, but still has contact with the arbiter
- assess whether it is possible to keep the replica set going
- realise that 2 nodes (out of 3) do constitute a majority
- take over as Primary
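The same majority check explains the asymmetry. A minimal sketch, with illustrative names rather than MongoDB internals:

```python
def has_majority(reachable_voters: int, total_voters: int) -> bool:
    """A node may act as primary only if it can see a strict majority of voters."""
    return reachable_voters > total_voters / 2

total = 3  # node A, node B, and the arbiter all vote

# Node A is cut off from both B and the arbiter: it can see only itself.
print("A can be primary:", has_majority(1, total))  # False -> A steps down

# Node B can still reach the arbiter: it sees 2 of the 3 voters.
print("B can be primary:", has_majority(2, total))  # True -> B takes over
```

The arbiter never holds data or becomes primary itself; its only job is to be the third voter that tips one side of the partition over the majority threshold.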
This also illustrates how you should deploy an arbiter to get that benefit:
- try to put the arbiter on a system independent of both data-bearing nodes, to maximise the chance that it can still communicate with one of them during network problems
- it doesn't store any data, so it doesn't need high-spec hardware
- just one arbiter is enough to break the deadlock; you get no benefit from multiple arbiters
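For concreteness, here is the shape of a replica-set configuration with one arbiter, shown as a Python dict for illustration. The hostnames are placeholders; in the mongo shell you would pass an equivalent document to `rs.initiate()` (or add the arbiter later with `rs.addArb()`). The key detail is the `arbiterOnly` flag on the third member:

```python
# Illustrative replica-set config: two data-bearing nodes plus one arbiter.
# Hostnames are placeholders, not real systems.
replica_set_config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "node-a.example:27017"},
        {"_id": 1, "host": "node-b.example:27017"},
        # arbiterOnly: this member votes in elections but stores no data
        {"_id": 2, "host": "arbiter.example:27017", "arbiterOnly": True},
    ],
}

voters = len(replica_set_config["members"])
arbiters = sum(1 for m in replica_set_config["members"] if m.get("arbiterOnly"))
print(voters, "voters,", arbiters, "arbiter")
```

This gives three voters in total, so whichever side of a partition can still see two of them (one data node plus the arbiter) holds a majority and can keep a primary.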