The results show that as I increase the number of MPI processes from 2 to 4 to 10, the runtime decreases each time, but at 20 processes there is a large increase in runtime. Each node has two 8-core processors, so I want to limit each node to 16 MPI processes. Am I doing this correctly? I suspect the problem is in my sbatch file, especially since the large increase in runtime occurs when I go from using one node to two. Here is my sbatch file (a small placement check is sketched after it):
#!/bin/bash -x
#SBATCH -J scalingstudy
#SBATCH --output=scalingstudy.%j.out
#SBATCH --error=scaling-err.%j.err
#SBATCH --time=03:00:00
#SBATCH --partition=partition_name
#SBATCH --mail-type=end
#SBATCH --mail-user=email@school.edu
#SBATCH -N 2
#SBATCH --ntasks-per-node=16
module load gcc/4.9.1_1
module load openmpi/1.8.1_1
mpic++ enhanced_version.cpp
for size in 10000 50000 100000 500000 1000000; do
    mpirun -np 2 ./a.out "$size"
    mpirun -np 4 ./a.out "$size"
    mpirun -np 10 ./a.out "$size"
    mpirun -np 20 --bind-to core ./a.out "$size"
done
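To check whether the 20-process runs actually spread across both nodes the way I intend, one option would be a minimal standalone program (separate from enhanced_version.cpp; the file name placement_check.cpp below is just a placeholder) that uses standard MPI calls to print which node each rank lands on:

#include <mpi.h>
#include <cstdio>

// Minimal placement check: each rank reports the node it is running on,
// so a 16-process (one-node) run can be compared with a 20-process run.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);
    std::printf("rank %d running on %s\n", rank, host);
    MPI_Finalize();
    return 0;
}

Compiled with mpic++ placement_check.cpp -o placement_check and launched with the same mpirun -np 20 --bind-to core options, the output would show how many ranks end up on each node.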