You are mixing several interrelated concepts that are not alternatives to each other.
Have a look at the Hadoop ecosystem.
Apache MapReduce is a YARN-based (Yet Another Resource Negotiator) system for parallel processing of large data sets. It provides a simple programming API.
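To see what that "simple programming API" looks like, here is a toy word count expressed in the MapReduce style: a map phase that emits key/value pairs, a shuffle that groups by key, and a reduce phase that aggregates. This is a minimal Python sketch of the programming model, not the actual Hadoop Java API:

```python
from collections import defaultdict

def map_phase(record):
    # Emit a (word, 1) pair for every word in one input line.
    return [(word.lower(), 1) for word in record.split()]

def shuffle(pairs):
    # Group values by key -- the framework does this between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Sum the counts emitted for one word.
    return key, sum(values)

def word_count(lines):
    pairs = [pair for line in lines for pair in map_phase(line)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

print(word_count(["big data big pipelines", "big data"]))
# -> {'big': 3, 'data': 2, 'pipelines': 1}
```

In real Hadoop you would implement `map` and `reduce` in a Mapper/Reducer class and the framework handles the shuffle, partitioning, and distribution across the cluster.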
Apache Kafka is a distributed publish-subscribe system for handling large amounts of streaming data. You can treat Kafka as a durable "message store".
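The "message store" idea is what distinguishes Kafka from a classic queue: producers append to a topic's log, and each consumer tracks its own offset, so messages are retained rather than consumed destructively. A toy in-memory sketch of that model (not the real Kafka client API):

```python
class MessageStore:
    """Minimal publish-subscribe log, Kafka-style (single partition, no persistence)."""

    def __init__(self):
        self.topics = {}    # topic name -> append-only list of messages
        self.offsets = {}   # (topic, consumer) -> next offset to read

    def publish(self, topic, message):
        # Producers only ever append; the log is never mutated in place.
        self.topics.setdefault(topic, []).append(message)

    def poll(self, topic, consumer):
        # Each consumer reads from its own offset, independently of others.
        offset = self.offsets.get((topic, consumer), 0)
        messages = self.topics.get(topic, [])[offset:]
        self.offsets[(topic, consumer)] = offset + len(messages)
        return messages

store = MessageStore()
store.publish("web-logs", "GET /index.html")
store.publish("web-logs", "POST /login")
print(store.poll("web-logs", "analytics"))  # both messages
print(store.poll("web-logs", "analytics"))  # [] -- this consumer's offset advanced
print(store.poll("web-logs", "audit"))      # both messages again, independent offset
```

Real Kafka adds partitioning, replication, and disk-backed retention, but the consumer-owned-offset model is the same.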
Apache Flume is purpose-built for the collection, aggregation, and movement of large amounts of log data (in unstructured form) into HDFS. It collects data from sources such as HTTP endpoints and web servers.
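Flume is configured rather than programmed: you wire a source, a channel, and a sink together in a properties file. A sketch of such a config, tailing a web-server log into HDFS (the agent and component names like `agent1` and the paths are illustrative, not from the original question):

```properties
# Hypothetical Flume agent: tail an access log and land the events in HDFS.
agent1.sources  = weblog
agent1.channels = mem
agent1.sinks    = hdfs-out

agent1.sources.weblog.type = exec
agent1.sources.weblog.command = tail -F /var/log/httpd/access_log
agent1.sources.weblog.channels = mem

agent1.channels.mem.type = memory

agent1.sinks.hdfs-out.type = hdfs
agent1.sinks.hdfs-out.hdfs.path = /flume/weblogs/%Y-%m-%d
agent1.sinks.hdfs-out.channel = mem
```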
Once Flume has landed the data in HDFS, it can be converted into structured form with Pig or Hive, and reports can be generated from it. Pig and Hive compile their scripts and queries into a series of MapReduce jobs that process the data and produce the reports.
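For example, with Hive you can lay a table schema over the raw files Flume wrote and then query them with SQL-like syntax; Hive turns the query into MapReduce jobs behind the scenes. The table and column names below are hypothetical:

```sql
-- Impose a schema on the raw log files Flume landed in HDFS.
CREATE EXTERNAL TABLE access_logs (ip STRING, ts STRING, url STRING, status INT)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  LOCATION '/flume/weblogs/';

-- Report: request count per URL, compiled by Hive into MapReduce jobs.
SELECT url, COUNT(*) AS hits
FROM access_logs
GROUP BY url
ORDER BY hits DESC;
```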
Have a look at this article for a better understanding of log-file processing architecture.