My log file is this:

Jan 1 22:54:17 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2612;
Jan 1 22:54:22 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 61.164.41.144; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 5060; s_port: 5069;
Jan 1 22:54:23 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 69.55.245.136; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2970;
Jan 1 22:54:41 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 95.104.65.30; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2565;
Jan 1 22:54:43 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 222.186.24.11; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 2967; s_port: 6000;
Jan 1 22:54:54 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 74.204.108.202; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 137; s_port: 53038;
Jan 1 22:55:10 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 71.111.186.26; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 38548;
Jan 1 23:02:56 accept %LOGSOURCE% >eth1 inzone: External; outzone: Local; rule: 3; rule_uid: {723F81EF-75C9-4CBB-8913-0EBB3686E0F7}; service_id: icmp-proto; ICMP: Echo Request; src: 24.188.22.101; dst: %DSTIP%; proto:

This is the config file that I have run:

input {
  file {
    path => "/etc/logstash/external_noise.log"
    type => "external_noise"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  grok {
    match => [ 'message', '%{CISCOTIMESTAMP:timestamp} %{WORD:action} %{SPACE} %{DATA:logsource} %{DATA:interface} %{GREEDYDATA:kvpairs}' ]
  }
  kv {
    source => "kvpairs"
    field_split => ";"
  }
}

output {
  elasticsearch {
    action => "index"
    host => "localhost"
    index => "noise-%{+dd.MM.YYYY}"
    workers => 1
  }
}

In Kibana, my fields are somewhat different from what I have specified. Also, the timestamp is the time when I started up Logstash with the config file, not the time from the log lines. There is one field that contains:

message: Jan 1 22:54:17 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2612;

I have already filtered it with my grok pattern. Do I need a mutate filter to add fields? Sorry, I'm not an expert at ELK, and I'm interested to find out and learn more.

  • As a start, "% {DATA:logsource}" should be "%{DATA:logsource}". – Alain Collins Aug 18 '15 at 03:35
  • sorry, edited it. Some spacing error – imbadatcoding Aug 18 '15 at 03:37
  • Your corrected pattern will split the 'message' field into several other fields ('timestamp', 'action', 'logsource', 'interface', 'kvpairs'). Is that not what you see? – Alain Collins Aug 18 '15 at 03:58
  • Yes, it has those, but I split the kvpairs using `;` and it doesn't separate them. Do I have to manually specify fields or add keys? – imbadatcoding Aug 18 '15 at 04:00
  • You are only providing `field_split` to kv{}, which is what separates one key/value pair from another. Since your keys are separated from the values by a colon, you would need to specify `value_split` as well. Be sure to read over the kv{} man page! – Alain Collins Aug 18 '15 at 06:20
  • Oh, I'm reading it. So the left side of the colon would be the key and the other side would be the value. Will it be automatically processed by Logstash, where it will store the keys as columns? – imbadatcoding Aug 18 '15 at 09:14
  • logstash will use the key name as the field. – Alain Collins Aug 18 '15 at 16:12

1 Answer

As said in your other question, you need a few adjustments. However, you could have figured it out yourself.

If this is your input (copied from your question):

Jan 1 22:54:17 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2612;
Jan 1 22:54:22 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 61.164.41.144; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 5060; s_port: 5069;
Jan 1 22:54:23 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 69.55.245.136; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2970;
Jan 1 22:54:41 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 95.104.65.30; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2565;
Jan 1 22:54:43 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 222.186.24.11; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 2967; s_port: 6000;
Jan 1 22:54:54 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 74.204.108.202; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 137; s_port: 53038;
Jan 1 22:55:10 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 71.111.186.26; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 38548;
Jan 1 23:02:56 accept %LOGSOURCE% >eth1 inzone: External; outzone: Local; rule: 3; rule_uid: {723F81EF-75C9-4CBB-8913-0EBB3686E0F7}; service_id: icmp-proto; ICMP: Echo Request; src: 24.188.22.101; dst: %DSTIP%; proto:

And this is your filter section:

filter {
    grok {
            match => [ "message", "%{CISCOTIMESTAMP:timestamp} %{WORD:action}%{SPACE}%{DATA:logsource} %{DATA:interface} %{GREEDYDATA:kvpairs}" ]
         }
    kv   {
            source => "kvpairs"
            field_split => ";"
            value_split => ":"
    }
}

Then this is (part of) your output:

     "timestamp" => "Jan 1 23:02:56"
        "action" => "drop",
     "logsource" => "%LOGSOURCE%",
     "interface" => ">eth1",
       "kvpairs" => "rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 74.204.108.202; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 137; s_port: 53038;",
          "rule" => " 7",
     " rule_uid" => " {C1336766-9489-4049-9817-50584D83A245}",
          " src" => " 74.204.108.202",
          " dst" => " %DSTIP%",
        " proto" => " udp",
      " product" => " VPN-1 & FireWall-1",
      " service" => " 137",
       " s_port" => " 53038"

And this works for all your given log lines. I've tested it. (Be sure to delete the spaces around %{SPACE} in your grok pattern.)

If you want to delete the kvpairs field in your output, add a line to your kv filter:

remove_field => "kvpairs"

And if you want to overwrite Logstash's @timestamp, add a date filter:

date {
    match => [ "timestamp", "MMM dd HH:mm:ss" ]
}
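
Note that the sample log lines use a single-digit day with a single space ("Jan 1 22:54:17"). If the pattern above does not match them on your version, you can list several patterns in the same date filter, for example:

date {
    match => [ "timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss", "MMM d HH:mm:ss" ]
}

The first pattern that matches is used, and since the log lines carry no year, Logstash fills one in for you (normally the current year).
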
  • Thank you! Do you have any guides on ELK? I would certainly be happy to learn more about it. – imbadatcoding Aug 18 '15 at 12:10
  • I think there is no perfect guide for the ELK stack. In my opinion, the best way is to mess around with some configurations and simply learn from mistakes. But don't forget the [man pages](https://www.elastic.co/guide/en/logstash/current/logstash-reference.html) ;) – hurb Aug 18 '15 at 17:36
  • Oh. Can I ask one last question? My `@timestamp` is as of today, not Jan 1. Is it because of the index I have set? – imbadatcoding Aug 19 '15 at 02:21
  • Have you added the date filter which I have mentioned in my answer? – hurb Aug 19 '15 at 08:36
  • I did. I also added `target => "@timestamp"` in the date filter. Is that acceptable? I will have to do mapping as well, because the fields are `analyzed`, so strings get broken up on spaces. Did I say that correctly? I'm currently learning! – imbadatcoding Aug 20 '15 at 02:19
  • `target => "@timestamp"` is just fine. With `match => [ "timestamp", "MMM dd HH:mm:ss" ]`it should retrieve the correct date from your events. Be sure that the date filter comes after grok. Just [delete your index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html) in order to reindex your field types. – hurb Aug 24 '15 at 08:09