
I made a simple script that feeds a Cacti graph, and it works, but it's obviously not optimized: I'm doing 8 queries with snmpbulkwalk and picking out single lines with awk (FNR==x), when I could do just 2 queries and use 2 arrays, one for the first OID and the other for the second. I'm asking myself this question because, for Cacti, the faster the script is, the better the poller can handle the data input.

#!/bin/bash
#hwCBQoSIfQueueMatchedBytes
qos_match_2=$(snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2 | awk 'FNR==2 {print $4}')
qos_match_3=$(snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2 | awk 'FNR==3 {print $4}')
qos_match_4=$(snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2 | awk 'FNR==4 {print $4}')
#hwCBQoSIfQueueDiscardedBytes
dis_match_2=$(snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.6 | awk 'FNR==2 {print $4}')
dis_match_3=$(snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.6 | awk 'FNR==3 {print $4}')
dis_match_4=$(snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.6 | awk 'FNR==4 {print $4}')

printf "qos_match_2:$qos_match_2 "
printf "qos_match_3:$qos_match_3 "
printf "qos_match_4:$qos_match_4 "
printf "dis_match_2:$dis_match_2 "
printf "dis_match_3:$dis_match_3 "
printf "dis_match_4:$dis_match_4 "
printf "\n"

A sample output of the 'snmpbulkwalk' command on a host would be:

SNMPv2-SMI::enterprises.2011.5.25.32.1.1.5.1.6.1.2.3.2.1 = Counter64: 0
SNMPv2-SMI::enterprises.2011.5.25.32.1.1.5.1.6.1.2.3.2.2 = Counter64: 431032480
SNMPv2-SMI::enterprises.2011.5.25.32.1.1.5.1.6.1.2.3.2.3 = Counter64: 12456864036
SNMPv2-SMI::enterprises.2011.5.25.32.1.1.5.1.6.1.2.3.2.4 = Counter64: 69821418510
SNMPv2-SMI::enterprises.2011.5.25.32.1.1.5.1.6.1.2.3.2.5 = Counter64: 0

and a sample output of the script would be:

qos_match_2:431741706 qos_match_3:12464887554 qos_match_4:69827660661 dis_match_2:0 dis_match_3:345524650 dis_match_4:0 

Each line corresponds to a QoS class, and we're using only classes 2, 3 and 4.

The OID .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2 corresponds to the QoS matched bytes (hwCBQoSIfQueueMatchedBytes), while .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.6 corresponds to the QoS discarded bytes (hwCBQoSIfQueueDiscardedBytes).

How can I solve this?

In pseudo-code, I would like to do:

#hwCBQoSIfQueueMatchedBytes
qos_match=$(snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2)

printf "qos_match2:" $qos_match[0]
printf "qos_match3:" $qos_match[1]
printf "qos_match4:" $qos_match[2]

and so on


UPDATE 07.05.2020, after luciole75w's suggestion:

#!/bin/bash

#set -x

MYCOMMUNITY="mycommunity"

mapfile -t qos_match < <(
    snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2 |
    awk '2 <= NR && NR <= 4 { print $4 }'
)

mapfile -t dis_match < <(
    snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.6 |
    awk '2 <= NR && NR <= 4 { print $4 }'
)

for n in {2..4}; do
    printf "qos_match_$n:${qos_match[n-2]} "
done
for n in {2..4}; do
    printf "dis_match_$n:${dis_match[n-2]} "
done

printf "\n"

#set +x
Andrea
  • Side note: `printf` is expected to be called with a format string including format specifiers (`%s`, `%d`...) and the variable parts as arguments. If you put shell variables directly in the format string and some of them expand to strings containing `%` or escape sequences, then it will print something wrong, or just fail. – luciole75w May 07 '20 at 18:42
  • Thanks for the reminder, yes I've been using `printf` in an improper way, just to avoid the `echo` newline at the end. Obviously, I could have done it differently. – Andrea May 08 '20 at 15:33
  • For information, in most shells `echo` also supports the `-n` option to skip the newline. – luciole75w May 10 '20 at 17:30
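
A small illustration of the `printf` point from the comments above, using a made-up variable: when the value is passed as an argument to a `%s` specifier it cannot be mistaken for part of the format, and `printf '%s'` (or `echo -n`) gives the same no-newline behaviour.

value='50%s off'          # hypothetical value containing a % sequence
printf '%s\n' "$value"    # safe: the value is an argument, prints "50%s off"
printf "$value\n"         # risky: %s is parsed as a format specifier, prints "50 off"
printf '%s' "$value"      # prints without a trailing newline, like echo -n "$value"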

1 Answer


To save output lines in a bash array variable, you can use mapfile with a process substitution as input. The following command should work for you, noting that I don't know snmpbulkwalk, so I just copied your command as is.

mapfile -t qos_match < <(
    snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2 |
    awk '2 <= NR && NR <= 4 { print $4 }'
)

for n in {2..4}; do
    echo qos_match$n: "${qos_match[n-2]}"
done
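
For what it's worth, here is a self-contained variation of the same mapfile pattern that can be tried without an SNMP device at hand; the printf call merely stands in for snmpbulkwalk, and the values are copied from the sample walk output shown in the question:

mapfile -t qos_match < <(
    printf 'x = Counter64: %s\n' 0 431032480 12456864036 69821418510 0 |
    awk '2 <= NR && NR <= 4 { print $4 }'
)

for n in {2..4}; do
    echo qos_match$n: "${qos_match[n-2]}"
done
# prints:
# qos_match2: 431032480
# qos_match3: 12456864036
# qos_match4: 69821418510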

If your goal is actually the output string (only) and not the arrays, here is an alternative command to get your expected output more efficiently without using shell variables.

awk '
    2 <= FNR && FNR <= 4 {
        printf "%s_match_%d:%s%s", name, FNR, $4, (++n < 6 ? OFS : ORS)
    }
' name=qos <(
    snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.2
) name=dis <(
    snmpbulkwalk -v 2c -c $MYCOMMUNITY $1 .1.3.6.1.4.1.2011.5.25.32.1.1.5.1.6.1.6
)
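
What makes this work is that awk accepts variable assignments interleaved with its input files, and FNR restarts at 1 for each file, so the same line filter applies to both walks while name switches from qos to dis. A minimal self-contained sketch of that pattern, with printf calls standing in for the two snmpbulkwalk commands and values copied from the sample outputs above:

awk '
    2 <= FNR && FNR <= 4 {
        printf "%s_match_%d:%s%s", name, FNR, $4, (++n < 6 ? OFS : ORS)
    }
' name=qos <(
    printf 'x = Counter64: %s\n' 0 431032480 12456864036 69821418510 0
) name=dis <(
    printf 'x = Counter64: %s\n' 0 0 345524650 0 0
)
# prints: qos_match_2:431032480 qos_match_3:12456864036 qos_match_4:69821418510 dis_match_2:0 dis_match_3:345524650 dis_match_4:0
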
luciole75w
  • Thanks, I adapted your work to my context and it works! I didn't know about the 'mapfile' feature and I would have for sure spent much time before finding that feature :) – Andrea May 07 '20 at 08:56
  • Looking at your update, I wonder if the bash script you added is complete or just a part of your processing. If you really want to store the results in shell arrays, as per the title, in order to do further processing after the output string, then `mapfile` is what you need. But if the output string is your final purpose, then you don't need shell variables, awk can easily do the job alone. See my updated answer. – luciole75w May 07 '20 at 18:40
  • Hey thanks for the feedback. Sorry, but I've been busy at work and I didn't have the time to analyze your update. I'll give a look at it as soon as I can. BTW, thanks for your time :) – Andrea May 08 '20 at 12:49
  • Btw I had a few minutes to test your work with awk and it works the same. I believe I'll keep the bash array because it's clearer to me. I've always had trouble with awk, less so with bash syntax. My only aim was to keep the runtime of the script low (measured with the "time" command in bash) because it's less heavy for the Cacti network monitor. I thought about using an array because it's the first data structure that came to my mind to keep the data indexed without doing the same query 3 times. I tried with awk, but obviously I did something wrong and I was nowhere near your example. – Andrea May 08 '20 at 15:27
  • You're welcome. Awk is a powerful multipurpose tool and I think it's worth spending some time to understand the main features; it will be rewarding sooner or later. When you decide to use a tool like awk, in my opinion it's good to see how it could make your whole task easier instead of picking just a little piece and mixing it with other tools. – luciole75w May 10 '20 at 17:28