plexer Posted October 16, 2008

I'm trying to create a plugin for Nagios to get the consumables status for our Konica K5440dl printers. I tried an snmpwalk on one and it only goes so far, so I was hoping to copy the HP 2600 plugin, which uses wget to download the supplies status webpage to a temporary file and then extracts the details from that.

The downloaded page is the printer's "System - Detail - Consumable" screen. Stripped of its styling and navigation menus, the part that matters is this table:

Consumables               Status   Max Life
Black Toner Cartridge     53%      12000
Cyan Toner Cartridge      1%       12000
Magenta Toner Cartridge   100%     12000
Yellow Toner Cartridge    100%     12000
Transfer Roller Unit      Ready    -
Transfer Belt Unit        Ready    -
Waste Toner Bottle        Ready    -

That's what the downloaded page looks like. If anyone could come up with an idea to split that into values for each toner, that would be cool.

Ben
RabbieBurns Posted October 16, 2008

wget http://blah.html && cat blah.html | grep % > output.txt

will put the lines containing the percentages into output.txt (53%, 1%, 100%, 100%, still wrapped in HTML). Not sure how to strip the HTML out though, will have a play.
plexer Posted October 16, 2008

I then need to do a split on those lines, I suppose?

Ben
CyberNerd Posted October 16, 2008

Pipe the output to awk, then use $1, $2, $3 etc. to get each column:

awk '{ print $1, $2 }'

so:

wget http://blah.html && cat blah.html | grep % | awk '{ print $1, $2 }'
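A quick illustration of how awk's default whitespace splitting lands on one of the printer's table rows. The exact markup below is an assumption reconstructed from later posts in this thread, so the field numbers will shift if the real HTML differs:

# Assumed shape of one row from the Konica status page (not captured
# verbatim). Splitting on whitespace, awk sees:
#   $1 = <td
#   $2 = nowrap><font
#   $3 = color="#000000"
#   $4 = face="Arial"
#   $5 = size="2">53%</font></td>
echo '<td nowrap><font color="#000000" face="Arial" size="2">53%</font></td>' \
  | awk '{ print $5 }'
# prints: size="2">53%</font></td>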
RabbieBurns Posted October 16, 2008

wget http://blah.html && cat blah.html | grep % | awk '{ print $1, $2 }'

That just displays the opening HTML tags. Changing it to just $5 gives:

size="2">53%</font></td>
size="2">1%</font></td>
size="2">100%</font></td>
size="2">100%</font></td>

which is nearly there.
RabbieBurns Posted October 16, 2008

wget http://temp.html && cat temp.html | grep % | awk '{ print $5 }' > test.txt && sed -e 's/size="2">/ /' test.txt > test2.txt

strips out the first bit of the HTML, but I can't get sed to remove the latter part. Not sure if it's to do with the / in the closing HTML tag.

53%</font></td>
1%</font></td>
100%</font></td>
100%</font></td>
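A likely explanation, for anyone hitting the same wall: the leftover markup is a closing tag like </font></td>, and its forward slashes collide with sed's usual s/// delimiter. A sketch of two ways around that, assuming that is indeed the trailing markup:

# assuming each line ends in </font></td>
sed -e 's/size="2">//' -e 's/<\/font><\/td>//' test.txt       # escape the slashes
sed -e 's|size="2">||' -e 's|</font></td>||' test.txt         # or use | as the delimiter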
CyberNerd Posted October 16, 2008

That just displays the opening HTML tags.

Sure, I was just giving an example. How about replacing the < with whitespace, then running awk again:

sed 's/</ /g' test.txt | awk '{print $1}'
plexer Posted October 16, 2008

Thanks for all the suggestions guys, looking good. I just want to end up with some variables that contain the cartridge colour and the corresponding %, or just the % as long as I know variable 1 is black, 2 is cyan, and so on.

Ben
tom_newton Posted October 16, 2008

I would suggest learning a bit of perl, it's much easier, then use a few match lines. But try:

wget <url> | grep % | cut -d\> -f3 | cut -d\< -f1

If you really want it done nicely I'll perl it for you and post it here as an example?
plexer Posted October 16, 2008

Yes please Tom, then once I can get the check working I'll add that.

Ben
RabbieBurns Posted October 16, 2008

Cracked it. Probably not the cleverest or neatest way, but it works:

wget http://temp.html && cat temp.html | grep % | awk -F\> '{print $3}' | awk -F\< '{print $1}'

Add > filename.txt to output to a file. It lists one per line: black, cyan, magenta, yellow.

Saving to: `temp.html'
100%[============================================>] 6,315 in 0.04s
2008-10-16 18:26:37 (167 KB/s) - `temp.html' saved [6315/6315]

53%
1%
100%
100%
RabbieBurns Posted October 16, 2008

try: wget <url> | grep % | cut -d\> -f3 | cut -d\< -f1

works when you change it to

wget <url> && cat <file> | grep % | cut -d\> -f3 | cut -d\< -f1
tom_newton Posted October 17, 2008

works when you change it to wget <url> && cat <file> | grep % | cut -d\> -f3 | cut -d\< -f1

Actually what you need is wget -O - <url> | grep % ... no need for temp files. I was assuming plexer had the "file grabby bit" down already.
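Put together, the temp-file-free version would look something like the line below; the URL is a placeholder, not the real printer address:

# -q silences wget's progress output; -O - writes the page to stdout
wget -q -O - http://printer.example.local/consumable.htm | grep % | cut -d\> -f3 | cut -d\< -f1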
tom_newton Posted October 17, 2008

tom@white-elephant:/tmp$ cat plex.txt | ./plex.pl
Black: 53
Cyan: 1
Magenta: 100
Yellow: 100

where "plex.txt" is your HTML. Could be done on the command line but this is neater IMO. Next step: remove wget, and go with LWP::Simple.

plex.pl.gz
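For anyone following along, a minimal sketch of that next step: fetching the page with LWP::Simple and reusing the same match lines as plex.pl. The URL is a placeholder:

#!/usr/bin/perl -w
use strict;
use LWP::Simple;                       # replaces the wget step

# Placeholder URL for the Konica's consumables status page.
my $html = get('http://printer.example.local/consumable.htm')
    or die "Could not fetch the status page\n";

my $colour = "Unknown";
for my $line (split /\n/, $html) {
    # Same two matches as plex.pl: remember the cartridge name, then
    # print it alongside the percentage when that cell turns up.
    $colour = $1 if $line =~ m/(\w+) Toner Cartridge<\/td>/;
    print "$colour: $1\n" if $line =~ m/(\d+)%<\/font><\/td>/;
}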
plexer Posted October 17, 2008

Thanks Tom. I can't seem to open that, it just comes out as gobbledegook when I uncompress it.

Ben
CyberNerd Posted October 17, 2008

I can't seem to open that, it just comes out as gobbledegook when I uncompress it.

It's perl, what did you expect?
tom_newton Posted October 17, 2008

tom@white-elephant:/tmp$ zcat plex.pl.gz
#!/usr/bin/perl -Tw
# use strict;
my $colour = "Polka Dot";
my $level = 0;

# Read the saved HTML line by line (from stdin or a filename argument).
while (<>) {
    # Remember which toner cartridge the current table row belongs to.
    if (m/(\w+) Toner Cartridge<\/td>/) {
        $colour = $1;
    }
    # When the matching percentage cell appears, print "Colour: level".
    if (m/(\d+)%<\/font><\/td>/) {
        $level = $1;
        print "$colour: $level\n";
    }
}
tom@white-elephant:/tmp$
plexer Posted October 17, 2008

That works for me. Now to do the rest of the check.

Ben
plexer Posted October 21, 2008

OK, here's a working script, many thanks to Tom for sorting this out for me.

Ben

k5440.txt
tom_newton Posted October 21, 2008

Looks like you've got it working well Ben - thanks for reposting the perl we worked on via email, hope it helps someone else. Now go get "Learning Perl" (O'Reilly); it's good, it is.
plexer Posted October 21, 2008

Yeah, the only thing I've commented out is trying to get it to return a value as well as printing the output to stdout as it does now.

Ben
tom_newton Posted October 22, 2008

Yeah, the only thing I've commented out is trying to get it to return a value as well as printing the output to stdout as it does now.

Did you not get that to work? TBH I got as far as working out what you were trying to do, but didn't check the code.
plexer Posted October 22, 2008

Nagios takes the single line of text from stdout as the status, but I'd seen some perl plugins that also return a value as well, such as exit $state;. Although it may not be needed in my case, as it works as we have it now. I just need to add in some error checking like you mentioned, because if I run it against a non-K5440 printer, for instance, I presume it will baulk.

Ben
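For reference, the Nagios plugin convention is one status line on stdout plus an exit code: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN. A sketch of how the end of the script might set $state; the thresholds and the hard-coded level are illustrative only, not taken from the posted k5440 plugin:

#!/usr/bin/perl -w
use strict;

my %EXIT  = ( OK => 0, WARNING => 1, CRITICAL => 2, UNKNOWN => 3 );
my $level = 53;                       # would come from the parsed page in the real check
my $state = 'OK';
$state = 'WARNING'  if $level < 15;   # made-up warning threshold
$state = 'CRITICAL' if $level < 5;    # made-up critical threshold

print "TONER $state - black at $level%\n";   # the one line Nagios displays
exit $EXIT{$state};                          # the exit code Nagios acts on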
tom_newton Posted October 22, 2008

Would have thought so - though you might adapt it to detect the type of printer and find the correct URL automatically.
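One hedged way to do that; the page fetched and the model string checked for are guesses, not captured from a real K5440:

#!/usr/bin/perl -w
use strict;
use LWP::Simple;

# Guesswork sketch: check the printer's front page for a model string
# before parsing, so the plugin bails out cleanly on other printers.
my $host = shift or die "usage: $0 <printer-host>\n";
my $page = get("http://$host/") || '';

unless ($page =~ /5440/) {
    print "TONER UNKNOWN - $host does not look like a K5440\n";
    exit 3;                           # UNKNOWN in Nagios terms
}
# ...otherwise fetch and parse the consumables page as before.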