Jeff.Woeber

LogicMonitor Staff
  • Content Count

    18

Community Reputation

0 Neutral

About Jeff.Woeber

  • Rank
    Community Whiz Kid


  1. WARNING - This method will delete all existing instances, along with all historical data, on any devices currently being monitored. DO NOT modify the original SNMP64_if datasource. Instead, clone the SNMP64_if datasource or download my copy from the LogicMonitor Exchange (locator RX4ZAG). It's recommended to run both the original and the modified SNMP64_if until historical data has accumulated.

I needed to dynamically group interface instances based on the interface description. In this case, the description held details about the interface, such as DIA, UPLS, or CID. It is possible to group instances with a RegEx using this method; however, that only supports the instance Name or Value - it is not possible to group by the instance description. Then a thought occurred to me: the ##WILDVALUE## would remain the same if the name and description were simply switched. Instead of instances named "GigabitEthernet2/1/4", they would be named "Level3 100M DIA-Pri | CID BBQG24909", which allows a grouping expression like CID=".*CID.*" to dynamically group them. Everything else works as before.

I cloned and modified the SNMP64_if datasource and switched the name OID (Name) with the description OID (Description). You can download a copy of this datasource from the LogicMonitor Exchange using the Locator ID RX4ZAG. I was then able to use a regular expression to dynamically group the instances. This creates three groups - CID, DIA, and UPLS - that collect interfaces that include that string in the interface name (which is now the description). A small sketch of the matching idea follows this item.
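Outside of LogicMonitor, a minimal Python sketch of the same matching idea (the instance names below are made up for illustration) shows how the ".*CID.*"-style patterns pick up instances once the description is used as the instance name:

# Illustrative sketch only (not the datasource): how ".*CID.*"-style patterns
# pick up instances once the interface description is used as the name.
# The instance names below are made-up examples.
import re

instances = [
    "Level3 100M DIA-Pri | CID BBQG24909",
    "GigabitEthernet2/1/4",
    "Zayo 1G UPLS-Sec | CID ZY10234",
]

groups = {
    "CID": re.compile(r".*CID.*"),
    "DIA": re.compile(r".*DIA.*"),
    "UPLS": re.compile(r".*UPLS.*"),
}

for name in instances:
    for group, pattern in groups.items():
        if pattern.match(name):
            print(group + ": " + name)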
  2. This datasource monitors higher-end graphics cards via Perfmon counters, available on GitHub from Alexey Kamenv. It generates great graphs for a "Gaming Machine Dashboard" so you can visualize your complete hardware overkill. LM Exchange locator: HMXG2X
  3. I set up a Raspberry Pi with a Sense HAT so I can track the internal temperature and humidity of my living room in LogicMonitor. The data is accessible by running a Python script on the Raspberry Pi itself, but I didn't want to install a collector on that device. The solution I decided on was using SNMP extend. This guide can be used as a template for similar LogicMonitor tasks. I used Steve Francis's "How to teach an old snmpd new tricks" blog post as a guide.

The first step was creating the Python script. This script reads the humidity and temperature from the Sense HAT, rounds the results, and prints the output to STDOUT:

from sense_hat import SenseHat
sense = SenseHat()
sense.clear()
hum = sense.get_humidity()
hum = round(hum,1)
hum = str(hum)
print hum
temp = sense.get_temperature()
temp = ((temp/5)*9)+32
temp = round(temp,1)
temp = str(temp)
print temp

Next I added a line to the /etc/snmp/snmpd.conf file to extend SNMP. It tells the SNMP daemon to run the Python script and include the output in the SNMP extend OIDs:

extend lm-temp /usr/bin/python2.7 /home/pi/scripts/weather/tempature.py

After a restart of the SNMP daemon, you can see the actual output by walking .1.3.6.1.4.1.8072.1.3.2:

$ !snmpwalk 10.73.42.116 .1.3.6.1.4.1.8072.1.3.2
Walking OID .1.3.6.1.4.1.8072.1.3.2 from host=10.73.42.116, version=v2c, port=161, timeout=3 seconds:
1.0 => 1
2.1.2.7.108.109.45.116.101.109.112 => /usr/bin/python2.7
2.1.20.7.108.109.45.116.101.109.112 => 4
2.1.21.7.108.109.45.116.101.109.112 => 1
2.1.3.7.108.109.45.116.101.109.112 => /home/pi/scripts/weather/tempature.py
2.1.4.7.108.109.45.116.101.109.112 =>
2.1.5.7.108.109.45.116.101.109.112 => 5
2.1.6.7.108.109.45.116.101.109.112 => 1
2.1.7.7.108.109.45.116.101.109.112 => 1
3.1.1.7.108.109.45.116.101.109.112 => 47.3
3.1.2.7.108.109.45.116.101.109.112 => 47.3 79.0
3.1.3.7.108.109.45.116.101.109.112 => 2
3.1.4.7.108.109.45.116.101.109.112 => 0
4.1.2.7.108.109.45.116.101.109.112.1 => 47.3
4.1.2.7.108.109.45.116.101.109.112.2 => 79.0

Those last two OIDs are the humidity and temperature output from my Python script. Putting everything together: the OIDs for the humidity and temperature reported by my Raspberry Pi are .1.3.6.1.4.1.8072.1.3.2.4.1.2.7.108.109.45.116.101.109.112.1 and .1.3.6.1.4.1.8072.1.3.2.4.1.2.7.108.109.45.116.101.109.112.2 respectively, and can be included in an SNMP datasource as shown below.

I did run into some trouble with the SNMP daemon accessing the hardware. Daemons usually run under self-named unprivileged users, and I had to add the snmp user to the input and gpio groups so the daemon had access to the hardware. This was easy enough with the commands "sudo addgroup snmp gpio" and "sudo addgroup snmp input".
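If your Pi image only ships Python 3, a minimal Python 3 variant of the same script (a sketch, assuming the standard sense_hat library and the same Fahrenheit conversion) would be:

# Minimal Python 3 variant of the Sense HAT script (assumes the sense_hat
# library is installed); prints humidity then temperature in Fahrenheit,
# one value per line, for the snmpd extend to pick up.
from sense_hat import SenseHat

sense = SenseHat()
sense.clear()

hum = round(sense.get_humidity(), 1)
print(hum)

temp_c = sense.get_temperature()
temp_f = round((temp_c * 9 / 5) + 32, 1)
print(temp_f)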
  4. This datasource uses the Weather Underground API to monitor weather conditions for multiple locations. It requires a free Weather Underground API key, which should be added as the device property weather.api.key. The datasource Exchange locator is KT6NLM. I have an example set up to monitor Austin, TX, Jupiter, FL, and Santa Barbara, CA.

To get started, add a new device under Expert mode and, in the "Link to a URL" box, add api.wunderground.com. Once the URL has been added to LogicMonitor, click on "Add Monitored Instance". Add the weatherdeatails datasource and an instance of your choice; in my example I used TX/austin. You can add other locations under the Instances tab.

For a better understanding of what the instance values should be, a complete API URL for Santa Barbara is http://api.wunderground.com/api/Your_Key/conditions/q/ca/santa_barbara.json

The datasource uses an HTTP GET appended to the host URL:

GET /api/##weather.api.key##/conditions/q/##WILDVALUE##.json HTTP/1.1
Host:##HOSTNAME##

The instance value is just the STATE/city part of the API URL. A quick way to sanity-check an instance value outside of LogicMonitor is shown below.
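A minimal Python sketch for checking an API key and a STATE/city instance value outside of LogicMonitor; YOUR_KEY and the location string are placeholders:

# Minimal sketch for testing a Weather Underground key and instance value.
# "YOUR_KEY" is a placeholder for your own key; the STATE/city string is the
# same value used as the LogicMonitor instance.
import requests

api_key = "YOUR_KEY"            # placeholder
instance = "ca/santa_barbara"   # STATE/city, same as the instance value

url = "http://api.wunderground.com/api/" + api_key + "/conditions/q/" + instance + ".json"
response = requests.get(url, timeout=10)

# A 200 status and a JSON body indicate the key and location are valid.
print(response.status_code)
print(list(response.json().keys()))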
  5. I used this trick today to monitor future alert rule changes for a customer. Works great and they love it!
  6. This is a PropertySource which runs with Active Discovery and adds the property auto.lmstatus to the device properties with the current hostStatus value. It does this using the REST API, and it works great with dynamic grouping. For example, if you wanted to know which devices in your portal are currently in a dead status, you could create a dynamic group with the AppliesTo of auto.lmstatus=="dead". One advantage of using a PropertySource is that if the device comes back online, Active Discovery will immediately run and change the property to "normal", removing the device from the group. A copy of the PropertySource is at the bottom of the post. Let's walk through the Groovy script.

Define the account information. This is pulled from device properties; I recommend setting these at the group level so they are inherited by the devices. An example of the properties would be:

api.user = 5kCPqLgY4DGYP27uw2hc
api.pass = ye[$3y7)_4g6L6uH2TC72k{V6HBUf]Ys+9!vB)[9   (*note: any property with .pass in the name will not have visible data)
api.account = lmjeffwoeber

//Account Info
def accessId = hostProps.get("api.user");
def accessKey = hostProps.get("api.pass");
def account = hostProps.get("api.account");

Define the query. We just need the hostStatus for the device the script is running on:

def queryParams = '?fields=hostStatus&filter=displayName:'+hostName;
def resourcePath = "/device/devices"

Next we build the URL for the API:

def url = "https://" + account + ".logicmonitor.com" + "/santaba/rest" + resourcePath + queryParams;

This next part builds the security and runs the API call. It can pretty much be copy/pasted into any Groovy script that uses the REST API:

//get current time
epoch = System.currentTimeMillis();
//calculate signature
requestVars = "GET" + epoch + resourcePath;
hmac = Mac.getInstance("HmacSHA256");
secret = new SecretKeySpec(accessKey.getBytes(), "HmacSHA256");
hmac.init(secret);
hmac_signed = Hex.encodeHexString(hmac.doFinal(requestVars.getBytes()));
signature = hmac_signed.bytes.encodeBase64();
// HTTP Get
CloseableHttpClient httpclient = HttpClients.createDefault();
httpGet = new HttpGet(url);
httpGet.addHeader("Authorization" , "LMv1 " + accessId + ":" + signature + ":" + epoch);
response = httpclient.execute(httpGet);
responseBody = EntityUtils.toString(response.getEntity());
code = response.getStatusLine().getStatusCode();

The API will return a JSON payload. We use the Groovy JsonSlurper to parse the payload into response_obj, where we can use the data:

// use groovy slurper
json_slurper = new JsonSlurper();
response_obj = json_slurper.parseText(responseBody);

The JSON will look like:

data=[total:1, items:[[hostStatus:dead]]]

We can use the value with response_obj.data.items[0].hostStatus.value. Now we print the key=value pair for the PropertySource:

//print output
println "LMStatus=" + response_obj.data.items[0].hostStatus.value;
httpclient.close();

This will add the property auto.lmstatus to the device. Finally, we return 0 to indicate success:

return (0);

This is how the PropertySource will appear on the device. Lastly, we can create dynamic groups based on the hostStatus. For example, use an AppliesTo of auto.lmstatus=="dead" to group all of the dead devices into one group, or auto.lmstatus=~"dead" to also include Dead-Collector.

LMStatus PropertySource

import org.apache.http.HttpEntity
import org.apache.http.client.methods.CloseableHttpResponse
import org.apache.http.client.methods.HttpGet
import org.apache.http.impl.client.CloseableHttpClient
import org.apache.http.impl.client.HttpClients
import org.apache.http.util.EntityUtils
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.codec.binary.Hex;
import com.santaba.agent.groovyapi.http.*;
import groovy.json.JsonSlurper;

def hostName = hostProps.get("system.displayname");

//Account Info
def accessId = hostProps.get("api.user");
def accessKey = hostProps.get("api.pass");
def account = hostProps.get("api.account");

data = ''
def queryParams = '?fields=hostStatus&filter=displayName:'+hostName;
def resourcePath = "/device/devices"
def url = "https://" + account + ".logicmonitor.com" + "/santaba/rest" + resourcePath + queryParams;

//get current time
epoch = System.currentTimeMillis();
//calculate signature
requestVars = "GET" + epoch + resourcePath;
hmac = Mac.getInstance("HmacSHA256");
secret = new SecretKeySpec(accessKey.getBytes(), "HmacSHA256");
hmac.init(secret);
hmac_signed = Hex.encodeHexString(hmac.doFinal(requestVars.getBytes()));
signature = hmac_signed.bytes.encodeBase64();

// HTTP Get
CloseableHttpClient httpclient = HttpClients.createDefault();
httpGet = new HttpGet(url);
httpGet.addHeader("Authorization" , "LMv1 " + accessId + ":" + signature + ":" + epoch);
response = httpclient.execute(httpGet);
responseBody = EntityUtils.toString(response.getEntity());
code = response.getStatusLine().getStatusCode();

// use groovy slurper
json_slurper = new JsonSlurper();
response_obj = json_slurper.parseText(responseBody);

//print output
println "LMStatus=" + response_obj.data.items[0].hostStatus.value;
httpclient.close();

return (0);
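For reference, a rough Python 3 equivalent of the same hostStatus query (a sketch for testing from a workstation, not a PropertySource); the access ID, access key, and device display name below are placeholders:

# Rough Python 3 equivalent of the PropertySource's REST call, for testing the
# query outside the collector. The credentials and display name are placeholders;
# the resource path, query parameters, and LMv1 signature match the Groovy above.
import base64
import hashlib
import hmac
import time

import requests

access_id = "API_ACCESS_ID"       # placeholder (api.user)
access_key = "API_ACCESS_KEY"     # placeholder (api.pass)
account = "lmjeffwoeber"          # portal name (api.account)
display_name = "SomeDeviceName"   # placeholder device display name

http_verb = "GET"
resource_path = "/device/devices"
query_params = "?fields=hostStatus&filter=displayName:" + display_name
url = "https://" + account + ".logicmonitor.com/santaba/rest" + resource_path + query_params

epoch = str(int(time.time() * 1000))
request_vars = http_verb + epoch + resource_path   # no request body for a GET
digest = hmac.new(access_key.encode(), msg=request_vars.encode(), digestmod=hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()

headers = {"Content-Type": "application/json",
           "Authorization": "LMv1 " + access_id + ":" + signature + ":" + epoch}

response = requests.get(url, headers=headers)
print("LMStatus=" + response.json()["data"]["items"][0]["hostStatus"])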
  7. Amazon’s Alexa is a fantastic tool for working with APIs, including LogicMonitor’s. The Amazon developer tools make this remarkably easy to script, but it can be a bit of a challenge the first time through. Let’s change that and walk through setting up an Alexa app that will acknowledge LogicMonitor alerts through an API.

Credit: I found this YouTube tutorial from Jordan Leigh very informative and used it as a base for building this app.

To my understanding, an Alexa application has two parts:
* The Skill, which includes the interaction model - the intents (functions) the app will execute - and the Utterances, which are the words and phrases that will trigger those functions.
* The function itself that will build and call the API.

Once done, you load the LogicMonitor Alexa skill by saying "Alexa, load LogicMonitor", then perform tasks by saying the pre-programmed utterances.

Let's build a LogicMonitor Alexa Skill! This example uses the RPC API for simplicity and to illustrate how Alexa Skills are configured. The RPC API will eventually be deprecated; for long-term functionality it is recommended to use the REST API. *By request, and for the brave, a walkthrough of calling the REST API from Python is below the RPC JavaScript skill function code.

Let’s start by creating the LogicMonitor Alexa Skill. Log in to https://developer.amazon.com (create an account if needed) and click the Alexa link at the top. Next press the “Get Started” button under “Alexa Skill Kit”. For Skill Information I used the below values:

Skill type = Custom Interaction Model
Name = LogicMonitor
Invocation Name = LogicMonitor

Next is the Interaction Model, which sets up the intents, or what I would call functions. This is in JSON format. We will create two intents for this walkthrough: one to acknowledge alerts and a second to acknowledge events, since these are two different API calls. Let’s take a look at one in detail:

"intent": "acknowledgeAlert",
"slots":[
  {
    "name": "AlertID",
    "type": "AMAZON.NUMBER"

"intent": "acknowledgeAlert" names the intent. acknowledgeAlert will need to be defined in the function we create later, as well as in the utterances, so Alexa knows when to execute this intent. Slots are variables - the strings we can pass to our function to include in our API call. They need to be a pre-defined type like AMAZON.NUMBER or a custom slot type. This walkthrough passes alert numbers, so we use the AMAZON.NUMBER slot. More information can be found at https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/built-in-intent-ref/slot-type-reference

The full attached intent schema can be copy/pasted into your intent schema.

Next is the Sample Utterances. This is the fun part where we set up the voice commands. acknowledgeAlert is the intent to be executed, and the utterance has to start with the intent. The utterance also has to include the Invocation Name we set up previously, which is “LogicMonitor”. The rest are common phrases the end user may use to invoke the intent, with the slot passed in {}. You can set up as many utterances for an intent as needed. An example of the spoken utterance would be “Alexa, acknowledge LogicMonitor alert 64532”. This should send an API call acknowledging LMD64532.

Next is the Endpoint Configuration where we call the actual function. Let's pause here and create the function. This walkthrough will create a JavaScript function on Amazon's Lambda service.
Sign in to https://aws.amazon.com/ (create an account if needed), search for “Lambda”, and press “Create a Lambda Function”. For the Select Blueprint step, add “Alexa” to the filter and select “Blank Function”. Select “Alexa Skill Kit” for Configured Triggers. For Configure Function, let's name it “LogicMonitor” and set the runtime to Node.js. You can copy/paste the attached LogicMonitor function into the window, but let's take a closer look. This is basically a JavaScript handler with a case statement for “LaunchRequest” (a default intent) and “IntentRequest”, which calls our two intents. Let's walk through our “acknowledgeAlert” intent:

case "acknowledgeAlert":
  var AlertIDstr = (event.request.intent.slots.AlertID.value);
  var endpoint = "https://lmjeffwoeber.logicmonitor.com/santaba/rpc/confirmAlerts?c=lmjeffwoeber&u=api&p=*******&ids=LMD"+AlertIDstr;
  var body = "";
  https.get(endpoint, (response) => {
    response.on('data', (chunk) => { body += chunk });
    response.on('end', () => {
      var data = JSON.parse(body);
      var status = data.status;
      var errmsg = data.errmsg
      if (status != 200) {
        context.succeed(
          generateResponse(
            buildSpeechletResponse("Failed to acknowledge alert L M D"+AlertIDstr+" with error "+ errmsg, true)
          )
        )
      } else {
        context.succeed(
          generateResponse(
            buildSpeechletResponse("Alert L M D"+AlertIDstr+" has been acknowledged", true)
          )
        )
      }
    });
  });
  break;

case "acknowledgeAlert": - this is where we tie the interaction model, utterances, and function together.

var AlertIDstr = (event.request.intent.slots.AlertID.value); - this is how we get the AlertID slot (variable), defined as AlertIDstr, inside the JavaScript.

var endpoint = "https://lmjeffwoeber.logicmonitor.com/santaba/rpc/confirmAlerts?... - this is how we define our API call. Note the end of the string, &ids=LMD"+AlertIDstr; - I am hardcoding the LMD into the string and only passing AlertIDstr, which will be a number, i.e. 64521. You’ll of course need to change the URL to match your portal. I’m using the RPC API; more information can be read at https://www.logicmonitor.com/support/rpc-api-developers-guide/manage-alerts/acknowledge-alerts/

https.get sends the request, and var data = JSON.parse(body); loads the JSON returned from LogicMonitor. We need to read the JSON to confirm the API call was a success or whether there was an error. A status of 200 means success, so there is a simple if (status != 200) check to verify success and determine the appropriate buildSpeechletResponse return. If the status is not 200, Alexa will say something like “Failed to acknowledge alert LMD65432 with error [access denied]”. If the status is 200, Alexa will say “Alert LMD65432 has been acknowledged.”

Save the function and use the action on the main page to get the ARN string. Now, back in the Skill, we can add our function (Endpoint) using the ARN string and continue.

Next is the testing, and this is another fun part where we put everything together. Find an active alert in your LogicMonitor portal and type in the utterance according to the Intent Schema. I am going to use LMD2033, so in my case it will be “acknowledge logicmonitor alert 2033”; press “Ask LogicMonitor”. Go ahead and press the “Listen” button to hear Alexa’s response. That’s it - now it’s possible to acknowledge LogicMonitor events and alerts using Alexa voice commands.
Intent Schema

{
  "intents":[
    {
      "intent": "acknowledgeAlert",
      "slots":[
        { "name": "AlertID", "type": "AMAZON.NUMBER" }
      ]
    },
    {
      "intent": "acknowlegeEvent",
      "slots":[
        { "name": "EventID", "type": "AMAZON.NUMBER" }
      ]
    }
  ]
}

LogicMonitor JavaScript Function

var https = require('https');

exports.handler = (event, context) => {
  try {
    if (event.session.new) {
      // New Session
      console.log("NEW SESSION");
    }
    switch (event.request.type) {
      case "LaunchRequest":
        // Launch Request
        console.log(`LAUNCH REQUEST`);
        context.succeed(
          generateResponse(
            buildSpeechletResponse("Welcome to LogicMonitor, I can acknowlege alerts and Events", true),
            {}
          )
        );
        break;
      case "IntentRequest":
        // Intent Request
        console.log(`INTENT REQUEST`);
        switch (event.request.intent.name) {
          case "acknowledgeAlert":
            var AlertIDstr = (event.request.intent.slots.AlertID.value);
            var endpoint = "https://lmjeffwoeber.logicmonitor.com/santaba/rpc/confirmAlerts?c=lmjeffwoeber&u=api&p=changeme&ids=LMD"+AlertIDstr;
            var body = "";
            https.get(endpoint, (response) => {
              response.on('data', (chunk) => { body += chunk });
              response.on('end', () => {
                var data = JSON.parse(body);
                var status = data.status;
                var errmsg = data.errmsg
                if (status != 200) {
                  context.succeed(
                    generateResponse(
                      buildSpeechletResponse("Failed to acknowlege alert L M D"+AlertIDstr+" with error "+ errmsg, true)
                    )
                  )
                } else {
                  context.succeed(
                    generateResponse(
                      buildSpeechletResponse("Alert L M D"+AlertIDstr+" has been acknowledged", true)
                    )
                  )
                }
              });
            });
            break;
          case "acknowlegeEvent":
            var EventIDstr = (event.request.intent.slots.EventID.value);
            var evendpoint = "https://lmjeffwoeber.logicmonitor.com/santaba/rpc/confirmAlerts?c=lmjeffwoeber&u=api&p=changeme&eids=LME"+EventIDstr;
            var evbody = "";
            https.get(evendpoint, (response) => {
              response.on('data', (chunk) => { evbody += chunk });
              response.on('end', () => {
                var data = JSON.parse(evbody);
                var status = data.status;
                var errmsg = data.errmsg;
                if (status != 200) {
                  context.succeed(
                    generateResponse(
                      buildSpeechletResponse("Failed to acknowlege event L M E"+EventIDstr+" with error "+ errmsg, true)
                    )
                  );
                } else {
                  context.succeed(
                    generateResponse(
                      buildSpeechletResponse("Event L M E"+EventIDstr+" has been acknowledged", true)
                    )
                  );
                }
              });
            });
            break;
        }
        break;
      case "SessionEndedRequest":
        // Session Ended Request
        console.log(`SESSION ENDED REQUEST`);
        break;
      default:
        context.fail(`INVALID REQUEST TYPE: ${event.request.type}`);
    }
  } catch (error) {
    context.fail(`Exception: ${error}`)
  }
};

// Helpers
buildSpeechletResponse = (outputText, shouldEndSession) => {
  return {
    outputSpeech: {
      type: "PlainText",
      text: outputText
    },
    shouldEndSession: shouldEndSession
  };
};

generateResponse = (speechletResponse, sessionAttributes) => {
  return {
    version: "1.0",
    sessionAttributes: sessionAttributes,
    response: speechletResponse
  };
};

You should now have a working RPC JavaScript Alexa LogicMonitor Skill.

For the brave: OK, that was the easy part - let's take a look at running Python and the LogicMonitor REST API. The difficult part with the REST API is the authentication, which requires importing modules. This isn't native to Amazon Lambda, so we have to build our own environment. I used a terminal in an Ubuntu environment for easy access to the pip installer. I also used this excellent "minimalist SDK for developing skills for the Amazon Echo's ASK - Alexa Skills Kit using Amazon Web Services's Python Lambda Functions", written by Anjishnu Kumar. I'm not going to walk through it step-by-step, as Anjishnu's readme does an excellent job.
I will share the pain points I had when setting this up, mainly getting the modules installed. See Amazon's docs, plus the walkthrough in Anjishnu's readme file. The cliff-notes version: you need to use pip to install the modules into the script directory. Once everything is installed, we ZIP the directory and upload it to Lambda.

First, make a directory in your terminal window with mkdir logicmonitor (or whatever you would like to call the skill) and go through Step 1 in the readme file. Next we install the modules. The two we have to worry about are:

from ask import alexa
import requests

The first is just the Alexa SDK, and as in Anjishnu's readme the command is:

pip install ask-alexa-pykit --target logicmonitor

Next is requests, and the command is:

pip install requests --target logicmonitor

You will see the ask and requests folders in the directory. You can skip Steps 2 and 3, as we already have an intent schema from the previous example. Step 4 is the creation of lambda_function.py; you can use mine (shown below) as a template. Save the file as lambda_function.py in the logicmonitor folder. Step 5 is compressing the logicmonitor folder, or as in the readme, ask-lambda.zip - either name is fine. Step 6 is creating the Lambda function. WATCH THE RUNTIME - I lost an hour because Amazon switched Python 2.7 back to Node.js, and I noticed it kept switching back when I re-uploaded my code. The other gotcha is the handler: lambda_function.lambda_handler expects a Python script named lambda_function, just as logicmonitor.lambda_handler would expect a script named logicmonitor. Hopefully this is enough information to write your own LogicMonitor Alexa Skill! -Enjoy!

from ask import alexa
import requests
import hashlib
import base64
import time
import hmac


def lambda_handler(request_obj, context=None):
    '''
    This is the main function to enter into this code.
    If you are hosting this code on AWS Lambda, this should be the entry point.
    Otherwise your server can hit this code as long as you remember that the
    input 'request_obj' is a JSON request converted into a nested python object.
    '''
    metadata = {'user_name': 'SomeRandomDude'}  # add your own metadata to the request using key value pairs
    '''
    inject user relevant metadata into the request if you want to, here.
    e.g. Something like:
    metadata = {'user_name' : some_database.query_user_name(request.get_user_id())}
    Then in the handler function you can do something like -
    return alexa.create_response('Hello there {}!'.format(request.metadata['user_name']))
    '''
    return alexa.route_request(request_obj, metadata)


@alexa.default
def default_handler(request):
    """ The default handler gets invoked if no handler is set for a request type """
    return alexa.respond('Just ask').with_card('Hello World')


@alexa.request("LaunchRequest")
def launch_request_handler(request):
    ''' Handler for LaunchRequest '''
    return alexa.create_response(message="Hello Welcome to Logic Monitor, I can Acknowledge alerts and events!")


@alexa.request("SessionEndedRequest")
def session_ended_request_handler(request):
    return alexa.create_response(message="Goodbye!")


@alexa.intent('acknowledgeAlert')
def acknowlege_alert(request):
    alertid = request.slots["AlertID"]
    alertidstr = str(alertid)

    # Account Info
    AccessId = '4R9CX5fQf3MrKvjLS59T'
    AccessKey = 'ye[$3y7)_4g6L6uH2TC72k{V6HBUf]Ys+9!vB)[9'
    Company = 'lmjeffwoeber'

    # Request Info
    httpVerb = 'POST'
    resourcePath = '/alert/alerts/LMD'+alertidstr+'/ack'
    queryParams = ''
    data = '{"ackComment":"acked with Alexa"}'

    # Construct URL
    url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + queryParams

    # Get current time in milliseconds
    epoch = str(int(time.time() * 1000))

    # Concatenate Request details
    requestVars = httpVerb + epoch + data + resourcePath

    # Construct signature
    signature = base64.b64encode(hmac.new(AccessKey, msg=requestVars, digestmod=hashlib.sha256).hexdigest())

    # Construct headers
    auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
    headers = {'Content-Type': 'application/json', 'Authorization': auth}

    # Make request
    response = requests.post(url, data=data, headers=headers)
    if response.status_code != 200:
        return alexa.create_response(message="I'm sorry, I was unable to acknowledge Alert L M D"+alertidstr)
    else:
        return alexa.create_response(message="Alert acknowledged!")


@alexa.intent('acknowlegeEvent')
def acknowlege_event(request):
    eventid = request.slots["EventID"]
    eventidstr = str(eventid)

    # Account Info
    AccessId = '4R9CX5fQf3MrKvjLS59T'
    AccessKey = 'ye[$3y7)_4g6L6uH2TC72k{V6HBUf]Ys+9!vB)[9'
    Company = 'lmjeffwoeber'

    # Request Info
    httpVerb = 'POST'
    resourcePath = '/alert/alerts/LME'+eventidstr+'/ack'
    queryParams = ''
    data = '{"ackComment":"acked with alexa"}'

    # Construct URL
    url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + queryParams

    # Get current time in milliseconds
    epoch = str(int(time.time() * 1000))

    # Concatenate Request details
    requestVars = httpVerb + epoch + data + resourcePath

    # Construct signature
    signature = base64.b64encode(hmac.new(AccessKey, msg=requestVars, digestmod=hashlib.sha256).hexdigest())

    # Construct headers
    auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
    headers = {'Content-Type': 'application/json', 'Authorization': auth}

    # Make request
    response = requests.post(url, data=data, headers=headers)
    if response.status_code != 200:
        return alexa.create_response(message="I'm sorry, I was unable to acknowledge Event L M E"+eventidstr)
    else:
        return alexa.create_response(message="Event acknowledged!")
  8. Let’s walk through the attached Python script that utilizes the LogicMonitor API. This script automates a collector install for Linux: it will create the collector object, download the collector installer, give the installer executable permissions, and finally install and register the collector. Full documentation, including examples for the LogicMonitor REST API, can be found at the link below:

https://www.logicmonitor.com/support/rest-api-developers-guide/

We have examples for most of the functions in the API on our support site. Using these examples, it’s possible to copy/paste, making only minor changes, to create a script for your needs. This example will explain how to do this, as well as how to combine multiple example scripts together.

Declaring the environment and classes - this can mostly be copied/pasted from the example templates:

#!/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac
import commands

This is the authentication setup. It’s easier than it looks. To get the AccessId and AccessKey, open the console and create a new user (or use an existing one); under API Tokens, press the + button to generate the tokens. The company name is the portal, i.e. mine is ‘lmjeffwoeber.logicmonitor.com’, so my company is 'lmjeffwoeber':

#Account Info
AccessId ='bB29823sN4MdwC9B3k3i'
AccessKey ='P-6D)xH_76Q4+AS2-T67hZ%N3|Nc2]6LC4U647%5'
Company = 'lmjeffwoeber'

Now that we have our authentication set up, we need to gather the data to put into our API call URL and create the collector object on the portal. This is from the example on the Add a Collector page:

https://www.logicmonitor.com/support/rest-api-developers-guide/collectors/add-a-collector/

Feel free to add or remove properties in the data = '{...}' string to fit your environment; a complete list of properties can be found at the above link.

#Create Collector Object
#Request Info
httpVerb ='POST'
resourcePath = '/setting/collectors'
queryParams = ''
data = '{"description":"TestCollector1","backupAgentId":39}'

#Construct URL
url = 'https://'+ Company +'.logicmonitor.com/santaba/rest' + resourcePath +queryParams

Next, construct the signature and headers. This can mostly be copied/pasted as it does not really change:

#Get current time in milliseconds
epoch = str(int(time.time() * 1000))
#Concatenate Request details
requestVars = httpVerb + epoch + data + resourcePath
#Construct signature
signature = base64.b64encode(hmac.new(AccessKey,msg=requestVars,digestmod=hashlib.sha256).hexdigest())
#Construct headers
auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
headers = {'Content-Type':'application/json','Authorization':auth}

This part executes the request and the API will create the collector object:

#Make request
response = requests.post(url, data=data, headers=headers)

Next is the tricky part. We need the collector ID for the rest of the script, so we have to extract it from the returned JSON and store it in a local variable for later use. LogicMonitor uses JSON for its payloads; you can read about parsing JSON at http://docs.python-guide.org/en/latest/scenarios/json/

Let's use the script to walk through the parsing process. This JSON snippet is what the Add Collector function will return:

Response Body:
{
  "status" : 200,
  "errmsg" : "OK",
  "data" : {
    "id" : 84,
    "createdOn" : 1486761412,
    "updatedOn" : 1486761412,
    "upTime" : 0,
    "watchdogUpdatedOn" : 1486761412,
    "status" : 0,

The collector ID, '"id" : 84', is needed for the rest of the script.
The jsonResponse = json.loads(response.content) line grabs the JSON response from the above request and assigns it to the local variable jsonResponse, which gives the script access to the returned JSON data. The JSON dictionaries need to be specified to pull specific data; to do this, look at the "{" entries in the JSON snippet - these are the dictionaries. We place them in [ ] to pull the data out of the JSON. In this case there is only the main body dictionary, which does not need to be referenced, and a "data" dictionary. To pull the value for the id key, use the syntax ['data']['id'] as seen in the example below:

#Parse response
jsonResponse = json.loads(response.content)
#The CollectorID is needed to download the installer
deviceID=(jsonResponse['data']['id'])

Now the ID number is assigned to the variable deviceID. We are using the collector ID in URLs, so we need to convert it from an int to a str. Easy enough using the str() function:

#Converting the int to a str
deviceIDstr=str(deviceID)
#print(deviceIDstr)

Now the string version of the collector ID is in the variable deviceIDstr and we can use it in the API URLs. The print function provides nice feedback for the end user so they can track what is happening:

print"Create Collector Object ID:"+(deviceIDstr)+" success!"

The first part is done: we have created a collector object on the portal and stored the collector ID locally in a variable. Time to download the collector installer. The template for this section can be found on our support site:

https://www.logicmonitor.com/support/rest-api-developers-guide/collectors/downloading-a-collector-installer/

Declaring the environment and classes and setting up authentication can be skipped, as it’s already done. Let's build the URL according to the example script:

#Download Collector Installer
#Request Info
httpVerb ='GET'

Next, add the collector ID to the URL. We can do this by adding the deviceIDstr variable directly into the resource path:

#Adding the CollectorID to the URL
resourcePath = '/setting/collectors/'+deviceIDstr+'/installers/Linux64'

Any property found at the above link for the REST API collector download can be set in the queryParams variable. The example is using collectorSize= and setting it to nano, as this is just a lab:

#Specify size [nano|small|medium|large]
queryParams = '?collectorSize=nano'
data = ''

Just as in the previous section, it's possible to copy/paste the construction of the URL, signature, and headers, as it’s mostly the same as the example:

#Construct URL
url = 'https://'+ Company +'.logicmonitor.com/santaba/rest' + resourcePath +queryParams
#Get current time in milliseconds
epoch = str(int(time.time() * 1000))
#Concatenate Request details
requestVars = httpVerb + epoch + data + resourcePath
#Construct signature
signature = base64.b64encode(hmac.new(AccessKey,msg=requestVars,digestmod=hashlib.sha256).hexdigest())
#Construct headers
auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
headers = {'Content-Type':'application/json','Authorization':auth}
#Make request
response = requests.get(url, data=data, headers=headers)

This part downloads the collector as “LogicMonitorSetup.bin” to the current working directory.
#Print status and write body of response to a file
print 'Response Status:',response.status_code
file_ = open('LogicMonitorSetup.bin', 'w')
file_.write(response.content)
file_.close()

The download is now complete. Now we need to give the LogicMonitorSetup.bin file execute permissions and run it. I used the commands module; feel free to use any preferred alternative - we are just running shell commands at this point:

#Give execute perm to collector install and install with the silent option
commands.getstatusoutput('chmod +x LogicMonitorSetup.bin')

End-user feedback:

print"adding execute permissions to the collector download"
print"Starting collector install"

Run the setup .bin file with the silent option -y. A full list of install options can be seen by running ./LogicMonitorSetup.bin -h:

commands.getstatusoutput('./LogicMonitorSetup.bin -y')
print"Script Complete"

And that's it! A new collector should be registered on the portal. -Jeff

Below is a copy of the example script used for this post.

#!/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac
import commands

#Account Info
AccessId ='j53939s54CP3Z9SUp6S5'
AccessKey ='k[t2nP6-Srie[=M9Ju%-H8riy8Sww3Mp9j%kse5X'
Company = 'lmjeffwoeber'

#Create Collector Object
#Request Info
httpVerb ='POST'
resourcePath = '/setting/collectors'
queryParams = ''
data = '{"description":"TestCollector1"}'

#Construct URL
url = 'https://'+ Company +'.logicmonitor.com/santaba/rest' + resourcePath +queryParams
#Get current time in milliseconds
epoch = str(int(time.time() * 1000))
#Concatenate Request details
requestVars = httpVerb + epoch + data + resourcePath
#Construct signature
signature = base64.b64encode(hmac.new(AccessKey,msg=requestVars,digestmod=hashlib.sha256).hexdigest())
#Construct headers
auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
headers = {'Content-Type':'application/json','Authorization':auth}
#Make request
response = requests.post(url, data=data, headers=headers)

#Parse response
jsonResponse = json.loads(response.content)
#The CollectorID is needed to download the installer
deviceID=(jsonResponse['data']['id'])
#Converting the int to a str
deviceIDstr=str(deviceID)
#print(deviceIDstr)
print"Create Collector Object ID:"+(deviceIDstr)+" success!"
#Download Collector Installer
#Request Info
httpVerb ='GET'
#Adding the CollectorID to the URL
resourcePath = '/setting/collectors/'+deviceIDstr+'/installers/Linux64'
#Specify size [nano|small|medium|large]
queryParams = '?collectorSize=nano'
data = ''

#Construct URL
url = 'https://'+ Company +'.logicmonitor.com/santaba/rest' + resourcePath +queryParams
#Get current time in milliseconds
epoch = str(int(time.time() * 1000))
#Concatenate Request details
requestVars = httpVerb + epoch + data + resourcePath
#Construct signature
signature = base64.b64encode(hmac.new(AccessKey,msg=requestVars,digestmod=hashlib.sha256).hexdigest())
#Construct headers
auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
headers = {'Content-Type':'application/json','Authorization':auth}
#Make request
response = requests.get(url, data=data, headers=headers)

#Print status and write body of response to a file
print"Downloading Collector Installer"
print 'Response Status:',response.status_code
file_ = open('LogicMonitorSetup.bin', 'w')
file_.write(response.content)
file_.close()

#Give execute perm to collector install and install with the silent option
commands.getstatusoutput('chmod +x LogicMonitorSetup.bin')
print"adding execute permissions to the collector download"
print"Starting collector install"
#-m bypasses mem check as my lab only has 1 gig of ram. Remove in production
commands.getstatusoutput('./LogicMonitorSetup.bin -y -m')
print"Script Complete"
  9. These are some simple troubleshooting steps I use when dealing with ESX servers. LogicMonitor has debug tools that can be run in the debug window on the collector currently assigned to the ESX device.

The first useful tool is !http. This simply sends an HTTP request to a host and prints the response. The ESX API has a few pages we can use that do not require authentication, which is helpful for testing a connection independently of credential issues. For example, the below debug command returns “The Web Services Description Language (WSDL) file containing the definition of the VMware Infrastructure Management API”:

!http https://10.73.42.10/sdk/vim.wsdl

What data is returned isn't important; what this command tells us is whether the collector can connect to the ESX device, or whether network infrastructure is somehow stopping communication.

The next command is !esx, and it's a bit more powerful:

help !esx
!esx: query a list of esx performance counters against the given host and print the result
usage: !esx [username=foo password=bar] <host> <entityName> <entityType[host|vm|datastore|cluster|resourcepool|hoststatus|cpu|memory|disk|network]> [counter1 [counter2...]]
If you don't give the username/password, the agent will use esx.user/esx.pass properties of the host.

!esx is a debug tool that allows us to query the VMware API directly, in the same way the datasources poll data. To decode the help example, let's run this against the ESX server 10.73.42.10 and the virtual machine “marvin”. The example !esx command is:

!esx vc-server esx-name host cpu.usage.average mem.consumed.average

Broken down for the test environment:

!esx 10.73.42.10 marvin vm cpu.usage.average mem.consumed.average

If you don't give the username/password, the agent will use the esx.user/esx.pass properties of the host. This is a fantastic way to test the credentials entered into LogicMonitor. You can also push credentials by using the username= and password= options with the !esx command to verify they work with LogicMonitor.

So far we have only tested connectivity, which is the most common form of ESX troubleshooting. We can also use !esx to query individual datapoints in the datasources to ensure the data presented by LogicMonitor is accurate. The command can be built by viewing the datapoint in question. For this example we can use the CPU Usage counter from the previous examples. Let's take another look at the !esx usage:

usage: !esx [username=foo password=bar] <host> <entityName> <entityType[host|vm|datastore|cluster|resourcepool|hoststatus|cpu|memory|disk|network]> [counter1 [counter2...]]

We know the host is the ESX server 10.73.42.10, the entity name is the virtual machine marvin, the entity type can be found in the datapoint (which is "vm"), and the ESX counter is cpu.usage.average:

!esx 10.73.42.10 marvin vm cpu.usage.average

which will return the value cpu.usage.average=211.0
  10. I recently wrote a datasource that polled an API and alerted when the returned value was greater than 0. The problem I ran into is that the API never returned a 0; instead it would return NaN. I worked around this issue by using key=value datapoints and an "if (strv.isEmpty())" check. Basically, if there is a value returned, the output of the script is "events=[returned value]", the same as most key=value datapoints. If the returned value is empty, the script fills out the string itself, returning "events=0", which puts a 0 in the datapoint and allows the alert to clear. This is a nice workaround for a LogicMonitor admin's bag of tricks.

//Print KeyValue
strv = response_obj['results']['2'];
if ( strv.isEmpty() ) {
    println "events=0"
} else {
    println "events=" + strv;
}
return(0);
  11. Accessing the Zendesk API with LogicMonitor. Details for the Zendesk API can be found at the link below:

https://developer.zendesk.com/rest_api/docs/core/introduction

The below post is not intended as a copy/paste Zendesk datasource, but as instructions for creating Groovy script datasources based on custom criteria. Zendesk data can be imported and alerted on with LogicMonitor using various API methods. This post will focus on using Zendesk views and the Zendesk search query.

First we will focus on the view, specifically the count of tickets in the view. Zendesk views are a way to organize your tickets by grouping them into lists based on certain criteria. In my example I’ve created a view for my rated tickets within the last 7 days; the criteria can be anything you require. Zendesk has various JSON endpoints already created for views. This example will use count.json.

Create a Zendesk view, then load the view in Zendesk; the view ID will be in the URL. For example:

https://logicmonitor.zendesk.com/agent/filters/90139988

90139988 is the view ID. Zendesk documentation on views can be found at the below link:

https://support.zendesk.com/hc/en-us/articles/203690806-Using-views-to-manage-ticket-workflow

From the LogicMonitor side, we can use a datasource with an embedded Groovy script and the built-in JSON slurper to parse the data. More information on Groovy script datasources can be found at the below URLs:

https://www.logicmonitor.com/support/datasources/scripting-support/embedded-groovy-scripting
https://www.logicmonitor.com/support/datasources/groovy-support/how-to-debug-your-groovy-script/

Create a new Groovy script datasource and be sure to import the JSON slurper and HTTP API by adding the below lines to the top of the script:

// import the logicmonitor http and JsonSlurp class
import com.santaba.agent.groovyapi.http.*;
import groovy.json.JsonSlurper;

The URL and authentication information to retrieve the view’s count JSON are defined as:

url = 'https://logicmonitor.zendesk.com/api/v2/views/90139988/count.json'

Authentication uses a Zendesk token instead of a password by appending the '/token' string to the user ID:

user = 'jeff.woeber@logicmonitor.com' + '/token'
pwd = '##ZENDESK TOKEN##'

Next, use the groovyapi HTTP class:

// get data from the ZenDesk API
http_body = HTTP.body(url, user, pwd);

This will return the count JSON, which should look similar to the below:

view_count=[url:https://logicmonitor.zendesk.com/api/v2/views/101684888/count.json, view_id:101684888, value:45, pretty:45, fresh:false]

value:45 is the count data, so we need to parse out the value. This can be done using the JSON slurper:

// use groovy slurper
json_slurper = new JsonSlurper();
response_obj = json_slurper.parseText(http_body);

You can print the parsed JSON to the output in the !groovy debug window by using the below code:

// Debug - print json slurp to identify keyvalues
// iterate over the response object, assigning key-value pairs
response_obj.each() { key, value ->
    // print each key-value pair
    println key + "=" + value;
}

LogicMonitor can use multi-line key=value pairs, so adding a key of "TicketCount=" to the output will make it easy to add a datapoint for the count value.
Do this by printing to the output:

// Print key Value pair for Logicmonitor Datapoint
println "TicketCount=" + response_obj.view_count.value

In your datasource you can add a new datapoint:

*Content the script writes to the standard output
Interpret output with: Multi-line Key Value Pairs
Key = TicketCount

LogicMonitor will recognize TicketCount as a datapoint, and alert thresholds can be set accordingly. In the attached example Zendesk_TixCnt.xml, the Zendesk-specific values have been tokenized to be added at the device level where the datasource will be applied. Tokens required on the device are:

ZEN.ACCOUNT - i.e. logicmonitor
ZEN.EMAIL - i.e. Jeff.Woeber@LogicMonitor.com
ZEN.TOKEN - token for authentication
ZEN.VIEW - view ID if using a view (this can be found in the URL while viewing the view)

The second example uses the search API to query for tickets with a specified status over the last 48 hours. When building the URL, it’s important to remember that spaces and special characters are not allowed; use encoded characters instead (http://www.w3schools.com/tags/ref_urlencode.asp). An example query for solved tickets in the last 48 hours:

query=type:ticket status:solved created>48hours

The URL after including encoded characters:

url = "https://logicmonitor.zendesk.com/api/v2/search.json?query=tickets%20status:solved%20created%3E48hours"

The output will look similar to:

~brand_id:854608, allow_channelback:false, result_type:ticket]]
facets=null
next_page=null
previous_page=null
count=29

We only need the count. Using the same key=value output as the previous example, we can add a key for "Solved":

println "solved=" + response_obj.count.value

In the attached example ZenDesk_TixStatus, this process is repeated for Created, Open, Solved, Pending, On-Hold, and Closed.

Zendesk_TixStatus.xml
Zendesk_TixCount.xml
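To test the views count.json call outside of LogicMonitor, a minimal Python sketch along these lines can be used; the account, email, token, and view ID below are placeholders corresponding to the ZEN.* tokens described above:

# Minimal sketch (not the datasource): pulling the ticket count for a Zendesk
# view with the views count.json endpoint. Account, email, token, and view ID
# are placeholders matching the ZEN.* tokens.
import requests

account = "logicmonitor"                 # ZEN.ACCOUNT placeholder
email = "jeff.woeber@logicmonitor.com"   # ZEN.EMAIL placeholder
token = "ZENDESK_TOKEN"                  # ZEN.TOKEN placeholder
view_id = "90139988"                     # ZEN.VIEW placeholder

url = "https://" + account + ".zendesk.com/api/v2/views/" + view_id + "/count.json"

# Zendesk token auth uses "email/token" as the username and the token as the password.
response = requests.get(url, auth=(email + "/token", token), timeout=10)
data = response.json()

# Print the key=value pair the same way the Groovy datasource does.
print("TicketCount=" + str(data["view_count"]["value"]))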