Search the Community

Showing results for tags 'datasource'. Found 38 results.

  1. The datasource tokens ##DataSource## and ##INSTANCE## do not allow for enough granularity. For example, if I want an alert to state that it is a "Citrix Services" alert, I cannot just use the ##DataSource## token, as this will also include the instance name, which may be long and confusing, such as "WinCitrixServices-Citrix Independent Management Architecture". A ##DataSourceDisplayName## or similar token would resolve this; the name is already stored by LM as dataSourceDisplayName, and that is the token I am requesting. With the current setup:

     ##DataSource## equals DataSource.dataSourceName + instance.DisplayName
     ##INSTANCE## equals instance.Name

     Thank you, John
  2. How do we monitor our DataSources? One of our customers asked an interesting and challenging question: how can he track and alert on changes to his customised DataSources? Until recently there was no straightforward way. It is now possible thanks to the recent release of the ConfigSource add-on module and the publishing of the DataSource REST API. At a high level, we create a Groovy script ConfigSource that makes a REST API call to export a targeted DataSource to XML format, store and check the XML for changes in ConfigSource, and send an alert when there is a change.

     Creating the ConfigSource:

     1. Create a REST API token.
     2. Create an embedded Groovy script ConfigSource with the following settings:

        Name : DS_XML
        Display Name : DS_XML
        Applies To : this ConfigSource can be applied to any device
        Collect Every : up to your company policy, minimum 1 hour
        Multi-instance? : check this option
        Enable Active Discovery : uncheck this option
        Collector Attributes : select Embedded Groovy Script
        Groovy Script : see below
        Config Check : select Any Change (Check For: option)

     3. Save the ConfigSource.

        import org.apache.http.HttpEntity
        import org.apache.http.client.methods.CloseableHttpResponse
        import org.apache.http.client.methods.HttpGet
        import org.apache.http.impl.client.CloseableHttpClient
        import org.apache.http.impl.client.HttpClients
        import org.apache.http.util.EntityUtils
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import org.apache.commons.codec.binary.Hex;

        // define credentials and url
        def accessId = hostProps.get("api.access.id");
        def accessKey = hostProps.get("api.access.key");
        def account = hostProps.get("api.account");
        def resourcePath = "/setting/datasources/##WILDVALUE##";
        def url = "https://" + account + ".logicmonitor.com" + "/santaba/rest" + resourcePath + "?format=xml";

        // get current time
        epoch = System.currentTimeMillis();

        // calculate LMv1 signature
        requestVars = "GET" + epoch + resourcePath;
        hmac = Mac.getInstance("HmacSHA256");
        secret = new SecretKeySpec(accessKey.getBytes(), "HmacSHA256");
        hmac.init(secret);
        hmac_signed = Hex.encodeHexString(hmac.doFinal(requestVars.getBytes()));
        signature = hmac_signed.bytes.encodeBase64();

        // HTTP GET
        CloseableHttpClient httpclient = HttpClients.createDefault();
        httpGet = new HttpGet(url);
        httpGet.addHeader("Authorization", "LMv1 " + accessId + ":" + signature + ":" + epoch);
        response = httpclient.execute(httpGet);
        responseBody = EntityUtils.toString(response.getEntity());
        code = response.getStatusLine().getStatusCode();
        println responseBody
        httpclient.close();

     4. On the device the ConfigSource is applied to, define the following device properties:

        api.access.id : < API Token Access Id >
        api.access.key : < API Token Access Key >
        api.account : < LM Account >

     Adding ConfigSource instances:

     1. Identify the DataSource id. You can find it in the UI by looking at the URL of the DataSource definition (or look it up via the REST API; see the sketch after this post).
     2. Add ConfigSource instances by selecting 'Add Monitored Instance' from the Manage dropdown next to the Manage button for the device:

        Name : < DataSource Name >
        Wildcard value : < DataSource Id >
        DataSource : DS_XML

     3. Repeat steps 1 and 2 to add more DataSource instances.

     Points to note:

     1. To execute a ConfigSource, you need a minimum collector version of 22.110.
     2. One DataSource id per instance.
     3. Differences between DataSource versions are viewed in XML format.
     4. A previous DataSource version can be restored by downloading and importing the previously compared XML from the ConfigSource history.
     5. Thanks and credits to David Lee (our Jedi Master) for enhancing the original concept into a more user-friendly multi-instance ConfigSource.

     Screenshots of the ConfigSource result: (images attached to the original post)
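     If you would rather not dig the DataSource id out of the UI, the same LMv1-signed request pattern can be pointed at the /setting/datasources collection to look ids up by name. Below is a minimal sketch assuming the same api.* device properties as above; the DataSource name in the filter is only an example, and note that query parameters are not part of the LMv1 signature:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import org.apache.commons.codec.binary.Hex;

        def accessId = hostProps.get("api.access.id");
        def accessKey = hostProps.get("api.access.key");
        def account = hostProps.get("api.account");

        // only the resource path is signed; query parameters are excluded
        def resourcePath = "/setting/datasources";
        def queryParams = "?fields=id,name&filter=name:WinCitrixServices"; // example name
        def url = "https://" + account + ".logicmonitor.com/santaba/rest" + resourcePath + queryParams;

        // LMv1 signature: HMAC-SHA256 over METHOD + epoch + resourcePath, hex then base64
        epoch = System.currentTimeMillis();
        requestVars = "GET" + epoch + resourcePath;
        hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(accessKey.getBytes(), "HmacSHA256"));
        signature = Hex.encodeHexString(hmac.doFinal(requestVars.getBytes())).bytes.encodeBase64();

        // a plain HttpURLConnection is enough for a one-off GET
        def conn = new URL(url).openConnection();
        conn.setRequestProperty("Authorization", "LMv1 " + accessId + ":" + signature + ":" + epoch);
        println conn.inputStream.text; // JSON listing the matching id/name pairs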
  3. I recently wrote a datasource that polled an API and alerted when the return value was greater than 0. The problem I ran into is that the API never returned a 0; instead it would return NaN. I worked around this by using key=value datapoints and an "if (strv.isEmpty())" check. Basically, if there is a value returned, the script output will be "events=<returned value>", the same as most key=value datapoints. If the returned value is empty, the script prints the whole string itself, returning "events=0", which puts a 0 in the datapoint and allows the alert to clear. This is a nice workaround for a LogicMonitor admin's bag of tricks.

        // print the key=value pair, substituting 0 when the API returns nothing
        strv = response_obj['results']['2'];
        if (strv.isEmpty()) {
            println "events=0"
        } else {
            println "events=" + strv;
        }
        return (0);
  4. When updating DataSources we lose customization of some attributes. Of course, we could clone and rename, but then we lose out on updates. What I would like is to be able to preserve a subset of the attributes, namely thresholds for alerting and the custom alert subject and message body.
  5. I have a number of datasources that apply to devices that have a property assigned to them... Previously, my custom properties would appear in the autocomplete list when adding a property to a device: I only had to type the first few characters of one of my properties (they all follow the same naming convention) and select it. This functionality has recently disappeared, and the only properties listed now appear to be those from LM-supplied datasources.
  6. Singapore is an island nation in Southeast Asia and home of the LogicMonitor Asia office. While Singapore is relatively small in size and population, it is strategically located in the booming Southeast Asian market, and it is well known as a country with good infrastructure, modern government, and a multicultural background. One good example of this is that the Singapore government publishes information on economics, society, health, the environment, and more through its own website (data.gov.sg). Published data includes Singapore's population, birth rate, GDP, government expenditure, and even environmental readings such as temperature, PSI, and rainfall.

     One example is the PM2.5 data, which is formatted as JSON. We can build a LogicMonitor datasource on top of it to monitor air quality in Singapore. The information is retrieved with an HTTP request, supplying an API key as part of the HTTP headers. The response is divided into two parts: the first part lists the regions, and the second part contains the actual air quality readings. We can use a Groovy script for Active Discovery to parse each region and list it as an individual instance in LogicMonitor. The script sends an HTTP request to api.data.gov.sg, looks for the region names in the response body, and prints each one in the format instance1_id##instance1_name (a sketch of such a script appears after this post). Once the instances are defined, we can use the regular HTTP data collection method to collect the relevant data and display it in LogicMonitor. From there we can easily create graphs showing the history of Singapore's air quality.

     On top of the PM2.5 reading, we can also create datasources for the PSI reading and the current temperature: simply clone this datasource and make small modifications to match the HTTP output of each feed. The next logical step is a dashboard showing all of this information at a glance, which looks great on a large screen such as the TV in the meeting room, especially during the hazy season when air quality can climb to dangerous levels. We can also use the same datasource to create an alert when air quality drops into the unhealthy range and deliver it to our HipChat room. This is just an example of how LogicMonitor can monitor more than just IT infrastructure. In our case it helps us decide where we should go for lunch and whether we should bring our N95 masks along.
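     A minimal sketch of the Active Discovery script, assuming data.gov.sg's /v1/environment/pm25 endpoint and a hypothetical datagovsg.api.key device property for the API key (the property name and response layout are assumptions, so adjust them to your setup):

        import groovy.json.JsonSlurper

        // hypothetical device property holding the data.gov.sg API key
        def apiKey = hostProps.get("datagovsg.api.key")

        def conn = new URL("https://api.data.gov.sg/v1/environment/pm25").openConnection()
        conn.setRequestProperty("api-key", apiKey)

        def response = new JsonSlurper().parseText(conn.inputStream.text)

        // the first part of the response lists the regions; print each one
        // as a LogicMonitor instance in the form instance_id##instance_name
        response.region_metadata.each { region ->
            println region.name + "##" + region.name
        }
        return 0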
  7. Hi everyone, I'm trying to get some results off a SimpliVity system using PowerShell and JSON. I have the PowerShell working as I would expect, but I can't get the results to show up in LogicMonitor. In collector debug, I see this when running the script:

        >!posh returns 0
        output:
        [
          {
            "id": "53466cdf-ee82-435e-8619-334b3a3e7583",
            "name": "Fakename",
            "type": "OMNISTACK",
            "members": [
              "4230cce1-f672-e708-0ed6-3310d6db8222",
              "4230e3c6-4cf6-7228-fc86-cbce8cfa4af7",
              "564dcac8-b774-c644-cb22-e63acfd07cb9"
            ],
            "arbiter_connected": true,
            "arbiter_address": "10.1.1.6",
            "stored_compressed_data": 731283120128,
            "compression_ratio": "1.8 : 1",
            "deduplication_ratio": "300.9 : 1",
            "free_space": 14779864866201,
            "capacity_savings": 387454100800512,
            "efficiency_ratio": "530.8 : 1",
            "local_backup_capacity": 385825262157824,
            "used_logical_capacity": 388185382895616,
            "remote_backup_capacity": 0,
            "allocated_capacity": 15511146961305,
            "stored_virtual_machine_data": 2360120737792,
            "stored_uncompressed_data": 1290225192960,
            "used_capacity": 731282095104
          }
        ]

     I've attached an example of what I've tried for output. Nothing is showing up.
  8. Accessing the Zendesk API with LogicMonitor. Details on the Zendesk API can be found at the link below:

     https://developer.zendesk.com/rest_api/docs/core/introduction

     This post is not intended as a copy/paste Zendesk datasource, but as instructions for creating Groovy script datasources based on custom criteria. Zendesk data can be imported and alerted on with LogicMonitor using various API methods. This post will focus on using Zendesk views and the Zendesk search query.

     First we will focus on the view, specifically the count of tickets in the view. Zendesk views are a way to organize your tickets by grouping them into lists based on certain criteria. In my example I've created a view of my rated tickets within the last 7 days; the criteria can be anything you require. Zendesk has various JSON files already created for views. This example uses count.json.

     Create a Zendesk view, then load the view in Zendesk and the view ID will be in the URL. For example, in https://logicmonitor.zendesk.com/agent/filters/90139988, the view ID is 90139988. Zendesk documentation on views is available at:

     https://support.zendesk.com/hc/en-us/articles/203690806-Using-views-to-manage-ticket-workflow

     On the LogicMonitor side we can use a datasource with an embedded Groovy script and the built-in JSON slurper to parse the data. More information on Groovy script datasources can be found at:

     https://www.logicmonitor.com/support/datasources/scripting-support/embedded-groovy-scripting
     https://www.logicmonitor.com/support/datasources/groovy-support/how-to-debug-your-groovy-script/

     Create a new Groovy script datasource and be sure to import the JSON slurper and HTTP API by adding the below 3 lines to the top of the script:

        // import the LogicMonitor HTTP and JsonSlurper classes
        import com.santaba.agent.groovyapi.http.*;
        import groovy.json.JsonSlurper;

     The URL to retrieve the view's count JSON is defined as:

        url = 'https://logicmonitor.zendesk.com/api/v2/views/90139988/count.json'

     Authentication uses a Zendesk token instead of a password by appending the '/token' string to the user ID:

        user = 'jeff.woeber@logicmonitor.com' + '/token'
        pwd = '##ZENDESK TOKEN##'

     Next use the groovyapi HTTP class:

        // get data from the Zendesk API
        http_body = HTTP.body(url, user, pwd);

     This returns the count JSON, which should look similar to:

        view_count=[url:https://logicmonitor.zendesk.com/api/v2/views/101684888/count.json, view_id:101684888, value:45, pretty:45, fresh:false]

     value:45 is the count data, so that is what we need to parse out. Parse the response with the JSON slurper:

        // use the groovy JSON slurper
        json_slurper = new JsonSlurper();
        response_obj = json_slurper.parseText(http_body);

     You can print the parsed JSON to output in the !groovy debug window by using the below code:

        // Debug - print the parsed JSON to identify key values
        // iterate over the response object, printing each key-value pair
        response_obj.each() { key, value ->
            println key + "=" + value;
        }

     LogicMonitor can use multi-line key=value pairs, so adding a key of "TicketCount=" to the output makes it easy to add a datapoint for the count value. Do this by printing to the output:

        // print a key=value pair for a LogicMonitor datapoint
        println "TicketCount=" + response_obj.view_count.value

     In your datasource you can then add a new datapoint using:

        Content the script writes to the standard output
        Interpret output with : Multi-line Key Value Pairs
        Key : TicketCount

     LogicMonitor will recognize TicketCount as a datapoint, and alert thresholds can be set accordingly. In the attached example Zendesk_TixCnt.xml, the Zendesk-specific values have been tokenized to be added at the device level where the datasource will be applied. The tokens required on the device are:

     ZEN.ACCOUNT - e.g. logicmonitor
     ZEN.EMAIL - e.g. Jeff.Woeber@LogicMonitor.com
     ZEN.TOKEN - token for authentication
     ZEN.VIEW - view ID if using a view (this can be found in the URL while viewing the view)

     The second example uses the search API to query for tickets with a specified status over the last 48 hours. When building the URL, it's important to remember that spaces and special characters are not allowed; use encoded characters instead (http://www.w3schools.com/tags/ref_urlencode.asp). An example query for solved tickets in the last 48 hours:

        query=type:ticket status:solved created>48hours

     The URL after including the encoded characters:

        url = "https://logicmonitor.zendesk.com/api/v2/search.json?query=tickets%20status:solved%20created%3E48hours"

     The output will look similar to:

        ~brand_id:854608, allow_channelback:false, result_type:ticket]] facets=null next_page=null previous_page=null count=29

     We only need the count, which here is a top-level key. Using the same key=value output as the previous example, we can add a key for "Solved":

        println "solved=" + response_obj.count

     In the attached example ZenDesk_TixStatus, this process is repeated for Created, Open, Solved, Pending, On-Hold, and Closed. A consolidated sketch of the view-count script follows this post. Attachments: Zendesk_TixStatus.xml, Zendesk_TixCount.xml
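     Pulling the view-count pieces together, a minimal end-to-end sketch of the embedded Groovy script might look like the following. It substitutes the ZEN.* tokens listed above for the hard-coded account, view ID, email, and token, so treat it as an illustration rather than the exact script in the attached XML:

        // import the LogicMonitor HTTP and JsonSlurper classes
        import com.santaba.agent.groovyapi.http.*;
        import groovy.json.JsonSlurper;

        // Zendesk-specific values supplied as device-level tokens
        url = "https://##ZEN.ACCOUNT##.zendesk.com/api/v2/views/##ZEN.VIEW##/count.json";
        user = "##ZEN.EMAIL##" + "/token";
        pwd = "##ZEN.TOKEN##";

        // get the count JSON from the Zendesk API and parse it
        http_body = HTTP.body(url, user, pwd);
        response_obj = new JsonSlurper().parseText(http_body);

        // print a key=value pair for the TicketCount datapoint
        println "TicketCount=" + response_obj.view_count.value;
        return 0;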
  9. Currently, LM has hard-coded the Ping datasource to use 250ms ICMP ping intervals. We need the flexibility to adjust the ping interval (in ms) in the DataSource, either statically or via a system property value. Background: we've seen at least one company, Mimosa, change its newest firmware to block ICMP messages that are sent "too quickly". For Mimosa wireless gear, this shows up in LM as 80% packet loss (2 pings are permitted, the next 8 are rejected). Mimosa does not want its hardware resources depleted by multiple rapid ping requests. The current workaround is to alter the thresholds in LM to compensate for an 80% packet-loss reading. With the ability to adjust the ping interval for these hosts in LM, we would have better visibility into network issues.
  10. I have found the new "Test" button on the DataSource LogicModule setup pages to be invaluable for testing which hosts are matched by my "Applies To" filter. I can make a change to the filter and get instant feedback pointing out typographical errors, misspellings, or even, possibly, that I got it correct! This would be equally valuable on the EventSource LogicModule setup pages.
  11. I was a bit surprised yesterday to find that there is no support (other than basic SNMP interfaces, etc.) for HPE Comware switches, like the HP 5900AF datacenter switch. This seems like a big gap in coverage for one of the more common switch vendors, on the platform that is by all accounts the future (ProCurve is still a thing, but HPE is moving more and more toward Comware). Please add support for HPE Comware switches similar to what we get on the ProCurve side! This would include FRUs, sensors, clustering (IRF on the 5900AF, for example), and so on.
  12. Hi--I see on the Script Active Discovery page that there is a new type of token that can be used for instances in script-based instance discovery called "Second Part Alias". What is this and how exactly is it used? Maybe you can provide an example of how it is being used in the product currently? I am particularly interested in storing more information about instances ("instance custom properties") if possible... which could then be sourced in an alert message/webhook... so if it can be used for this purpose in any way, that would be ideal. Awesome forum & support site, BTW.
  13. For a datasource, we would like to be able to set the alert threshold over more than a single sample. You can set the number of threshold violations needed for an alert, but that is far different in nature from setting a threshold over a time range; for example, 60% CPU over 2 hours versus 60% CPU over 10 samples. You might see CPU fluctuate within that period, preventing an alert, even though the average over the longer period is valuable. Similarly, we would like to be able to alert not just on the average over a time period but also on the slope over a time period, though perhaps the latter should be a separate request. Thanks, Mark