About Brandon

  • Rank
    Community Whiz Kid
  1. I would really like to see a feature implemented that allows for easy, adjustable graph smoothing. This could be accomplished by adding a switch to the UI in the graph configuration screen. If the switch is set to "enabled", a drop-down appears prompting for an integer between 1% and 5%; this number represents the percentage of total datapoints used to calculate the "smoothed" values. A second drop-down prompts for the position of the calculation: past, future, or both (default). For example, take a graph containing 500 values with rolling average enabled and set to 3% (both): each value on the graph would be recalculated to reflect the average of itself plus the 15 (3% of 500) preceding and following datapoints. (All examples below were created with Timelion in Kibana using a similar algorithm.) [Screenshots: the original data, with 3% smoothing, and with 5% smoothing.]
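The rolling-average behavior described above can be sketched as follows. This is a minimal illustration of the proposed feature, not LogicMonitor code; the window arithmetic (a percentage of the series applied before and/or after each point) follows the 500-value example in the post.

```python
def smooth(values, pct=3, position="both"):
    """Rolling average where the window size is pct% of the total datapoints.

    position controls which neighbors are averaged in:
      "past"   - only preceding points
      "future" - only following points
      "both"   - preceding and following points (the proposed default)
    """
    n = len(values)
    k = max(1, round(n * pct / 100))  # e.g. 3% of 500 points -> 15
    out = []
    for i in range(n):
        lo = i - k if position in ("past", "both") else i
        hi = i + k if position in ("future", "both") else i
        window = values[max(0, lo):min(n - 1, hi) + 1]
        out.append(sum(window) / len(window))
    return out
```

A flat series passes through unchanged, while noisy series are pulled toward their local mean; larger `pct` values smooth more aggressively, matching the 3% vs. 5% screenshots.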
  2. Well, I had a few use cases come up for this, so I decided to take another crack at it. I think this is going to work out better for us than my first version:

    import com.santaba.agent.groovyapi.win32.WMI

    hostname = hostProps.get("system.hostname")
    my_query = "Select Name, DisplayName, PathName from Win32_Service"
    // Open a WMI session to the host and run the query against the CIMv2 namespace.
    def session = WMI.open(hostname)
    def result = session.queryAll("CIMv2", my_query, 15)
    def exclude = ['ALG', 'AppVClient', 'COMSysApp', 'diagnosticshub.standardcollector.service',
                   'FontCache3.0.0.0', 'EFS', 'KeyIso', 'msiserver', 'MSDTC', 'Netlogon',
                   'NetTcpPortSharing', 'RpcLocator', 'RSoPProv', 'PerfHost', 'SamSs',
                   'SensorDataService', 'Spooler', 'sppsvc', 'TrustedInstaller', 'UI0Detect',
                   'UevAgentService', 'VaultSvc', 'vds', 'VSS', 'WdNisSvc', 'WinDefend',
                   'wmiApSrv', 'WRSVC', 'WSearch']
    result.each() {
        // Skip shared svchost-hosted services and anything on the exclude list.
        if (!(it.PATHNAME.toLowerCase() ==~ /c:\\windows\\system32\\svchost\.exe .*/) && !(it.NAME in exclude)) {
            println "WinService." + it.NAME.replaceAll(' ', '') + "=" + it.DISPLAYNAME
        }
    }
    return 0

    Someone try this one and let me know if I missed anything obvious. I tried to cut out all the noise so it would only spit out the services anyone really cares about.
  3. Cisco Info PropertySources

    @Tom Lasswell I noticed it wasn't pulling anything for switch stacks, so I modified it a bit to get around that. Of course, this will only show the data for the stack master, but it can definitely still be useful. I know there's a separate datasource that pulls the data for all the units in each stack, but oh well - better to have it twice than not at all!

    import com.santaba.agent.groovyapi.snmp.Snmp;

    // Set hostname variable.
    def hostname = hostProps.get("system.hostname")
    // Wrap code in try/catch in case execution experiences an error.
    try {
        // OID which contains the serial number of Cisco devices.
        def entPhysicalSerialNum = ""
        // Initiate SNMP GET command.
        def output = Snmp.get(hostname, entPhysicalSerialNum);
        if (output == null) {
            // A null response could mean a switch stack - retry with the second OID.
            def entPhysicalSerialNum2 = ""
            def output2 = Snmp.get(hostname, entPhysicalSerialNum2)
            println "auto.Cisco_Serial_Number=" + output2
        }
        else {
            // Print out the serial number.
            println "auto.Cisco_Serial_Number=" + output
        }
    }
    // Catch the exception.
    catch (Exception e) {
        // Print out the exception.
        println e;
        return 1;
    }
    // exit code 0
    return 0;
  4. Cisco Info PropertySources

    @Tom Lasswell - This is magnificent! I don't know why I never pulled this before, but I'm glad I did. Maybe someone at LM could be convinced to pull this into the official repository? Or even bake it into the core product! Definitely useful for us.
  5. AWS Lambda Alias

    GX2WXT A single Lambda function might have several versions. The default Lambda datasource monitors and alerts on the aggregate performance of each Lambda function. Using the Alias functionality in AWS, this datasource returns CloudWatch metrics specifically for the versions to which you have assigned aliases, allowing you to customize alert thresholds or compare performance across different versions of the same function. This datasource does not automatically discover aliases and begin monitoring them (that could quickly translate into a large number of aliases being monitored and drive up your CloudWatch API bill). Instead, add only the aliases you want monitored by setting the device property "lambda.aliases" either on individual Lambda functions or at the group level if you're using the same alias across several Lambda functions. To add more than one, simply list them separated by a single space - e.g. "Prod QA01 QA02". If an alias does not exist, no data will be returned. This datasource is otherwise a clone of the existing AWS_Lambda datasource with the default alert thresholds.
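As a rough illustration of how per-alias monitoring differs from the default aggregate query, here is a sketch in Python (the actual datasource is not shown in the post; only the "lambda.aliases" property and its space-separated format come from it, the rest is an assumption). CloudWatch scopes AWS/Lambda metrics to an alias via a Resource dimension of the form "function-name:alias" alongside the FunctionName dimension.

```python
def alias_dimensions(function_name, aliases_prop):
    """Expand the space-separated lambda.aliases property into one
    CloudWatch dimension set per alias. Each set could then be passed
    to a GetMetricStatistics call for the AWS/Lambda namespace."""
    dims = []
    for alias in aliases_prop.split():
        dims.append([
            {"Name": "FunctionName", "Value": function_name},
            # The Resource dimension narrows metrics to a single alias.
            {"Name": "Resource", "Value": f"{function_name}:{alias}"},
        ])
    return dims
```

With the example property value "Prod QA01 QA02", this yields three dimension sets, one per alias, which is why a long alias list multiplies the CloudWatch API calls.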
  6. Updated AWS EC2 ScheduledEvents

    @Sarah Terry - Thanks for the reply! Filtering out the discovery of the events using ILPs would help us prevent alerting, but the event also wouldn't be discovered. So, unless I'm mistaken, if we wanted a history of events that have occurred on an instance or environment, LM wouldn't show them. This might be a "pick your poison" scenario where we would need to choose between collecting the data or alerting on it - but not necessarily both.
  7. When cloning an existing datasource and changing the type to another, similar type, all of the datapoints immediately disappear. I am overhauling some of my script and webpage datasources, whose output is JSON or key-value pairs, into batchscript. As soon as I change the collection type, the datapoints immediately disappear and it's like I'm starting from scratch. The script auto-discovery and collection panes also get wiped out. Unfortunately, this has translated into me spending a lot of time recreating datapoints instead of modifying existing ones. I understand why those datapoints are not compatible with SNMP or CIM collection methods, but they should largely be transferable between certain datasource types. On the bright side, this has forced me to start overhauling my datasources by exporting and importing the XML files. Still, it would be great if the GUI could play nice for some of us power users.
  8. Updated AWS EC2 ScheduledEvents

    Any chance you guys could implement these changes into the back end of the core product? I only ask because the new datasource doesn't expose the code that's running, so I can't tweak the output myself. I really only need to be alerted on scheduled events that haven't happened yet.
  9. NTP Datasource

    @Andrey Kitsen - Thanks! That was fast!
  10. Solr 6.6+ JVM

    Haha, thanks @Andrey Kitsen! I'm just getting around to posting the stuff I've been meaning to make available to the community for a while.
  11. Solr Error Logs

    This datasource monitors the Solr logs via an HTTP call to the web front end and parses the JSON response to output in a LogicMonitor-friendly format. This appears to work on all of the versions of Solr I have tested (6.0+). *Note - I have disabled the applies-to on this datasource because it can be quite noisy if your logging hasn't been tuned on the Solr nodes. I did include some useful filters to strip out some of the more common noise, but I still recommend applying this with caution. W9PN3Y
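The parse-and-filter approach described above can be sketched as follows (in Python rather than the Groovy a LogicMonitor datasource would use; the sample response shape and the noise filter are assumptions for illustration, not taken from the actual datasource):

```python
import json

# Hypothetical excerpt of the JSON returned by Solr's logging admin endpoint;
# the real response shape may differ by version.
SAMPLE = json.loads("""
{"history": {"docs": [
  {"time": "2017-09-01T12:00:00Z", "level": "ERROR",
   "logger": "o.a.s.c.SolrCore", "message": "boom"},
  {"time": "2017-09-01T12:00:05Z", "level": "WARN",
   "logger": "o.a.s.u.UpdateLog", "message": "slow commit"}
]}}
""")

# Substrings treated as common noise, in the spirit of the filters the post mentions.
NOISE = ("slow commit",)

def lm_friendly(payload, levels=("ERROR", "WARN")):
    """Flatten Solr log documents into lines a script datasource could print."""
    lines = []
    for doc in payload.get("history", {}).get("docs", []):
        if doc["level"] in levels and not any(n in doc["message"] for n in NOISE):
            lines.append(f'{doc["level"]}: {doc["logger"]} - {doc["message"]}')
    return lines
```

The filter list is the important knob here: without it, an untuned Solr node's log volume makes the output (and any alerting on it) very noisy, which is why the applies-to is disabled by default.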
  12. Solr 6.6+ JVM

    967XA7 This datasource queries the new Solr metrics API to gather JVM performance data without having to actually enable JMX on the Solr node. This will only work on Solr nodes running version 6.6 or higher. Feedback always appreciated.
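To give a feel for what "querying the metrics API instead of JMX" means, here is a sketch (Python for illustration; the sample payload is an assumed excerpt of a Solr metrics response for the JVM group, and the exact shape varies by Solr version):

```python
# Hypothetical excerpt of a Solr metrics API response scoped to the JVM group.
SAMPLE = {
    "metrics": {
        "solr.jvm": {
            "memory.heap.used": 123456789,
            "memory.heap.max": 2147483648,
            "threads.count": 42,
        }
    }
}

def jvm_datapoints(payload):
    """Flatten the JVM metric group into key=value lines that a script
    datasource could print for collection, with dots made datapoint-safe."""
    jvm = payload["metrics"]["solr.jvm"]
    return [f"{name.replace('.', '_')}={value}" for name, value in sorted(jvm.items())]
```

Because the metrics come back over plain HTTP/JSON, no JMX port or agent configuration is needed on the Solr node, which is the main appeal of this approach on 6.6+.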
  13. NTP Datasource

    I'd really appreciate it if the NTP datasource could be updated with better descriptions for datapoints. I'm trying to dissect it, but I don't really understand the raw output or how it's interpreted. Maybe I'm overthinking it.
  14. Thanks @mkerfoot and @Andrey Kitsen. I'll have more good stuff to share soon. I always appreciate feedback!
  15. datasource migration function

    Two things: 1. I'd greatly appreciate it if you could share that datasource - is it the one in the official repository? 2. I largely agree with your point that it's not always obvious when a datasource change or update is going to cause data loss - a pain I've experienced a few too many times. Even when updating official datasources, it's a risk due to the custom applies-to functions we might be using. It would be great if there were at least some logic that allowed the import of a datasource while letting the administrator choose whether or not to override the applies-to function. Or maybe even let advanced users make manual changes to the XML doc before importing, to prevent datapoint renaming.