
About Brandon

  • Rank
    Community All Star

  1. I have a device property that I would like to update every 15 minutes or so, because I have groups with auto-include rules that look for that property and I need devices to move in and out of those groups on the fly. It would be great if we could set individual custom PropertySources to update on a more frequent schedule. Currently I'm achieving this with the LogicMonitor REST API, which I've baked right into a datasource as a workaround - but I think this solution is messy. Thanks!
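A workaround like the one described above can be sketched in Python. This is a minimal sketch, not the actual datasource script: the LMv1 signature scheme (HMAC-SHA256 hex digest, then base64) and the `/device/devices/{id}` PATCH endpoint come from LogicMonitor's public REST API, while the account name, keys, and the `update_device_property` helper are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.request

def lmv1_header(access_id, access_key, verb, resource_path, data="", epoch=None):
    """Build an LMv1 Authorization header: HMAC-SHA256 over
    verb + epoch + body + resourcePath, hex-encoded then base64-encoded."""
    epoch = str(epoch if epoch is not None else int(time.time() * 1000))
    message = verb + epoch + data + resource_path
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return "LMv1 %s:%s:%s" % (access_id, signature, epoch)

def update_device_property(account, access_id, access_key, device_id, name, value):
    """PATCH one custom property on a device (hypothetical helper; not called here)."""
    path = "/device/devices/%d" % device_id
    body = json.dumps({"customProperties": [{"name": name, "value": value}]})
    req = urllib.request.Request(
        "https://%s.logicmonitor.com/santaba/rest%s?patchFields=customProperties"
        % (account, path),
        data=body.encode(),
        method="PATCH",
        headers={
            "Authorization": lmv1_header(access_id, access_key, "PATCH", path, body),
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)

# Header construction is deterministic once the epoch is pinned:
header = lmv1_header("abc", "secret", "GET", "/device/devices", epoch=1500000000000)
print(header)
```

The auto-include rules then pick up the new property value on the group's next membership evaluation.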
  2. Hey Joe, I'm not sure that this exactly meets your needs, but I think it's a good start. Basically, you can call the hostProps.toProperties() method, which returns the full set of key=value pairs for the host that you can now dig through and filter using regex. Something like this:

     def allProps = hostProps.toProperties()
     allProps.each {
         if ("$it" ==~ /.*\.databases=.*/) {
             println it
         }
     }

     Let me know if this doesn't address what you're trying to accomplish.
  3. SOLR JVM Stats (non-JMX)

    Do the instances at least get discovered? If so, could you do a poll-now and shoot me the error? I'm curious as to what the issue is.
  4. SOLR JVM Stats (non-JMX)

    Hey @George Bica! Did you get any of the other datasources to work?
  5. Apache Solr datasources need scrubbing

    @George Bica - Last one. This is an eventsource, so no pretty graphs here. It makes an API call to pull all of the events in the solr log and then alerts on Error and Severe events only. It doesn't apply to servers by default because it can be quite noisy if you don't have it tuned properly. Once you're sure it's not going to blow up after you've applied it, go ahead and change the Applies To rules and you should be good to go. Hope these help!
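For reference, the level-filtering step an eventsource like this performs can be sketched as below. The endpoint path and JSON shape in the comment are assumptions about Solr's logging API, not taken from the actual eventsource; only the "alert on ERROR and SEVERE" rule comes from the post above.

```python
import json

# Hypothetical sample of what Solr's logging endpoint
# (e.g. /solr/admin/info/logging?since=0 - shape assumed) might return.
SAMPLE = json.loads("""{
  "history": {"docs": [
    {"time": "2018-01-01T00:00:00Z", "level": "INFO",   "message": "commit"},
    {"time": "2018-01-01T00:01:00Z", "level": "ERROR",  "message": "shard down"},
    {"time": "2018-01-01T00:02:00Z", "level": "SEVERE", "message": "OOM"}
  ]}
}""")

def alertable_events(payload, levels=("ERROR", "SEVERE")):
    """Keep only the log events at levels the eventsource should alert on."""
    return [d for d in payload.get("history", {}).get("docs", [])
            if d.get("level") in levels]

events = alertable_events(SAMPLE)
for e in events:
    # One line per event, in a "LEVEL: message" shape LogicMonitor can parse
    print("%s: %s" % (e["level"], e["message"]))
```

Tuning mostly means tightening `levels` or suppressing messages that repeat constantly in your environment.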
  6. SOLR Error Logs

    W9PN3Y I thought I had already posted this one, but regardless - here it is. This does not apply to any servers by default as it can be extremely noisy if you don't have it tuned. This makes an API call to solr to pull error and severe logs and then formats them so that LogicMonitor can understand them. Before applying this, it's not a bad idea to review those logs manually to make sure something isn't repeatedly triggering (as is common with SOLR). Still - it's helped us detect and diagnose a range of issues that would have otherwise been difficult to see.
  7. Apache Solr datasources need scrubbing

    @Michael Rodrigues - Thanks! I've got more on the way. @George Bica - Here's another one. It might not work for your version of SOLR, but it has the exact same requirements otherwise. Add solr.port and solr.append to your solr servers and this datasource will provide lots of useful JVM metrics without the need for enabling JMX. I'm still going through my datasources, so I might still have one or two to post. I'll keep replying to this thread with whatever I've got.
  8. SOLR JVM Stats (non-JMX)

    ANLX64 This monitors solr JVM stats via the SOLR API without the need to enable jmx. This datasource may not work on older solr versions as this particular API call was only recently introduced. Still, it should be very useful for monitoring the overall health of the JVM application.
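A sketch of how a collector script might turn that API's response into flat datapoints: the `/solr/admin/metrics?group=jvm` endpoint is real (added in Solr 6.4, which matches the "recently introduced" note above), but this response shape is simplified and the datapoint names are illustrative.

```python
# Simplified stand-in for a /solr/admin/metrics?group=jvm response.
SAMPLE = {"metrics": {"solr.jvm": {
    "memory.heap.used": 123456789,
    "memory.heap.max": 2147483648,
    "threads.count": 42,
    "os.name": "Linux"}}}  # non-numeric entries are skipped below

def jvm_datapoints(payload):
    """Keep numeric JVM metrics; dots become underscores for datapoint names."""
    jvm = payload.get("metrics", {}).get("solr.jvm", {})
    return {k.replace(".", "_"): v for k, v in jvm.items()
            if isinstance(v, (int, float))}

# URL built from the solr.port / solr.append device properties
host, port, append = "localhost", 8983, "solr"
url = "http://%s:%d/%s/admin/metrics?group=jvm&wt=json" % (host, port, append)

dps = jvm_datapoints(SAMPLE)
for name, value in sorted(dps.items()):
    print("%s=%s" % (name, value))
```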
  9. Apache Solr datasources need scrubbing

    @George Bica Here's another for monitoring the status of the SOLR collections (as opposed to the performance stats).
  10. ZLPJP3 This datasource monitors the status of each solr collection without the need to enable JMX. It is done via batchscript and seems to be very efficient. The only alert set up is for cores that are recovering. Other alerts can be set up at your discretion. There are a few graphs included as well.
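The "cores that are recovering" check can be sketched like this. CLUSTERSTATUS is a real Collections API action, but the trimmed response below and the datapoint name are assumptions for illustration, not the datasource's actual script.

```python
# Hypothetical, trimmed CLUSTERSTATUS response
# (/solr/admin/collections?action=CLUSTERSTATUS) with only the fields we need.
SAMPLE = {"cluster": {"collections": {
    "products": {"shards": {"shard1": {"replicas": {
        "core_node1": {"state": "active"},
        "core_node2": {"state": "recovering"}}}}},
    "users": {"shards": {"shard1": {"replicas": {
        "core_node3": {"state": "active"}}}}}}}}

def recovering_counts(payload):
    """Count replicas in the 'recovering' state, per collection."""
    counts = {}
    for name, coll in payload.get("cluster", {}).get("collections", {}).items():
        n = 0
        for shard in coll.get("shards", {}).values():
            for replica in shard.get("replicas", {}).values():
                if replica.get("state") == "recovering":
                    n += 1
        counts[name] = n
    return counts

for coll, n in sorted(recovering_counts(SAMPLE).items()):
    print("%s.RecoveringReplicas=%d" % (coll, n))
```

An alert threshold on that count (> 0) reproduces the one alert the datasource ships with.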
  11. Apache Solr datasources need scrubbing

    @George Bica - I cooked this up and have been steadily improving it over time. I figured I might as well share it since it seems to be causing you some pain. This datasource polls SOLR via the API and port. On the device, you'll need to set up these two properties: solr.port (default is 8983) solr.append (default is solr) The scripts I put together are pretty efficient, so discovery and monitoring should only take a minute or two to get going. If it still doesn't work, you might be running into a firewall issue or something. I've got more SOLR datasources I'll be posting in the next few minutes so keep an eye out. Though - you might not be able to pull them down until they're cleared by the fine folks at LogicMonitor.
  12. 3Z32Z4 This datasource monitors a large amount of SOLR performance data for each SOLR collection/core. It is done via batchscript and appears to return data extremely reliably and efficiently. There are no alert thresholds set up as performance expectations may vary depending on usage. I've also set up basic overview and per-instance graphs. There are 65 datapoints here though, so I'm sure more can be added.
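For anyone unfamiliar with batchscript collection: the collector runs the script once and parses one `instance.datapoint=value` line per datapoint, which is why it stays efficient across many cores. A minimal sketch of that output stage, with made-up core names and metrics:

```python
# Made-up per-core numbers standing in for the ~65 real datapoints.
CORE_METRICS = {
    "products_shard1_replica1": {"numDocs": 120000, "avgRequestTime": 12.5},
    "users_shard1_replica1":    {"numDocs": 45000,  "avgRequestTime": 3.1},
}

def batchscript_lines(core_metrics):
    """Render metrics as the instance.datapoint=value lines the
    batchscript collector parses in a single run."""
    lines = []
    for core, dps in sorted(core_metrics.items()):
        for dp, value in sorted(dps.items()):
            lines.append("%s.%s=%s" % (core, dp, value))
    return lines

for line in batchscript_lines(CORE_METRICS):
    print(line)
```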
  13. Windows_Cluster_Failover

    R79DJL I'm not sure if this datasource was removed intentionally, but there used to be a datasource named "Windows_Cluster_Failover" that monitored for cluster failover events. We've been using it, but it triggered a false alarm last night, which prompted me to go back and rebuild it. This datasource monitors a Windows cluster for a failover event. If a failover is detected, it is logged to a file in the LogicMonitor Collector's installation directory (default is "c:/Program Files (x86)/LogicMonitor/bin/"). The datasource exits if certain calls fail altogether (to prevent false alerts), and there is a separate alert that will trigger if the datasource continuously fails to get data. So far this appears to be working more reliably, so I thought I'd share.
  14. Apache Solr datasources need scrubbing

    This datasource doesn't poll SOLR via the API. It uses JMX which has to be enabled on your SOLR cores (collections) in solrconfig.xml. You can read how to do that here: JMX provides lots of data that might not be exposed via the standard api queries. However, SOLR is actually quite good about making sure all sorts of monitoring data is available without JMX and they don't recommend enabling it in production.
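For completeness, enabling JMX the way this datasource requires is a one-element change in each core's solrconfig.xml, shown here as a minimal fragment (in newer Solr releases JMX reporting is configured differently, so check the docs for your version):

```xml
<config>
  <!-- Expose Solr's MBeans over JMX so a JMX-based datasource can poll them -->
  <jmx />
</config>
```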
  15. I would really like to see a feature implemented that allows for easy and adjustable graph smoothing. This could be accomplished by adding a switch to the UI in the graph configuration screen. If the switch is set to "enabled", a drop-down appears prompting for an integer between 1% and 5%. This number represents the percentage of total datapoints used to calculate the "smoothed" values. A second drop-down prompts for the position of the calculation: past, future, or both (default). For example: on a graph containing 500 values with rolling average enabled and set to 3% (both), each value on the graph would be recalculated as the average of itself plus the 15 (3% of 500) preceding and following datapoints. (The example screenshots - the original data, then the same data with 3% and 5% smoothing - were created with Timelion in Kibana using a similar algorithm.)
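The rolling-average scheme described above can be sketched in a few lines of Python. The `smooth` function is illustrative, not an existing LogicMonitor feature: the window radius is pct% of the series length, and `position` selects past-only, future-only, or centered averaging.

```python
def smooth(values, pct=3, position="both"):
    """Rolling average; the window radius is pct% of the series length
    (e.g. 3% of 500 points = 15 neighbors on each side)."""
    n = len(values)
    k = max(1, round(n * pct / 100.0))
    out = []
    for i in range(n):
        if position == "past":
            lo, hi = max(0, i - k), i + 1
        elif position == "future":
            lo, hi = i, min(n, i + k + 1)
        else:  # "both": centered window, clipped at the series edges
            lo, hi = max(0, i - k), min(n, i + k + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

print(smooth([0, 10, 0, 10, 0], pct=20))
```

Clipping the window at the edges (rather than padding) keeps the endpoints from being dragged toward zero.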