LogicMonitor Portal Metrics is a DataSource that queries the API of a specified LogicMonitor portal for overall statistics such as device, collector, and alert counts. It was originally written by fellow Sales Engineer @Jake Cohen and updated by Monitoring Engineer @Julio Martinez (credit where credit is due!). It can be useful for tracking activity within an account over time.

The recommended/required method for implementing the DataSource is as follows:

1. Download the LogicMonitor Portal Metrics DataSource from the LogicMonitor Repository using locator code J7RGZY.
2. Add a new device to your account in Expert Mode. Use 'logicmonitor.account' in place of the IP Address/DNS, and whatever you'd like for the Display Name ("LogicMonitor Portal", for example). This device won't respond to standard DataSources, so you'll probably want to do some alert tuning once it's been added.
3. Add properties to the device to allow the DataSource to authenticate. The required properties are:
   - lmaccount (LogicMonitor account name, without the logicmonitor.com at the end)
   - lmaccess.id (LogicMonitor API Key Access ID)
   - lmaccess.key (LogicMonitor API Key Access Key)
4. Once those properties are in place, the DataSource should automatically apply to the new device.
5. Download the LogicMonitor Portal Metrics dashboard from Github.

Let us know what you think!
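For anyone curious what the DataSource does with those lmaccess.id/lmaccess.key properties, here's a minimal Python sketch of building an LMv1-style Authorization header for a LogicMonitor REST API request. This is a generic illustration, not the DataSource's actual script; the function name and arguments are ours:

```python
import base64
import hashlib
import hmac
import time

def build_lmv1_header(access_id, access_key, http_verb, resource_path,
                      data="", epoch_ms=None):
    """Build an LMv1 Authorization header value for a LogicMonitor API call.

    The signature is an HMAC-SHA256 of verb + timestamp + body + path,
    keyed with the API access key, then hex-encoded and base64-encoded.
    """
    if epoch_ms is None:
        epoch_ms = int(time.time() * 1000)
    message = f"{http_verb}{epoch_ms}{data}{resource_path}"
    digest = hmac.new(access_key.encode("utf-8"),
                      message.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode("utf-8")).decode("utf-8")
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# Example: header for GET https://<lmaccount>.logicmonitor.com/santaba/rest/device/devices
header = build_lmv1_header("myAccessId", "myAccessKey", "GET", "/device/devices")
```

The resulting string goes in the request's `Authorization` header; the device properties above simply supply the account name (for the URL) and the key pair (for the signature).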
Would it be possible to provide an API call (or calls) that returns a 'hit count' (historical and current) against alert rules and escalation chains? Ideally it would accept a filter for the alert severity levels of interest. This would help in providing metrics around how many alerts are being generated and to which areas of responsibility, and help drive additional questions around configuration and maintenance. I know there is a report to extract thresholds and their destinations, but these hit-count metrics don't seem to be available currently. Many Thanks ~Nick
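Until something like this exists natively, the requested aggregation can be approximated client-side. A minimal sketch, assuming you've already fetched alert records from the API and that each record exposes the matched rule name and a numeric severity (both field names here are hypothetical, for illustration only):

```python
from collections import Counter

def rule_hit_counts(alerts, min_severity=2):
    """Count alerts per matched rule, keeping only those at or above
    the given severity. Field names 'rule' and 'severity' are assumed."""
    counts = Counter()
    for alert in alerts:
        if alert.get("severity", 0) >= min_severity:
            counts[alert.get("rule", "unknown")] += 1
    return counts

# Hypothetical sample data to show the shape of the result:
sample = [
    {"rule": "Critical-Escalation", "severity": 4},
    {"rule": "Critical-Escalation", "severity": 3},
    {"rule": "Warning-Only", "severity": 2},
    {"rule": "Warning-Only", "severity": 1},
]
counts = rule_hit_counts(sample, min_severity=2)
# counts["Critical-Escalation"] == 2, counts["Warning-Only"] == 1
```

A native endpoint would obviously be preferable, since it could count hits server-side over historical data rather than requiring the client to page through alerts.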
One thing everybody is looking for is convergence: a single tool that does everything for observability. Monitoring, metrics, log analysis - LM does a good job on the first two, but I still need a separate tool to get useful metrics and trends out of my application logs.

LM should look into adding ELK-as-a-Service to the LM feature stack (provide customers with an API endpoint they can feed logs to, or something similar). Then customers could have service-level monitoring (URL response times, etc.), plus the traditional LM suite of monitors/metrics, plus LM Cloud, *plus* the most useful info of all: data mined from application logs. That's generally where the really good insights come from (and most of what's unique to each customer's business/offering).

ELK is well-known, open source, and fairly mature. It's relatively easy to scale as well, so it should be straightforward for LM engineering to put together at least a proof of concept. Meanwhile, I'm looking at things like Papertrail, Librato, and Logz.io for my application logs - but I'd really like to have One Tool to Rule Them All.