Search the Community

Showing results for tags 'alert'.

Found 19 results

  1. Hello guys, We have created a few alarm 'views' for our different monitoring teams. Is there any way to share the alarm console filters (across users, roles, etc.)? If not, is that capability coming in the near future? Appreciate your time. Regards,
  2. Hi, I'm fairly new to APIs and would like a little help, please. I am trying to query the LM API for specific alerts with Python. I am able to retrieve a full list of alerts via Python, which is a good starting point. I was using the following doc: What I would like some help with is the following: - Is there a way to retrieve alerts only for a specific folder? We have customers under specific folders. - How would I retrieve alerts with only a specific string in the resource name? E.g. all customer devices will have ** in the name, so I would like to filter for only alerts of devices with ** in the resource name. - How would you do multiple queries in one API call, e.g. a query with both a filter and a sort? Thank you
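A minimal sketch of the kind of query described above, using only the Python standard library. The portal URL and Bearer-token auth are placeholders, and the `monitorObjectName~"…"` filter clause follows LogicMonitor's filter syntax (`~` = contains, `:` = equals, comma-separated clauses are ANDed); verify field names against the current REST API reference before relying on them. For folder-scoped results, querying the device group's own alerts endpoint is the usual approach.

```python
import json
import urllib.parse
import urllib.request

PORTAL = "https://yourcompany.logicmonitor.com"   # hypothetical portal URL
API_TOKEN = "<bearer-token>"                      # assumes Bearer-token auth

def alert_params(name_contains=None, sort="-startEpoch", size=300):
    """Build the query parameters for GET /santaba/rest/alert/alerts.

    A filter and a sort are combined simply by sending both query
    parameters in the same request."""
    params = {"sort": sort, "size": size}
    if name_contains:
        # '~' is LogicMonitor's "contains" operator
        params["filter"] = f'monitorObjectName~"{name_contains}"'
    return params

def get_alerts(**kw):
    url = f"{PORTAL}/santaba/rest/alert/alerts?" + urllib.parse.urlencode(alert_params(**kw))
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "X-Version": "3",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("items", [])

# For alerts under one device-group "folder", the per-group endpoint
# (/santaba/rest/device/groups/{id}/alerts) can be called the same way.
```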
  3. I send alerts to a chat channel, and I want to remove the lines below from the alert message, since they would not work in the chat channel. You may reply to this alert with these commands: - ACK (comment) - acknowledge alert - NEXT - escalate to next contact - SDT X - schedule downtime for this alert on this host for X hours. - SDT datasource X - SDT for all instances of the datasource on this host for X hours - SDT host X - SDT for entire host for X hours I tried using a custom email integration, but the above was still included.
  4. I want to add a recipient group using the REST API, but I'm not able to find any documentation. Am I making a mistake, or is something else wrong? Thanks Regards, ShahzadAli
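Recipient groups live under the `/setting/` branch of the REST API, so a create call might look like the sketch below. The endpoint path, the field names (`groupName`, `recipients`, `addr`), and the Bearer-token auth are all assumptions patterned on other settings endpoints; check them against the current API documentation before use.

```python
import json
import urllib.request

PORTAL = "https://yourcompany.logicmonitor.com"  # hypothetical portal URL
API_TOKEN = "<bearer-token>"                     # assumed auth method

def recipient_group_payload(name, description, emails):
    """Build the JSON body for a recipient-group create.

    Field names follow the pattern of other /setting/ endpoints and
    should be verified against the current API reference."""
    return {
        "groupName": name,
        "description": description,
        "recipients": [{"type": "ARBITRARY", "method": "email", "addr": e}
                       for e in emails],
    }

def create_recipient_group(name, description, emails):
    body = json.dumps(recipient_group_payload(name, description, emails)).encode()
    req = urllib.request.Request(
        f"{PORTAL}/santaba/rest/setting/recipientgroups",
        data=body, method="POST",
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json",
                 "X-Version": "3"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```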
  5. We have script DataSources that output useful diagnostics information that helps Operations understand the numeric value when an alert is generated. We want to include the raw output from a DataSource in the alert and email body. What we need is a ##DSRAWOUTPUT## token which contains the complete raw output sent to standard out from a DataSource script. For example, we monitor for processes running under credentials they are not supposed to be running under, and we want to include that info as textual information in the alert/email body.
  6. I started a chat under ticket 119191 and discussed this with Seth. I would like you to consider this for your next roadmap. I want to be able to see what alerts "would fire" without enabling the alerts. Scenario: Onboarding 10 new devices to a new group with alerting disabled. I want to QUICKLY see how many would fire if I enabled them. No hunting, no slowly turning each one up one by one to prevent the new alert deluge. Maybe a report with applied thresholds and current values with clear indicators what alert level the value is within at the report runtime.
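In the meantime, something close to this "dry run" report can be approximated from the API: pull each instance's current value and its effective threshold, then evaluate the threshold locally. A simplified sketch, assuming ">"-style ascending thresholds like `> 80 90 95` and ignoring consecutive-poll counts; `would_fire` and `dry_run_report` are hypothetical helper names:

```python
def would_fire(value, thresholds):
    """Return the level ("warn"/"error"/"critical") that `value` would
    alert at for a "> 80 90 95"-style threshold string, or None.
    Simplified: only the ">" operator, no consecutive-poll handling."""
    op, *bounds = thresholds.split()
    level = None
    for name, b in zip(("warn", "error", "critical"), bounds):
        if op == ">" and value > float(b):
            level = name  # keep the highest level exceeded
    return level

def dry_run_report(rows):
    """rows: (device, datapoint, current_value, threshold) tuples,
    e.g. pulled from the REST API for a group with alerting disabled.
    Returns the same rows annotated with the level that would fire."""
    return [(device, dp, value, would_fire(value, thr))
            for device, dp, value, thr in rows]
```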
  8. I think it would be great if you added some headers to your mails. This would help mail programs create a conversation for every alert and its clear message. Right now we have only separate messages: LMD... critical - Host1 Ping PingLossPercent LMD... critical - Host2 Ping PingLossPercent LMD... ***CLEARED***critical - Host2 Ping PingLossPercent LMD... ***CLEARED***critical - Host1 Ping PingLossPercent In my opinion, it would be better if these messages formed a conversation per alert: LMD... ***CLEARED***critical - Host1 Ping PingLossPercent LMD... critical - Host1 Ping PingLossPercent LMD... ***CLEARED***critical - Host2 Ping PingLossPercent LMD... critical - Host2 Ping PingLossPercent As far as I know, the header is Thread-Index.
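For standards-based mail clients, the relevant headers are `Message-ID`, `In-Reply-To`, and `References` (RFC 5322); `Thread-Index` is Outlook's proprietary equivalent. A sketch of how an alert mailer could derive them from the alert id so the open and cleared messages land in one conversation — the `@lm.example` domain and `alert_mail` helper are placeholders, not LogicMonitor functionality:

```python
from email.message import EmailMessage

def alert_mail(subject, body, alert_id, first=False):
    """Build an alert email whose threading headers are derived from
    the alert id, so follow-ups (e.g. ***CLEARED***) thread under the
    original message in RFC 5322-compliant clients."""
    msg = EmailMessage()
    msg["Subject"] = subject
    suffix = "open" if first else "clear"
    msg["Message-ID"] = f"<alert-{alert_id}-{suffix}@lm.example>"
    if not first:
        # point back at the original alert's Message-ID
        root = f"<alert-{alert_id}-open@lm.example>"
        msg["In-Reply-To"] = root
        msg["References"] = root
    msg.set_content(body)
    return msg
```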
  9. Hi, Per discussion with Russ G. & Kenyon W. & Jake C. yesterday, I would like to submit this as a feature request to the DEV team and see whether there is any way to add this feature to a future roadmap. In short, it would be great if end users could configure multiple incidents/alerts into one group and generate only one alert (with the highest severity). Here is an example of Tomcat being shut down, which shows a number of alerts generated: 1. Tomcat shutdown 'critical' alert is generated (1 alert) 2. ActiveMQ consumer count of a specific queue has reached zero, an 'Error' alert (about 10-12 alerts in our case) In this case, the end user would like to be able to configure LM to consolidate all of these into one critical alert (and clear it once all AMQ 'Error' alerts are cleared). I saw something like this in PagerDuty and must say it's a great feature to have in LogicMonitor to reduce the number of alerts being processed by the TechOps team. Thanks & Best Regards, Horace
  10. Often when an alert pops up, I find myself running some very common troubleshooting/helpful tools to quickly gather more info. It would be nice to get that info quickly and easily without having to go to other tools when an alert occurs. For example - right now, when we get a high CPU alert the first thing I do is run pslist -s \\computername (PSTools are so awesome) and psloggedon \\computername to see who's logged in at the moment. I know it's possible to create a datasource to discover all active processes, and retrieve CPU/memory/disk metrics specific to a given process, but processes on a given server might change pretty frequently, so you'd have to run active discovery frequently. It just doesn't seem like the best way, and most of the time I don't care what's running on the server and only need to know "in the moment." A way to run a script via a button for a given datasource would be a really cool feature. Maybe on the datasource you could add a feature to hold a "gather additional data" or meta-data script; the script could then be invoked manually on an alert or datasource instance, i.e. when an alert occurs, you can click a button in the alert called "gather additional data" or something, which would run the script and produce a small box or window with the output. The ability to run periodically (every 15 seconds or 5 minutes, etc.) would also be useful. This would also give a NOC the ability to troubleshoot a bit more or provide some additional context around an alert without everyone having to know a bunch of tools or have administrative access to a server.
  11. Dear LogicMonitor Team, Greetings. I searched through the forums for a similar request and I apologize if I did not find one previous to this. After continuous use of LogicMonitor with an integration, it has been determined that in the event an alert is generated before SDT is enabled, if the alert should clear the SDT prevents that update from being passed into the ticketing system. This means that there is no notification, nor confirmation, of an issue being cleared during or after an SDT window. What could be implemented to address this issue is a simple toggle switch, in the global settings page, to allow for clear notifications of alerts to be enabled even during SDT. This would allow any situation in which an issue occurs before SDT and generates notifications to also be followed up with the clear condition even during an active SDT window. I would imagine that in many environments, this would be beneficial. I thank the team in advance for consideration of this request. Respectfully, Alejandro Esmael Align
  12. Please, please, please make the alert filter GLOB expression input field wider, much wider. It's very frustrating not being able to see the entire expression as you type. It would also be great if the GLOB matcher results did not cover the existing GLOB expressions, so we can see which ones we have already added.
  13. Please make it so that items in the Group column of the alert widget are each placed on a new row, just as the Full Path column does already. This would make it much easier to read the items.
  14. One of the most common support cases we face every day is "why am I receiving this alert?" This article explains the steps for determining why you are receiving an alert: 1) Understand the alert received, 2) Check its validity via raw data and thresholds, 3) Check delivery.
1) Understanding the alert received. The first step when you receive an alert, whether via email, text, or any ticketing system, is to understand it: look at which device the alert is for, which datapoint, and the value of the alert. For example, in an email alert message it would appear as below. LogicMonitor Alert: Host: ##HOST## Host Group: ##GROUP## Datasource: ##DATASOURCE## Datapoint: ##DATAPOINT## Description: ##DSIDESCRIPTION## Value: ##VALUE## Level: ##LEVEL## Start: ##START## Duration: ##DURATION## Reason: ##DATAPOINT## ##THRESHOLD## ##ALERTID##
2) Checking validity via raw data and thresholds. Next, once you have determined the alert source, you need to understand why the alert was triggered. Start by looking at the threshold set for that particular datapoint, then go to the Raw Data tab of the datapoint to check whether the values meet that threshold. For example, in this case a critical alert was received with a threshold of 80 90 95, and an alert is only triggered after 20 consecutive polls fall within this range. The next step is to check the Raw Data tab to determine whether this condition was met. Judging from the raw data, all 20 polls met the threshold level of 80 90 95; the level of the alert is determined by the last poll, and since the last poll was 96.67, which falls into the critical range, a critical alert was sent.
3) Checking delivery. The last step is to check the alert rule and escalation chain to see whether the alert was routed to the correct rule and escalation chain. To do so, go to the Alert Tuning tab and check the alert routing for that particular instance and datapoint. Here you can see that the Alert Rule applied is Critical - Default and the Alert Chain/Escalation Chain is Critical - Default. Under the Alert Chain is the list of email addresses that will receive a notification when the threshold is met.
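The evaluation logic in step 2 can be sketched in a few lines: every one of the trigger-count polls must exceed the lowest threshold, and the severity comes from the most recent poll (96.67 → critical in the example above). This is an illustration of the described behavior under those assumptions, not LogicMonitor's actual implementation; `poll_level` and `alert_level` are hypothetical names:

```python
def poll_level(value, bounds=(80, 90, 95)):
    """Map one poll value onto None/"warning"/"error"/"critical" for
    ascending ">"-style thresholds like 80 90 95."""
    level = None
    for name, b in zip(("warning", "error", "critical"), bounds):
        if value > b:
            level = name  # keep the highest threshold exceeded
    return level

def alert_level(polls, trigger_count=20, bounds=(80, 90, 95)):
    """An alert fires only when the last `trigger_count` polls all
    exceed the lowest threshold; the alert's severity is taken from
    the most recent poll."""
    recent = polls[-trigger_count:]
    if len(recent) < trigger_count:
        return None  # not enough consecutive polls yet
    if all(poll_level(v, bounds) is not None for v in recent):
        return poll_level(recent[-1], bounds)
    return None
```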
  15. I would like to have the option within an alert rule to "continue" processing to the next rule. For example, we would like to handle integrations differently than email alerts. If we could create one rule at the top with the highest priority to take an action with our integration, it would then allow me to customize everything else in separate rules. The only other way to handle this is to add our integration to every escalation chain we create, which is tedious and will lead to manual errors.
  16. I'm coming around to love clustered alerts as more of my company moves to dynamic environments. But I really need to be able to customize the email alert messaging for clustered alerts. So I would like to see two things: 1. The ability to set a custom alert message per clustered alert 2. The ability to assign properties to clustered alerts so that they can be referenced in the alert message via ##TOKENS##.
  17. We would like to see a ##STEP## variable in the template. It is difficult to determine which part of the site is down by reading the email alert. Current variables in Services Overall Alert template are: Subject ##LEVEL## - ##SERVICE## gets ##VALUE## in ##CHECKPOINT## since ##START## Body Service: ##SERVICE## Checkpoint: ##CHECKPOINT## Description: ##DETAIL## There is the ##URL## variable but it only reflects the main page, not any of the steps.
  18. For a datasource, we would like to be able to set the alert threshold over more than a single sample. You can set the number of threshold violations needed for an alert, but this is far different in nature than setting a threshold over a time range. For example, 60% CPU over 2 hours versus 60% CPU over 10 samples. You might see CPU fluctuate within that period, preventing an alert, but the average over a longer period is valuable. Similarly, we would like to get alerts not just on average over a time period, but also on slope over a time period, though perhaps the latter should be a separate request. Thanks, Mark
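The difference between the two semantics can be illustrated with a short sketch: an average-over-window check fires on sustained load even when individual polls dip below the threshold, and a least-squares slope gives a simple basis for trend alerting. These are hypothetical helpers, not existing LogicMonitor functionality:

```python
def window_average_alert(samples, threshold=60.0, window=10):
    """Alert on the average of the last `window` samples rather than
    requiring every sample to violate: 60% CPU *on average* over the
    window fires even if individual polls dip below 60."""
    recent = samples[-window:]
    if len(recent) < window:
        return False  # not enough history yet
    return sum(recent) / window > threshold

def slope(samples):
    """Least-squares slope per sample interval over the whole series,
    usable as a simple rate-of-growth signal."""
    n = len(samples)
    mx = (n - 1) / 2                 # mean of 0..n-1
    my = sum(samples) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(samples))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den
```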
  19. Hi, We get quite a few alerts in our LM; about 90% of them are made up of a few problems, like VMware storage LUNs. It would be useful to be able to group these, to get rid of the 'noise' so we can focus on the more urgent ones. What would be even better would be if you could assign permissions to these groups. For instance, first line can see certain groups and third line can see other groups. Kris