All Activity

  1. Today
  2. I am having issues when I try to use your Data Collection script for this. The discovery works fine. When doing the data collection, it times out with no results. I do have my company defined in the URI. Can you please point me in the right direction?
  3. Yesterday
  4. Last week
  5. This sounds perfect! Thank you for the suggestion! I put the 2 instances I want to monitor together and made a rule for it. Just need to test it and see if it works as intended.
  6. Sure, you can use Service Insight for this, but it is a premium feature, which makes it an expensive mallet for something that should be available without that extra cost. Alternatively, there should be a Service Insight "lite" for this kind of thing, leaving the costly part for the intended enhanced features of Service Insight (like Kubernetes). My recommendation was to extend cluster alerts so you could at least match up instances. My use case at the time was detecting an AP offline on a controller cluster. There is no way to do this without SI, which as you say is complex, and it is an ex
  7. You could create a service out of those two links. The service metric would be interface status. You would choose to aggregate the status data by "mean". If both links are up, they'd both return 1, so the average would be 1. If one link is down, you'd get the average of 1 and 2 (1.5). If both links are down, you'd average 2 and 2 (2). Set your threshold to >=2 and you should be good to go. The only tedious part is setting this up for each pair of links you have.
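
    To make the arithmetic concrete, a minimal sketch (assuming a status datapoint where 1 = up and 2 = down, as in the example above; the function name is just for illustration):

    def should_page(link_statuses):
        # Mean of the pair: 1.0 if both up, 1.5 if one down, 2.0 if both down.
        mean_status = sum(link_statuses) / len(link_statuses)
        return mean_status >= 2  # mirrors the ">= 2" service alert threshold

    print(should_page([1, 1]))  # both links up   -> False
    print(should_page([1, 2]))  # one link down   -> False
    print(should_page([2, 2]))  # both links down -> True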
  8. Example: I have one router connected to the network's other router with 2 links (interfaces, tunnels, etc.). If one of the links goes down, the normal alert rule to email me is fine. However, if BOTH links go down I want a page. Cluster alerts were close to what I needed, but they only seemed to support "if ANY 2 links go down then do this" rather than "if these 2 specific links go down." I care about the relationship between 2 specific links on a device, not other ports going to random servers happening to go down. (I have different alerts for those.) Has anyone dealt with an
  9. I've updated the script to reflect changes made since the original post. It now includes more examples for handling different alerts (including restarting a Windows service) and a helper function for forwarding syslogs if needed. Heaven forbid you ever get such an alert storm, but I tested triggering 500 alerts/minute to see if the script could handle them all, and it successfully processed all of them within a second of the collector performing its regular 60-second External Alerting check. For reference, the test was simply logging each alert to a file using the script's LogWrite function.
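
    For illustration only, a rough Python analogue of that kind of file-logging helper (the actual script's LogWrite function isn't reproduced in this post, so the name, log path, and alert id below are placeholders):

    import datetime

    LOG_PATH = "external_alerts.log"  # placeholder path, not from the original script

    def log_write(line):
        # Append a timestamped line to the alert log file.
        timestamp = datetime.datetime.now().isoformat()
        with open(LOG_PATH, "a") as log_file:
            log_file.write(f"{timestamp} {line}\n")

    log_write("Alert LMD12345 triggered on host01")  # example usage with a made-up alert id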
  10. @Sarah Terry do you know what is required in order to get the custom columns to be returned either in the API or through the Python SDK?
  11. DBA-ONE

    Using API v2

    I will change the code to do that, but I still have the issue of the custom columns not being exposed. The docs don't offer any information other than the possibility of accessing them.
  12. Haven't used chunker much, but if devicealert contains the response of the api_instance.get_alert_list() method, then you should be looping through devicealert.items, not looping through devicealert looking for items on each child of devicealert.
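
    For example, a minimal sketch (assuming api_instance is the SDK's LMApi client and that get_alert_list() was called directly, so devicealert is a single paginated response; the filter is just the one from your commented-out line):

    # The alerts live on the response's .items attribute; loop over that directly.
    devicealert = api_instance.get_alert_list(filter='cleared:"*"')
    for alert in devicealert.items:
        print(alert.id, alert.custom_columns)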
  13. The API call to get at the data that's shown on the raw data tab is documented here: https://www.logicmonitor.com/swagger-ui-master/dist/#/Data/getDeviceDatasourceInstanceData.

    1) You'll need to get the device id. You can get that by calling "/device/devices/?fields=displayName,id,name"

    2) You'll also need to get the devicedatasourceid (known as "hdsId" in the documentation). Do this call to get the list of datasources by name and id: "/device/devices/{deviceId}/devicedatasources?fields=dataSourceName,id"

    3) You'll also need to get the instance id that you want to fetch data for
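
    A rough Python sketch of that whole chain (assuming a Bearer API token, a placeholder portal name, and that the final data endpoint matches the Swagger documentation linked above; the device and datasource names are made up for illustration):

    import requests

    PORTAL = "yourcompany"       # placeholder portal name
    TOKEN = "your-api-token"     # placeholder Bearer token
    BASE = f"https://{PORTAL}.logicmonitor.com/santaba/rest"
    HEADERS = {"Authorization": f"Bearer {TOKEN}", "X-Version": "3"}

    def get(path, params=None):
        resp = requests.get(BASE + path, headers=HEADERS, params=params)
        resp.raise_for_status()
        return resp.json()

    # 1) Device id, looked up by display name.
    devices = get("/device/devices", params={"fields": "displayName,id,name"})
    device_id = next(d["id"] for d in devices["items"] if d["displayName"] == "my-router")

    # 2) devicedatasource id ("hdsId"), looked up by datasource name.
    dds = get(f"/device/devices/{device_id}/devicedatasources",
              params={"fields": "dataSourceName,id"})
    hds_id = next(d["id"] for d in dds["items"] if d["dataSourceName"] == "Ping")

    # 3) Instance id, then the raw data for that instance.
    instances = get(f"/device/devices/{device_id}/devicedatasources/{hds_id}/instances")
    instance_id = instances["items"][0]["id"]

    data = get(f"/device/devices/{device_id}/devicedatasources/{hds_id}"
               f"/instances/{instance_id}/data")
    print(data)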
  14. DBA-ONE

    Using API v2

    If I place the sample you provided elsewhere in that function, I do get all None values.
  15. I hope they don't roll out the new UI. I am not a fan of the way it looks and our customers don't seem too fond of that Alerts view.
  16. DBA-ONE

    Using API v2

    def get_devicealert():
        value_list = []
        cursor = conn.cursor()
        #devicealert = chunker(api_instance.get_alert_list,'needMessage:"true", cleared:"*"')
        devicealert = chunker(api_instance.get_alert_list,'custom_Columns:"snow_company_sys_id"')
        for i in devicealert:
            for c in i.items:
                print(c.custom_columns)
        query = """INSERT INTO LMDeviceAlert(Internal_ID, chain,id, Monitor_Object_ID, Monitor_Oobject_Name, Monitor_Object_Type, [Rule], Rule_ID, Severity, Start_Epoch, End_Epoch, Acked_Epoch, Alert_Value, Ack_Comment, Acked, Instance_ID, Instance_Name, SDT
  17. Yes @Stuart Weenig, exactly like that. Can you please help me find any sample API scripts that I can refer to for this purpose? Thanks in advance.
  18. Sorry folks, I inadvertently paused the recording at the start of today's webinar and it remained paused for the entire session, so we don't have a recording of it. You can access a few of the past sessions (which cover the same content) here, here, and here. There was a question about time-based escalation chains; the official documentation for this feature is here (look for "Create time-based chain"). I mentioned a community-developed solution, which can be found here. There was also discussion about our YouTube channel; that channel is here.
  19. What happens if you do this?

    for alert in lm.get_alert_list().items:
        print(alert.custom_columns)

    If you get a bunch of Nones, then it may be that the custom columns aren't fetched as part of that method. I sort of remember there being something special you had to do to get the custom columns, but I can't remember what it was. Try the above and let me know if all you get are Nones.
  20. DBA-ONE

    Using API v2

    Does anyone have an example for accessing the custom columns? I cannot figure it out.
  21. We're working on getting "Windows Parity" on Linux collectors this year: WMI, PowerShell, PDH. We're not using wmic, but more info will come as we progress.
  22. It's actively being worked on. Covid is partially to blame for the slowdown of transitioning the entire product to the new version of the UI. I think the Resources page is next on the list of upgrades. Not sure when, but probably soon.
  23. Hi, quite a long time ago LM rolled out a beta view for the Alerts tab, with plans to eventually update the rest of the UI. That was some time ago, though, and I haven't seen any update since. Does anyone know what is happening with this?
  24. I'll reply to your email (will include our CSM as well for his awareness). Thanks!
  25. Hello, my name is Ryan Albert Donovan, Senior Product Manager here at LM. If either of you have some time this week, I'd love to have a brief call to better understand the issue or see it recreated; neither I nor my developers have been able to reproduce it in-house. If not, a screen capture of the issue would suffice. Let me know if this works for either of you. Ryan
  26. Here's one I did long ago. I recently rebuilt it as an SSH-based DS, which is also in that repo.
  27. Hi Stu, if we don't have an SSH user/pass, do we still need to use the old method? What's a simple OOTB SNMP DS that I can clone and change the OID it polls to create this custom DS? It's for monitoring MISP job status.