Mike Moniz

Everything posted by Mike Moniz

  1. I'm not sure I follow, but I don't see how you can set up an ignore, since LM SNMP datapoints only use GET requests (afaik), which means it needs to know the exact OID to use. So if I understand correctly, can you do this? Set up Active Discovery to use 1.3.6.1.4.1.41916.9.2.1.6.4.64.68.161.220.4.107.199.135.25.2.12426 with wildcard discovery. That will get you wildvalues of 12386, 62520, etc. Then have a DataPoint using an OID of 1.3.6.1.4.1.41916.9.2.1.10.4.10.0.41.2.4.107.199.135.25.2.12386.##WILDVALUE## ? The base Active Discovery OID and the DataPoint OIDs being different is normal. Or are you saying that the OID for latency can have all sorts of different groups of numbers, rather than just the last number changing, so that you need a wildvalue of more than just the last number? I might be misunderstanding, since I've never seen OIDs that long before. At worst you can use scripting to do SNMP (see https://www.logicmonitor.com/support/terminology-syntax/scripting-support/access-snmp-from-groovy/ ) and make the wildvalue, or the check itself, whatever is needed, including using walks.
  2. I haven't dealt with this issue myself, but I did find a reference to the same problem, and someone using a 3rd-party tool to track it down: https://community.spiceworks.com/topic/2199890-nagios-xi-windows-logon-errors?page=1#entry-8292751
  3. Have you looked at the data APIs? I haven't used them myself, but they seem to fit the request. https://www.logicmonitor.com/support/rest-api-developers-guide/v1/data/get-graph-data/#Get-widget-data https://www.logicmonitor.com/swagger-ui-master/dist/#/Data/
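     As a rough illustration, a minimal PowerShell sketch of the widget-data call (untested; the widget ID is hypothetical and Bearer-token auth is an assumption, so adjust to whatever auth scheme you use):

        # Hypothetical example: fetch the data behind dashboard widget 42.
        # Assumes $apiToken holds a Bearer-type LM API token.
        $company  = "yourcompany"
        $widgetId = 42
        $headers  = @{ Authorization = "Bearer $apiToken" }
        $url = "https://$company.logicmonitor.com/santaba/rest/dashboard/widgets/$widgetId/data"
        Invoke-RestMethod -Uri $url -Method Get -Headers $headers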
  4. When you add snmp.meraki.com to LogicMonitor, with the correct SNMP details from the cloud website, LogicMonitor should pull in all the devices hosted in that cloud account. You can also add the devices directly, assuming the collector is on the same network, which provides a bit more detail, like listing each interface on switches, but also overlaps some with the cloud version. I would suggest setting up the cloud version first, then perhaps adding local versions of each type of device and reviewing the differences. You can always do both and disable checks that overlap if needed. Also note that if you have multiple Meraki cloud accounts, they each need to be on their own collector, OR you can use DNS tricks like those discussed here: Meraki Multiple Organizations https://www.logicmonitor.com/support/monitoring/networking-firewalls/meraki-cloud-wireless-access-controllers/
  5. Some possible workarounds: I think LM Config might be the better option for this, although I haven't played with it personally yet. It's designed for config files, but I don't see why the "config" file can't just be the version info. Also, rather than attempting to use properties to store state, you can try using a file on the collector, for example "../tmp/firmware_${hostname}.LastRun"; then it would be easy to read in the script. There might be an issue if the resource bounces around collectors, though. You might also be able to use Active Discovery instances with DataSources, which lets you set auto properties, but that might be tricky to implement.
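     A very rough PowerShell sketch of the file-on-collector idea (untested; the datapoint name and version-gathering line are placeholders):

        # Sketch: keep the last-seen firmware version in a file under the
        # collector's tmp directory and report whether it changed this poll.
        $hostname  = "##SYSTEM.HOSTNAME##"   # LM token, substituted at runtime
        $stateFile = "..\tmp\firmware_$hostname.LastRun"
        $current   = "1.2.3"                 # replace with however you collect the version

        $previous = if (Test-Path $stateFile) { (Get-Content $stateFile -Raw).Trim() } else { "" }
        if ($previous -ne $current) { Write-Host "firmwareChanged=1" } else { Write-Host "firmwareChanged=0" }
        Set-Content -Path $stateFile -Value $current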
  6. If you are not aware, you should be able to edit the SOAP XML values to fill in whatever fields you need. I don't use Autotask myself, but it looks very similar to other integrations. When you set up an integration, you fill in some values on the top half and then click on a Generate button. This auto-populates the HTTP Delivery section based on your values. You can edit the various parts of the HTTP Delivery section and modify the default SOAP XML request to add any other field you want. You can use various LM tokens or hard-coded values.
  7. When you create the Escalation Chain, you need to create Stages. In the escalation chain, choose the "+" next to Stages to create a new stage. This will pop up an "Add new recipients" window. Click on the "+", which asks for a user, and pick/type in a user as mentioned above. Once you pick a user, it will offer a "Contact Method" box to the right where you can pick your PD integration. This only shows up if you pick a normal user; it does not work for groups or API-only users. Click on the white save button, then the blue save button.
  8. I don't use PagerDuty itself, so I'm making some guesses, but with integrations in general you can use any active normal user in the escalation chain. I have a special "integration send" user that has a role with virtually no access, and I use that user just for our integrations.
  9. LogicMonitor itself will take care of most of that for you; you don't normally need to worry about which collector is going to be used. Make sure the AppliesTo covers all the servers you want to check against, and that is all you need to do. When you add a device/resource to LogicMonitor, you assign each one to a collector or collector ABCG. Any time you run a check or script against that device, LogicMonitor will automatically look up which collector is assigned to it and use that collector; you don't need to specify the collector each time. Ignore what I said about ##HOSTNAME##; you have that covered by the "hostname = hostProps.get("system.hostname");" line.
  10. If you didn't know, you can use the slideshow option to cycle through a list of dashboards. It uses a fixed time for all the dashboards (so it can't be 30s, then 15s, then 15s, but a fixed 15s or 30s per dashboard works). I guess you can cheat and set up a 15s cycle, then have two identical dashboards to make it look like one is shown for 30s. Slideshow doesn't seem to let you pick the same dashboard twice, though.
  11. You can embed JavaScript in the Text widget. Add Text Widget > Source button and enter something like this:

        <p id="today">[today]</p>
        <script>
          var d = new Date();
          // getMonth() is zero-based, so add 1 to get the calendar month
          var date = d.getFullYear() + "/" + (d.getMonth() + 1) + "/" + d.getDate();
          document.getElementById('today').innerHTML = date;
        </script>

      Note that the JavaScript needs to come after the HTML it modifies. I don't think you can use this JavaScript to modify other parts of the dashboard or extract data from monitoring, because it's all contained inside an iframe. I also don't know whether this will continue to work in the future. If you need to get monitoring data into the widget, you might want to look at the various examples of DataSources that directly modify widgets on these forums, like: https://communities.logicmonitor.com/topic/2173-sql-results-table-display/#comment-5571
  12. It's not a table, but if it helps, you can use the Gauge widget to show the current value and also a Peak value for a particular period, including 24 hours. https://www.logicmonitor.com/support/dashboards-and-widgets/widgets/which-widget-should-i-use/gauge-widget/#id:peak-time-range-selection
  13. Are you looking for all the collectors to telnet to the same set of hosts, or for each collector to telnet to a different set of servers? For example, CollectorA would telnet to Server1 and Server2, CollectorB would telnet to Server3 and Server4, etc.? If the latter, you would just add Server1, Server2, Server3, and Server4 into LogicMonitor as resources. Then create the DataSource, which telnets to port 25 on ##HOSTNAME## as coded. Whatever collector is assigned to each server will then do the actual telnet check. Is there anything in particular you need to do with telnet to port 25 (SMTP)? If you are just checking whether the port is live, or just grabbing the version header, you might be able to clone and modify the existing "Port-" DataSource. I have not tested this, though.
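     For the "is the port live / grab the banner" case, a rough PowerShell sketch (untested, no error handling):

        # Sketch: open TCP 25 on the target and read the SMTP greeting banner.
        $server = "##HOSTNAME##"   # LM token; becomes the resource's hostname
        $client = New-Object System.Net.Sockets.TcpClient
        $client.Connect($server, 25)
        $reader = New-Object System.IO.StreamReader($client.GetStream())
        $banner = $reader.ReadLine()   # e.g. "220 mail.example.com ESMTP ..."
        Write-Host "portOpen=1"
        Write-Host "banner: $banner"
        $client.Close()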
  14. Almost all Windows-based monitoring uses WMI; that is what Microsoft provides as the primary method of remote monitoring/information collection, so it's not a surprise. If you are worried about using too many ports, perhaps you should lower the number of threads you are using? I think it's wmi.stage.threadpool.maxsize in sbproxy.conf, though I have not played with this option myself. https://www.logicmonitor.com/support/collectors/collector-overview/collector-capacity/ I expect that writing a custom BatchScript for WMI calls would only help if the WMI DataSource has lots of instances AND you can collect the same data in fewer calls. You also need to balance supporting custom code for your org against built-in functionality.
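     To illustrate the BatchScript idea, a hypothetical PowerShell sketch that collects all logical disks in one CIM query and emits one line per instance (the class and datapoint names are just examples; BatchScript collection expects wildvalue.datapoint=value output lines):

        # One query for every fixed disk instead of a WMI call per disk.
        $disks = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3"
        foreach ($d in $disks) {
            # BatchScript output format: <wildvalue>.<datapoint>=<value>
            Write-Host "$($d.DeviceID).FreeSpace=$($d.FreeSpace)"
            Write-Host "$($d.DeviceID).Size=$($d.Size)"
        }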
  15. Thanks. I think I would just hit the rate-limit cap more quickly if I ran API calls in parallel. The caveat wouldn't really apply here, since this is not a DataSource and isn't running on the collector; it queries the LM API directly, not devices.
  16. PowerShell, and this is not a DataSource but an independent script that checks for issues. I've already limited the requests as best I can, but the issue is more about having to make thousands of API calls, one per device, and I do want to check all of them. The API doesn't have a way to query multiple devices' DataSource instances or batch requests together. Not a big deal, but I wanted to see if anyone knows of any "tricks" with the LM API for batching.
  17. LogicMonitor's HostStatus DataSource is a very important DataSource that provides Host Down notifications, and we check that it's never disabled. It's a common thing I see where someone "just wants to check ping" and disables all other DataSources, including HostStatus, without realizing its importance. I have a script that loops over every device in the system and verifies none of them are disabled. It basically runs a REST call to /device/devices/{ID}/devicedatasources?filter=dataSourceName:HostStatus for each device and checks alertDisableStatus and stopMonitoring. With thousands of devices this takes a long time, and I'm wondering if anyone has suggestions on how to get this information more quickly. Some way to query multiple/all devices at once, perhaps? I tried looking at it from the DataSource side (/setting/datasources/), but that doesn't provide this information. Thanks!
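     For context, a trimmed-down sketch of the loop (Bearer-token auth, the v3 response shape, and the disabled-state test here are simplified assumptions):

        # Sketch: for each device, fetch its HostStatus devicedatasource entry
        # and flag it if alerting or monitoring has been turned off.
        $base    = "https://yourcompany.logicmonitor.com/santaba/rest"
        $headers = @{ Authorization = "Bearer $apiToken"; "X-Version" = "3" }

        foreach ($dev in $devices) {   # $devices fetched earlier from /device/devices
            $url  = "$base/device/devices/$($dev.id)/devicedatasources?filter=dataSourceName:HostStatus"
            $resp = Invoke-RestMethod -Uri $url -Method Get -Headers $headers
            foreach ($hds in $resp.items) {
                # exact alertDisableStatus values vary; treat anything not "none..." as disabled
                if ($hds.stopMonitoring -or $hds.alertDisableStatus -notlike "none*") {
                    Write-Host "$($dev.displayName): HostStatus alerting/monitoring disabled"
                }
            }
        }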
  18. I've never used the email response system (we have LM tied to a ticketing system). It took me a bit to notice: is the beginning hyphen the issue?
  19. Try:

        $resourcePath = "/device/devices"
        $queryParam = "?size=1000&filter=preferredCollectorId:`"$($collector.id)`""

      I don't think CollectorId is a valid property for devices, and the APIv2 docs say to use double quotes with filters.
  20. Yeah, it depends far more on what you are monitoring per device than on the number of devices you have. Monitoring all the shares on a Windows file server via WMI will put more load on a collector than doing SNMP on a switch. I personally don't have a lot of experience balancing collectors, and Auto-Balanced Collector Groups (ABCG) are very new; I haven't played with them. I would likely set up 1 (or 2 for failover) in an auto-balanced group per site and then grow with more collectors as needed. But as I haven't used ABCGs, perhaps others on the forums can make better suggestions, or you can open a chat with LM to look at your particular environment, especially around choosing small/med/large collector sizes.
  21. There are several reasons why you would want multiple collectors in an environment:
      • High Availability: If a collector system fails, you can have another one take over (LM supports active-active). Also useful for collector upgrades without downtime.
      • Load: Depending on how many items (not just devices) you're monitoring, you may need many collectors to handle the load.
      • Site: Some site-to-site VPNs are not stable enough to monitor over remotely, and there can be a major side effect, mentioned below.
      • Network Segmentation: The collector and its failover collectors need to be able to directly communicate with each resource being monitored, so network segmentation may require multiple collectors. (I guess that also depends on your definition of "environment".)
      I don't think it really matters whether you set up a virtual machine or use physical boxes; that would depend on your infrastructure. Most of ours are virtual without problems. A big possible problem to keep in mind with remote monitoring is LM's current lack of full dependencies: a site that goes down while being monitored by a remote collector will cause an alert for every resource at that site. Since the collector itself doesn't go down, there is no Collector Down condition to suppress those alerts. So, personally, I avoid remote monitoring (on the Resources tab) whenever possible.
  22. The multipart boundary stuff is more related to sending large amounts of data, like XML file contents, while most API calls, including widget updates, just need to pass direct JSON content. You might want to look at the examples at https://www.logicmonitor.com/support/rest-api-developers-guide/v1/rest-api-v1-examples/#Example-PowerShell-GET which use JSON content. Also, a JSON object always starts with "{" and ends with "}", so you might need to add those to your $data string. And since you are using PowerShell, you can use ConvertTo-Json to convert PSObjects to JSON if that makes it easier to build and modify.
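     For example, a small sketch of building the body with ConvertTo-Json rather than hand-assembling the string (the property names are placeholders, not the exact widget schema):

        # Build the JSON payload from a hashtable so braces and quoting are
        # handled for you, then send it as the request body.
        $body = @{
            name        = "My Widget"
            description = "Updated via the API"
        } | ConvertTo-Json -Depth 5

        # $url and $headers assumed to be set up as in the linked example
        Invoke-RestMethod -Uri $url -Method Put -Headers $headers -Body $body -ContentType "application/json"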
  23. Not to speak for Joe, but I know in my environment, as an MSP, we have a template dashboard group that contains the standard templates we deploy when adding a new customer to LogicMonitor. These templates heavily use dashboard group tokens to customize them per customer. I scripted most of the onboarding process except for this dashboard group cloning: I actually pause the script and instruct the user to clone the group, then the script continues and modifies the group tokens. At the time, scripting the cloning looked too complex compared to the few clicks it saves. The UI seems to have access to a /dashboard/groups/#/asyncclone?recursive=true method that is not available in the API. Something like that would be very useful, for us at least. Cloning individual dashboards would also be useful.
  24. Sure. Then I would remove the threshold on daystoexpire and leave it for information/graphing use, and create a complex DataPoint with an expression of "daystoexpire" that has the valid value range and the thresholds for alerting.
  25. My previous answer assumed OR conditions (alert if thresholdA OR thresholdB is hit). If you want an AND-like condition, which, re-reading the question, might be what you are asking, that may depend on the situation. Here I'm assuming you basically want any certs with daystoexpire < -100 to be ignored in all cases; in other words, anything under -100 is invalid. So you can edit the "Valid value range" for daystoexpire to be "-100 to blank". That way you will just get a NaN if it's less than -100, and still have an easily changed main threshold of < 30.