Leaderboard


Popular Content

Showing content with the highest reputation since 07/09/2020 in Posts

  1. 2 points
    The strings are host properties, so set them on the collector you want to run this from; they would have to be bound to a collector host. As written, this supports only a single remote SFTP test. If you wanted to do more, you would need to rewrite it to handle instances, either via manual instances or Active Discovery. I do the latter often, even with a manual property list, as it is the only way to define automatic instance properties. It may be possible to do this via an internal "website" check, but I have not tried going full Groovy on those yet :). Even then, each would be a separate copy of the code, so it is better to use this and perhaps extend it to support multiple checks.
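    As a rough illustration only (not the original code), here is a minimal Groovy Active Discovery sketch for the multi-target case, assuming a hypothetical host property sftp.targets holding a comma-separated list of host:port pairs; hostProps is the property accessor the collector injects into datasource Groovy scripts, and the output follows the usual wildvalue##wildalias##description####auto.properties convention:

        // sftp.targets is a hypothetical host property, e.g.
        // sftp.targets = "sftp1.example.com:22,sftp2.example.com:2222"
        def raw = hostProps.get("sftp.targets") ?: ""

        raw.tokenize(",").each { target ->
            def parts = target.trim().tokenize(":")
            def host  = parts[0]
            def port  = parts.size() > 1 ? parts[1] : "22"
            // One discovered instance per target, carrying the connection
            // details as automatic instance properties.
            println "${host}_${port}##${host}:${port}##SFTP check for ${host}####auto.sftp.host=${host}&auto.sftp.port=${port}"
        }
        return 0

    The collection script would then read ##AUTO.SFTP.HOST## / ##AUTO.SFTP.PORT## (or the instance-level properties directly) instead of the collector-level strings.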
  2. 1 point
    I've been spinning my wheels on this same idea. Without having dug into the depths that are MIBs, I'm thinking it might be worth running an expect script to gather the port-channel interface assignments directly from the devices. Then each port channel becomes its own instance group, and each assigned interface (plus the port-channel interface itself) becomes an instance within the group. The idea is plausible, but my problem doesn't demand the time-cost to build it out. I figured I'd share it here in case it helps anyone else in the future.
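    For what it's worth, a minimal Groovy sketch of the discovery half of that idea, assuming the CLI output (Cisco-style "show etherchannel summary") has already been gathered by an expect/SSH step; the sample text, the regex, and the auto.portchannel property used as a grouping hook are all illustrative assumptions:

        // Hypothetical: output of "show etherchannel summary" already captured
        // by an expect/SSH step; hard-coded sample lines keep the sketch self-contained.
        def cliOutput = '''
        1      Po1(SU)         LACP      Gi1/0/1(P)  Gi1/0/2(P)
        2      Po2(SU)         LACP      Gi1/0/3(P)  Gi1/0/4(P)
        '''

        cliOutput.readLines().each { line ->
            def m = (line =~ /^\s*\d+\s+(Po\d+)\S*\s+\S+\s+(\S.*)$/)
            if (!m.find()) { return }     // skip lines that are not port-channel rows
            def poName  = m.group(1)
            def members = m.group(2).split(/\s+/).collect { it.replaceAll(/\(.*\)/, '') }
            // One instance for the port channel itself plus one per member interface,
            // each tagged with an auto property that could drive instance grouping.
            println "${poName}##${poName}##port channel####auto.portchannel=${poName}"
            members.each { intf ->
                println "${poName}_${intf}##${intf}##member of ${poName}####auto.portchannel=${poName}"
            }
        }
        return 0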
  3. 1 point
    Ah, already ahead of you there. I came to the same conclusion after attempting a 'single device' model, which just couldn't do it. (If I weren't trying to do API switchport monitoring, it probably would've been fine.) I use the MX at each location as the 'anchor' for all of the Meraki monitoring at that location, but they somehow still manage to sync up. I had the AMP and IPS checks at 4-hour intervals, and after restarting the collector, started getting buckets of 429s. And now what's really got me confused: I set both of those to 5-minute collection intervals about 30 minutes ago and now everything's fine!
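    In case it helps anyone else hitting those 429s, a minimal Groovy sketch of honoring the Retry-After header that rate-limited (429) Meraki API responses typically include; the meraki.apikey property, the example endpoint, and the retry count are all illustrative assumptions:

        // Minimal sketch of backing off on Meraki 429 responses.
        // meraki.apikey is a hypothetical host property; the endpoint is just an example.
        def apiKey = hostProps.get("meraki.apikey")
        def body   = null

        for (attempt in 1..5) {
            def conn = new URL("https://api.meraki.com/api/v1/organizations").openConnection()
            conn.setRequestProperty("X-Cisco-Meraki-API-Key", apiKey)
            if (conn.responseCode == 429) {
                // Wait however long the API asks before retrying (2s if the header is missing)
                def wait = (conn.getHeaderField("Retry-After") ?: "2") as int
                sleep(wait * 1000)
                continue
            }
            body = conn.inputStream.text
            break
        }
        println body
        return 0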
  4. 1 point
    When using alert tuning at the host/group level for a particular datasource, I would like the ability to alert on No Data when creating custom alert thresholds.
  5. 1 point
    Looks like they have been approved, although I have no idea how to create a package with them all in, so I can supply the individual locators for each one:
    addCategory_MacOS_SSH - FCHCFT
    MacOS_SSH_CPUMemory - L2XZTW
    MacOS_SSH_Info - 7DYF4E
    MacOS_SSH_Filesystems - TKNF3L
    MacOS_SSH_Uptime - 3K3NN6
    MacOS_SSH_mysql - PWDC6L
    MacOS_SSH_Tomcat - M7NXF9
    Hopefully these will be helpful for a few people!
  6. 1 point
    Nice to hear that! We'll start mapping our stuff and see if it works. We're using a sample event (that I can trigger on purpose) to test this out. Will update further.
  7. 1 point
    I've also checked the event table on the device itself and the date isn't the same (obviously). I've raised support case #208119 to engage support. Thank you for your availability!
  8. 1 point
    The datasource provided by LM does not handle superscopes (multiple subnets per scope). I wrote one that does handle superscopes and works properly. I don't have a way to monitor across split scopes (portions handled by different servers), but I think that if one of those was filling up independently you would still want an alert, so it should not matter. With superscope monitoring, you will only know when all of the subnet IPs are running out. I will see if I can get that one published in LM Exchange.
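    To illustrate the aggregation idea only (not the actual datasource), a small Groovy sketch that rolls per-scope counts up to the superscope level, so an alert fires only when the superscope as a whole is running out of addresses; the structure, names and numbers are made up:

        // Illustrative only: per-scope counts as they might come back from the DHCP
        // server, keyed by superscope (the names and numbers here are made up).
        def scopes = [
            [superscope: "Building-A", inUse: 220, free:  30],
            [superscope: "Building-A", inUse:  10, free: 240],
            [superscope: "Building-B", inUse: 200, free:  50],
        ]

        scopes.groupBy { it.superscope }.each { name, members ->
            def inUse = members.sum { it.inUse }
            def free  = members.sum { it.free }
            def pct   = 100.0 * inUse / (inUse + free)
            // Report utilisation per superscope rather than per subnet, so one full
            // subnet does not alert while the superscope still has addresses left.
            println "${name}.PercentInUse=${String.format('%.1f', pct)}"
        }
        return 0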
  9. 1 point
    Yes, for Windows events you can do this -- we do as well. You lose the event detail, but it can alert only if N events in a window are seen (something customers ask for often). Even then, since the "collect every" value is not visible to the script, you have to take special care to ensure your event scan window and the collect every value are in sync. And this does nothing for any other type of event -- we have to use Sumo Logic (or other similar tools, like Graylog, etc.) to solve this problem in general.
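    A minimal Groovy sketch of that windowing logic, with the event timestamps hard-coded and the window length duplicated by hand to match an assumed 5-minute "collect every" value:

        import java.time.Duration
        import java.time.Instant

        // The DataSource's "collect every" value is not visible to the script, so the
        // window has to be duplicated here and kept in sync by hand (assume 5 minutes).
        def WINDOW = Duration.ofMinutes(5)
        def now = Instant.now()

        // Hypothetical event timestamps, as they might be parsed from an event-log
        // query; hard-coded so the sketch stays self-contained.
        def eventTimes = [
            now.minusSeconds(30),
            now.minusSeconds(90),
            now.minusSeconds(600),   // older than the window, not counted
        ]

        def recent = eventTimes.count { Duration.between(it, now) <= WINDOW }

        // A normal datapoint to threshold on (e.g. alert when EventCount >= N),
        // trading per-event detail for an "N events in M minutes" style alert.
        println "EventCount=${recent}"
        return 0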
  10. 1 point
    One of the items we would like to see implemented is the ability to pick which delivery methods clear-alert messages are sent to, and to enable or disable them individually. For example, in an escalation chain we would like to notify our technicians via text message that an alert has been cleared, but we do not want the system to place a voice call for the cleared alert, even if that stage was reached in the escalation chain.
  11. 1 point
    VMware ESXi can take snapshots of virtual machines, which have the effect of freezing a virtual disk at a point in time. A new disk (a delta disk) is created to record the I/O changes made since the snapshot was created. When a snapshot is deleted, its changes are written down into the earlier disk; if you keep deleting snapshots until none remain, all of the changes are finally written into the base vmdk. The delta disks are removed at the end of this process, which is called "consolidation" of the delta disks.
    When files are locked on an ESXi host, consolidation will not occur, or will be interrupted. The process aborts, the VM continues to use the delta disk, and the changes are neither backed out nor committed to the earlier disk. This creates a snapshot chain that keeps growing.
    In our situation we use a backup solution that uses snapshot technology to quiesce (freeze) the operating system and take a recoverable backup, with backups taken every 15 minutes. We have reached a situation where the disk chain is 255 delta disks long, yet there are no snapshots in the GUI (LogicMonitor can see these). However, LogicMonitor cannot see the delta disks.
    In theory, when a snapshot is taken it creates a delta disk named "vmdiskname-000001.vmdk", and a second snapshot creates a delta disk called "vmdiskname-000002.vmdk". When the snapshots are removed and the process completes, the delta disks no longer exist. We need to see whether the number of delta disks exceeds the number of snapshots in the GUI; if it does, we have to repair the VM manually.
    This situation is very dangerous for an ESXi host, because with a disk chain of around 255 delta disks you start to see very strange delay behaviors, since the host has to track all the data through the chain. Behaviors include the VM freezing, then releasing and processing faster than normal until all the data has been processed, then freezing again. The more delta disks, the worse the performance decay on the ESXi host.
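    To illustrate the check being asked for (not an existing module), a small Groovy sketch that compares a count of "-NNNNNN.vmdk" delta files against the reported snapshot count; the file list and snapshot count are hard-coded assumptions standing in for whatever the datastore listing and API actually return:

        // Illustrative only: a VM folder's file listing and the snapshot count the
        // GUI/API reports, both hard-coded stand-ins for real datastore/API data.
        def datastoreFiles = [
            "web01.vmdk",
            "web01-000001.vmdk",
            "web01-000002.vmdk",
            "web01-000003.vmdk",
        ]
        def snapshotCount = 1   // what Snapshot Manager / vCenter reports

        // Delta disks follow the "<diskname>-NNNNNN.vmdk" naming convention.
        def deltaDisks = datastoreFiles.count { it ==~ /.*-\d{6}\.vmdk/ }

        // More delta disks than snapshots means the chain is growing and the VM
        // needs consolidation (or manual repair).
        println "DeltaDisks=${deltaDisks}"
        println "Snapshots=${snapshotCount}"
        println "ConsolidationNeeded=${deltaDisks > snapshotCount ? 1 : 0}"
        return 0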
  12. 1 point
    Use case: Large enterprise, multiple geographical regions, multiple networks. We have 192.x.x.x networks in Europe, in the Pacific, and in the Americas, with Collectors in each region's networks too. Problem: LogicMonitor does not allow duplicate IP addresses. Solution: Allow duplicate IP addresses if they are on different Collectors.
  13. 1 point
    This is almost guaranteed to come up in an MSP environment, and I guess we have just been lucky not to have run into this issue yet. This is a huge showstopper and (to me) not a feature request but a bug report. I thought for sure uniqueness was required only per collector (or failover collector pair); there really is no other good answer. Mark