Stuart Weenig

Administrators
  • Content Count: 708
  • Joined
  • Last visited

Community Reputation: 90 Excellent

2 Followers

About Stuart Weenig
  • Rank: Myth
  • Birthday: 02/29/1980

Recent Profile Visitors

729 profile views
  1. Yeah, I wonder what problem they're trying to solve. The collector only makes HTTPS connections back to LM, so there wouldn't be much point in building that functionality into the product when setting up an internet proxy is just as easy (or easier) for the customer to do and works just as well. This sounds like an ex-Nimsoft customer?
  2. This is sort of happening now. The "Use dashboard time range to filter alerts" box is checked in your screenshot. If this is checked, alerts outside the dashboard's time range are hidden. If the dashboard doesn't have a time range set, the default is 24 hours. Agreed, this could be improved.
  3. Why not just set up an internet proxy?
  4. You could create a single-instance SNMP DataSource polling sysUpTime (1.3.6.1.2.1.1.3.0) and put a No Data threshold on that.
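As a rough sketch of what that DataSource would look like (the name and threshold values below are illustrative, not LM defaults):

```text
DataSource: SNMP_Reachability (hypothetical name)
  Collector type:   SNMP
  Multi-instance:   No
  OID:              1.3.6.1.2.1.1.3.0   (sysUpTime)
  Datapoint:        uptime = raw SNMP value
  Alerting:         No Data trigger on the uptime datapoint
                    (e.g. critical after a couple of missed polls)
```

The point of using sysUpTime is just that every SNMP agent implements it, so a No Data condition reliably means SNMP itself stopped responding.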
  5. You're talking about per-group and/or per-website options, right? More than this:
  6. Ah, I see. And when it was just one platform (resource?), it was not in a group. Once more resources were added, a group was created and you want to move the SDT from that one resource to the group that now contains it.
  7. There are a couple of methods that would probably work on most every Linux distro. Neither works via SNMP as far as I'm aware, at least not without some additional modification. Luckily, it wouldn't take much to create a PropertySource that can determine this and set a custom property. The PropertySource would use SSH. I whipped this together this morning. Give it a shot. It sets a property called "auto.chassis". It'll be "vm" for a virtual Linux machine. It'll be "desktop" or something else for a non-virtual machine.
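The PropertySource itself isn't reproduced here, but the detection logic boils down to reading the DMI strings over SSH and pattern-matching them. A minimal sketch of that classification step (function name and marker list are my own, not part of any shipped module):

```python
def classify_chassis(product_name: str, sys_vendor: str = "") -> str:
    """Classify a Linux host as 'vm' or 'physical' from DMI strings,
    i.e. the values an SSH session could read from
    /sys/class/dmi/id/product_name and /sys/class/dmi/id/sys_vendor."""
    haystack = f"{product_name} {sys_vendor}".lower()
    vm_markers = ("vmware", "virtualbox", "kvm", "qemu", "xen",
                  "hyper-v", "virtual machine", "openstack", "amazon ec2")
    return "vm" if any(m in haystack for m in vm_markers) else "physical"
```

On distros with systemd, shelling out to `systemd-detect-virt` over the same SSH session is an even simpler signal, since it already returns `none` on bare metal.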
  8. Ah, ok. So the main thing you want to track is not necessarily the content of the response, but the actual response time. Cool. Might be more elegant to even use internal website checks instead, but that's a bigger leap. So, do you have the servers and the VIPs in LM as distinct devices? Sounds like one payload with multiple destinations. Is that the case or is it multiple payloads to each of multiple destinations?
  9. Of course, if you're getting results, that's good. I think it would be to your advantage to try for some of that "elegance". If you want to leave it alone, just let me know and I'll drop it. Otherwise, let's talk about what data you're gathering and how we might make things easier for you in the long run. Generally, what do the different payloads represent? Different functions on the target device? Different micro-services? Different hardware components? Multiple instances of the same thing, just with different names/purposes/responses? The answers to these questions will help drive toward the right design.
  10. Sounds like what you want is a multi-instance DataSource. Each instance would represent a different payload you want your script to process. As long as the result is a similar datapoint across all the instances, then you should be good. For example, if one payload causes the CPU utilization to be returned and another payload causes disk status to be returned, you'd likely want different DataSources. This is because within a single DataSource, the output should be mapped to the same set of datapoints for each instance. You wouldn't want one instance to populate the cpu_util datapoint while another instance leaves it empty.
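To make that concrete, here's a toy sketch of the two script outputs a multi-instance scripted DataSource produces: Active Discovery emits one `wildvalue##wildalias` line per instance, and batch collection emits `wildvalue.datapoint=value` lines with the same datapoint name for every instance. The payload names and the `response_time` datapoint are hypothetical:

```python
# Hypothetical instances; in a real DataSource these might come from
# a device property or the script's own payload list.
instances = ["payload_a", "payload_b"]

def discovery_lines(instances):
    """Active Discovery output: one 'wildvalue##wildalias' line each."""
    return [f"{wv}##{wv}" for wv in instances]

def collection_lines(results):
    """Batch collection output: 'wildvalue.datapoint=value' lines,
    same datapoint name (response_time) across all instances."""
    return [f"{inst}.response_time={ms}" for inst, ms in results.items()]

print("\n".join(discovery_lines(instances)))
print("\n".join(collection_lines({"payload_a": 12, "payload_b": 34})))
```

Because every instance reports the same datapoint, thresholds and graphs defined once on the DataSource apply uniformly to all payloads.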
  11. This sounds like a separate feature request: more flexible repetition options in SDTs. I get this if you've put the SDT on the device and it would be better for the group. Is there a use case outside that?
  12. I doubt this will be incorporated into the core set of DataSources. You can submit feedback through your portal requesting that it be built into the core set, though.
  13. Two different datasources and two different instances? Or same datasource, multiple instances? If it's the latter, Service Insight is the way to do it: https://www.logicmonitor.com/support/lm-service-insight/about-lm-service-insight
  14. You'd need to talk to your CSM or possibly Support about that.
  15. Most API endpoints, including the alerts list, have a filter capability where you can filter on the properties of the objects. Some properties you could use would be ackedEpoch, startEpoch, and maybe endEpoch. I don't see any property that gives a timestamp of when the alert severity changed. It's too early in the morning and I can't remember if alerts change severity or if it's actually a new record in the alerts list (with its own corresponding epoch timestamps). I'll have to test it. You'd use these as filters like this: /alert/alerts?filter=startEpoch>:1617213624&filter=
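A quick sketch of assembling that filter expression programmatically (the account name is a placeholder, and this only builds the URL; authentication headers are a separate step):

```python
from urllib.parse import quote

def alerts_filter_url(account, conditions):
    """Build an LM REST alerts URL from (field, operator, value) tuples,
    e.g. ("startEpoch", ">:", 1617213624). Multiple conditions are
    comma-joined (logical AND) inside a single filter parameter."""
    expr = ",".join(f"{field}{op}{value}" for field, op, value in conditions)
    return (f"https://{account}.logicmonitor.com/santaba/rest"
            f"/alert/alerts?filter={quote(expr, safe='><:,')}")
```

For example, `alerts_filter_url("acme", [("startEpoch", ">:", 1617213624)])` yields a URL ending in `/alert/alerts?filter=startEpoch>:1617213624`.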