Stuart Weenig

Administrators
  • Content Count: 457

Community Reputation

47 Excellent

About Stuart Weenig

  • Rank: Myth
  • Birthday: 02/29/1980

Recent Profile Visitors

337 profile views
  1. I believe it's on the roadmap to make this setting one that you can configure on the group/device/instance level. I know it's been requested a few times.
  2. Yeah, well, the instance would still be present for up to 30 days in case the process comes back (with the same wildvalue/wildalias). However, it wouldn't be included in polling since it's not showing up in active discovery results. So, the flow would be: the instance is up and being monitored, the process goes down, the instance alerts until the next active discovery run, when polling for it is disabled. If the process comes back up with the same wildvalue/wildalias within 30 days, polling resumes. Otherwise, the instance gets deleted at 30 days, along with its historical data.
  3. You could set it to delete after 30 days instead of immediately.
  4. Could be a situation where dynamic thresholds could really help as well. Let LM learn how many processes of a given name are running and alert you when it changes.
  5. The only problem is with the duplicates, because you can set the wildvalue to be anything. It doesn't have to be numeric. Just avoid/strip out special characters. The real problem is making sure that duplicates are kept straight between polls without something to uniquely tie them together. You could consider doing a DataSource that counts processes by name. Each name would be an instance and you could count the number of up processes. Set a static threshold for when processes that should have multiple copies don't, or set a property detailing how many of the same process should be running and compare the count against it (there's a rough sketch of that approach after this list).
  6. Have you looked at mine? https://github.com/sweenig/lmcommunity/tree/master/ProcessMonitoring/Linux_SSH_Processes_Select Duplicate processes with the same command line and same name will be a problem if you're ignoring PID. Under manual circumstances, how would you differentiate between the two between sessions? I mean, if you logged in once and saw process A and process B, then logged in again 1 minute later and saw the same list of processes, how would you know which one you had previously called A and which one you had previously called B? The answer is, of course, that you couldn't, not without the PID.
  7. View the replay here: https://www.logicmonitor.com/resource/level-up-product-keynote-2020?utm_medium=sales-email&utm_source=logicmonitor&utm_term=na&utm_content=na&utm_campaign=DWC_Rec_WBN_Level_Up_Product_Keynote_2020&utm_theme=na
  8. Have you looked at Job Monitors? If there's nothing in the Exchange already, I suggest starting by identifying what data is available. Is there an SNMP OID you can pull? Is there an API that contains the status of the job? Once you've found that out, making the DataSource to track it is much simpler.
  9. Yeah, I suggest putting in feedback asking that the search box and the locater code search box be combined. You're not the first to have this problem.
  10. Yes, it's there. You just need to use the locator code: TMHHWX
  11. You can get close to what you're talking about with our Topology mapping. If you're not an enterprise customer, you may not have it, so you may need to reach out to your rep. Background images aren't yet supported and there'd have to be some features built to enable static location definition to make sure your icons don't move around on the background image. Topology does show alert status, but it's based on the overall alert status of the device. You'd probably want to combine this mapping feature with Service Insight, which allows you to select certain instances from devices and roll them up into a logical service with its own alert status.
  12. Alert trigger interval, as well as alert clear interval, are global settings, unfortunately. There are several requests in to get that setting applicable to groups, resources, and instances. Your request will help get it prioritized. Yes, the workaround is a separate datasource, which has all the disadvantages you listed.
  13. The same question I always ask in response to requests like this is: How would you manually obtain this information? If it's SNMP, what OIDs? If it's by issuing a command and looking at the output, what's the command? If it's an API call, what's the endpoint? By answering these questions, you solve the most difficult part of building custom monitoring in LM. If it's SNMP and you know the OIDs, you simply build an SNMP DataSource. If it's by issuing a command, you build a Groovy DataSource using Expect to issue the command and parse the output. If it's by an API call, then you build a Groovy DataSource that makes the call and parses the response (there's a hedged sketch of that after this list).
  14. The reason you can't do it is that the dynamic group membership may bring the resource into the group, which potentially causes new properties to be inherited, which may disqualify it for membership, ejecting it from the group. Upon ejection from the group, the inherited properties reset and the device qualifies to be a member again. This causes an infinite loop. There could and should be something built into the product that prevents that loop from causing issues, but it hasn't been built yet.
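
Picking up on the process-count idea in #5: here's a minimal Groovy sketch of what the batch collection script for that DataSource could look like. It just runs ps locally for illustration; a real DataSource would gather the same text over SSH/Expect. The datapoint name process_count and the wildvalue sanitization are illustrative assumptions, not anything pulled from the Exchange.

    // Count running processes by name; each process name becomes an instance
    // (its wildvalue) and the count becomes a datapoint on that instance.
    def psOutput = "ps -e -o comm=".execute().text

    def counts = [:].withDefault { 0 }
    psOutput.readLines().each { line ->
        def name = line.trim()
        if (!name) return                                 // skip blank lines
        name = name.replaceAll(/[^A-Za-z0-9_.-]/, "_")    // strip special characters out of the wildvalue
        counts[name] += 1
    }

    // Emit wildvalue.datapoint=value lines for a batchscript-style collector.
    counts.each { name, count ->
        println "${name}.process_count=${count}"
    }

    return 0

A static threshold on process_count (or a comparison against a property saying how many copies should be running) then covers the "N copies expected" case without having to keep duplicate instances straight between polls.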
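
And for the API branch in #13, a hedged sketch of a Groovy script collector that calls a hypothetical REST endpoint and prints key=value pairs for the datapoints to read. The endpoint path, JSON field names, and lack of authentication are all assumptions you'd swap for whatever the real API requires; hostProps is the property bag the collector makes available to scripts.

    import groovy.json.JsonSlurper

    // Hostname of the resource this script runs against.
    def host = hostProps.get("system.hostname")

    // Hypothetical job-status endpoint; replace the path and add auth as needed.
    def conn = new URL("https://${host}/api/v1/jobs/backup/status").openConnection()
    conn.setRequestProperty("Accept", "application/json")

    def json = new JsonSlurper().parseText(conn.inputStream.getText("UTF-8"))

    // Print key=value pairs; the DataSource's datapoints can then pick these
    // up with key-value interpretation.
    println "job_success=${json.lastRunSuccessful ? 1 : 0}"
    println "job_duration_seconds=${json.lastRunDurationSeconds ?: 0}"

    return 0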