Posts posted by mnagel

  1. You really, really do not want to do this. Within the debugger you can do very dangerous things, such as running ad-hoc PowerShell as a domain admin and changing pretty much anything in the system.  IMO, debugger access should be more restricted, not less.

  2. On 3/12/2020 at 1:18 PM, Stuart Weenig said:

    Sad to say that this isn't doable in the product today. Who's your CSM/AM? They should be getting this into the system for you.

    CSM is Tim C. I have not brought this to him yet; I am trying to use the "correct" channels first. This issue is closing in on bug territory, but it is not quite there, and I normally open tickets only for actual bugs.

  3. 3 minutes ago, JSmith said:

    A couple of thoughts that may help: have you tried using device properties to create dynamic groups? For example, in our MSP model we have different clients assigned to resource groups; within their resource groups we set properties for location, type, etc., plus a 'level' property. That lets us create a dynamic group with a query like join(system.staticgroups,",") =~  "ClientGroup1/ClientGroup2" && hasCategory("MicrosoftDomainController") || priority.level =~ "P1" 


    Have you also tried the mapping process and the alert rollup? We have not found a use case that works for us (yet), but it seems like it may be a fit here. 


    We use groups extensively to manage resource downtime (I have a separate F/R in regarding that, since we cannot grant access to manage a group's contents without also exposing the group itself to damage).  The problem here is cross-type SDT.  Consider that a site outage may involve resources down, cross-site website checks down, collectors down, and perhaps others. We need to be able to mark all related elements in SDT in one fell swoop, without having to track them down through all the dark corners of the UI.  As someone noted, this could be handled by an API script, but that is not something we can easily provide to clients.
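    For illustration, a minimal sketch (the device/website/collector IDs and the inventory dict are hypothetical) of the payloads such an API script would need to POST to LM's /sdt/sdts endpoint -- the cross-type problem shows up as a distinct SDT type per element kind:

```python
import time

# Hypothetical inventory for one "location"; in a real script these IDs
# would come from API lookups against the portal.
LOCATION_RESOURCES = {
    "DeviceSDT": [101, 102],   # deviceId values
    "WebsiteSDT": [7],         # websiteId values
    "CollectorSDT": [3],       # collectorId values
}

# Each SDT type names its target with a different field.
ID_FIELD = {
    "DeviceSDT": "deviceId",
    "WebsiteSDT": "websiteId",
    "CollectorSDT": "collectorId",
}

def build_sdt_payloads(resources, duration_minutes=60, now_ms=None):
    """Build one POST /sdt/sdts body per resource so a whole site can be
    placed in SDT in a single pass, regardless of resource type."""
    start = now_ms if now_ms is not None else int(time.time() * 1000)
    end = start + duration_minutes * 60 * 1000
    payloads = []
    for sdt_type, ids in resources.items():
        for rid in ids:
            payloads.append({
                "type": sdt_type,
                "sdtType": "oneTime",
                ID_FIELD[sdt_type]: rid,
                "startDateTime": start,
                "endDateTime": end,
            })
    return payloads
```

    Even this toy version shows why it is not client-friendly: the caller has to know every element type and ID involved in the location up front.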

  4. Unfortunately it has gotten no attention (or if it has, it has been silent), and our clients (and we) are continually frustrated when something as simple as "I want to set this location in downtime" misses things, because the concept of a location spans many different elements, including resources, websites, and collectors.  As mentioned, an API script can be written, but that is not something we can give to clients.

  5. I am trying to get feedback from support on how to use the undocumented memcached capability in LM -- if that is successful, it will probably be how I proceed with API caching.  Unfortunately, this means collectors will need to have memcached installed (or available on the network).  For me this is easy with Puppet for Linux collectors. In theory it is also possible for Windows, but in practice I will not be able to do that, as LM is piggybacked on client servers.  Again, it would sure be nice if LM provided an integrated key/value store (in the feature request graveyard for a couple of years, sadly).  It would also be nice if there were a "submit for approval" option here so code review scheduling could be better anticipated ;).

  6. Another use case -- currently eventsources are of limited use due to the lack of correlation/counting.  For script-based eventsources, you could at least use a k/v store to detect a repeat of the same event, extend its lifetime, and update a counter in the k/v store.  Not perfect, but this is impossible now, so it would be an improvement.
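    A minimal sketch of that dedup/counting idea (a plain dict stands in for memcached/redis, and the key format and TTL are made up):

```python
import time

def record_event(kv, event_key, ttl_seconds=900, now=None):
    """Deduplicate a recurring event via a k/v store: the first
    occurrence creates an entry (emit the event); repeats within the
    TTL bump a counter and extend the lifetime instead of emitting a
    new event."""
    now = now if now is not None else time.time()
    entry = kv.get(event_key)
    if entry is None or entry["expires"] <= now:
        kv[event_key] = {"count": 1, "expires": now + ttl_seconds}
        return True   # new event: emit it
    entry["count"] += 1
    entry["expires"] = now + ttl_seconds
    return False      # duplicate: suppressed, lifetime extended
```

    With a shared store, every collector in a group could consult the same counters instead of each one seeing the event as new.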

  7. If there are integrations to get arguments, properties, etc., I agree.  Embedded PowerShell with token references is acceptable (though sometimes very awkward).  Passing values to scripts with the current argument specification methods is not great.  It might be overkill to wrap everything up in JSON for external scripts, but something along those lines could work.

  8. I have been working on a module to pull Cisco PSIRT advisories per device.  It is not complete yet, but I thought it might be interesting to post since it does work, just not properly enough for production use (see below).  It is also an example of how the lack of library support forces cut-and-paste programming even within a single DS :(

    Please note that due to the way LM works, this cannot be deployed on more than a few devices as it stands! Cisco enforces strict limits on the API, and there is no reason to call it more than daily for any platform/version pair, but by default you have no choice. I have some ideas on how to address this, but if LM provided a way to marshal and cache external API calls, I would not have to hack around the issue by running an nginx caching server (TBD) or possibly by writing JSON files to the local filesystem (shudder).

    To use this, you must get an application defined at the Cisco API Console (documented in the technical notes) and you must define the API key/secret from the application in properties.

    Cisco in theory wants you to use ios, iosxe, and nxos as endpoint types along with the version, but there is no reliable way to detect iosxe (that I can find, other than possibly a boot string @ SNMPv2-SMI::enterprises. or SNMPv2-SMI::enterprises.), so I used ios as the default and nxos when matched.
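    The detection fallback described above boils down to something like this (the sysDescr strings in the test are illustrative, not from real gear):

```python
def cisco_os_type(sys_descr):
    """Map a device's SNMP sysDescr to a Cisco PSIRT openVuln endpoint
    type.  Per the note above, IOS-XE is not reliably detectable via
    SNMP, so this defaults to "ios" and returns "nxos" only when NX-OS
    identifies itself."""
    if "NX-OS" in sys_descr:
        return "nxos"
    return "ios"
```

    The cost of the fallback is that IOS-XE devices query the ios endpoint, which may miss XE-specific advisories.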


  9. I may have to do the same, as I realized the data is entirely inaccessible for clear alerts (the MESSAGE token is not provided to integrations).  As a result, the clear-alert subject is always generic even when the alert subject is customized.  This seems more like a bug to me, but it will probably have to be treated as a missing feature.

  10. Any movement on this? I have another use case where caching within LM would be very helpful -- API result caching.  Cisco has an API to look up vulnerabilities based on platform and version, and they also have daily call limits.  There is no reason to query the same platform/version more than once per day, but the "LM way" would be to set up something akin to the MS Patch DS @Mike Suding wrote, so this needs to be bound to AD per host.  I can hack around the issue by writing files, but it already bugs me that module scripts can even write files to begin with.  Actually doing it reminds me how dangerous that is, and it makes me sad.  Having an integrated cache system like this would also allow all collectors in the cache group to leverage the data rather than each caching separately (increasing the API use rate).

    Another place API result caching would be useful is weather lookups for a location -- these only need to be done once per location every so often (30 minutes, perhaps).  Not every device in a location needs to actually look up the weather for its location, but you want it to look that way so each device has the status bound.  And those services have limits (and costs for increasing limits).  Maybe redis is not the specific solution, but something that achieves this would be very helpful.

  11. LMConfig does not send changes.  It can alert that "something changed", which is about as useful as a red light on your car's dashboard saying "something is wrong".

    As for the rest -- the issue is that there is currently no viable way to solve this without bolting on an API method entirely outside of LM.  I can do this, but I hate being forced to do it all the time.  There are some simple fixes, like allowing tables to display properties and allowing properties to be set without cutting and pasting 50+ lines of API code repeatedly (glad you are on that one!), that would make things much better.  I have had numerous conversations with product development folks, and improvements happen all the time.  Unfortunately, most of those are the glitzy, marketable types, and the lower-level nuts and bolts tend to get less attention.

  12. A general table of results would also be awesome.  Unfortunately, the only option for that is reports, since dynamic tables cannot present properties, only numeric datapoints.  Reports cannot be embedded in anything, including email, so they cannot work for this.  It would be super cool to have something like that in a widget, with a bonus if there were a "changed recently" colorization option, or even just a sort by change date.

  13. LMConfig is wrong for several reasons (which I thought I addressed earlier):

      * it is a premium feature that may not be subscribed to, and if it is the only option here, it is a very large mallet for a simple problem
      * LMConfig does not report changes (we work around this via API grabs with a git email-on-push integration, but that is not a general solution for folks)

    Again, this is the example that got me thinking about how best to solve the problem -- another client uses SolarWinds, and they get a notice when the firmware version changes.  With strings and everything.  I thought of a way to solve it that would buy us all a general improvement, and hostProps.set was the answer.  I'll take other solutions, but LMConfig is not it.
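    The hostProps.set idea reduces to a comparison like the following sketch (the property names and notice format are made up; in LM this would live in a collection script reading/writing a resource property):

```python
def firmware_change_notice(previous, current):
    """Compare the firmware version stored on the resource as a property
    last poll against the value just collected, and produce a
    human-readable notice (with the actual version strings,
    SolarWinds-style) only when it changes."""
    if previous is not None and previous != current:
        return "firmware changed: %s -> %s" % (previous, current)
    return None  # first poll or no change: stay quiet
```

    The missing primitive is the write-back: without hostProps.set (or a k/v store), the script has nowhere sanctioned to keep `previous` between polls.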

  14. I actually received a module from support a while back that does this, or at least part of it.  It works, but due to the limited communication channel, the results are similarly limited.  Input is a property, which is scanned via AD to build the instances; those are then checked using the server the property is defined upon.  I just published the version I have now, but since it is code, it could be available soon or in months -- there is no way to know :).




  16. 3 hours ago, Stuart Weenig said:

    Do you need a whole datasource to do this? What timeframe are you looking at for your threshold window? The alert trigger interval does just what you're looking for, if I understand your problem. It can be set to a maximum of 60 poll cycles. If you're polling at 5 minute intervals, that at least gives you a 5 hour window. Since you're not as interested in immediate alerts, you could up the poll interval which would up that threshold window. 


    Alternatively, you could construct a Groovy based complex datapoint to call the LM API to get the data from the last X polls or X minutes/hours/days ago and store that value. Then you could add a complex datapoint to calculate the average temperature increase as the slope of the line intersecting those two points.


    The last is needed as a primitive in LM.  Going to "the API" is a PITA without library support.
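    For reference, the two-point calculation the quoted suggestion describes is just a slope (sample format assumed to be (timestamp_seconds, value)):

```python
def avg_change_per_hour(sample_then, sample_now):
    """Average rate of change computed as the slope of the line through
    two (timestamp_seconds, value) samples -- e.g. degrees of
    temperature rise per hour across the window."""
    (t0, v0), (t1, v1) = sample_then, sample_now
    return (v1 - v0) * 3600.0 / (t1 - t0)
```

    The math is trivial; the pain point is fetching `sample_then` from the API inside a Groovy datapoint without library support.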

  17. The MESSAGE token as it stands is a concatenation of the alert subject and the alert message (custom or otherwise).  Please provide those elements separately, for example as MESSAGESUBJ and MESSAGEBODY.  I have added code to split them out on our end, but this may not always be easy to do.
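    A sketch of the split we do on our end, assuming the subject is the first line of the concatenated token -- an assumption that holds for our templates but may not for all, which is exactly why separate tokens would be more robust:

```python
def split_message_token(message):
    """Split the concatenated ##MESSAGE## token back into (subject,
    body), treating the first line as the subject."""
    subject, _, body = message.partition("\n")
    return subject.strip(), body.strip()
```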

  18. We have concluded after some time that using groups to manage SDT for clients is the only real option (compared to repetitive manual effort by our clients), and we realized we can grant manager access to "SDT groups" to allow clients to add/remove devices on their own.  Except... this also means they could remove the group itself.  We need to be able to apply RBAC controls to group members only, with the parent group protected from change or deletion.

  19. Due to the lack of partitioning for MSP setups, we MUST use FQDNs for all devices, but when scans run, they add the discovered bare names.  Please allow scans to assign an FQDN, either via the scan definition or a property.  FQDNs break things like Windows Cluster checks, but it has to be done.
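    The requested behavior amounts to a normalization step like this sketch (the per-scan domain would come from the scan definition or a property; names already containing a dot are assumed to be qualified):

```python
def to_fqdn(discovered_name, domain):
    """Normalize a scan-discovered bare hostname to an FQDN by
    appending a per-scan domain; leave already-qualified names alone."""
    if "." in discovered_name:
        return discovered_name
    return "%s.%s" % (discovered_name, domain)
```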

  20. There are gobs of reasons to be able to transform the raw tokens into meaningful rather than vanilla output.  Most of the time, the output is just shy of "Something is wrong, good luck."  We created a real template system for Nagios that provided very rich conditional information, with callbacks to collect additional data from impacted systems, and this is one thing I really miss.  LM has shown no apparent interest in doing anything like that, or at least has been silent in every F/R thread where it has been mentioned.  My best option so far has been to send all tokens to our ticket system (a maintenance headache, since the integration must list every token explicitly), but then I hit a wall with email integration bugs.  I need to figure out how to put up a middleware web API instead...