About Brandon

  1. Ah - a tale as old as time. 😄 I think everyone would agree that admin permissions - local or otherwise - aren't ideal. Unfortunately, as you've discovered, many seemingly innocuous read-only actions are locked to administrators in Windows. WMI, CIM methods, remote script execution, etc. - none of it works predictably without granting uncomfortable levels of permission. The alternative to this all-or-nothing approach is agent-based monitoring that runs locally on the targets themselves, which of course has its own nightmarish drawbacks. Who knows - Microsoft might surprise us with a new OS that overhauls user permissions so that, out of the box, remote monitoring can be accomplished safely with reasonable assurance of minimal security implications. A man can dream...

     That being said - if I'm understanding you correctly, it sounds like the solution I posted above might negate the need for the API accounts you mentioned. If you migrate your primary API functions to use the native Java library, you can shut off the API keys entirely and just let the collector do its thing. I'm no security expert, so please correct me if my assertion is incorrect. In any case - please reply to this post if you start playing around with the library. I'm really curious whether it works well for others. So far it's been a lifesaver for me. Cheers!
  2. Hey all,

     So it looks like I'm not the only one trying to find a way to update device properties on the fly using the collector. I'm not sure why hostProps.set() isn't a working function yet, but my workaround involved making API calls whenever device properties needed to be updated right away. Of course, it doesn't make sense to have a collector server do the extra work of making API calls to accomplish this - especially considering it could cause downstream effects like API throttling. So I went through the effort of learning some Java and dissecting the collector jar files to figure out if there was some other way to do it. Here's what I found:

     ```groovy
     import com.santaba.agent.debugger.*

     println "Updating property: system.${LMObj} :: OldValue: ${hostProps.get("system.${LMObj}")} :: NewValue: ${currentObj}"
     task = "!hostproperty action=add host=${hostProps.get("system.hostName")} property=${LMObj} value=${currentObj}"
     HostPropertyTask updater = new HostPropertyTask(task)
     println updater.output
     return updater.exitCode
     ```

     A few words of caution:

     ❗ This code updates SYSTEM properties, NOT AUTO.* properties. This is a very important distinction. This functionality could really ruin your day if you deploy a datasource that updates properties such as system.ips, system.hostname, system.<user>, system.<password>, etc.

     ❗ This does not work unless you update your agent config file (agent.conf) on the collector and restart the service for it to take effect: ❌ groovy.script.runner=sse → ✔️ groovy.script.runner=agent. More on that here.

     ❗ LogicMonitor could break this functionality at any time in a future release. I've only tested it on the last few general releases, where it appears to work well - but it could break at any time.

     ❗ The audit log won't tell you that the host properties were modified. Instead, you only see the results of the change: auto groupmembership, autodiscovery, SDT, etc. If the change itself gets logged somewhere, I don't know where you might find it.

     Lastly, you definitely should not be using this method every time your datasource runs. Make sure to implement some logic to only update the property if and when needed. I have no idea how well this code snippet scales beyond a few hundred resources per minute, and since I haven't found any documentation on it, I'm using it sparingly and not yet relying on it to work 100% of the time. That being said, so far it seems to work quite well. Feel free to report back on how well it worked out if you aren't afraid to scale it like crazy and measure the performance.
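     For what it's worth, the "only update when needed" guard can be sketched like this - a minimal, hypothetical variation on the snippet above that relies on the same undocumented collector internals and the same LMObj/currentObj variables, so treat it as an untested sketch for the collector environment rather than a supported API:

     ```groovy
     import com.santaba.agent.debugger.*

     // Only issue the debug command when the stored value actually differs.
     def oldValue = hostProps.get("system.${LMObj}")
     if (oldValue != currentObj.toString()) {
         def task = "!hostproperty action=add host=${hostProps.get('system.hostName')} property=${LMObj} value=${currentObj}"
         HostPropertyTask updater = new HostPropertyTask(task)
         println updater.output
         return updater.exitCode
     }

     println "system.${LMObj} already up to date - skipping"
     return 0
     ```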
  3. Hi Antony,

     Thanks for this. I have been doing this sort of API device updating via datasource for some time now, and it is definitely not ideal. With propertysources only running once every 24 hours 😣, there is a huge need for collectors to be able to natively and dynamically update device properties whenever they need to do so. I've been digging into some of the Java methods available under the hood on the collectors, and I believe I have found a workaround. Here's the relevant snippet of Groovy code:

     ```groovy
     import com.santaba.agent.debugger.*

     println "Updating property: system.${LMObj} :: OldValue: ${hostProps.get("system.${LMObj}")} :: NewValue: ${currentObj}"
     task = "!hostproperty action=add host=${hostProps.get("system.hostName")} property=${LMObj} value=${currentObj}"
     HostPropertyTask updater = new HostPropertyTask(task)
     println updater.output
     return updater.exitCode
     ```

     As you can see, I'm manually calling on the collector to run the !hostproperty debug command in order to update the property (I know - sneaky, eh?). I'm still watching to see if this gives me any trouble, but so far it appears to be working far better and faster than the API method I was using previously. Of course, any updates to the collectors might break this functionality, but hopefully by then we will see a native hostProps.set() method instead. Cheers!
  4. Please include the storage type for RDS instances as a system property. For example, we would like to monitor the burst balance on gp2 disks, and that datasource would only need to be applied to RDS instances using gp2 storage.
  5. Hi @Stuart Weenig! Thanks so much for taking the time to look into this. I did discover some issues with my output, and it's now properly formatted. Unfortunately, it's still not changing the appearance of the icons. I suspect this is related to the combination of category and type. For example, in order for the AWS EC2 icon to appear on an EC2 instance, I'd guess the output would need to look something like this:

     ```json
     {"rawERIs":[{"category":"","priority":2,"type":"AWS EC2","value":"i-a1b2c3d4e5f6g7"}]}
     ```

     Of course, this is only a theory. I have yet to find any documentation around categories and how they come into play - but it's very likely that I'm not fully grasping how topologysources work. In any case, I have gotten a number of maps to display correctly - it's just that the icons are very similar, which makes it less obvious what type of devices/instances we're looking at.
  6. I've been playing around with topology mapping with some success. However, the icons displayed after the map is built are the standard IP and instance icons. I've modified my custom ERI propertysources to try to get them to display something other than these two icons, but the icons never seem to change. According to the LM documentation regarding these icons, I should be able to choose anything from the list: Has anyone successfully been able to get the map to reflect the icons above? For example, "predef.externalResourceType=AWS EC2" does not appear to do anything. Nor does "LoadBalanceCollector" or "Load Balance Collector", etc. I suspect there is a specific value associated with each of the icons and the value is case-sensitive. However, out of all of the values listed in the image, none have worked for me.
  7. Hi @Nicklas Karlsson and @Jonathan Hill,

     So you guys are running into an issue where the datasource isn't finding any instances. That's no good! The datasource uses remote PowerShell commands from the Windows collector to the target server. I would start by opening an RDP session to a collector and running the script from an elevated PS session. Before you run it, you'll need to modify the script to use whatever creds you have set up for that target server. For a quick test, just check whether these two WMI calls return anything from the remote system:

     ```powershell
     # Replace with the hostname of the target server
     $hostName = '<target server>'
     gwmi -Namespace "root\MicrosoftDFS" -ComputerName $hostName -Query 'SELECT * FROM DfsrReplicatedFolderInfo'
     gwmi -Namespace "root\MicrosoftDFS" -ComputerName $hostName -Query 'SELECT * FROM DfsrConnectionConfig WHERE Inbound = "False"'
     ```

     If these two work, DM me and we can try to figure out what the issue is. I'm guessing we're running into a character limit and I need to account for that. Thanks!
  8. Hi @John Biniewski,

     The alert thresholds are largely going to depend on how large your shares are. You can set the alert threshold to something quite low, like 10, and wait for an alert - but unfortunately, this is one of those times when you'll need to revisit your thresholds to determine what is normal for your environment(s). Wish I could be more help here.
  9. We have begun implementing a tagging standard in our cloud accounts to better control discovered resources and route alerts accordingly. I would like to be able to route alerts by default based on the value of a tag. I'm aware that I can already set up specific users and then achieve exactly what I'm requesting, but I would much prefer to have a blanket rule that uses the tag's value as the recipient email address(es) directly. See the screenshots below for a visual example of how I'd like to structure this automation.
  10. I have built a generic StatusPage.IO datasource to allow for monitoring the status of various services we use. Since so many companies are using StatusPage, I figured it's a good idea to have a heads-up in the event there is an outage with one of our many service providers. This has worked well as an early-warning system for our service desk guys, letting them know about issues before they start getting calls from end users. LogicMonitor actually uses StatusPage, but of course there are many, many others. Attached is a screenshot of the StatusPage data that we've collected. This datasource should be universal - so far it has worked against every site I have tested it against. Exchange locator: NYJG6J
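      If you want to build something similar yourself, the core check is simple, since StatusPage exposes a standard public JSON endpoint (https://<page>.statuspage.io/api/v2/status.json). Below is a minimal, hypothetical Groovy sketch - not the actual datasource shared above - where the URL and datapoint names are placeholders of my own:

      ```groovy
      import groovy.json.JsonSlurper

      // Point this at the provider's StatusPage host (illustrative URL).
      def url = "https://status.example.com/api/v2/status.json"
      def json = new JsonSlurper().parse(new URL(url))

      // Map the textual indicator onto a number LogicMonitor can alert on.
      def severity = [none: 0, minor: 1, major: 2, critical: 3]
      def indicator = json?.status?.indicator

      println "description: ${json?.status?.description}"
      println "statusSeverity=${severity[indicator] ?: 0}"
      ```

      A datapoint can then be keyed off statusSeverity (e.g. alert at >= 2 for major/critical incidents).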
  11. I have been using a custom datasource to collect the metrics for each resource and method (excluding OPTIONS) behind an API Gateway stage. It has been extremely useful in our production environments. I would share the datasource via the Exchange, but the discovery method I'm using will not be universal, so I think it would be best if that discovery were to work natively. If possible, could we please have a discovery method for AWS API Gateway Resources by Stage?

      *Something to note - this has the potential to discover quite a few resources and thus create a substantial number of CloudWatch calls, which might hit customer billing. For this reason, I added a custom property ##APIGW.stages## so that I could plug in the specific stages I wish to monitor instead of having each one automatically discovered. The AppliesTo looks like this: == "AWS/APIGateway" && apigw.stages

      Autodiscovery is currently written in PowerShell (hence why not everyone can take advantage of it):

      ```powershell
      $apigwID = ''; $region = ''
      $stages = '##APIGW.Stages##'
      $resources = Get-AGResourceList -RestApiId $apigwID -region $region
      $stages.split(' ') | %{
          $stage = $_
          $resources | %{
              if($_.ResourceMethods) {
                  $path = $_.Path
                  $_.ResourceMethods.Keys | where{$_ -notmatch 'OPTIONS'} | %{
                      $wildvalue = "Stage=$stage>Resource=$path>Method=$_"
                      Write-Host "$wildvalue##${stage}: $_ $path######auto.stage=$stage"
                  }
              }
          }
      }
      ```
  12. I agree. Adding instance groups with auto-inclusion rules similar to device groups would be extremely useful for alerting, graphing, etc.
  13. We're experimenting with netflow now and we are also struggling with these very real limitations. It would be great if we could get a response as to whether or not enhancements to Netflow are going to be prioritized. Currently we're finding that we have no other choice but to rely on multiple tools to gather this data.
  14. It would be immensely helpful if I could see and test alert routing from the Cluster Alerts page at the device group level similar to the existing Alert Routing button on the Alert Tuning tab. As we begin to more heavily utilize this functionality, it's critical that we can verify that alerts are routed correctly wherever we set it up.
  15. Wow @mnagel! Thanks so much for this! I'm going to look into running this so I can get that list put together. I don't have any plans to systematically delete the datasources - I'm just wanting to compile a list so I can review them. I'll feed the obvious ones into a script as a one-time purge and once I've done that, I can take a closer look at those that should be working, but aren't for whatever reason.