Showing results for tags 'datasource'.



Found 48 results

  1. So we are in the final phases of rolling out LogicMonitor, and now the daunting process of alert tuning is upon us. In our old monitoring solution we had things heavily tweaked and customized, and overall alerts numbered around ~400-600. In LogicMonitor we are currently at 13,000+. We need to seriously tune up the DataSources, and we need a way to show our SMEs what each DataSource is monitoring, what it alerts on, what the thresholds are, etc. Is there a way to export a DataSource's monitoring template to a CSV file so that we can reference it and our SMEs can then say turn off, adjust, etc.? I see in the Reports section there is an "Alert Threshold Report", but that lists out every single datapoint instance on a group/resource, and we don't want that. We need what the base DS template looks at, uses, and applies to each matching resource.
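For anyone attacking the same problem before a native export exists: DataSource definitions, including each datapoint's alert expression, are retrievable through LogicMonitor's REST API, and flattening them to CSV is straightforward. A minimal sketch in Python, assuming the `/setting/datasources` responses carry `dataPoints` entries with `name`, `description`, and `alertExpr` fields (verify against your portal's API version):

```python
import csv
import io

def datasource_datapoints_to_csv(datasources):
    """Flatten a list of DataSource definitions (as returned by the
    LogicMonitor REST API's /setting/datasources endpoint) into CSV
    rows showing each datapoint's alert settings."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["datasource", "datapoint", "description", "alertExpr"])
    for ds in datasources:
        for dp in ds.get("dataPoints", []):
            writer.writerow([
                ds.get("name", ""),
                dp.get("name", ""),
                dp.get("description", ""),
                dp.get("alertExpr", ""),  # e.g. "> 90 95 98"
            ])
    return buf.getvalue()
```

SMEs can then review one row per template datapoint instead of one row per instance, which is exactly the distinction the post draws against the Alert Threshold Report.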
  2. Hello, why is there no way to filter the DataSource Repository items so I can find what I'm looking for more easily? I have to scroll through hundreds of DSes and hope to find what I need. Each column should have a filter at the top so we can shorten the list. Can you add this, please?
  3. Hi, I had an alert yesterday for the Threads_connected datapoint on the MySQL Global Stats datasource. We had actually increased max_connections in the my.cnf config file. Is it possible to either read the my.cnf file (assuming SSH credentials are available) or query the max_connections variable to see what it is actually configured as? This would remove the need to manually check the value in the config and adjust thresholds.
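If SSH access is available, one workaround is a script datasource that pulls the my.cnf file (or runs `SHOW VARIABLES LIKE 'max_connections'`) and reports the configured value as a datapoint. A minimal sketch of the parsing step in Python, assuming typical `key = value` my.cnf syntax:

```python
import re

def parse_my_cnf_value(cnf_text, key):
    """Pull a variable's value out of my.cnf-style text, e.g. after
    fetching the file over SSH. Returns None if the key is not set.
    This simple scan ignores [section] headers and # comments."""
    pattern = re.compile(
        r"^\s*" + re.escape(key) + r"\s*=\s*(\S+)", re.MULTILINE)
    match = pattern.search(cnf_text)
    return match.group(1) if match else None
```

With the configured maximum in hand, Threads_connected can be alerted on as a percentage of max_connections rather than a fixed count.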
  4. Hello, as an MSP we have the need to monitor a lot of directories/shares for the same client. Some of those shares are accessible with the collector user; however, we have some clients that restrict their shares to specific users (not the ones running the collector service). I've tried to create a datasource that is a simple runas where we can pass the user/password as a parameter; however, that isn't possible to run from the collector level (confirmed by LM staff in a case that I raised). Can this be implemented? This feature would be very important, since we monitor 100+ clients.
  5. Hello, as an MSP we have the need to group multiple interface instances (from different devices and different clients as well) in order to set common thresholds, reports, etc. From my research, that isn't possible within LM. Is anyone able to do that? This would be very useful from a monitoring/management perspective. Regards,
  6. mkerfoot

    COVID-19

    Hey all, I built out a datasource to monitor the COVID-19 cases and deaths per state. An API key from Finnhub.io is required to run this DataSource. The API key must be saved as a custom property on the device you want this DataSource applied to. The custom property name needs to be "finhubb.api.key", with the value being the Finnhub.io API key. Finnhub.io offers a free API key tier that allows 60 queries a minute. Locator Code: M9WGM3
  7. It would be great to have some sort of control over a given DataSource's running cycle. We can define the running interval; however, we aren't able to define when it will actually run. For the majority of DataSources that isn't required, but for specific DataSources it would be very useful to have that possibility. Regards,
  8. Hello everyone, I've set up a datasource that retrieves data from a logfile.txt every 10 minutes (located on the collector itself). That logfile is populated by another script (in a proprietary language) that runs at the collector VM level (it dumps its different variables into the logfile.txt). The datasource then retrieves the content of the text file (using a simple PowerShell script) and acts according to the different datapoints that I've set up - up to this point I have no issues (everything works smoothly). My problem starts with the datasource polling/run cycles. The script running on the collector VM runs every 10 minutes (00:00, 00:10, 00:20, etc.) and takes a maximum of 5 minutes to run. What I would like to accomplish is having the datasource in question run right after that maximum run time (e.g., 00:06, 00:16, etc.). I noticed that the polling cycle(s) change sometimes (arbitrarily) and I cannot force them to run when I want. Is there any way I can accomplish this? Thanks guys!! Regards,
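LogicMonitor does not expose control over where in the interval a poll lands, but the collector-side script itself can be offset-aligned: have the producing script (or a wrapper around it) compute how long to sleep so each run starts at a fixed offset past the interval boundary, and keep the datasource simply reading the latest complete file. A hypothetical sketch in Python - the interval and the 6-minute offset come from the post, while the wrapper approach itself is a workaround, not an LM feature:

```python
def seconds_until_next_offset_run(now_epoch, interval=600, offset=360):
    """Given the current time in seconds since the epoch, return how
    long to sleep so the next run lands at interval boundaries shifted
    by `offset` seconds -- e.g. interval=600, offset=360 gives runs at
    00:06, 00:16, 00:26, and so on."""
    phase = (now_epoch - offset) % interval
    return interval - phase if phase else 0
```

A wrapper would call this with `time.time()`, sleep for the returned number of seconds, then invoke the real work in a loop.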
  9. I have built a generic StatusPage.IO datasource to allow for monitoring the status of various services we use. Since so many companies are using StatusPage.io, I figured it's a good idea to have a heads up in the event there is an outage with one of our many service providers. This has worked well as an early warning system for our service desk guys to know about issues before they start getting calls from end users. LogicMonitor actually uses StatusPage, but of course there are many, many others. Attached is a screenshot of the Box.com StatusPage data that we've collected from https://status.box.com. This datasource should be universal to any statuspage.io site. So far it has worked against every site I have tested it against. NYJG6J
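For reference, statuspage.io pages expose a `/api/v2/status.json` endpoint whose `status.indicator` field is one of `none`, `minor`, `major`, or `critical`; since LogicMonitor datapoints must be numeric, a collection script typically maps the indicator to a number. A minimal sketch of that mapping step in Python (the particular numeric values are an arbitrary choice, not something defined by statuspage.io):

```python
# Map statuspage.io's status indicator to a number LogicMonitor can
# alert on (datapoints must be numeric). The values here are arbitrary.
INDICATOR_VALUES = {"none": 0, "minor": 1, "major": 2, "critical": 3}

def status_to_datapoint(status_json):
    """Convert the parsed body of https://<page>/api/v2/status.json
    into a numeric severity; unknown indicators map to -1."""
    indicator = status_json.get("status", {}).get("indicator", "")
    return INDICATOR_VALUES.get(indicator, -1)
```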
  10. I have been using a custom datasource to collect the metrics for each resource and method (excluding OPTIONS) behind an API Gateway stage. It has been extremely useful in our production environments. I would share the datasource via the Exchange, but the discovery method I'm using will not be universal, so I think it would be best if that discovery were to work natively. If possible, could we please have a discovery method for AWS API Gateway Resources by Stage? Something to note - this has the potential to discover quite a few resources and thus create a substantial number of CloudWatch calls, which might hit customer billing. For this reason, I added a custom property ##APIGW.stages## so that I could plug in the specific stages I wish to monitor instead of having each one automatically discovered. The Applies To looks like this:

      system.cloud.category == "AWS/APIGateway" && apigw.stages

      Autodiscovery is currently written in PowerShell (hence why not everyone can take advantage of it):

      $apigwID = '##system.aws.resourceid##'
      $region = '##system.aws.region##'
      $stages = '##APIGW.Stages##'
      $resources = Get-AGResourceList -RestApiId $apigwID -region $region
      $stages.split(' ') | %{
          $stage = $_
          $resources | %{
              if ($_.ResourceMethods) {
                  $path = $_.Path
                  $_.ResourceMethods.Keys | where { $_ -notmatch 'OPTIONS' } | %{
                      $wildvalue = "Stage=$stage>Resource=$path>Method=$_"
                      Write-Host "$wildvalue##${stage}: $_ $path######auto.stage=$stage"
                  }
              }
          }
      }
  11. I am not sure exactly how to describe this other than by example. We created an API-based method a while back to control alerting on interfaces based on the interface description. This arose because LM discovered interfaces that would come and go (e.g., laptop ports), and then would alarm about the port being down. With our change, those ports are labeled with a string that we examine to enable or disable alerting. The fly in the ointment is that if an up and monitored port went down due to some change, our clients think they should be able to change the description to influence behavior. Which they should. Unfortunately, because LM will not update the instance description due to the AD filter, the down condition is stuck until either the description is manually changed in LM or until the instance is manually removed in LM. Manual either way, very annoying. My proposal is that there should be a way to update the instance description even if the AD filter triggers. Or a second AD filter for updates to existing instances. I am sure there are gotchas here and perhaps a better way exists. I considered using a propertysource, but I don't think that applies here. The only other option is a fake DS using the API to refresh the descriptions, but then you have to replicate the behavior of many different datasources for interfaces.
  12. ... with exceptionStatus ORA-01013: user requested cancel of current operation That is all. 😅
  13. As I am sitting here, trying to explain to one of our internal partners, for what seems like the umpteenth time, how to read an alert threshold expression from a ##THRESHOLD## token, it occurs to me that it would be great if there were individual message tokens for each of the thresholds - something like ##WARNINGTHRESHOLD##, ##ERRORTHRESHOLD##, and ##CRITICALTHRESHOLD##, each rendering the comparison operator and that severity's threshold value. That way, I could be much clearer about what this string of numbers actually means.
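Until such tokens exist, the rendered ##THRESHOLD## string can be split apart in whatever system receives the alert (a webhook receiver, a ticketing integration). A hypothetical sketch in Python, assuming the token renders as an operator followed by up to three severity values (e.g. `> 90 95 98`) - check that layout against your own alert messages:

```python
def split_threshold_token(alert_expr):
    """Split a LogicMonitor-style threshold expression such as
    "> 90 95 98" (operator, then warning/error/critical values) into
    the per-severity strings the post asks for as tokens. Severities
    without a configured value come back as None."""
    parts = alert_expr.split()
    op, values = parts[0], parts[1:]
    names = ["WARNINGTHRESHOLD", "ERRORTHRESHOLD", "CRITICALTHRESHOLD"]
    return {name: f"{op} {values[i]}" if i < len(values) else None
            for i, name in enumerate(names)}
```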
  14. For JDBC datasources, please create a token that would enable us to include the JDBC driver exception message in alerts for Query Status datapoint alerts, the ones based on: query status - 1=ok, 2=credential invalid, 3=connection string invalid, 4=connection rejected, 5=driver not supported, 6=connection failure, 7=query failure. This would greatly help us achieve faster time to resolution of incidents when the status code is 6 or 7.
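Pending a native token, the numeric status can at least be translated to readable text wherever the alert lands. A trivial lookup using the code meanings listed in the post:

```python
# Query-status values from the post, mapped to readable text so an
# alert-processing hook can translate the bare number.
JDBC_QUERY_STATUS = {
    1: "ok",
    2: "credential invalid",
    3: "connection string invalid",
    4: "connection rejected",
    5: "driver not supported",
    6: "connection failure",
    7: "query failure",
}

def describe_status(code):
    """Return the human-readable meaning of a query-status code."""
    return JDBC_QUERY_STATUS.get(code, "unknown status %s" % code)
```

This gets a responder to "connection failure" vs. "query failure" immediately, though the underlying driver exception message would still require the requested token.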
  15. We have script DataSources that output useful diagnostic information that helps Operations understand the numeric value when an alert is generated. We want to include the raw output from a DataSource in the alert and email body. What we need is a ##DSRAWOUTPUT## token which contains the complete raw output sent to standard out by a DataSource script. For example, we monitor for processes running under credentials they are not supposed to be running under, and we want to include that info as textual information in the alert/email body.
  16. One of the biggest challenges with LM is the reliance on numbers only in datasources -- the reason is understood, but the result is often messy and many backflips must be done to get a usable result in many cases. I recommend that there be a "lookup" function for numbers that can be associated with datapoints so numeric values can be replaced with text when that is appropriate. An example I just ran into today was after I added the OpenWeatherMap datasource from LM Exchange. It did not include the weather condition, so I added that to the datasource. The values for these range from 200's to 800's indicating "clouds", "rain", etc. You could in theory create a legend for this as is done typically, but the list is too long in practice for 60-odd values (https://openweathermap.org/weather-conditions). If those code-to-string lookups could be defined and referenced, then the value displayed in graphs, in alerts and anywhere else applicable could be displayed as the mapped values, not the original codes. This obviously applies much more widely than this example, but this example shows more clearly how the current method breaks down.
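The requested lookup behavior can today only be approximated outside of LM, e.g. in an alert-processing hook. As a concrete illustration of the OpenWeatherMap case from the post: the condition codes group by their hundreds digit (per openweathermap.org/weather-conditions), so the code-to-string mapping at the group level is compact, even though the full 60-odd-code table is not:

```python
def weather_condition_label(code):
    """Map an OpenWeatherMap condition code to its group name -- the
    kind of code-to-string lookup the post asks LogicMonitor to support
    natively. Groups follow openweathermap.org/weather-conditions:
    2xx thunderstorm, 3xx drizzle, 5xx rain, 6xx snow, 7xx atmosphere,
    800 clear, 80x clouds."""
    if code == 800:
        return "clear"
    groups = {2: "thunderstorm", 3: "drizzle", 5: "rain",
              6: "snow", 7: "atmosphere", 8: "clouds"}
    return groups.get(code // 100, "unknown")
```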
  17. AWS lists that the Listener limit is on a per ELB basis. The AWS_ClassicELB_ServiceLimits datasource seems to intimate that the ListenerUsage is returning the total number of listeners for the given region. Is this useful information to capture on a regional basis or should this be refactored to apply to each classic ELB?
  18. This is my first post in the community, so please let me know if I'm doing anything wrong... I decided to write a datasource that allows you to read in the salient information from a Nest thermostat and alert against it. Namely the current temperature (ambient temp) the target temperature and the humidity. You'll have to create a Nest developer account, and authenticate your account to access your Nest thermostat. Nest's API implementation requires that you create an OAuth access token using something like Postman with your Nest developer account. Step by step instructions to create that access token are here: https://codelabs.developers.google.com/codelabs/wwn-api-quickstart/#4 Once you have that, set both your thermostat ID as well as your access token as custom properties for authentication, nest.accesstoken and nest.thermostatid. You'll put these properties in after you add developer-api.nest.com as a device in LogicMonitor. Assuming everything is correct, you should start getting data. The identifier code in the Exchange: L6MW9T N699G4 Updated with new HTTP commands to use collector rather than the JRE. Let me know if you have any questions about it!
  19. I'm trying to clean up datasources that are in our account that do not have any instances associated with them and likely never will. Currently I have to do this manually by inspecting each datasource in the GUI. It would be really great if the datasource instance count were returned as a property. Even better would be if the instances and associated device IDs were returned as well, but for now I'd be happy with just the device/instance counts.
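As a stopgap, the per-device datasource listing in the REST API does report instance counts, so unused datasources can be found programmatically. A minimal sketch of the filtering step in Python, assuming `/device/devices/{id}/devicedatasources` items carry `dataSourceName` and `instanceNumber` fields - verify those names against your portal's API documentation:

```python
def unused_datasources(device_datasources):
    """Given the items from a device's devicedatasources listing
    (LogicMonitor REST API), return the names of applied datasources
    with zero instances -- candidates for cleanup."""
    return [ds["dataSourceName"]
            for ds in device_datasources
            if ds.get("instanceNumber", 0) == 0]
```

Running this across all devices and intersecting the results would identify datasources with no instances anywhere in the account.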
  20. Often when an alert pops up, I find myself running some very common troubleshooting tools to quickly gather more info. It would be nice to get that info quickly and easily without having to go to other tools when an alert occurs. For example - right now, when we get a high CPU alert, the first thing I do is run pslist -s \\computername (PSTools are so awesome) and psloggedon \\computername to see who's logged in at the moment. I know it's possible to create a datasource to discover all active processes and retrieve CPU/memory/disk metrics specific to a given process, but processes on a given server might change pretty frequently, so you'd have to run active discovery frequently. It just doesn't seem like the best way, and most of the time I don't care what's running on the server - I only need to know "in the moment." A way to run a script via a button for a given datasource would be a really cool feature. Maybe the datasource could hold a "gather additional data" or metadata script, which could then be invoked manually on an alert or datasource instance; i.e., when an alert occurs, you could click a button in the alert called "gather additional data" which would run the script and produce a small box or window with the output. The ability to run it periodically (every 15 seconds or 5 minutes, etc.) would also be useful. This would also give a NOC the ability to troubleshoot a bit more or provide some additional context around an alert without everyone having to know a bunch of tools or have administrative access to a server.
  21. The concept of OpsNotes is growing on me. What would really make it shine would be a Groovy helper class/client that can insert an OpsNote from within a Datasource definition for the device context without resorting to using the REST API. Use Case-- we have a datasource for SQL Servers to count the number of blocking sessions on a per database basis. The DBAs want a way to capture some of the session metadata with this metric. OpsNotes seem like a potential way of capturing this.
  22. Code is TXL3W9. This DataSource provides instances for each of the network adapters, including the following Instance Level Properties:
      • auto.TcpWindowSize
      • auto.MTU
      • auto.MACAddress
      • auto.IPSubnet
      • auto.IPAddress
      • auto.DNSHostName
      • auto.DNSDomain
      • auto.DefaultIPGateway
      • auto.SettingID
      • auto.Description
  23. I will explain the request here as best I can in written format, but a visual would make it simpler. By example of FortiGate: we had a requirement to monitor the WAPs (the FortiGate can act as a Wireless LAN Controller, or WLC) for reporting and performance. Depending upon the size of the firewall, 1 through N WAPs can be authorized, configured, and monitored. These devices are visible through SNMP queries against the firewall and offer a rich set of information (CPU, memory, version, SN, model, client associations). Now to the feature - when we added the capabilities, it became time consuming to first identify the OID and then repeatedly poll to pin down the elements. So, how about a "button" in the UI next to the OID to walk or poll and display the results? Additionally, how about offering the results on that page with an option to add them to the datasource? Look at the Forti AP definition for what we did, and try to visualize the simplicity of having the UI trigger a poll on a device (through the associated collector) to make this a breeze.
  24. When cloning an existing datasource and changing the type from one type to another similar type, all of the datapoints immediately disappear. I am overhauling some of my script and webpage datasources where the output is json or key-value pair into batchscript. As soon as I change the collection type, the datapoints immediately disappear and it's like I'm starting from scratch. The script auto-discovery and collection panes also get wiped out. Unfortunately this has translated into me spending a lot of time recreating datapoints instead of modifying existing ones. I understand why those datapoints are not compatible with SNMP or CIM collection methods, but they should largely be transferable between certain datasources. On the bright side - this has forced me to start overhauling my datasources by exporting and importing the XML files. Still, it would be great if the GUI could play nice for some of us power users.
  25. Please add a test function for JDBC active discovery and collection, just as you have done for scripted DataSources.