Search the Community

Showing results for tags 'DataSources'.



Found 14 results

  1. I have a version of the "Oracle_DB_BlockedSessions" datasource template deployed and have set an alert threshold on a complex datapoint that accounts for WAIT_TIME and SECONDS_IN_WAIT. Here is the complex datapoint expression, for those curious: if( eq(if(un(WAIT_TIME),0,WAIT_TIME), 0), if(un(SECONDS_IN_WAIT_RAW),0,SECONDS_IN_WAIT_RAW), 0) (a rough sketch of the logic this expression encodes follows this list). If the complex datapoint has a value over 300 seconds, an alert triggers with all the enriched instance-level autoProps from the Active Discovery script. All other aspects of this template mirror the gold-standard version, including enabling the "Automatically Delete Instance" option. Enter Client X, who is comfortable with a threshold of 900 seconds. How can I set this custom threshold at a resource group for Client X when they don't currently have any blocking sessions? If I do manage to catch and set this Alert Tuning customization while Client X has a blocking session, will the tuning get wiped out when the DSIs are removed automatically? I suppose the Active Discovery script could be modified to always output a dummy instance... but that leaves an unpleasant taste in my mouth. Aside from cloning the datasource just for Client X, are there any other alternatives? And no, I do not want to alert off the "Oracle_DB_BlockedSessionOverview" template, because it doesn't do a good job of discerning between one really long blocking session and several sequential, short-lived sessions that happen to exist at the time of the poll.
  2. Can I also make a feature request to retain custom thresholds/attributes when updating LogicModules (optionally, perhaps via a toggle to choose between overwriting them or leaving them as-is)? I did notice related requests in the past, and it seems this has not yet been released.
  3. Hi everybody! I've been using Mike Suding's monitoring solution for a while, and I've expanded it a bit to monitor more of Office365. The monitors included:
     Custom Domains (quantity): WHEKJJ
     Deleted Users (quantity): ZHADY9
     Global Admins (quantity): 7GGZWZ
     Licenses Assignable (quantity, based on type): R46EGX
     Licenses Assigned (quantity, based on type): GHRNLL
     Licensed / Unlicensed Users (quantity): 4PJZJ4
     MFA Users (quantity of enabled/disabled users): WZFAWK
     Users and devices in the Office365 tenant (quantity; devices if clients are joined to Azure AD): PLMP22
     Hope they are helping the community out!
  4. It would be great to be able to create alerts from multiple datapoints across multiple datasources, for example, alert if CPU is above 30% and SQL database lock timeouts are above 1000. I can see many use cases for alerting on datapoints that relate to datapoints in other datasources.
  5. Is there a pre-packaged datasource or service that runs a continuous traceroute between the Collector and a host, and graphs the results?
  6. Hello LogicMonitor Community! Andrey here from LM's Monitoring Engineering team. To supplement the suite of scripted LogicModules provided in the LM Support Center, my colleagues and I have created a public GitHub repository with many more examples. Have a look at the LogicMonitor GitHub repo, where you'll find various recipes for solving common monitoring script problems. Both Groovy and PowerShell examples are included. Feel free to comment below with requests and/or suggested improvements. Happy scripting!
  7. Hi there. I want to request the ability to import data into an instance. Specifically, I am hoping for a way to add historical data to a new instance that is monitoring the same objects, with the same metrics labeled as the exact same datapoints; only the source is different. In our case, we need to create a new datasource that captures the same data from our old devices plus a number of new ones, with more to come in the future. The old datasource used SNMP for collection, and we want to replicate it in the new datasource but collect everything via scripted API calls, since the new devices do not support SNMP. For proper monitoring and troubleshooting, we need the historical data from the old instances of the old devices available in the same datasource as the new ones. For that reason, we would like the ability to add historical data to a new instance after exporting it from a separate, older instance, especially when the datapoint names match. I have a feeling this case is relevant to other users as well.
  8. I'm currently working on a project to validate that every configured LM alert has associated documentation; our documentation should include the datasource name as part of the title. Furthermore, I need to filter out devices that are not in my team's escalation path/rule, e.g. devices in the "dev" environment. I'm pulling a list of datasources using the REST API '/setting/datasources' endpoint, using a filter to grab only those that match my team's escalation path/rule. I then pull the datasource properties, specifically the "appliesTo" value in the JSON response, and convert those values into filters that can be used on the 'getDevices' REST API endpoint. Is there an easier and/or better way to match datasources back to devices (one possible approach is sketched after this list)? The 'appliesTo' values returned do not use the same structure and naming convention as the 'getDevices' filters.
  9. You only have support for an older version of Cassandra (v2).
  10. Good morning all, we are a multi-team managed service provider who, surprisingly, manages customers team-wise. We are in the process of switching to LogicMonitor right now, and we have some issues regarding shared devices and their DataSource Instances, which belong to different teams. We created device groups for our teams, but if we put shared devices such as load balancers, firewalls, storage arrays, etc. into those groups, ALL teams will see ALL alarms, regardless of responsibility. If one team decides to turn off an alarm on a shared device, the alarm is obviously deactivated for all other teams as well. I'm aware of the DataSource Instance grouping mechanism, but as far as I know it can't help with the problems I mentioned. We would like to request a feature where you can somehow mask datasources, their instances, and their respective alarms based on the group from which you are viewing the device, so alarms and instances that don't belong to your team don't show up at all. It would be really convenient to be able to configure a "view" of a device based on the group you are in that doesn't interfere with the device's datasources. Bonus points if you bring alert tuning into this somehow. Regards, Bastian
  11. I am not quite sure how to express this in the UI, but there needs to be a way to indicate that, for certain device types, a minimum set of datasources is required to be associated. We just had a case where the WinExchangeMBGeneral datasource (32 datapoints, some critical) simply didn't get associated, and we only found out after a major event. The Active Discovery stuff is very nice, but it is a "fail open" sort of thing: if AD does not operate successfully, you have no way to know without heavy investigation. My idea is that once a device has been identified as a particular type (like an Exchange server), there is a set of datasources that must be discovered, and failure to do so is called out as an alert condition (a rough external check along these lines is sketched after this list). As it stands, you end up relying largely on luck to figure out whether this problem has happened.
  12. Please make it possible to search by fields other than name and keywords. For example, I would like to search for all PowerShell datasources so I can have a few working examples within easy reach. I'm not sure which fields should be in scope, but if in doubt, I would say all of them! Thanks, Mark
  13. Currently it is possible to control discovery of instances for a datasource with filters. It is not, however, possible to avoid data collection in a similar manner. Adding a collection filter (bypass?) capability could greatly improve performance, especially in cases like interfaces, where 25+ datapoints are collected. In that case, a collection filter might designate "primary" datapoints that are always collected and then, depending on their state, skip the rest (e.g., when ifAdminStatus == down or ifOperStatus == down). I know this adds complexity, which is not good, but a generic facility for bypassing heavy data collection that isn't really needed would be a great addition to LM. Thanks, Mark
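
Regarding the complex datapoint expression in result 1: here is a minimal Python sketch of the logic it encodes, assuming LogicMonitor's un() returns true when a datapoint has no value and that if()/eq() behave as the usual conditional and equality functions. The function name and the example values below are illustrative only.

# Rough Python rendering of:
# if( eq(if(un(WAIT_TIME),0,WAIT_TIME), 0), if(un(SECONDS_IN_WAIT_RAW),0,SECONDS_IN_WAIT_RAW), 0)
# None stands in for an undefined datapoint (what un() tests for).
def blocked_wait_seconds(wait_time, seconds_in_wait_raw):
    wait_time = 0 if wait_time is None else wait_time
    seconds = 0 if seconds_in_wait_raw is None else seconds_in_wait_raw
    # In Oracle's V$SESSION, WAIT_TIME of 0 indicates the session is still
    # waiting, so only then do we report how long it has been waiting.
    return seconds if wait_time == 0 else 0

# A session waiting 1200s breaches both the default 300s threshold and
# Client X's 900s threshold; a session no longer waiting reports 0.
print(blocked_wait_seconds(0, 1200))   # -> 1200
print(blocked_wait_seconds(5, 1200))   # -> 0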
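
For result 8: rather than translating appliesTo expressions into getDevices filters, one alternative is to ask the API which datasources are actually applied to each device via the devicedatasources sub-resource. The sketch below is only an outline, not a definitive implementation: COMPANY, ACCESS_ID, and ACCESS_KEY are placeholders, it uses LMv1 request signing, and the response field names (name, appliesTo, displayName, dataSourceName) are assumptions that should be checked against your portal's API version.

import base64, hashlib, hmac, time, requests

COMPANY    = "yourportal"      # placeholder portal name
ACCESS_ID  = "REPLACE_ME"      # placeholder API token ID
ACCESS_KEY = "REPLACE_ME"      # placeholder API token key

def lm_get(path, params=None):
    """GET a LogicMonitor REST resource using LMv1 request signing."""
    epoch = str(int(time.time() * 1000))
    msg = "GET" + epoch + path                      # verb + epoch + (empty body) + resource path
    digest = hmac.new(ACCESS_KEY.encode(), msg.encode(), hashlib.sha256).hexdigest()
    auth = "LMv1 %s:%s:%s" % (ACCESS_ID, base64.b64encode(digest.encode()).decode(), epoch)
    url = "https://%s.logicmonitor.com/santaba/rest%s" % (COMPANY, path)
    body = requests.get(url, headers={"Authorization": auth}, params=params or {}).json()
    return body.get("data", body).get("items", [])  # tolerate both response envelopes

# Datasource names we care about (filter further by escalation path as needed).
team_datasources = {ds["name"] for ds in lm_get("/setting/datasources",
                                                {"fields": "id,name,appliesTo", "size": 1000})}

# Ask each device which datasources were actually applied to it, instead of
# re-evaluating appliesTo ourselves (field names here are assumptions).
for device in lm_get("/device/devices", {"fields": "id,displayName", "size": 1000}):
    applied = lm_get("/device/devices/%d/devicedatasources" % device["id"],
                     {"fields": "dataSourceName", "size": 1000})
    matched = team_datasources & {d["dataSourceName"] for d in applied}
    print(device["displayName"], sorted(matched))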
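
And for result 11: until a required-datasource alert exists in the product, a check along these lines could be scripted externally. This is only a sketch under the same assumptions as the previous example (it reuses the lm_get() helper defined there), with the required set chosen per device type as an example.

# Hypothetical external check: flag a device that is missing "must have"
# datasources for its type. Reuses lm_get() from the previous sketch.
REQUIRED = {
    "Exchange server": {"WinExchangeMBGeneral"},   # example required set; extend as needed
}

def missing_datasources(device_id, required):
    applied = {d["dataSourceName"]
               for d in lm_get("/device/devices/%d/devicedatasources" % device_id,
                               {"fields": "dataSourceName", "size": 1000})}
    return required - applied

# Example: device 123 has been identified as an Exchange server.
gaps = missing_datasources(123, REQUIRED["Exchange server"])
if gaps:
    print("ALERT: required datasources not associated:", sorted(gaps))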