Leaderboard


Popular Content

Showing content with the highest reputation since 01/25/2018 in all areas

  1. 2 points
    Allow devices to be dependent on one another. If a router goes down, the switch behind it will most likely go down or have an error as well.
  2. 1 point
    I have published a PowerShell module, which wraps part of the REST API, to the PowerShell Gallery. Please feel free to make requests (or send me cmdlets you want added). https://www.powershellgallery.com/packages/LogicMonitor/
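For anyone calling the REST API directly rather than through the module above, the LMv1 token construction can be sketched in Python. This is a minimal, unofficial sketch: the access ID/key values are placeholders, and the exact signing recipe should be confirmed against LogicMonitor's REST API documentation.

```python
import base64
import hashlib
import hmac
import time


def lmv1_auth(access_id, access_key, http_verb, resource_path, data="", epoch_ms=None):
    """Build an LMv1 Authorization header value (sketch).

    The token signs verb + epoch(ms) + request body + resource path with the
    access key (HMAC-SHA256), hex-encodes the digest, then base64-encodes
    that hex string: "LMv1 <accessId>:<signature>:<epoch>".
    """
    if epoch_ms is None:
        epoch_ms = str(int(time.time() * 1000))
    message = http_verb + epoch_ms + data + resource_path
    digest = hmac.new(access_key.encode(), message.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return "LMv1 %s:%s:%s" % (access_id, signature, epoch_ms)


# Placeholder credentials; a fixed epoch makes the output reproducible.
header = lmv1_auth("myAccessId", "myAccessKey", "GET", "/device/devices",
                   epoch_ms="1500000000000")
print(header)
```

Pass the result as the `Authorization` header alongside `Content-Type: application/json`.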
  3. 1 point
    I'd like to be able to assign roles down to the server AND datasource level. As of right now I only see an option to lock a role down to specific groups, but I'd like this opened up so you could select specific servers and specific datasources for those servers. The specific use case: a consultant is helping out, and I'd rather not give them access to everything in LogicMonitor, only the two servers they need to look at. I realize there are probably "workarounds" to accomplish this, but it would be a pain to have to move servers to different groups each time we wanted to do something like this.
  4. 1 point
    We would like to use our dashboard at a Kiosk. Any way of passing the credential via the URL using the API key or User credential?
  5. 1 point
    Cisco_IOS excludes the ASR platform. Is there a plan to create a unique configsource for the ASRs or update Cisco_IOS?
  6. 1 point
    It is becoming very clear that we cannot rely on parameters in LM to drive scripts, either because some tokens are mysteriously unavailable for use as parameters (discovered only after assuming they would be), or because tokens have length limitations that preclude using them for data passed to LogicModules. Please consider integrating a distributed key/value store such as redis into LM, with data replicated among collectors. This would help with access to configuration data as well as cross-run results within or across datasources. Ideally this would work natively with Groovy and PowerShell.
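The kind of cross-run store the request above describes could be prototyped inside a script today. Below is a minimal Python sketch; the `CollectorKV` class and its key scheme are hypothetical illustrations, with a plain dict standing in for a replicated backend such as redis:

```python
class CollectorKV:
    """Sketch of a cross-run key/value store for LogicModule scripts.

    `backend` is anything dict-like with item assignment and .get(); here a
    plain dict, but in production it could be a replicated store such as
    redis (hypothetical deployment, as proposed in the post above). Keys are
    namespaced as device:datasource:key so scripts running for different
    devices or datasources do not collide.
    """

    def __init__(self, backend):
        self.backend = backend

    def _key(self, device, datasource, key):
        return "%s:%s:%s" % (device, datasource, key)

    def put(self, device, datasource, key, value):
        self.backend[self._key(device, datasource, key)] = value

    def get(self, device, datasource, key, default=None):
        return self.backend.get(self._key(device, datasource, key), default)


# Demo: store a result from one run, read it back in a later run.
kv = CollectorKV({})
kv.put("router1", "NTP", "last_offset_ms", 12.5)
print(kv.get("router1", "NTP", "last_offset_ms"))  # -> 12.5
```

The namespacing is the important part: it lets many datasources share one store without coordinating key names.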
  7. 1 point
    We have had an issue where, during maintenance, temporary setups are auto-discovered that should not have been, and these start alerting once SDT on the device has finished. E.g., temporarily online network ports get monitored, and some HTTPS interfaces that are normally disabled get monitored because they were enabled during maintenance and auto-discovery found them.
  8. 1 point
    The entire RBAC mechanism is way too coarse. I had a client ask yesterday why they can't disable alerts for a device group. As far as I can see, that comes along with Manage, and I see no reason why this should be true -- I don't want them to have that level of control but it is all or none -- RBAC granularity improvements are sorely needed.
  9. 1 point
    I would like this functionality to cover the case where a new data point is added in the middle of a month: there is no data for the time the sensor didn't exist, and all of that gets counted against our SLA. The only options we have now are: (1) exclude the device from the report, which means next month we have to remember to go back and re-add the device to the report; or (2) set missing data to not count against the SLA, which makes the SLA completely invalid, because when a device is down it is missing data.
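The prorating behavior requested above is simple arithmetic: compute availability only over the window in which the datapoint actually existed. A hedged Python sketch (the function name and inputs are illustrative, not an LM API):

```python
def prorated_sla(up_s, period_s, existed_s):
    """Availability prorated to the datapoint's lifetime (sketch).

    up_s      -- seconds the sensor reported "up"
    period_s  -- seconds in the full reporting period (e.g. the month)
    existed_s -- seconds the datapoint actually existed within the period

    Only the existence window is used as the denominator, so the
    pre-creation gap is not charged as downtime.
    """
    assert 0 < existed_s <= period_s and 0 <= up_s <= existed_s
    return 100.0 * up_s / existed_s


# Sensor added halfway through a 30-day month, up the whole time it existed:
print(prorated_sla(up_s=15 * 86400, period_s=30 * 86400, existed_s=15 * 86400))  # 100.0
```

With a naive full-month denominator the same sensor would report 50%, which is the distortion the post describes.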
  10. 1 point
    When a monitor malfunctions (e.g. a service test fails because the monitored site's HTML was updated), the uptime for that test will not reflect the site's actual uptime. We'd like to be able to apply retroactive SDT-like windows that would prevent an alert period from counting against a test's uptime. I thought this ability was already available by applying retroactive SDTs, but apparently this isn't actually working. Without this feature, a dashboard SLA widget might report a service/device as being up 70% when it was really up 90%, with a faulty test/datasource for 20% of that time. We should be able to apply an SDT to keep that 20% from negatively impacting the uptime.
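The 70% -> 90% arithmetic above can be made concrete. A minimal Python sketch, assuming excused downtime (downtime falling inside a retroactive SDT-like window) is counted as up; this is an illustration of the requested behavior, not LM's implementation:

```python
def adjusted_uptime(total_s, down_s, excused_down_s):
    """Uptime (%) after excusing downtime inside a retroactive SDT window.

    total_s        -- total seconds in the reporting period
    down_s         -- seconds recorded as down
    excused_down_s -- seconds of that downtime covered by the SDT window
                      (e.g. while the test itself was faulty)

    Excused downtime is treated as up, mirroring the 70% -> 90% example.
    """
    assert 0 <= excused_down_s <= down_s <= total_s
    return 100.0 * (total_s - down_s + excused_down_s) / total_s


# 100 hours total, 30 down, 20 of those hours under a faulty-test SDT:
print(adjusted_uptime(100, 30, 20))  # 90.0
```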
  11. 1 point
    Amazon Web Services (AWS) announced a free time synchronization service at re:Invent today. The announcement is at https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/. Interestingly, they recommend that people uninstall NTP and install chrony instead. I tried this on a local Linux host, and LM now reports the following error: "Alert Message: LMD2375 warn - 127.0.0.1 NTP ntpqNotFound ID: LMD2375 The ntpq binary was not found on the agent monitoring 127.0.0.1. Please install the ntpq binary (typically by yum install ntp, or apt-get install ntp)". If LM customers running in AWS follow the recommendation, their NTP monitoring will fail. Could/will LM support chrony in addition to ntpq?
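As a stopgap, a script datasource could detect which client is installed and query it accordingly. A hedged Python sketch: the fallback logic is an assumption, not LM's implementation, and only the standard `ntpq`/`chronyc` binaries and their usual subcommands are referenced.

```python
import shutil


def pick_ntp_tool(which=shutil.which):
    """Choose a time-sync query command based on installed binaries (sketch).

    Prefers classic ntpd tooling, falls back to chrony. `which` is
    injectable so the logic can be tested without either binary installed.
    """
    if which("ntpq"):
        return ("ntpq", ["ntpq", "-p"])
    if which("chronyc"):
        return ("chronyc", ["chronyc", "tracking"])
    return (None, None)


# Demo with a fake lookup simulating a chrony-only host (e.g. per the AWS advice):
tool, cmd = pick_ntp_tool(which=lambda name: "/usr/bin/chronyc" if name == "chronyc" else None)
print(tool, cmd)  # chronyc ['chronyc', 'tracking']
```

The chosen command's output would then be parsed for offset/stratum datapoints; the parsing differs between the two tools.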
  12. 1 point
    It is apparently considered a feature (not a bug) that when you turn off alerting for a device group, any previously active alert in that scope will trigger a CLEARED alert (even when the alert was not really cleared, just disabled), potentially generating a storm of false recovery alarms delivered via email, etc. This is not well documented and violates the principle of least surprise, but I am told by support that this is not considered a bug at all, just expected platform behavior. Please add a way to disable alerts so that when you do, alerts are not generated. It feels really silly to even have to write that last sentence, but it seems I do... Thanks, Mark
  13. 1 point
    Basically what was made available for dashboards in the previous release, but for datasources! A user with permission on a folder of datasources would not have the ability to change any other settings, i.e. would not need the "view/manage settings" permission to be able to make changes to the datasources they have access to.
  14. 1 point
    In the MSP space it would be useful to be able to force SSO partially based on IP address (e.g. from our office, their home office, etc.), where we could maintain a list of addresses that have opted into forced SSO, without preventing our clients (historically with read-only access) from logging in with local accounts. This would save our guys a few clicks each day, and I think that could add up across all of LM's customers.
  15. 1 point
    Our Infrastructure team uses the default thresholds on all datasources, and we receive all Warning, Error, and Critical alerts for all datasources. We have various teams that want to be notified on a small subset of devices that use different thresholds than the defaults. Currently, in order to accomplish this, we have to clone the datasource, set the desired threshold settings, and configure a new alert rule to notify the secondary team. This new alert rule has to have a lower priority set than the default Warning alert. The problem with this approach is that the original default Warning alerts for these devices will not be sent to the Infrastructure team, due to the alert rule for the cloned datasource having a higher priority. I've worked with other monitoring applications that allow multiple alert notifications to different teams using different thresholds. It would be nice to have this feature in LogicMonitor.
  16. 1 point
    Currently, a table must have static columns and rows defined before the widget will display data. It would be great to be able to build a table's rows dynamically. To expand on this, it would be great for the table to have the option to exclude instances with zero/no data from the list. For example, I would like a table that displays all MSMQ queue names and the number of messages in each queue, but that does not display anything if the current queue length = 0.
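The zero/no-data filter described above is straightforward to express. A minimal Python sketch; the row format (name, value pairs) is illustrative, not the widget's actual data model:

```python
def visible_rows(instances):
    """Drop table rows whose datapoint value is zero or missing (sketch).

    `instances` is a list of (name, value) pairs; both 0 and None are
    filtered out, matching the "exclude instances with zero/no data"
    behavior requested for the table widget.
    """
    return [(name, value) for name, value in instances if value]


# MSMQ-style example: only non-empty queues should appear in the table.
queues = [("orders", 12), ("billing", 0), ("audit", None), ("alerts", 3)]
print(visible_rows(queues))  # [('orders', 12), ('alerts', 3)]
```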
  17. 1 point
    Has this been implemented yet?
  18. 1 point
    Hey there, we have a client that does a multicast video stream, and sometimes that stream drops; they have no way of noticing unless a user calls in about it. Has anyone developed anything to monitor multicast from a Cisco switch, or to monitor the multicast stream from the network? Searches only turn up information from years ago (yes, not many people do multicast streams anymore). Just curious if anyone has something like this and has found a way to monitor it.
  19. 1 point
    GX2WXT A single Lambda function might have several versions. The default Lambda datasource monitors and alerts on the aggregate performance of each Lambda function. Using the Alias functionality in AWS, this datasource returns CloudWatch metrics specifically for the versions to which you have assigned aliases, allowing you to customize alert thresholds or compare performance across different versions of the same function. This datasource does not automatically discover aliases and begin monitoring them (as this could very quickly translate into several aliases being monitored and drive up your CloudWatch API bill). Instead, add only the aliases you want monitored by adding the device property "lambda.aliases" either to individual Lambda functions or at the group level if you're using the same alias across several Lambda functions. To add more than one, simply list them separated by a single space, e.g. "Prod QA01 QA02". If an alias does not exist, no data will be returned. This datasource is otherwise a clone of the existing AWS_Lambda datasource, with the default alert thresholds.
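The `lambda.aliases` parsing described above could be sketched as follows. This is an illustration, not the datasource's actual code; the CloudWatch dimension names (`FunctionName`, plus `Resource` as `function:alias`) follow AWS's documented conventions for per-alias Lambda metrics.

```python
def alias_instances(function_name, aliases_prop):
    """Expand a space-separated lambda.aliases property into per-alias
    CloudWatch dimension sets (sketch).

    Each alias becomes one monitored instance whose metrics are scoped via
    the Resource dimension "<function>:<alias>".
    """
    instances = []
    for alias in aliases_prop.split():
        instances.append({
            "alias": alias,
            "dimensions": [
                {"Name": "FunctionName", "Value": function_name},
                {"Name": "Resource", "Value": "%s:%s" % (function_name, alias)},
            ],
        })
    return instances


# Property value from the example above, applied to a hypothetical function:
for inst in alias_instances("my-func", "Prod QA01 QA02"):
    print(inst["alias"], inst["dimensions"][1]["Value"])
```

The resulting dimension lists would be fed to a CloudWatch `GetMetricStatistics`-style query, one per alias, which is why each extra alias adds API calls.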
  20. 1 point
    I have run into too many cases now where a new but slightly different DS is set up due to LM support actions, upgrades, etc., and the result is lost or non-continuous data. A good example I recently encountered is with NTP. The standard DS was not working in all cases. I was given a new DS that uses Groovy, and it works (which I appreciate!). But the datapoint list and names have changed, and even if they had not, there is no way to maintain data history from the old DS to the new DS. My recommendation is to add a migrate function so you can indicate how to map old datapoints to new ones in such a situation and thus avoid data loss. Building a default migration ruleset into a new DS would be a bonus -- this could allow for zero-touch data migrations in at least some cases. Thanks, Mark
  21. 1 point
    Hello, I was wondering if someone knows whether it is possible to have LM open a relay using a program such as PuTTY rather than in the browser. I understand that this may not be possible; it is more of a quality-of-life request than a necessity. Preferably, both the local user's SSH application (PuTTY, etc.) and the LM Remote Session web client would be usable -- e.g. a second button, or an option to choose either one after pressing the Remote Sessions button.
  22. 1 point
    I see a need for alerting on deviation from a rolling average.
    Example 1: temperature alerts on hardware are based on a fixed baseline (default or manually adjusted) or on a fixed delta. In real-world use it would make a lot more sense to alert on deviation from a 5-day or 30-day rolling average temperature for the box. Units alarm on the weekends because the office shuts off the AC during the summer, or they alert during the week, 9-5, because in the winter the offices crank the heat. A fixed threshold ignores the nuance of the expected range and average for the location; the alerting should simply be based on how far outside the average range for the site we are. My Nashville facility hovers from 56 to 59 all week; I have the threshold set at 57, so I get alerts at least once a weekend. I could move it to 59, but that's a band-aid. The real solution would be to have the software track the last 30 days and alert when we're outside the norm for that location. Furthermore, with hardware it is not the specific temperature that kills the equipment, it's the rate at which the temperature changes. So the alerts should be based on the average range the system has seen in the last 30 days, and fire only when the rate of change accelerates -- but I imagine that request would be more challenging to reduce to an algorithm.
    Example 2: ping times. I have sites where the latency range is extreme (Mumbai, Johannesburg, Taipei, etc.). I wish Ping would track the 30-day range and the common deviation from the norm, and alert when a site sees latency way outside the expected fluctuation range -- e.g. 30 ms typical 90% of the time, plus 200-500 ms spikes 10% of the time. When ping times hit 300 ms for more than 10% of the last hour of sampling, send a warning to inform of a change in trend, rather than applying a fixed threshold to an immediate sample.
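The rolling-norm idea in the post above reduces to a small algorithm. A hedged Python sketch, assuming a simple min/max band over a sliding window of samples; a real implementation would more likely use a mean/standard-deviation band and rate-of-change checks:

```python
from collections import deque


class RollingDeviationAlert:
    """Flag values outside the range observed in a sliding window (sketch).

    Keeps the last `window` samples (e.g. 30 days of readings) and marks a
    new value as out-of-norm if it falls outside the observed min/max by
    more than `tolerance`. No alerts fire until the window is full.
    """

    def __init__(self, window, tolerance=0.0):
        self.samples = deque(maxlen=window)
        self.tolerance = tolerance

    def check(self, value):
        out_of_norm = False
        if len(self.samples) == self.samples.maxlen:
            lo, hi = min(self.samples), max(self.samples)
            out_of_norm = value < lo - self.tolerance or value > hi + self.tolerance
        self.samples.append(value)
        return out_of_norm


# Nashville-style example: temps hover 56-59, so 58 is normal and 65 is not.
mon = RollingDeviationAlert(window=4, tolerance=1.0)
for t in (56, 57, 59, 58):
    mon.check(t)          # warm-up: fills the window, never alerts
print(mon.check(58))      # False -- within the observed range
print(mon.check(65))      # True  -- well outside the rolling norm
```

The same structure applies to the ping example: feed latency samples in, and alert on trend breaks instead of a fixed threshold.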