Popular Content

Showing content with the highest reputation since 07/15/2018 in all areas

  1. 3 points
    Please add the option to alert on a "no data" condition to the instance-level Alert Tuning configuration dialog. We don't want to generate "no data" alerts for everything, and we don't want to split the data sources (extra maintenance when updating), so it would be easier to have this as an instance-level override.
  2. 1 point
    Hello all! I just recently created a monitor to track the number of seconds the secondary replicas are behind the primary, and I figured I would share my solution. This displays the span of time in which past transactions could be lost in the event of a failover. It assumes the secondaries are asynchronous. We are using SQL 2016; I do not know if this will work on earlier versions of SQL. I created a datasource and set the "Applies to" to the server in question. I set the collector to "SCRIPT", set the datasource to multi-instance, and enabled Active Discovery. I used JDBC as the discovery method and Instance List as the discovery type. The details are as follows:
    Connection String: jdbc:sqlserver://##HOSTNAME##;IntegratedSecurity=true;
    SQL Statement (this query gets the name of each secondary replica):
        SELECT CS.replica_server_name
        FROM sys.availability_groups_cluster AS C
        INNER JOIN sys.dm_hadr_availability_replica_cluster_states AS CS
            ON CS.group_id = C.group_id
        INNER JOIN sys.dm_hadr_availability_replica_states AS RS
            ON RS.replica_id = CS.replica_id
        WHERE RS.Role_desc <> 'PRIMARY'
    I wrote a PowerShell script that actually pulls the data I need from the server itself:
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
        $SqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server("##HOSTNAME##")
        $SqlServer.AvailabilityGroups["your AG name here"].DatabaseReplicaStates |
            Where-Object {$_.IsLocal -eq $false -and $_.AvailabilityReplicaServerName -eq "##WILDVALUE##" -and $_.AvailabilityDatabaseName -eq "your database name here"} |
            Select-Object -ExpandProperty EstimatedDataLoss
    In our environment there is only one database on the server that we are concerned with, so I have not explored monitoring multiple databases. This method pulls the same data that is in the Estimated Data Loss (seconds) column of the Availability Groups dashboard in SQL Management Studio.
    In the event you wanted to track multiple databases on the secondaries, you would likely have to create a separate datasource for each secondary server you want to track. Additionally, you would have to adjust the JDBC query to enumerate the databases on the server, and the PowerShell script accordingly. I hope this helps someone. Thanks, Kyle
  3. 1 point
    I got this up and running in my tenant. We put a lot of restrictions in Azure, so I had to use my Global Admin to set up the app. Not sure if it requires that high a privilege, but it got the job done.
  4. 1 point
    Please make the number items on the Big Number widget hyperlinks to the instances from which the widget's datapoints are configured. This would save having to go into the widget config to check where the value is coming from. Our operators instinctively expect to be able to click the item and be taken to the source of the datapoint.
  5. 1 point
    (BTW, re SNMP, I ended up implementing an uptime-via-SNMP poll just to alert if there is no SNMP response.)
  6. 1 point
    WALDXL - Download Speed This datasource runs a PowerShell script that downloads a 10MB file and then calculates the speed in Mbps at which it was downloaded. CAUTION: this datasource will download a 10MB file for every Windows machine specified in the "Applies to" field (default is not applied), every poll (default is 20 minutes); depending on your environment this could raise your monthly ISP bill, specifically if your ISP speeds ramp up when needed. I would recommend applying this to: hasCategory("speed") and isWindows() Then of course you just need to add the system property "speed" to any Windows machine you want to monitor download speed on.
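    The speed figure the datasource reports boils down to bits over seconds; a minimal sketch of that conversion (a hypothetical helper, not the WALDXL script itself):

```python
def mbps(bytes_downloaded: int, elapsed_seconds: float) -> float:
    """Convert a measured download (bytes, seconds) into megabits per second."""
    bits = bytes_downloaded * 8
    return bits / elapsed_seconds / 1_000_000

# A 10 MB (10 * 1024 * 1024 byte) file fetched in 4 seconds works out
# to roughly 20.97 Mbps.
print(round(mbps(10 * 1024 * 1024, 4.0), 2))
```

    Timing the transfer with a monotonic clock around the download call, rather than wall-clock time, keeps the measurement stable if the system clock adjusts mid-poll.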
  7. 1 point
    Please move the Manage gear icon to the left of the Users list view to be consistent with the rest of the UI. It also irritates and confuses our users when the icon is hidden because the list view overflows (due to items where the Roles column has multiple entries), since it's on the right-hand side.
  8. 1 point
    Often when an alert pops up, I find myself running some very common troubleshooting tools to quickly gather more info. It would be nice to get that info quickly and easily without having to go to other tools when an alert occurs. For example - right now, when we get a high-CPU alert, the first thing I do is run pslist -s \\computername (PSTools are so awesome) and psloggedon \\computername to see who's logged in at the moment. I know it's possible to create a datasource to discover all active processes and retrieve CPU/memory/disk metrics for a given process, but the processes on a given server might change pretty frequently, so you'd have to run active discovery frequently. It just doesn't seem like the best way, and most of the time I don't care what's running on the server and only need to know "in the moment." A way to run a script via a button for a given datasource would be a really cool feature. Maybe on the datasource you could add a field to hold a "gather additional data" or metadata script; the script could then be invoked manually on an alert or datasource instance. I.e., when an alert occurs, you can click a button in the alert called "gather additional data" or something, which would run the script and produce a small box or window with the output. The ability to run it periodically (every 15 seconds or 5 minutes, etc.) would also be useful. This would also give a NOC the ability to troubleshoot a bit more or provide some additional context around an alert, without everyone having to know a bunch of tools or have administrative access to a server.
  9. 1 point
    Could I request collection intervals of longer than 1 hour, please? I've had two use cases for this in the last week. The first is a PowerShell script check we need to run just once a day. The check outputs files as well, so running it more often than necessary takes up unnecessary storage space on the collector. The second case is where we SNMP-check a value on a device; each time a security update is applied to the device the value is incremented, and security updates are released once a week. We want to generate an alert if the update hasn't been applied. I have used a delta check for this, but I only want to alert if the value hasn't incremented in 7 days. At the moment the maximum polling interval of 1 hour and the maximum of 60 consecutive polls restrict me to just 60 hours. A higher maximum polling interval would allow me to fulfill both these requirements.
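    Until longer intervals exist, one workaround is to keep hourly polls and do the 7-day comparison inside the script itself; a rough sketch of that check (the function name and sample layout are illustrative, not LogicMonitor API):

```python
from datetime import datetime, timedelta

def incremented_within(samples, window=timedelta(days=7)):
    """True if the counter rose at least once inside the trailing window.

    samples: (timestamp, value) pairs, oldest first - e.g. hourly SNMP polls.
    """
    if not samples:
        return False
    cutoff = samples[-1][0] - window
    recent = [value for ts, value in samples if ts >= cutoff]
    return len(recent) >= 2 and recent[-1] > recent[0]
```

    If this returns False, the script would emit a non-zero datapoint for an ordinary threshold to alert on, sidestepping the 60-consecutive-polls cap.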
  10. 1 point
    If a complex datapoint could inject this info into the alert template, now that'd be awesome
  11. 1 point
    @mnagel that's a good point. OpsNotes would be great for recording that an alert was generated and which processes were running at the time. I pieced together a small PowerShell script that uses perfmon (so we can get the % without any additional calculations). You could convert this to Groovy with a WMI query and use a complex datapoint with a Groovy calculation: if the CPU datapoint is over x, then calculate and post the OpsNote to the device. Obviously this would need to be adjusted to fit LM wildcards and parameters.
        $Cred = Get-Credential  # or: New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $Pass
        $HostList = @('server1','server2')
        foreach ($CurrHost in $HostList) {
            if (Test-Connection -Cn $CurrHost -BufferSize 16 -Count 1 -ea 0 -Quiet) {
                # Query the remote host (the original ran gwmi against the local machine)
                Get-WmiObject Win32_PerfFormattedData_PerfProc_Process -ComputerName $CurrHost -Credential $Cred |
                    ForEach-Object { if ($_.PercentProcessorTime -gt 1) { $_.Name + " " + $_.PercentProcessorTime } }
            }
        }
    I've got a conference this week where I'll have some time, so I may work on this. One of the biggest asks from our engineers is capturing the processes that are running when the alert triggers; most of the time we miss it.
  12. 1 point
    I see that you've added this to GitHub. I've created two PRs to address issues I've been having. I'm assuming the Public directory scripts are "bundled" when releases are created and uploaded to the PowerShell Gallery.
  13. 1 point
    Let's start with an example: I have a router which has many ports, and I want to put some ports into SDT. I have two options: put the ports into SDT one by one, or put SDT on the whole group and remove some ports from it. The current logic is: choose a device, then apply what you want to do to it. It would be good to also have it the other way around (i.e. clicking through a tree structure) - first I choose what I want to do, and then I select where to apply it. It could speed up some tasks...
  14. 1 point
    96D2GG A revamped version of the existing RabbitMQ Queue datasource with the following improvements:
    1. Performance (the biggest difference) - instead of the collector gathering all stats for all queues each polling period and then parsing the data for only the queue it is looking to monitor, it makes a specific API call for the queue attached to the instance. This drastically reduces the collection time as well as the strain on the collector itself.
    2. Fixed datapoint "avgAckEgressRate", which incorrectly pointed to the ingress rate, meaning ingress and egress would always show the exact same value.
    3. A number of overview graphs added.
    4. Dynamic grouping of instances.
    5. Queues with ":" in their name can now be discovered and monitored properly.
    6. Nine additional datapoints added.
    This datasource was renamed to prevent it from overwriting any existing data you may have already collected with the default datasource. After enabling and testing this datasource, I would recommend disabling the original.
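    For reference, the per-queue call in point 1 corresponds to RabbitMQ's management API endpoint /api/queues/{vhost}/{name}, and percent-encoding the path segments is what makes names containing ":" (point 5) resolve; a sketch of the URL construction (15672 is the management plugin's default port; the hostname and queue name are hypothetical):

```python
from urllib.parse import quote

def queue_url(host: str, vhost: str, queue: str) -> str:
    """Build the RabbitMQ management API URL for one specific queue.

    quote(..., safe="") percent-encodes every reserved character, so the
    default vhost "/" becomes %2F and a ":" in a queue name becomes %3A.
    """
    return "http://%s:15672/api/queues/%s/%s" % (
        host, quote(vhost, safe=""), quote(queue, safe=""))
```

    Fetching that URL returns stats for just the one queue, which is why the collector no longer has to pull and parse the full queue list each poll.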
  15. 1 point
    Hate to bring up a dead thread, but this doesn't work when you need the actual datapoint value. In our case there is a range that can be returned - say 1-7. Values 2-5 are "OK"; 1, 6 and 7 are bad, but each of those values tells us what specifically is wrong.
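    In other words, the alert needs a value-to-meaning map rather than a bare threshold; a toy sketch of the 1-7 range described above (the fault labels are made up for illustration):

```python
# Hypothetical meanings for the bad return values; 2-5 are healthy.
FAULTS = {1: "not started", 6: "degraded", 7: "offline"}

def describe(value: int) -> str:
    """Translate a raw datapoint value into the status it encodes."""
    if 2 <= value <= 5:
        return "OK"
    return FAULTS.get(value, "unknown value %d" % value)
```

    Surfacing the description (not just pass/fail) in the alert message is the part a plain threshold can't do.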
  16. 1 point
    Hi All, We have really been enjoying the Remote Management feature of LogicMonitor. For sites that we don't have a direct interconnect with, it's great being able to quickly SSH onto our devices to make adjustments or check config without having to open a separate VPN tunnel. However, with HTTP/HTTPS management becoming common for firewalls, controllers, routers, etc., I feel there is a huge opportunity for LogicMonitor to fit almost every management use case by implementing HTTP/HTTPS remote session functionality in the same way the RDP and SSH remote sessions work. We as a company would primarily use this feature to help manage networking equipment, but the functionality would extend to printers, IP cameras, security systems, phone systems, UPSs and many more. Let me know your thoughts. Thanks, Will.
  17. 1 point
    Don't know if anyone else noticed, but MS released a pretty slick script that enables remote WMI access without admin rights. I have done a brief test with LM and it seems to be working well. https://blogs.technet.microsoft.com/askpfeplat/2018/04/30/delegate-wmi-access-to-domain-controllers/ That's the article. I created an AD group instead of a user to delegate to, and I put the LM collector service account in that group. Everything else I've followed as documented. I haven't tested anything else, but this alone is a huge step in the right direction.
  18. 1 point
    Please add an option to hide the Recently Deleted view from users with less than manage rights.
  19. 1 point
    We would like the ability to generate alarms on events in the vSphere event log. I really can't believe this isn't in place already.
  20. 1 point
    NOC is an acronym for Network Operations Center. Heads-up monitors are typically called NOC monitors, so you can see why the widget being called a NOC widget is useful for those types of dashboards.
  21. 1 point
    We are a global company with resources in Minnesota, New Jersey, Australia, Ukraine, and India, all using the LogicMonitor toolset. It would be incredibly useful to be able to set the timezone at the user level instead of only at the company level.
  22. 1 point
    Useful for inventory, auditing, and auto-grouping. Displays a list of all installed Windows Features, separated by commas. Example below.
    auto.winfeatures [Active Directory Lightweight Directory Services, .NET Framework 3.5.1 Features, Telnet Client, Remote Server Administration Tools, .NET Framework 3.5.1, Role Administration Tools, AD LDS Snap-Ins and Command-Line Tools, AD DS and AD LDS Tools, Active Directory module for Windows PowerShell]
    WMN9DN
  23. 1 point
    Code is TXL3W9. This DataSource provides instances for each of the network adapters, including the following Instance Level Properties:
        auto.TcpWindowSize
        auto.MTU
        auto.MACAddress
        auto.IPSubnet
        auto.IPAddress
        auto.DNSHostName
        auto.DNSDomain
        auto.DefaultIPGateway
        auto.SettingID
        auto.Description
  24. 1 point
    Our NOC is complaining that there are a lot of transient no-data alarms that are hard for the Ops teams to troubleshoot. Please allow administrators to set the number of consecutive no-data polls required before alarming, to decrease alert noise and only engage the Ops team for sustained issues.
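    The suppression being requested amounts to a trailing-streak check over recent polls; a minimal sketch (the threshold of 3 is just an example):

```python
def should_alert(polls, threshold=3):
    """Alert only after `threshold` consecutive trailing polls had no data.

    polls: values newest-last, with None standing in for a 'no data' poll.
    """
    streak = 0
    for value in reversed(polls):
        if value is not None:
            break
        streak += 1
    return streak >= threshold
```

    A single transient gap stays quiet, while three misses in a row would raise the alarm the Ops team actually needs to chase.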
  25. 1 point
    If you choose to use a netscan and want to rename your devices to the system name using ##SYSTEMNAME##, the device must respond to SNMP first. The problem is LogicMonitor uses WMI exclusively for Windows, SNMP exclusively for most network equipment, and VMware's API for hosts. So in my case I do not have SNMP configured everywhere. Now let's say you are scanning 5 full /24s and the netscan puts these devices in your portal. I am stuck having to go to the Info tab on each one, locate the system.sysname (if present), and then click Manage and rename the device. With 5 full /24s this is a massive number of devices to have to touch manually. Netscans have to be able to use the default method of connecting to each device, and ##SYSTEMNAME## must come from that, depending on the device it is trying to connect to. Making us rely on just SNMP is not feasible.
  26. 1 point
    With today's technology it should be easy to remove the LME#########, LMI######## and LMC####### codes from the subject line of email alerts. To keep replies working for ACK/SDT, the code could be moved to the message body, hidden in properties, or handled by any method other companies use to accomplish the same thing. Even moving it to the end of the subject line would help. Right now it takes up the majority of the visible subject line when viewing your emails.
  27. 1 point
    All email alerts (including the custom email alerts which have no tokens such as ##HOSTNAME##) come in with a subject line such as "LMD20057 VROL Alert". I would like to see simply "VROL Alert" in this instance. Is there a way the LMD##### part can be removed? The number seems to be an arbitrary 3- to 5-digit number in no particular sequence. Also wanted to show support for this idea here, since acking the alert creates additional PagerDuty alerts: https://support.logicmonitor.com/entries/55630034-Disable-Ack-cleared-emails-for-specific-stages-
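    What both of the last two requests amount to is relocating the ticket code; applied to the LMD20057 example above, the transformation would look something like this (a sketch; the regex assumes the LME/LMI/LMC/LMD prefixes mentioned are the full set):

```python
import re

def move_code_to_end(subject: str) -> str:
    """Move a leading LogicMonitor ticket code to the end of the subject line."""
    m = re.match(r"^(LM[EICD]\d+)\s+(.*)$", subject)
    if not m:
        return subject  # no leading code; leave the subject untouched
    return "%s (%s)" % (m.group(2), m.group(1))
```

    The code stays machine-readable for ACK/SDT reply parsing while the human-relevant text leads the subject.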