Kevin Ford

LogicMonitor Staff



  1. I've updated the script to reflect changes made since the original post. It now includes more examples for handling different alerts (including restarting a Windows service) and a helper function for forwarding syslogs if needed. Heaven forbid you ever get such an alert storm, but I tested triggering 500 alerts/minute to see if the script could handle them all and it successfully processed all of them within a second of the collector performing its regular 60-second External Alerting check. For reference, the test was simply logging each alert to a file using the script's LogWrite function. More complex response actions will likely require additional overhead at that scale so please be sure to tune your alerts & actions appropriately.
  2. LogicMonitor's Map widgets are a great and easy way to plot resources/groups geographically, including their status. A question that comes up occasionally is whether it's possible to show weather information on top of these maps. While there's currently no native option to show weather on a Map widget, it is possible to inject a weather layer onto an existing map with a bit of JavaScript. Below is a link to a sample dashboard that can insert various types of weather info onto Map widgets. Simply save the linked JSON file to your local workstation, then in your LogicMonitor portal go to Dashboards and click Add > From File. Dynamic_Weather_Overlay.json
The magic happens in JavaScript embedded in the source of the Text widget. Feel free to explore the source code by entering the Text widget's Configure dialog and clicking the 'Source' button. In typical overkill fashion, I included the option for several different types of weather information. The script looks for the following text (regardless of case) in the Map widget's title and adds the appropriate weather layer:
- "Radar" or "Precip"
- "NEXRAD Base"
- "NEXRAD Echo Tops"
- "MRMS"
- "Temperature" (API key required)
- "Wind Speed" (API key required)
- "Cloud Cover" or "Satellite" (API key required)
Prerequisites: If you want to use one of the map types noted above as needing an API key (the other types use free APIs that don't require a key), you'll need to register for a free account on OpenWeatherMap. Once you've obtained an API key, just add a new dashboard token named 'OpenWeatherAPIKey' and paste your key into its value field. Alternatively, you can hard-code the key directly in the 'openWeatherMapsAPIKey' variable near the top of the script. The weather overlays should auto-update when the widgets perform their regular timed refresh. For instance, new radar imagery is made available every 10 minutes and will update automatically.
Weather sources currently defined within this script:
- Excellent source of global weather imaging data. Updates approx. every 10 minutes. Used by the script for radar/precipitation maps.
- Open Geospatial Consortium services hosted by Iowa State University - an excellent free source of weather data. Since it sources data from the US National Weather Service, its data covers just the US and Canada. Used by the script for NEXRAD and MRMS data.
- Good source for some weather data such as wind speed, temperature, and cloud cover. Requires use of an API key, which is available for free.
Known Issues: When switching to a different dashboard containing a Map widget, it's possible weather may still be visible on the new dashboard. If that happens, just refresh the page.
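For anyone curious how an overlay like this is assembled, weather tile services generally work by serving one PNG image per map tile. As a standalone illustration (this is not the widget's actual code), here's a hypothetical helper that builds OpenWeatherMap tile URLs for a given layer, zoom, and tile coordinate, following OpenWeatherMap's documented tile URL format:

```javascript
// Hypothetical helper sketching how a weather overlay layer is fed by map tiles.
// OpenWeatherMap serves PNG tiles per layer at:
//   https://tile.openweathermap.org/map/{layer}/{z}/{x}/{y}.png?appid={key}
// Layer names like 'precipitation_new', 'temp_new', 'wind_new', and 'clouds_new'
// come from OpenWeatherMap's Weather Maps documentation.
function weatherTileUrl(layer, zoom, x, y, apiKey) {
  const validLayers = ['precipitation_new', 'temp_new', 'wind_new', 'clouds_new'];
  if (!validLayers.includes(layer)) {
    throw new Error('Unknown weather layer: ' + layer);
  }
  return `https://tile.openweathermap.org/map/${layer}/${zoom}/${x}/${y}.png?appid=${apiKey}`;
}

// A map library would call this once per visible tile. For example, with the
// Google Maps JavaScript API's ImageMapType (sketch only):
// new google.maps.ImageMapType({
//   getTileUrl: (coord, zoom) => weatherTileUrl('temp_new', zoom, coord.x, coord.y, key)
// });
```

The real widget varies the layer based on the Map widget's title, but the tile-URL construction is the core of any overlay like this.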
  3. There have been a few different variations of EventSources for ingesting alerts via email over the years. I recently needed some additional functionality, such as dynamic property replacement in the subject line, so I created a new variation that I'm sharing here in case it's useful to others. At its most basic, the EventSource checks an IMAP inbox for unseen messages that have specific text in the subject line. If a match is found, it captures the body of the email in a new alert associated with the host/resource in LogicMonitor where the properties are defined. By default, processed emails are marked as read but can optionally be flagged for deletion. Because the EventSource supports dynamic replacement of a property value in the subject line, the module can be associated with multiple LogicMonitor resources and alert based on matching hostnames in incoming email subjects. For example, if you have the 'email.subject' property set to "Email Alert on ##system.hostname##" and an email is received with the subject line "Email Alert on host1", it would appear as an alert on the 'host1' resource in LogicMonitor. LM Exchange locator code: AHDXND
Required properties:
- The address of the IMAP server.
- email.user: Username the module will use to log in & check for new emails.
- email.pass: Password for the email user.
Optional properties:
- email.subject: The email subject to search for. This can include a property name (example: "##system.hostname##") for dynamic replacement. Default: "Email Alert".
- imap.type: IMAP security type (SSL or TLS).
- email.deleteProcessed: Whether to attempt to delete processed emails (versus just marking them as read). Default: false. NOTE: auto-deleting processed email may not work on Gmail due to Google's non-standard handling of IMAP.
- email.folder: Inbox sub-folder to monitor (example: "Inbox/Errors"). Default: "Inbox".
Below is an example email regarding a specific host...
This is how the resulting alert displays on that host in LogicMonitor...
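The dynamic subject matching described above boils down to two steps: substitute the resource's property values into the subject template, then do a case-insensitive comparison against each incoming subject. Here's a hedged sketch of that logic in JavaScript (the EventSource itself is not JavaScript; the function and property names here are illustrative):

```javascript
// Illustrative sketch of the "dynamic property replacement" matching.
// Takes a subject template containing ##property.name## tokens and a map of
// the resource's property values; returns a case-insensitive subject matcher.
function buildSubjectMatcher(template, props) {
  // Replace each ##property.name## token with that property's value;
  // unknown tokens are left as-is (matching the gap-preserving behavior
  // you'd want during troubleshooting)...
  const resolved = template.replace(/##([\w.]+)##/g, (match, name) =>
    name in props ? props[name] : match);
  // Match incoming subject lines regardless of case...
  return (subject) => subject.toLowerCase().includes(resolved.toLowerCase());
}

// Example: a resource with system.hostname = "host1" matches its own emails...
const matches = buildSubjectMatcher('Email Alert on ##system.hostname##',
                                    { 'system.hostname': 'host1' });
// matches('RE: Email Alert on HOST1') is true; matches('Email Alert on host2') is false.
```

This is why the same module can be applied to many resources at once: each resource resolves the template with its own hostname, so each only alerts on its own emails.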
  4. @mnagel As a temporary workaround, take a look at my post about dynamic dashboards. It's essentially a widget that allows choosing & changing tokens on-the-fly from a drop-down directly on the dashboard. The current version of my widget is entirely dynamic - meaning it queries groups, resources, & instances to populate the menus based on how you configure it - but my original version was based on static options similar to what I think you're asking about. Let me know if that'd be of interest and I'll be happy to help.
  5. Captures virtual interface throughput metrics for AWS Direct Connects. Locator ID: TMHHWX
  6. @drb As an immediate option I've just posted a datasource to the Exchange - TMHHWX - that'll discover & monitor virtual interfaces for Direct Connects. Please give it a shot and let us know if you see any problems with it.
  7. @SenthilK Thanks to some assistance from Andrey & others, I've just posted a datasource to the Exchange - TMHHWX - that'll discover & monitor virtual interfaces for Direct Connects. Please give it a shot and let us know if you see any problems with it.
  8. Overview
LogicMonitor has a number of built-in report types that can be customized and sent out on a scheduled basis, including the powerful ability to turn any dashboard into a dynamic report. A common question for cloud-based services like LogicMonitor, however, is how to incorporate hosted data with information from other sources. An example may be a report that combines inventory data & monitoring metrics from LogicMonitor with incident data from systems such as ServiceNow. With Microsoft Power BI's ability to easily parse and ingest JSON data directly from web services, it's possible to create reports directly from LogicMonitor's REST-based APIs without the need for intermediary automation or databases. Below are some basic steps to start pulling data directly from your LogicMonitor portal into a Power BI report. This isn't meant to be a comprehensive reference, though the concepts introduced here can be used for other report types generated directly from LogicMonitor data.
Prerequisites:
- The Microsoft Power BI Desktop software and knowledge of its usage for building reports. If needed, the software can be downloaded from the following link:
- A login for your LogicMonitor portal that has at least read-only permissions for the information to be included on the report.
- Basic familiarity with LogicMonitor's REST APIs. The full API reference can be found at:
Adding a LogicMonitor REST API as a Power BI Source
For this example we will use LogicMonitor's "Get Devices" API method to build a simple inventory report. Documentation for the "Get Devices" method and its options are available at:
1. Launch Microsoft Power BI Desktop.
2. Click the Get Data button, either on the intro dialog or on the toolbar ribbon.
3. On the Get Data dialog, search for the "Web" data type that's located under the "Other" section. Once "Web" is selected, click the Connect button.
4. Enter the URL of the REST method, including optional query parameters.
For the example using the "Get Devices" method, the URL used was the following (replace "[portalname]" to match your own LogicMonitor portal's URL):
https://[portalname].logicmonitor.com/santaba/rest/device/devices?size=1000&fields=autoProperties,displayName,description,id,link,hostStatus,name,systemProperties,upTimeInSeconds
This example URL calls the "Get Devices" method (/device/devices) and passes optional parameters specifying to return up to 1,000 records, along with the properties/fields we want for each device. Please refer to the "Get Devices" method's documentation for more information about the available parameters and options. For instance, if your query has more than 1,000 results available (the maximum returned in a single REST call), you may have to code a loop in Power BI that makes multiple calls to paginate through the available results.
5. Power BI will then try to access that URL. After a moment it will ask how to authenticate with the REST service. For this example we'll use the Basic authentication method. Enter a valid LogicMonitor username and password that Power BI will use to access your portal's web services and click the Connect button. (NOTE: as mentioned in LogicMonitor's REST documentation, the option for "Basic" authentication may be removed at some point in the future.)
6. Power BI will then authenticate with LogicMonitor's REST service. After a moment you'll see the initial results from your REST query. Click the "Record" link on the result's 'data' row.
7. Next, click the "List" link on the 'items' row to expand the list of records.
8. Click the To Table button.
9. Keep the default conversion options and click OK.
10. Click the small icon in the column header to expand the results.
11. Click OK on the column selection dialog.
12. Click the Close & Apply button to apply the changes from the query builder.
You've now added the REST method as a dynamic data source in Power BI. At this point you can design the report to suit your specific needs.
If you want to browse and manipulate the data that was brought into the model, click the Data button (looks like a small grid) on the left-hand toolbar.
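The pagination mentioned in step 4 works by re-issuing the same request with an increasing offset until all records are retrieved. As a language-neutral illustration (shown in JavaScript rather than Power Query M, and assuming the v1 API's documented 'size' and 'offset' query parameters), the paged URLs could be computed like this:

```javascript
// Sketch of paging through the "Get Devices" method when more than 1,000
// results exist. LogicMonitor's v1 REST API caps a single call at 1,000
// records, so we step the 'offset' parameter by the page size.
// 'portal' and 'total' are placeholders supplied by the caller.
function pagedDeviceUrls(portal, total, pageSize = 1000) {
  const urls = [];
  for (let offset = 0; offset < total; offset += pageSize) {
    urls.push(
      `https://${portal}.logicmonitor.com/santaba/rest/device/devices` +
      `?size=${pageSize}&offset=${offset}`
    );
  }
  return urls;
}

// Example: 2,500 devices require three calls (offsets 0, 1000, 2000)...
// pagedDeviceUrls('myportal', 2500)
```

In Power BI you'd implement the same loop in the query editor (Power Query M supports generating a list of offsets and combining the results), but the offset arithmetic is identical.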
  9. Below is a PowerShell script that's a handy starting point if you want to trigger actions based on specific alert types. In a nutshell, it takes a number of parameters from each alert and has a section of if/else statements where you can specify what to do based on the alert. It leverages LogicMonitor's External Alerting feature, so the script runs local to whatever Collector(s) you configure it on. I included a couple of example actions for pinging a device and for restarting a service. It also includes some handy (optional) functions for logging as well as attaching a note to the alert in LogicMonitor.
NOTE: this script is provided as-is and you will need to customize it to suit your needs. Automated actions are something that must be approached with careful planning and caution!! LogicMonitor cannot be responsible for inadvertent consequences of using this script.
If you want to try it out, here's how to get started:
1. Update the variables in the appropriate section near the top of the script with optional API credentials and/or log settings. Also change any of the if/elseif statements (starting around line #95) to suit your needs.
2. Save the script onto your Collector server. I named the file "alert_central.ps1" but feel free to call it something else. NOTE: Store the script somewhere outside the Collector's directory structure to avoid the possibility of it being overwritten during Collector updates.
3. In your LogicMonitor portal go to Settings, then External Alerting.
4. Click the Add button.
5. Set the 'Groups' field as needed to limit the actions to alerts from any appropriate group of resources. (Be sure the group's devices would be reachable from the Collector running the script!)
6. Choose the appropriate Collector in the 'Collector' field.
7. Set 'Delivery Mechanism' to "Script".
8. Enter the path to where you saved the script (in step #2) in the 'Script' field (ex. "c:\scripts\alert_central.ps1").
9.
Paste the following into the 'Script Command Line' field (NOTE: if you add other parameters here then be sure to also add them to the 'Param' line at the top of the script): "##ALERTID##" "##ALERTSTATUS##" "##LEVEL##" "##HOSTNAME##" "##SYSTEM.SYSNAME##" "##DSNAME##" "##INSTANCE##" "##DATAPOINT##" "##VALUE##" "##ALERTDETAILURL##" "##DPDESCRIPTION##"
10. Click Save.
This uses LogicMonitor's External Alerting feature, so there are some things to be aware of:
- The Collector(s) oversee the running of the script, so be conscious of any additional overhead the script actions may cause.
- It could take up to 60 seconds for the script to trigger from the time the alert comes in.
- This example is a PowerShell script, so it's best suited for Windows-based Collectors, but it could certainly be re-written as a shell script for Linux-based Collectors.
Here's a screenshot of a cleared alert where the script auto-restarted a Windows service and attached a note based on its actions. Below is the PowerShell script:

# ----
# This PowerShell script can be used as a starting template for enabling
# automated remediation for alerts coming from LogicMonitor.
# In LogicMonitor, you can use the External Alerting feature to pass all alerts
# (or for a specific group of resources) to this script.
# ----
# To use this script:
# 1. Update the variables in the appropriate section below with optional API and log settings.
# 2. Drop this script onto your Collector server under the Collector's agent/lib directory.
# 3. In your LogicMonitor portal go to Settings, then click External Alerting.
# 4. Click the Add button.
# 5. Set the 'Groups' field as needed to limit the actions to a specific group of resources.
# 6. Choose the appropriate Collector in the 'Collector' field.
# 7. Set 'Delivery Mechanism' to "Script"
# 8. Enter "alert_central.ps1" in the 'Script' field.
# 9.
Paste the following into the 'Script Command Line' field:
# "##ALERTID##" "##ALERTSTATUS##" "##LEVEL##" "##HOSTNAME##" "##SYSTEM.SYSNAME##" "##DSNAME##" "##INSTANCE##" "##DATAPOINT##" "##VALUE##" "##ALERTDETAILURL##" "##DPDESCRIPTION##"
# 10. Click Save.

# The following line captures alert information passed from LogicMonitor (defined in step #9 above)...
Param ($alertID = "", $alertStatus = "", $severity = "", $hostName = "", $sysName = "", $dsName = "", $instance = "", $datapoint = "", $metricValue = "", $alertURL = "", $dpDescription = "")

###--- SET THE FOLLOWING VARIABLES AS APPROPRIATE ---###

# OPTIONAL: LogicMonitor API info for updating alert notes (the API user will need "Acknowledge" permissions)...
$accessId = ''
$accessKey = ''
$company = ''

# OPTIONAL: Set a filename in the following variable if you want specific alerts logged. (example: "C:\lm_alert_central.log")...
$logFile = ''

# OPTIONAL: Destination for syslog alerts...
$syslogServer = ''

###############################################################
## HELPER FUNCTIONS (you likely won't need to change these) ##

# Function for logging the alert to a local text file if one was specified in the $logFile variable above...
Function LogWrite ($logstring = "") {
    if ($logFile -ne "") {
        $tmpDate = Get-Date -Format "dddd MM/dd/yyyy HH:mm:ss"
        # Using a mutex to handle file locking if multiple instances of this script trigger at once...
        $LogMutex = New-Object System.Threading.Mutex($false, "LogMutex")
        $LogMutex.WaitOne() | out-null
        "$tmpDate, $logstring" | out-file -FilePath $logFile -Append
        $LogMutex.ReleaseMutex() | out-null
    }
}

# Function for attaching a note to the alert...
function AddNoteToAlert ($alertID = "", $note = "") {
    # Only execute this if the appropriate API information has been set above...
    if ($accessId -ne '' -and $accessKey -ne '' -and $company -ne '') {
        # Encode the note...
        $encodedNote = $note | ConvertTo-Json
        # API and URL request details...
        $httpVerb = 'POST'
        $resourcePath = '/alert/alerts/' + $alertID + '/note'
        $url = 'https://' + $company + '.logicmonitor.com/santaba/rest' + $resourcePath
        $data = '{"ackComment":' + $encodedNote + '}'
        # Get current time in milliseconds...
        $epoch = [Math]::Round((New-TimeSpan -start (Get-Date -Date "1/1/1970") -end (Get-Date).ToUniversalTime()).TotalMilliseconds)
        # Concatenate general request details...
        $requestVars_00 = $httpVerb + $epoch + $data + $resourcePath
        # Construct signature...
        $hmac = New-Object System.Security.Cryptography.HMACSHA256
        $hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
        $signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars_00))
        $signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
        $signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))
        # Construct headers...
        $auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
        $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
        $headers.Add("Authorization", $auth)
        $headers.Add("Content-Type", 'application/json')
        # Make request to add note...
        $response = Invoke-RestMethod -Uri $url -Method $httpVerb -Body $data -Header $headers
        # Change the following if you want to capture API errors somewhere...
        # LogWrite "API call response: $response"
    }
}

function SendTo-SysLog ($IP = "", $Facility = "local7", $Severity = "notice", $Content = "Your payload...", $SourceHostname = $env:computername, $Tag = "LogicMonitor", $Port = 514) {
    switch -regex ($Facility) {
        'kern'     {$Facility = 0 * 8 ; break }
        'user'     {$Facility = 1 * 8 ; break }
        'mail'     {$Facility = 2 * 8 ; break }
        'system'   {$Facility = 3 * 8 ; break }
        'auth'     {$Facility = 4 * 8 ; break }
        'syslog'   {$Facility = 5 * 8 ; break }
        'lpr'      {$Facility = 6 * 8 ; break }
        'news'     {$Facility = 7 * 8 ; break }
        'uucp'     {$Facility = 8 * 8 ; break }
        'cron'     {$Facility = 9 * 8 ; break }
        'authpriv' {$Facility = 10 * 8 ; break }
        'ftp'      {$Facility = 11 * 8 ; break }
        'ntp'      {$Facility = 12 * 8 ; break }
        'logaudit' {$Facility = 13 * 8 ; break }
        'logalert' {$Facility = 14 * 8 ; break }
        'clock'    {$Facility = 15 * 8 ; break }
        'local0'   {$Facility = 16 * 8 ; break }
        'local1'   {$Facility = 17 * 8 ; break }
        'local2'   {$Facility = 18 * 8 ; break }
        'local3'   {$Facility = 19 * 8 ; break }
        'local4'   {$Facility = 20 * 8 ; break }
        'local5'   {$Facility = 21 * 8 ; break }
        'local6'   {$Facility = 22 * 8 ; break }
        'local7'   {$Facility = 23 * 8 ; break }
        default    {$Facility = 23 * 8 }      # Default is local7
    }
    switch -regex ($Severity) {
        '^(ac|up)' {$Severity = 1 ; break }   # LogicMonitor "active", "ack" or "update"
        '^em'      {$Severity = 0 ; break }   # Emergency
        '^a'       {$Severity = 1 ; break }   # Alert
        '^c'       {$Severity = 2 ; break }   # Critical
        '^er'      {$Severity = 3 ; break }   # Error
        '^w'       {$Severity = 4 ; break }   # Warning
        '^n'       {$Severity = 5 ; break }   # Notice
        '^i'       {$Severity = 6 ; break }   # Informational
        '^d'       {$Severity = 7 ; break }   # Debug
        default    {$Severity = 5 }           # Default is Notice
    }
    $pri = "<" + ($Facility + $Severity) + ">"
    # Note that the timestamp is local time on the originating computer, not UTC.
    if ($(get-date).day -lt 10) { $timestamp = $(get-date).tostring("MMM d HH:mm:ss") } else { $timestamp = $(get-date).tostring("MMM dd HH:mm:ss") }
    # Hostname does not have to be in lowercase, and it shouldn't have spaces anyway, but lowercase is more traditional.
    # The name should be the simple hostname, not a fully-qualified domain name, but the script doesn't enforce this.
    $header = $timestamp + " " + $sourcehostname.tolower().replace(" ","").trim() + " "
    # Cannot have non-alphanumerics in the TAG field or have it be longer than 32 characters.
    if ($tag -match '[^a-z0-9]') { $tag = $tag -replace '[^a-z0-9]','' }   # Simply delete the non-alphanumerics
    if ($tag.length -gt 32) { $tag = $tag.substring(0,31) }                # ...and truncate at 32 characters.
    $msg = $pri + $header + $tag + ": " + $content
    # Convert message to array of ASCII bytes.
    $bytearray = $([System.Text.Encoding]::ASCII).getbytes($msg)
    # RFC3164 Section 4.1: "The total length of the packet MUST be 1024 bytes or less."
    # "Packet" is not "PRI + HEADER + MSG", and IP header = 20, UDP header = 8, hence:
    if ($bytearray.count -gt 996) { $bytearray = $bytearray[0..995] }
    # Send the message...
    $UdpClient = New-Object System.Net.Sockets.UdpClient
    $UdpClient.Connect($IP, $Port)
    $UdpClient.Send($ByteArray, $ByteArray.length) | out-null
}

# Empty placeholder for capturing any note we might want to attach back to the alert...
$alertNote = ""
# Placeholder for whether we want to capture an alert in our log. Set to true if you want to log everything.
$logThis = $false

###############################################################
## CUSTOMIZE THE FOLLOWING AS NEEDED TO HANDLE SPECIFIC ALERTS FROM LOGICMONITOR...

# Actions to take if the alert is new or re-opened (note: status will be "active" or "clear")...
if ($alertStatus -eq 'active') {
    # Perform actions based on the type of alert...

    # Ping alerts...
    if ($dsName -eq 'Ping' -and $datapoint -eq 'PingLossPercent') {
        # Insert action to take if a device becomes unpingable.
        # In this example we'll do a verification ping & capture the output...
        $job = ping -n 4 $sysName
        # Restore line feeds to the output...
        $job = [string]::join("`n", $job)
        # Add ping results as a note on the alert...
        $alertNote = "Automation script output: $job"
        # Log the alert...
        $logThis = $true

    # Restart specific Windows services...
    } elseif ($dsName -eq 'WinService-' -and $datapoint -eq 'State') {
        # List of Windows Services to match against. Only if one of the following is alerting will we try to restart it...
        $serviceList = @("Print Spooler","Service 2")
        # Note: The PowerShell "-Contains" operator is exact in its matching. Replace it with "-Match" for a looser match.
        if ($serviceList -Contains $instance) {
            # Get an object reference to the Windows service...
            $tmpService = Get-Service -DisplayName "$instance" -ComputerName $sysName
            # Only trigger if the service is still stopped...
            if ($tmpService.Status -eq "Stopped") {
                # Start the service...
                $tmpService | Set-Service -Status Running
                # Capture the current state of the service as a note on the alert...
                $alertNote = "Attempted to auto-restart the service. Its new status is " + $tmpService.Status + "."
            }
            # Log the alert...
            $logThis = $true
        }

    # Actions to take if a website stops responding...
    } elseif ($dsName -eq 'HTTPS-' -and $datapoint -eq 'CantConnect') {
        # Insert action here to take if there's a website error...
        # Example of sending a syslog message to an external server...
        $syslogMessage = "AlertID:$alertID,Host:$sysName,AlertStatus:$alertStatus,LogicModule:$dsName,Instance:$instance,Datapoint:$datapoint,Value:$metricValue,AlertDescription:$dpDescription"
        SendTo-SysLog $syslogServer "" $severity $syslogMessage $hostName "" ""
        # Attach a note to the LogicMonitor alert...
        $alertNote = "Sent syslog message to " + $syslogServer
        # Log the alert...
        $logThis = $true
    }
}

###############################################################
## Final functions for backfilling notes and/or logging as needed
## (you likely won't need to change these)

# Section that updates the LogicMonitor alert if 'alertNote' is not empty...
if ($alertNote -ne "") {
    AddNoteToAlert $alertID $alertNote
}

if ($logThis) {
    # Log the alert (only triggers if a filename is given in the $logFile variable near the top of this script)...
    LogWrite "$alertID,$alertStatus,$severity,$hostName,$sysName,$dsName,$instance,$datapoint,$metricValue,$alertURL,$dpDescription"
}
  10. One final note: Feel free to explore the selector widget's JavaScript code (accessed via steps 1 & 2 in my comment above). I tried to comment the code well to explain what each section does. I have no doubt that my code (especially the fetch statements) could be more efficient but it's functional. 🙂
  11. (Optional) Setting the API Credentials Inside the Code Instead of Dashboard Tokens
Setting API credentials in dashboard tokens may be a concern for some environments since there's no easy method for hiding those tokens. In those situations, you have the option of embedding the credentials directly in the selector widget's source code. To do so:
1. Edit the widget by clicking its down-chevron and choosing Configure. You'll find that it's just a standard Text widget containing some HTML and quite a bit of JavaScript.
2. Click the Source button on the text editor's toolbar.
3. Scroll down a bit until you get to the section defining the 'apiID' and 'apiKey' variables. Enter your ID and key inside the quotes of the respective variables, then click Save & Close at the bottom of the dialog.
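For orientation, the section referenced in step 3 presumably looks something like the following (variable names are from the post; the empty values are placeholders you'd fill in):

```javascript
// Near the top of the selector widget's JavaScript source (sketch only).
// Leave these empty to fall back to the 'apiID'/'apiKey' dashboard tokens,
// or paste credentials between the quotes to hard-code them instead...
var apiID = '';   // LogicMonitor API access ID
var apiKey = '';  // LogicMonitor API access key
```

Keep in mind that anyone with Manage rights to the widget can still open its source and read hard-coded values; this only hides them from read-only dashboard viewers.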
  12. Tokens are one of the most powerful aspects of LogicMonitor's dashboards. They make it possible to easily change the subject matter of an entire dashboard (or a group of dashboards). A question that comes up from time to time is whether it's possible to dynamically update a dashboard using a drop-down menu on the dashboard itself instead of having to edit the dashboard. While there are some exciting changes coming to LogicMonitor's dashboards in the future, I wanted to see if I could come up with an option that works today. The end result is a solution that allows dynamically choosing token values for groups, resources, and/or instances directly from the dashboard itself. It turned out to be a great demonstration of what's possible using LogicMonitor's API. Assuming your dashboard is designed to take advantage of tokens (like all the bundled dashboards are), this selector makes it easy to quickly flip between different contexts. Refer to the Creating Dashboards support article for more info on using dashboard tokens. Some good use-cases for this dynamic selector:
- Switching between different MSP customers.
- Flipping to different cloud subscriptions or vCenters.
- Flipping between facilities, datacenters, or environments (prod, dev, QA, etc.).
Below is a link to a sample dashboard that's a variant of LogicMonitor's bundled Windows dashboard but with two new widgets: a "Resource Selector" widget and a "Read Me" text widget that describes how the selector works. The resource selector is just a normal Text widget but with some JavaScript embedded that leverages the LogicMonitor API to populate the drop-down menus and update the dashboard's tokens as appropriate. Dynamic_Dashboard_-_Windows.json
To use the above dashboard, save the linked JSON file to your local workstation, then in your LogicMonitor portal go to Dashboards and click Add > From File. Configuring the widget can be done through dashboard tokens.
The tokens mentioned below will determine which dynamic drop-downs are displayed. For instance, if no 'defaultInstance' token is defined then the Instance drop-down won't appear. Refer to the token descriptions below for more details.
Required Tokens:
- 'defaultResourceGroup': a common token used by the widgets on the dashboard. The 'Resource Group' drop-down menu will update this token upon clicking Go.
- 'apiID' & 'apiKey': a LogicMonitor API ID and secret key used for REST API calls. Note: there's a place near the top of the Resource Selector's JavaScript source where you can optionally hard-code these two values instead of more openly exposing them in a token.
Optional Tokens:
- 'defaultResourceName': a common token used by widgets on a dashboard for specifying the resource(s) to show data for. The 'Resource' drop-down menu will update this token upon clicking Go. If the token isn't defined, the script will hide the 'Resource' menu under the assumption that it's not being used on this dashboard.
- 'defaultInstance': token used by widgets on a dashboard for specifying instances to show data for. The 'Instance' drop-down menu will update this token upon clicking Go. Note that the 'Instance' menu requires a named resource and a datasource to fetch instances for. If the 'defaultInstance' or 'defaultDatasource' tokens aren't defined on the dashboard, OR the 'Resource' menu is set to "*", then the 'Instance' menu will be hidden.
- 'dynamicGroupParentID': the numeric ID of the parent group you want to limit the dynamic group drop-down to. Defaults to the root group (ID 1) if not specified.
- 'defaultDatasource': name of a datasource that will be used for gathering instances that dynamically populate the 'Instance' drop-down menu.
- 'hideDynamicGroupDropdown', 'hideDynamicResourceDropdown', or 'hideDynamicInstanceDropdown': can be set to "true" to force the 'Resource Group', 'Resource', or 'Instance' drop-down (respectively) to be hidden. This allows the widget to be flexible for choosing a group, a resource, an instance, or any combination of them.
Enabling the Sample Dashboard
In order for the selector to work with your LogicMonitor portal's API, you'll need an API credential that has "Manage" rights to the dashboard (so the dashboard's tokens can be updated) plus "View" rights to any groups & devices you want in the drop-downs. You can learn about creating API credentials here:
Aside from a 'defaultResourceGroup' dashboard token, you'll need to add two tokens to the dashboard specifying the ID and key of the API login you created. You can set these by opening the dashboard's Manage dialog and adding two new tokens: 'apiID' and 'apiKey' (note: these can be added at a dashboard group-level as well). Other tokens you set on the dashboard will help determine the behavior of the selector widget. For instance, if you set a 'dynamicGroupParentID' token containing the numeric ID of a resource group, the selector will only show groups & resources under that parent group. Likewise, having a 'defaultResourceName' token will enable the Resource selector. Refer to the "Read Me" widget for full details on all the options.
So How Does This Work?
When the "Resource Selector" widget is first loaded, JavaScript embedded in its HTML source makes a call to the LogicMonitor REST API to get a list of items to populate options in the appropriate drop-downs. The widget will auto-populate parent-child menus as appropriate. For example, when a new group is chosen in the 'Resource Group' menu, the 'Resource' drop-down list will auto-update to reflect what's inside the newly chosen group. When the Go button is clicked, the widget's script does the following actions:
1.
Calls LogicMonitor's "Get Dashboard" REST method to get the current dashboard definition.
2. Loops through the dashboard's tokens retrieved during step #1 looking for 'defaultResourceGroup', 'defaultResourceName', and 'defaultInstance'. If found, it updates them in memory with the values chosen on the drop-downs (unless the respective field was set to be hidden).
3. Calls LogicMonitor's "Update Dashboard" REST method to save the new token values.
4. Refreshes the page to reflect the new values.
How To Use This On Other Dashboards
Since the selector widget was designed to leverage tokens, applying it to another dashboard is as simple as cloning the widget to another dashboard and configuring any appropriate dashboard tokens (if not already set at the group-level). Note: if you don't want to set your API ID and key as dashboard tokens, there's a spot where you can set them inside the selector widget's source code instead. That allows you to hide them, especially if the user only has read-only privileges for the dashboard, since they won't be able to edit the widget's configuration. I'll post a follow-up comment on where to do that.
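The token-update step of the Go-button flow can be sketched (independent of the widget's actual code) as a pure function that rewrites the dashboard's token list in memory before it's sent back via Update Dashboard. This assumes tokens come back from Get Dashboard as {name, value} objects, which matches the dashboard resource's widgetTokens shape:

```javascript
// Sketch of step 2 of the Go-button flow: merge the drop-down selections into
// the token list fetched from Get Dashboard, leaving other tokens untouched.
// 'selections' maps token names (e.g. 'defaultResourceGroup') to new values.
function applyTokenSelections(widgetTokens, selections) {
  return widgetTokens.map(token =>
    token.name in selections
      ? { ...token, value: selections[token.name] }  // update chosen token
      : token                                        // leave the rest alone
  );
}

// Example: flipping the dashboard to the 'Production' group while keeping
// the apiID token intact...
// applyTokenSelections(
//   [{ name: 'defaultResourceGroup', value: '*' }, { name: 'apiID', value: 'abc' }],
//   { defaultResourceGroup: 'Production' }
// )
```

The real widget then PUTs the full dashboard definition (with the updated token array) back to the Update Dashboard method and reloads the page, which is why a credential with Manage rights on the dashboard is required.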
  13. Background/use-case: While the ability to click-and-drag to move items around on the Resources tree is absolutely fantastic, we've had a few (but impactful) occurrences where administrators accidentally moved groups when their mouse shifted slightly while clicking, throwing their target into the group above/below it. While it's a simple mistake with a simple fix to undo, some accidental moves have gone unnoticed until users reported missed alerts due to alert filters/rules no longer matching.
The Ask: It would be nice to have an option on groups & devices to prevent the ability to drag-and-drop that specific item (i.e. lock it to its parent group). After a very brief look at the current UI code, it might be as simple as this option not setting the related drag-and-drop CSS classes on that node of the tree.