All Activity


  1. Today
  2. Sharing Dashboard views

    Would anyone have any VMware dashboards they'd be willing to share? Even if it's just a screenshot, or if it's not configured to work with dashboard tokens? Regards, Medi
  3. Last week
  4. File / Database Size Monitoring

    Yes!!
  5. File / Database Size Monitoring

    @WillFulmer - the *best* way to do this is probably going to be a PowerShell datasource (I have one that I need to clean up, and then I can share it out here). In the meantime, you can use the built-in UNC path monitoring to keep an eye on the database directory and track the total size of that folder. For example, I went to my Exchange server, selected 'Add Other Monitoring', and plugged in a UNC path to a hidden share that's accessible to the collector account, which yields the total folder size as a monitored instance. Give me a little bit of time to try and cook up the PowerShell version of an Exchange database size datasource, and I'll circle back and publish it here. Cheers, Kerry
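    The UNC-path approach above boils down to walking a directory tree and totaling file sizes. As a generic illustration only (this is not the LogicMonitor datasource itself, and the UNC path in the comment is a hypothetical placeholder), a sketch in Python:

```python
# Generic stand-in for the folder-size check described above: walk a
# directory (for the Exchange case this would be the UNC path to the
# database share, e.g. \\EXCH01\d$\Databases -- a placeholder name)
# and total the bytes of every file underneath it.
import os

def folder_size_bytes(root):
    """Return the total size in bytes of all files under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken links and the like
                total += os.path.getsize(path)
    return total
```

    A datasource would report this number on each poll so thresholds and trends can be applied to it.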
  6. Windows Services Check

    @joshlowit1 - the quickest and easiest way to accomplish what you're trying is probably the following:
    1. Find the "WinCitrixServices" DataSource that already exists in LogicMonitor - it automatically looks for all Windows services with "Citrix" in the name.
    2. Clone that DataSource, give it a new name, uncomment the AppliesTo, and change the filter to catch a different service name.
    3. Save it - you can leave the AppliesTo as isWindows() - and now it will look for the service or services you've specified on every Windows machine in your account.
    Cheers, Kerry
  7. instance equivalence groups

    @mnagel we're working on a feature for exactly this. At a high level, we'll let you group together instances across devices and then aggregate data across those instances and alert on the aggregate data. This should be helpful in clustered setups such as the example you've described, as well as in more ephemeral environments, where the aggregate data will provide historical data as devices and instances are added to and deleted from LogicMonitor.
  8. We regularly encounter situations with clustered resources where an alert will always be active on a standby device. For example, the default for Palo Alto firewall interfaces is to be operDown on the standby firewall. This leads to similar alerts on the connected switches. What I really care about is the status on the active member, but we get tons of alerts on the standby. You can't just disable them, as the standby may be active at some point. What is really needed (and again, this is a general issue -- Palo Alto is just one example) is the ability to group equivalent instances. I hoped the Cluster Alerts feature might help, but it is not even close to fine-grained enough. I want to group (in this example) interface pairs so that the alarm triggers only when both instances are down. This applies to many similar situations in real-life monitoring, and it is very painful to have to explain to our customers why this basic feature is missing. It is similar to the previously discussed device dependency issue, but different enough that I think it deserves its own focus. Thanks, Mark
  9. I have a device property that I would like to update every 15 minutes or so, because I have groups with auto-include rules that look for that property, and I need devices to move in and out of those groups on the fly. It would be great if we could set individual custom PropertySources to update on a more frequent basis. Currently I'm achieving this using the LogicMonitor REST API, which I have baked right into a datasource as a workaround - but I think this solution is messy. Thanks!
  10. Hey Joe, I'm not sure that this exactly meets your needs, but I think it's a good start. Basically, you can call the hostProps.toProperties() method, which returns the device's properties as key/value pairs that you can dig through and filter using regex. Something like this:

    def allProps = hostProps.toProperties()
    allProps.each {
        if (it ==~ /.*\.databases=.*/) {
            println it
        }
    }

    Let me know if this doesn't address what you're trying to accomplish.
  11. Introduction

    Postman is widely used for interacting with various REST APIs such as LogicMonitor's. However, there is no out-of-the-box support for the LMv1 authentication method, which we recommend as a best practice. This document describes how to configure Postman to use LMv1 authentication when interacting with our REST API.

    Overview

    Postman's pre-request script functionality provides the ability to generate the necessary Authorization header for LMv1 authentication. As the name suggests, the pre-request script runs immediately before the request is made to the API endpoint. We set the pre-request script at the collection level in Postman so that it runs automatically for every request that is part of the collection.

    The script requires three input parameters: a LogicMonitor API token (or ID), its associated key, and the full request URL. These parameters are made available to the script by creating a Postman environment and setting the values as environment variables. If you need to access multiple LogicMonitor accounts (portals), create a separate environment for each to store the applicable API and URL information. Since all API requests to a given account use the same base URL (https://<account>.logicmonitor.com/santaba/rest), it is convenient to store this as an environment variable.

    The output of the script is the value of the Authorization header. The script writes the header value to an environment variable, which is then inserted as the Authorization header value in the request.

    Instructions

    1. Download and install Postman.
    2. Launch Postman and create a new collection that will be used for all LogicMonitor API requests.
    3. In the create collection dialog, select the "Pre-request Scripts" section and paste in the following code.
// Get API credentials from environment variables
var api_id = pm.environment.get('api_id');
var api_key = pm.environment.get('api_key');

// Get the HTTP method from the request
var http_verb = request.method;

// Extract the resource path from the request URL
var resource_path = request.url.replace(/(^{{url}})([^\?]+)(\?.*)?/, '$2');

// Get the current time in epoch format
var epoch = (new Date()).getTime();

// If the request includes a payload, include it in the request variables
var request_vars = (http_verb == 'GET' || http_verb == 'DELETE') ?
    http_verb + epoch + resource_path :
    http_verb + epoch + request.data + resource_path;

// Generate the signature and build the Auth header
var signature = btoa(CryptoJS.HmacSHA256(request_vars, api_key).toString());
var auth = "LMv1 " + api_id + ":" + signature + ":" + epoch;

// Write the Auth header to the environment variable
pm.environment.set('auth', auth);

    4. Create a new environment. Create the environment variables shown below. You do not need to provide a value for the "auth" variable, since this will be set by the pre-request script. Be sure to use the api_id, api_key, and url values appropriate for your LogicMonitor account.
    5. Create a request and add it to the collection with the pre-request script. A sample request is shown below with the necessary parameters configured.
        1. Set the environment for the request.
        2. Set the HTTP method for the request.
        3. Use {{url}} to pull the base URL from the environment variable. Add the resource path and any request parameters your API request may require.
        4. Add the Authorization header and set the value to {{auth}} to pull the value from the environment variable.
        5. POST, PUT, and PATCH requests only: if your request includes JSON data, be sure to select the Body tab and add it.
    6. Press Send to send the request. The response will appear below the request in Postman.
Troubleshooting

You receive the response "HTTP Status 401 - Unauthorized". Confirm the following:
• The proper environment has been specified for the request.
• The necessary environment variables have been set and their values are correct. Note that the script relies on the specific variable names used in this document: "api_id", "api_key", "url", and "auth".
• The request is a member of the collection configured with the pre-request script.

Postman reports "Could not get any response" or "There was an error in evaluating the Pre-request Script: TypeError: Cannot read property 'sigBytes' of undefined": make sure you have set the proper environment for the request and that all necessary environment variables and values are present.
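For sanity-checking the header the pre-request script produces, the same signing logic can be reproduced outside Postman. A minimal Python sketch (the token ID and key below are fake placeholders; note the signature is the base64 encoding of the *hex* HMAC digest, matching CryptoJS's .toString() followed by btoa()):

```python
# Standalone reproduction of the LMv1 signing logic for debugging.
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(api_id, api_key, http_verb, resource_path, data="", epoch=None):
    """Build an LMv1 Authorization header value.

    For GET/DELETE requests, leave data empty, mirroring the Postman script.
    """
    if epoch is None:
        epoch = int(time.time() * 1000)  # milliseconds, like JS getTime()
    request_vars = f"{http_verb}{epoch}{data}{resource_path}"
    # hex digest first, then base64 -- matches btoa(CryptoJS...toString())
    digest_hex = hmac.new(api_key.encode(), request_vars.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest_hex.encode()).decode()
    return f"LMv1 {api_id}:{signature}:{epoch}"

print(lmv1_auth_header("FAKE_ID", "FAKE_KEY", "GET", "/device/devices"))
```

If this value and the one Postman generates (for the same epoch, verb, path, and body) differ, one of the environment variables is likely wrong.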
  12. LogicMonitor's configuration backup product, LMConfig, has traditionally been focused on network device configuration backup and diff alerting. However, like other LogicMonitor LogicModules, we provide the capability to run both Groovy and PowerShell scripts in order to retrieve this information. Given those PowerShell capabilities, we can tap into the Windows Active Directory PowerShell modules and use LogicMonitor as an auditing tool. For example:
    • Query Active Directory for a list of domain computers, and generate an alert if this list changes.
    • Query Active Directory for the Default Domain Password Policy, and generate an alert if it doesn't comply with Microsoft best practices.
    The current suite consists of 11 Active Directory ConfigSources that will attempt integrated authentication using a Windows collector's service account - unless it finds wmi.user and wmi.pass properties set, in which case it will attempt to use those instead. I've published them to GitHub, and they can be downloaded from the ConfigSources repository. *These are "officially unsupported" by LogicMonitor, so please proceed with caution!
  13. Exporting NetFlow from Linux with softflowd

    NetFlow is an industry standard network protocol for monitoring traffic flows across a network interface. It is used most commonly by devices like firewalls, routers, and switches, but some software packages make it possible to export NetFlow data from a server operating system - in this case Linux (with softflowd) - to a NetFlow collector (LogicMonitor) for traffic analysis.

    Ubuntu

    Documentation here: http://manpages.ubuntu.com/manpages/xenial/man8/softflowd.8.html

    The following assumes you have an Ubuntu device in your portal which you can access with sudoer permissions. It also assumes NetFlow has been enabled for the device and the collector in question.

    Install softflowd:

        sudo apt-get install softflowd

    Open /etc/default/softflowd for editing:

        sudo nano /etc/default/softflowd

    Set the value for INTERFACE and add the destination ip:port (<collectorIP>:2055) under OPTIONS. Other options are available; check the link above for full documentation.

        #
        # configuration for softflowd
        #
        # note: softflowd will not start without an interface configured.

        # The interface softflowd listens on. You may also use "any" to listen
        # on all interfaces.
        INTERFACE="eth0"

        # Further options for softflowd, see "man softflowd" for details.
        # You should at least define a host and a port where the accounting
        # datagrams should be sent to, e.g.
        # OPTIONS="-n 127.0.0.1:9995"
        OPTIONS="-n 192.168.170.130:2055"

    Save your changes by pressing Ctrl-O, then exit nano by pressing Ctrl-X. Restart softflowd:

        sudo service softflowd restart

    Add a rule to the firewall to allow traffic on 2055:

        sudo ufw allow 2055

    CentOS

    This is a bit more work since you can't just install a package; you'll need to download the source and compile.
    Most of the information here comes from https://www.scribd.com/doc/199440303/Cacti-Netflow-Collector-Flowview-and-Softflowd
    More good info: https://thwack.solarwinds.com/thread/59620

    Check to see if you have the compiler installed:

        which gcc

    If you don't get /usr/bin/gcc as the response, you'll need to install it:

        sudo yum install gcc

    Install libpcap-devel (you'll need this to compile softflowd):

        sudo yum install libpcap-devel

    Download the softflowd source:

        wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/softflowd/softflowd-0.9.9.tar.gz

    Make sure you're in the directory where you saved the download, then untar the downloaded source files:

        tar -xzvf softflowd-0.9.9.tar.gz

    Switch to the softflowd directory, then run the commands to compile and install it:

        cd softflowd-0.9.9
        ./configure
        make
        make install

    Now we want to have softflowd start when the system boots, so we'll add a line to the end of /etc/rc.d/rc.local. Use your device's interface after -i and your collector's IP address after -n:

        sudo nano /etc/rc.d/rc.local

    Add the following line to the end of the file:

        /usr/local/sbin/softflowd -i eth0 -n 10.13.37.111:2055

    Save your changes with Ctrl-O, exit nano with Ctrl-X. Make sure /etc/rc.d/rc.local is executable:

        sudo chmod +x /etc/rc.d/rc.local

    Open UDP port 2055 in the firewall so the collector can receive the data:

        sudo firewall-cmd --zone=public --add-port=2055/udp --permanent

    Reboot the machine for all changes to take effect.

    *Original guide courtesy of @Kurt Huffman at LogicMonitor
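    When troubleshooting a setup like the above, it can help to confirm datagrams are actually arriving at the destination before involving the collector. A throwaway Python listener (a diagnostic sketch only; run it on the receiving host while nothing else is bound to the NetFlow port) can stand in for the collector:

```python
# Minimal UDP listener to verify softflowd export is reaching this host.
# Port 2055 matches the OPTIONS/-n destination configured above.
import socket

def wait_for_datagram(port=2055, timeout=30.0):
    """Bind UDP :port and return (payload, sender_address) of the first datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", port))
        sock.settimeout(timeout)  # raise socket.timeout if nothing arrives
        return sock.recvfrom(65535)
```

    Call wait_for_datagram() on the destination host and generate some traffic on the exporter; if it returns within the timeout, export is working at the network level and any remaining issue is on the collector side.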
  14. The Monaco Editor powers VS Code; it would be awesome if all the script fields could use it. https://github.com/Microsoft/monaco-editor
  15. Timezone per user account

    Great to hear @Ali Holmes!
  16. PropertySource - Installed Windows Server Features

    Awesome job guys, loved it, but I wanted to show pretty much all the stuff, so I updated it, limiting the exclude to only Netlogon. Love it. Medi
  17. Sharing Dashboard views

    Thanks Kerry, awesome dashboards. I feel like a kid in a candy store: "can we have some more?" Regards, Medi
  18. Love the idea of this! Might have to figure out a way to get this setup. Thanks for the write up Kerry!
  19. Audit Log Enhancement for API Activity

    3) The existing audit log could have an easy filter to hide API calls and reduce noise. It would be very helpful to be able to toggle between API and non-API entries, since we commonly want to check API-only or user-only logs.
  20. Windows Services Check

    You would create a new LogicModule > DataSources template. The AppliesTo expression should be set there. This makes the subsequent datasource available (or automatically applied, if you choose to use Active Discovery) to Windows servers that have your desired services running. The rest of the datasource would be designed to poll for your services and alert thresholds. There are many ways to do that. You can use a WMI collector (like in the WinService- datasource, which you can clone and play with), or, if you feel adventurous, you can use a Script collector and use Groovy or PowerShell (or some other scripting language) to return the data you want.
  21. Sharing Dashboard views

    Agreed. Add NetApp and Nutanix to the list. Anyone willing to start a dashboard? "Willing" here meaning contributing either the time and effort to put into this, or a donation if someone else beats me to the punch.
  22. Actually, I want to add a recipient group using the REST API, and I'm not able to find any documentation. Am I making a mistake, or is it something else? Thanks. Regards, ShahzadAli
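    For the recipient-group question above, a hedged sketch of what the request might look like. The /setting/recipientgroups resource path and the payload field names below are assumptions based on LogicMonitor REST API conventions, so verify them against the current API documentation; the account name and recipient values are placeholders:

```python
# Sketch of a POST to create a recipient group. This only builds the URL
# and JSON body; send it with any HTTP client, signing the request with
# the LMv1 scheme described elsewhere in this stream.
import json

BASE_URL = "https://ACCOUNT.logicmonitor.com/santaba/rest"  # placeholder account
RESOURCE_PATH = "/setting/recipientgroups"  # assumed endpoint for recipient groups

payload = {
    "groupName": "example-recipient-group",   # hypothetical group name
    "description": "Created via REST API",
    "recipients": [
        # assumed recipient shape: an admin user notified by email
        {"type": "ADMIN", "method": "email", "addr": "jdoe"},
    ],
}

body = json.dumps(payload)
print("POST", BASE_URL + RESOURCE_PATH)
print(body)
```

    The body would be sent with Content-Type: application/json; the exact required fields should be confirmed against the published schema before relying on this.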
  23. Windows Services Check

    I am fairly new to LogicMonitor. I was able to import the PropertySource, and in the AppliesTo field I am able to apply it to the servers I need. I am a little lost on where I would apply the auto.winservices =~ "WindowsService1|WindowsService2|etc" expression. Then, where would the alert tuning come in, to alert only when specific services are not running - or, in another case, to alert if a service is running?
  24. Sharing Dashboard views

    Kerry, these are awesome. Keep them coming! Would love to see for Cisco switches, CheckPoint FW's, Citrix XA/XD, Citrix NetScaler, and more! Much appreciated, these really do help a lot.
  25. Audit Log Enhancement for API Activity

    I think that would help, @Sarah Terry. The main issue I'm trying to avoid is this: we recently went through and removed users with no recorded activity. Some of them ended up being API users that were heavily used, but "Last Action" date from the users screen was blank and there was no activity in the audit log.
  26. Audit Log Enhancement for API Activity

    Hi @stuart.vassey - thanks for posting. Follow-up question: if we offered a way of monitoring API usage (in a granular way that exposed the number of requests to resources by method and type) for users in your account, would you still want the GET requests logged in the audit log?