Stuart Weenig

Administrators

  1. You might look at other ways of discovering the list of VRF names, like maybe an API pull. You could do it in a property source and store the list as a property on each device. Then, instead of making a DS for each device, you could make a DS that pulls the list of VRFs from the device's property that was set previously.
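To make that property round-trip concrete, here is a minimal standalone sketch (Java rather than collector Groovy) of writing the VRF list as a single device property and splitting it back out at collection time; the auto.vrfs property name and the VRF names are hypothetical:

```java
import java.util.Arrays;
import java.util.List;

public class VrfProperty {
    // PropertySource side: print one "name=value" line to set a device property.
    // "auto.vrfs" is a hypothetical property name.
    static String propertyLine(List<String> vrfs) {
        return "auto.vrfs=" + String.join(",", vrfs);
    }

    // DataSource side: split the stored property back into VRF names.
    static List<String> parseVrfs(String propValue) {
        return Arrays.asList(propValue.split(","));
    }

    public static void main(String[] args) {
        String line = propertyLine(List.of("CUST_A", "CUST_B")); // hypothetical VRFs
        System.out.println(line);            // auto.vrfs=CUST_A,CUST_B
        System.out.println(parseVrfs("CUST_A,CUST_B"));
    }
}
```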
  2. Whoops. Forgot to push it to github. Here it is. No guarantees, but it's a start. https://github.com/sweenig/lmcommunity/blob/master/Microsoft_RemoteDesktop_ConnectionBroker/Microsoft_RemoteDesktop_ConnectionBroker.xml
  3. Snmp method reference: https://www.logicmonitor.com/support/terminology-syntax/scripting-support/access-snmp-from-groovy/
  4. In 15 years of SNMP monitoring, I've never seen it done that way. Somebody got needlessly creative. Just to be sure: you do make the SNMP get to the same address regardless of VRF, right? If so, you could switch to Groovy and make the SNMP queries custom. It looks like both Snmp.get() and Snmp.walk() have the ability to specify the community string at runtime. So, if you have the list of VRF names and the base community string, you could construct the concatenated version and make the query using a different "community string" for each VRF.
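The "concatenated version" is just string assembly. A minimal standalone sketch (Java rather than collector Groovy), assuming a base@vrf convention, which you would need to confirm against your device's actual scheme:

```java
import java.util.List;

public class VrfCommunity {
    // Builds the per-VRF community string. The "base@vrf" shape is an
    // assumption here; check your device vendor's convention.
    static String vrfCommunity(String base, String vrf) {
        return base + "@" + vrf;
    }

    public static void main(String[] args) {
        List<String> vrfs = List.of("CUST_A", "CUST_B"); // hypothetical VRF names
        for (String vrf : vrfs) {
            // In the Groovy DS you would pass this value as the community
            // string for that VRF's SNMP query.
            System.out.println(vrfCommunity("public", vrf));
        }
    }
}
```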
  5. Ah, looks like you put the collection script in the Active Discovery field. What you needed was a new BATCHSCRIPT DS with two scripts. The Active Discovery script looks like this:

         import groovy.sql.Sql // needed for SQL connection and query

         hostname = hostProps.get("system.hostname")
         user = ''
         pass = ''
         SQLUrl = "jdbc:sqlserver://" + hostname + ";databaseName=master;integratedSecurity=true"
         sql = Sql.newInstance(SQLUrl, user, pass, 'com.microsoft.sqlserver.jdbc.SQLServerDriver')
         sql.eachRow('SELECT name, expiry_date from sys.certificates') {
             certname = it.name.toString().replaceAll("#", "")
             println(certname + "##" + certname)
         }
         sql.close() // Close connection

     And the collection script would look like this:

         import groovy.sql.Sql // needed for SQL connection and query
         import groovy.time.*

         hostname = hostProps.get("system.hostname")
         user = ''
         pass = ''
         SQLUrl = "jdbc:sqlserver://" + hostname + ";databaseName=master;integratedSecurity=true"
         sql = Sql.newInstance(SQLUrl, user, pass, 'com.microsoft.sqlserver.jdbc.SQLServerDriver')
         sql.eachRow('SELECT name, expiry_date from sys.certificates') {
             certname = it.name.toString().replaceAll("#", "")
             // expiry minus now, so the value stays positive until the cert expires
             daystoexpire = TimeCategory.minus(Date.parse("yyy-MM-dd HH:mm:ss.S", it.expiry_date.toString()), new Date())
             println(certname + ".daystoexpire: " + daystoexpire.getDays())
         }
         sql.close() // Close connection

     Then add a datapoint keyed to ##WILDVALUE##.daystoexpire (multi-line key-value pairs).
  6. Whoops, I called the difference timetoexpire, then tried to print daystoexpire. Change one to the other.
  7. So, you can't change the collector type once the DS has been saved, unfortunately. You'll have to start a new DS to change it. I think the problem might just be that the date coming from the SQL query is somehow not a string, which the parser expects. Try casting it as a string so the parser can pick it up:

         timetoexpire = TimeCategory.minus(new Date(), Date.parse("yyy-MM-dd HH:mm:ss.S", it.expiry_date.toString()))

     Casting it as a string and then parsing it into a date may be redundant, since it may already be a date, but this makes sure.
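If the TimeCategory math keeps misbehaving, the same arithmetic is easy to sanity-check outside the collector with plain java.time. A standalone Java sketch with a made-up timestamp:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class CertExpiry {
    // Parses a SQL-style timestamp string and returns whole days from "now"
    // until that moment.
    static long daysToExpire(String expiry, LocalDateTime now) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.S");
        LocalDateTime exp = LocalDateTime.parse(expiry, fmt);
        return ChronoUnit.DAYS.between(now, exp);
    }

    public static void main(String[] args) {
        // hypothetical expiry date and fixed "now" for a repeatable check
        LocalDateTime now = LocalDateTime.of(2020, 5, 1, 0, 0);
        System.out.println(daysToExpire("2020-05-31 00:00:00.0", now)); // 30
    }
}
```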
  8. So close again:

         import groovy.sql.Sql // needed for SQL connection and query

         hostname = hostProps.get("system.hostname")
         user = ''
         pass = ''
         SQLUrl = "jdbc:sqlserver://" + hostname + ";databaseName=master;integratedSecurity=true"
         sql = Sql.newInstance(SQLUrl, user, pass, 'com.microsoft.sqlserver.jdbc.SQLServerDriver')
         sql.eachRow('SELECT name, expiry_date from sys.certificates') {
             certname = it.name.toString().replaceAll("#", "")
             println(certname + "##" + certname)
         }
         sql.close() // Close connection

     The problem was two-fold: 1) you were not outputting in the AD format, and 2) you had "##" in your certificate names, which interferes with the built-in mechanism that parses the output. Make sure discovery is working properly first, because the collector script will look very similar up until the println statement.

     Your collection script would be very similar (assuming you've set BATCHSCRIPT as the collector type on the DS):

         import groovy.sql.Sql // needed for SQL connection and query
         import groovy.time.*

         hostname = hostProps.get("system.hostname")
         user = ''
         pass = ''
         SQLUrl = "jdbc:sqlserver://" + hostname + ";databaseName=master;integratedSecurity=true"
         sql = Sql.newInstance(SQLUrl, user, pass, 'com.microsoft.sqlserver.jdbc.SQLServerDriver')
         sql.eachRow('SELECT name, expiry_date from sys.certificates') {
             certname = it.name.toString().replaceAll("#", "")
             timetoexpire = TimeCategory.minus(new Date(), Date.parse("yyy-MM-dd HH:mm:ss.S", it.expiry_date))
             println(certname + ".daystoexpire: " + daystoexpire.getDays())
         }
         sql.close() // Close connection

     The timetoexpire variable might need some tweaking to make sure it parses properly. This should give you collection output that looks like this (fake values):

         MS_AgentSigningCertificate.daystoexpire: 23
         MS_PolicySigningCertificate.daystoexpire: 45

     You'd create a datapoint sourced from "Content the script writes to the standard output", set "Interpret output with" to "multi-line key-value pairs", and set the key to ##WILDVALUE##.daystoexpire.
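Since the output contract is the part that usually breaks, here is just the line formatting and "#" sanitization in isolation (a standalone Java sketch; the certificate names are made up):

```java
public class BatchOutput {
    // Sanitizes an instance name and formats one line of batchscript
    // key-value output: "WILDVALUE.datapoint: value".
    static String line(String certName, long days) {
        String wildvalue = certName.replaceAll("#", ""); // "#" breaks the output parser
        return wildvalue + ".daystoexpire: " + days;
    }

    public static void main(String[] args) {
        // hypothetical certificate names and values
        System.out.println(line("MS_Agent##SigningCertificate", 23)); // MS_AgentSigningCertificate.daystoexpire: 23
        System.out.println(line("MS_PolicySigningCertificate", 45)); // MS_PolicySigningCertificate.daystoexpire: 45
    }
}
```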
  9. Ok, looks like you're using the PropertySource output format (propertyname=value) where the Active Discovery output format (certid##certdisplayname) is expected. If you can post your AD script here, I can show you what changes need to be made to get it to work.
  10. Motivation is directly proportional to the number of customers that request it.
  11. That likely has to do with the output format of your script; "#" should be avoided in WILDVALUEs. You switched to scripted Active Discovery, right? If so, then your output should look like this for each certificate: WILDVALUE##WILDALIAS##DESCRIPTION So, if your WILDVALUE or WILDALIAS contains "##", that may be screwing up the parsing of the output. I recommend a .replaceAll("#",""). I'm curious whether you're still going the PropertySource route or have done it in a DataSource. I think it can all be done in a DataSource, with a datapoint that calculates the difference between now and the expiration date. Is that what you did?
  12. So close. The .eachRow is your loop statement:

         import groovy.sql.Sql // needed for SQL connection and query

         hostname = hostProps.get("system.hostname")
         user = ''
         pass = ''
         SQLUrl = "jdbc:sqlserver://" + hostname + ";databaseName=master;integratedSecurity=true"
         sql = Sql.newInstance(SQLUrl, user, pass, 'com.microsoft.sqlserver.jdbc.SQLServerDriver')
         sql.eachRow('SELECT name, expiry_date from sys.certificates') {
             println "full_name_certificate=" + it.toString()
         }
         sql.close() // Close connection

     As for the connection string, it looks like it should take this format:

         jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]]
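Reading that template: the instance name and port are optional, and properties are appended as ;key=value pairs. A standalone Java sketch of assembling such a URL (the host and instance names are hypothetical):

```java
public class SqlServerUrl {
    // Assembles a SQL Server JDBC URL from the documented shape:
    // jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value...]
    static String url(String server, String instance, Integer port, String db) {
        StringBuilder sb = new StringBuilder("jdbc:sqlserver://").append(server);
        if (instance != null) sb.append("\\").append(instance); // optional named instance
        if (port != null) sb.append(":").append(port);          // optional explicit port
        sb.append(";databaseName=").append(db).append(";integratedSecurity=true");
        return sb.toString();
    }

    public static void main(String[] args) {
        // hypothetical server and instance names
        System.out.println(url("dbhost01", "SQLEXPRESS", null, "master"));
    }
}
```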
  13. Here's the link to the DataSources for collector counts. I don't like how difficult it is to make the dashboard work, so I'll be revising them. It's possible these metrics are already available in the Portal Metrics package, so I need to take a close look at that to see if the stats are available there. Also, there's no documentation yet, so there's that too. https://github.com/sweenig/lmcommunity/tree/master/CollectorLoad
  14. You can view the May 27th recording here: We'd love your feedback, so please take a moment to respond to our survey: https://docs.google.com/forms/d/e/1FAIpQLScPWW5DzNxe2W5ieh6PjamLYWcP5AhDbUl1E3U7ZKryEgwEoA/viewform?usp=pp_url&entry.2118543627=2020-05-27 We'll be having another Live Training Webinar in two weeks (and if you've watched the video, you might be able to guess the topic). Register here. All of the questions were answered in the video, so please watch the recording. The questions are listed here in the same order asked, for searchability:
      - I want to know about the DataSources configuration and how to do it.
      - I want to know about the integration between alerts and auto-ticketing.
      - Hi Stuart, is there any report available OOB similar to a heatmap in CAPC? Also, what's the best way to get an Availability/Reachability SLA report for multiple remote locations?
      - Hi Stuart!! We have had Collector load (overload!) issues off and on for quite a while. The biggest challenge is usually how to track down which DataSource (or family of DataSources) is impacting a set of Collectors. Do you have any tips/tricks on how to drill down from a Collector overload condition to find out what exactly is generating the overload?
      - Is it possible to group interfaces from different devices? For instance, if we have 100 remote locations with 2 internet circuits each, can we group those 200 internet circuits into a single group so we can easily identify the port where a circuit terminates, instead of navigating to each location and then trying to find the circuit interface?
      - How can I protect my system from blowing up after upgrading DataSources? I have some exclusions and "tweaks" to some of them. For example, I exclude the VMware host-level VM disk space check because vCenter also does it. When I upgrade DataSources, it adds back the defaults, which causes me to get hundreds of alerts that were excluded previously.
      - A confusion/question related to NetFlow: as opposed to NFA, in LogicMonitor, NetFlow is shown at the device level instead of the interface level. So if we have flow export configured on multiple interfaces of a single device, would that show an aggregate of all interfaces?
      - This might show how effective ABCG is? Or at least give one visualization showing how ABCG might be moving devices (and their instances) around?
      - One piece of feedback; alternatively, a question. The SDT panel is somewhat cumbersome if you have a number of alerts to SDT. Lots of clicks and confirms. Ideally, a couple of hot buttons, like "Put in downtime for: 5 min, 30 min, 2 hours, 2 days", etc. The fewer clicks, the better. I bring this up because we have a number of services with numerous alerts, all of which tend to fire when the service/server is down (multi-tenant apps, etc.). Ever since the Nagios days I've wanted cascading alerts. For example, if a database is down, I KNOW that every single app on a host, or even several, will be down. I don't need all 30 apps to complain if the database is out, so it peeves me to have to SDT 30 apps. Is it possible to link alerts in a hierarchy or dependency chain so that only one alert goes out for significant outages? Another example: if the power in the data center goes out (or a battery backup catches fire - it HAPPENED!), all the servers don't need to report.
      - Safe Logic Module Merge. Best thing since Sierra Nevada Summerfest Lager.
      - Hi Stuart, there are various devices in the environment, mostly routers and switches with parent-child relationships. Can LogicMonitor read dependencies like the parent-child relationship between devices so that the number of alerts can be suppressed? Also, can it read the topology mapping from the network itself, or do we still need a discovery tool?
  15. Both options I listed use DataSources. Instance-level properties can only be set inside Active Discovery in a DataSource. PropertySources will only set properties at the device level, and PropertySources don't open alerts at all.