All Activity

  1. Today
  2. Just the ability to pass the alert message through a scripted function that returns a string back to the alert handling process would meet many people's requirements in this space. The alert message script could be part of the DataSource, same as discovery and collection scripts. At the datapoint level would be nice, but even at the module level it would be fine, as we could if()/else or switch()/case on data points.
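What that could look like, sketched in plain Java (the `buildMessage` function, the datapoint names, and the thresholds are all hypothetical, not an actual LM API; the point is just a scripted function that returns the final alert message string):

```java
// Hypothetical sketch: the platform would hand the datapoint name and value
// to a user-supplied function and use whatever string it returns as the alert
// message. Datapoint names and thresholds here are invented for illustration.
public class AlertMessageScript {
    public static String buildMessage(String datapoint, double value) {
        switch (datapoint) {
            case "SessionCount":
                return value > 100
                        ? "SSL VPN near capacity: " + (int) value + " sessions"
                        : "SSL VPN sessions: " + (int) value;
            case "StatusCode":
                // Map a numeric status to a human-readable phrase
                return value == 0 ? "Service healthy"
                                  : "Service degraded (code " + (int) value + ")";
            default:
                return datapoint + " = " + value; // fallback: plain substitution
        }
    }
}
```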
  3. I like the idea, but I would want the option to hide it by default or limit its visibility based on a role.
  4. This would be really useful. We had to develop our own internal map because we need weather overlay. We have global networks and weather affects network operations in many parts of the world.
  8. Last week
  9. @pperreault good stuff. If I were to implement this in product I would probably write a regex with capture groups to match a valid user line. That way you can be more sure that the 9-column line is actually a user entry, and you can use the capture groups to easily dump out other items of interest. This is a pretty common pattern in our core DataSources; most of the Nimble Storage stuff follows this pattern. The regexes in those scripts might look scary, but they're pretty easy to build in something like https://regexr.com

It's pretty solid looking for a first Groovy script. I would consider exiting the SSH session before you process the output; there's no need to keep that connection open after you have the data in a variable. As far as error handling, you could wrap the initial Expect session setup in a try/catch. You can also use exitValue() on your Expect client object to ensure that your `show` command returned a successful return code before parsing the output. Here's a pretty extreme example of a Groovy script with multiple return codes for specific issues:

```groovy
import rocks.xmpp.addr.Jid
import rocks.xmpp.core.XmppException
import rocks.xmpp.core.sasl.AuthenticationException
import rocks.xmpp.core.session.TcpConnectionConfiguration
import rocks.xmpp.core.session.XmppClient
import rocks.xmpp.core.session.XmppSessionConfiguration
import rocks.xmpp.core.session.debug.ConsoleDebugger
import rocks.xmpp.core.stanza.model.Message

import javax.net.ssl.*

// Server details, required
def host = hostProps.get("system.hostname")
def domain = hostProps.get("xmpp.domain")
def port = hostProps.get("xmpp.port") ?: '5222'

// Account details, required
def sender = hostProps.get("xmpp.sender")
def sender_pass = hostProps.get("xmpp.sender.pass")
def receiver = hostProps.get("xmpp.receiver")
def receiver_pass = hostProps.get("xmpp.receiver.pass")

// Optional, disable starttls by setting to 'false'
def use_ssl = hostProps.get("xmpp.ssl") ?: true
// Optional, set to "true" to enable debug output in wrapper.log
def debug = hostProps.get("xmpp.debug") ?: 'false'
// Optional, change default authentication mechanism
def auth_mechanism = hostProps.get("xmpp.authmech") ?: "PLAIN"
// Time to wait for the expected message (in seconds)
def timeout = hostProps.get("xmpp.message.timeout") ?: 5

// Check for required props
if (!(sender && sender_pass && receiver && receiver_pass && domain)) {
    println 'missing required properties'
    println "xmpp.sender, xmpp.sender.pass, xmpp.receiver, xmpp.receiver.pass, and xmpp.domain are required"
    return 1;
}

// Used as a lock for synchronizing
def received = new Object();

// Setup a bogus trust manager to accept any cert
def nullTrustManager = [
    checkClientTrusted: { chain, authType -> },
    checkServerTrusted: { chain, authType -> },
    getAcceptedIssuers: { null }
]

// Setup a bogus hostname verifier
def nullHostnameVerifier = [
    verify: { hostname, session -> true }
]

// Setup an SSL context with the bogus TM and HV to accept any cert
SSLContext sc = SSLContext.getInstance("SSL")
sc.init(null, [nullTrustManager as X509TrustManager] as TrustManager[], null)

public class NullHostnameVerifier implements HostnameVerifier {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
}
HostnameVerifier verifier = new NullHostnameVerifier();

// Store send and receive timestamps so we can calculate RTT (in millis)
def send_time = null
def receive_time = null

// Check debug option, setup session accordingly
if (debug == 'true') {
    sessionConfiguration = XmppSessionConfiguration.builder()
        .debugger(ConsoleDebugger.class)
        .authenticationMechanisms(auth_mechanism)
        .build();
} else {
    sessionConfiguration = XmppSessionConfiguration.builder()
        .authenticationMechanisms(auth_mechanism)
        .build();
}

// Setup connection config
def tcpConfiguration = TcpConnectionConfiguration.builder()
    .hostname(host)          // The hostname.
    .port(port.toInteger())  // The XMPP default port.
    .sslContext(sc)          // An SSL context which trusts every server. Only use it for testing!
    .hostnameVerifier(verifier)
    .secure(use_ssl)         // We want to negotiate a TLS connection.
    .build();

// Create the client instances
def xmppClientSender = XmppClient.create(domain, sessionConfiguration, tcpConfiguration);
def xmppClientReceiver = XmppClient.create(domain, sessionConfiguration, tcpConfiguration);

// Add a message listener to the receiver client instance
xmppClientReceiver.addInboundMessageListener({ e ->
    // Get the message
    Message message = e.getMessage();
    // Confirm that this message is from the expected sender
    if (message.from.toString() == "${sender}@${domain}/LMSENDER") {
        // Confirm that this message has the expected content
        if (message.body == "TESTMESSAGE") {
            // Record receive time
            receive_time = System.currentTimeMillis();
            // Notify the parent thread that we can exit.
            // synchronized must be used for notify() to work; received is our lock.
            synchronized (received) {
                // Notify the main thread to resume now that we have received the expected message.
                received.notify();
            }
        }
    }
});

// Try to connect the receiver client
try {
    xmppClientReceiver.connect();
} catch (XmppException e) {
    println 'Receiver client failed to connect';
    return 2;
}

// Try to connect the sender client
try {
    xmppClientSender.connect();
} catch (XmppException e) {
    println 'Sender client failed to connect';
    return 3;
}

// Try to login and send a message
try {
    // We need to use synchronized here to call wait() on our lock object (received).
    // This will wait for a message event from the expected user with the expected content.
    // The message handler will call notify() to resume this thread.
    // If we don't do this the client will close before we get the message.
    synchronized (received) {
        // Login with receiver account
        xmppClientReceiver.login(receiver, receiver_pass, 'LMRECEIVER')
        // Login with sender account
        xmppClientSender.login(sender, sender_pass, 'LMSENDER')
        // Record time just before sending, so we can get RTT
        send_time = System.currentTimeMillis()
        // Send the message
        xmppClientSender.send(new Message(Jid.of("${receiver}@${domain}"), Message.Type.CHAT, "TESTMESSAGE"))
        // Call wait() to block this thread until the message listener has called notify(),
        // or until the timeout.
        received.wait(timeout * 1000)
    }
    if (send_time == null) {
        println "Message wasn't sent."
        return 4;
    }
    if (receive_time == null) {
        println "Message wasn't received."
        return 5;
    }
    // Print delivery time
    println receive_time - send_time
} catch (AuthenticationException e) {
    println 'Authentication issue. Check your credentials.'
    println e;
    return 6;
} catch (XmppException e) {
    println 'Something else went wrong with XMPP.'
    println e;
    return 7;
} finally {
    // Close the connections
    xmppClientReceiver.close()
    xmppClientSender.close()
}
return 0;
```
  10. Just created a DS to count SonicWall SSL VPN sessions. Locator FMN27M. Would love some feedback and code improvements, as this is my first Groovy script.

- Not a fan of just dropping the exit code as I have. There must be a better way to implement validations/error checking and output appropriate exit codes.
- Would like to see other methods for counting the users. Maybe matching on the string "User Name" and counting the lines that follow?
- Could see this growing to include user session length.
- Would be nice to only apply the DS to a resource if the SSL VPN server is running.

For ease, here is sample output from the firewall, and the script.

```
=======================
Active SSLVPN Sessions:
=======================
User Name  Client Virtual IP  Client WAN IP  Login Time    Inactivity Time  Logged In
user1      10.10.10.10        6.6.6.6        1799 Minutes  0 Minutes        01/23/2020 09:29:52
user2      10.10.10.11        5.5.5.5        460 Minutes   0 Minutes        01/24/2020 07:49:31
user3      10.10.10.12        4.4.4.4        368 Minutes   0 Minutes        01/24/2020 09:22:08
user4      10.10.10.13        3.3.3.3        224 Minutes   0 Minutes        01/24/2020 11:45:54
user5      10.10.10.14        2.2.2.2        170 Minutes   0 Minutes        01/24/2020 12:39:37
user6      10.10.10.15        1.1.1.1        13 Minutes    0 Minutes        01/24/2020 15:15:49
```

```groovy
import com.santaba.agent.groovyapi.expect.Expect;

hostname = hostProps.get("system.hostname");
userid = hostProps.get("ssh.user");
passwd = hostProps.get("ssh.pass");

// Initialize a variable to contain the actual host prompt
def actualPrompt = "";
def sslvpn_user_count = 0;

// Open an SSH connection and wait for the prompt
ssh_connection = Expect.open(hostname, userid, passwd);
ssh_connection.expect(">");

// Capture the full prompt, e.g. user@host
ssh_connection.before().eachLine { line ->
    actualPrompt = line;
}

// Display the SSL VPN sessions
ssh_connection.send("show ssl-vpn sessions \n");
ssh_connection.expect(actualPrompt + ">");
cmd_output = ssh_connection.before();

// Read through the multiline output:
// rows with 9 columns are user sessions, so increment the total for each
cmd_output.eachLine { line ->
    row_length = line.split(/\s+/);
    if (row_length.size() == 9) {
        sslvpn_user_count++
    }
}

ssh_connection.send("exit");
println(sslvpn_user_count);
return 0;
```
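The capture-group approach suggested in the earlier reply can be sketched like this, in plain Java so it runs standalone (on a collector this would be Groovy; the regex is derived from the sample output's columns and is an assumption about the firewall's exact format):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SslVpnSessionParser {
    // Capture groups: user, virtual IP, WAN IP, login minutes,
    // inactivity minutes, login timestamp. Matching the full shape of a
    // session row is safer than just counting 9 whitespace-separated columns.
    static final Pattern USER_LINE = Pattern.compile(
        "^(\\S+)\\s+(\\d{1,3}(?:\\.\\d{1,3}){3})\\s+(\\d{1,3}(?:\\.\\d{1,3}){3})\\s+" +
        "(\\d+) Minutes\\s+(\\d+) Minutes\\s+(\\d{2}/\\d{2}/\\d{4} \\d{2}:\\d{2}:\\d{2})$");

    public static int countSessions(String output) {
        int count = 0;
        for (String line : output.split("\\r?\\n")) {
            Matcher m = USER_LINE.matcher(line.trim());
            // Header and banner lines never match; each m.group(1..6)
            // would also give the per-session fields if needed later.
            if (m.matches()) count++;
        }
        return count;
    }
}
```

Because the header line ("User Name Client Virtual IP ...") can never satisfy the IP-address groups, it is excluded automatically, with no column counting required.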
  11. We have our setup working kinda like that. The same alert will update the same ticket and will only cause a new ticket once the original ticket has been closed. But we use ServiceNow, and all of that process is completed within the ticketing system itself (I'm not too involved with how it works), independent of LM or any other system. We do still try to limit flapping situations though. That is something that might differ between ticketing systems. For example, we never re-open an old ticket; it actually prevents us from doing that if it's been closed for a period of time. We can have tickets reference another older ticket, but it's still a new ticket. I think it would throw off our reports and such to have old tickets come back that way. But I think these are all valid options, just something I personally feel is better suited for the ticketing system (or middleware) to handle, since LM isn't all that flexible with integrations. I just let LM do the monitoring and use other systems to deal with ticket routing, notifications, SLAs and such. Then again, we're able to use a ($$$) ITIL/ticketing system :) So, I don't think I have any further suggestions for this case, unless LM implements some sort of Business Rules-like feature where you can have conditional effects on integration messages. Does CWM have any workflow or ticket pre-processing options? If it does, this might be easy, since you can tie the LMX###### directly to one ticket forever.
  12. I'm glad you asked! It would be set back to false when we mark it as resolved either in ConnectWise Manage or in LM, a feature that doesn't currently exist. This way, the alerts generated by flapping would be ingested into the same ticket as updates, but wouldn't change the status of the ticket. This is the workflow that I'd like to see with LM and ConnectWise Manage, which N-central has accomplished with CWM:

1. Alert threshold for Datapoint A is crossed, alert raised, ticket 001 created, status New.
2. Alert status clears; ticket 001 is updated, status remains unchanged.
3. Tech investigates the issue, communicates with the client, makes valuable internal notes, and resolves it. Marks ticket as Solved in CWM.
4. Alert in LM clears because the datapoint cleared, not because the ticket was marked as Solved in CWM.
5. Days/weeks/months/eons pass.
6. Alert threshold for Datapoint A is crossed, alert raised, ticket 001 is reopened, thereby retaining the earlier communication with the client, the internal notes, etc. Status changes from Solved to New, Ack, Re-opened -- not terribly important.

The points are:

- There should be a one-to-one relationship between an alert raised by a datapoint and its ticket; flapping of a datapoint's status should update its ticket, and never under any circumstances create a new ticket.
- A CWM ticket raised by LM should remain open until Solved in CWM.
- LM should be the truth; the status of an LM alert should not be impacted by the status of its ticket in CWM.

I hope this helps, Nate
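The one-to-one alert-to-ticket behavior described here can be sketched as a tiny in-memory store (entirely hypothetical, not a CWM or LM API; it just demonstrates "same alert always lands on the same ticket, a solved ticket is reopened, never duplicated"):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the workflow above; Ticket and the store are invented.
public class TicketStore {
    public static class Ticket {
        public final String id;
        public boolean solved = false;
        public int updates = 0;
        Ticket(String id) { this.id = id; }
    }

    private final Map<String, Ticket> byAlert = new HashMap<>();
    private int nextId = 1;

    // Every event for the same datapoint alert lands on the same ticket;
    // a solved ticket is reopened (history preserved), never duplicated.
    public Ticket onAlertEvent(String alertId) {
        Ticket t = byAlert.computeIfAbsent(alertId,
                k -> new Ticket(String.format("%03d", nextId++)));
        if (t.solved) t.solved = false; // reopen, keep notes and history
        t.updates++;
        return t;
    }

    // Only the ticketing system marks a ticket Solved.
    public void solveInTicketSystem(String alertId) {
        Ticket t = byAlert.get(alertId);
        if (t != null) t.solved = true;
    }
}
```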
  13. In addition to enum management within modules, the right fix is to replace the simple single-pass token substitution functionality with an actual templating engine. See https://docs.groovy-lang.org/docs/next/html/documentation/template-engines.html for a list -- LM needs to pick one and provide some sort of library function to enable management of template fragments.
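A minimal illustration of why this matters: with identical open/close delimiters like ##NAME##, a single-pass scanner cannot even tell where a nested token such as ##AUTO.ENUM.DS.DP.##VALUE#### ends, whereas an engine with distinct delimiters can resolve inner tokens first. The ${...} syntax and the tiny engine below are purely illustrative, not LM's implementation:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NestedTemplate {
    // Matches an *innermost* token: the body may not contain another ${ or }.
    // This only works because the open delimiter "${" differs from the close "}".
    static final Pattern INNER = Pattern.compile("\\$\\{([^${}]+)\\}");

    public static String resolve(String template, Map<String, String> props) {
        String cur = template;
        for (int guard = 0; guard < 10; guard++) { // bound iterations to avoid cycles
            Matcher m = INNER.matcher(cur);
            if (!m.find()) break;
            // Replace the innermost token, then rescan; inner results feed outer tokens.
            String value = props.getOrDefault(m.group(1), "?");
            cur = cur.substring(0, m.start()) + value + cur.substring(m.end());
        }
        return cur;
    }
}
```

With props VALUE=3 and AUTO.ENUM.FAKEDS.FAKEDP.3=Degraded, `${AUTO.ENUM.FAKEDS.FAKEDP.${VALUE}}` resolves inner-first to `${AUTO.ENUM.FAKEDS.FAKEDP.3}` and then to the enum name, which is exactly what a ##-delimited single pass cannot do.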
  14. I haven't thought of a way to do this either. The best I've come up with for script-based DataSources is to output extra text in the script output that DataPoints will ignore, but that will show up if you use Raw Data > Poll Now and scroll down to the scriptOutput section. Nowhere near as good as having the details in the alert message/ticket itself, but better than having to log into the problem system and work it out. P.S. Calling LM "Lomo", ha! :)
  15. The datasource is available now. Appreciate this Kerry! It's already making a big splash for my team.
  16. VMware ESXi can take snapshots of virtual machines, which freeze a virtual disk at a point in time. A new delta disk is created to record the I/O changes since the snapshot was created. When a snapshot is deleted, all of its changes are written down into the earlier disk; if you continue deleting snapshots until none remain, all the changes are finally written down into the base VMDK disk. The delta disks are removed after this process, which is called "consolidation" of the delta disks.

When files are locked in ESXi, consolidation will not occur, or will be interrupted. The process aborts, the VM continues to use the delta disk, and the changes are never committed to the earlier disk. This creates a snapshot chain that continues to grow.

In our situation we use a backup solution that relies on snapshot technology to freeze (quiesce) the operating system and take a recoverable backup. Backups are taken every 15 minutes. In this scenario we have reached a point where the disk chain is 255 delta disks long, yet there are no snapshots in the GUI (LogicMonitor can see these). However, LogicMonitor cannot see the delta disks.

When a snapshot is taken, it creates a delta disk named "vmdiskname-000001.vmdk"; a second snapshot creates a delta disk called "vmdiskname-000002.vmdk". When you remove or delete the snapshots and the process completes, the delta disks no longer exist. We need to see whether the number of delta disks exceeds the number of snapshots in the GUI. If it does, we have to repair the VM manually.

This situation is very dangerous for an ESXi host, because with a disk chain at around 255 delta disks you start to see very strange delay behaviors; the host has to track all the data through the chain. Behaviors include the VM freezing, then releasing and processing faster than normal until all the backlogged data has been processed, then freezing again. The more delta disks, the worse the performance decay for the ESXi host.
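The check described above reduces to comparing the count of delta-disk files on the datastore against the snapshot count the GUI/API reports. A small sketch (the -NNNNNN.vmdk naming follows the convention described in the post; the file list and GUI count are assumed inputs, e.g. from a datastore browse and the snapshot API):

```java
import java.util.List;
import java.util.regex.Pattern;

public class SnapshotChainCheck {
    // Delta disks are named like "vmdiskname-000001.vmdk" (six digits).
    static final Pattern DELTA = Pattern.compile(".*-\\d{6}\\.vmdk$");

    public static long countDeltaDisks(List<String> datastoreFiles) {
        return datastoreFiles.stream()
                .filter(f -> DELTA.matcher(f).matches())
                .count();
    }

    // Orphaned delta disks exist when the chain is longer than the GUI shows;
    // that is the condition requiring manual VM repair.
    public static boolean needsManualConsolidation(List<String> files, int guiSnapshotCount) {
        return countDeltaDisks(files) > guiSnapshotCount;
    }
}
```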
  17. The trunk can return some information, but it doesn't explicitly have a status object. Call status can be reached via https://www.twilio.com/docs/voice/api/call-resource#read-multiple-call-resources. You'd then traverse the returned JSON object, for example $.calls[].status. I'm working on something tangentially related; when I'm done I can update here if you'd like.
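A rough sketch of walking $.calls[].status from that endpoint's response. A real collector script would parse the JSON properly (e.g. Groovy's JsonSlurper); plain-Java stdlib has no JSON parser, so this just scans for status fields, which is only a demonstration of the traversal idea:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CallStatusScan {
    // Grabs every "status" value from the Calls list JSON (demonstration only;
    // a proper JSON parser should be used in production).
    static final Pattern STATUS = Pattern.compile("\"status\"\\s*:\\s*\"([a-z-]+)\"");

    // Count calls in a failure-ish terminal state: failed, busy, no-answer.
    public static long countFailed(String callsJson) {
        Matcher m = STATUS.matcher(callsJson);
        long failed = 0;
        while (m.find()) {
            String s = m.group(1);
            if (s.equals("failed") || s.equals("busy") || s.equals("no-answer")) failed++;
        }
        return failed;
    }
}
```

That failure count could then be reported as a datapoint and alerted on directly.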
  18. Technically just calls, I don't believe the trunk exposes any statistics, that would probably be just ping checks on our endpoints.
  19. Are you looking to see when a trunk has failed, or a call?
  20. On a test run with some enum properties defined, no joy. Maybe it can be done via an undocumented method, like braces or something, or more likely there is just no support for constructing tokens this way. Help, I see a value of ##AUTO.ENUM.FAKEDS.FAKEDP.##VALUE####! Waaaah! I did create the enum mappings as a property source (could also just be manually defined at the root, but using a PS is a lot faster for many items). This will pollute the property list pretty badly when used extensively unless there is an option to keep them hidden in the UI. Enum support really needs to be added to LogicModules directly.
  21. Does anyone have experience getting a dashboard put together for Twilio? The out-of-the-box DataSource doesn't quite have what we need. Specifically I want to look at trunking calls and their statuses, and we'd love to have alerts on failures when they occur. I'm busy (lazy) and would be happy to pay someone to assist me with this if there's an expert out there with some free time. :)
  22. 🤔 ... LogicMonitor's StatusPage allows for webhook integrations. Something could be designed to consume (or scrape) those events. This would require LM to post their planned maintenances, which they do not do. @LogicMonitor --- is there a reason why planned maintenances and portal upgrades are not announced on StatusPage?
  23. Looking at the innards of the service-detector.jar, which is where I think the canonical LMRequest class is defined, you won't be able to do this with an internal web check (scripted or out-of-box) as documented. You would be able to do this with a scripted Datasource though--something similar to this: https://stackoverflow.com/questions/21223084/how-do-i-use-an-ssl-client-certificate-with-apache-httpclient. The libraries listed in the SO thread solution (except for junit, which isn't necessary) are available to the current GA collector. I have not attempted to use the apache httpclient libraries in a scripted internal web check... yet. So if you feel adventurous 😉...
  24. When we have portal upgrades, our API scripts fail hard with a long stream of 500 errors. I am generally aware and let it go, but I would be happy to check a status endpoint prior to API activity and bounce out if an upgrade is in progress.
  25. I'm trying to monitor a curl call with a cert and key. The call would be something like this: curl --key myKey.key --cert myCert.crt myurl.com. A regular internal web check gives me the option of adding a username and password, but I don't see a way to add a key and cert. Would appreciate any help!
  26. I just thought of a hack that might work. In the AD section, generate a bunch of auto properties that look like auto.enum.DSName.DPName.enumValue = enumName (substituting your DS name, DP name and enum value/name pairs, of course). Then in your alert message you may be able to reference ##AUTO.ENUM.DSNAME.DPNAME.##VALUE####. I have not tested whether nested token references work, but if they do, this could take away some of the pain. It might work just as well, with or without AD, to put the definitions into a PropertySource. Complete conjecture at this point, but I will probably whip something up to test as soon as I have some time....
  27. Hello! I've been toying around with the idea of coding up a basic app in the Windows 10 environment that has the following purpose/functionality:

The Goal:
- Produce a simple program that lists all assigned tickets by ticket number in a descending column, with an LED icon of sorts next to each ticket number: RED/GREEN to distinguish whether that ticket has any active alerts.

In Progress / Coding Flow:
- Parse currently assigned tickets that are tracking alerts for various devices. (Ticket system being used is AutoTask.)
- Throw the parsed ticket results into a list/dictionary.
- Scan through the list of ticket references and validate whether all the alerts associated with each ticket are active or clear. (Would like to keep it as a simple boolean logic approach of True/False.) That decides whether the LED icon above is green or red.
- Possibly add a single button that updates the list to show the current state per ticket, or have it refresh in a delayed loop until the app is closed.

My Question / Problem:
- My understanding of LogicMonitor is that searching under the "Alerts" tab with a specific search string value, say the ticket # that the alert was tagged with, will show all alerts associated with that value.
- Knowing that each alert will have its own conditions for clearing (example: value exceeds a defined threshold, so the alert is raised): is there any specific LogicMonitor documentation that may help in checking multiple "LMDXXXXX" numbers and whether each is active/clear?

Thanks in advance for any thoughts!

Project Progress: https://github.com/LavheyM/pyStateOfAlert
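The per-ticket boolean described above reduces to "green only if every alert tagged with the ticket has cleared." A minimal sketch of just that reduction (the data shape is hypothetical; real data would come from LM's REST alert-list endpoint filtered by the ticket number, which is the part the documentation question is about):

```java
import java.util.List;
import java.util.Map;

public class TicketLed {
    // alertsByTicket: ticket number -> "cleared" flags for its LMD alerts.
    // A ticket with no tracked alerts is treated as green.
    public static boolean isGreen(Map<String, List<Boolean>> alertsByTicket, String ticket) {
        List<Boolean> cleared = alertsByTicket.getOrDefault(ticket, List.of());
        return cleared.stream().allMatch(Boolean::booleanValue);
    }
}
```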