Jerry Wiltse


  1. Annie, we've had some exciting developments in our ability to manage our LogicMonitor environments more effectively. Two remaining components that we really need to automate via API calls are: 1. Creating/updating eventsources/datasources 2. Creating/updating reports. I can see with my web-tracing tools that the new UI itself uses REST APIs to perform these operations; those APIs simply aren't published yet. Also, when I try to replicate the same API calls, I get the following error: "The specified HTTP method is not allowed for the requested resource (Method Not Allowed)." I understand the developers have chosen not to publish or permit outside access to these yet for good reason. However, can you please ping them to see where they are on releasing these new APIs (which were never released in the old UI, btw)? Also, can you ask whether it might be possible to start working on at least one of them sooner rather than later? I think reports is ready to go, and it's just a permission thing. It's worth noting that the reports call was easy to figure out and replicate, and I already have code built that could do it, while the datasource update I don't quite understand yet; it was much more complicated and is a slightly lower priority.
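For reference, a minimal sketch of how a signed call against the published REST endpoints looks, assuming the documented LMv1 authentication scheme; the `/report/reports` path, hostname, and credentials below are placeholders, not a confirmed unpublished endpoint:

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id, access_key, verb, resource_path, data=""):
    """Build an LMv1 Authorization header for the LogicMonitor REST API."""
    epoch = str(int(time.time() * 1000))
    # Signature: HMAC-SHA256 over verb + epoch + body + path,
    # hex-encoded, then base64-encoded.
    msg = verb + epoch + data + resource_path
    digest = hmac.new(access_key.encode(), msg.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch}"

# Example usage (placeholder account and credentials):
# headers = {"Authorization": lmv1_auth_header("ID", "KEY", "GET", "/report/reports")}
# requests.get("https://acme.logicmonitor.com/santaba/rest/report/reports",
#              headers=headers)
```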
  2. Hi, Grape is a dependency manager for Groovy scripts. I know LogicMonitor is working on adding a screen for custom JARs that scripts can use. An even more forward-thinking approach would be to support Grape. It actually partially works right now; however, there seem to be a few errors that would still need to be worked out. The benefit would be that DevOps teams could develop Groovy scripts more freely and deploy them to larger environments without the manual process of importing custom JARs, a process which simply does not scale. In my opinion, it's one of the few remaining barriers to LogicMonitor's extensibility and scalability, and it's very close to working. Please investigate and support it.
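To make the idea concrete, Grape resolution is driven by a single annotation; a minimal sketch (it assumes the collector's Groovy runtime can reach a Maven repository at script-load time, which is exactly where the partial failures mentioned above would surface; the artifact is just an example):

```groovy
// Grape fetches the artifact (and its transitive dependencies)
// from a Maven repository when the script is loaded.
@Grab('org.apache.commons:commons-lang3:3.12.0')
import org.apache.commons.lang3.StringUtils

println StringUtils.capitalize('grape resolved this dependency')
```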
  3. The problem: presently, mass-auditing or updating of properties by navigating the tree is a tedious, linear, single-threaded endeavor, and there's no good way to ensure accuracy or completeness. If a grid view could be created showing all hosts and groups with their respective properties, that would be amazing. Out of necessity, I have written an application that uses the LogicMonitor APIs to import/export hosts and their properties to CSV, to get the grid-view functionality for myself, but I would much rather see a native feature. I believe this is a common enough problem that it would significantly benefit most LogicMonitor administrators.
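The export half of that workaround is small; a sketch, assuming the REST device payload shape (an `items` list where each device carries `customProperties`/`systemProperties` as name/value pairs — hedged, since the exact field names may differ by API version):

```python
import csv
import io

def devices_to_rows(devices):
    """Flatten device dicts into grid rows: one column per property name."""
    rows = []
    for d in devices:
        row = {"name": d.get("displayName", "")}
        for p in d.get("customProperties", []) + d.get("systemProperties", []):
            row[p["name"]] = p["value"]
        rows.append(row)
    return rows

def rows_to_csv(rows):
    """Render the rows as CSV with a union of all property columns."""
    fields = sorted({k for r in rows for k in r})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```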
  4. We support over a dozen customers with separate LogicMonitor environments (and counting). We are now building a comprehensive management application to enforce consistency across the environments (custom datasource versioning, standard reports and dashboards, alert rules, email templates, etc.). Our POC version uses the RPC APIs. However, the RPC APIs are extremely difficult to work with for more complicated objects like reports, escalation chains, and dashboards. REST makes MUCH more sense, and it looks like the REST features are coming along. Since there's no published roadmap, can someone please post the present status and priority of the rest of the REST APIs (no pun intended)?
  5. Hi Annie, it's been 3 months since you posted about more integrations. Do you have any updates?
  6. Every ticket system that could use webhooks to open tickets for new alerts generates a ticket number for every new alert. Most provide the ticket number as JSON or XML in the response. Most also support updating and closing those tickets via API, but they require the ticket number as a parameter. A key to more advanced and flexible ticket integration would be for LogicMonitor to have a new field on every alert called "Ticket Number". That field could be populated by parsing the API response, by an SMTP email reply, or manually. The tricky thing is that email and HTTP responses from various ticket systems are diverse in format, so we would at least need regular-expression matching on the HTTP header/body or email subject/body. Ideally, copy the functionality from the datapoint parser, enabling XPath and JSONPath; that would be amazing. Alternatively, you could add your own code to handle each unique integration, but that seems unmanageable. Better to let people script their own parsing, and provide examples in the documentation as a guide for popular systems.
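The parsing layer described above can be quite small; a sketch (the field names, dotted-path syntax, and patterns are illustrative, not any particular ticket system's API):

```python
import json
import re

def extract_ticket_number(body, json_path=None, regex=None):
    """Pull a ticket number out of an HTTP or email response body.

    Tries a simple dotted JSON path first (e.g. "result.number"),
    then falls back to a regex with one capture group.
    """
    if json_path:
        try:
            node = json.loads(body)
            for key in json_path.split("."):
                node = node[key]
            return str(node)
        except (ValueError, KeyError, TypeError):
            pass  # not JSON, or the path is absent; try the regex instead
    if regex:
        m = re.search(regex, body)
        if m:
            return m.group(1)
    return None
```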
  7. Please consider adding other integrations alongside ConnectWise, primarily for ticketing. Most cloud-based ticket systems have a simple API for opening, updating, and closing tickets. They don't all have advanced email parsing, so in those cases a single LogicMonitor event can result in 5 tickets (Open, Ack, Update, Escalate, Clear). I believe one framework could be designed that lets admins specify three standard API calls for any ticket system, defining the company IDs, tokens, URLs, and bodies with the same variables that are available to the emails. The standard calls would be "Open", "Update", and "Close" (there are probably a few more scenarios, but you get the idea). I don't believe it would have to be complicated and "code-like" either. Also, after seeing the ServiceNow integration launch from the collectors, I should state that any such functionality MUST run from the cloud servers rather than the collectors to be viable for most use cases I know of (like the ConnectWise one).
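To make that concrete, the per-system definition could be as small as three templated calls; a sketch where every field name, URL, and ##TOKEN## is invented for illustration (borrowing the token style from the alert email templates):

```json
{
  "open":   { "method": "POST", "url": "https://tickets.example.com/api/incidents",
              "body": { "summary": "##HOST## ##DATASOURCE## ##LEVEL##" } },
  "update": { "method": "PUT",  "url": "https://tickets.example.com/api/incidents/##TICKETNUMBER##",
              "body": { "note": "##MESSAGE##" } },
  "close":  { "method": "PUT",  "url": "https://tickets.example.com/api/incidents/##TICKETNUMBER##",
              "body": { "status": "closed" } }
}
```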
  8. So... I just noticed that there is now a new JAR file in /logicmonitor/agent/lib called "jt400-full-6.0.jar". I tested a DB query with it... and it just hangs. I uploaded the jt400.jar that I identified in my request, updated the wrapper.conf file, restarted the collector, and was able to query successfully. So, if jt400-full-6.0.jar was included based on my request, thank you very, very much. Unfortunately, however, it doesn't seem to work with IBM DB2 on iSeries.
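For anyone else hitting this, the manual workaround is just a classpath entry in the collector's wrapper.conf; a sketch in Java Service Wrapper syntax, where the index and path are examples (the index must be the next unused `wrapper.java.classpath` number in your file):

```
# wrapper.conf fragment; index 25 and the relative path are examples only.
wrapper.java.classpath.25=../lib/jt400.jar
```

Restart the collector service after editing so the wrapper picks up the new classpath.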
  9. It is my understanding that the EventSource functionality is only capable of evaluating: log files local to the collector, event logs on a remote Windows host, SNMP traps, syslog, and IPMI events. The hosts we support with LogicMonitor are multi-platform: many Linux, some Unix, some AIX, some AS/400, some network appliances, which cannot be collectors for either technical reasons or permissions reasons. In simple terms, we would like to request a modification to EventSources that enables parsing of arbitrary log files on remote servers and shows all alerts and results under the remote server (rather than the collector). I believe this would not be that difficult from a development perspective, and it would benefit virtually all clients that have non-Windows hosts. As a workaround for now, we're in the process of developing several Groovy datasource methods which essentially copy log files from remote machines to the collector hard drive on a recurring basis (Expect scripts, PowerShell, etc.). We then plan to make a local-file-based EventSource that parses those files on the collector for alerting. Unfortunately, this is very complex and sub-optimal, and ultimately the alerts and messages will all show up under the collector. That's misleading to our team, and to our clients, who already struggle to understand context. Now that we've done the hard part of making the datasources, we would settle for some kind of dynamic, host-property-based "appears under" functionality on the EventSource configuration. Please let me know if there's something in the pipeline that will resolve this difficulty in some other way, or if you think this would be worthwhile as a core feature addition. If so, please also say whether you think it will realistically make it into the development cycle this year.
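The copy step of that workaround can be sketched with the JSch SSH library (which I believe the collector already bundles for its own SSH collection); the `ssh.user`/`ssh.pass` property names, the remote log path, and the local staging directory are all assumptions for illustration, not LogicMonitor conventions:

```groovy
import com.jcraft.jsch.JSch

// Hypothetical host properties; set these per-host in your own environment.
def host = hostProps.get('system.hostname')
def user = hostProps.get('ssh.user')
def pass = hostProps.get('ssh.pass')

def jsch = new JSch()
def session = jsch.getSession(user, host, 22)
session.setPassword(pass)
session.setConfig('StrictHostKeyChecking', 'no')
session.connect(10000)

// Pull the tail of the remote log into a per-host file on the collector,
// where a local-file EventSource can then parse it.
def channel = session.openChannel('exec')
channel.setCommand('tail -n 500 /var/log/messages')
def out = channel.inputStream
channel.connect()
new File("/usr/local/logicmonitor/agent/logs/remote/${host}.log").text = out.text
channel.disconnect()
session.disconnect()
```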
  10. I request that you add the environment URL to this email, because we manage about 20 client environments now and we can't distinguish whether this is for "…" or "client2.logicmonitor.com". Gerald R. Wiltse | Managed IT Lead ``` LogicMonitor will deploy the latest software release (v.58, collector build 16033 - unchanged from last release) to your account next Thursday, February 12, 2015 from 5:30pm - 8:30pm PT (01:30-04:30 UTC). During that window, there may be up to 10 minutes of disabled account access and delayed alerts. After the release, you may notice some gaps in graphs for places where the collector couldn't check in during that downtime. To review new features and updates included in this version, please read the Release Notes. Should you have any questions concerning this release, please reply to this email or contact us through normal support channels. Thank you, LogicMonitor ```
  11. There is a DB2 driver currently included; however, it is the DB2 UDB driver for DB2 databases running on Unix. The IBM iSeries and AIX platforms require a different driver, which IBM has open-sourced. I have included a link below. I manually tested it in a LogicMonitor datasource by downloading it to our lab collector and adding it to the wrapper.conf, and it works great. However, we have to deploy it to about 40 environments, and I would prefer it not be a separate dependency. If possible, I request that it be added to the standard collection of JDBC drivers.
  12. The output parsing for "jdbc" datasources has a great post-processing mechanism for specifying datapoints in query results. We're doing JDBC queries within our Groovy script datasources, but we don't have the same post-processing options for creating datapoints. In theory we can regex anything, but it would be very nice if we could use the same database-query-result post-processing on Groovy script output. It seems it would be universally applicable to all clients who do this.
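For contrast, this is roughly what the script side looks like today: we print key=value lines and then regex each datapoint out, which is exactly the step the jdbc-style post-processing would replace. The connection details, table, and column names below are placeholders:

```groovy
import groovy.sql.Sql

// Placeholder JDBC connection details (jt400 driver shown as an example).
def sql = Sql.newInstance('jdbc:as400://dbhost/MYLIB', 'user', 'pass',
                          'com.ibm.as400.access.AS400JDBCDriver')
try {
    // Each row becomes a key=value line that datapoints must then regex out.
    sql.eachRow('SELECT METRIC_NAME, METRIC_VALUE FROM PERF_STATS') { row ->
        println "${row.METRIC_NAME}=${row.METRIC_VALUE}"
    }
} finally {
    sql.close()
}
```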
  13. We have our first client that is interested in measuring application performance from multiple locations within their WAN. In simple terms, we want to monitor some hosts from multiple collectors at the same time. Based on my current understanding, this is not really a built-in feature for "internal monitoring", although LogicMonitor does offer this capability for the website monitoring component. I can think of several ways to achieve this for internal hosts, but all of them are clever workarounds with drawbacks. For example, we could create DNS aliases for each web server so that the same host can exist multiple times, each monitored from a different collector. Are there any other suggestions you might have? Is this a common request; is it on the roadmap? Also, we actually have a few situations where we want only one particular datasource to be executed on a remote collector, with the rest executed from a central collector. Any discussion of defining collector per-host, per-datasource as a feature?