About drewwats



  1. Found the original, sorry for the repost. I'll consider it a bump.
  2. I believe I submitted this a while back, but I cannot find the post. There have been countless times I have had to script something to pull data from multiple sources and bring it together to compare or manipulate: for example, querying multiple databases to compare the count of transactions processed at each step in a workflow, or alerting if CPU across more than one server in a cluster is over a threshold. I really hate Groovy scripting, but so far it has been the best way to accomplish this. It sounds very feasible to create a DS that just scrapes data from existing datasources. Please implement this!
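The multi-database comparison described above could be sketched roughly as follows. This is a minimal illustration only, not LogicMonitor functionality; the table, column, and stage names are all invented, and in-memory SQLite stands in for the real databases:

```python
import sqlite3

def make_db(count):
    """Create an in-memory DB standing in for one workflow stage's database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE transactions (id INTEGER)")
    db.executemany("INSERT INTO transactions VALUES (?)",
                   [(i,) for i in range(count)])
    return db

def stage_count(db):
    """Pull the transaction count from one source."""
    return db.execute("SELECT COUNT(*) FROM transactions").fetchone()[0]

intake = make_db(100)  # stand-in for the first workflow stage
settle = make_db(97)   # stand-in for the second workflow stage

# The combined metric neither database can report alone:
backlog = stage_count(intake) - stage_count(settle)
print(backlog)  # 3 transactions stuck between stages
```

A composite datasource would do the same thing declaratively: read two existing datapoints and emit their difference as a new one.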
  3. I would like the ability to set devices as unmonitored (and therefore not incurring charges) at the group level. I am using netscans to discover devices, but we automatically place them into a non-alerting subfolder under the team to which the subnet belongs. Unfortunately, even with alerting disabled, I am unable to avoid the charge for these devices, which may or may not require monitoring. Sometimes devices sit here for a long time before they are moved into an appropriate subfolder and alerting and monitoring are configured.
  4. I use a ton of SQL datasources and have noticed a need for dynamically created datapoints. It would look like a regex datapoint where the matched fields are regex-captured and used as the datapoint name. Currently, I use a regex datapoint like "client=abc,type=medical,countofsomething=(\d+)" to pull the count from a SQL result set like the one below. The addition of "clone datapoint" functionality is noted and appreciated, but this would take it a step further. What I am imagining would allow multiple captures and back-referencing them using \1, $1, or ${1} syntax. The benefit is that creating one datasource and one datapoint would automatically define all datapoints, instead of requiring the tedious manual creation of dozens of points (or more). Hoping this makes sense! Creating dynamic datapoints could look like:
     Datapoint Name: ${1}_${2}
     Regex: client=(\w+),type=(\w+),countofsomething=(\d+)
     Value: ${3}
     Sample result set:
     client=abc,type=medical,countofsomething=123
     client=def,type=medical,countofsomething=456
     client=ghi,type=warranty,countofsomething=789
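The proposed behavior can be demonstrated with ordinary regex captures. This is a sketch of the idea only (Python's `{1}` formatting stands in for the proposed `${1}` syntax); it is not an existing LogicMonitor feature:

```python
import re

# One pattern yields one datapoint per matching row, named from captures.
PATTERN = re.compile(r"client=(\w+),type=(\w+),countofsomething=(\d+)")
NAME_TEMPLATE = "{1}_{2}"   # stands in for the proposed ${1}_${2}
VALUE_GROUP = 3             # stands in for the proposed ${3}

result_set = """\
client=abc,type=medical,countofsomething=123
client=def,type=medical,countofsomething=456
client=ghi,type=warranty,countofsomething=789"""

datapoints = {}
for line in result_set.splitlines():
    m = PATTERN.match(line)
    if m:
        # Pad with None so capture group 1 maps to format index {1}.
        name = NAME_TEMPLATE.format(*([None] + list(m.groups())))
        datapoints[name] = int(m.group(VALUE_GROUP))

print(datapoints)
# {'abc_medical': 123, 'def_medical': 456, 'ghi_warranty': 789}
```

Three datapoints fall out of one pattern; adding a fourth client to the result set would define a fourth datapoint with no manual work.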
  5. I love that graphs are now created automatically as I add datapoints to the datasource (a JDBC datasource in this case). However, when I edit the datasource, the default graph does not appear under Graphs. My datasource has about 75 datapoints, and I would love the ability to clone the default graph and remove the few lines I don't want shown, instead of adding so many lines to create a new graph.
  6. https://www.logicmonitor.com/support/dashboards-and-widgets/widgets/advanced-custom-graph-widget/ According to your page, this is normal behavior, but as shown in the attached image, the default use of the instance name can be quite a hindrance to the usability of the legend. I tried to define the Legend entry, but that value gets pushed off to the far left by the default instance value, which in most cases is useless and in this case is just a duplicate of the datasource name.
  7. In the case of JDBC collection, there are many times when a query runs long due to load on the DB and is killed by LM according to the configured timeout. That action is appropriate, but the resulting "No Data" is mostly undesirable. In the "If there is no data for this datapoint" dropdown, I would like to see an option for "Use Previous Data". I know that in many instances "No Data" is a really bad thing, but in others it is not. Instead of breaking all my dashboard widgets and seeing "NaN" in raw data, I would prefer seeing a limited amount of old data. One to three polling cycles would be a reasonable length of substitution before returning to "No Data" or "not a number" results.
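The "Use Previous Data" behavior requested above amounts to carrying the last known value forward for a bounded number of cycles. A minimal sketch, assuming `None` represents a missed poll; the function name and cap are invented, and this is not LogicMonitor's implementation:

```python
def fill_gaps(samples, max_carry=3):
    """Substitute the previous value for up to max_carry missed polls,
    then fall back to None (i.e. No Data / NaN) for longer outages."""
    filled, last, carried = [], None, 0
    for value in samples:
        if value is not None:
            last, carried = value, 0
            filled.append(value)
        elif last is not None and carried < max_carry:
            carried += 1
            filled.append(last)   # carry the previous value forward
        else:
            filled.append(None)   # gap too long: report No Data honestly
    return filled

polls = [5, 7, None, None, None, None, 9]
print(fill_gaps(polls))  # [5, 7, 7, 7, 7, None, 9]
```

Note the fifth `None` still surfaces as No Data: the cap keeps a genuinely dead source from silently reporting stale values forever.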
  8. Just ran into this today, needed to add a %. I'm also not a fan of the default dash between unit label and bottom label. If I don't want to use a unit label, my bottom label always begins with a dash.
  9. What I'm picturing is a hybrid of the NOC widget and big number widget. The big number (text color or background color) would be in green when not triggering any alert conditions and yellow, orange, or red to match alert level. Alternatively, adding a value column to the NOC widget would accomplish the same goal. Either of these allows the eye to be drawn to the most important things on a dashboard without having to dig for the associated value.
  10. I am familiar with this method, but it doesn't easily accomplish what I'm talking about. And not every customer is proficient in scripting, I'm sure. The idea is to allow what already exist as independent datasources to be combined into a dependent metric without any redundancy of data collection. The uses for this are varied. CPU/memory/volume stats for nodes in a working cluster could be averaged into an overall cluster monitor. A datasource indicating a critical job is filling a message queue, combined with an alert that a consumer service is not running, could be of higher criticality than either of the two pieces of data alone.
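The cluster-average case above is the simplest form of a dependent metric. A hypothetical sketch, assuming per-node CPU readings have already been collected by existing datasources (node names and the 80% threshold are invented):

```python
def cluster_cpu(node_readings):
    """Average already-collected per-node CPU percentages into one
    cluster-level metric; ignore nodes currently reporting No Data."""
    values = [v for v in node_readings.values() if v is not None]
    return sum(values) / len(values) if values else None

# Readings that independent datasources have already gathered:
readings = {"node1": 82.0, "node2": 76.0, "node3": 91.0}
avg = cluster_cpu(readings)
print(avg)  # 83.0

# A dependent alert would fire on the cluster value, not on each node:
alert = avg is not None and avg > 80.0
print(alert)  # True
```

The point of the request is that no new collection happens here: the dependent metric is pure arithmetic over data the platform already has.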
  11. If I had one wish: alert conditions being met could trigger execution of a script. I know this may not be on the near-term radar, but this would be the feature that takes the product to the next level. Fortunately, most of the functionality is already in place; scripts can be run on remote machines as a datasource. Adding the ability to run our own custom Groovy (or whatever) scripts would allow error-recovery actions and self-healing. Just guessing, but many customers may be willing to pay more for this. I just wanted to put this idea out in the forums and see what happens...
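The self-healing idea reduces to "if threshold breached, run a command." A toy sketch of that dispatch logic; the metric, threshold, and recovery command are all invented for illustration, and a real implementation would live inside the monitoring platform, not a standalone script:

```python
import subprocess

def on_alert(metric_value, threshold, recovery_cmd):
    """If the alert condition is met, run the recovery command and
    return its exit code; otherwise take no action."""
    if metric_value > threshold:
        result = subprocess.run(recovery_cmd, capture_output=True, text=True)
        return result.returncode
    return None  # no alert, no action

# Pretend disk usage breached 90% and "recover" with a harmless echo:
rc = on_alert(95, 90, ["echo", "restarting hypothetical service"])
print(rc)  # 0
```

Wiring this to an escalation chain (so a failed recovery still pages a human) would be the hard part; the execution itself is the easy part, as the post says.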
  12. Is there any possibility of creating complex datasources in the future? I would imagine it being similar to complex datapoints from the user perspective and could be even more useful.