All Activity

  1. Past hour
  2. Today
  3. Timezone per user account

    We also have this requirement. Is there any update on these requests?
  4. Could somebody explain how the SLA is calculated for internal service checks? I read this but it didn't seem to apply to the report I created. I have an internal service check scheduled to run a stat test every minute. There were 4 network errors (status = 8) within the last 24 hours. I created a report using the SLA widget to report the 24-hour SLA, and it showed 100% for the past 24 hours. If it's calculated to the second, the calculation would be: (24(hr) x 60(min) x 60(sec) - 4 (errors)) / (24(hr) x 60(min) x 60(sec)) x 100% = 100.00%, which seems to match. So, to confirm, I changed the report config to narrow it down to 2 hours (in which 2 of the errors appeared), which should show an SLA of 99.97% (i.e. (2x60x60-2)/(2x60x60)), but the SLA report still shows 100%. Thanks, Mary
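
    Restating the post's arithmetic (under its assumption that each failed check counts as one second of downtime), the two windows work out to:

    $$\mathrm{SLA}_{24\mathrm{h}} = \frac{24 \times 3600 - 4}{24 \times 3600} \times 100\% \approx 99.995\%, \qquad \mathrm{SLA}_{2\mathrm{h}} = \frac{2 \times 3600 - 2}{2 \times 3600} \times 100\% \approx 99.97\%$$

    The first rounds to 100.00% at two decimal places, so the 24-hour reading is consistent; a reading of exactly 100% over the 2-hour window, however, suggests the errors are not being counted at all.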
  5. Is there a way to get a widget and report set up to report on multiple service checks, specifically reporting on Response Time for ping or http/https checks from the various service check locations (Washington, Dublin, etc.)? For example, a widget to show the top 10 or 25 service checks in terms of response time (show the service checks with the highest response times, to let us quickly pinpoint which environments are potentially experiencing slow performance). For reports, a report of all service checks and response times would be good to have, to again help pinpoint problem customers, or even to report back to them on the performance of their environments. I found a similar feature request that is a year old, and there was no response to it.
  6. ESX Host Monitoring

    Looking for a datasource to monitor ESX hosts, specifically for when a failover occurs on an ESX host in our vCenters. I have gone through all the VMware_vSphere_(Datasources) and do not see anything that will alert when that failover occurs. Any suggestions?
  7. Hi Mosh, We were using Active Discovery with PowerShell to discover MSMQ queues on each server. This in itself was not an issue, but checking each queue meant that every queue found in discovery needed its own connection to the server for whatever we were looking for (i.e. age of oldest message, or subscription verifications). With only a few queues this may not be much of an issue, but with our 125+ queues on a server, and each request requiring an open connection from the LogicMonitor collector, we found that this could overwhelm the CPU resources on a server fairly quickly (given the right scenario). The solution was pretty simple: we removed the Active Discovery script and combined it into the collector script ("Embedded PowerShell Script"), returning the data as JSON. This allowed all 125+ queues to return information over 1 connection in under 2 seconds, whereas before each of the 125+ queues would fire a script that connected to the server; each took less than a second, but 125+ connections were spawned over the duration of the 5-minute monitoring cycle. A minimal sketch of the combined approach is shown below.
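
    A rough sketch of that combined collector script, assuming PowerShell remoting is enabled on the target; the server name and the "age of oldest message" check are placeholders to adapt:

    ```powershell
    # One remote session gathers stats for every private MSMQ queue and returns
    # them as a single JSON payload, instead of one connection per discovered queue.
    $server = 'app-server-01'   # hypothetical target

    $results = Invoke-Command -ComputerName $server -ScriptBlock {
        Add-Type -AssemblyName System.Messaging
        [System.Messaging.MessageQueue]::GetPrivateQueuesByMachine($env:COMPUTERNAME) |
            ForEach-Object {
                # Ask for ArrivedTime explicitly; it is off in the default read filter.
                $_.MessageReadPropertyFilter.ArrivedTime = $true
                $oldestAge = $null
                try {
                    # Peek is non-destructive and times out quickly on an empty queue.
                    $msg = $_.Peek([TimeSpan]::FromMilliseconds(100))
                    $oldestAge = [math]::Round(((Get-Date) - $msg.ArrivedTime).TotalSeconds)
                } catch [System.Messaging.MessageQueueException] {
                    # Empty queue; leave the age null.
                }
                [pscustomobject]@{ Title = $_.QueueName; Value = $oldestAge }
            }
    }

    # Emit everything in one payload for the collector to parse.
    $results | Select-Object Title, Value | ConvertTo-Json
    ```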
  8. Yesterday
  9. Hello everyone, I have problems when monitoring Windows Server 2003: the collector always tells me that it cannot connect to port 135, but when testing locally with tools like WBEMTEST or PORTQRY, the WMI service seems to be working properly. Please help me verify whether this problem is general to Windows Server 2003 or whether I must do something else. Thanks
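
    Two quick checks worth running from the collector host rather than the target, since local tools prove the service is up but not the network path between the two machines (the hostname and credentials below are placeholders, and Test-NetConnection needs PowerShell 4+):

    ```powershell
    # Can the collector host reach the RPC endpoint mapper on the target?
    Test-NetConnection -ComputerName 'win2003-host' -Port 135   # 'win2003-host' is a placeholder

    # Does an actual remote WMI query succeed from the collector host?
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName 'win2003-host' `
        -Credential (Get-Credential)
    ```

    If port 135 is reachable locally but not from the collector, a firewall in between is the likely culprit (DCOM also needs the dynamic RPC port range, not just 135).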
  10. Note: As of the publication of this article, collector 25.000 is a GD (optional general release), which means this article will become obsolete as newer versions are released.

    In the past 2-3 months I had two cases where an error occurred when an Internal Service Check targeted a website authenticated with NTLM via ADFS. The error seemed odd, with a message of: or, in the detailed response, it can be seen as: <title>401 - Unauthorized: Access is denied due to invalid credentials.</title> regardless of whether the credentials (username, password) set in the Service Check configuration are correct.

    Based on the design by the Product & Development team, in previous collector versions (before 24.300) the error is "normal": the URL of the request origin differs from the authentication URL (which in this case is the ADFS URL), and the collector does not pass the credentials to the authentication server, which makes the process fail. Fortunately, with the arrival of version 25.000 this has all changed, and redirected authentication is now supported, as explained in this document: (see "General Deployment Collector - 25.0")

    It is evident from my little test, which you can also see in the screen recording below: website to check: http://admin.lmglobalsupport.com (redirected to http://pk.lmsupportteam.com); ADFS authentication: https://fspk.lmsupporteam.info

    The following are additional screenshots of the location in IIS (which I used for my test) where the HTTP redirection is configured: Here is a preview of the website authentication in a browser:
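
    For anyone who wants to reproduce the check outside the collector, a rough sketch in Windows PowerShell (the URL is the test site above; swap -UseDefaultCredentials for an explicit -Credential as needed):

    ```powershell
    try {
        $resp = Invoke-WebRequest -Uri 'http://admin.lmglobalsupport.com' `
            -UseDefaultCredentials -MaximumRedirection 5
        "HTTP $($resp.StatusCode)"
    } catch [System.Net.WebException] {
        # A 401 here mirrors the pre-25.000 collector failure described above.
        "HTTP $([int]$_.Exception.Response.StatusCode)"
    }
    ```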
  11. Hi Jason, Re "we found that Active Discovery has the potential to generate too many connections to servers", may I ask what it is that you're discovering (the instances)? Are they things that need to be discovered in one go, or could they be discovered one by one, or in batches?
  12. Please add number formatting for the Big Number widget. We need thousand and million separators for readability: 12345 should be 12,345; 1234567 should be 1,234,567; and so on.
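
    For reference, the grouping being requested is standard "N0" number formatting; for example, in PowerShell (output shown for an en-US locale):

    ```powershell
    '{0:N0}' -f 1234567      # -> 1,234,567
    (12345).ToString('N0')   # -> 12,345
    ```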
  13. Last week
  14. Please add an option to delete all defined DataPoints in a DataSource module. At the moment, when we clone a DS we have to remove DataPoints one by one, which is very time-consuming (3 mouse clicks per delete!).
  15. Hi, We think it would be a good idea if, whenever a netscan runs, it picked up the devices newly discovered in a subnet. It would be a nice addition for those devices to be included in the email that is sent out on a completed netscan run. That way we can monitor what lives in a particular subnet. Maybe we could also add the devices that went down to the list. Br, Stijn
  16. Hi emmablisa, Thanks for your questions. There are a few differences between inheritance and composition which lend themselves to slightly different use cases. Currently in our environment we use a combination of both class inheritance and composition to try to get the most out of each.

    Inheritance: There are 2 kinds of inheritance in (certain versions of) Puppet: node inheritance and class inheritance. As of Puppet 4.0.0, node inheritance has been removed from the Puppet language, so I'm going to skip that one. Class inheritance has the following effects: implicit declaration of the base class first when the child class is declared, and the base class becomes the parent scope, providing access to all of the base class's variables and resource defaults - this also allows for overriding the values and defaults.

    ```puppet
    # This code is based on the logicmonitor puppet module
    include logicmonitor::collector  # implicitly includes the logicmonitor base class

    # you can override the base class credentials
    class { 'logicmonitor::master':
      account => 'accountname',
    }

    # the next examples show how the above classes are defined
    class logicmonitor (
      $account    = undef,
      $access_id  = undef,
      $access_key = undef,
    ) { ... }

    class logicmonitor::collector (
      $install_dir      = '/usr/local/logicmonitor/',
      $agent_service    = 'logicmonitor-agent',
      $watchdog_service = 'logicmonitor-watchdog',
    ) inherits logicmonitor { ... }

    class logicmonitor::master (
      $account    = $logicmonitor::account,
      $access_id  = $logicmonitor::access_id,
      $access_key = $logicmonitor::access_key,
    ) inherits logicmonitor { ... }
    ```

    In addition to the usage in modules like this, we use inheritance to reduce the redundancy of code in the role/profile framework documented here.

    Composition: This is a way to establish an ordering relationship while avoiding duplicate declaration of resources. If 2 classes within a profile have a dependency on the same class, you can run into duplicate resource declaration errors.

    ```puppet
    # Both classes are tomcat apps.
    class app1 { include tomcat::tomcat7_0_52 }
    class app2 { include tomcat::tomcat7_0_52 }

    # This will result in duplicate declaration
    class { ['app1', 'app2']: }

    # vs

    class app1 { Class['tomcat::tomcat7_0_52'] -> Class['app1'] }
    class app2 { Class['tomcat::tomcat7_0_52'] -> Class['app2'] }
    class { ['app1', 'app2']: }
    # This is ok, but requires that the node has the tomcat::tomcat7_0_52
    # class declared elsewhere.
    ```

    Hopefully this answers your questions. ~Ethan
  17. Alerts Based on Data Over Time

    Agreed, this would be nice to have.
  18. What are the major differences between Puppet inheritance and Puppet composition? I just came across Puppet inheritance lately. A few questions around it: 1. Is it good practice to use Puppet inheritance? I've been told by some experienced Puppet colleagues that inheritance in Puppet is not very good, but I was not quite convinced. 2. Coming from the OO world, I really want to understand, under the covers, how Puppet inheritance works, and how overriding works as well.
  19. HPE 3Par RCopy

    Published @ NHHDFL
  20. Alerts Based on Data Over Time

    upvote!
  21. While working on optimizing PowerShell scripts for LogicMonitor, we found that Active Discovery was great for some applications. However, when it came to PowerShell invoking commands (running scripts on servers), we found that Active Discovery has the potential to generate too many connections to servers. The answer we arrived at was doing everything in one script and returning it all in a JSON response. This worked significantly better than dynamic Active Discovery, but had one drawback: the data points had to be entered manually. My suggestion is that LogicMonitor modify data points to allow references into the JSON response. Meaning, we would set one instance of a DataPoint with a Name field that indicates the JSON path to an array with all of the instances, and the Post Processor could be pointed to the corresponding JSON path for the value of each instance.

    JSON example:

    ```json
    [
      { "Title": "Name of an Instance", "Value": 1 },
      { "Title": "Name of another Instance", "Value": 2000 }
    ]
    ```

    The DataPoint would look something like this:

    Name            Metric Type  Raw Metric  Post Processor   Alert Threshold
    json(Title[*])  gauge        output      json(Value[*])   != 1

    The results would create instances on a graph just as if you had typed them out normally:

    "Name of an Instance": 1
    "Name of another Instance": 2000

    I believe this would be more efficient and allow us to still be dynamic. Thanks, Jason Wainwright
  22. Dynamic Group Permissions

    Request to have the ability to enable/disable "Dynamic Group Permissions" at a folder level.
  23. Earlier
  24. Time Zone setting at the User level

    This is my number one issue. Is this being worked on? Can anyone from LM update us, please?
  25. Timezone per user account

    Is LM even working on this? Or should I be looking for alternatives before next renewal?
  26. Hi LM team, I noticed that I can create a Custom Email Delivery Integration within LM to have custom text. This works really well for doing different alerts for each of my customers (which can vary wildly). The only problem I noticed is that the warn/error/critical emails look almost identical to the "clear" emails. I would really like to have a paired integration for clears. That way, I can create custom text for the customer for when something is alerting, and something just as custom for when it stops alerting. It can be confusing for customers to differentiate the alerts from the clears. Thanks! Paul
  27. Alerts Based on Data Over Time

    This feature is needed. I have the same scenarios as described in previous posts.
  28. Timezone per user account

    Any updates from LM team?
  29. Time Zone setting at the User level

    Any updates on this topic? I've created the same request, and there's still silence.
  30. Yes, I too have to contemplate implementing an intermediate service where emails go to a "middle man" system to be parsed and then actions taken. It's tempting to build something in-house, but I'm looking at off-the-shelf options. In the past I would have done this in Netcool using the Impact module, or filtered inbound events using algorithms in Managed Objects.