Search the Community

Showing results for tags 'AWS'.

Found 24 results

  1. As a global business we have infrastructure in China and we also use cloud services in China. Please add support for AWS China. China is a huge market for us.
  2. Is there any way to use the AWS Unified CloudWatch Agent and CloudWatch Logs to load EC2 metrics into LogicMonitor, similar to what is pulled by the LogicMonitor collector? A little background: we currently use two LogicMonitor collectors to gather Windows metrics from AWS EC2 instances and from physical and virtual servers in our on-prem datacenter. This has worked well for the 100 Windows servers and network devices we are monitoring. We are moving all of our on-prem Windows applications to AWS, which requires that we change the way we use AWS (take a look at AWS Control Tower for more details). Instead of one AWS account and one VPC (network), we are moving every application and environment (PROD, UAT, DEV, STAGE) into a different AWS account. There is no shared networking, so VPC peering or using a TGW isn't an option. This will result in about 20 new AWS accounts, and up to 100 by the end of the year. Each of these accounts will have 2 to 20 EC2 Windows instances for web/application, along with one EC2- or RDS-based Microsoft SQL Server instance. I have no desire to pay for an EC2 Windows instance in every account (possibly hundreds) just to run the LogicMonitor collector. The most important Windows server monitoring features we rely on in LogicMonitor boil down to: server is down (ping loss), low disk space, high CPU utilization, the ability to alert on Windows Event Log entries, and Windows service X is not running. Then tack on monitoring of EC2 and RDS Microsoft SQL Server performance data. Does LogicMonitor have any recommendations or examples for this type of configuration? Thanks in advance, Kevin Foley
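     For reference, a minimal boto3 sketch of pulling agent-published metrics straight from the CloudWatch API rather than via a per-account collector. This assumes the Unified CloudWatch Agent's default "CWAgent" namespace; the instance IDs and metric names are placeholders, and cross-account access (e.g. an assumed role per member account) is left out.

     ```python
     def build_metric_queries(instance_ids, metrics, namespace="CWAgent", period=300):
         """Build a CloudWatch GetMetricData query list, one entry per
         instance/metric pair. Ids must be unique and start with a letter."""
         queries = []
         for n, instance_id in enumerate(instance_ids):
             for m, metric in enumerate(metrics):
                 queries.append({
                     "Id": f"q{n}_{m}",
                     "MetricStat": {
                         "Metric": {
                             "Namespace": namespace,  # CWAgent is the agent's default
                             "MetricName": metric,
                             "Dimensions": [
                                 {"Name": "InstanceId", "Value": instance_id}
                             ],
                         },
                         "Period": period,
                         "Stat": "Average",
                     },
                 })
         return queries

     def fetch(queries, start, end, region="us-east-1"):
         """Not executed here: requires boto3 plus credentials valid in the
         target account (for many accounts, assume a role per account)."""
         import boto3
         cw = boto3.client("cloudwatch", region_name=region)
         return cw.get_metric_data(MetricDataQueries=queries,
                                   StartTime=start, EndTime=end)
     ```

     Scaled across accounts, this trades a collector VM per account for CloudWatch API calls, so the per-call CloudWatch cost is the thing to watch.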
  3. I have been using a custom datasource to collect the metrics for each resource and method (excluding OPTIONS) behind an API Gateway stage. It has been extremely useful in our production environments. I would share the datasource via the Exchange, but the discovery method I'm using will not be universal, so I think it would be best if that discovery were to work natively. If possible, could we please have a discovery method for AWS API Gateway Resources by Stage? Something to note: this has the potential to discover quite a few resources and thus create a substantial number of CloudWatch calls, which might hit customer billing. For this reason, I added a custom property ##APIGW.stages## so that I could plug in the specific stages I wish to monitor instead of having each one automatically discovered. The Applies To looks like this: == "AWS/APIGateway" && apigw.stages Autodiscovery is currently written in PowerShell (hence why not everyone can take advantage of it):

     $apigwID = ''; $region = ''
     $stages = '##APIGW.Stages##'
     $resources = Get-AGResourceList -RestApiId $apigwID -region $region
     $stages.split(' ') | %{
         $stage = $_
         $resources | %{
             if ($_.ResourceMethods) {
                 $path = $_.Path
                 $_.ResourceMethods.Keys | where{ $_ -notmatch 'OPTIONS' } | %{
                     $wildvalue = "Stage=$stage>Resource=$Path>Method=$_"
                     Write-Host "$wildvalue##${Stage}: $_ $Path######auto.stage=$stage"
                 }
             }
         }
     }
  4. I've been creating datasources to collect our custom AWS CloudWatch metrics as per the docs; mainly this is fine. However, the collector can't cope with dimensionless metrics: "Namespace>Dimensions>Metric>AggregationMethod, where Dimensions should be one or more key value pairs". I've tried creating datapoints without a dimension, but it returns NaN (probably because LM requires "one or more key value pairs" for dimensions). We currently use a Python script to collect most of our custom metrics, but it's resource intensive for our collectors and I'm trying to move away from it. Does anyone know of a way to use the 'AWS CLOUDWATCH' collector with dimensionless metrics?
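     As a point of comparison, the CloudWatch API itself retrieves a dimensionless metric by simply omitting Dimensions from the request. A small sketch of building the GetMetricStatistics arguments that way (namespace and metric names are placeholders):

     ```python
     def build_stats_request(namespace, metric, dimensions=None, period=300):
         """Build kwargs for CloudWatch get_metric_statistics. For a metric
         that was published without dimensions, the Dimensions key must be
         omitted entirely; passing a non-matching dimension set returns no
         datapoints (which LM would render as NaN)."""
         kwargs = {
             "Namespace": namespace,
             "MetricName": metric,
             "Period": period,
             "Statistics": ["Average"],
         }
         if dimensions:  # only include when the metric actually has dimensions
             kwargs["Dimensions"] = dimensions
         return kwargs

     def fetch(kwargs, start, end, region="us-east-1"):
         """Not executed here: requires boto3 and AWS credentials."""
         import boto3
         cw = boto3.client("cloudwatch", region_name=region)
         return cw.get_metric_statistics(StartTime=start, EndTime=end, **kwargs)
     ```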
  5. It looks like someone in my org had enabled "Enhanced Monitoring" for several AWS RDS instances -- a surprise, to be sure, but a welcome one. I would love a Cloud Collector method that can consume this data and display it alongside all the other metrics we are collecting in LogicMonitor. Implementation should be relatively simple: in the discovery, presumably using describe-db-instances, we would just need a property for the "DbiResourceId", which can then be used with get-log-events.
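     A hedged sketch of the flow described above, assuming Enhanced Monitoring's usual layout: RDS writes JSON events to the CloudWatch Logs group "RDSOSMetrics", one log stream per DbiResourceId. The instance identifiers in the test are examples.

     ```python
     import json

     def dbi_resource_ids(describe_response):
         """Map DB instance identifier -> DbiResourceId from a
         describe_db_instances response dict (the discovery step)."""
         return {db["DBInstanceIdentifier"]: db["DbiResourceId"]
                 for db in describe_response.get("DBInstances", [])}

     def latest_os_metrics(resource_id, region="us-east-1"):
         """Not executed here: requires boto3 and credentials. Reads the
         newest Enhanced Monitoring event for one instance and parses its
         JSON payload (cpuUtilization, memory, diskIO, etc.)."""
         import boto3
         logs = boto3.client("logs", region_name=region)
         events = logs.get_log_events(logGroupName="RDSOSMetrics",
                                      logStreamName=resource_id,
                                      limit=1)["events"]
         return json.loads(events[0]["message"]) if events else None
     ```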
  6. Got an issue where we're getting a status "5" and can't connect over HTTPS (443) to a web server on AWS EC2, yet we can connect to the HTTPS site from a public browser and with curl from the collector without issue. Running diagnostics through LM shows a timeout. The "Poll Now" data looks fine, except that Normal Datapoints shows errors, and these errors cannot be confirmed.
  7. GX2WXT A single Lambda function might have several versions. The default Lambda datasource monitors and alerts on the aggregate performance of each Lambda function. Using the Alias functionality in AWS, this datasource returns CloudWatch metrics specifically for the versions to which you have assigned aliases, allowing you to customize alert thresholds or compare performance across different versions of the same function. This datasource does not automatically discover aliases and begin monitoring them (as this could very quickly translate into several Aliases being monitored and drive up your CloudWatch API bill). Instead, add only the Aliases you want monitored by adding the device property "lambda.aliases" either to individual Lambda functions or at the group level if you're using the same Alias across several lambda functions. To add more than one, simply list them separated with a single space - e.g: "Prod QA01 QA02". If an alias does not exist, no data will be returned. This datasource is otherwise a clone of the existing AWS_Lambda datasource with the default alert thresholds.
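     The per-alias lookup described above can be illustrated with a small helper that expands the space-separated "lambda.aliases" property into CloudWatch dimension sets. This assumes Lambda's per-alias metrics use the Resource dimension in the "function:alias" form alongside FunctionName; the function and alias names are examples.

     ```python
     def alias_dimensions(function_name, aliases_property):
         """Expand the space-separated lambda.aliases property (e.g.
         "Prod QA01 QA02") into one CloudWatch dimension set per alias.
         Per-alias Lambda metrics carry Resource="<function>:<alias>"."""
         return [
             [{"Name": "FunctionName", "Value": function_name},
              {"Name": "Resource", "Value": f"{function_name}:{alias}"}]
             for alias in aliases_property.split()
         ]
     ```

     Each returned dimension set would back one discovered instance, so only the aliases listed in the property generate CloudWatch API calls.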
  8. AWS lists that the Listener limit is on a per ELB basis. The AWS_ClassicELB_ServiceLimits datasource seems to intimate that the ListenerUsage is returning the total number of listeners for the given region. Is this useful information to capture on a regional basis or should this be refactored to apply to each classic ELB?
  9. I know that it is not exactly recommended or reliable to use a 1 GB / 1 CPU core machine for monitoring... but it seems that installing a "nano" sized collector on a t2.micro AWS instance and having it just monitor itself brings the AWS instance to a screeching halt. When the collector is running, top shows that CPU pegs at 100% almost nonstop. Memory is not hit quite as badly, but it does climb to 500 MB+. The CPU load average is 5+ and it makes the system unusable. Sometimes this causes the instance to throw status alerts and even crash. Question: has anybody been able to tweak wrapper.conf and related files to make the collector's CPU load less demanding?
  10. I would like to see the cloud collector for AWS utilize all of the relevant data in the cost and usage reports provided by Amazon. For example, I just enabled billing monitoring for an AWS account, and the billing bucket contains reports for every day in June, but the collector just calculated the month-to-date total costs as of today, though it could have calculated the month-to-date total costs for each day in June and populated the graphs accordingly. It also ignored the reports for May.
  11. Earlier this evening I used the REST API to create an AWS group containing an account device for the purposes of billing monitoring. I didn't specify any services to monitor; I only provided a billing bucket and report path prefix. (Side note: the example under (2) at is out of date.) Nearly four hours later, LogicMonitor has discovered a handful of instances for each of the Cost By Service, Cost By Operation, and Cost By Region data sources. It has yet to discover an instance for the Cost By Account data source. (Those four comprise what comes with billing monitoring.) The billing bucket had report files waiting to be retrieved as soon as the account device was created. I have the active discovery schedule for each of the aforementioned data sources set to 15 minutes, though that shouldn't matter because, as stated previously, I force active discovery upon device creation. I have the polling frequency for one of the four data sources set to 1 minute for testing purposes, with the rest set to 1 hour, but that shouldn't matter either, because surely instances are polled at least once when they are discovered, regardless of the frequency setting (?).
  12. We have some custom CloudWatch metrics we'd like to gather, display, and alert on; however, they aren't specific to any host or AWS resource. If we push application-specific metrics into CloudWatch, we don't want them tied to any specific application host, since hosts/instances can come and go. We push these metrics to CloudWatch without an associated EC2/AWS resource and can see them in CloudWatch, but without an InstanceId we cannot pull these metrics into LogicMonitor. We'd like to be able to pull in and organize any metric from CloudWatch, regardless of whether its dimensions include an InstanceId.
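     On the publishing side, the pattern described above looks roughly like this sketch: the datum carries no InstanceId, and an optional application-level dimension replaces the host binding. Namespace, metric, and dimension names here are placeholders.

     ```python
     def app_metric(name, value, unit="Count", app=None):
         """Build a PutMetricData entry that is not tied to any EC2
         instance; optionally tag it with an application-level dimension
         (survives hosts coming and going)."""
         datum = {"MetricName": name, "Value": value, "Unit": unit}
         if app:
             datum["Dimensions"] = [{"Name": "Application", "Value": app}]
         return datum

     def publish(namespace, data, region="us-east-1"):
         """Not executed here: requires boto3 and AWS credentials."""
         import boto3
         boto3.client("cloudwatch", region_name=region).put_metric_data(
             Namespace=namespace, MetricData=data)
     ```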
  13. I’m not sure if your product group has this on their radar, but for AWS in particular it would be very useful to have a “keep display name in sync with AWS name tag” checkbox that is selected for AWS sync since it is possible for AWS names to get changed and no longer match what’s in LM. Azure doesn’t seem to have this problem (I don’t think Google would have this issue either). In order to deal with this, I ended up writing this script (which can be scheduled), but hopefully it can just be added to the product:
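     The poster's script isn't included in the scrape, but a sync like this would hinge on authenticating to the LogicMonitor REST API. A sketch of building the documented LMv1 Authorization header (the rename call itself is shown only as an uncalled, hypothetical wrapper; portal URL and device fields must be filled in):

     ```python
     import base64
     import hashlib
     import hmac
     import time

     def lmv1_auth(access_id, access_key, verb, path, body="", epoch_ms=None):
         """Build a LogicMonitor LMv1 Authorization header: base64 of the
         hex HMAC-SHA256 over verb + epoch + body + resourcePath."""
         epoch = str(epoch_ms if epoch_ms is not None else int(time.time() * 1000))
         msg = verb + epoch + body + path
         digest = hmac.new(access_key.encode(), msg.encode(),
                           hashlib.sha256).hexdigest()
         sig = base64.b64encode(digest.encode()).decode()
         return f"LMv1 {access_id}:{sig}:{epoch}"

     def sync_display_name(device_id, new_name):
         """Hypothetical, not executed here: PATCH /device/devices/{id}
         with {"displayName": new_name} against your portal's REST API,
         sending the header built above."""
     ```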
  14. Hello, at the moment the documentation for adding an AWS datasource almost seems to imply that you can filter any type of service (be it EC2, S3, SQS, etc.) by name using the tag name/value system, but in actuality it only filters services that literally support the tag feature. I'm asking for a simple addition to the AWS datasource so that you can filter, for example, SQS by some regex, e.g. regex name = foo, regex value = *foo*, thereby causing the autodiscovery to only show *foo* SQS queues. This is a bit of a deal breaker for people who have many SQS queues but where some of them are just test queues which don't need to be monitored. Thanks.
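     The requested filter amounts to a regex match on the queue name, which is the last segment of the queue URL. A minimal sketch (queue URLs in the test are made up):

     ```python
     import re

     def filter_queue_urls(queue_urls, pattern):
         """Keep only queues whose name (last URL segment) matches the
         given regex, e.g. pattern="foo" keeps *foo* queues."""
         rx = re.compile(pattern)
         return [u for u in queue_urls if rx.search(u.rsplit("/", 1)[-1])]

     def discover(pattern, region="us-east-1"):
         """Not executed here: requires boto3 and AWS credentials."""
         import boto3
         urls = boto3.client("sqs", region_name=region).list_queues() \
                     .get("QueueUrls", [])
         return filter_queue_urls(urls, pattern)
     ```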
  15. Please add native support for discovery of ECS metrics by ClusterName AND ServiceName dimensions!
  16. HCPFGA The default LogicMonitor datasource names the instances in a strange way and then alerts for events that have already completed. I've added a better instance naming convention that clearly identifies the event that will occur and when. I also put in logic to detect if the scheduled event has already taken place to prevent unnecessary alerting.
  17. MZMPR6 Every 5 minutes, this datasource will query ElasticSearch for a list of the top 20 API callers as identified by the "userIdentity.sessionContext.sessionIssuer.userName" identity. This should return a list of users that are running under automation as opposed to user accounts. This will also return the number of calls that are being throttled by AWS as outlined here: Use this datasource to improve any code you have running in AWS that relies on API calls outside of cloudwatch. Suggestions on how to improve this datasource are welcome! We've already used it to find a number of issues and identify code that was not adhering to AWS's API request guidelines. To Apply this datasource, assign the property "CloudTrailES.URL" to your Elasticsearch instance that services cloudtrail log requests. The value of this property should be the search URL endpoint for that Elasticsearch instance (e.g. "")
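     The query behind a datasource like this would be a terms aggregation bucketed on that identity field. A sketch of the request body under assumed defaults (a standard @timestamp field, and that the field is keyword-mapped; some mappings would need a ".keyword" suffix):

     ```python
     def top_callers_query(size=20, minutes=5,
                           field="userIdentity.sessionContext.sessionIssuer.userName"):
         """Elasticsearch query body: top API callers over the last
         window, bucketed by the session-issuer user name."""
         return {
             "size": 0,  # only the aggregation buckets, no raw hits
             "query": {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
             "aggs": {"top_callers": {"terms": {"field": field, "size": size}}},
         }
     ```

     POSTing this to the index's _search endpoint returns one bucket per caller with a doc_count, which maps naturally onto one discovered instance per caller.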
  18. Today, the system.displayname property for an AWS device is not automatically updated when the "Name" property in AWS is changed. It would make it easier to see which devices we are actually monitoring if the display names in LogicMonitor were synced with the current "Name" property in AWS.
  19. Monitors EBS volume IO burst credits (measured in Percentage) and triggers a warning alert below 10%. JAPZ69
  20. - Token to name a device based on AWS tag
      - The ability to exclude devices based on region
      - All AWS tags should be imported/populated/inherited
      - Netscan should populate the property instead of system.ec2.resourceid. The former is actually used by the default AWS_EC2 datasource. Probably moot if my third bullet is done.
  21. Today we were flooded with hundreds of alerts in our alert dashboard. AWS was having an issue in the ap-southeast-1 region with launching new instances. The "AWS Service Health" datasource found this issue and then alerted on it for each instance & ebs volume we had in that region. That was too many alerts, especially since the issue wasn't with our existing devices in that region. I would like this alert to happen on the AWS Device Group itself - per region, so that we can know about it, but it won't generate an excessive number of alerts with the same exact information.
  22. While setting up a NetScan policy and choosing AWS, can we rely on the IAM instance role of the collector instance (when the collector is an EC2 instance running in AWS) instead of hardcoding an AWS AccessKey/SecretKey in the NetScan policy?
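     For context, this is how the AWS SDKs already behave: when no explicit keys are supplied, boto3's default credential chain falls through to the EC2 instance profile. A sketch of building client arguments so keys are only passed when actually configured (region and key names are placeholders):

     ```python
     def client_kwargs(region, access_key=None, secret_key=None):
         """Return boto3 client kwargs. When no keys are given, the
         default credential chain applies, which on an EC2-hosted
         collector resolves the IAM instance role automatically."""
         kwargs = {"region_name": region}
         if access_key and secret_key:
             kwargs["aws_access_key_id"] = access_key
             kwargs["aws_secret_access_key"] = secret_key
         return kwargs

     def make_ec2_client(region):
         """Not executed here: requires boto3; with no keys passed, this
         uses the instance role on an EC2 collector."""
         import boto3
         return boto3.client("ec2", **client_kwargs(region))
     ```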
  23. Two very useful metrics to monitor on our EC2 instances are CPUCreditUsage (the number of CPU credits consumed during the specified period; this metric identifies the amount of time during which physical CPUs were used for processing instructions by virtual CPUs allocated to the instance) and CPUCreditBalance (the number of CPU credits that an instance has accumulated; this metric is used to determine how long an instance can burst beyond its baseline performance level at a given rate). Close monitoring of these enables us to spot a T2 instance that is using more CPU than it should over time. Monitoring CPU usage alone is not enough, because once your credit balance is 0 the usage is throttled by Amazon to a baseline level. In other words, if your credit balance reaches 0, the CPU utilisation will drop to 10, 15, or 20% depending on the instance type, making it impossible to alert on CPU usage of 100%.

      AWS/EC2>>CPUCreditBalance
      AWS/EC2>>CPUCreditUsage

      I added these manually and they work fine; it would be nice if these were included in the default LogicModule.
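     The alerting logic the post argues for can be expressed as a small classifier over the two datapoints: a zero balance means the instance is already throttled, and a low balance with ongoing usage means throttling is imminent. The threshold value here is an illustrative assumption, not an AWS or LogicMonitor default.

     ```python
     def credit_state(balance, usage_per_interval, warn_at=25.0):
         """Classify a T2 instance's CPU-credit trend.
         balance: latest CPUCreditBalance; usage_per_interval: latest
         CPUCreditUsage. warn_at is an assumed warning threshold."""
         if balance <= 0:
             return "throttled"  # CPU is pinned at the baseline rate
         if usage_per_interval > 0 and balance < warn_at:
             return "draining"   # burst capacity is about to run out
         return "ok"
     ```

     Alerting on "draining" catches the runaway-CPU case that a plain CPU-utilisation threshold misses once throttling kicks in.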
  24. Hello, At the moment, when you've added an AWS Account, it will show all the services you've asked for (e.g., all SQS queues). Is it possible to have a feature where you can select a subset of these queues after they are discovered? Clearly they are stored somewhere (via a datapoint's metric path:, but there doesn't seem to be a way of changing them. Thanks.