Search the Community

Showing results for tags 'aws'.




Found 18 results

  1. Earlier this evening I used the REST API to create an AWS group containing an account device for the purposes of billing monitoring. I didn't specify any services to monitor; I only provided a billing bucket and report path prefix. (Side note: the example under (2) at https://www.logicmonitor.com/support/rest-api-developers-guide/device-groups/aws-device-groups/ is out of date.) Nearly four hours later, LogicMonitor has discovered a handful of instances for each of the Cost By Service, Cost By Operation, and Cost By Region datasources. It has yet to discover an instance for the Cost By Account datasource. (Those four comprise what comes with billing monitoring.) The billing bucket had report files waiting to be retrieved as soon as the account device was created. I have the active discovery schedule for each of the aforementioned datasources set to 15 minutes, though that shouldn't matter because I force active discovery upon device creation. I have the polling frequency for one of the four datasources set to 1 minute for testing purposes, with the rest set to 1 hour, but that shouldn't matter either, because surely instances are polled at least once when they are discovered, regardless of the frequency setting (?).
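For anyone scripting device-group creation against the REST API, the LMv1 authentication header these calls require can be sketched as follows (a minimal sketch; the access ID, key, and request body below are placeholders, not real credentials):

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id, access_key, http_verb, resource_path,
                     data="", epoch_ms=None):
    """Build the 'Authorization: LMv1 ...' header value for a LogicMonitor
    REST API request. `data` is the request body as a string (empty for GET)."""
    if epoch_ms is None:
        epoch_ms = str(int(time.time() * 1000))
    # Signature = base64( hex( HMAC-SHA256(verb + epoch + body + path) ) )
    message = http_verb + epoch_ms + data + resource_path
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return "LMv1 {}:{}:{}".format(access_id, signature, epoch_ms)

# Placeholder example: header for a POST to /device/groups
header = lmv1_auth_header("AAA", "secret", "POST", "/device/groups",
                          data='{"name":"AWS Billing"}',
                          epoch_ms="1500000000000")
```

The header then goes on the HTTPS request to `https://<account>.logicmonitor.com/santaba/rest/device/groups` alongside the JSON body.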
  2. I would like to see the cloud collector for AWS utilize all of the relevant data in the cost and usage reports provided by Amazon. For example, I just enabled billing monitoring for an AWS account, and the billing bucket contains reports for every day in June, but the collector only calculated the month-to-date total costs as of today, even though it could have calculated the month-to-date totals for each day in June and populated the graphs accordingly. It also ignored the reports for May.
  3. I’m not sure if your product group has this on their radar, but for AWS in particular it would be very useful to have a “keep display name in sync with AWS Name tag” checkbox that is selected for AWS sync, since AWS Name tags can change and then no longer match what’s in LM. Azure doesn’t seem to have this problem (I don’t think Google would have this issue either). To deal with it, I ended up writing this script (which can be scheduled), but hopefully it can just be added to the product: https://communities.logicmonitor.com/topic/1525-logicmonitor-powershell-module/?tab=comments#comment-4385
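The core of such a sync script (sketched here in Python rather than PowerShell; all device names and IDs below are illustrative) is pairing each LM device to its current AWS Name tag and renaming the mismatches:

```python
def devices_to_rename(lm_devices, aws_name_tags):
    """Given LM devices (dicts with 'id', 'displayName', and the
    'system.aws.resourceid' property) and a map of AWS resource id ->
    current Name tag, return (device_id, new_name) pairs for every device
    whose display name no longer matches the tag."""
    renames = []
    for dev in lm_devices:
        rid = dev.get("system.aws.resourceid")
        tag = aws_name_tags.get(rid)
        if tag and dev["displayName"] != tag:
            renames.append((dev["id"], tag))
    return renames

# The surrounding script would fetch lm_devices from the LM REST API, the
# tag map from EC2 describe-instances, then PATCH each displayName.
lm_devices = [
    {"id": 1, "displayName": "web-old", "system.aws.resourceid": "i-0abc"},
    {"id": 2, "displayName": "db-01",   "system.aws.resourceid": "i-0def"},
]
tags = {"i-0abc": "web-new", "i-0def": "db-01"}
print(devices_to_rename(lm_devices, tags))  # → [(1, 'web-new')]
```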
  4. AWS Lambda Alias

    GX2WXT A single Lambda function might have several versions. The default Lambda datasource monitors and alerts on the aggregate performance of each Lambda function. Using the Alias functionality in AWS, this datasource returns CloudWatch metrics specifically for the versions to which you have assigned aliases, allowing you to customize alert thresholds or compare performance across different versions of the same function. This datasource does not automatically discover aliases and begin monitoring them (as this could very quickly translate into a large number of aliases being monitored and drive up your CloudWatch API bill). Instead, add only the aliases you want monitored by setting the device property "lambda.aliases" either on individual Lambda functions or at the group level if you're using the same alias across several Lambda functions. To add more than one, simply list them separated by a single space, e.g. "Prod QA01 QA02". If an alias does not exist, no data will be returned. This datasource is otherwise a clone of the existing AWS_Lambda datasource, with the same default alert thresholds.
  5. AWS - Top API calls by User

    MZMPR6 Every 5 minutes, this datasource queries Elasticsearch for a list of the top 20 API callers as identified by the "userIdentity.sessionContext.sessionIssuer.userName" field. This should return a list of accounts that are running under automation, as opposed to user accounts. It also returns the number of calls being throttled by AWS, as outlined here: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/query-api-troubleshooting.html Use this datasource to improve any code you have running in AWS that relies on API calls outside of CloudWatch. Suggestions on how to improve this datasource are welcome! We've already used it to find a number of issues and to identify code that was not adhering to AWS's API request guidelines. To apply this datasource, assign the property "CloudTrailES.URL" to the Elasticsearch instance that services your CloudTrail log requests. The value of this property should be the search URL endpoint for that Elasticsearch instance (e.g. "https://search-cloudtraillogs-ABCDEFGHIJK12345.us-east-1.es.amazonaws.com/_search").
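The Elasticsearch request behind a datasource like this presumably looks something like the following (a sketch only; the `@timestamp` field and the aggregation name are assumptions, while the userName field comes from the post above):

```python
import json

def top_callers_query(minutes=5, size=20):
    """Build an Elasticsearch query body: top API callers by CloudTrail
    session-issuer userName over the last `minutes` minutes."""
    return {
        "size": 0,  # aggregations only, no raw hits
        "query": {"range": {"@timestamp": {"gte": "now-{}m".format(minutes)}}},
        "aggs": {
            "top_callers": {
                "terms": {
                    "field": "userIdentity.sessionContext.sessionIssuer.userName",
                    "size": size,
                }
            }
        },
    }

# The datasource would POST this body to the CloudTrailES.URL endpoint, e.g.:
#   requests.post(cloudtrail_es_url, json=top_callers_query())
print(json.dumps(top_callers_query(), indent=2))
```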
  6. I've been creating datasources to collect our custom AWS CloudWatch metrics as per the docs: https://www.logicmonitor.com/support/monitoring/cloud/monitoring-custom-cloudwatch-metrics/ - mainly this is fine. However, the collector can't cope with dimensionless metrics: the path format is "Namespace>Dimensions>Metric>AggregationMethod, where Dimensions should be one or more key value pairs". I've tried creating datapoints without a dimension, but they return NaN (probably because LM requires "one or more key value pairs" for Dimensions). We currently use a Python script to collect most of our custom metrics, but it's resource intensive for our collectors and I'm trying to move away from it. Does anyone know of a way to use the 'AWS CLOUDWATCH' collector with dimensionless metrics?
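As a possible workaround until the collector supports this, a script datasource can fetch a dimensionless metric directly. Sketched here with boto3 (the namespace and metric name are made up for illustration; the helper just builds the request so the shape is clear):

```python
import datetime

def stats_request(namespace, metric, period=300):
    """Build the kwargs for CloudWatch get_metric_statistics with NO
    dimensions -- the case the 'AWS CLOUDWATCH' path syntax can't express.
    An empty Dimensions list matches only metrics published without any."""
    now = datetime.datetime.utcnow()
    return {
        "Namespace": namespace,          # e.g. a custom "MyApp" namespace
        "MetricName": metric,
        "Dimensions": [],                # dimensionless: empty list
        "StartTime": now - datetime.timedelta(seconds=period),
        "EndTime": now,
        "Period": period,
        "Statistics": ["Average"],
    }

# On a collector with AWS credentials this would run as:
#   cw = boto3.client("cloudwatch")
#   datapoints = cw.get_metric_statistics(**stats_request("MyApp", "QueueDepth"))
```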
  7. HCPFGA The default LogicMonitor datasource names the instances in a strange way and then alerts for events that have already completed. I've added a better instance naming convention that clearly identifies the event that will occur and when. I also put in logic to detect if the scheduled event has already taken place to prevent unnecessary alerting.
  8. Today, the system.displayname property for an AWS device is not automatically updated when the "Name" property in AWS is changed. It would make it easier to see which devices we are actually monitoring if the display names in LogicMonitor were synced with the current "Name" property in AWS.
  9. AWS EBS Burst Credits

    JAPZ69 Monitors EBS volume IO burst credits (measured as a percentage) and triggers a warning alert below 10%.
  10. AWS China

    As a global business we have infrastructure in China and we also use cloud services in China. Please add support for AWS China. China is a huge market for us.
  11. Please add native support for discovery of ECS metrics by ClusterName AND ServiceName dimensions!
  12. - Token to name a device based on AWS tag
    - The ability to exclude devices based on region
    - All AWS tags should be imported/populated/inherited
    - Netscan should populate the system.aws.resourceid property instead of system.ec2.resourceid. The former is actually used by the default AWS_EC2 datasource. Probably moot if my third bullet is done.
  13. Today we were flooded with hundreds of alerts in our alert dashboard. AWS was having an issue in the ap-southeast-1 region with launching new instances. The "AWS Service Health" datasource found this issue and then alerted on it for each instance & ebs volume we had in that region. That was too many alerts, especially since the issue wasn't with our existing devices in that region. I would like this alert to happen on the AWS Device Group itself - per region, so that we can know about it, but it won't generate an excessive number of alerts with the same exact information.
  14. Netscan AWS Integration (IAM instance roles)

    When setting up a NetScan policy and choosing AWS, can we instead rely on the IAM instance role of the collector instance (when the collector is an EC2 instance running in AWS) rather than hardcoding an AWS AccessKey/SecretKey in the NetScan policy? http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
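For what it's worth, boto3 already implements this behaviour through its credential provider chain: if no keys are supplied, a client created on an EC2 collector falls back to the instance role automatically. A sketch of how the integration could pass keys only when they were actually configured (the helper name is illustrative):

```python
def client_kwargs(access_key=None, secret_key=None):
    """Return kwargs for boto3.client(): pass explicit keys only when both
    were configured; otherwise return {} so boto3 falls back through its
    provider chain (env vars, config files, and finally the EC2 instance
    role's temporary credentials)."""
    if access_key and secret_key:
        return {"aws_access_key_id": access_key,
                "aws_secret_access_key": secret_key}
    return {}

# With no keys configured, this picks up the collector's instance role:
#   ec2 = boto3.client("ec2", region_name="us-east-1", **client_kwargs())
```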
  15. Two very useful metrics to monitor on our EC2 instances are:
    - CPUCreditUsage: the number of CPU credits consumed during the specified period. This metric identifies the amount of time during which physical CPUs were used for processing instructions by virtual CPUs allocated to the instance.
    - CPUCreditBalance: the number of CPU credits that an instance has accumulated. This metric is used to determine how long an instance can burst beyond its baseline performance level at a given rate.
    Close monitoring of these enables us to spot when a T2 instance is using more CPU than it should over time. Monitoring CPU usage alone is not enough: once your credit balance reaches 0, usage is throttled by Amazon to a baseline level. In other words, if your credit balance reaches 0, CPU utilisation will drop to 10, 15, or 20% depending on the instance type, making it impossible to alert on CPU usage of 100%.
    AWS/EC2>InstanceId:##system.aws.resourceid##>CPUCreditBalance
    AWS/EC2>InstanceId:##system.aws.resourceid##>CPUCreditUsage
    I added these manually and they work fine; it would be nice if these were included in the default LogicModule.
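To illustrate why the balance matters for alerting, a rough time-to-throttle estimate can be derived from these two datapoints (a sketch only; the rates below are example numbers, so check your instance type's actual credit earn rate):

```python
def minutes_until_throttle(credit_balance, usage_per_min, earn_per_min):
    """Estimate minutes until a T2 instance exhausts its CPU credit balance
    and gets throttled to its baseline. Returns None if the balance is not
    shrinking (earning at least as fast as spending)."""
    burn_rate = usage_per_min - earn_per_min
    if burn_rate <= 0:
        return None  # balance is stable or growing; no throttling ahead
    return credit_balance / burn_rate

# Example: 120 credits left, spending 2 credits/min, earning 1 credit/min
# (illustrative rates -- e.g. a t2.micro earns credits far more slowly):
print(minutes_until_throttle(120, 2, 1))  # → 120.0
```

A complex datapoint built this way could alert hours before CPUUtilization ever shows a problem.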
  16. Hello, At the moment, when you've added an AWS Account, it will show all the services you've asked for (e.g., all SQS queues). Is it possible to have a feature where you can select a subset of these queues after they are discovered? Clearly they are stored somewhere (via a datapoint's metric path: ##system.aws.resourceid##), but there doesn't seem to be a way of changing them. Thanks.
  17. Filter AWS Services

    Hello, at the moment the documentation for adding an AWS datasource almost seems to imply that you can filter any type of service (be it EC2, S3, SQS, etc.) by name using the tag name/value system, but in actuality it only filters resource types that actually support tags. I'm asking for a simple addition to the AWS datasource so that you can filter, for example, SQS by some regex, e.g.: regex name = foo, regex value = *foo*, thereby causing the autodiscovery to only show *foo* SQS queues. This is a bit of a deal breaker for people who have many SQS queues, where some of them are just test queues which don't need to be monitored. Thanks.
  18. We have some custom CloudWatch metrics we'd like to gather, display, and alert on; however, they aren't specific to any host or AWS resource. If we push application-specific metrics into CloudWatch, we don't want them tied to any specific application host, as hosts/instances can come and go. We push these metrics to CloudWatch without an associated EC2/AWS resource and can see them in CloudWatch, but without an InstanceId we cannot pull them into LogicMonitor. We'd like to be able to pull in and organize any metric from CloudWatch, regardless of whether or not its dimensions include an InstanceId.
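For reference, publishing a host-independent metric is already straightforward on the AWS side; the gap is only on the ingestion side. A sketch of the publish call with boto3 (the namespace and metric name are examples, not anything from the post):

```python
def metric_datum(name, value, unit="Count"):
    """Build a host-independent CloudWatch datum: no Dimensions key at all,
    so the metric is not tied to any InstanceId or other AWS resource."""
    return {"MetricName": name, "Value": value, "Unit": unit}

# With credentials in place this would publish under a custom namespace:
#   cw = boto3.client("cloudwatch")
#   cw.put_metric_data(Namespace="MyApp",
#                      MetricData=[metric_datum("OrdersQueued", 42)])
```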