Everything posted by Sam Gendler

  1. SNMP-based monitoring of an EC2 instance that is a member of an ECS cluster picks up filesystems at /var/lib/docker/containers/<container_id>, which go away once the Docker container exits. This causes LogicMonitor to alert on the 'StorageNotAccessible' datapoint of the 'Filesystem Capacity' datasource. Does anyone have a good fix for this that doesn't just turn off monitoring of those filesystems? And do I care, anyway? They seem to be some kind of overlay filesystem that is just content on a normal filesystem that is also being monitored, so I can probably just exempt /var/lib/do
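A minimal sketch of the kind of path filter such an exclusion would need; the regex and function name are my own, not anything LogicMonitor ships:

```python
import re

# Ephemeral mounts Docker creates per container; they vanish when the
# container exits, which is what trips the StorageNotAccessible datapoint.
EPHEMERAL_DOCKER_MOUNT = re.compile(r"^/var/lib/docker/(containers|overlay2?)/")

def is_ephemeral_docker_mount(path: str) -> bool:
    """Return True if the path is a per-container Docker mount."""
    return bool(EPHEMERAL_DOCKER_MOUNT.match(path))
```

In the datasource itself this would translate into an active-discovery filter that excludes instances whose description matches the same pattern.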
  2. SNMP credentials do exist - at the top level, so there's no way to avoid them - and the collector does correctly populate some SNMP datasources, so I know access is working. I can run arbitrary snmpwalk commands from the collector host against the instances in question, so I know the network ACLs and security groups are correct (also, I spent far longer than should have been necessary getting them to work in the first place, so I KNOW they are correct). But it does not seem to be picking up auto properties correctly. I have no system.sysinfo property, and I'm unsure how that gets populated. I
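For reference, my understanding is that system.sysinfo is normally derived from the SNMP sysDescr value, so an explicit snmpget of that OID from the collector host is a quick sanity check. A sketch that just builds the command (the host and community string are placeholders):

```python
SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"  # SNMPv2-MIB::sysDescr.0

def snmpget_sysdescr_cmd(host: str, community: str = "public") -> list:
    """Build an snmpget for sysDescr; run it from the collector host.

    If this returns a value but system.sysinfo is still empty, the problem
    is on the LogicMonitor side rather than network ACLs or credentials.
    """
    return ["snmpget", "-v2c", "-c", community, host, SYS_DESCR_OID]

# Usage (requires net-snmp tools on the collector host):
#   subprocess.run(snmpget_sysdescr_cmd("10.0.0.10"), check=True)
```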
  3. Are SSH keys supported yet? Better yet would be the ability to fetch an SSH key, by name, from a key management service such as those offered by AWS and others. I don't want to have to encode passwords into the LogicMonitor interface, and I certainly don't want to leave a private key with access to all my production resources sitting on my collector. But my collector runs on an EC2 instance that has an instance role assigned to it, so it would be trivial to give that role a policy that allows it to fetch a key by name. If the basic SSH functionality included in scripts were to include the
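Under the instance-role model described above, the fetch itself is the easy part. A sketch assuming the key lives in SSM Parameter Store as a SecureString and the collector's role has ssm:GetParameter; the parameter name is a placeholder:

```python
def ssm_get_key_cmd(param_name: str, region: str = "us-east-1") -> list:
    """Build the AWS CLI call to fetch a private key from Parameter Store.

    --with-decryption is required for SecureString parameters; the instance
    role supplies credentials, so nothing sensitive lives on the collector.
    """
    return [
        "aws", "ssm", "get-parameter",
        "--name", param_name,
        "--with-decryption",
        "--region", region,
        "--query", "Parameter.Value",
        "--output", "text",
    ]

# Usage (from the collector host, with the instance role attached):
#   subprocess.run(ssm_get_key_cmd("/prod/ssh/collector-key"), check=True)
```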
  4. I recently enabled collector-based monitoring for resources discovered via my AWS cloud account. My intention was to enable SNMP-based monitoring of instances, since CloudWatch doesn't give visibility into a lot of metrics that are otherwise useful - disk utilization, most notably. However, while the EC2 instances did pick up basic collector monitoring - ping, Host Status, etc. - none of the extended datasources that are automatically applied when I add an instance through the UI manually were enabled. Looking at the datasource definitions, many SNMP datasources use functions like Servers() and isLin
  5. Thanks. I hadn't seen that, but that's pretty much what I ended up implementing - except that I want the collector ID and collector description to be available to the rest of my provisioning system. So I do everything other than downloading and running the installer within terraform templates/modules, via provisioners and data sources that run local Python scripts. The ID is then passed to the EC2 instance, which runs a script to download the installer for that collector ID and executes it. I'll eventually post a blog entry about it, but I've got too much on my plate to documen
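For anyone wiring up something similar: terraform's external data source runs a program that reads a JSON query on stdin and must print a flat JSON object of strings on stdout. A skeleton of the local-Python-script half of that contract - the key names are my own, and the actual collector-creation call against the LogicMonitor API is elided:

```python
def respond(query: dict) -> dict:
    """Map the terraform query to the output object terraform will read.

    In the real script this is where the LogicMonitor API call that creates
    the collector record would go; the id below is a placeholder. Every
    value must be a string, per the external data source protocol.
    """
    description = query.get("description", "unnamed-collector")
    collector_id = "0"  # placeholder: would come from the API response
    return {"collector_id": collector_id, "description": description}

# terraform invokes the script via:
#   data "external" "collector" { program = ["python3", "create_collector.py"] }
# and the script's entry point would be:
#   print(json.dumps(respond(json.load(sys.stdin))))
```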
  6. I'm looking for a cloud-init configuration that will get a collector installer URL with my API key, download the installer, and run it. I'll happily take any non-cloud-init solution and convert it. Here's a cut and paste of the question I posted at Stack Overflow. The impetus behind this is using terraform to manage AWS infrastructure and not wanting to roll a custom AMI just to get a collector up and running. Here's my SO question, which explains what I'm looking for in a bit more detail (
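Not a full cloud-init answer, but the fiddly part of any such script is the LMv1 request signature, which is stdlib-only Python. The installer resource path in the comment is how I understand the endpoint, so treat it as an assumption and verify it against the current API docs:

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id, access_key, http_verb, resource_path,
                     data="", epoch_ms=None):
    """Build the LogicMonitor LMv1 Authorization header value.

    The signature is the base64 encoding of the hex HMAC-SHA256 digest of
    verb + timestamp + body + resource path, keyed with the API access key.
    """
    if epoch_ms is None:
        epoch_ms = int(time.time() * 1000)
    message = f"{http_verb}{epoch_ms}{data}{resource_path}"
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# Example (IDs and the installer path are placeholders/assumptions):
#   header = lmv1_auth_header("API_ID", "API_KEY", "GET",
#       "/setting/collector/collectors/42/installers/Linux64")
# then GET https://<account>.logicmonitor.com/santaba/rest<resource_path>
# with {"Authorization": header} and pipe the response to the installer.
```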