Joe Skup


  1. Update: I found the solution. Edit agent.conf as follows:
     1. collector.xxxx.enable=false for every unused/unneeded datasource collection mechanism.
     2. eventcollector.xxxx.enable=false for every unused/unneeded eventsource collection mechanism.
     3. sbproxy.pool.connections=25
     4. sbproxy.linux.threadPoolSize=60
     5. collector.snmp.threadpool=25
     6. discover.workers=5
     In my case I am pretty much monitoring only the collector machine itself, and I've left only the snmp-, script-, and webpage-related collectors enabled; this seems to have done the trick. The last few configurations might not be necessary.
  2. I know that it is not exactly recommended/reliable to use a 1 GB RAM / 1 CPU core machine for monitoring... but it seems that installing a "nano"-sized collector on a t2.micro AWS instance and having it monitor only itself brings the instance to a screeching halt. While the collector is running, top shows the CPU pegged at 100% almost nonstop. Memory is not hit quite as hard, though it does climb past 500 MB. The load average exceeds 5 on a single-core machine, which makes the system unusable; sometimes this causes the instance to throw status alerts and even crash. Question: Has a
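Taken together, the agent.conf changes described in the first post amount to a fragment like the sketch below. The xxxx placeholders stand in for the individual collection mechanism names (check your own agent.conf for the full list), and the numeric values are the ones reported to work on a 1 GB / 1-core instance, not universal defaults:

```
# Disable each unused datasource collection mechanism
# (replace xxxx with the mechanism name as it appears in agent.conf)
collector.xxxx.enable=false

# Disable each unused eventsource collection mechanism
eventcollector.xxxx.enable=false

# Shrink the proxy and collection thread pools to fit a small instance
sbproxy.pool.connections=25
sbproxy.linux.threadPoolSize=60
collector.snmp.threadpool=25

# Reduce the number of active-discovery workers
discover.workers=5
```

After editing agent.conf, the collector service has to be restarted for the new values to take effect.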