
What collector settings control the number of threads for these collection services: esx-service, snmp-selector, webpage-async-workers?


Question

Hello, I'm looking at tweaking my collectors because I see these collection services are maxed out.

[Screenshot: collection-service thread pools showing maxed-out utilization]

  • esx-service
  • snmp-selector
  • webpage-async-workers

What settings in the collector config can I bump up to improve these collection services? I searched the config for sections matching these names but didn't find anything specific to them.

 


4 answers to this question



Hey Stu!!! I already have a case open asking about this, but figured I'd post here in case anyone else knew. Searching didn't find anything either, so I asked.

Once I get a response I'll update. 

LogicMonitor Staff

This is from a lab collector that is monitoring a handful of devices, so what you're seeing might be totally normal. I think those are single threads related to the collector's internal task management (but I'm not certain of this, and will be interested to hear what the support team has to say).

[Screenshot: thread list from a lab collector]

 

This datasource has the counters for the collection tasks. Probably the most important one for thread availability is the unavailable-thread counter (visible on the instances).

[Screenshot: collection-task datasource instances showing the unavailable-thread counter]


Hi Mike. Thanks for the info. I'm not sure that specific collector datapoint is displaying the graph correctly, because I know we had hundreds of failed tasks for the XenApp_* datasources, which is what spawned my support case in the first place. Those datasources are BatchScript-based.

The graph is showing only 0.0046 t/s for "# of tasks/second that are being scheduled that are still waiting for a thread from a previously scheduled task."
[Screenshot: graph of the unavailable-thread datapoint at 0.0046 tasks/s]
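Just to put that number in scale (my own back-of-the-envelope, not anything from the graph tooling): 0.0046 tasks/second works out to roughly one task left waiting for a thread every 1 / 0.0046 ≈ 217 seconds, i.e. about one every three and a half minutes.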

Unavailable Thread Scheduling:

[Screenshot: Unavailable Thread Scheduling graph]

I looked into this because we were getting huge gaps in this DataSource's datapoints. Once we bumped up the BatchScript thread count it started working nicely so far. We'll see over the weekend how it holds up.

[Screenshot: the DataSource's datapoints after increasing the BatchScript thread count]
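For anyone landing on this later: the change above was made in the collector's agent.conf. Below is a minimal sketch of the kind of edit involved; the key name and value are assumptions on my part (they differ by collector version and size), so confirm against your own agent.conf or with support before changing anything.

```
# agent.conf -- illustrative sketch only; the key name and value are assumptions,
# verify against your own collector's agent.conf / support guidance.
# Raise the BatchScript collection thread pool (the "thread count" bumped above).
collector.batchscript.threadpool=100
```

In my experience a manual agent.conf edit needs a collector restart before the new pool size takes effect.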

 

 

