
DataSource - PowerShell to check 40 different remote shares (on printers...) - TIMEOUT


LMNewbie

Question

Hi community :)

I have a new DataSource PowerShell script that checks print shares to confirm that a copy process has run successfully.

I also have individual datapoints for each path so the alert can be relevant to the particular site.

$paths = @()
$paths += ,@("\\PATHONE\share\folder\")
$paths += ,@("\\PATHTWO\share\folder\")

etcetc

$statuses = @(0) * $paths.Count
$location = [int]0
$i = 0

foreach ($path in $paths) {

    if (!(Test-Path $path)) {
        $statuses[$location] = 1
        $location++
        continue
    }

    $file = gci $path | sort LastWriteTime | select -last 1

    if ($file.LastWriteTime -lt (Get-Date).AddDays(-1)) {

 

Unfortunately I get a timeout while trying to test this, and when implemented, it just doesn't work.

Error:


FAILED

Embed powershell script failed, elapsed time: 61 seconds - java.util.concurrent.TimeoutException: The request timeout

 

This script takes about 1.5 minutes to run, so I'm definitely exceeding the request timeout here.

Can I work around this issue somehow?

 

Thanks!


8 answers to this question

Recommended Posts


I think the Test Script button has a timeout of 1 minute, and scheduled checks default to two minutes. You can change this in the collector's agent.conf, but keep in mind it applies to all scripts the collector runs. Generally you want scripts to run as fast as possible since tasks are queued (the default is 10 checks running simultaneously, I think), so be careful not to bog down the collector with checks that just sit waiting for timeouts, and don't run them too frequently.

Also keep in mind that a check like Test-Path can have a long timeout of its own, so if a path or two is really down, your script might fail completely even with a >2 min timeout on the collector. Then you wouldn't be able to tell which path failed, and if you haven't implemented No Data alerts, you might not even know it isn't working. It might be better to check each path separately instead. That lets the UNC checks run in parallel, and one path timing out won't cause the others to fail. The easiest way to do that is to make the DataSource multi-instance using the "script" method (instead of batch script). It also gives you separate stats for each path.
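One generic way to keep a single dead host from stalling the whole run, regardless of the DataSource layout, is to wrap Test-Path in a background job with a bounded wait. This is a sketch, not anything LogicMonitor-specific; the helper name and the 5-second limit are arbitrary choices for illustration:

```powershell
# Hypothetical helper: Test-Path with an upper bound on how long we wait.
function Test-PathWithTimeout {
    param(
        [string]$Path,
        [int]$TimeoutSec = 5   # arbitrary example limit
    )
    $job = Start-Job -ScriptBlock { param($p) Test-Path $p } -ArgumentList $Path
    if (Wait-Job $job -Timeout $TimeoutSec) {
        # Job finished in time; use its real result.
        $result = Receive-Job $job
    } else {
        # Job still hanging on the UNC path; treat as "down".
        $result = $false
    }
    Remove-Job $job -Force
    return $result
}

Test-PathWithTimeout -Path "\\PATHONE\share\folder\"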

Have you looked at the built-in UNC Monitor check?

As a side note, it's generally better to make checks generic using tokens/properties rather than hard-coding values into the script. That gives you more flexibility and reuse later. The multi-instance approach above is also a good way to do that.


  • Administrators
14 minutes ago, Mike Moniz said:

As a side note, it's also generally it's better to make checks generic using tokens/properties rather then hard code values into the code.

^^^Most definitely.

Also, are you running in batchscript or script mode? One thing you can do when individual instances take longer to finish is use script instead of batchscript. The difference is that batchscript runs one script to gather all the data under a single, fairly low, timeout. That timeout is susceptible to the issues @Mike Moniz mentioned. However, if you use script, the collector gets one task for each instance, and each instance has its own timeout. This means you'd have 1 minute (or 2, I'd have to check the default config) for each single path to be checked. When using script, you get a token, ##WILDVALUE##, that contains the wildvalue of the current instance.

So, instead of your

foreach ($path in $paths)

 

you'd just do

$path = "##WILDVALUE##"

 

TL;DR: BATCHSCRIPT is not the only multi-instance option. Script is also an option, especially when individual instances take a long time to complete.
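Put together, the per-instance collection script could look something like this sketch. It mirrors the one-day threshold from the original script; the ##WILDVALUE## token is substituted by the collector before the script runs, so this only works inside a script-mode DataSource:

```powershell
# ##WILDVALUE## is replaced by the collector with this instance's UNC path.
$path = "##WILDVALUE##"
$status = 0

if (!(Test-Path $path)) {
    # Path unreachable: flag the instance.
    $status = 1
} else {
    # Newest file in the share; stale if older than one day.
    $file = Get-ChildItem $path | Sort-Object LastWriteTime | Select-Object -Last 1
    if ($file.LastWriteTime -lt (Get-Date).AddDays(-1)) {
        $status = 1
    }
}

Write-Host "status=$status"
exit 0
```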



Hi Mike, Stuart.

Thanks for the advice. I have a busy week, but I'll hopefully attack it again soon and use your suggestions. Appreciate your time!

I probably don't even care whether checking the path works. I copied this off another script in our environment and rewrote it for something else.

Edited by LMNewbie

$paths = @()
$paths += ,@("\\xxxxxxxxxx1\share\folder\")  #Location0 Site Code: AB
$paths += ,@("\\xxxxxxxxxxx2\share\folder\") #Location1 Site Code: CD

$statuses = @(0) * $paths.Count
$location = [int]0
$i = 0

foreach ($path in $paths) {

    if (!(Test-Path $path)) {
        $statuses[$location] = 1
        $location++
        continue
    }

    $file = gci $path | sort LastWriteTime | select -last 1

    if ($file.LastWriteTime -lt (Get-Date).AddDays(-1)) {
        $statuses[$location] = 1
    }
    $location++
}

foreach ($status in $statuses) {
    Write-Host "Location$i=$status"
    $i++
}
exit 0

I missed the bottom of my code in the insert in my first post.

 

Results of the above example:

Location0=0
Location1=0
Location2=0
Location3=1 (failed path and/or LastWriteTime)


On 1/15/2022 at 3:48 AM, Mike Moniz said:

UNC Monitor check?

 

 

I have tested and think I will implement this shortly. Works well :).

Just need to monitor the files within now. The script runs much quicker without checking the paths first, so I'll see how that goes.


  • Administrators
On 1/16/2022 at 4:01 PM, LMNewbie said:

Results of above example;

Location0=0
Location1=0
Location2=0
Location3=1 (fail path and or lastwritetime)

FYI, this isn't a very efficient way of handling your data, because you have to create a new datapoint for every location. Instead, make your locations instances and use a single datapoint. Your output would look something like:

0.status=0
1.status=0
2.status=0
3.status=1
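With the original $statuses loop, producing that output shape is a one-line change to the Write-Host call. A sketch, assuming batchscript-style output where each line is instance.datapoint=value; the status values here are just the example numbers from the post above:

```powershell
$i = 0
$statuses = @(0, 0, 0, 1)   # example values only

foreach ($status in $statuses) {
    # Emit "instance.datapoint=value" lines, e.g. "3.status=1".
    Write-Host "$i.status=$status"
    $i++
}
```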


On 1/19/2022 at 3:21 AM, Stuart Weenig said:

FYI, this isn't a very efficient way of handling your data. Reason being: you have to create a new datapoint for every location. Instead, have your locations be instances and have one datapoint. Your output would look something like:

0.status=0
1.status=0
2.status=0
3.status=1

 

Yeah, creating all those datapoints was ugly and time-consuming to repeat for every new site we deploy. I modelled it off another one we had configured here, but I could already see that one was out of date and missing sites...

I ended up adjusting the script to be an email alert yesterday.

It creates a robocopy log file, then PowerShell parses the log for errors and emails the team if any errors are present.
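That parse-and-mail step might look roughly like the sketch below. The log path, error pattern, addresses, and SMTP server are all hypothetical placeholders, and Send-MailMessage needs a reachable SMTP relay to actually deliver:

```powershell
# Hypothetical log location for illustration.
$log = "C:\Logs\robocopy.log"

# Robocopy marks failures with "ERROR"; collect any matching lines.
$errors = Select-String -Path $log -Pattern "ERROR" | ForEach-Object { $_.Line }

if ($errors) {
    # Placeholder addresses and server; substitute real values.
    Send-MailMessage -From "alerts@example.com" -To "team@example.com" `
        -Subject "Robocopy errors detected" `
        -Body ($errors -join "`n") `
        -SmtpServer "smtp.example.com"
}
```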

I will try to integrate this into LM in the future.

