About usnishguha

  1. Hi @Stuart Weenig, I tried that. Python kept failing due to multiple library dependencies. I am looking for something like a step-by-step guide to run at least one of the simplest Python programs from Linux. I think I am missing something. Also, in our environment it's not possible to "pip install ****" because there is no open internet connectivity from Linux.
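     For hosts without internet access, one common workaround is to download the SDK and its dependencies on a connected machine and install them offline. A minimal sketch, assuming the PyPI package is named logicmonitor_sdk and ./wheels is a scratch directory of your choosing:

     ```shell
     # On a machine WITH internet access: fetch the SDK plus all of its
     # dependencies (certifi, urllib3, ...) as installable files.
     pip download --dest ./wheels logicmonitor_sdk

     # Copy ./wheels to the offline Linux box, then install from that
     # directory only, never touching the network.
     pip install --no-index --find-links ./wheels logicmonitor_sdk
     ```

     Installing this way also sidesteps missing-dependency errors such as `ModuleNotFoundError: No module named 'certifi'`, since the dependencies are staged alongside the SDK.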
  2. Hi All, I am new to both Go and API programming. I have downloaded the SDK and kept it inside one of our Linux boxes (where Go is installed). When I run go mod tidy, this is the output I receive:
     [root@server]# go mod tidy
     go: finding module for package
     go: finding module for package
     go: finding module for package
     go: finding module for package
     go: finding module for package
     go: finding module for package
     go: finding module for package
     go: finding module for package
     imports module Get "": read tcp x.x.x.x:21126-> read: connection reset by peer
     imports module Get "": read tcp x.x.x.x:21130-> read: connection reset by peer
     imports module Get "": read tcp x.x.x.x:21132-> read: connection reset by peer
     I am inside the path /home/user/LM and the SDKs are unzipped under /home/user/. I have put "x" in place of the actual server IP. Please suggest. Thanks
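     Since the box has no open internet connectivity, `go mod tidy` cannot download the modules it is trying to find, hence the connection-reset errors. One standard workaround (a sketch, assuming the module directory can first be staged on a connected machine) is Go's vendoring:

     ```shell
     # On a machine WITH internet access, inside the SDK module directory:
     go mod tidy      # resolve dependencies and fill in go.sum
     go mod vendor    # copy every dependency into ./vendor

     # Copy the whole directory (including vendor/) to the offline box,
     # then build entirely from the vendored copies:
     go build -mod=vendor ./...
     ```

     With Go 1.14 and later, a vendor directory is used automatically when present, so `-mod=vendor` is often optional.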
  3. Thanks @Stuart Weenig, is there any SSH equivalent? We don't have SNMP in our environment
  4. Hi @Stuart Weenig, can you please tell me the built-in datasource name?
  5. Hi All, Can anyone please tell me if you have a solution to the situation below. Currently we have static thresholds of 85, 90 and 95% utilization for all filesystems. This causes a lot of alerts for bigger filesystems, which could be dealt with at a lower priority than the small ones. What we are looking for is a way to have dynamic thresholds that change based on the total size of the filesystem. Let's say:
     1 GB to 100 GB --> 85, 90, 95%
     101 GB to 500 GB --> 88, 92, 96%
     501 GB to 2 TB --> 92, 95, 97%
     Above 2 TB --> 96, 98, 99%
     One solution is probably to have a separate datasource for bigger filesystems, or to do something by manipulating the complex datapoints. Is there any better solution for this?
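     The tiering described above is essentially a size-to-thresholds lookup. An illustrative Python sketch (not a LogicMonitor datasource; the boundaries are taken directly from the post, with 2 TB treated as 2048 GB):

     ```python
     # Map a filesystem's total size (in GB) to (warn, error, critical)
     # utilization thresholds, using the tiers proposed in the post.
     TIERS = [
         (100,          (85, 90, 95)),   # 1 GB  - 100 GB
         (500,          (88, 92, 96)),   # 101 GB - 500 GB
         (2048,         (92, 95, 97)),   # 501 GB - 2 TB
         (float("inf"), (96, 98, 99)),   # above 2 TB
     ]

     def thresholds_for(size_gb):
         """Return the (warn, error, critical) tuple for a filesystem size."""
         for upper_bound, thresholds in TIERS:
             if size_gb <= upper_bound:
                 return thresholds

     print(thresholds_for(50))    # small filesystem -> (85, 90, 95)
     print(thresholds_for(1024))  # 1 TB filesystem  -> (92, 95, 97)
     ```

     In LogicMonitor terms, the same lookup could feed complex datapoints computed from the filesystem's total size, so one datasource covers all tiers.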
  6. Pardon my ignorance, but I am fairly new to Groovy. Can you please tell me if this looks OK?
     import com.jcraft.jsch.JSch
     import com.santaba.agent.util.Settings

     host = hostProps.get("system.hostname")
     user = hostProps.get("ssh.user")
     pass = hostProps.get("ssh.pass")
     port = hostProps.get("ssh.port")?.toInteger() ?: 22
     //cert = hostProps.get("ssh.cert") ?: '~/.ssh/id_rsa'
     timeout = 15000 // timeout in milliseconds

     try {
         jsch = new JSch()
         session = jsch.getSession(user, host, port)
         session.setConfig("StrictHostKeyChecking", "no")
         String authMethod = Settings.getSetting(Settings.SSH_PREFEREDAUTHENTICATION, Settings.DEFAULT_SSH_PREFEREDAUTHENTICATION)
         session.setConfig("PreferredAuthentications", authMethod)
         if (user && pass) {
             session.setPassword(pass) // without this, connect() would fail password auth
             session.connect(timeout)
         }
     } catch (Exception e) {
         System.out.println(e)
     } finally {
         session?.disconnect() // null-safe: session is null if getSession() threw
     }
  7. @Stuart Weenig, thanks. Are you referring to something like the below?
     if (user && pass) {
         // Both user and pass exist; try to auth, and if we get an invalid login error, warn the user
         try {
             def jsch = new JSch()
             session = jsch.getSession(user, host, port)
             session.connect()
         } catch (Exception e) {
             System.out.println(e)
         }
     }
  8. Hi All, We have observed in our environment that sometimes the ID we use for monitoring gets locked, resulting in "No Data" for that particular datasource. We do have "No Data" alerts on, but since a "No Data" situation can occur for multiple reasons (server down, network disconnection, SSH ID being locked, etc.), we wanted a more meaningful alert specifically for the SSH ID being in locked status. Can anyone please let me know if there's any datasource available for this purpose? Thanks, Usnish
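     One approach (an illustrative sketch, not an existing LogicMonitor datasource) is a script datasource that attempts the SSH login itself and classifies the failure, so a locked account raises its own alert instead of a generic "No Data". The matched substrings below are examples only; real servers and SSH libraries word their errors differently:

     ```python
     # Classify an SSH connection failure so "account locked" can raise a
     # dedicated alert. Order matters: locked-account wording is checked
     # before the generic "auth" match.
     def classify_ssh_failure(error_message):
         msg = error_message.lower()
         if "account locked" in msg or "too many authentication failures" in msg:
             return "locked"          # alert: monitoring ID is locked
         if "timed out" in msg or "no route to host" in msg or "connection refused" in msg:
             return "unreachable"     # alert: host/network problem
         if "auth" in msg:
             return "auth_failed"     # bad credentials (but not locked)
         return "unknown"

     print(classify_ssh_failure("account locked due to failed logins"))
     ```

     The datasource would then report the classification as a datapoint (e.g. locked = 1) and alert on that, rather than relying on No Data.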
  9. Hi @Stuart Weenig, I was able to get past the "certifi" module error after installing it. But can you please let me know about this problem:
     Traceback (most recent call last):
       File "<stdin>", line 1, in <module>
     NameError: name 'lm' is not defined
     It seems it doesn't find the lm module.
  10. @Stuart Weenig, thanks a lot for your detailed answer. I am struggling to some extent with the Python SDK (I am new to this). I downloaded the tar.gz and extracted it, then tried running a sample from here: But it gives me this error:
      ModuleNotFoundError: No module named 'certifi'
      I am probably doing it wrong. Is there any URL that helps with getting started with the SDKs? Thanks in advance.
  11. Yes @Stuart Weenig, exactly like that. Can you please help me find any sample API scripts which can be referred to for this purpose? Thanks in advance.
  12. Hi @Stuart Weenig, not really this. I was looking to know if there's any way we can export the raw data from the datasource, but with a custom interval (different from the polling interval of the datasource). Can this be done via the API?
  13. Thanks @Stuart Weenig, I checked that option earlier, but what I was looking for is the detailed data with a custom interval (greater than the polling interval of the datasource). Can you please help with that?
  14. Hi Everyone, Can anyone please let me know if you are aware of any method/script that can be used to get CPU-, memory- and disk-related performance data in CSV for a list of servers between a specific start time and end time. Thanks in advance.
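      Once raw samples have been retrieved (e.g. via the REST API), downsampling them to a custom interval and writing CSV can be sketched as below. The data here is synthetic, and averaging per bucket is an assumption; min/max/last would work the same way:

      ```python
      import csv
      import io

      def resample_to_csv(samples, interval_seconds, out):
          """Average (epoch_timestamp, value) samples into fixed-width time
          buckets and write one CSV row per bucket."""
          buckets = {}  # bucket start time -> list of values
          for ts, value in samples:
              bucket_start = ts - (ts % interval_seconds)
              buckets.setdefault(bucket_start, []).append(value)
          writer = csv.writer(out)
          writer.writerow(["bucket_start_epoch", "avg_value"])
          for bucket_start in sorted(buckets):
              values = buckets[bucket_start]
              writer.writerow([bucket_start, sum(values) / len(values)])

      # Synthetic 1-minute samples, downsampled to 5-minute (300 s) buckets.
      samples = [(0, 10.0), (60, 20.0), (120, 30.0), (300, 40.0), (360, 50.0)]
      out = io.StringIO()
      resample_to_csv(samples, 300, out)
      print(out.getvalue())
      ```

      Run per device and per datapoint (CPU, memory, disk) over the desired start/end window, writing one file per server, to get the CSV export described above.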