Stuart Weenig

Everything posted by Stuart Weenig

  1. You could create a service out of those two links. The service metric would be interface status. You would choose to aggregate the status data by "mean". If both links are up, they'd both return 1, so the average would be 1. If one link is down, you'd get the average of 1 and 2 (1.5). If both links are down, you'd average 2 and 2 (2). Set your threshold to >=2 and you should be good to go. The only tedious part is setting this up for each pair of links you have.
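The averaging logic in that post can be sanity-checked with a few lines of arithmetic. This is only an illustration of the math, assuming the usual interface status convention of 1 = up, 2 = down; the function names are made up for the sketch:

```python
# Sketch of the "mean of interface status" service metric described above.
# Assumed status convention (not from the post itself): 1 = up, 2 = down.

def service_status(link_statuses):
    """Mean status across a pair (or more) of links."""
    return sum(link_statuses) / len(link_statuses)

def all_links_down(link_statuses, threshold=2):
    # The alert only fires when the mean reaches the threshold,
    # i.e. every link in the set is down.
    return service_status(link_statuses) >= threshold

print(service_status([1, 1]))   # both up -> 1.0
print(service_status([1, 2]))   # one down -> 1.5, below the >=2 threshold
print(all_links_down([2, 2]))   # both down -> True
```

With a >=2 threshold, a single failed link (mean 1.5) stays quiet and only a full pair outage alerts, which is exactly the behavior described.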
  2. @Sarah Terry do you know what is required in order to get the custom columns to be returned either in the API or through the Python SDK?
  3. I haven't used chunker much, but if devicealert contains the response of the api_instance.get_alert_list() method, then you should be looping through devicealert.items, not looping through devicealert looking for items on each child.
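The intended loop shape can be sketched without the SDK installed. Here `SimpleNamespace` stands in for the SDK's response object; the only assumption (consistent with the post) is that the response exposes the alerts as a list on `.items`:

```python
from types import SimpleNamespace

# Stand-in for the object returned by api_instance.get_alert_list();
# the real SDK response exposes the list of alerts on `.items`.
devicealert = SimpleNamespace(items=[
    SimpleNamespace(id="A1", severity=3),
    SimpleNamespace(id="A2", severity=2),
])

# Correct: iterate over the `.items` list itself...
ids = [alert.id for alert in devicealert.items]
print(ids)  # ['A1', 'A2']
# ...not over `devicealert` looking for `.items` on each child.
```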
  4. The API call to get at the data that's shown on the Raw Data tab is documented here: https://www.logicmonitor.com/swagger-ui-master/dist/#/Data/getDeviceDatasourceInstanceData
     1) Get the device id. You can get that by calling "/device/devices/?fields=displayName,id,name"
     2) Get the devicedatasourceid (known as "hdsId" in the documentation). Do this call to get the list of datasources by name and id: "/device/devices/{deviceId}/devicedatasources?fields=dataSourceName,id"
     3) Get the id of the instance that you want to fetch data for
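Those three lookups chain together as URL builds. A minimal sketch, assuming the REST base path and the instance-data path from the Swagger page linked above; `ACCOUNT` is a placeholder, and real calls also need LMv1 or Bearer authentication, which is omitted here:

```python
# Hedged sketch of the three-step lookup described above.
# BASE is a placeholder; authentication is intentionally left out.
BASE = "https://ACCOUNT.logicmonitor.com/santaba/rest"

def device_list_url():
    # Step 1: resolve device ids by name.
    return f"{BASE}/device/devices?fields=displayName,id,name"

def device_datasources_url(device_id):
    # Step 2: resolve the devicedatasource id ("hdsId" in the docs).
    return f"{BASE}/device/devices/{device_id}/devicedatasources?fields=dataSourceName,id"

def instance_data_url(device_id, hds_id, instance_id):
    # Step 3: fetch raw data for one instance (path per the Swagger doc).
    return (f"{BASE}/device/devices/{device_id}"
            f"/devicedatasources/{hds_id}/instances/{instance_id}/data")

print(instance_data_url(42, 7, 3))
```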
  5. Sorry folks, I inadvertently paused the recording at the start of the webinar and it remained paused for the entirety of today's webinar. We don't have a recording of today's session. You can access a few of the past sessions (which cover the same content) here, here, and here. There was a question about time-based escalation chains. The official documentation for this feature is here; look for "Create time-based chain". I mentioned a community-developed solution; that can be found here. There was also discussion about our YouTube channel. That channel is here.
  6. What happens if you do this?

     for alert in lm.get_alert_list().items:
         print(alert.custom_columns)

     If you get a bunch of "None"s, then it may be that the custom columns aren't fetched as part of the method. I vaguely remember there being something special you had to do to get the custom columns, but I can't remember what it was. Try the above and let me know if all you get are "None"s.
  7. It's actively being worked on. Covid is partially to blame for the slowdown of transitioning the entire product to the new version of the UI. I think the Resources page is next on the list of upgrades. Not sure when, but probably soon.
  8. Here's one I did long ago. I recently rebuilt it as an SSH-based DS, which is also in that repo.
  9. What I cannot tell you yet would make you very happy.
  10. Here you go: https://github.com/sweenig/lmcommunity/blob/master/PiJuice/collect.groovy In this script, I compose the Python script within my Groovy script. Then the Groovy script connects to the server via SSH, runs the script on the command line, grabs the output, then extracts the data from the returned JSON.

      //SETUP PORTION OMITTED (it's in the linked github repo)
      pythonScript = """import json
      from pijuice import PiJuice
      x=PiJuice(1,0x14)
      output = {}
      output["charge"] = x.status.GetChargeLevel()
      output["status"] = x.status.GetStatus()
      output["temp"] = x.status.GetBatteryTemperature
  11. Better quality images are definitely needed there. However, I recommend against extending SNMP: it's messy, performance is not great, and it requires stuff on the remote side to work. Better to use Expect or JSch scripted DSs. I've a couple of examples I can link when I get back to the office in a few minutes.
  12. It's there. There's just a different search box for locator codes. (Yes, I know.) In the Exchange, in the top right, there's a locator search option. I have enhanced my version of this DS to include automatic discovery using this discovery script:

      try{
          hostProps.get("pingmulti.targets").tokenize(",").each{
              target = it.tokenize("|")
              println(target[0] + "##" + target[1])
          }
          return 0
      } catch (Exception e){return 1}

      Set a property called "pingmulti.targets" in the format: target1_IP/FQDN|target1_Name,target2_IP/FQDN|target2_Name This also allows the a
  13. Like if you're polling at 1 minute but you want to pull the data with a resolution of every 5 minutes? Closest thing might be the graph data api endpoints.
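If the graph data endpoints don't fit, the same effect can be had client-side by averaging consecutive groups of raw samples. A minimal sketch, purely illustrative, assuming the raw-data call returns one value per minute:

```python
# Downsample 1-minute samples to 5-minute means client-side.
def downsample(values, factor=5):
    """Average each consecutive group of `factor` samples."""
    return [
        sum(values[i:i + factor]) / len(values[i:i + factor])
        for i in range(0, len(values), factor)
    ]

# Ten 1-minute points become two 5-minute points.
print(downsample([1, 2, 3, 4, 5, 10, 10, 10, 10, 10]))  # [3.0, 10.0]
```

A trailing partial group is averaged over however many samples it contains, which is usually what you want at the edge of a timeframe.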
  14. Definitely something to take up with support, if you haven't already.
  15. No, collectors in an ABCG don't share a single cache. You should have logic in your code to compensate when the cache doesn't have the data, so that it can be fetched fresh if needed. Typical cache logic is built for performance, so you don't have to make "heavy" calls for data that doesn't change often. But if the item doesn't exist in the cache, you should make the heavy call and store the result in the cache for subsequent runs.
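That cache-miss fallback pattern is small enough to sketch. This is just the generic shape of the logic described above (the names are made up), not the collector's script-cache API:

```python
# Cache-with-fallback: check the cache first, do the "heavy" call only
# on a miss, and store the result for subsequent runs.
cache = {}

def heavy_lookup(key):
    # Placeholder for an expensive call (API request, big query, etc.).
    return f"value-for-{key}"

def get(key):
    if key not in cache:                # miss: fetch fresh...
        cache[key] = heavy_lookup(key)  # ...and store for next run
    return cache[key]

print(get("device1"))  # first run: heavy call
print(get("device1"))  # subsequent runs: served from the cache
```

On a collector the `cache` dict would be replaced by the script-cache store, so values survive between script runs instead of living only in process memory.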
  16. Ok, so you want to aggregate data over the selected timeframe to get an average, sum, or percentile for each device?
  17. Here's the info on script caching: https://www.logicmonitor.com/support/collectors/collector-configurations/collector-script-caching
  18. Not that I know of (I want that capability too). Your only option would be to sort by that value so that the ones of interest show up at the top. You should submit this as a feature request; I know it's been submitted before, but every submission helps justify prioritization.
  19. I can post the link later but there is a script caching capability that will allow you to carry values from one run of a script to the next through a cache on the collector.
  20. The thinking behind this is for enterprise customers, not MSPs. If you are an enterprise and you're adding one device, more than likely you'll be adding other devices. Storing the creds at the root level makes sense at the enterprise level, especially if you are depending on the wizard to help you with your deployment. I agree that it should be fixed. Perhaps just a checkbox in the wizard that says, "Save these credentials for future use," and defaults to unchecked.
  21. IMO, MSPs should not be using the wizard. Use expert mode or a scripted NetScan to sync from a CMDB. The wizard is not meant for that level of scalability; it goes through troubleshooting connectivity to every device, etc. If you do add one device at a time, use expert mode, which actually allows you to set properties on the device, add it into groups, etc. as it's added. I'm probably missing some use case out there, so I'm curious how you're using it.
  22. Here's the replay from today's webinar. Please provide feedback about this session. Questions asked during this webinar:

      [13:07] We are already monitoring Zoom and Office365 the old way. Is there an advantage to changing?
      [14:17] Office365 datasources look usage related. Are there any for service status?
      [14:45] Under previous O365 monitoring setup, many of the existing DataSources timed out when collecting stats from any non-tiny org. There was really no good option (that we found) to expand the timeout to give the collector more time without potentially im