Jeffrey McGovern

  1. +1 I agree - this would be a useful feature
  2. Hey Sarah, I picked it up in the release notes and have already started looking at how to implement some of my existing users under this new type. Thanks for the heads up!
  3. Hey Thomas, You need to call GET: /device/devices/{deviceId}/devicedatasources/{deviceDatasourceId}/instances, retrieve the instanceId from the response, and then pass it into your call above. I think what you are trying to do is the last example at the bottom of:
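A minimal sketch of that two-step lookup in Python. The path template comes from the post itself; the device/datasource IDs and the response shape are placeholders I am assuming for illustration, not something confirmed in the thread:

```python
import json

def instances_path(device_id, device_datasource_id):
    # Path template quoted in the post: lists instances under a device datasource.
    return (f"/device/devices/{device_id}"
            f"/devicedatasources/{device_datasource_id}/instances")

def first_instance_id(response_body):
    # Pull the first instanceId out of the JSON that endpoint returns.
    # The {"data": {"items": [...]}} shape is my assumption for illustration.
    items = json.loads(response_body)["data"]["items"]
    return items[0]["id"] if items else None

# Simulated response standing in for a real GET against the portal.
sample = json.dumps({"data": {"items": [{"id": 12345, "name": "C:"}]}})
print(instances_path(42, 7))       # /device/devices/42/devicedatasources/7/instances
print(first_instance_id(sample))   # 12345
```

Once you have the instanceId, it slots into the data-fetch call you were already making.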
  4. I think that this request dovetails with the request to ACL announcements to a role, but it can stand alone. I would like to have service announcements logged in a central location rather than just the UI popup banner. Something like Settings --> Account Information --> Platform Messages. That way, as an MSP, if I need to send any information to the user base, there is an authoritative source to pull from when the information relates to them. It will also be extremely helpful for when you look at the pop-up banner, tell yourself you need to write that information down, never do, and then can't recall it when you need it. (Not that I have ever done that. Heard it from a friend.)
  5. Hey Grant, Have you figured out how to do this? Probably not the most elegant solution, because it will most likely involve changing an existing process, but one way to solve this would be to create a device group (something like _implementation_) and then use Advanced Netscan to dump all new assets into that group rather than unmonitored. That would allow you to run a standard device inventory report against the group. Not elegant, but it would get you from here to there. The only other thing I can think of would be to submit a Feature Request to be able to add a custom property to the unmonitored group that you could pick up with the reporting engine.
  6. I agree that there needs to be a mechanism to hide these from portions of the user base.
  7. Hey James, Another way to do this might be with a PropertySource. If you take a look at this article in the 'From the Front' forum, you could do something like create an 'snmp.working=' tag that would automatically populate once AD runs. That would then allow you to validate based on monitored protocol rather than asset type. The WMI example is pretty easily modified (pull something like the Windows version) to create a 'wmi.working=' tag to catch that as well. Cheers!
  8. Problem: I have a datasource that collects information from the LogicMonitor API. For this to work correctly I need a valid user on the LM platform with a valid API token. I can see two potential paths forward.
Case 1 - Use my existing account as the datasource author with my API token. The big downside is that if I leave the company for any reason and my account gets disabled, this datasource stops working, and it is customer facing. This is probably not so good.
Case 2 - Create a 'service account' inside LogicMonitor that has its own API token, so that if any one human leaves the company there really is not a big problem. The issue is that this user has a username and a password that grant it access to the UI under all the permissions of its role, even though this account should/will never be used within the UI. This also creates a potential security problem, because the password will most likely never be rotated: as long as the API user and token work, it will simply sit there.
Request: Be able to create a new user type of 'API only' which never has access to the UI, so that none of the UI-specific information has to be set for the account. This would remove the need for any of this information under that account: First/Last name/Email/Password/Force password change/2-factor/Phone/SMS/SMS Email format
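For context on what such an API-only account would actually hold, here is a sketch of building an LMv1 token authentication header in Python. The access ID/key are placeholders, and the signature recipe reflects my reading of LogicMonitor's published LMv1 scheme; treat it as illustrative rather than authoritative:

```python
import base64
import hashlib
import hmac

def lmv1_auth_header(access_id, access_key, http_verb, resource_path,
                     epoch_ms, data=""):
    # LMv1 signing as I understand it from LogicMonitor's REST API docs:
    # HMAC-SHA256 over verb + epoch + body + path, hex-encoded, then base64.
    message = f"{http_verb}{epoch_ms}{data}{resource_path}"
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# Placeholder credentials -- exactly the two values an 'API only' user type
# would need, with no UI password, 2-factor, or phone/SMS details at all.
header = lmv1_auth_header("AAA111", "secret-key", "GET",
                          "/device/devices", 1700000000000)
print(header)
```

The point of the feature request is that the ID/key pair above is the whole identity: nothing else on the account would ever be exercised.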
  9. Use Case (Provider): I am a provider with a substantial number of customers being monitored by the platform. A single customer requests that monitoring be suspended for 14 days while they do a physical DC move. The move will be 1:1, so all systems will come back up in the same logical location and only change physical locations. Requests are filed, meetings are had, the day comes to move, and the NOC turns alerting off for the customer. Uneventful days go by, and on the day alerting is supposed to be turned back on, a regional event happens that the provider NOC is responding to for other customers (you can insert any normal, well-defined chaos that happens in a NOC here), and alerting does not get re-enabled for the customer with the physical DC move.
Use Case (Enterprise): The Oracle team notifies the NOC that a weekend upgrade will be happening on the Oracle customer, and the upgrade team does not want to be notified of any alarms, as they will have their hands full with the upgrade; they will call back when it is complete. The NOC turns alerting off, and the upgrade team never calls to say that they are done working.
Request: Much like SDT, enable calendaring and scheduling as an option for enabling/disabling alerting, as a backup in case manual processes fail.
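As a sketch of the time-boxed behavior being asked for, here is how a 14-day, auto-expiring group SDT payload might be built for the REST API. The field names ("type", "deviceGroupId", epoch-millisecond timestamps) are my assumptions about the SDT endpoint, not something stated in the post:

```python
from datetime import datetime, timedelta, timezone

def device_group_sdt(group_id, start, days):
    # Builds a one-time SDT payload covering the whole maintenance window.
    # Because the window has a hard end, no one has to remember to turn
    # alerting back on -- which is the failure mode in both use cases above.
    end = start + timedelta(days=days)
    to_ms = lambda dt: int(dt.timestamp() * 1000)  # epoch milliseconds
    return {
        "type": "DeviceGroupSDT",          # assumed type string
        "deviceGroupId": group_id,         # assumed field name
        "startDateTime": to_ms(start),
        "endDateTime": to_ms(end),
        "comment": "Physical DC move - auto-expires",
    }

start = datetime(2024, 3, 1, tzinfo=timezone.utc)
payload = device_group_sdt(321, start, days=14)
print(payload["endDateTime"] - payload["startDateTime"])  # 1209600000 (14 days in ms)
```

The request is essentially to have the same expiry semantics available for the alerting on/off toggle itself.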
  10. Use Case: I am a provider with loads of customers using the system. Operators working on MACs must navigate the flat user tree looking for specific users for any number of a myriad of reasons. One of the main use cases for this feature is avoiding mix-ups when multiple customers have very similar user names differentiated only by domain (think Jim Smith as an example).
Request: Looking for a way to "containerize" users in the User Access -> Users tab. It would be extremely cool and helpful to be able to add groups on this screen to contain a customer's users in a single bucket. I would love for this to occur by adding metadata/tags to a user when it is created. I would then create groups just like we do for devices and auto-assign users to them based on the metadata entered for the user. If the auto-assign is robust enough, you could make this really interesting by using a standard query of metadata + role to get a structure like:
MSP
  Admin
  Operator
  Cst 1
    Role 1
      Users
    Role 2
      Users
  Cst 2
    Role 4
  Cst N
    Role N
      Users
  11. Looking for a simpler way to create standardized dashboards during the onboarding process.
Use case: I am an MSP who builds three default dashboards for any customer who signs up for my service (three is arbitrary, but this story gets worse as the number grows). As an initial step the provider creates these dashboards using widget tokens and places them in a non-customer-facing group so that the provider onboarding operators can clone them. Today this means that the operator must:
1. Go into the dashboard section of the UI
2. Create a new dashboard group for the new customer
3. Clone dashboard 1, update the group, and then update widget tokens for dashboard 1
4. Clone dashboard 2, update the group, and then update widget tokens for dashboard 2
5. Clone dashboard 3, update the group, and then update widget tokens for dashboard 3
6. Wash, rinse, repeat
Request:
UI: Move widget token metadata definition to the group level as an option, and then allow dashboard cloning at the group level. This changes the process above to the following, regardless of how many dashboards exist:
1. Go into the dashboard section of the UI
2. Create a new group
3. Clone the dashboard group, change the widget tokens, change the group, and go about their business
API: This would also simplify the onboarding code dramatically, as there would be no need to iterate through the existing group to gather data before the code could continue with the cloning process.
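Today's per-dashboard loop can be sketched like this. The template list, group ID, token names, and payload shape are illustrative assumptions, not the actual dashboard API schema:

```python
def clone_payloads(template_dashboards, new_group_id, widget_tokens):
    # Mirrors the manual steps: for each template dashboard, produce one
    # clone pointed at the customer's new group with its tokens swapped in.
    # Payload field names here are assumptions for illustration only.
    clones = []
    for dash in template_dashboards:
        clones.append({
            "name": dash["name"],
            "groupId": new_group_id,
            "widgetTokens": [
                {"name": name, "value": value}
                for name, value in widget_tokens.items()
            ],
        })
    return clones

templates = [{"name": "Overview"}, {"name": "Network"}, {"name": "Alerts"}]
clones = clone_payloads(templates, new_group_id=88,
                        widget_tokens={"defaultDeviceGroup": "Customers/Acme"})
print(len(clones))           # 3 -- one clone call per template today
print(clones[0]["groupId"])  # 88
```

Group-level tokens plus group-level cloning would collapse this loop into a single clone call, which is the API half of the request.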