Vitor Santos

Members
  • Posts

    166
  • Joined

  • Last visited

  • Days Won

    36

Reputation

115 Excellent

6 Followers

About Vitor Santos

  • Rank
    Thought Leader

Recent Profile Visitors

505 profile views
  1. Hello, Due to recent requirements imposed by a customer of ours, we've refined the VeloCloud module suite. We've tweaked the modules to authenticate with a generated API token only (using the property velo.apitoken.key) & to disregard velo.user/velo.pass. The reason behind this is that the customer didn't want to share any credentials to their infrastructure (even read-only ones), since they didn't want us to have GUI access. The modules collect the same exact metrics as the OOTB ones; the only difference is the authentication.
     In addition to the original suite, we've also created the addCategory_VeloCloudAPI_TokenStatus PropertySource. This allows the token to be renewed & the resource on LM to be updated automatically (without any interaction from the end user). This automation is required because VMware only allows tokens that are valid for up to 12 months, & we don't want to miss/forget renewing it, since that would cause monitoring to fail. With that in mind we came up with this PropertySource.
     However, to use this PropertySource, velo.user needs to be mapped (only the user) & that user has to have more than read-only permissions (the required privileges are mentioned in their Swagger page for the different API calls). The PropertySource requires the following privileges assigned to the user: CREATE - ENTERPRISE TOKEN - READ - UPDATE.
     The remaining modules work with only velo.apitoken.key mapped; this is just if you want to renew the token in an automated way & make use of that extra PropertySource (we've coded it to renew the token if it is <10 days from expiring).
     Despite those still being in security review - here you go:
     Datasource(s)
       • VMware_VeloCloud_EdgeLinkMetrics - MTWGY4
       • VMware_VeloCloud_EdgeLinkEventQuality - J7WKEF
       • VMware_VeloCloud_EdgeLinkHealth - AWTYNE
       • VMware_VeloCloud_EdgeHealth - 4DPLLC
     Property Source(s)
       • addCategory_VeloCloudAPI - RTCDA2
       • addERI_VMware_VeloCloud - ECDHXG
       • addCategory_VeloCloudAPI_TokenStatus - EER7JK
     Topology Source(s)
       • VMware_VeloCloud_Topology - CZ7AKJ
     Hope this helps in case anyone has the same needs. Thank you!
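     As a rough illustration only, the "renew when <10 days from expiring" check that the TokenStatus PropertySource performs could look something like the sketch below. This is Python rather than the actual module's script, and the function names & the way the expiry timestamp is obtained are assumptions, not the real implementation (in the real module the expiry would come from the VeloCloud API, and renewal would call back into it & update the LM property):

```python
from datetime import datetime, timezone

# Threshold from the post: renew when fewer than 10 days remain (assumption:
# the real module may compute this differently).
RENEWAL_THRESHOLD_DAYS = 10

def days_until_expiry(expiry, now=None):
    """Whole days between 'now' and the token's expiry timestamp."""
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days

def should_renew(expiry, now=None):
    """True when the token is inside the renewal window."""
    return days_until_expiry(expiry, now) < RENEWAL_THRESHOLD_DAYS
```

     When `should_renew` is true, the PropertySource would request a new token from the Orchestrator (using the mapped velo.user with the CREATE/READ/UPDATE enterprise-token privileges mentioned above) & write it back to velo.apitoken.key.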
  2. Thanks for the quick reply. Is there any ETA?
  3. Hello, As the title suggests... Are there any modules developed for Dell EMC PowerStore? I've already gone through the Exchange & found nothing... Thanks!
  4. This would DEFINITELY be appreciated (it's something we've also raised to our CSM). It would ease the management & implementation of custom requests across our different clients for certain datasources, without having to create dedicated DSs when those requests come.
  5. Hello @Jeroen Gouma, I don't know if this is the best solution but we had a similar need in the past. We have a datasource that discovers different instances (one per client, essentially). Since Roles don't allow you to drill down to instance-level permissions, instead of giving any access to the actual resource that owns the datasource, we created 'Services' that only select the specific instances/metrics we want (a group of Services per customer). Then, in 'Roles', you can give read/manage permissions for that Service only (on a per-customer role basis). Not sure if my explanation was clear enough but that's how we ended up doing it. Thanks!
  6. I'll reply to your email (will include our CSM as well for his awareness). Thanks!
  7. Well yeah, I could argue that as an enterprise user that could cause issues as well, but you got my point 🙂 I agree: if this doesn't get solved, then have an alternative solution. Exactly what you stated would suffice (at least it would give us the option to choose what happens).
  8. As an MSP we use netscan & expert mode all the time. That's not in question; however, we have a lot of people using LM & sometimes the 'Wizard' is used to add 1 device or so instead of 'Expert'. This is a bug & should be fixed... Regardless of whether we're an MSP or not, it doesn't make sense to populate the properties of a certain device on the root group... I'm also missing the use case for this logic, are you aware of it? This is just my opinion though. We rarely use the Wizard (myself, I don't use it at all); however, I can't control our users. We already brought this to everyone's attention, but if someone forgets, the issue is still there & might impact something. Therefore, & to prevent human error, this should be considered & fixed on LM's end.
  9. Hello, We already raised this but it was a while ago & it hasn't been fixed yet. When adding device(s) via the 'Wizard' option, properties that are set for the device appear on the ROOT group as well (regardless of the group we assign that device to). This is very concerning & doesn't make any sense. Is there any ETA for this to be fixed? It doesn't make sense to add a device (with properties at its level) & then map those properties at the ROOT level (making all the SUB groups inherit them - as an MSP that represents DOZENS of clients that have nothing to do with each other). Thanks!
  10. Yeah, this is a HUGE missing feature. This is common sense & needed. We should be able to see whether the device/instance/etc. was in SDT when the specific alert got triggered. For troubleshooting purposes it's very helpful. Like @DanB said, we don't care whether it's in SDT right now, that's silly LOL. We can see that right away on the GUI logo, no need to have a column for that. We've already brought this to our CSM as well, let's see if it gets to dev ears.
  11. There should be a field for the syslog time then, or it should be included in the message itself. But the actual alert date should be the time it hit LM (because that's the actual time we received the notification - not 4/5 hours ahead of the REAL time in the portal).
  12. We have a Trap server in our environment & all our clients send Traps to it. The problem is that it belongs to our old solution architecture (which we like a lot) but it'll be shut down soon. Our upper management wants to get rid of those servers & of all the VPN tunnels we have from our DCs to our clients' infra (which allowed us to send Traps to a single spot). That's why we're ultimately leveraging a workaround for the things that still need to rely on Traps. Never mind, I think we just found a solution. We'll receive the duplicated Traps on LM & set the ES to clear them every 5 minutes (which really doesn't matter, because we're sending them to SNOW & ignoring the CLEAR events there). This way the alert has to be cleared manually on SNOW, & we're also transforming the Event ID (alert ID on LM) to be static (so we only have 1 alarm on SNOW). Thanks!
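     For illustration, the "static Event ID & drop CLEAR events" transformation described in that workaround could be sketched as below. This is a hypothetical Python sketch, not the actual ES/SNOW integration; the field names `status` & `event_id` and the static value are assumptions, not LM's or SNOW's real payload fields:

```python
# Hypothetical static ID so SNOW dedupes everything into a single alarm.
STATIC_EVENT_ID = "lm-trap-alarm"

def transform_for_snow(alert):
    """Drop CLEAR events & pin the event ID before forwarding to SNOW.

    'alert' is a dict with hypothetical field names; returns None for
    events we don't forward at all.
    """
    if alert.get("status") == "clear":
        # CLEAR events are ignored; the alarm is closed manually in SNOW.
        return None
    forwarded = dict(alert)
    forwarded["event_id"] = STATIC_EVENT_ID  # one alarm on the SNOW side
    return forwarded
```

     The design point is that LM's 5-minute auto-clear only exists to keep LM tidy; SNOW never sees the clears, so its single deduplicated alarm stays open until someone closes it by hand.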
  13. Yes, I found a tool this morning that does that. I was able to create a ConfigSource that accommodates our needs; however, that represents a LOT of manual work on each collector, which isn't an optimal solution. We were wondering if there's a property we can set in the collector config for it to dump the received traps to a certain log file. I'll poke support about it. Thanks anyway @Stuart Weenig!