Michael Dieter

  1. R2NNTG

     My group supports several hundred switches across all of our buildings and locations, but we don't always receive reliable or timely information about events that change the local density of wired connectivity needed to support constituents and their various devices. This frequently results in a significant amount of wasted switch-port capacity and wasted electricity: a specific location is vacated or its use changed, we are not told, and a 48-port switch stays where 24 ports or fewer would be sufficient. Worst of all is the scenario where we purchase additional switch hardware to support growth or expansion in one location while under-utilized switch hardware needlessly burns power and annual maintenance dollars elsewhere.

     The datasource referenced here is expected to help this situation. It is not a switch-by-switch, port-by-port uptime measurement; rather, it shows the percentage of ports with link (i.e., in use) over a range of time. Its value is expected to come from providing a longer-term trend of ports in use for specific locations, highlighting those locations where switches can be removed due to persistent over-capacity, but it can certainly be effective for shorter-term use-cases as well. It is extremely well-suited to presentation on a dashboard.

     Note that this datasource was built for Juniper switches and includes specific interface filtering, so you will need to adjust that (along with the Applies To and any thresholds) to meet your needs, but it is most likely not difficult to substitute other vendors' MIBs/OIDs into it. Much thanks to Josh L on the pro-services team, who did all the real development work.
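     The "longer-term trend" idea can be sketched in a few lines of Python outside of LogicMonitor as well. This is a minimal illustration, not the datasource itself; the location names, daily percentages, and the 50% threshold are all hypothetical:

```python
# Flag locations whose average percentage of ports with link stays below a
# threshold over the sampled period -- candidates for switch consolidation.
# Sample data is hypothetical; a real feed would come from the datasource.

def underutilized_locations(samples, threshold=50.0):
    """samples: {location: [daily % of ports with link]} -> list of
    (location, average %) pairs below the threshold, lowest first."""
    flagged = []
    for location, percentages in samples.items():
        avg = sum(percentages) / len(percentages)
        if avg < threshold:
            flagged.append((location, round(avg, 1)))
    return sorted(flagged, key=lambda pair: pair[1])

samples = {
    "BldgA-Rm101": [88.0, 90.0, 85.0],   # healthy utilization
    "BldgB-Rm210": [20.0, 18.0, 22.0],   # vacated space, switch removable
    "BldgC-Rm005": [45.0, 52.0, 47.0],   # borderline
}
print(underutilized_locations(samples))
```

     Averaging over a long window, rather than alerting on single polls, is what makes the over-capacity signal trustworthy enough to justify pulling hardware.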
  2. Juniper Switch HW Info

    Hey Andrey, this use-case came to mind immediately when I first saw the release notes announcing propertySources. The reasons for not actually using a propertySource include: 1) as a dense, slow and dim-witted "network guy" I've had a hard time clearly and accurately understanding propertySources; 2) my level of Groovy-scripting competence is zero (see #1); and 3) I thought it would be easier to modify an existing datasource than to start from scratch (so basically I cheated, especially since Johnny Y took all my development notes and did the actual modification). I think it would be cool, and hope that somebody does post this functionality as a propertySource; in the meantime, hopefully other Juniper customers might get some value out of this one.
  3. Juniper Switch HW Info

    Name: Juniper Virtual Chassis_lmsupport
    Displayed As: Juniper Virtual Chassis_HW Info
    Locator Code: YWWE74

    I modified (actually, with help from Support) the datasource Juniper Virtual Chassis so that values originally displayed in the UI only as descriptive text on per-instance mouse-overs are now presented as properties. Juniper switches present difficulty to the datasource Device_Component_Inventory; this modification allows a single-step way to associate with both standalone switches and virtual chassis, while getting inventory data on a per-member basis instead of just from whichever switch happens to be the virtual-chassis routing engine at the time of polling. And it comes with the huge bonus that using ILP as the instance grouping method produces a great presentation in the UI.

    It collects from jnxVirtualChassisMemberEntry (there are other values there that may be of interest to you, so walk it) for each member of a virtual chassis, but it does require that you enter any specific info you would like to see via the command <set...member n location [any desired text value]>. We chose to enter building and room location along with asset tag number, and it is stored by the property auto.vcmemberassetinfo. Also, you will need to configure even standalone switches as a virtual chassis: <set...member 0 location [any desired value]>.

    The descriptive text in the original datasource is not reportable, but properties are, and this lets us create a great report using the Device Inventory report. For each individual switch in a virtual chassis it gives us:
    device name, building, room number and asset tag
    serial number
    HW model info
    Junos version

    Special thanks to Support Engineer Johnny Y for doing most of the heavy lifting (after recognizing that I was trying to pound in a screw with a hammer), all the other Support Engineers who patiently answered my questions in a series of cases, and to CSM Kyle for kickstarting me.
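    As a side note, once the member location strings are set on the switch, a plain snmpwalk of the member table shows them, and turning that text into per-member values is simple to script. The Python sketch below parses typical snmpwalk output; the exact MIB object name (jnxVirtualChassisMemberLocation) and output formatting are assumptions, so adjust the pattern to whatever your snmpwalk actually prints:

```python
import re

# Parse snmpwalk output of the virtual-chassis member location column into
# {member-id: location string}. The object name in the pattern is an
# assumption for illustration; match it to your actual walk output.
LINE = re.compile(
    r'jnxVirtualChassisMemberLocation\.(\d+)\s*=\s*STRING:\s*"?([^"]*)"?'
)

def member_locations(walk_output):
    locations = {}
    for line in walk_output.splitlines():
        match = LINE.search(line)
        if match:
            locations[int(match.group(1))] = match.group(2).strip()
    return locations

walk = '''\
JUNIPER-VIRTUALCHASSIS-MIB::jnxVirtualChassisMemberLocation.0 = STRING: "BldgA Rm101 AT-0042"
JUNIPER-VIRTUALCHASSIS-MIB::jnxVirtualChassisMemberLocation.1 = STRING: "BldgA Rm101 AT-0043"
'''
print(member_locations(walk))
```

    This mirrors what the modified datasource does inside LogicMonitor: the per-member location text becomes a structured, reportable value instead of mouse-over decoration.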
  4. Juniper Router HW Info

    Name: Juniper Router Inventory
    Displayed As: Juniper MX and SRX HW Info
    Locator Code: RJEAXH

    I cloned Device_Component_Inventory and modified it to work with Juniper MX and SRX devices, since D_C_I didn't work with Juniper out of the box. There is more information available within Juniper's jnxBoxAnatomy if you'd like to modify further, but serial number and box description were most valuable to us. We added snmp sysContact, which is how/where we've chosen to record Building Location and Asset Tag number in a router's local configuration file (we wanted to reserve snmp sysLocation for future use), represented that with a property called "auto.assetinfo", and then added the system.sysinfo property.

    We've used this very effectively with the Device Inventory Report, which gives us inventory with the following:
    Device Name (in LogicMonitor)
    Serial number
    Building, room number, asset tag
    HW Description
    Junos version

    Thanks very much to my CSM Kyle and to each of the Support Engineers who provided answers to questions that helped me put this together.
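    If you adopt the sysContact trick, it helps to pick one consistent delimiter so the string can later be split back into report columns. The sketch below assumes a "|"-delimited layout; the post doesn't specify the actual format used for auto.assetinfo, so treat the field order and delimiter as hypothetical:

```python
# Split an snmp sysContact string into inventory fields. The "|"-delimited
# "building | room | asset tag" layout is an assumption for illustration.

def parse_assetinfo(sys_contact):
    fields = ["building", "room", "asset_tag"]
    parts = [p.strip() for p in sys_contact.split("|")]
    return dict(zip(fields, parts))

print(parse_assetinfo("Library | 210 | AT-00912"))
```

    Whatever convention you choose, enforcing it in the router config is what makes the resulting Device Inventory Report clean.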
  5. Way to track switch port capacity?

    Any ideas, or in-use methods, for how to track switch port capacity? By this I mean tracking how many ports are actually being used now and across historical time-frames. Right now this is a labor- and time-intensive manual process, and seeing any kind of trend is basically impossible. But it has an awful lot of value for a few reasons:
    Reclaiming ports no longer in use to support additional wired connections is far less expensive than adding a new switch when demand for new wired connections is generated --> new switch = purchase price + maintenance support + electricity + install labor.
    Identifying wired connections no longer in use can permit switch consolidation --> reducing the number of switches deployed lowers electricity and maintenance-support bills, and has a big effect on reducing cost in the HW-refresh cycle.
    Switches are dense devices from a monitoring perspective; with fewer of them, collector deployments and resources can possibly be consolidated, reduced and/or simplified.

    I've cloned and customized the Interfaces datasource, adjusting the filtering to return ports that are down (the default Interfaces filter returns, of course, ports that are up). But this is where I have gotten stuck. Some math (addition, division, multiplication) is now required to arrive at a "capacity percentage": (ports up / total port count [i.e. ports down + ports up]) * 100 = capacity percentage. And then some way to keep a historical record in graphical or tabular format is needed. I know this functionality exists out there in the market; thought I'd ask here to see if anyone has figured out how to do it on their own before exploring a feature request. Thanks.
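    The capacity-percentage formula described above is a one-liner anywhere you can do arithmetic; here it is as a small Python sketch (the 48-port example counts are illustrative):

```python
# Capacity percentage exactly as described above:
# (ports up / (ports up + ports down)) * 100

def capacity_percentage(ports_up, ports_down):
    total = ports_up + ports_down
    if total == 0:          # guard against an empty instance list
        return 0.0
    return round(ports_up / total * 100, 1)

# e.g. a 48-port switch with 18 links up:
print(capacity_percentage(18, 30))   # -> 37.5
```

    Inside LogicMonitor, the same arithmetic would typically live in a complex datapoint so the percentage is stored and graphed over time, which covers the historical-record half of the question.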
  6. VRRP status indicator: master or backup

    shamelessly replying to my own topic (even if its text is not totally accurate: there are OSPF and BGP datasources which provide valuable information about adjacencies and peers, respectively), just to bring it back to the top of the list in the hope that others might see it, find value in it and give it a vote.
  7. Customized Juniper Interface datasources

    I've published these customized interface datasources for use with Juniper Networks' switches and routers. Combined with additional snmp configuration of the devices, these have helped make Juniper devices a little easier to deal with. I think there is much room for additional customization to permit further grouping.

    NOTES:
    In all three, the Collection schedule is unchanged but the Discovery period has been reduced to the minimum (10 minutes), which may not be necessary for all use-cases/environments.
    None of the default datapoints (normal and complex) were removed or edited in any way.
    "snmp64_If_juniper_VCP_interfaces" does not capture every single VCP port in a VC larger than 2 members. Additional investigation is needed to understand how Juniper makes VCP accessible via snmp, and whether or not it is possible to discover and monitor every such instance.

    snmp64_If_juniper_logical: MM6C96
    snmp64_If_juniper_physical: WZ2AZC
    snmp64_If_juniper_VCP_interfaces: YZ42H7
  8. Juniper Netflow configuration examples

    OK, I've finally had a chance to validate this configuration and I can tell you that it works, with a few minor alterations... see below. I have deployed this on an MX-80 running Junos 13.3R9.13. One other relevant addendum to my original "you need to know your MX HW & SW in detail" caveat: I have 20 x 1GE and 2 x 10GE MIC-3D powering my physical interfaces; if you have anything else, consult Juniper documentation for sampling support information. Good luck with that.

    set chassis fpc 1 sampling-instance NETFLOW-INSTANCE
    ##### The above statement is valid for MX-240, MX-480, and MX-960 HW, though you will need to specify the fpc you want to use. Also, there are very likely some limitations with regards to the number of sampling instances per fpc that you can create, the total number of instances that can be configured per chassis, and whether any single instance can span multiple fpc.
    ##### The below statement is valid for MX-80 HW. Given that the MX-80 has a single tfeb, there are almost certainly much stricter limitations that govern the number and deployment of sampling instances.
    set chassis tfeb0 slot 0 sampling-instance NETFLOW-INSTANCE
    ##### From here down is the same regardless of MX model, though of course the physical and logical interfaces will vary.
    set chassis network-services ip
    set services flow-monitoring version9 template LM-V9 option-refresh-rate seconds 25
    set services flow-monitoring version9 template LM-V9 template-refresh-rate seconds 15
    set services flow-monitoring version9 template LM-V9 ipv4-template
    set forwarding-options sampling instance NETFLOW-INSTANCE input rate 1 run-length 0
    set forwarding-options sampling instance NETFLOW-INSTANCE family inet output flow-server 192.168.1.2 port 2055
    set forwarding-options sampling instance NETFLOW-INSTANCE family inet output flow-server 192.168.1.2 source 192.168.10.1 source-address 192.168.10.1
    set forwarding-options sampling instance NETFLOW-INSTANCE family inet output flow-server 192.168.1.2 version9 template LM-V9
    set forwarding-options sampling instance NETFLOW-INSTANCE family inet output inline-jflow source-address 192.168.10.1
    set interfaces ge-1/3/3 unit 2630 family inet sampling input
    set interfaces ge-1/3/3 unit 2630 family inet sampling output
  9. LogicModule Exchange -- Public Beta

    We don't really do anything active with LogicModules beyond a few cases of cloning defaults, editing them and re-applying, but I think we'd at least like to be on the radar as far as the Exchange goes, as we would be interested in the possibility of contributing what little we might be able to. Feel free to contact us anytime.
  10. Juniper Netflow configuration examples

    Hey James, as I said, this takes a while to work through all the moving parts. I just recently completed an upgrade to JUNOS 13.3Rx, will be attempting this soon, and I'm not really looking forward to it. What I pasted previously was from a working configuration that exported IPFIX from an MX240; I don't have my notes with me, but I do recall that IPFIX and v9 were nearly identical procedures. As you reference, the HW difference between the 80 and the 240 does come with slight configuration differences. I expect to post again in the near future once I get it working (or maybe it won't work?), but in the meantime this Juniper link may help. Note also that LogicMonitor's netflow configuration documentation page has specific caveats with regards to v9 and templates. Good luck, and post details of your results if/when you get the chance. https://www.juniper.net/documentation/en_US/junos13.3/topics/task/configuration/inline-flow-monitoring.html
  11. SNMP tuning with Juniper Networks devices

    As a follow-up, see below for ideas that I should have included the first time. These are good ways to contribute towards improved Collector efficiency and performance, and to potentially moderate the demands that polling exerts on your Juniper devices (or any devices, really):

    Review the datasources that associate with your devices to ensure they provide only information that has value, then customize them accordingly --> adjust Discovery schedules and eliminate any datasources or underlying datapoints that aren't providing value. Use caution: don't delete a datapoint supporting a calculation elsewhere!

    Review datasource Collection intervals (especially high-density, multi-instance ones) and increase them from their default values where possible (either by globally editing the datasource, creating customized versions with different collection intervals, or setting group/device properties specifying the collection interval for that datasource). For example, maybe it's acceptable to poll the 48 10/100/1000 switch interfaces that connect end-user devices every 4-5 minutes, while the 2 fiber uplinks require 2-minute visibility. If you have 100 (or even 50 or 10) switches, this can make a big difference. LogicMonitor gives you many different ways to combine settings to achieve this, so think it through to come up with the way best suited for your environment. Just don't forget that changes in collection intervals will impact Alert Thresholds, so make sure you account for that.
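    To make the interval suggestion concrete, here is a back-of-the-envelope Python sketch of the polling-load arithmetic. The fleet size, port counts, and intervals are illustrative numbers, not measurements:

```python
# Rough SNMP polling-load arithmetic for the interval change suggested above:
# instance polls per hour across a fleet, before and after raising the
# collection interval on end-user access ports. All numbers are illustrative.

def polls_per_hour(instances, interval_seconds):
    return instances * 3600 // interval_seconds

switches = 100
access_ports = 48 * switches    # 10/100/1000 end-user ports
uplinks = 2 * switches          # fiber uplinks, kept at 2-minute polling

before = polls_per_hour(access_ports + uplinks, 120)            # everything at 2 min
after = polls_per_hour(access_ports, 300) + polls_per_hour(uplinks, 120)
print(before, after)   # -> 150000 63600
```

    Cutting hourly instance polls by more than half, while keeping 2-minute visibility on the uplinks that matter, is exactly the kind of trade-off worth sizing before editing datasources.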
  12. I started working on the best practice of implementing Collector dashboards, which revealed a number of undesirable conditions I hadn't previously been aware of, and suggests that I need to go through some Collector tuning exercises. LogicMonitor has an excellent reference page on this topic: http://www.logicmonitor.com/support/settings/collectors/tuning-collector-performance/ For those of you who are monitoring any density of Juniper devices: 1) I feel your pain; 2) I would recommend using this Juniper reference to complement LM's instructions: https://www.juniper.net/techpubs/en_US/release-independent/nce/information-products/topic-collections/nce/snmp-best-practices/junos-os-snmp-best-practices.pdf And another note: you can look for my previous thread for details if you want, but I do not at all recommend using SNMPv3 to monitor either Juniper Virtual Chassis (switch stacks) or Juniper routers with multiple Routing Engines. If you've somehow managed to do this consistently and successfully, I'd appreciate a post with your solution.
  13. Working through the best practice of creating collector dashboards. The various data-collecting tasks provide a wealth of info that can be customized as desired into widgets for such a dashboard. But there doesn't seem to be anything that can provide visibility into the underlying collector mechanisms (tasks, processes, threads, cpu, mem, etc.) that support netflow operation. It would probably be nice to see such info, and to be able to put it onto a collector dashboard, particularly since it's best to pipe netflow to a dedicated collector.
  14. Working through the best practice of setting up collector dashboards: instead of navigating to Settings-->Collector, would anyone find value in a dashboard widget that displays the collector version? Perhaps even some way to display the current running version, other available versions and their status (R-GD, O-GD, EA), and timestamps of version upgrades that have occurred?
  15. Juniper Netflow configuration examples

    Hey Mike, despite the differences between my posted <set> commands and the <set> commands you issued to generate your config, you have a perfectly valid configuration and your results confirm that. What your config and results also demonstrate is that <polling-interval> and <sampling-rate> are variables whose values are not "one size fits all" so I highly recommend consulting Juniper documentation, experimenting, and then reviewing the results you see in LM to arrive at what works best for you and your needs.