This ties in to my last post, "Do you prefer agentless or agent-based monitoring? Are there situations where you prefer one over the other?" I prefer agent-based solutions for getting monitoring data across WAN links. However, there are other methods, mostly built around a network design that uses local syslog/collection servers which then forward all local data to a central collection system. If you have multiple sites or remote nodes to collect data from, how do you handle transmitting that data from those disparate locations to some form of centralized collection system across the internet?
It’s a rare luxury to only have to monitor and report on devices that all reside on the same private network. With the prevalence of cloud servers and the common scenario of remote offices and multiple sites, monitoring often requires passing data across WAN links, whether that’s the public internet or private leased lines connecting multiple offices.
So, back to the topic of agents: if you use agentless monitoring, you’re probably relying on pulling information via SNMP or WMI down to a monitoring node. However, you really don’t want to expose those interfaces to the public internet (SNMPv3 does support encryption, which mitigates some of the concern). In a case like that you have some options:
- A VPN connection from each remote node back to a central monitoring collector. This is often the case if the remote nodes are on foreign networks like POS devices or appliances that you monitor for customers.
- A collector on the same LAN as the remote nodes that polls them for information and then connects back to a main monitoring node via a VPN or some other encrypted data stream (see the polling sketch after this list). This is popular for branch offices.
- A point-to-point VPN connection from the gateway of the remote LAN back to the main collector. This is also popular for branch offices.
- Direct polling access from the monitoring node to the client nodes across the internet. Kinda scary, but can be done with proper firewall rules.
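For the local-collector option (the second bullet above), the LAN-side poll itself is just SNMP. Here’s a rough sketch of what that poll might look like, assuming pysnmp 4.x’s synchronous hlapi; the device address, SNMPv3 user, and keys are placeholders, and forwarding the result back over the VPN is left out:

```python
from pysnmp.hlapi import (
    SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

# SNMPv3 with auth + privacy, so nothing crosses the wire in the clear.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        UsmUserData("monitor", "auth-pass", "priv-pass",          # placeholder credentials
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget(("10.0.5.20", 161)),                    # placeholder device on the remote LAN
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),           # sysUpTime.0
    )
)

if error_indication:
    print("poll failed:", error_indication)
elif error_status:
    print("SNMP error:", error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```

The local collector would batch results like this and relay them over the encrypted link, so the SNMP traffic itself never leaves the branch LAN.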
Then there’s the case of using agent-based monitoring for remote nodes. With agent-based monitoring, you often have the option to compress and encrypt the collected data and send it to any remote collector, whether it’s on the same LAN or halfway across the world.
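To make the push side concrete, here’s a minimal sketch of what such an agent could do, not modeled on any particular product: gather a sample, compress it, and ship it over TLS to the central collector. The collector hostname, port, and CA file are assumptions.

```python
import json
import socket
import ssl
import time
import zlib

COLLECTOR_HOST = "collector.example.com"   # placeholder central collector
COLLECTOR_PORT = 8443                      # placeholder port

def collect_sample():
    # Stand-in for whatever the agent really gathers (CPU, disk, application metrics).
    return {"host": socket.gethostname(), "ts": time.time(), "load1": 0.42}

def ship(sample):
    payload = zlib.compress(json.dumps(sample).encode())           # compress before the WAN hop
    ctx = ssl.create_default_context(cafile="collector-ca.pem")    # trust only our collector's CA
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as sock:
        with ctx.wrap_socket(sock, server_hostname=COLLECTOR_HOST) as tls:
            tls.sendall(len(payload).to_bytes(4, "big") + payload)  # simple length-prefixed frame

if __name__ == "__main__":
    ship(collect_sample())
```

Because the agent initiates the outbound connection, nothing on the remote node needs to be reachable from the internet at all.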
When working with cloud server instances, my preference is agent-based monitoring of the OS and applications, with the agents sending data back to a collector on the public internet that uses tight firewall rules to allow only known nodes. Of course, with the right cloud provider, you could build a private backend network with a monitoring node that collects data from your instances and then forwards it on via a VPN or other secure channel.
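On the collector side, those tight firewall rules can be backed up in the application itself. Below is a rough sketch, assuming mutual TLS plus a source-address allowlist; the certificate paths and IP addresses are placeholders, and it glosses over partial reads for brevity.

```python
import json
import socket
import ssl
import zlib

ALLOWED_SOURCES = {"203.0.113.10", "203.0.113.11"}   # placeholder: the only nodes we accept

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("collector-cert.pem", "collector-key.pem")   # placeholder server cert/key
ctx.verify_mode = ssl.CERT_REQUIRED            # require a client certificate (mutual TLS)
ctx.load_verify_locations("agent-ca.pem")      # CA that signed the agents' certificates

with socket.create_server(("0.0.0.0", 8443)) as server:
    while True:
        conn, addr = server.accept()
        if addr[0] not in ALLOWED_SOURCES:     # application-level check on top of the firewall
            conn.close()
            continue
        with ctx.wrap_socket(conn, server_side=True) as tls:
            size = int.from_bytes(tls.recv(4), "big")
            sample = json.loads(zlib.decompress(tls.recv(size)))  # matches the agent's framing
            print("sample from", addr[0], sample)
```

The allowlist and client-certificate check are layered on purpose: even if a firewall rule slips, an unknown or uncredentialed node still can’t hand the collector data.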
So how do you handle the collection of remote monitoring information? When do you choose the different options available? VPNs on individual nodes? Agents? Push or pull? Public interfaces exposing polling information?