Homebrew NMS: Keep a Database of Network Assets
As you build your own NMS, you'll eventually want to tie together all the data you gather. Perl's DBD::Pg is one way to do that.
Over time, we find it necessary to gather more and more information. An NMS solution may be able to store data in its internal database, but sometimes we need to combine many data sources in a single place. For example, it's extremely useful to grab Layer 2 discovery data and place it into a "host database," which will contain much more information than your discovery application provides. A few top-of-the-head examples are: owner information, serial number, related ticket numbers from your trouble ticketing system, physical location, and much, much more.
We'll get into the details of one proposed database layout in a future article, which will provide more concrete examples of using externally gathered data to enable successful executions of IT processes.
The basic DBD::Pg workflow breaks down into a few steps:
- Define database name, host name of the database server, and your username and password
- Connect to the database
- Execute queries: insert new data, retrieve existing data, or delete data
So let's see this in action. The following example connects to a PostgreSQL database and executes a simple "SELECT" query. Here's the connection part:
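A minimal sketch of the connection step might look like the following. The database name, host, and credentials here are placeholders; substitute your own environment's values.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connection details -- placeholders, adjust for your environment
my $dbname = "hostdb";
my $dbhost = "db.example.com";
my $dbuser = "nmsuser";
my $dbpass = "secret";

# Build the DBD::Pg data source name and connect
my $dbh = DBI->connect("dbi:Pg:dbname=$dbname;host=$dbhost",
                       $dbuser, $dbpass,
                       { AutoCommit => 1, RaiseError => 0 });

die "Connection failed: $DBI::errstr\n" unless defined $dbh;
```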
The above code sets some useful variables, then builds the arguments required by the connect() method. The $dbh variable is a database handle; if it's undefined after the connect() call, the connection did not succeed.
At this point we simply use the handle returned by DBI->connect() to execute queries. First, a database statement must be "prepared." If, for example, you needed to run the same query over and over, you would call execute() in a loop, with different values each time. prepare(), used with question-mark placeholders, is a performance enhancement: instead of sending the entire query text every time, only the new arguments are sent with each subsequent execute() call. We aren't using placeholder substitution in the following example, but you should be aware of the real purpose of prepare().
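A sketch of the prepare/execute/fetch sequence follows; the table name "hosts" and its columns are assumed here for illustration.

```perl
# Prepare a simple SELECT; "hosts" and its columns are example names
my $sth = $dbh->prepare(
    "SELECT switch, switchport, mac_addr, lastseen FROM hosts")
    or die "prepare failed: " . $dbh->errstr;

# Run the query, checking the return value as always
$sth->execute or die "execute failed: " . $sth->errstr;

# Fetch one row of results as a plain Perl array and print it
my @row = $sth->fetchrow_array;
print "Result: @row\n";
```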
The above code executes the SELECT query as commanded and prints the results. It is quite straightforward after reading the DBD::Pg documentation, but there are a few things to point out. The error string is always available (via $dbh->errstr, or globally as $DBI::errstr) and should be printed whenever a DBI call fails; this means you need to check the return value of every call, obviously. The fetchrow_array() method returns an array containing one row of data from the database. It may be wise to check that only one row was returned!
Since @row is an array, it can be accessed by referencing individual indexes, like so: $row[0]. You will know the database schema beforehand, so accessing individual fields inside @row should be easy. If you're going to use the data frequently during processing, it's a good idea to assign each field in the array to a meaningfully named variable.
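For example, with the four-column layout assumed above, unpacking the row into named variables is a one-liner:

```perl
# Unpack the row into named variables, in schema order
my ($switch, $switchport, $mac_addr, $lastseen) = @row;
print "Host $mac_addr last seen on $switch port $switchport at $lastseen\n";
```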
For the sake of the example, we'll assume that we have new knowledge about a host in our database. The mythical database keeps track of switch, switchport, mac_addr, and lastseen (in that order). An easy way to update this record, correcting the switchport, follows.
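A sketch of such an UPDATE, using placeholders so the same prepared statement can be reused for other hosts; the table name, port name, and MAC address are made-up example values.

```perl
# Correct the switchport for a known MAC address, using placeholders
my $update = $dbh->prepare(
    "UPDATE hosts SET switchport = ?, lastseen = now() WHERE mac_addr = ?")
    or die "prepare failed: " . $dbh->errstr;

$update->execute('Gi1/0/24', '00:11:22:33:44:55')
    or die "execute failed: " . $update->errstr;
```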
When processing data returned from the database gets more complex, assigning the @row elements meaningful names saves a lot of time and frustration. See how awkward it gets referring to $row[number] everywhere? Constantly referring back to the column order of the table gets quite tedious.
Inserting new data into the database is actually easier: just build INSERT queries from whatever data you have available. The difficult part is keeping track of which data types and fields belong in a database row. I find it best to paste the output of PostgreSQL's '\d' command right into a comment in my source code, and to name variables after the database fields.
We'll be using the above example to show how easy it is to store, and more importantly, correlate data from multiple sources in a single authoritative database. Before zooming out to the overall IT picture, we'll continue focusing on network-based information in the next article: managing and verifying discovery data.