Splunk 3.1: A Log-Monitoring Gem Shines Brighter

Charlie Schluting

Many moons have risen since we last gushed about Splunk, so what better way to reinvigorate our personal buzz than to install the latest version and write a how-to. After talking about a few neat features, we will briefly discuss how to set up central syslogging and how to install Splunk, before a tangent into "working around the free version’s crippled interface."

New Features

First off, the latest Splunk version is a bit more polished. That may seem strange to say, but diamonds can, indeed, shine more if you find a new, better way to polish them. Splunk version 3.1.x, most noticeably, has a new front page. Ignoring the annoying "yes I want to use the free license" splash page, the front page is really the dashboard. You are no longer limited to just one dashboard, though!

The default dashboard looks familiar, with a summary of which sources and hosts have the most entries, a search bar, and a listing of saved searches. Closer inspection, however, reveals even more graphs down below.

The default graph shows how many total log entries have been processed in the last few days. Any other Saved Search can also be configured to display a graph, which is extremely handy. For example, say we're interested in the number of viruses our mail servers have caught recently. If you use ClamAV, you simply perform a search that gets the results you want; in this case, "clamav FOUND" will do it. It reports 1,316 events in the past 24 hours, which sounds about right. You will probably also want to click the "sourcetype:sendmail_syslog" text to add that search term; it speeds up searches immensely. Save the search, check the box to display it in the default dashboard, and, ta-da, the dashboard now has a graph. At a glance, we can see how many viruses per hour we have blocked in the last 24 hours. In the past, this kind of information was only available by scripting something to scan the log files.
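For the curious, the final saved search amounts to nothing more than a short query string; a hedged sketch (the exact field syntax depends on the Splunk version, so use whatever the interface inserts when you click the sourcetype):

clamav FOUND sourcetype:sendmail_syslog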

That’s just one example, and there are tons more: the amount of spam rejected, failed login attempts, OSPF adjacency changes; the list goes on. You will want to schedule the search to run every few minutes for the most up-to-date dashboard information, which is easily accomplished via the Saved Search settings. With too many saved searches the dashboard will start loading slowly, but that is easily remedied by creating purpose-specific dashboards. Creating these wonderful graphs is highly addictive, so allocate a few hours before you start playing with it.

Setting up Syslog

We generally find that existing infrastructures already have central syslog servers, but in case you don’t, here’s a quick rundown of what it is all about.

Even the old Unix syslogd program is capable of sending syslog entries to a remote server. The configuration looks something like:
*.err  @loghost.domain

Unfortunately, the classic syslog daemon will only send logs to a single place. If you wanted to leave a copy of some logs locally, you were out of luck. With syslog-ng, available on most Unix and Linux platforms today, you can (among other fanciness) specify multiple destinations for each facility.severity pair. For example:
*.err  /var/log/messages
*.*  /var/log/syslog

The above will send any messages of "err" severity or higher to /var/log/messages, yet still log everything to one mondo-log file, /var/log/syslog. In fact, you can do this as many times as you like. Each server on your network should be configured to send "*.*" to a central log server. Instead of a file name, simply put "@hostname" as the destination.
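For example, the forwarding line on each client might look like the following (loghost.example.com is a placeholder for your central log server's name):

*.*  @loghost.example.com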

For Splunk’s purposes, if you want everything sent to Splunk, it’s best to simply add another line on the log server: "*.* |/var/run/splunk-pipe". What’s all this, you ask? It’s a named pipe, or FIFO. You can create the FIFO with ‘mkfifo /var/run/splunk-pipe’.

A FIFO is much more resource-friendly than constantly reading your text log files. A FIFO is a buffer that can be written to by one program and read from by another in a first-in, first-out fashion, hence the name. To make this work, you simply need to configure Splunk to read from the FIFO.
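Putting the log-server side together, a minimal sketch (the pipe path is the one used above; the restart command is an assumption and varies by platform):

# create the named pipe that Splunk will read from
mkfifo /var/run/splunk-pipe
# add this line to the syslog configuration so everything is copied into the pipe:
#   *.*  |/var/run/splunk-pipe
# then restart the syslog daemon so it picks up the new destination
/etc/init.d/syslog restart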

Installing Splunk

There isn’t much to say about installing Splunk. Follow the simple instructions provided, depending on your platform, and then point your Web browser at the address indicated by the installer. At this point, you simply need to add a data source.

Caution, beware, pay attention! The syslog source type automatically extracts host names from the log entries. If you choose another type, all of your syslog entries may appear to come from the syslog server itself. So add a FIFO data input of the syslog type and point it at /var/run/splunk-pipe. The dashboard front page will soon begin to populate with data. If you want more immediate results, temporarily add a text file full of syslog messages, and Splunk will happily slurp up and index the data. Now you can start playing with Saved Searches.
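If you prefer editing configuration files to clicking through the Web interface, Splunk also accepts a FIFO stanza in inputs.conf; a hedged sketch, since the exact stanza syntax may differ in the 3.x line:

[fifo:///var/run/splunk-pipe]
sourcetype = syslog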

Unfortunately, the administrator interface is wide open. Splunk considers securing one’s configuration settings an Enterprise feature, so locking it down properly costs money. People who care enough about this limitation can easily work around it, though. Exercising some not-as-evil-as-they-seem insight, Splunk made the admin interface reachable exclusively via the URI /admin. Apache to the rescue.

Even with a username- and password-protected admin section, note that your syslog data is still wide open. We can kill two birds with one stone by proxying to the Splunk server via Apache.

First, we want to make Splunk listen only on localhost, assuming we’re serving the Splunk Web page from the syslog server. Simply set the environment variable in the startup script:
export SPLUNK_BINDIP="127.0.0.1"
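After restarting Splunk, it is worth verifying that the Web interface is bound only to the loopback address; something like the following should show 127.0.0.1:8000 rather than 0.0.0.0:8000 (8000 is Splunk’s default Web port):

netstat -an | grep 8000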

Next, configure Apache to proxy requests to Splunk; something like:

ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/
<Location /admin>
Order Deny,Allow
Deny from all
# Stuff
</Location>

To limit access to the /admin/ section, you can simply replace #Stuff with specific allow lines, LDAP authentication, or whatever you choose. To protect all of Splunk, place the restriction at "/" instead. You’ll probably want to ensure people are connecting via SSL if they are required to enter a password, so be sure to redirect non-SSL requests.
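As a hedged sketch of both ideas, assuming Apache 2.2-style access control (to match the snippet above) with mod_rewrite and basic authentication available; the IP address, hostname, and htpasswd path are placeholders:

<Location /admin>
Order Deny,Allow
Deny from all
# allow a trusted admin workstation outright...
Allow from 192.0.2.10
# ...or accept a password instead; either condition grants access
AuthType Basic
AuthName "Splunk admin"
AuthUserFile /etc/apache2/splunk.htpasswd
Require valid-user
Satisfy Any
</Location>

# keep passwords off the wire by redirecting plain HTTP to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/(.*)$ https://loghost.example.com/$1 [R,L]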

If you start indexing more than 500MB of logs per day, Splunk will nag you to get an Enterprise License. Splunk remains functional regardless, and if you are large enough to produce that much data, the other Enterprise features are likely useful. The Enterprise License gets you: Splunk server mating (send events between them), distributed search and clustering, and access control.

Be sure to check out Splunk’s online demos: http://www.splunk.com/product/205
