In this article, I will walk you through the process of creating a scripted input in Splunk. With a scripted input, you configure your Splunk server or Universal Forwarder (UF) to run a script and capture its output as events to be indexed by Splunk. This article assumes that you have some understanding of Splunk, that you can run python and shell scripts on your system, and that you understand the difference between a Universal Forwarder and a Splunk Indexer. This article has been tested on Ubuntu 14, running Splunk 6.5. With minor modifications it should work for most Linux and Unix-based systems.
We will assume that our initial goal is to have Splunk run a python script, capturing the output as events. We will configure Splunk to run this python script as a scripted input by creating a new add-on on the Splunk system. To do this, we will create our add-on folder in the apps directory of the Splunk system. The apps directory is located under the etc folder in the $SPLUNK_HOME directory. $SPLUNK_HOME is the location where Splunk was installed. On an indexer, this will often be /opt/splunk, while on a Universal Forwarder, this will often be /opt/splunkforwarder. This guide will assume that you are working on a Universal Forwarder, but all steps can be easily modified for a Splunk Indexer.
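If you are not sure where Splunk is installed on a particular host, a quick check like the one below can help (a minimal sketch; the two paths are simply the common defaults, not guaranteed):

# Look for a Splunk installation in the two most common locations
for dir in /opt/splunk /opt/splunkforwarder; do
    if [ -x "$dir/bin/splunk" ]; then
        echo "Found Splunk under $dir (apps folder: $dir/etc/apps)"
    fi
done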
Following Splunk’s naming conventions for applications, we will name this application TA-SimpleApp. The TA stands for “Technology Add-on” (which is different from a Splunk App, which has a GUI). First we will create all the necessary folders and files:
# The path below is the apps folder for a Universal Forwarder
# Change if you are on an indexer (/opt/splunk) or used a different install location
cd /opt/splunkforwarder/etc/apps/
sudo mkdir TA-SimpleApp
sudo mkdir TA-SimpleApp/bin
sudo mkdir TA-SimpleApp/default
sudo touch TA-SimpleApp/bin/TA-SimpleApp.py
sudo touch TA-SimpleApp/default/inputs.conf
Next we need to adjust permissions:
cd /opt/splunkforwarder/etc/apps/
sudo chown -R splunk:splunk TA-SimpleApp
When you install Splunk, a Splunk user is created. We want these folders to be owned by the Splunk user, to ensure that it can access these files.
You should now have the following files and folders:
noah@thor:/opt/splunkforwarder/etc/apps$ tree TA-SimpleApp/
TA-SimpleApp/
├── bin
│   └── TA-SimpleApp.py
└── default
    └── inputs.conf
Now we need to add the content of these two files. First we’ll set up our default inputs.conf (located in the default folder):
[script://./bin/TA-SimpleApp.py]
interval = 10
sourcetype = my_sourcetype
disabled = False
index = main
A breakdown of this file:

The stanza name, [script://./bin/TA-SimpleApp.py], tells Splunk which script to run; the path is relative to the app’s folder.
interval = 10 tells Splunk to run the script every 10 seconds.
sourcetype = my_sourcetype assigns a sourcetype to the resulting events.
disabled = False enables the input.
index = main sends the events to the main index.
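Once the add-on is in place and Splunk has been restarted (described below), you can confirm that Splunk picked up this stanza with btool. This is just a sanity check; the --debug flag shows which file each setting came from:

cd /opt/splunkforwarder/bin
sudo ./splunk btool inputs list --debug | grep TA-SimpleApp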
Next we add the following content to our python script (TA-SimpleApp.py):
# So we can run this script under python 2 or 3
from __future__ import print_function

import sys        # for sys.stderr.write()
import time       # for strftime
from datetime import datetime   # for datetime.utcnow()
import random     # to provide random data for this example

sys.stderr.write("TA-SimpleApp python script is starting up\n")

# output a single event
print (str(time.time()) + ", username=\"agent smith\", status=\"mediocre\", admin=noah, money=" + str(random.randint(1, 1000)))

# output three events, each one separated by a newline (each line will be a unique event)
for x in range(0, 3):
    strEvent = str(time.time()) + ", "
    strEvent += "username=\"" + random.choice(["Stryker", "Valkerie", "Disco Stu"]) + "\", "
    strEvent += "status=\"" + random.choice(["groovy", "hungry", "rage quit"]) + "\", "
    strEvent += "admin=" + random.choice(["lenny", "carl", "moe"]) + ", "
    strEvent += "money=" + str(random.randint(1, 1000))
    print (strEvent)
Line 9 is an example of how we write information to the Splunk event log: $SPLUNK_HOME/var/log/splunk/splunkd.log. See the section below on logging.
Line 12 is where we generate a single event for Splunk to consume. We start by printing the UNIX epoch time, followed by a comma-separated list of keys and values, ending with a newline (the print function ends each line with a newline). Epoch time is preferred because it is easy for Splunk to identify, and it is highly accurate. Splunk will automatically identify the key-value pairs in this instance. If your output is less structured, you would need to configure a props.conf and transforms.conf (a simple example of these files).
Beginning with line 15, we loop three times to create three random events. Each event will be written to stdout (using the print function at line 21, like above), and will terminate with a newline, which Splunk interprets as the end of the event. You can have Splunk ingest multi-line events by configuring Line Breaking in props.conf.
Now we want to test that the python script works correctly. We can do this by simply running it, and making sure we are seeing the stderr and stdout being written correctly to the screen (this is how Splunk will ingest this information). Run the script manually, and look for similar output:
noah@thor:/opt/splunkforwarder/etc/apps/TA-SimpleApp/bin$ python ./TA-SimpleApp.py
TA-SimpleApp python script is starting up
1484997424.92, username="agent smith", status="mediocre", admin=noah, money=627
1484997424.92, username="Valkerie", status="hungry", admin=carl, money=800
1484997424.92, username="Disco Stu", status="rage quit", admin=lenny, money=663
1484997424.92, username="Stryker", status="groovy", admin=moe, money=483
If you see output similar to the above, then your script is correct. Now you need to restart Splunk so that it loads your Technology Add-on and runs your script. Restart Splunk:
noah@thor:~$ cd /opt/splunkforwarder/bin/
noah@thor:/opt/splunkforwarder/bin$ sudo ./splunk restart
Check your log files (see the next section on error logging) and search for these events in SplunkWeb. In the SplunkWeb search app, you should see events similar to the sample output your script generated above.
Congratulations, if you have similar output to the above, you now have a simple scripted input for Splunk. Note that on a UF you have to ensure that python is available: an indexer uses Splunk’s own bundled version of python (2.7.5), while a UF uses the system’s version of python.
One challenge is that you have to be an administrator to view this log file, or even to browse the log folder; I usually run sudo bash in a separate terminal window to work with the log files. Because Splunk captures all output from your script, we need to differentiate between events and log information. We do this by writing events to stdout, which Splunk ingests and indexes as events. Anything written to stderr is captured by Splunk and written to the splunkd.log file. Such a message will look like the following entry in splunkd.log:
01-21-2017 09:43:09.216 +0200 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/TA-SimpleApp/bin/myAppLauncher.sh" TA-SimpleApp myAppLauncher.sh is starting
You will notice that Splunk marks this message as an ERROR in the log; there doesn’t seem to be a way to change the severity of these entries.
A good way to follow these events on a Universal Forwarder is as follows:
root@thor:/opt/splunkforwarder/var/log/splunk# sudo tail -f splunkd.log | grep ExecProcessor
01-21-2017 09:49:32.684 +0200 INFO  ExecProcessor - New scheduled exec process: python /opt/splunkforwarder/etc/apps/TA-SimpleApp/bin/TA-SimpleApp.py
01-21-2017 09:49:32.684 +0200 INFO  ExecProcessor - interval: 10 ms
01-21-2017 09:49:35.195 +0200 ERROR ExecProcessor - message from "python /opt/splunkforwarder/etc/apps/TA-SimpleApp/bin/TA-SimpleApp.py" TA-SimpleApp python script is starting up
On an indexer (rather than a UF), this command generates too much information: a number of apps are started by Splunk on an indexer, whereas on a UF there is only the one we created. On an indexer, you may want to grep for the name of the app (TA-SimpleApp) instead.
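For example, on an indexer that would look something like this (adjust the path if your $SPLUNK_HOME is different):

sudo tail -f /opt/splunk/var/log/splunk/splunkd.log | grep TA-SimpleApp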
If you need to pass parameters to your script, or you want to execute an application, you can have Splunk call a shell script, where you have more options in launching your script. Let’s create this script:
cd /opt/splunkforwarder/etc/apps
sudo touch TA-SimpleApp/bin/myAppLauncher.sh
sudo chmod a+x TA-SimpleApp/bin/myAppLauncher.sh
sudo chown splunk:splunk TA-SimpleApp/bin/myAppLauncher.sh
We need to modify our inputs.conf to call this new shell script, rather than the python script directly. Modify the first line of inputs.conf to look like this (everything else is the same):
[script://./bin/myAppLauncher.sh]
And enter the following content for this shell script (myAppLauncher.sh):
#!/bin/bash

# Write a line to the splunk log file
echo TA-SimpleApp myAppLauncher.sh is starting >&2

# Check if we can run a python interpreter, exit otherwise
command -v python >/dev/null 2>&1 || { echo No python interpreter found. >&2; exit 1; }

# execute our python script, located in the same folder as this script
cd $( dirname "${BASH_SOURCE[0]}" )
python ./TA-SimpleApp.py
On line 4, we write to stderr, which causes the output to show up as an entry in the splunkd.log event log (this line would probably be removed in a final release, but it is good for testing).
On line 7 we are checking that we can find a python interpreter to run our script. If we can’t find one, we write an error to the splunkd.log log file and quit.
On line 10 we set the current directory to the directory containing this shell script.
On line 11 we call our python script, which will execute and generate output just like the example above.
This example is very similar to launching the python script directly; however, we now have the ability to pass parameters, do additional setup, change environment variables (if needed), test for a python interpreter, and do other housekeeping. You could also call a binary executable, write to your own log files, or perform any other setup or testing your script requires. Basically, if you can do it from a shell script and it generates output on stdout and stderr, you can use it as a scripted input. For any scripts or applications you execute here (including all child applications), all data written to stdout will become events on your Splunk indexer, and all errors written to stderr will become entries in the splunkd.log log file.
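As an illustration, a slightly extended launcher might set an environment variable and pass an argument through to the script. This is a hypothetical sketch only: MY_API_TOKEN and the --count argument are made up for the example, and TA-SimpleApp.py as written above does not read either of them.

#!/bin/bash
# Hypothetical example: export a variable and pass an argument to the script.
export MY_API_TOKEN="replace-me"

# run from the folder containing this script
cd "$( dirname "${BASH_SOURCE[0]}" )"
python ./TA-SimpleApp.py --count 5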
When Splunk runs your script, it does so within its own environment, with its own environment variables. To test your script within that environment (without setting it up as an app), you can run ./splunk cmd from the bin folder:
noah@thor:/opt/splunkforwarder/bin$ ./splunk cmd ../etc/apps/TA-SimpleApp/bin/myAppLauncher.sh
You’ll see the following output:
TA-SimpleApp myAppLauncher.sh is starting
TA-SimpleApp python script is starting up
1484994520.21, username="agent smith", status="mediocre", admin=noah, money=736
1484994520.21, username="Valkerie", status="hungry", admin=carl, money=141
1484994520.21, username="Stryker", status="rage quit", admin=carl, money=941
1484994520.21, username="Disco Stu", status="groovy", admin=moe, money=168
All data written to stdout and stderr will show on the screen as your application writes it. Splunk is not running your app and processing the output; this tool merely lets you test your script to see how it would behave in the environment that Splunk would run it in. If you want to see the environment variables in Splunk’s environment, just add the printenv command to the end of your shell script and run the ./splunk cmd command again. A simple example of the difference in environment variables:
noah@thor:/opt/splunkforwarder/bin$ printenv | grep "splunk"
PWD=/opt/splunkforwarder/bin

noah@thor:/opt/splunkforwarder/bin$ ./splunk cmd /usr/bin/printenv | grep "splunk"
PATH=/opt/splunkforwarder/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin
PWD=/opt/splunkforwarder/bin
_=./splunk
SPLUNK_HOME=/opt/splunkforwarder
SPLUNK_DB=/opt/splunkforwarder/var/lib/splunk
SPLUNK_ETC=/opt/splunkforwarder/etc
SPLUNK_WEB_NAME=splunkweb
LD_LIBRARY_PATH=/opt/splunkforwarder/lib
OPENSSL_CONF=/opt/splunkforwarder/openssl/openssl.cnf
LDAPCONF=/opt/splunkforwarder/etc/openldap/ldap.conf
If you want to see some examples of Splunk Technology Add-ons, just download them from Splunkbase and browse through their files, or look at the links below.
Feedback is welcomed, especially if there are errors in this guide or recommendations you have from your own experience. Please contact me here.
Setting up a scripted input
Add a scripted input with inputs.conf
Writing reliable scripts
Anatomy of an app
Advanced Python Script Testing
A good Scripted Inputs Tutorial
Building Splunk Technology Add-ons From the Splunk blog.
Package and publish a Splunk app
Apps and add-ons: an introduction
This guide will show you how to convert AsciiDoc files into EPUB and Kindle’s .mobi format using open source software on Linux. This guide will assume you have some experience with Linux, and a general understanding of html, css, and XML will be very helpful. I will walk you through all the steps required to create these files, explaining how each tool works so that you can troubleshoot and adapt the workflow to make it work for you. My goal is for you to really understand how this process works, rather than just following rote steps without really understanding them. I will include links to related resources as we go through that will help round out your understanding. At the end of this article, you will have a Makefile that will allow you to easily build and modify eBooks.
This guide is tested on Ubuntu 14 x64, although it should work on most Debian-based systems without much effort. It could easily be used on any Linux-based system with a minimum amount of modification.
AsciiDoc is a simple document format that allows you to mark up text in an easy-to-read way, and which can also be easily converted to a specific type of XML that is used for EPUBs. From Wikipedia: “AsciiDoc is a human-readable document format, semantically equivalent to DocBook XML, but using plain-text mark-up conventions.” A good overview of AsciiDoc can be found on the AsciiDoctor website.
Documents written in AsciiDoc will be converted to DocBook XML, a semantic language made for technical documentation. This DocBook XML is converted to EPUB3 html files using DocBook XSL stylesheets. XSL Stylesheets are used to convert XML to another format, in this case html files (eBooks, including .mobi and EPUB are merely an archive of html files and cascading style sheets, essentially a web page wrapped into a single archive). We will then turn those html files into an eBook using a few different tools.
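As a preview, the whole pipeline boils down to roughly these four commands. The paths here are simplified; the exact commands, with the folder layout used in this guide, appear in the steps below:

asciidoctor --backend docbook5 --doctype book myBook.adoc     # adoc -> DocBook XML
jing docbookxi.rng myBook.xml                                 # validate the DocBook XML
xsltproc --stringparam base.dir ./epub3-book/OEBPS/ epub3/chunk.xsl myBook.xml   # XML -> EPUB folder
epubcheck ./epub3-book/ -mode exp -v 3.0 -save                # folder -> .epub file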
All tools and files here are open source or free. We will use the following tools:

asciidoctor, to convert the AsciiDoc source into DocBook XML
jing, to validate the DocBook XML against the DocBook schema
xsltproc, to apply the DocBook XSL stylesheets and produce the EPUB folder structure
epubcheck, to validate and package the EPUB
calibre, to view the finished books
KindleGen from Amazon, to convert the EPUB into a .mobi file
We will also use the following files (download instructions are below):
The DocBook schema (in Relax NG schema Language).
The DocBook XSL Stylesheets, used to convert DocBook XML to HTML files from the DocBook Project.
This guide will have you set up a folder to hold the files for your eBook and all required resources. It will assume that the filename of your eBook in adoc format is myBook.adoc. In this guide, we will set up all the required files, as well as provide sample content for each file, so you will be able to create a full eBook from the example, and can then modify everything to match your workflow once it is working.
First, let’s install the required software:
sudo apt-get install -y asciidoctor jing xsltproc epubcheck calibre
Next let’s create the folder that will hold all the files for this eBook. We’re using a folder called ebook on the Desktop. We’ll create a number of necessary folders here as well:
mkdir ~/Desktop/ebook/
mkdir ~/Desktop/ebook/build-resources/
mkdir ~/Desktop/ebook/ebook-resources/
mkdir ~/Desktop/ebook/ebook-resources/graphics/
mkdir ~/Desktop/ebook/output/
cd ~/Desktop/ebook/
The build-resources folder will hold the DocBook XSL Stylesheets and the DocBook Schema file that we will download. This folder holds files that can be used for any ebook. The ebook-resources folder will store files that are specific to this one eBook, including css files, graphics (like the cover of the ebook), and any other files you want to include in your ebook. The output folder is where our final products will be stored (the .mobi and EPUB files).
Now let’s create a simple AsciiDoc file. This will be the source material for our ebook. This format is text-based, and is simple to read and create. Since it’s text, it can also be added to your favorite version control tool (Git, Subversion, or the like). Here we will create the adoc file and open it in your favorite editor:
touch ~/Desktop/ebook/myBook.adoc
xdg-open ~/Desktop/ebook/myBook.adoc
with the following content:
= Witty Book Title
:doctype: book
:backend: docbook
:docinfo:
:!numbered:
:imagesdir: graphics

[dedication]
== My Dedications

This book is dedicated to.....

I'd also like to thank....

== This is the First Chapter

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas vehicula congue dolor, vel commodo magna viverra ac. Morbi ullamcorper, est eu egestas semper, velit elit bibendum orci, ut tristique tellus nulla sit amet ipsum. Sed fringilla, lacus sed viverra dictum, nulla augue placerat lectus, in efficitur magna risus non nibh. Ut laoreet, tortor at tempus mollis, magna risus ullamcorper dolor, quis rutrum ex augue a risus. Vivamus pellentesque accumsan est aliquam fringilla. Quisque eleifend ac eros in volutpat. Quisque eu euismod metus, at blandit diam. Phasellus in magna eget erat finibus lacinia quis at metus. Cras ut hendrerit sem. Vivamus ligula est, volutpat nec convallis eget, efficitur at orci. Proin non aliquet nunc. Mauris dui odio, bibendum consectetur ligula at, faucibus dapibus est. Praesent porttitor, nisi sit amet accumsan euismod, sem felis semper turpis, ut fermentum leo orci sit amet tortor. Nulla eros leo, eleifend vitae ornare quis, mollis tristique eros. In quis accumsan arcu. Ut hendrerit vitae sem ut consectetur. Nunc enim massa, tempus id orci vitae, rhoncus laoreet nulla. Pellentesque elementum purus rutrum, condimentum elit vitae, sagittis magna. Maecenas ornare justo et arcu consequat, nec volutpat risus fermentum.

== This is the Second Chapter

Duis cursus ac augue id blandit. Nulla varius accumsan odio, sed vestibulum odio lobortis quis. Nunc vitae ipsum tortor. Ut ut eros dignissim est luctus finibus ac quis nulla. Etiam consequat, neque sit amet laoreet laoreet, magna odio ornare justo, et ornare sapien nunc quis dolor. Praesent felis metus, facilisis a quam id, euismod faucibus nibh. Mauris venenatis dui erat, vel auctor felis tempus eget. Pellentesque tellus metus, pretium aliquam tristique eget, bibendum ut sapien. Curabitur magna augue, feugiat id enim congue, ullamcorper iaculis arcu. Integer pulvinar elit nulla, at gravida velit sodales eget. In quis leo ac mauris mollis facilisis fermentum non ex. Ut a lorem lacinia, egestas sem eu, tincidunt risus. Proin non ornare lacus, vitae imperdiet erat.

== This is the Third Chapter

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas vehicula congue dolor, vel commodo magna viverra ac. Morbi ullamcorper, est eu egestas semper, velit elit bibendum orci, ut tristique tellus nulla sit amet ipsum. Sed fringilla, lacus sed viverra dictum, nulla augue placerat lectus, in efficitur magna risus non nibh. Ut laoreet, tortor at tempus mollis, magna risus ullamcorper dolor, quis rutrum ex augue a risus. Vivamus pellentesque accumsan est aliquam fringilla. Quisque eleifend ac eros in volutpat. Quisque eu euismod metus, at blandit diam. Phasellus in magna eget erat finibus lacinia quis at metus. Cras ut hendrerit sem. Vivamus ut feugiat neque, sed varius tortor. Phasellus sit amet ante ut tortor pulvinar efficitur non non massa. Curabitur imperdiet justo nec urna cursus, sit amet dapibus lectus posuere. Aliquam hendrerit nisi eget nunc aliquet, gravida aliquam elit volutpat. Donec semper tincidunt neque in aliquet. Curabitur lobortis rutrum felis quis tempus. In bibendum neque vitae ipsum tempus aliquet. Maecenas euismod consequat pellentesque. Maecenas in tincidunt nibh. Aliquam tempus libero non augue finibus fermentum. Sed massa leo, tempus sollicitudin consequat vel, ornare nec ligula.
Donec et tellus bibendum, blandit nunc sed, porttitor magna. Integer eget pulvinar lorem. Sed in bibendum quam. Donec eu molestie ipsum, at maximus ipsum. Nunc vel mauris vulputate, faucibus dui quis, imperdiet enim. Phasellus sodales turpis quis velit egestas, in rhoncus diam pellentesque.
There are a number of interesting things happening in this file. The text we are using is Lorem ipsum, common filler text used by typesetters to let you see how text looks on the screen or in print without getting caught up in the content. The header of the adoc file begins with the title of the book on the first line, prefixed by a single equals sign. The :doctype: book attribute says that when we convert this file, we want the final document type to be a book (more on different document types here). The :backend: attribute actually gets overridden at the command line when we convert this file later, but it is good to have. The :docinfo: attribute indicates that there is a docinfo.xml file (the one we will create below) that contains further header information. The :!numbered: attribute indicates that we don’t want numbered chapter headings. The first section we have is the dedication, and the two equals signs indicate a chapter heading.
Now create the css file. This css file tells our ebook (EPUB or .mobi) how to be displayed, including font size, color, and anything else that can be configured with css.
touch ~/Desktop/ebook/ebook-resources/master.css
xdg-open ~/Desktop/ebook/ebook-resources/master.css
and enter the following information:
html, body {
  height: 100%;
  margin: 0;
  padding: 0;
  border-width: 0;
}

@page {
  margin: 5pt;
}

/* indent paragraphs */
h2 + p { text-indent: 0; }
p { text-indent: 1em; margin: 0; }

/* Set the minimum amount of lines to show up on a separate page.
   (There is not much support for this at the moment.)
   https://github.com/reitermarkus/epub3-boilerplate/blob/master/Ebook/OPS/css/main.css */
p, blockquote {
  orphans: 2;
  widows: 2;
}

/* page break for dedication (the xsl keeps it on the same page as the copyright) */
div.dedication {
  page-break-before: always;
}

/* Move the legal notice from the title page to its own page */
div.legalnotice {
  page-break-before: always;
}

/* Title Page formatting */
div.book div.titlepage h1 {
  font-family: Helvetica, Arial, sans-serif;
  text-align: center;
  color: blue;
}
The docinfo file is an xml file that holds information about the author and the copyright information. This file needs to have the same name as your AsciiDoc file with -docinfo.xml appended, and be in the same folder as your adoc file. In this example, our AsciiDoc book is named myBook.adoc, so the docinfo file is named myBook-docinfo.xml. Create this file:
touch ~/Desktop/ebook/myBook-docinfo.xml
with the following content (note that there can’t be any blank space at the beginning of this file):
Important note: use a text editor that will automatically recognize that you’re working in an XML file, so that it will format this file correctly (replacing the spaces with tabs). It is important that this file be formatted correctly, or you’ll get errors.
<author>
    <personname>
        <honorific>Mr</honorific>
        <firstname>Noah</firstname>
        <surname>Dietrich</surname>
    </personname>
</author>
<copyright>
    <year>2017</year>
    <holder>SublimeRobots Intl.</holder>
</copyright>
<legalnotice>
    <para>Copyright © 2017 by Noah Dietrich</para>
    <para>All rights reserved. This book or any portion thereof may not be reproduced or used in any manner whatsoever without the express written permission of the publisher except for the use of brief quotations in a book review.</para>
    <para>Printed in the United States of America</para>
    <para>First Printing, 2017</para>
    <para>ISBN 0-9000000-0-0</para>
    <para>Jim &amp; Joe Publishers, LLC</para>
    <para>www.SublimeRobots.com</para>
</legalnotice>
<cover>
    <mediaobject>
        <imageobject>
            <imagedata fileref="graphics/cover.jpg"></imagedata>
        </imageobject>
    </mediaobject>
</cover>
This information will be added to your ebook when it is processed, but is not stored in the adoc file. A good example of a docinfo file can be found here.
Next, we need to get the DocBook XML Schema file (docbookxi.rng) and the DocBook XSLT stylesheets. We will store them in our build-resources folder:
cd ~/Desktop/ebook/build-resources
wget http://docbook.org/xml/5.2b01/rng/docbookxi.rng
wget http://downloads.sourceforge.net/project/docbook/docbook-xsl-ns/1.79.1/docbook-xsl-ns-1.79.1.tar.bz2
tar -xvjf docbook-xsl-ns-1.79.1.tar.bz2
Finally, we need to download the KindleGen application from Amazon. Navigate to the KindleGen homepage, download the linux version, and extract the kindleGen binary to the build-resources folder.
cd ~/Desktop/ebook/build-resources
wget http://kindlegen.s3.amazonaws.com/kindlegen_linux_2.6_i386_v2_9.tar.gz
tar -xzvf kindlegen_linux_2.6_i386_v2_9.tar.gz kindlegen
The Book Cover: You’ll need to put a jpeg for the cover into the graphics folder named cover.jpg. If you don’t do this, you’ll need to remove the cover section in your myBook-docinfo.xml file. You can get information on recommended jpeg sizing here.
The first step is converting the adoc file into DocBook XML format. This part can be a little frustrating sometimes, as many small semantic issues can cause errors to show up at this stage. Some issues I have encountered (and quick fixes for them if I have an answer):
From your ebook directory, assuming you have all the above files set up correctly, run the following command:
cd ~/Desktop/ebook/
asciidoctor --backend docbook5 --doctype book --verbose --destination-dir ./output/ myBook.adoc
Here we are using the asciidoctor application to convert the adoc file into a DocBook XML file. This XML file will have all the same content as our original adoc file, only formatted (and marked up semantically) to meet the DocBook schema (docbookxi.rng). The schema describes the legal layout of all valid files. We are using these options:

--backend docbook5 tells asciidoctor to generate DocBook 5 XML output
--doctype book marks the output as a book
--verbose prints timing and progress information
--destination-dir ./output/ writes the generated XML into the output folder

You should see output similar to:
noah@thor:~/Desktop/ebook$ asciidoctor --backend docbook5 --doctype book --verbose --destination-dir ./output/ myBook.adoc
Input file: myBook.adoc
Time to read and parse source: 0.00527
Time to render document: 0.00944
Total time to read, parse and render: 0.01476
noah@thor:~/Desktop/ebook$
If you have errors, you may see output similar to:
asciidoctor: WARNING: myBook.adoc: line 9: invalid style for paragraph: dedication
This usually means that there is either an error in your header, or your docinfo xml file has an error (spaces instead of tabs, extra spaces between elements, and similar issues). You must fix these issues before continuing.
Now that we have our ebook in the DocBook XML file format, we want to validate that it is semantically correct. We want to check that its format matches the schema defined in the docbookxi.rng file (the schema in Relax NG schema language). For this, we use a tool called jing. There is another tool called xmllint that also does validation, but I encountered issues with it, and found jing to be much more reliable. An excellent resource for understanding the details can be found in the Processing DocBook5 section of DocBook XSL: The Complete Guide (you’ll be referencing this online ebook a lot if you want to do any configuration of your ebook).
So the content of our adoc file and the docinfo file have been combined into a single xml file in the output directory (you can open it to see what it looks like), and we need to validate it to make sure it’s formatted correctly (sometimes asciidoctor makes mistakes). To do this, we run the following command from the same directory as before (not the output directory):
cd ~/Desktop/ebook/
jing -i ./build-resources/docbookxi.rng output/myBook.xml
This command is simple: it validates our book (output/myBook.xml) against the docbookxi.rng schema and tells us whether it is valid (properly formatted) DocBook XML (the -i flag tells jing to skip ID/IDREF checking). If you have issues, try to figure out which line of the xml file is causing the problem, and track it back to the original asciidoc or docinfo file. This can be a challenge; searching the internet for your error can help.
If you see no output, then there are no errors.
Next, we are going to use xsltproc to convert our DocBook XML file into a series of html files (HTML 5 files, actually), and then copy in our css and images, to create a folder that represents our entire ebook, including all required resources.
It helps here to understand how ebook file systems are laid out before they are zipped into the archive we consider an EPUB or mobi file. A basic EPUB has the following files and folder hierarchy stored in a zipped container:
mimetype
META-INF/
    container.xml
OEBPS/
    content.opf
    chapter1.xhtml
    chapter2.xhtml
    css/
        style.css
    toc.ncx
    graphics/
        cover.jpg
Good explanations of these files can be found on Wikipedia, as well as here and here.
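Since an EPUB is just a zip archive of this layout, you can inspect any EPUB you already have on hand to see the same structure (somebook.epub below is a placeholder for a real file):

unzip -l somebook.epub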
We need to convert our valid DocBook XML into the above folder structure. To do this, we use xsltproc, which applies XSLT stylesheets to XML documents. XSLT stylesheets are a language for converting XML into other formats (in our case, HTML documents). The XSLT stylesheets are provided by the DocBook project. The following command converts our DocBook XML into an EPUB folder hierarchy:
cd ~/Desktop/ebook/
xsltproc --stringparam base.dir ./output/epub3-book/OEBPS/ \
    --stringparam chapter.autolabel 0 \
    --stringparam chunker.output.indent yes \
    ./build-resources/docbook-xsl-ns-1.79.1/epub3/chunk.xsl ./output/myBook.xml
Let’s break this down:

base.dir ./output/epub3-book/OEBPS/ tells the stylesheets where to write the generated (chunked) xhtml files
chapter.autolabel 0 turns off automatic chapter numbering
chunker.output.indent yes indents the generated html so it is easier to read
./build-resources/docbook-xsl-ns-1.79.1/epub3/chunk.xsl is the EPUB3 stylesheet that drives the conversion
./output/myBook.xml is our validated DocBook XML file

The stringparam options above are specific to the XSLT files that we are working with. To find other options that are available, read through Chapter 7, HTML output options, of DocBook XSL: The Complete Guide.
You should see output similar to:
noah@thor:~/Desktop/ebook$ xsltproc --stringparam base.dir ./output/epub3-book/OEBPS/ --stringparam chapter.autolabel 0 --stringparam chunker.output.indent yes ./build-resources/docbook-xsl-ns-1.79.1/epub3/chunk.xsl ./output/myBook.xml
Writing ./output/epub3-book/OEBPS/bk01-toc.xhtml for book
Writing ./output/epub3-book/OEBPS/ch01.xhtml for chapter(_this_is_the_first_chapter)
Writing ./output/epub3-book/OEBPS/ch02.xhtml for chapter(_this_is_the_second_chapter)
Writing ./output/epub3-book/OEBPS/ch03.xhtml for chapter(_this_is_the_third_chapter)
Writing ./output/epub3-book/OEBPS/index.xhtml for book
Writing ./output/epub3-book/OEBPS/docbook-epub.css for book
Generating EPUB package files.
Writing ./output/epub3-book/OEBPS/cover.xhtml for mediaobject
Generating image list ...
Writing ./output/epub3-book/OEBPS/package.opf for book
Writing ./output/epub3-book/OEBPS/../META-INF/container.xml for book
Writing ./output/epub3-book/OEBPS/../mimetype for book
Generating NCX file ...
Writing ./output/epub3-book/OEBPS/toc.ncx for book
noah@thor:~/Desktop/ebook$
We also need to manually move our css file and images into the EPUB folder hierarchy (add any additional graphics you need at this stage):
cd ~/Desktop/ebook/
cp ./ebook-resources/master.css ./output/epub3-book/OEBPS/docbook-epub.css
cp -r ./ebook-resources/graphics/ ./output/epub3-book/OEBPS/
The next step is to convert our EPUB folders into a single file (our actual EPUB). To do this we use epubcheck, then rename the file:
epubcheck ./output/epub3-book/ -mode exp -v 3.0 -save
mv ./output/epub3-book.epub ./output/myBook.epub
Here we are using -mode exp to have epubcheck validate the expanded (unzipped) EPUB as version 3.0, and -save to have it package the folder into a single .epub file, which we then rename to ./output/myBook.epub.
This epub file is the first final product. You can view this epub on any epub compatible reader (including calibre, which we installed earlier).
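For example, you can open it from the command line with calibre’s viewer, or re-run epubcheck against the packaged file (both steps are optional; the tools were installed earlier):

ebook-viewer ./output/myBook.epub
epubcheck ./output/myBook.epub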
The final step is to convert our epub into the mobi format, for use on Amazon Kindle devices. This is done with KindleGen. This tool is simple: it takes the name of the epub file to convert, and produces a .mobi file alongside it:
cd ~/Desktop/ebook/build-resources
./kindlegen ../output/myBook.epub
If you look in your ./output folder, you will now see your epub and .mobi files. If you have an Amazon device, you can email the .mobi file to yourself and have it automatically download to your device. All Kindle devices support this .mobi format. More information can be found here and here.
You will quickly find that as you are modifying your files, it becomes a hassle to constantly run these commands. The solution to this is to use a Makefile. This tool was originally designed to compile software, but can be easily modified to simplify your ebook workflow.
In your ebook folder, create a new file called Makefile:
cd ~/Desktop/ebook/
touch Makefile
Enter the following text (as with the docbook file above, replace spaces at the beginning of lines with tabs if needed):
mobi : epub
    #ebook-convert ./output/myBook.epub ./output/myBook.mobi
    ./build-resources/kindlegen ./output/myBook.epub

epub : ebook
    epubcheck ./output/epub3-book/ -mode exp -v 3.0 -save
    mv ./output/epub3-book.epub ./output/myBook.epub

ebook : docbook
    xsltproc --stringparam base.dir ./output/epub3-book/OEBPS/ \
    --stringparam chapter.autolabel 0 \
    --stringparam chunker.output.indent yes \
    ./build-resources/docbook-xsl-ns-1.79.1/epub3/chunk.xsl ./output/myBook.xml
    cp ./ebook-resources/master.css ./output/epub3-book/OEBPS/docbook-epub.css
    cp -r ./ebook-resources/graphics/ ./output/epub3-book/OEBPS/

docbook :
    asciidoctor --backend docbook5 --doctype book --verbose --destination-dir ./output/ myBook.adoc
    jing -i ./build-resources/docbookxi.rng output/myBook.xml

.PHONY: clean
clean :
    -rm -rf ./output/*
Open a command prompt, navigate to the ebook folder, and you can now build your ebook by issuing the command make mobi (or simply make, since mobi is the first target). If you want to delete all old versions of the ebook, you can run make clean. If you get the error Makefile:3: *** missing separator. Stop., then you need to replace all spaces with tabs at the beginning of lines (there are issues pasting tabs from a website into a document).
Some of the options you have here:

make mobi (or just make) builds everything, through the final .mobi file
make epub stops after the validated .epub is created
make ebook runs the conversion through the EPUB folder hierarchy
make docbook only converts the adoc file to DocBook XML and validates it
make clean deletes everything in the output folder
You can modify this makefile to match your workflow, such as adding options to xsltproc (new lines are broken up with a backslash to improve readability), or having more files added to your ebook directory.
This guide has given you a simple framework for creating an ebook workflow. There are a number of things that can be improved or modified in this process to suit your needs, but hopefully you have learned enough to make these modifications yourself. You’ll probably want to improve the css files for your ebook (there are a number of websites that better discuss epub css options, some of them linked below). You may want to look at embedding images into your book, using specific fonts, adding different parameters to the XSL transforms, and many other options.
Feedback is welcomed, especially if there are errors in this guide or recommendations you have from your own experience: please contact me here.
DocInfo.xml example.
O'Reilly docinfo.xml example for an Erlang book.
publishing with iBooks example docinfo.xml.
Another O'Reilly docbook.xml example.
Amazon Kindle Publishing Guidelines
CSS Boilerplate for eBooks.
Basic css styles for Kindle html.
The eBook Design and Development Guide on Amazon.
The two guides below use a2x from the asciidoc package, rather than asciidoctor, to generate the DocBook XML. I prefer asciidoctor, as I find that it worked better for my workflow.
A good guide on converting docbook to epub and mobi.
Another good guide.
I hope this series of articles has been helpful to you. Please feel free to provide feedback, both issues you experienced and recommendations that you have. The goal of this guide was not just for you to create a Snort NIDS, but to understand how all the parts work together, and get a deeper understanding of all the components, so that you can troubleshoot and modify your Snort NIDS with confidence.
You will probably want to configure your network infrastructure to mirror traffic meant for other hosts to your Snort sensor. This configuration is dependent on what network equipment you are using. If you are running Snort as a Virtual Machine on a VMware ESXi server, you can configure promiscuous mode for ESXi by following my instructions in this article: configure promiscuous mode for ESXi.
For different network infrastructure, you will need to do a little research to configure port mirroring for your Snort server. Cisco calls this a SPAN port, but most other vendors call it port mirroring. Instructions are available for Mikrotik (a Linux-based switch and router product that I like). If you run DD-WRT, it can be configured with iptables, like any Linux-based system. If you have network equipment not listed above, any search engine should point you towards a solution, if one exists. Note that many consumer switches do not have the ability to mirror ports.
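On a Linux-based router such as DD-WRT, mirroring is typically done with the iptables TEE target. The sketch below is an example only; it assumes your firmware includes the xt_TEE module and that 10.0.0.64 is the address of your Snort sensor:

# Copy all traffic traversing the router to the Snort sensor at 10.0.0.64
iptables -t mangle -A PREROUTING -j TEE --gateway 10.0.0.64
iptables -t mangle -A POSTROUTING -j TEE --gateway 10.0.0.64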
You can also purchase devices specifically made to mirror data (called taps). Some products that have been recommended on the Snort-Users list are:
Snort has the ability to do much more than we’ve covered in this set of articles. Hopefully you’ve learned enough through this setup that you will be able to implement more advanced configurations and make Snort work for you. Some things that Snort is capable of:
Some other related articles I have written:
I would love to get feedback from you about this guide. Recommendations, issues, or ideas, please contact me here.
BASE is a simple web GUI for Snort. Alternate products include Snorby, Splunk, Sguil, AlienVault OSSIM, and any syslog server.
Splunk is a fantastic product, great for ingesting, collating, and parsing large data sets. Splunk is free to use (limited to 500 MB of data per day, which is a lot for a small shop). Sguil client is an application written in tcl/tk. Snorby is abandoned, and relies on old versions of many Ruby packages that makes documenting the installation difficult, and a constantly changing target.
I’ve chosen to use BASE in this guide because it’s simple to set up, simple to use, and works well for what it does. Both BASE and Snorby are abandoned projects. While Snorby gives a nice web-2.0 interface, it is written in Ruby-on-Rails, and the Ruby packages it relies on are constantly changing; this causes compatibility issues with other required Snorby packages and too many installation problems. If you want to try installing Snorby, please see these unsupported, out-of-date guides for Ubuntu 14 or Ubuntu 16.
There is a slight difference between BASE on Ubuntu 14 versus 16: BASE requires PHP 5, which isn’t available in the Ubuntu 16 archives (Ubuntu has moved on to PHP 7 in this release), so we have to use a PPA on Ubuntu 16 to install the php 5 packages:
# Ubuntu 16 only:
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install -y apache2 libapache2-mod-php5.6 php5.6-mysql php5.6-cli php5.6 php5.6-common php5.6-gd php5.6-cli php-pear php5.6-xml
In Ubuntu 14, we can just install the necessary libraries:
# Ubuntu 14 only:
sudo apt-get install -y apache2 libapache2-mod-php5 php5 php5-mysql php5-common php5-gd php5-cli php-pear
Next, install the PEAR Image_Graph package:
sudo pear install -f --alldeps Image_Graph
Download and install ADODB:
cd ~/snort_src
wget https://sourceforge.net/projects/adodb/files/adodb-php5-only/adodb-520-for-php5/adodb-5.20.8.tar.gz
tar -xvzf adodb-5.20.8.tar.gz
sudo mv adodb5 /var/adodb
sudo chmod -R 755 /var/adodb
Download BASE and copy it to the Apache web root:
cd ~/snort_src
wget http://sourceforge.net/projects/secureideas/files/BASE/base-1.4.5/base-1.4.5.tar.gz
tar xzvf base-1.4.5.tar.gz
sudo mv base-1.4.5 /var/www/html/base/
Create the BASE configuration file:
cd /var/www/html/base
sudo cp base_conf.php.dist base_conf.php
Now edit the config file:
sudo vi /var/www/html/base/base_conf.php
with the following settings (note that the trailing slash on line 80 is required, despite the instructions in the configuration file):
$BASE_urlpath = '/base';                  # line 50
$DBlib_path = '/var/adodb/';              # line 80
$alert_dbname = 'snort';                  # line 102
$alert_host = 'localhost';
$alert_port = '';
$alert_user = 'snort';
$alert_password = 'MySqlSNORTpassword';   # line 106
While in the base_conf.php file, you will also want to comment out line 457 (we don’t want the DejaVuSans font) and un-comment (remove the two forward slashes from) line 459, enabling a blank font. The font section (beginning at line 456) should look like this:
//$graph_font_name = "Verdana";
//$graph_font_name = "DejaVuSans";
//$graph_font_name = "Image_Graph_Font";
$graph_font_name = "";
Set permissions on the BASE folder, and, since the password is in the base_conf.php file, prevent other users from reading it:
sudo chown -R www-data:www-data /var/www/html/base
sudo chmod o-r /var/www/html/base/base_conf.php
Restart Apache:
sudo service apache2 restart
The last step to configure BASE is done via your web browser: browse to http://&lt;ip of your server&gt;/base, click the setup page link, and then click the Create BASE AG button to create the extra database tables that BASE needs.
Note: If you read through the BASE configuration file, there are a number of other options you can implement if you like. A few options are SMTP Email alerts, IP Address to Country Support, and user authentication.
Congratulations, if you’ve made it this far, you have a fully-functioning Snort system. Please continue on to the Conclusion for more things you can do with Snort.
In the previous articles in this series, we have created a complete Snort NIDS with a web interface and rulesets that automatically update. In this article, we will finalize the configuration of our Snort service by creating systemD scripts for the Snort and Barnyard2 daemons. If you are running Ubuntu 14, you should go see my Upstart article instead of this article.
Ubuntu 16 has moved to systemD for services / daemons. For more information about creating and managing systemD services, please see this excellent article.
To create the Snort systemD service, use an editor to create a service file:
sudo vi /lib/systemd/system/snort.service
with the following content (change ens160 if different on your system):
[Unit]
Description=Snort NIDS Daemon
After=syslog.target network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/snort -q -u snort -g snort -c /etc/snort/snort.conf -i ens160

[Install]
WantedBy=multi-user.target
Now we tell systemD that the service should be started at boot:
sudo systemctl enable snort
And start the Snort service:
sudo systemctl start snort
Verify the service is running
systemctl status snort
Next, create the Barnyard2 systemD service. We will add two flags here: -D to run as a daemon, and -a /var/log/snort/archived_logs, which will move logs that Barnyard2 has processed to the /var/log/snort/archived_logs/ folder. Use an editor to create a service file:
sudo vi /lib/systemd/system/barnyard2.service
With the following content:
[Unit]
Description=Barnyard2 Daemon
After=syslog.target network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -q -w /var/log/snort/barnyard2.waldo -g snort -u snort -D -a /var/log/snort/archived_logs

[Install]
WantedBy=multi-user.target
Now we tell systemD that the service should be started at boot:
sudo systemctl enable barnyard2
And start the barnyard2 service:
sudo systemctl start barnyard2
Verify the service is running
systemctl status barnyard2
Reboot the computer and check that both services are started
user@snortserver:~$ service snort status
snort start/running, process 1116
user@snortserver:~$ service barnyard2 status
barnyard2 start/running, process 1109
user@snortserver:~$
If both services are running, you are ready to move to the next section, where you will install BASE, a web-based GUI to view and profile alert data: Installing BASE
In the previous articles in this series, we have created a complete Snort NIDS with a web interface and rulesets that automatically update. In this article, we will finalize the configuration of our Snort service by creating Upstart scripts for the Snort and Barnyard2 daemons. If you are running Ubuntu 16, you should go see my systemD article instead of this article.
First create the Snort Upstart script:
sudo vi /etc/init/snort.conf
We will insert the below content into this Upstart script. Note that we are using the same flags that we used in earlier articles, so if Snort ran correctly for you earlier, then you shouldn’t need to change any of these flags:
description "Snort NIDS service" stop on runlevel [!2345] start on runlevel [2345] script exec /usr/sbin/snort -q -u snort -g snort -c /etc/snort/snort.conf -i eth0 -D end script
Now make the script executable, and tell Upstart that the script exists:
sudo chmod +x /etc/init/snort.conf
initctl list | grep snort
snort stop/waiting
Do the same for our Barnyard2 script:
sudo vi /etc/init/barnyard2.conf
with the following content:
description "barnyard2 service" stop on runlevel [!2345] start on runlevel [2345] script exec /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo -g snort -u snort -D -a /var/log/snort/archived_logs end script
Note that we have added a new flag here that we didn’t use before: -a /var/log/snort/archived_logs, which will move logs that Barnyard2 has processed to the /var/log/snort/archived_logs/ folder.
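Later, once Barnyard2 has been running for a while, you can confirm that processed unified2 files are being moved into that folder (it will be empty until at least one log has been processed):

ls -lh /var/log/snort/archived_logs/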
Now make the script executable, and tell Upstart that the script exists:
sudo chmod +x /etc/init/barnyard2.conf
initctl list | grep barnyard
barnyard2 stop/waiting
Reboot the computer and check that both services are started:
user@snortserver:~$ service snort status
snort start/running, process 1116
user@snortserver:~$ service barnyard2 status
barnyard2 start/running, process 1109
user@snortserver:~$
If both services are running, you are ready to move to the next section, where you will install BASE, a web-based GUI to view and profile alert data: Installing BASE
In the previous two sections of this article, we installed Snort and configured it to work as a NIDS with Barnyard2 processing packets that generated alerts based on a rule. In this article, we are going to install a Perl script called PulledPork, which will automatically download the latest rulesets from the Snort website.
To download the main free ruleset from Snort, you need an oinkcode. Register on the Snort website and save your oinkcode before continuing, as the oinkcode is required for the most popular free ruleset.
Install the PulledPork pre-requisites:
sudo apt-get install -y libcrypt-ssleay-perl liblwp-useragent-determined-perl
Download the latest PulledPork and install. Here we copy the actual perl file to /usr/local/bin and the needed configuration files to /etc/snort:
cd ~/snort_src
wget https://github.com/shirkdog/pulledpork/archive/master.tar.gz -O pulledpork-master.tar.gz
tar xzvf pulledpork-master.tar.gz
cd pulledpork-master/
sudo cp pulledpork.pl /usr/local/bin
sudo chmod +x /usr/local/bin/pulledpork.pl
sudo cp etc/*.conf /etc/snort
Test that PulledPork runs by running the following command, looking for the output below:
user@snortserver:~$ /usr/local/bin/pulledpork.pl -V
PulledPork v0.7.3 - Making signature updates great again!
user@snortserver:~$
Now that we are sure that PulledPork works, we need to configure it:
sudo vi /etc/snort/pulledpork.conf
Make the following changes to the pulledpork.conf file. Anywhere you see ‹oinkcode› enter your oinkcode from the Snort website. I have included line numbers to help you identify the location of these lines in the configuration file.
Line 19: enter your oinkcode where appropriate (or comment out if no oinkcode)
Line 29: un-comment for the Emerging Threats ruleset (not tested with this guide)
Line 74: change to: rule_path=/etc/snort/rules/snort.rules
Line 89: change to: local_rules=/etc/snort/rules/local.rules
Line 92: change to: sid_msg=/etc/snort/sid-msg.map
Line 96: change to: sid_msg_version=2
Line 119: change to: config_path=/etc/snort/snort.conf
Line 133: change to: distro=Ubuntu-12-04
Line 141: change to: black_list=/etc/snort/rules/iplists/black_list.rules
Line 150: change to: IPRVersion=/etc/snort/rules/iplists
We want to run PulledPork once manually to make sure it works. We use the following flags:
-c /etc/snort/pulledpork.conf   the location of the pulledpork.conf file
-l                              write detailed logs to /var/log
Run the following command:
sudo /usr/local/bin/pulledpork.pl -c /etc/snort/pulledpork.conf -l
After this command runs (it takes some time), you should now see snort.rules in /etc/snort/rules, and .so rules in /usr/local/lib/snort_dynamicrules. PulledPork combines all the rulesets that it downloads into these two files. You need to make sure to add the line include $RULE_PATH/snort.rules to the snort.conf file, or the PulledPork rules will never be read into memory when Snort starts:
sudo vi /etc/snort/snort.conf
Add the following line to enable snort to use the rules that PulledPork downloaded (line 547), after the line for local.rules:
include $RULE_PATH/snort.rules
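Before testing, you can also sanity-check that PulledPork actually produced the files described above (the counts you see will vary with the rulesets you enabled):

ls -lh /etc/snort/rules/snort.rules
wc -l /etc/snort/rules/snort.rules
ls /usr/local/lib/snort_dynamicrules/ | head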
Since we have modified snort.conf, we should test that Snort loads correctly in NIDS mode with the PulledPork rules included:
sudo snort -T -c /etc/snort/snort.conf -i eth0
Once that is successful, we want to test that Snort and Barnyard2 load correctly when run manually as daemons:
sudo /usr/local/bin/snort -u snort -g snort -c /etc/snort/snort.conf -i eth0 -D sudo barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo -g snort -u snort -D
As before, ping the IP address of the Snort eth0 interface, and then check the database for more events (remember to use the MYSQLSNORTPASSWORD):
mysql -u snort -p -D snort -e "select count(*) from event"
The number of events reported should be greater than what you saw the last time you ran this command. Now that we are sure that PulledPork runs correctly, we want to add PulledPork to root’s crontab to run daily:
sudo crontab -e
Choose any editor if prompted
The Snort team has asked you to randomize when PulledPork connects to their server to help with load balancing. In the example below, we have PulledPork checking at 04:01 every day. Change the minutes value (the 01 below) to a value between 0 and 59, and the hours value (the 04 below) to a value between 00 and 23. For more info on crontab layout, check here:
01 04 * * * /usr/local/bin/pulledpork.pl -c /etc/snort/pulledpork.conf -l
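If you would rather have the randomized schedule generated for you, a one-liner like this prints a crontab entry with a random minute and hour that you can paste in (purely a convenience; it relies on bash's RANDOM variable):

echo "$(( RANDOM % 60 )) $(( RANDOM % 24 )) * * * /usr/local/bin/pulledpork.pl -c /etc/snort/pulledpork.conf -l"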
Stop the running daemons from earlier testing:
user@snortserver:~$ ps aux | grep snort
snort     1296  0.0  2.1 297572 43988 ?   Ssl  03:15   0:00 /usr/local/bin/snort -q -u snort -g snort -c /etc/snort/snort.conf -i eth0 -D
user      1314  0.0  0.0   4444   824 pts/0  S+ 03:17   0:00 grep --color=auto snort
user@snortserver:~$ sudo kill 1296
user@snortserver:~$ ps aux | grep barnyard2
snort     1298  0.0  2.1 297572 43988 ?   Ssl  03:15   0:00 barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo -g snort -u snort -D
user      1316  0.0  0.0   4444   824 pts/0  S+ 03:17   0:00 grep --color=auto barnyard2
user@snortserver:~$ sudo kill 1298
Note: Snort needs to be reloaded to see the new rules. This can be done with kill -SIGHUP snort-pid, or you can restart the snort service (once that’s created in a later part of this guide).
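A minimal example of that reload, assuming the process is named snort (as it is when started the way shown above):

sudo kill -SIGHUP $(pgrep -x snort)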
Additional note about shared object rules: in addition to regular rules, the above section will download shared object rules, also known as ”SO rules”, ”pre-compiled rules”, or ”Shared Objects”. These are detection rules written in the shared object rule language, which is similar to C.
These rules are pre-compiled by the provider, allow for more complicated detection logic, and allow the rule itself to be obfuscated (say, to detect attacks against vulnerabilities that haven’t been patched yet, where the vendor wants to enable detection without revealing the vulnerability). These rules are compiled by the vendor for specific systems. One of these systems is Ubuntu 12, and luckily these rules also work on Ubuntu 14 and 15.
Congratulations, if you have output similar to the above then you have successfully configured PulledPork. Continue to the next section to install startup scripts for Snort and Barnyard2. Choose one of the two following links, depending on your version of Ubuntu: you will create Upstart scripts for Ubuntu 14, and systemD scripts for Ubuntu 16.
Choose One of the following to continue:
Ubuntu 14: Creating Upstart Scripts for Snort and Barnyard2
Ubuntu 16: Creating systemD Scripts for Snort
In the previous two articles in this series, we installed Snort and configured it to run as a NIDS. In this article, we are going to create a rule which causes Snort to generate an alert whenever it sees an ICMP message. If you want, you can skip this section, as it is not required to get a Snort NIDS up and running, but it will help you gain a much better understanding of how Snort rules are created and loaded.
In the previous article, we created the /etc/snort/rules/local.rules file and left it empty. We also edited the snort.conf file to tell Snort to load this local.rules file (when we un-commented the line: include $RULE_PATH/local.rules in snort.conf). When Snort starts, it will use the include directive in snort.conf to load all rules in local.rules. The local.rules file is a place where we can place rules that are specific to our environment, and is great for testing.
First, we need to edit the local.rules file:
sudo vi /etc/snort/rules/local.rules
Input the following text and save the file:
alert icmp any any -> $HOME_NET any (msg:"ICMP test detected"; GID:1; sid:10000001; rev:001; classtype:icmp-event;)
What this rule says is: for any ICMP packet seen from any network to our HOME_NET, generate an alert with the text “ICMP test detected”. The other information here (GID, SID, rev, classtype) is used to identify and classify the rule, and will be helpful when you install BASE.
Barnyard2 doesn’t read meta-information about alerts from the local.rules file. Without this information, Barnyard2 won’t know any details about the rule that triggered an alert, and will generate non-fatal errors when adding new rules with PulledPork (done in a later step). To make sure that Barnyard2 knows that the rule we created with unique identifier 10000001 has the message ”ICMP Test Detected”, as well as some other information (please see this blog post for more information), we add the following two lines to the /etc/snort/sid-msg.map file:
#v2
1 || 10000001 || 001 || icmp-event || 0 || ICMP Test detected || url,tools.ietf.org/html/rfc792
(the #v2 tells barnyard2 that the next line is the version 2 format, rather than v1)
Since we have made changes to the file that snort loads (local.rules), it is a good idea to test the configuration file again:
sudo snort -T -c /etc/snort/snort.conf -i eth0
If successful, you should be able to scroll up through the output and see that Snort has loaded our rule:
+++++++++++++++++++++++++++++++++++++++++++++++++++
Initializing rule chains...
1 Snort rules read
    1 detection rules
    0 decoder rules
    0 preprocessor rules
1 Option Chains linked into 1 Chain Headers
0 Dynamic rules
+++++++++++++++++++++++++++++++++++++++++++++++++++

+-------------------[Rule Port Counts]---------------------------------------
|             tcp     udp    icmp      ip
|     src       0       0       0       0
|     dst       0       0       0       0
|     any       0       0       1       0
|      nc       0       0       1       0
|     s+d       0       0       0       0
+----------------------------------------------------------------------------
Now to test the rule. We need to verify that Snort generates an alert when it processes an ICMP packet. We will launch Snort with the following options:
-A console                  the console option prints fast mode alerts to stdout
-q                          quiet; don't show banner and status report
-u snort                    run snort as the following user after startup
-g snort                    run snort as the following group after startup
-c /etc/snort/snort.conf    the path to our snort.conf file
-i eth0                     the interface to listen on
Run Snort with the command below, modifying the parameters as required specific for your configuration:
sudo /usr/local/bin/snort -A console -q -u snort -g snort -c /etc/snort/snort.conf -i eth0
Note: If you are running Ubuntu 16, remember that your interface name is not eth0.
Once you have started Snort with the above command, you need use another computer or another terminal window to ping the interface that you directed Snort to listen on. You should see output similar to the below on the terminal of the Snort machine:
10/31-02:27:19.663643  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.74 -> 10.0.0.64
10/31-02:27:19.663675  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.64 -> 10.0.0.74
10/31-02:27:20.658378  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.74 -> 10.0.0.64
10/31-02:27:20.658404  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.64 -> 10.0.0.74
10/31-02:27:21.766521  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.74 -> 10.0.0.64
10/31-02:27:21.766551  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.64 -> 10.0.0.74
10/31-02:27:22.766167  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.74 -> 10.0.0.64
10/31-02:27:22.766197  [**] [1:10000001:1] ICMP test detected [**] [Classification: Generic ICMP event] [Priority: 3] {ICMP} 10.0.0.64 -> 10.0.0.74
^C*** Caught Int-Signal
Press ctrl-c to stop Snort once you see output like the above. The example shows the ICMP echo request and reply messages from four pings between our Snort server (IP 10.0.0.64) and our other machine (10.0.0.74). If you look in /var/log/snort, you will also see a file named snort.log.nnnnnnnnnn (the n’s are replaced by numbers), which contains the same alert information that Snort printed to the screen.
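For reference, each alert line in the fast-alert output above breaks down roughly as follows (my annotation, not Snort output):

10/31-02:27:19.663643      timestamp (month/day-hours:minutes:seconds.microseconds)
[1:10000001:1]             [generator id : signature id (our sid) : rule revision]
ICMP test detected         the msg text from our rule
[Classification: ...]      derived from the rule's classtype
[Priority: 3]              the priority assigned to that classification
{ICMP}                     protocol
10.0.0.74 -> 10.0.0.64     source IP -> destination IP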
Congratulations, if you have output similar to the above then you have successfully created a rule for Snort to alert on. Continue to the next section to Install Barnyard2.
This is the second in a set of articles that will guide you through the steps of installing and configuring Snort as a Network Intrusion Detection System (NIDS). In the previous article we installed the Snort binary and verified that it executed correctly. In this section, we will configure Snort to run as a NIDS by creating the files and folders that Snort expects when running as a NIDS, and we will learn about the Snort configuration file: snort.conf.
First off, for security reasons we want Snort to run as an unprivileged user. We create a snort user and group for this purpose:
sudo groupadd snort
sudo useradd snort -r -s /sbin/nologin -c SNORT_IDS -g snort
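You can quickly confirm that the account was created and uses a non-login shell (an optional sanity check):

id snort                 # should show the snort user and group
grep snort /etc/passwd   # the entry should end in /sbin/nologin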
Next, we need to create a number of files and folders that Snort expects when running in NIDS mode, and then change the ownership of those files to our new snort user. Snort stores configuration files in /etc/snort, rules in /etc/snort/rules, compiled dynamic rules in /usr/local/lib/snort_dynamicrules, and its logs in /var/log/snort:
# Create the Snort directories:
sudo mkdir /etc/snort
sudo mkdir /etc/snort/rules
sudo mkdir /etc/snort/rules/iplists
sudo mkdir /etc/snort/preproc_rules
sudo mkdir /usr/local/lib/snort_dynamicrules
sudo mkdir /etc/snort/so_rules

# Create some files that store rules and ip lists
sudo touch /etc/snort/rules/iplists/black_list.rules
sudo touch /etc/snort/rules/iplists/white_list.rules
sudo touch /etc/snort/rules/local.rules
sudo touch /etc/snort/sid-msg.map

# Create our logging directories:
sudo mkdir /var/log/snort
sudo mkdir /var/log/snort/archived_logs

# Adjust permissions:
sudo chmod -R 5775 /etc/snort
sudo chmod -R 5775 /var/log/snort
sudo chmod -R 5775 /var/log/snort/archived_logs
sudo chmod -R 5775 /etc/snort/so_rules
sudo chmod -R 5775 /usr/local/lib/snort_dynamicrules

# Change ownership on folders:
sudo chown -R snort:snort /etc/snort
sudo chown -R snort:snort /var/log/snort
sudo chown -R snort:snort /usr/local/lib/snort_dynamicrules
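A quick way to confirm that the ownership changes took effect (another optional sanity check):

ls -ld /etc/snort /var/log/snort /usr/local/lib/snort_dynamicrules
# each directory should be owned by snort:snort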
We now need to copy a number of configuration files (the *.conf, *.map, and *.dtd files) from the extracted Snort tarball to the Snort configuration folder, along with the compiled dynamic preprocessor libraries.
Run the commands below to copy these files into place:
cd ~/snort_src/snort-2.9.9.0/etc/
sudo cp *.conf* /etc/snort
sudo cp *.map /etc/snort
sudo cp *.dtd /etc/snort

cd ~/snort_src/snort-2.9.9.0/src/dynamic-preprocessors/build/usr/local/lib/snort_dynamicpreprocessor/
sudo cp * /usr/local/lib/snort_dynamicpreprocessor/
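The tree output below only covers /etc/snort, so if you also want to confirm that the dynamic preprocessor libraries were copied, list their destination folder (the exact .so filenames vary by Snort version):

ls /usr/local/lib/snort_dynamicpreprocessor/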
The Snort configuration folder and file structure should now look like the following:
user@snortserver:~$ tree /etc/snort
/etc/snort
├── attribute_table.dtd
├── classification.config
├── file_magic.conf
├── gen-msg.map
├── preproc_rules
├── reference.config
├── rules
│   ├── local.rules
│   └── iplists
│       ├── black_list.rules
│       └── white_list.rules
├── sid-msg.map
├── snort.conf
├── so_rules
├── threshold.conf
└── unicode.map
The Snort configuration file is stored at /etc/snort/snort.conf, and contains all the settings that Snort will use when it is run in NIDS mode. This is a large file (well over 500 lines), and contains a number of options for the configuration of Snort. We are interested in only a few settings at this time.
First, we need to comment out the lines that cause Snort to import the default set of rule files. We do this because we will be using PulledPork to manage our rulesets, and PulledPork saves all the rules into a single file. The easy way to comment out all these lines is to use sed to prepend the “#” (hash) character to them. This is accomplished by running the following command:
sudo sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/' /etc/snort/snort.conf
The result of this command is that lines 547 to 651 in snort.conf will now be commented out, which prevents Snort from loading those rule files on start-up. Those rule files do not exist on our system, and Snort will generate an error on start-up if it tries to load a file that doesn’t exist. If you were to manually download the rule files from the Snort website and extract them to the /etc/snort/rules folder, then you would want those include lines left un-commented. We will instead use PulledPork (configured later) to manage all our rules and save them into a single file, which is why we want all those rule files commented out.
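You can spot-check that the sed command worked: the rule-file include lines should now begin with a hash (the exact line numbers may differ slightly between Snort versions):

grep -nF '#include $RULE_PATH' /etc/snort/snort.conf | head -5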
Next, we need to manually edit a few lines in the snort.conf file. Use vi (or your favorite editor) to edit /etc/snort/snort.conf:
sudo vi /etc/snort/snort.conf
First, we need to let Snort know the network range of your home network (the assets you are trying to protect) and all other external networks. We do this by editing lines 45 and 48 of snort.conf to tell it the IP ranges of these two networks. In the example below, our home network is 10.0.0.0 with a 24 bit subnet mask (255.255.255.0), and our external networks are all other networks.
ipvar HOME_NET 10.0.0.0/24    # (line 45) make this match your internal (friendly) network
ipvar EXTERNAL_NET any        # (line 48) treat all other networks as external
Note: it is not recommended to set EXTERNAL_NET to !$HOME_NET, as some guides suggest, since it can cause Snort to miss alerts.
Next we need to tell Snort about the locations of all the folders we created earlier. These settings are also part of the snort.conf file. I have included the line numbers after the hash so you can more easily find the setting (do not write the line number, just change the path to match what is below):
var RULE_PATH /etc/snort/rules                   # line 104
var SO_RULE_PATH /etc/snort/so_rules             # line 105
var PREPROC_RULE_PATH /etc/snort/preproc_rules   # line 106
var WHITE_LIST_PATH /etc/snort/rules/iplists     # line 113
var BLACK_LIST_PATH /etc/snort/rules/iplists     # line 114
Finally, we want to enable one included rule file: /etc/snort/rules/local.rules. We will use this file to store our own rules, including one rule that we will write in the next article in this series that will allow us to easily check that Snort is correctly generating alerts. Un-comment the following line (line 545) by deleting the hash from the beginning of the line:
include $RULE_PATH/local.rules
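After saving the file, you can confirm that this is the only $RULE_PATH include still active (the line number may differ slightly between Snort versions):

grep -nF 'include $RULE_PATH' /etc/snort/snort.conf | grep -v '#'
# expected: a single line referencing local.rules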
Snort has the ability to validate the configuration file, and you should do this whenever you make modifications to snort.conf. Run the following command to have Snort test the configuration file:
sudo snort -T -c /etc/snort/snort.conf -i eth0
The -T flag tells Snort to test its configuration, -c gives the path to the configuration file, and -i specifies the interface to listen on; specifying an interface is required as of Snort version 2.9.8.x. Make sure to use the correct interface. You should see some output, with the following lines at the end:
... Snort successfully validated the configuration! Snort exiting
Congratulations: if you have output similar to the above, then you have successfully configured Snort to run as a NIDS. Continue to the next section: Writing and Testing a Single Rule with Snort.