December 2014


Conky is a cool, lightweight desktop monitoring tool. SELKS comes with a ready-to-use Conky config (also shipped as part of the selks-scripts-stamus package).

Installing Conky gives you system monitoring right on your desktop. Out of the box, however, none of the stock Conky configs are very useful for SELKS, so we created one. The trick is that we combined the general system information Conky can read by itself with some stats that we harvest from the Suricata unix socket on the SELKS distro. That way you immediately get the Suricata runtime, capture method, running mode and version right on your desktop, alongside the usual system stats such as memory, CPU and network usage.
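For reference, here is a minimal sketch of the kind of unix socket queries such a config can rely on, using the suricatasc client that ships with Suricata (the socket path is an assumption based on the Debian default; adjust it to your setup):

#!/bin/sh
# Query the Suricata unix socket for the stats shown on the desktop.
# The socket path below is an assumption (Debian default); change if needed.
SOCKET=/var/run/suricata/suricata-command.socket

suricatasc -c version "$SOCKET"        # Suricata version
suricatasc -c uptime "$SOCKET"         # runtime in seconds
suricatasc -c running-mode "$SOCKET"   # e.g. workers or autofp
suricatasc -c capture-mode "$SOCKET"   # e.g. AF_PACKET_DEV

In a Conky config, commands like these are typically wrapped in ${execi <interval> <command>} variables so the values refresh periodically.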

The Conky config itself is already installed at /etc/conky/conky.conf, but it is also present in /opt/selks/Scripts/Configs/Conky as part of the selks-scripts-stamus package for record keeping (backup).
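If the active config ever gets broken, it can be restored from that backup copy. A minimal sketch, assuming the backed-up file inside that directory is named conky.conf:

# Restore the SELKS Conky config from the packaged backup copy
# (the exact file name inside the backup directory is an assumption).
sudo cp /opt/selks/Scripts/Configs/Conky/conky.conf /etc/conky/conky.conf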

So if you are using the Desktop edition of SELKS, you can use Conky easily by running:

conky -d

in a terminal. Then you can close the terminal if you wish. That’s all 🙂

[Screenshot: SELKS 1.2 desktop running the Conky overlay]

The SELKS Conky config works best with a screen resolution of 1680 x 1050 or higher.

You can get further ideas for Conky configs by googling “conky templates”; there is plenty of stuff out there.

 


Introduction

Elasticsearch and Kibana are wonderful tools but, like all tools, you need to know their limits. This article explains why you must be careful when reading the data they display and how to improve the situation by using an existing Elasticsearch feature.

The Problem

It all started with the analysis of an SSH bruteforce attack coming from Vietnam. This attack was interesting because of the announced SSH client, “PuTTY-Local: Mar 19 2005 07:19:17”, which really looks like a genuine PuTTY version string, whereas most attacks don’t spoof their software version and simply reveal what they are using.

The Kibana dashboard was showing all the information needed to get a good idea of the attack:

[Screenshot: Kibana dashboard overview of the SSH bruteforce attack]

But when looking at the least used and most used passwords, there was something really strange:

[Screenshot: least used and most used passwords panels]

For example, webmaster appears in both panels with different counts, which is not logical.

Adding a filter on this value gave a somewhat surprising result:

[Screenshot: result after filtering on the webmaster password]

Looking at the detail of the events, it was obvious that this last result was correct: this SSH bruteforce tried 10 different logins and always used the same dictionary of 23 passwords.

Towards a solution

So the panels showing the top passwords and the least seen passwords display incorrect data in some circumstances. They were set up in Kibana using the terms panel type.

In Elasticsearch, this corresponds to a facets query. Here is the content of the query, with the filter removed for readability:

{
  "facets": {
    "terms": {
      "terms": {
        "field": "password.raw",
        "size": 10,
        "order": "count",
        "exclude": []
      }
    }
  }
}
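To reproduce the behavior outside Kibana, the same facets query can be sent to Elasticsearch directly. A minimal sketch, assuming a local Elasticsearch 1.x node on port 9200 and a logstash-* index holding the SSH events (host and index pattern are assumptions; the field comes from the query above):

# Run the same terms facet directly against Elasticsearch 1.x
# (host and index pattern are assumptions for illustration).
curl -s 'http://localhost:9200/logstash-*/_search?search_type=count' -d '{
  "facets": {
    "terms": {
      "terms": {
        "field": "password.raw",
        "size": 10,
        "order": "count"
      }
    }
  }
}'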

So we have a simple request, yet it does not return correct data. The explanation of this problem can be found in Elasticsearch Issue #1305.

Adrien Grand explains that an algorithm returning possibly inaccurate values was chosen to avoid a search that would be too memory and network intensive: each shard only returns its own top terms, and those partial counts are then merged. This default algorithm mainly goes wrong when there are more distinct values than the number of values asked for.

We can confirm that behavior in our case by asking for 30 values (more than the 23 different passwords we have):

[Screenshot: passwords panel when asking for 30 values]

The result is correct this time.

If we continue reading Adrien Grand’s comment on the issue, we see that a shard_size parameter has been introduced to improve the accuracy of the algorithm.

So we can use this parameter to improve the accuracy of the queries. Patching this in Kibana is trivial:

diff --git a/src/vendor/elasticjs/elastic.js b/src/vendor/elasticjs/elastic.js
index ba9c8ee..8daa72a 100644
--- a/src/vendor/elasticjs/elastic.js
+++ b/src/vendor/elasticjs/elastic.js
@@ -3085,6 +3085,7 @@
         }
 
         facet[name].terms.size = facetSize;
+        facet[name].terms.shard_size = 10 * facetSize;
         return this;
       },

Here we simply choose a shard_size far larger than the number of elements asked for in the query. We could also have used the special value 0 (or Integer.MAX_VALUE) for shard_size to get perfectly accurate results, but in our test setup Elasticsearch failed to honor the request with this parameter. And furthermore, the result was already correct:

[Screenshot: passwords panels after the shard_size patch]
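The effect can also be checked without patching Kibana by adding shard_size to the facets query by hand, following the same 10x rule as the patch above. A minimal sketch, with the same assumptions as before:

# Same terms facet, with shard_size set to 10x the requested size,
# mirroring the Kibana patch above (host and index are assumptions).
curl -s 'http://localhost:9200/logstash-*/_search?search_type=count' -d '{
  "facets": {
    "terms": {
      "terms": {
        "field": "password.raw",
        "size": 10,
        "shard_size": 100,
        "order": "count"
      }
    }
  }
}'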

This patch has been proposed to Elasticsearch as PR 2106.

It was a small patch, but it fixed our dashboard, as the values in the terms panels are now correct:

[Screenshot: dashboard with correct values in the terms panels]