
Logging with the ELK-stack


In our organisation, we concluded that centralising our website and app logs would be a productive step. It keeps us informed about the health of our running projects and can help us catch issues before they escalate into disasters.

Enter the ELK-stack.

What is the ELK-stack?

The ELK-stack consists of three technologies: Elasticsearch, Logstash, and Kibana.

With the help of these three technologies, we can collect logs from any of our website/app backends and frontends, and send them to our self-hosted instance of the ELK-stack.

But how do the three of them work together, and what are their roles?

Logstash

When our instance of the ELK-stack receives a new log, the first service to handle it is Logstash. The goal of Logstash is to receive a log, transform it into a standardised format, and then pass it on to Elasticsearch.
Besides mutating incoming logs, Logstash can also be configured to filter certain logs out entirely.

Here's how that works:

HTTP input

When warnings or errors occur in our app backends and frontends, the apps themselves are responsible for sending the full error to Logstash. We have configured our Logstash service to handle JavaScript errors from Node.js or plain browser JavaScript. The HTTP input configuration of Logstash then extracts the website name, the environment (dev/staging/production), the error text, and the stack trace.
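To make that concrete, here is a minimal sketch of what the sending side might look like. This is illustrative only: the Logstash endpoint and the field names are assumptions, not our actual configuration.

// Minimal sketch of shipping an error to a Logstash HTTP input.
// The endpoint URL and field names are assumptions for illustration.
const LOGSTASH_URL = 'https://logstash.example.com:8080';

async function reportError(error) {
  try {
    await fetch(LOGSTASH_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        website: 'example-shop',   // website name
        environment: 'production', // dev/staging/production
        message: error.message,    // error text
        stack: error.stack,        // stack trace
      }),
    });
  } catch (e) {
    // Log shipping should never take the app down with it.
    console.error('Failed to ship error to Logstash', e);
  }
}

// Example: ship uncaught Node.js exceptions as they happen.
process.on('uncaughtException', (err) => reportError(err));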

All of this information is then moulded into a uniform log object that the next service can understand. Most importantly, the structure of this log object is exactly the same for our Node.js/React frontends and backends as it is for our WordPress websites.
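As a sketch of what such a uniform object could look like (the field names here are assumptions, not our actual schema):

// One possible shape for the uniform log object, regardless of whether it
// originated in Node.js, React, or WordPress. Field names are assumptions.
const exampleLog = {
  website: 'example-shop',
  environment: 'staging',
  level: 'error',
  message: "TypeError: Cannot read properties of undefined (reading 'id')",
  stack: 'TypeError: ...\n    at checkout (/app/src/cart.js:42:17)',
  '@timestamp': '2021-03-15T09:30:00.000Z', // typically added by Logstash
};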

Beats

Beats are lightweight shippers that extend the input side of our Logstash service. We use Filebeat for our WordPress websites, which by default just write their logs to the file system. Filebeat watches these log files and sends any changes to our Logstash service. Once again, those logs are transformed into the uniform object on the receiving end by Logstash's Filebeat configuration. The log object is then sent to the next service in the ELK-stack: Elasticsearch.
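Conceptually, what happens to each raw WordPress log line on the Logstash side looks something like the JavaScript sketch below. In reality this work is done by Logstash's own filter pipeline (grok, date, mutate and friends), and the log format and field names here are assumptions.

// Conceptual sketch only: Logstash filters do this work, not JavaScript.
// The PHP error-log format and field names are assumptions for illustration.
function toUniformLog(line, website, environment) {
  const match = line.match(/^\[(?<time>[^\]]+)\]\s+PHP (?<level>[\w ]+?):\s+(?<message>.*)$/);
  if (!match) return null; // unparseable lines could be dropped here (filtering)
  return {
    website,
    environment,
    level: match.groups.level.toLowerCase().includes('error') ? 'error' : 'warning',
    message: match.groups.message,
    // A real pipeline would parse this into @timestamp with Logstash's date filter.
    time: match.groups.time,
  };
}

const line = '[15-Mar-2021 09:30:00 UTC] PHP Fatal error: Uncaught Error: Call to undefined function foo() in /var/www/html/wp-content/plugins/shop/shop.php:12';
console.log(toUniformLog(line, 'example-shop', 'production'));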

Elasticsearch

Elasticsearch is a lightning-fast, schema-less, document-oriented database built to handle huge volumes of data. Elasticsearch's role in the ELK-stack is to store and retrieve logs.
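To give a feel for what storing and retrieving mean here, a minimal interaction with Elasticsearch's REST API might look like the sketch below. The index name and fields are assumptions, and this is not code from our own setup.

// Illustrative sketch of Elasticsearch's REST API; index and field names
// are assumptions. Assumes a local, unsecured instance on port 9200.
const ES = 'http://localhost:9200';

// Store a log document:
await fetch(`${ES}/logs/_doc`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ website: 'example-shop', environment: 'production', message: 'Boom' }),
});

// Retrieve up to ten matching logs:
const res = await fetch(`${ES}/logs/_search`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: { match: { environment: 'production' } }, size: 10 }),
});
const { hits } = await res.json();
console.log(hits.hits.map((h) => h._source.message));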

So far, all of my own use-cases with Elasticsearch have been relatively simple. I have not yet directly used the REST API that Elasticsearch provides, so I don't feel qualified to expand on this topic.

In my view, Elasticsearch is the centrepiece of the ELK-stack. It exists as a layer between Logstash and Kibana, and most of the magic happens here.

Kibana

Kibana is the visual component of the ELK-stack. In this dashboard, we can interact with all of the data within Elasticsearch.

Kibana allows us to filter through all of the data to find the logs that are relevant to our interests. These filters can be saved, reused, and shared between different Kibana users. Because of this, we can set up a single filter that collects every log in which a very specific error occurred, across all of our websites and apps.
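For instance, a saved filter along the lines of environment : "production" and level : "error" and message : *TypeError* (Kibana's KQL syntax; the field names are assumed, matching the sketches above) would surface every production TypeError across all sites at once.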

We can also use Kibana to generate statistics about our data and present them in various graphs.

Other possibilities with Kibana range from uptime monitoring to machine learning, but our organisation's use-case is mostly restricted to filtering our logs.

Conclusion

The ELK-stack is a very powerful toolset, and I would recommend trying it out if you are dealing with more than a dozen websites, even if it seems like overkill right now. It's always good to be well prepared for future growth and the issues it brings along.
