Elasticsearch, Logstash, and Kibana (ELK)
January 2015 • Presentation
In this presentation, the authors describe how they deployed ELK, provide a system architecture overview, and discuss the operational analytics that ELK can produce.
Elasticsearch is gaining momentum in industry as a distributed data store capable of handling many different types of data and analytical use cases. Elasticsearch's strength is that it combines the robust indexing and search capability of Lucene with clustering software that requires minimal configuration. This approach makes deploying, configuring, and prototyping Elasticsearch-based applications quick and easy.

The authors' team runs a large prototyping and development lab used by the government to evaluate the viability of different computer systems. Because the team is required to follow security best practices, it performs system logging, performance logging, and network monitoring. The team is small and has additional responsibilities, so to keep the security requirements manageable, it must automate the process of collecting, processing, and alerting on this data.

In the first part of this presentation, the authors describe how they deployed Elasticsearch, Logstash, and Kibana to meet these requirements, provide a system architecture overview, and explain their design decisions. They also describe sensor locations; the data types ingested (e.g., flow data, syslog, SNMP, and Bro); and the system's strengths, weaknesses, and limitations. In the second part, they describe security-related operational analytics that can be created with their collection system, including analytics that automatically alert on user changes, passive DNS data, and other data sources.
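As context for the collect-process-index pipeline the presentation describes, a minimal Logstash configuration for one of the mentioned data types (syslog) might look like the following sketch. The port number, Elasticsearch host, and index name are illustrative assumptions, not details from the presentation.

```
# Hypothetical Logstash pipeline: receive syslog, tag it, and index it
# into Elasticsearch. Port, host, and index name are assumptions.
input {
  syslog {
    port => 5514                         # listen for incoming syslog messages
  }
}
filter {
  # tag events by source so downstream Kibana dashboards can filter on it
  mutate { add_field => { "data_source" => "syslog" } }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"   # daily indices, the common convention
  }
}
```

In practice each sensor data type (flow data, SNMP, Bro logs) would get its own input and parsing filters feeding the same cluster.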
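One common way to build the kind of automated alerting the presentation mentions (e.g., alerting on user changes) is to run a periodic Elasticsearch query over recently indexed logs and fire a notification on any hits. The sketch below is a hedged illustration using the standard Query DSL; the index pattern, field names, search term, and 15-minute window are assumptions.

```
POST /logstash-*/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "useradd" } },
        { "range": { "@timestamp": { "gte": "now-15m" } } }
      ]
    }
  }
}
```

A scheduler could run this query every few minutes and alert whenever the hit count is nonzero; the same pattern generalizes to passive DNS data and other sources by changing the match clause.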