How to Set Up Your Own Home Lab with ELK
Note: This is a guide that I wrote for my own home lab setup. It is not using the latest version of ELK, and there are many other great ways to get...
This is part two of a four-part series on how to set up your own home lab with ELK. Read part one.
You may have noticed that the URLs we’ve been using to test our running services are plain http. That can be okay in a home lab environment; however, it leaves our traffic readable by anyone who might be monitoring the network. No bueno. It also locks us out of some great features of the ELK stack. Also no bueno.
We’ve intentionally left a few pieces of the setup undone for exactly this reason, so that we wouldn’t have to change anything in our stack unnecessarily. That means that at this point in the article, we don’t yet have a fully functioning stack. So buckle up: we’re about to finish this thing out, build a fully functional SIEM, and add security in transit for our data.
To start, we need a way to verify identities, and that means certificates. These certificates are, in essence, no different from the ones you see while browsing the web over HTTPS; however, instead of being “verified” by a trusted entity, they’ll be verified by we, ourselves, and us.
There’s no reason to make this harder than it needs to be, so let’s start by creating a “template” that some elasticsearch utilities can use to generate our certificates for us automatically.
1. Start by navigating to the /usr/share/elasticsearch/ directory.
cd /usr/share/elasticsearch/
2. Create a new file here called instances.yml using your favorite text editor.
sudo vim instances.yml
3. Paste the following entries in the new file. These will be used to associate each service name with the IP of our machine. IMPORTANT NOTE: If you did not install logstash in the previous steps, remove it from this file and the subsequent certificate steps to avoid any “file or directory not found” errors.
instances:
  - name: "elasticsearch"
    ip:
      - "192.168.1.150"
  - name: "kibana"
    ip:
      - "192.168.1.150"
  - name: "logstash"
    ip:
      - "192.168.1.150"
  - name: "fleet"
    ip:
      - "192.168.1.150"
4. Save this file and exit the text editor.
What we’re essentially setting up is our own little Public Key Infrastructure (PKI). This is similar to how certificates are validated when you’re browsing the internet as well!
If you’re unfamiliar with key exchanges, asymmetric encryption, and other related topics, don’t worry; they’re not required to follow the setup steps. That said, it’s extremely helpful to understand what’s happening here, since it removes some of the “behind the scenes magic,” so we’ll assume some base knowledge in the explanations that follow.
When you browse to a site over https, that site has to prove who it is before your browser will accept it. Think of a police officer checking an ID. Does it look real? Is it expired? Does the photo match? You may insist, “Yes, that’s me,” but does the officer take you at your word? How do they know you are who you say you are? They may check their records, make some calls, and scan your ID to make sure it’s valid. That verification is the role of the certificate authority: it’s the entity our machine relies on to ensure the certificate is valid. It can vouch for the certificate and say, “Yes, this is valid, because I signed it and I have a record of who it belongs to.”
You can imagine there’s a TON of power with that—but what if that certificate is invalid or the certificate authority is untrusted? You’ve likely run into this before—your browser alerts you and says “Careful, I don’t know who this is—enter at your own risk.”
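If you’d like to watch this identity check happen, openssl can act as a bare-bones client and print the certificate a server presents. Here’s a quick sketch against a public site (example.com is just a stand-in; any https host will do):

# Fetch the server's certificate, then print who issued it, who it belongs to, and its validity window
openssl s_client -connect example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates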
We will make an “untrusted” or “self-signed” certificate authority using some of elasticsearch’s built-in utilities.
1. Create a Certificate Authority bundle using the elasticsearch-certutil.
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem
2. Now, we’ll want to unzip this bundle using the unzip utility we installed at the beginning of the article.
sudo unzip ./elastic-stack-ca.zip
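If you’d like to confirm what came out of the bundle before moving on, a quick listing should show the CA’s certificate and private key:

ls ca/
# Expected: ca.crt  ca.key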
Now we have a ca/ directory containing a certificate file and a matching key for our certificate authority. Next we’ll generate the service certificates and sign them with it, using that instances.yml file we created earlier!
3. We’ll use the same elasticsearch-certutil utility to generate these certificates:
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert ca/ca.crt --ca-key ca/ca.key --pem --in instances.yml --out certs.zip
4. Next we’ll want to unzip the new cert bundle and package it properly.
sudo unzip certs.zip
sudo mkdir certs
Unzipping created a directory for each entry we had in the file, each holding a matching .crt and .key. Pretty cool stuff! Now we’ll consolidate everything into that new certs directory for organization.
5. Move the dedicated cert files to the new directory we made.
sudo mv /usr/share/elasticsearch/elasticsearch/* certs/
sudo mv /usr/share/elasticsearch/kibana/* certs/
sudo mv /usr/share/elasticsearch/logstash/* certs/
sudo mv /usr/share/elasticsearch/fleet/* certs/
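At this point, certs/ should hold a .crt and .key pair for each instance we defined in instances.yml:

ls certs/
# Expected: elasticsearch.crt  elasticsearch.key  fleet.crt  fleet.key
#           kibana.crt  kibana.key  logstash.crt  logstash.key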
6. We’ll then prepare, for each of our services, a certs directory with a ca subdirectory: the ca subdirectory will house the certificate authority’s public certificate, and the certs directory will hold that service’s dedicated certificate. The -p flag creates both levels at once.
sudo mkdir -p /etc/kibana/certs/ca
sudo mkdir -p /etc/elasticsearch/certs/ca
sudo mkdir -p /etc/logstash/certs/ca
sudo mkdir -p /etc/fleet/certs/ca
7. Copy the certificate authority’s files into each ca directory.
sudo cp ca/ca.* /etc/kibana/certs/ca
sudo cp ca/ca.* /etc/elasticsearch/certs/ca
sudo cp ca/ca.* /etc/logstash/certs/ca
sudo cp ca/ca.* /etc/fleet/certs/ca
8. Do the same with each dedicated service certificate, placing it in the matching service’s certs directory.
sudo cp certs/elasticsearch.* /etc/elasticsearch/certs/
sudo cp certs/kibana.* /etc/kibana/certs/
sudo cp certs/logstash.* /etc/logstash/certs/
sudo cp certs/fleet.* /etc/fleet/certs/
9. For easy copying later, we’ll also save the CA’s public certificate to the root of the filesystem.
sudo cp ca/ca.crt /
10. Clean up the leftover files to make things nice and tidy.
sudo rm -r elasticsearch/ kibana/ fleet/ logstash/
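Each service’s configuration directory should now look something like this (shown here for elasticsearch; the others follow the same pattern):

sudo ls -R /etc/elasticsearch/certs/
# /etc/elasticsearch/certs/:
# ca  elasticsearch.crt  elasticsearch.key
#
# /etc/elasticsearch/certs/ca:
# ca.crt  ca.key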
Now that we’ve created our certificate authority and generated our certificate files, we need to take another commonly missed step to secure our ELK deployment: proper permissions.
Notice how in some directories we had to run sudo to gain administrator access to their contents? That’s because of how Linux file permissions are configured by default. With root permissions we can access nearly everything in the file system without issue. So we should just give root permissions to our applications, right? That way they can access everything they need without permissions issues.
Absolutely not. While running things with wide-open permissions may make setup easier, it can be a nightmare when those permissions are used against you. Check out almost any CTF box and you’ll find a perfect example. If the application runs as root, an adversary who compromises it gains root privileges too, giving them free rein over the system.
So what do we do? We’ll create a user and assign them ownership of the directories they require.
When we installed elasticsearch, logstash, and kibana, a user was created for each service.
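If you’d like to confirm those accounts exist on your machine, getent can look them up (the exact UID/GID values in the output will differ per system):

# One line of output per account the packages created
getent passwd elasticsearch kibana logstash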
1. Navigate to the /usr/share directory and take ownership of the elasticsearch directories with the elasticsearch user; we’ll handle the other services right after.
sudo chown -R elasticsearch:elasticsearch elasticsearch/
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch/certs/ca
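The commands above only cover elasticsearch. Assuming the kibana and logstash packages created their own service users, which is typical for the deb/rpm installs, the equivalent commands for their certificate directories would look like this (a sketch; adjust if your packages created different account names):

sudo chown -R kibana:kibana /etc/kibana/certs
sudo chown -R logstash:logstash /etc/logstash/certs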
We should be in a good spot! Now let’s verify that our certificates look correct before wrapping up this section. We’ll use the openssl utility to print our certificate information to the console.
sudo openssl x509 -in /etc/elasticsearch/certs/elasticsearch.crt -text -noout
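The full dump is verbose. If you just want the highlights, namely who issued the certificate, how long it’s valid, and which IPs it covers, you can ask openssl for specific fields instead:

sudo openssl x509 -in /etc/elasticsearch/certs/elasticsearch.crt -noout -issuer -dates
# The Subject Alternative Name entry should list the IP from instances.yml (192.168.1.150)
sudo openssl x509 -in /etc/elasticsearch/certs/elasticsearch.crt -text -noout | grep -A1 "Subject Alternative Name"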
So let’s recap up to this point. We’ve gone through all of this work to set up our services and generate certificates that verify their identities. Now we need to tell those services where to find the proper files so they can communicate over https.
Copy the following and paste it to the bottom of the /etc/kibana/kibana.yml file using your favorite text editor.
server.ssl.enabled: true
server.ssl.certificate: "/etc/kibana/certs/kibana.crt"
server.ssl.key: "/etc/kibana/certs/kibana.key"
elasticsearch.hosts: ["https://192.168.1.150:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca/ca.crt"]
elasticsearch.ssl.certificate: "/etc/kibana/certs/kibana.crt"
elasticsearch.ssl.key: "/etc/kibana/certs/kibana.key"
server.publicBaseUrl: "https://192.168.1.150:5601"
xpack.security.enabled: true
xpack.security.session.idleTimeout: "30m"
xpack.encryptedSavedObjects.encryptionKey: "SomeReallyReallyLongEncryptionKey"
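One note on that last line: don’t keep the placeholder. Kibana requires this encryption key to be at least 32 characters, and an easy way to produce a suitably random one is openssl:

# Prints a random 64-character hex string; paste it in place of the placeholder above
openssl rand -hex 32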
Copy the following and paste it to the bottom of the /etc/elasticsearch/elasticsearch.yml file using your favorite text editor.
xpack.security.enabled: true
xpack.security.authc.api_key.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.http.ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
Now that we’ve changed the kibana and elasticsearch configurations, we’ll want to restart both services to apply the changes.
sudo systemctl restart elasticsearch
sudo systemctl restart kibana
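If either service fails to come back up, a typo in the YAML or a bad certificate path is the usual culprit, and the system journal will typically tell you which:

# Show the last 50 log lines for each service
sudo journalctl -u elasticsearch --no-pager -n 50
sudo journalctl -u kibana --no-pager -n 50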
Let’s verify that everything is still working now with systemctl and https:
sudo systemctl status elasticsearch
sudo systemctl status kibana
curl -XGET https://192.168.1.150:9200
Uh-oh, a certificate issue! That’s because our machine doesn’t yet know to trust this certificate authority. We’ll resolve this properly later, but for now let’s pass the insecure flag to bypass certificate validation.
curl -XGET https://192.168.1.150:9200 --insecure
Yay! A new error and a bundle of junk. That junk is another JSON payload, which we can parse with jq for readability. Try it now, and you’ll see that after our changes, elasticsearch requires credentials to connect.
curl -XGET https://192.168.1.150:9200 --insecure | jq
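For reference, the response at this stage should be an authentication error along these lines (trimmed here, and the exact wording varies by version):

{
  "error": {
    "type": "security_exception",
    "reason": "missing authentication credentials for REST request [/]"
  },
  "status": 401
}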
We have a working Application Programming Interface (API) now! We’re still missing some credentials though; let’s go ahead and generate those next.
To generate those passwords, we’ll use another elasticsearch utility, elasticsearch-setup-passwords, to create randomized passwords for our user and service accounts. These will only flash on screen once though! So be sure to document them (perhaps in a secure note in a password manager?).
1. Run the elasticsearch-setup-passwords utility to generate those passwords:
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Changed password for user apm_system
PASSWORD apm_system = <Password>
Changed password for user kibana_system
PASSWORD kibana_system = <Password>
Changed password for user kibana
PASSWORD kibana = <Password>
Changed password for user logstash_system
PASSWORD logstash_system = <Password>
Changed password for user beats_system
PASSWORD beats_system = <Password>
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = <Password>
Changed password for user elastic
PASSWORD elastic = <Password>
Now, let’s edit our /etc/kibana/kibana.yml file to add the generated kibana_system credentials that kibana will use to interact with elasticsearch.
elasticsearch.username: "kibana_system"
elasticsearch.password: "<Password>"
And restart both services a final time!
sudo systemctl restart kibana
sudo systemctl restart elasticsearch
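And remember that --insecure flag from earlier? Since we saved the CA’s public certificate to the root of the filesystem, we can now tell curl to trust it explicitly and authenticate as the elastic superuser (you’ll be prompted for the password you recorded):

# A trusted, authenticated request; no --insecure needed
curl --cacert /ca.crt -u elastic https://192.168.1.150:9200 | jq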
Now, let’s test our new deployment by browsing to our ELK IP in a web browser! It may take a moment for things to come up at first:
https://192.168.1.150:5601
With a few more minutes and a refresh, we have a working login!
Use the elastic credentials generated by the elasticsearch-setup-passwords utility to log in!
Read part three, where we cover ingesting data with agent and fleet!