DevOps (Development Operations) is defined in many ways, and some folks are really passionate about it. To keep terms simple, I define DevOps as a mesh between software development and software deployment operations.
It’s on a sliding scale because everyone does it differently. I’ve seen an organization with one poor soul, who is one bad day away from burnout, running everything. At another organization, they had a full team of engineers whose sole job was to make sure software repositories did not contain any user credentials.
When I got my first job as a software engineer, I was doing enterprise Java web development. We were working on an internal JSP web app that was hosted in an IBM WebSphere Portal. I was on a small team of four devs, and it was my turn to learn how to “deploy” the software to the customer’s site. We cut a shiny brand-new CD containing the latest version of our software, and I printed out my driving directions from MapQuest.
I was on my way to the customer’s office building, so I could manually install the latest version of our software. Nearly 25 minutes and a stop at Dunkin Donuts later, I was in a seat in front of a server rack and an admin console screen with an IT employee looking over my shoulder as I popped the CD into the drive bay and went through my checklist.
On the server, I could see my predecessors had copied older versions of the software to an “archive” folder. The list of files was enormous. So, I added my stone to the top of the pile and copied the current software to the deployment graveyard. Then I copied our new software to the Tomcat folder and “deployed” the latest software.
Once everything was restarted, I checked the website to make sure everything was working. Of course, it wasn’t! There was the dreaded 500 Internal Server Error message on the screen, and panic ensued. I quickly resurrected the old version from the deployment graveyard and put everything back to how it was. I grabbed a copy of the log file and scurried back to my team with my tail between my legs. I was embarrassed.
That story above is a primitive ancestor to DevOps. I was a developer doing deployment operations. I never learned about network architecture or the IBM WebSphere platform or anything like that prior to this job, but as a small team, this is what we had to do to get our software out there. These days there are many more options and platforms available to make the process much smoother and less error-prone.
What follows is by no means an exhaustive list. I would like to write a bit more about each of these topics in the future, but for now I’d like to share how DevOps makes things better: not just less hassle, but better for the overall security of your application.
Containerization is another buzzword like DevOps, but what it really means is packaging your application and everything it depends on into a lightweight, isolated unit that runs the same way anywhere, rather than installing it directly on a physical machine.
It’s like deploying your software on tiny portable servers that you can move around and share with people. Docker is the most common tool used to deploy web apps this way, but there are other solutions out there. Even building a VMware VM and copying it to a server gets you a similar kind of portability, albeit in a far more cumbersome way.
So, what security benefits do you get from containerization? When you build a Docker container, you start with a base image. The base image can be Ubuntu, or Red Hat, or some minimal version of Linux, or even Windows Server. Not only that, but you can also customize the base image and set up things like directories, users, roles, etc… and you can see where I’m going with this. You can preconfigure a container image with good security practices (see a future blog article about that!) and maintain this image for your deployments.
It’s like having an up-to-date and patched server to deploy to every time! Not only will this set you up for success, but you’ll also have a consistent process for deployment. You’ll know what software versions the containers run. You’ll know that if they’re compromised, it’ll be tough for an attacker to get anywhere.
Automation is the best thing about containerization. Going back to my story of driving to the customer site, if we were able to leverage Docker, I could’ve tested and maybe deployed the whole application from my desk and saved myself a trip and some worry. With good security practices, containerization can grant you consistent and automated security.
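To make that concrete, here’s a minimal sketch using the Docker SDK for Python (docker-py). The base image, tag, and hardening flags are illustrative choices, not a prescription:

```python
import io

import docker  # Docker SDK for Python: pip install docker

# A tiny, illustrative Dockerfile: pinned slim base image and a non-root user.
DOCKERFILE = b"""
FROM python:3.12-slim
RUN useradd --create-home appuser
USER appuser
CMD ["python", "-c", "print('app is running')"]
"""

client = docker.from_env()

# Build the image from the inline Dockerfile.
image, _ = client.images.build(fileobj=io.BytesIO(DOCKERFILE), tag="myapp:1.0")

# Run it with a read-only filesystem and all Linux capabilities dropped.
container = client.containers.run(
    "myapp:1.0",
    detach=True,
    read_only=True,
    cap_drop=["ALL"],
)
container.wait()
print(container.logs().decode())
```

The non-root user baked into the image, plus the read-only, no-capabilities run options, are exactly the kind of “good security practices in the base image” described above, and the whole thing runs from your desk.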
We’ve talked about keeping the images up-to-date and secure. What about the packages and dependencies your software uses? It’s just as crucial to know what version of the operating system you’re using as it is to know what libraries your application is using.
DevOps is the pipeline between the devs writing code and that code being turned into a product for customers. A section of your pipeline should be dedicated to security and to looking for out-of-date software versions. Tools like Black Duck and SonarQube will scan your code and tell you if you’re using old and potentially vulnerable third-party packages.
The best part is that you can do this automatically in a Continuous Integration (CI) system. That’s just a fancy phrase for automatically turning your source code into a deployable product. Automating security in your build pipeline helps you catch vulnerable software before it goes out the door, and it lets you quickly test the fixes to make sure they didn’t break your application.
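Black Duck and SonarQube are full platforms, but the same gate can be a few lines in any CI job. Here’s a hedged sketch for a Python project using the open-source pip-audit tool (assuming it’s installed in the build environment):

```python
import subprocess
import sys

# Ask pip-audit to check the project's pinned dependencies for known CVEs.
# It exits non-zero when it finds vulnerable packages.
result = subprocess.run(
    ["pip-audit", "-r", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)

if result.returncode != 0:
    print("Vulnerable dependencies found; failing the build.", file=sys.stderr)
    sys.exit(1)  # a non-zero exit fails the CI stage
```

Wire a step like this into every build and out-of-date libraries get flagged before they ever reach a customer.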
A complementary practice to automatically looking for vulnerable dependencies in your software is looking for passwords and other material that should be kept secret.
A good and secure baseline container would keep a hacker from getting root permissions on the server–but they could still potentially have access to the application source code running on the server. What happens if that source code has hard-coded database credentials? Does your code have any hard-coded usernames and passwords?
All of us have done this at some point, so don’t feel singled out if you have. As the example above makes evident, putting credentials in your code or in config scripts on the server is not a good idea. Luckily, there’s a DevOps solution that can help!
Secret scanning means scanning your code for credentials, certificates, and other private material that shouldn’t be public. There are even many open-source solutions that can be automated in your pipeline.
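To show how simple the core idea is, here’s a toy scanner in Python. Real open-source tools like gitleaks and truffleHog do this with far richer rule sets and entropy checks, so treat this as a sketch of the concept only:

```python
import pathlib
import re

# Toy rules for illustration; real scanners ship far more thorough rule sets.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "private key header": re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),
}

# Walk the current directory and report anything that looks like a secret.
for path in pathlib.Path(".").rglob("*"):
    if not path.is_file():
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            line_no = text[: match.start()].count("\n") + 1
            print(f"{path}:{line_no}: possible {name}")
```

Run something like this (or, better, a real tool) as a pipeline step and a hard-coded password fails the build instead of shipping.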
So, what do I do with the passwords my application needs to connect to the database and actually function? Secrets management: yet another buzzword. This is the practice of using a software “vault.” You grant your specific application permission to the vault, and your app retrieves secrets from the vault only when it needs them, like when it needs to connect to the database. Then the secret “goes back in the vault,” meaning it’s never stored anywhere in your application code (only in memory). Again, we’ll leave the details for another blog!
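That said, a small sketch shows the shape of it. This one assumes HashiCorp Vault and its Python client, hvac; the URL, token handling, and secret path are hypothetical placeholders:

```python
import os

import hvac  # HashiCorp Vault's Python client: pip install hvac

# Hypothetical Vault address; a real app would authenticate via AppRole,
# Kubernetes auth, etc., rather than a raw token from the environment.
client = hvac.Client(url="https://vault.example.com:8200")
client.token = os.environ["VAULT_TOKEN"]

# Fetch the database password only at connection time. It exists in memory,
# never in source code or config files.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = secret["data"]["data"]["password"]
```

Notice that nothing sensitive appears in the code itself; rotate the secret in the vault and the app picks it up on its next read.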
When it comes to software, institutional knowledge is horrendously fragile. Processes are lost, configurations change, and Ted, who left the company two years ago, may as well live on Mars because no one understands what he did.
DevOps alleviates this problem by letting you utilize Infrastructure as Code. You can turn all your server, application, deployment, and other configurations into a type of code project. You can then store them in repositories instead of on your laptop, where they are constantly being used and updated.
Not only that, but when it comes time to start a new project, you have a vetted template to start with, one that is secure and already adheres to good practices. When it comes to deployment, it’s a no-brainer. If you’re still stuck with on-prem solutions, there is a plethora of orchestration tools to choose from, like Chef, Ansible, and Puppet. In the cloud, you have Terraform and other cloud provider-specific frameworks.
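To keep this post’s sketches in one language, here’s what a piece of infrastructure declared as code can look like using Pulumi’s Python SDK (Terraform’s HCL would express the same idea); the resource and its names are made up for illustration:

```python
import pulumi
import pulumi_aws as aws

# Declare a private, versioned S3 bucket as code. Running `pulumi up`
# creates or updates the real bucket to match this declaration.
bucket = aws.s3.Bucket(
    "app-artifacts",
    acl="private",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

pulumi.export("bucket_name", bucket.id)
```

Because definitions like this live in version control and get applied automatically, rebuilding the environment is a repeatable operation rather than tribal knowledge. This leads to the next item…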
If you had to redeploy your entire application stack from scratch, how long would it take you? Can it even be done? Part of security is resiliency: can you bounce back after a crisis? Put another way, how long could an attacker knock you off the internet, and how much would that downtime cost the company?
Using automation and Infrastructure as Code is a darn good solution to this. You could deploy a large application stack, with security in place and secrets management wired up, in a matter of minutes, if not seconds. No joke.
Another upside to the configurations as code and automated deployments is that your reaction time can drop drastically. (That’s a good thing!) For example, let’s say there’s a new OpenSSL zero-day vulnerability in the wild. With our DevOps pipeline, we would be able to see all containers using the vulnerable version of OpenSSL. We could then update the affected images and deploy the new containers just in time for lunch.
Obviously, this is a made-up scenario that could take longer in real life based on software changes, but it’s not far from reality. In a best-case scenario, it really is that easy.
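As a hedged illustration of that triage step, here’s how you might sweep your running containers for a bad OpenSSL build with the Docker SDK for Python; the version string is a made-up stand-in for whatever the real advisory names:

```python
import docker  # Docker SDK for Python: pip install docker

VULNERABLE = "OpenSSL 3.0.1"  # hypothetical vulnerable version for illustration

client = docker.from_env()
for container in client.containers.list():
    # Ask each running container which OpenSSL build it ships.
    exit_code, output = container.exec_run("openssl version")
    if exit_code == 0 and VULNERABLE in output.decode():
        print(f"{container.name}: vulnerable OpenSSL; rebuild the image and redeploy")
```

From there, bumping the base image and letting the pipeline rebuild and redeploy is the “done by lunch” part of the scenario.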
Aside from the above benefits of upping your DevOps game, what value does it provide for your customers? Well, you’re now building a more robust and secure product that’s going to keep customer information and credentials private.
The software feature updates and security patches are going to be deployed faster and with less downtime. No more “going down for maintenance” emails. You’ll be getting fewer support calls for technical issues because you’ve caught them in the build and deployment pipeline. And finally, you’ll save some time by keeping devs at their desks instead of doing an awkward deployment dance in a customer’s server room.