I wrote a post on The Scale Factory blog on how to assess EKS security with kube-bench. The blog post is also available on Medium.
Category Archives: Security
Cloud Natives UK Meetup
I recently gave a lightning talk on assessing EKS security with kube-bench at Cloud Native UK, a joint virtual event put together by the Cloud Native groups in Manchester, Wales, Glasgow, and Edinburgh, representing cloud native communities from across Great Britain. This was a shorter but updated version of the talk I gave at the end of last year at two AWS User Group meetups. The slides are available here and a recording of the event is available on YouTube (my presentation starts at 23:44). It was a very slick and well organized event (kudos to the organizers!) with very interesting presentations, so check out the whole event on YouTube if you missed it.
How I passed the AWS Security Specialty Exam
I wrote a post on The Scale Factory blog on how I passed the AWS Security Specialty exam. The blog post is also available on Medium.
How to set up a Site-to-Site VPN connection
I wrote a post on The Scale Factory blog on how to set up an AWS Site-to-Site VPN connection. The blog post is also available on Medium.
AWS User Group Meetups
I recently gave a talk on assessing EKS security with kube-bench at two AWS User Group meetups, namely at AWS User Group Liverpool on 26th October 2020 and at Cambridge AWS User Group on 10th November 2020. These were two online events and the organisers and the audience were very friendly as you would expect from a good meetup! The slides are available here.
How to encrypt and decrypt emails and files
Somewhere I read that sending unencrypted email is like sending postcards: anyone who handles them can potentially read them. This is bad for privacy and becomes downright dangerous when the email or its attachments contain secrets like passwords, access keys, etc. Anyone who gets hold of your email can then potentially access your systems.
For sending encrypted email I generally use Enigmail, which is a data encryption and decryption extension for the Thunderbird email client. I have also used Mailvelope, an add-on for Firefox and Chrome that integrates encryption into webmail providers such as Gmail and Outlook. These tools simplify the encryption/decryption process, especially if you are not familiar with it.
However, I have occasionally had to encrypt large files containing data dumps. The challenge with email extensions is that they don’t let you send email with such huge attachments. Plus, Mailvelope doesn’t allow you to encrypt files larger than 25 MB. This is when knowing how to encrypt and decrypt a file on the command line comes in handy. You can easily upload a large encrypted file to an FTP server or cloud hosting service without worrying that the file will end up in the wrong hands. As a bonus, GPG compresses data before encrypting it, so the encrypted file is generally smaller than the original and the upload is also quicker.
The encryption process requires you to first obtain the GPG public key of the person you want to send the encrypted file or email to. Once you have the recipient’s public key, you encrypt the file with that key, send the email or upload the file, and ask the recipient to decrypt it at their end using their GPG private key. I’m going to cover both processes. Note that this is also useful for encrypting the content of an email that you want to keep secret and sending it as an attachment in a non-encrypted email.
Generate GPG public and private keys
- Install gpg or gpg2 on Linux or macOS. It is generally part of the standard packages; for example, on Ubuntu:
sudo apt install gnupg2
If you are on Windows, you can use Cygwin and install gpg, or use the GnuPG utility, which should work similarly (although I have not tried it).
- Generate a GPG key and follow the instructions. I recommend selecting RSA and RSA (default) as the kind of key and 4096 as the key size:
gpg2 --gen-key
- You should now have two files in .gnupg within your home directory (e.g. /home/sandro/.gnupg):
-- pubring.gpg: this is your public key
-- secring.gpg: this is your private key
Verify your public key with:
gpg2 --list-keys
Verify your private key with:
gpg2 --list-secret-keys
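If you ever need to generate keys unattended (on a server or in a script), GnuPG also supports batch key generation driven by a parameter file. Here is a minimal sketch, assuming GnuPG 2.1 or later; the name, email, and throwaway key directory are made up for the example, and %no-protection (no passphrase) is only acceptable for a demo, not for a real key:

```shell
# use a throwaway keyring so the example doesn't touch ~/.gnupg
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
GPG=$(command -v gpg2 || command -v gpg)   # binary name varies by distro

# parameter file: same choices as the interactive run (RSA, 4096 bits)
cat > keyparams <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Alice Example
Name-Email: alice@example.com
Expire-Date: 0
EOF

"$GPG" --batch --gen-key keyparams
"$GPG" --list-keys alice@example.com
```

The RSA subkey is what actually handles encryption; without it the generated key could sign but not receive encrypted files.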
Encrypt and decrypt files
You have received a public key from someone and you want to encrypt a file with it in order to transmit the file securely. The file containing the public key will typically have a .gpg or .asc extension.
- Import the public key (e.g. someonekey.asc is the filename of the key):
gpg2 --import someonekey.asc
- Trust the public key (user@example.com is the email associated with the key and should be shown in the output of the import command):
gpg2 --edit-key user@example.com
You’ll get a command> prompt: type trust and select 5 = I trust ultimately. Then type quit to exit.
- Encrypt the file with the public key of the user (replace the email address with the one associated with the public key):
gpg2 -e -r user@example.com mysecretdocument.txt
- This will generate an encrypted file mysecretdocument.txt.gpg, typically smaller than the original since GPG compresses data before encrypting it. Transmit the encrypted file and tell the recipient to decrypt it at their end with the following command:
gpg2 -o mysecretdocument.txt -d mysecretdocument.txt.gpg
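As an aside, when the recipient has no key pair you can fall back to symmetric (password-based) encryption with -c, sharing the passphrase over a separate channel. A sketch with made-up file names and passphrase; the --batch/--yes/--pinentry-mode/--passphrase flags are only there to make the example non-interactive and assume GnuPG 2.1+ (interactively you would just run gpg2 -c FILE):

```shell
GPG=$(command -v gpg2 || command -v gpg)   # binary name varies by distro
echo 'top secret data' > mysecretdocument.txt

# encrypt: -c selects symmetric encryption (produces mysecretdocument.txt.gpg)
"$GPG" --batch --yes --pinentry-mode loopback --passphrase 'demo-passphrase' \
       -c mysecretdocument.txt

# the recipient decrypts with the same passphrase
"$GPG" --batch --yes --pinentry-mode loopback --passphrase 'demo-passphrase' \
       -o decrypted.txt -d mysecretdocument.txt.gpg
```

Public-key encryption remains preferable when you can get the recipient’s key, since no shared secret has to travel anywhere.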
Stay safe and encrypt important emails and files!
How to force HTTPS on Apache
I recently added an SSL certificate to this website. I used Let’s Encrypt, which is an awesome initiative to increase the use of HTTPS on websites by making SSL certificates free and easy to install.
My web hosting provider offers Let’s Encrypt certificates via cPanel, so installing one for my website was as easy as clicking a few buttons. If you are not that lucky, Let’s Encrypt provides instructions for installing certificates via the shell as well as a list of hosting providers supporting Let’s Encrypt.
Once you have your SSL certificate installed on your server, you may want to force HTTPS so that any request for HTTP pages will automatically be redirected to HTTPS.
The Apache web server provides the .htaccess file to store Apache configuration on a per-directory basis. For example, if your website is stored under /var/www/html/mysite and you’re using Apache, you can create the following .htaccess file in that directory:
RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://sandrocirulli.net/$1 [R,L]
The third line is the rewrite rule that forces HTTPS for any request made to the web server. Note that you need to have the mod_rewrite module installed on Apache to add rewrite rules for URL redirection.
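If you control the main Apache configuration (rather than just a per-directory .htaccess), an equivalent approach is a Redirect directive in the port-80 virtual host, which doesn’t need mod_rewrite at all. A sketch reusing the domain from the example above; the ServerAlias line is illustrative:

```apache
<VirtualHost *:80>
    ServerName sandrocirulli.net
    ServerAlias www.sandrocirulli.net
    # permanent (301) redirect of every HTTP request to HTTPS
    Redirect permanent / https://sandrocirulli.net/
</VirtualHost>
```

The permanent keyword issues a 301 status code, which tells browsers and search engines to remember the HTTPS location.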
Gotchas
Make sure that the URL in the rewrite rule is the one used in the SSL certificate. I initially put www.sandrocirulli.net in the rewrite rule even though I registered the SSL certificate for sandrocirulli.net and all its sub-domains (including www.sandrocirulli.net) and got nasty security warnings in the browser. You can easily check the SSL certificate with any browser by clicking on the green padlock near the URL and selecting View Certificate or the like:
If the padlock near the URL displays a warning, click on it and see what the problem is. I initially encountered issues with mixed content. This occurred because I had links to images on the website that used HTTP instead of HTTPS. All the major browsers let you see where the error occurs: just click on the warning and then Details or the like. Changing these links to HTTPS solved the mixed content issue.
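Fixing mixed content usually boils down to rewriting hard-coded http:// links in your pages. A sketch of a bulk fix with sed (GNU sed; the file name and link below are made up for the example):

```shell
# a page with a hard-coded HTTP image link (made-up example)
printf '<img src="http://sandrocirulli.net/logo.png">\n' > page.html

# rewrite HTTP links to the site into HTTPS links, in place
sed -i 's|http://sandrocirulli\.net|https://sandrocirulli.net|g' page.html

cat page.html
```

Using | as the sed delimiter avoids having to escape all the slashes in the URLs.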
CD Summit and Jenkins Days 2016
This week I’m giving a talk about Continuous Security with Jenkins, Docker Bench, and Amazon Inspector at CD Summit & Jenkins Days in Amsterdam and in Berlin. CD Summit & Jenkins Days are a series of conferences in the US and in Europe focusing on Continuous Integration (CI) and Continuous Delivery (CD).
This is the abstract of my talk:
Security testing is often left out of CI/CD pipelines and perceived as an ad hoc, one-off audit performed by external security experts. However, integrating security testing into a DevOps workflow (aka DevSecOps) allows you to achieve security by design and to continuously assess software vulnerabilities within a CI/CD pipeline. But how does security fit into the world of cloud and microservices?
In this talk I show how to leverage tools like Jenkins, Docker Bench, and Amazon Inspector to perform security testing at the operating system and container levels in a cloud environment, and how to integrate them into a typical CI/CD workflow. I discuss how these tools can help assess the risk of security vulnerabilities during development, improve security and compliance, and lower support costs in the long term.
I also present two demos showing how to integrate Docker Bench with Jenkins and how to run Amazon Inspector from Jenkins.
The slides of my talk are available here.
Continuous Security with Jenkins and Amazon Inspector
Amazon Inspector is an automated security assessment service on Amazon Web Services (AWS). It identifies security vulnerabilities at the operating system and network levels by scanning the host against a knowledge base of security best practices and rules.
I recently integrated Amazon Inspector into a Jenkins job so that security testing can be automated and performed prior to deployment to production.
AWS Configuration
The first thing to do is to set up an assessment target and an assessment template in Amazon Inspector. An assessment target selects the EC2 instances to include in the security scan via their tags. Here is an example of my assessment target for the EC2 instances tagged as gadictionaries-leap-ogl-stage-v360:
The assessment template specifies the type and duration of the scan and is linked to the assessment target set up above. Here is an example of my assessment template (the ARN is masked for security reasons). I selected the Common Vulnerabilities and Exposures (CVE) rules package with a 15-minute scan (one hour is the recommended duration for reliable results).
Jenkins Configuration
We now move to the Jenkins configuration in order to run the security scan via a Jenkins job instead of using the AWS console.
The first thing to do is to make sure that openssh is installed on the instance where Jenkins is running and on the host you want to check. For example, on Ubuntu you can install openssh with:
sudo apt-get install openssh-server
Then install the SSH Agent plugin in Jenkins. This will provide Jenkins with SSH credentials to automatically log into a machine in the cloud. Add the credentials in Jenkins -> Credentials -> System -> Global credentials (unrestricted) -> Add credentials -> SSH Username with private key. This is an example of my credentials for user jenkins (private key details are obfuscated):
Then create a Jenkins job and select the SSH agent credentials for user jenkins in Build Environment:
This will allow Jenkins to ssh into the machine using the securely stored private key (make sure you grant permission to configure Jenkins only to administrators, otherwise your private keys are not safe).
I like to parameterize my builds so that I can run Amazon Inspector on a specific EC2 instance within a given Elastic Beanstalk stack:
Then we set up build and post-build actions. The build executes a shell script invoke_aws_inspector.sh pulled from the version control system. The post-build action provides the location of the JUnit file.
The shell script invoke_aws_inspector.sh looks like this:
# check parameters expected from Jenkins job
if [[ -n "$HOSTNAME" ]] && [[ -n "$STACK" ]]; then
  # install and start AWS Inspector agent on host
  ssh -T -o StrictHostKeyChecking=no ec2-user@$HOSTNAME << 'EOF'
curl -O https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install
sudo bash install
sudo /etc/init.d/awsagent start
sudo /opt/aws/awsagent/bin/awsagent status
exit
EOF
  # run AWS Inspector from localhost
  export AWS_DEFAULT_REGION=us-east-1
  python execute_aws_inspector.py
  # stop and uninstall AWS Inspector agent on host
  ssh -T -o StrictHostKeyChecking=no ec2-user@$HOSTNAME << 'EOF'
sudo /etc/init.d/awsagent stop && sudo yum remove -y AwsAgent
EOF
else
  echo "ERROR! Parameters HOSTNAME and STACK required from Jenkins job security_checks_aws_inspector"
  exit 1
fi
The shell script works as follows:
- line 4 allows Jenkins to ssh into a host (I’m using AWS EC2, as you can guess from the username ec2-user; replace it with your default username but do not use root). Note that the environment variable $HOSTNAME is passed from the parameter we set up earlier. The heredoc delimited by EOF runs a sequence of commands directly on the host so that you don’t have to open a new connection every time. The single quotes around EOF are important, don’t skip them!
- lines 5-8 install and start the Amazon Inspector agent on the host
- lines 12-13 set the AWS region and invoke a Python script execute_aws_inspector.py that runs Amazon Inspector (we’ll see it in a minute)
- lines 15-17 stop and uninstall the Amazon Inspector agent so that no trace is left on the host
- the final EOF disconnects Jenkins from the host
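The point about quoting the heredoc delimiter is easy to demonstrate locally, no ssh needed. With EOF unquoted, the local shell expands variables before the commands are sent; with 'EOF' quoted, the text goes through verbatim and the receiving shell does the expansion (here a child bash stands in for the remote host):

```shell
NAME=local

# unquoted delimiter: $NAME is expanded by the local shell before the
# commands are passed on, so the inner assignment has no effect
bash <<EOF
NAME=remote; echo "unquoted: $NAME"
EOF

# quoted delimiter: passed verbatim, so the inner shell expands $NAME
bash <<'EOF'
NAME=remote; echo "quoted: $NAME"
EOF
```

The first heredoc prints "unquoted: local", the second "quoted: remote"; that is why the quotes matter when $HOSTNAME must be expanded locally but the remote commands must run untouched.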
The Python script execute_aws_inspector.py uses the Boto3 library for interacting with AWS services. The script looks like this:
import boto3
import os, sys
import datetime, time
import xml.etree.cElementTree as etree

# initialize boto library for AWS Inspector
client = boto3.client('inspector')

# set assessment template for stack
stack = os.environ['STACK']
if stack == 'gadictionaries-leap-ogl-stage-v360':
    assessmentTemplate = 'arn:aws:inspector:us-east-1:XXXXXXXXXXXXX:target/XXXXXXXXXXXXX/template/XXXXXXXXXXXXX'
elif stack == 'gadictionaries-leap-odenoad-stage-v350':
    assessmentTemplate = 'arn:aws:inspector:us-east-1:XXXXXXXXXXXXX:target/XXXXXXXXXXXXX/template/XXXXXXXXXXXXX'
else:
    sys.exit('You must provide a supported stack name (either gadictionaries-leap-ogl-stage-v360 or gadictionaries-leap-odenoad-stage-v350)')

# start assessment run
assessment = client.start_assessment_run(
    assessmentTemplateArn = assessmentTemplate,
    assessmentRunName = datetime.datetime.now().isoformat()
)

# wait for the assessment to finish
time.sleep(1020)

# list findings
findings = client.list_findings(
    assessmentRunArns = [
        assessment['assessmentRunArn'],
    ],
    filter={
        'severities': [
            'High','Medium',
        ],
    },
    maxResults=100
)

# describe findings and output to JUnit
testsuites = etree.Element("testsuites")
testsuite = etree.SubElement(testsuites, "testsuite", name="Common Vulnerabilities and Exposures-1.1")
for item in findings['findingArns']:
    description = client.describe_findings(
        findingArns=[
            item,
        ],
        locale='EN_US'
    )
    for item in description['findings']:
        testcase = etree.SubElement(testsuite, "testcase", name=item['severity'] + ' - ' + item['title'])
        etree.SubElement(testcase, "error", message=item['description']).text = item['recommendation']

tree = etree.ElementTree(testsuites)
tree.write("inspector-junit-report.xml")
The Python script works as follows:
- lines 10-16 read the environment variable set in the parameterized build and select the correct assessment template (I set up two different templates for two different stacks; the ARNs are obfuscated for security reasons)
- lines 19-25 start the assessment run and wait a bit longer than 15 minutes so that the scan can finish
- lines 28-38 list the findings, filtering for severities High and Medium
- lines 41-55 serialize the findings into a JUnit report so that they can be automatically read by Jenkins
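For reference, the generated inspector-junit-report.xml follows the standard JUnit shape, which is why Jenkins can render it as test results. A hypothetical fragment with made-up finding text:

```xml
<testsuites>
  <testsuite name="Common Vulnerabilities and Exposures-1.1">
    <testcase name="High - Instance is missing important security patches">
      <error message="description of the finding">recommendation from Amazon Inspector</error>
    </testcase>
  </testsuite>
</testsuites>
```

Each Inspector finding becomes a failing test case, so unpatched vulnerabilities show up directly in the Jenkins test trend.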
Finally, here is an example of Test Result Trend and JUnit test results showing security vulnerabilities on an EC2 instance running unpatched packages:
Happy security testing with Jenkins and Amazon Inspector!
Continuous Security with Jenkins and Docker Bench
Docker Bench is an open source tool for automatically validating the configuration of a host running Docker containers. The tool was written by, among others, Diogo Mónica, security lead at Docker, and performs security checks at the container level following the CIS Docker Benchmark recommendations.
As you would expect, the easiest way to run Docker Bench is via a Docker container. Just make sure you have Docker 1.10 or later and download the Docker image:
docker pull docker/docker-bench-security
and run the Docker container as follows:
docker run -it --net host --pid host --cap-add audit_control \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /etc:/etc --label docker_bench_security \
    docker/docker-bench-security
This will automatically generate some output as in the animated gif above with an assessment of possible Docker security issues.
I recently combined Docker Bench with Jenkins in order to integrate security testing into a typical DevOps workflow on the cloud – call it DevSecOps if you like buzzwords… This requires a little bit of Jenkins configuration but it’s not too difficult to follow.
The first thing to do is to make sure that openssh is installed on the instance where Jenkins is running and on the host you want to check. For example, on Ubuntu you can install openssh with:
sudo apt-get install openssh-server
Then install the SSH Agent plugin in Jenkins. This will provide Jenkins with SSH credentials to automatically log into a machine in the cloud. Add the credentials in Jenkins -> Credentials -> System -> Global credentials (unrestricted) -> Add credentials -> SSH Username with private key. This is an example of my credentials for user jenkins (private key details are obfuscated):
Then create a Jenkins job and select the SSH agent credentials for user jenkins in Build Environment:
This will allow Jenkins to SSH into the machine using the securely stored private key (make sure you grant permission to configure Jenkins only to administrators, otherwise your private keys are not safe).
I like to parameterize my builds so that I can run Docker Bench on any host reachable with the private key:
Finally, select Execute shell in the build and paste this shell script (you may want to put it under version control and retrieve it from there via Jenkins):
ssh -T -o StrictHostKeyChecking=no ec2-user@$HOSTNAME << 'EOF'
sudo docker pull docker/docker-bench-security && \
sudo docker run --net host --pid host --cap-add audit_control \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /etc:/etc --label docker_bench_security \
    docker/docker-bench-security && \
sudo docker rm $(sudo docker ps -aq -f status=exited) && \
sudo docker rmi docker/docker-bench-security
EOF
It works like this:
- the first command allows Jenkins to ssh into a host (I’m using AWS EC2, as you can guess from the username ec2-user; replace it with your default username but do not use root). Note that the environment variable $HOSTNAME is passed from the parameter we set up earlier. The heredoc delimited by EOF runs a sequence of commands directly on the host so that you don’t have to open a new connection every time. The single quotes around EOF are important, don’t skip them!
- the second command pulls the Docker image for Docker Bench directly on the host
- the third command runs Docker Bench on the host
- the fourth command removes all exited containers from the host, including Docker Bench once it has finished its job
- the fifth command removes the Docker image for Docker Bench so that you don’t leave any trace on the host
- the final EOF disconnects Jenkins from the host
The Jenkins console output shows the result of running Docker Bench on a specific host. Now you have to assess the results: you may see several warnings, and some of them may just be false positives. For example, this warning may be acceptable for you:
[WARN] 1.5 - Keep Docker up to date
[WARN] * Using 1.11.1, when 1.12.0 is current as of 2016-07-28
This means you are not running the latest version of Docker. This may not be an issue (unless Docker shipped a security release), especially if your Linux distribution doesn’t have the latest version of Docker in its repositories.
In my case this warning was a false positive:
[WARN] 2.1 - Restrict network traffic between containers
In fact, I need several containers to communicate with each other, so that restriction does not apply to my use case.
This warning should be taken much more seriously:
[WARN] 4.1 - Create a user for the container
[WARN] * Running as root: container-name-bbf386c0b301
This means you are running a container as root. This is insecure: if an intruder manages to get inside the container, they can run any command in it. Basically, it’s like running a Linux system as root, which is bad security practice.
Once you have assessed your warnings, you may want to filter out the false positives. For example, you can use the Post build task plugin to make the build fail if the build log output contains a warning that you assessed as a security risk. You can use a regular expression to match the pattern identified above.
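To sketch the kind of pattern matching such a post-build step performs, here is a stand-alone example with grep against a made-up fragment of console output (the plugin itself matches a regular expression against the build log):

```shell
# made-up fragment of Docker Bench console output
cat > console.log <<'LOG'
[WARN] 2.1 - Restrict network traffic between containers
[WARN] 4.1 - Create a user for the container
LOG

# flag only warnings assessed as real risks (4.1 here),
# ignoring the false positive 2.1
if grep -q '^\[WARN\] 4\.1' console.log; then
  echo "security risk found"
fi
```

Anchoring the pattern to the check number keeps accepted warnings from failing the build while still catching the ones you care about.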
It would be good to get the Docker Bench output in JUnit format so that Jenkins could parse it natively, but this option is not currently implemented in Docker Bench.
Happy security testing with Jenkins and Docker Bench!