With the changes in EU regulation that GDPR introduces, specifically relating to the processing of EU citizens’ personal data, organisations are facing fresh challenges in how they prove compliance. GDPR brings particular burdens with the ‘Privacy by Design’ mandate, which requires that data privacy be part of the system design process from day one.
Failing to comply with GDPR could result in fines of up to 4% of global annual turnover or €20m, whichever is greater.
GDPR mandates that every system we design be inherently private – that we design privacy and security into every system. The same is true for our application delivery processes.
Many organisations are already operating mature application delivery pipelines – processes for testing, building and deploying applications into target systems. As we rethink our compliance scanning and shift it left, closer to the developer/operator, we also need to think about how we verify our applications.
With Chef Automate and InSpec we are able to redefine our compliance standards, taking the GDPR baseline of controls and expressing them in a simple, executable code format.
A code-driven approach to GDPR compliance builds on existing methods for collaboration already used by DevOps teams. Chef has taken the control-based approach to defining security regulations and turned these into small, modular, executable blocks that can be put together to define any security policy.
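To illustrate the idea, a requirement along the lines of "personal data at rest must not be readable by arbitrary users" might be expressed as an InSpec control like the sketch below. The control ID, title and file path are hypothetical placeholders, not part of an official GDPR baseline – adjust them to your own profile.

```ruby
# Hypothetical control for a GDPR-style requirement: files holding personal
# data must not be world-readable. The ID, title and path are illustrative
# only; substitute the paths your systems actually use.
control 'gdpr-data-at-rest-1' do
  impact 1.0
  title 'Personal data files must not be world-readable'
  desc  'Privacy by Design: restrict access to stored personal data.'

  describe file('/var/data/customer_records') do
    it { should exist }
    it { should_not be_readable.by('others') }
  end
end
```

A control like this is declarative policy-as-code: it is executed by `inspec exec` rather than run as a standalone Ruby script, and its pass/fail results feed directly into the audit scan described below.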
In the above example, you can see how to translate a security control into an InSpec code block. This InSpec code can then be executed in an audit scan either as a triggered task or as part of a continuous scanning policy.
Let’s take a look at how we can make the execution of an audit scan part of an application delivery process.
The following example uses the hosted version of Visual Studio Team Services to build a Java application.
Using VSTS I can easily set up the build phase using the built-in Maven definition.
In my example, I’m just deploying into QA and Staging at the moment.
As is common in a range of CI/CD tools, I can define the tasks to be completed at each pipeline stage. Let’s take a look at the QA stage.
Here you can see that I’m specifying a ‘Deployment group phase’. This means that I wish to trigger a deployment on a target environment. I have a RHEL 7 server running with the VSTS agent installed and listening, so a job will be despatched to deploy the latest artifact build onto this machine.
You may also notice that I have an InSpec task in my QA phase. This task installs the InSpec package from the Omnibus installer on the target machines in the deployment group, then executes InSpec against a profile path. In the following example, you can see that I’m cloning my working GDPR profile from my GitHub account, but I can also retrieve any profile I wish from the Chef Automate platform, which we highly recommend for production pipelines.
Once the profile is present on the QA environment nodes, InSpec executes it locally, creating a JSON file from the test results. In the above example, I’m using VSTS variables to represent the artifact directory in order to keep things simple between tasks.
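To give a concrete picture of what that JSON file contains, here is a short Ruby sketch that parses an InSpec-style report and counts failing controls. The report structure below is a trimmed illustration of the real reporter output, and the profile and control names are made up; a pipeline task could use this kind of check to decide whether a release should proceed.

```ruby
require 'json'

# Trimmed, illustrative shape of an InSpec JSON report (the real file,
# e.g. inspec.out, is much richer). Profile and control names are made up.
report = <<~JSON
  {
    "profiles": [
      {
        "name": "gdpr-baseline",
        "controls": [
          { "id": "gdpr-1.1", "results": [ { "status": "passed" } ] },
          { "id": "gdpr-1.2", "results": [ { "status": "failed" } ] }
        ]
      }
    ]
  }
JSON

data = JSON.parse(report)

# Count controls with at least one failed result, so the pipeline stage
# can surface (or halt on) GDPR compliance failures.
failed = data['profiles']
           .flat_map { |p| p['controls'] }
           .count { |c| c['results'].any? { |r| r['status'] == 'failed' } }

puts "#{failed} control(s) failed"
```

In a real pipeline you would read the file InSpec wrote rather than an inline string, and likely fail the stage when the count is non-zero.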
If you’re using an alternative pipeline tool, you can build a similar list of tasks to execute. This blog is just one example; if you’re already using Chef to configure the deployment environment, you could instead trigger an audit scan using the chef-client and Chef Automate method, documented here: https://docs.chef.io/perform_compliance_scan.html
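For that chef-client route, the scan is typically driven by Chef’s audit cookbook, configured through node attributes. The fragment below is an illustrative attributes-file sketch: the profile name and owner are placeholders, and the profile would need to exist on your Chef Automate server.

```ruby
# Illustrative audit cookbook attributes (placeholders throughout).
# Send scan results to Chef Automate via the Chef server:
default['audit']['reporter'] = 'chef-server-automate'

# Fetch and execute the GDPR profile on each chef-client run:
default['audit']['profiles'] = [
  { 'name' => 'gdpr-baseline', 'compliance' => 'admin/gdpr-baseline' }
]
```

With this in place, every chef-client run performs the compliance scan and aggregates the results in Chef Automate, rather than the pipeline invoking InSpec directly.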
In the case of VSTS, there is a native Test Results task that can retrieve data in a range of formats, including JSON.
You can see that we’re looking in the same working directory on the profile path for the InSpec test results; in my case they’re stored in a file called inspec.out. Again, if you’re building this into your own pipeline, you can specify the test results output when you call the InSpec CLI (for example, with the --reporter option), documented here: https://docs.chef.io/inspec/cli/
The VSTS Tests view displays these results during a release. As I mentioned previously, if you use the chef-client method we recommend, your results will also be aggregated in Chef Automate. You can learn more about that here: https://www.chef.io/solutions/compliance/
Now that the pipeline is defined, let’s trigger a release. In VSTS I can do this through the dashboard.
The above example shows the output from the tasks that I created previously. In my QA environment, I’m automating the application deployment, but also the execution of a GDPR compliance scan against the target environment.
Let’s take a look at the results.
In this example I’ve only scanned my QA environment, but using the same method I can set up scanning for any environment in my application release cycle.
By applying the method I’ve demonstrated above, whether in VSTS or not (the principle is the same), you can make compliance scanning of target systems a natural, tightly integrated part of your application development and release process.
In my example, you can see that test results are collected in VSTS, but you can easily pipe these into Chef Automate, or another data aggregation tool already in use in your business.
The key point is that we’re capturing potential GDPR failures incredibly early, preventing them from reaching any production-like systems and therefore any systems that process and store EU citizens’ data.
By adopting a similar process, you can very easily demonstrate to an auditor how you meet the ‘Privacy by Design’ mandate within your application development practices.
Taking this forward, we need to consider a holistic approach to GDPR for all of our IT operations, such as infrastructure orchestration, long-lived production systems, and developer workstations. I’ll be following up on these in the next few weeks.