Curriculum Vitae
- Age: 41
- Location: Arnhem
- Email: rick@vandenhof.eu
Work Experience
Company | BetterBe B.V. |
---|---|
Job Title | Platform Engineer |
Period | May 2024 - December 2024 |
In the LeaseServices Platform team I was responsible for creating a hybrid, automated Kubernetes platform. Using Terraform, we deployed Kubernetes clusters on both on-premises hardware and cloud environments. We used Harvester to provision virtual machines for testing and lab environments, while bare-metal servers were deployed using Foreman. Rancher served as the Kubernetes management system.
Key aspects of my work included:
- Setting up the Terraform framework used to manage and provision all new clusters
- Restructuring the ArgoCD deployments to support DTAP changes
- Implementing a Rancher node driver to provision bare-metal machines through Rancher node pools
- Provisioning MariaDB database instances using the MariaDB and Vault Secrets Operators
- Setting up MicroOS deployment configurations for use with Harvester and Foreman
- Implementing Rook-Ceph for distributed storage (block and S3 object storage), enabling a move away from ZFS and NFS
Company | TenneT TSO |
---|---|
Job Title | Kubernetes Engineer |
Period | May 2022 - May 2024 |
As part of the Platform-as-a-Service team, I was responsible for developing and maintaining the Kubernetes platform TenneT deploys for mission-critical workloads. One of the biggest milestones was onboarding the PMS application, described by some as the biggest IT change TenneT had seen in twenty years. This mission-critical application is used in the control room to monitor power flow and distribution in the grid.
Some of the major changes and improvements I was part of:
- Removing redundant or unnecessary cluster settings that could be derived logically
- Removing Helm application deployments from Terraform and deploying them with ArgoCD instead
- Adopting DHCP for cluster deployments
- Using Rancher node pool functionality instead of creating and maintaining virtual machines with Terraform
- Using Portworx cloud drives instead of Terraform-managed disks for virtual machines
- Implementing mature version update promotions within clusters using ArgoCD (following GitOps-at-scale guidelines)
Company | Ultimaker |
---|---|
Job Title | DevOps/Cloud Engineer |
Period | July 2021 - May 2022 |
I continued my cloud work for Ultimaker. Development moved fast, the user base grew quickly, and requirements kept changing, so we made it a point to simplify the architecture. This culminated in a project to deploy all applications using Google Cloud Run instead of Kubernetes clusters, which not only simplified the architecture but also reduced costs and eased deployments and rollbacks.
- Enabled better concurrent connection handling in applications
- Migrated Kubernetes deployments to Cloud Run
- Moved secrets from HashiCorp Vault to Google Secret Manager
- Eliminated the use of Google service account credential keys in favour of Workload Identity
- Implemented ArgoCD deployments on Kubernetes
- Created a virtual printer to enable easy QA/testing
- Created a GitHub integration for more user-friendly lab environments
Company | RAMLAB |
---|---|
Job Title | DevOps/Linux Engineer |
Period | February 2021 - July 2021 |
RAMLAB created a system that uses a welding robot to 3D print metal parts on demand. Cool stuff. The hardware needed to support this is powered by in-house developed software.
My job as a Linux Engineer was to enable RAMLAB to roll out these new hardware systems automatically, with as little human interaction as possible. We achieved this by combining MAAS (Ubuntu server deployments), Ansible (AWX), SSH authentication, JIRA Asset Management and some Google Cloud resources. By connecting all the APIs we were able to automate installations from beginning to end.
- Configured automatic Ubuntu server deployments with encrypted disks, RAID configuration and an SSH server available during boot for troubleshooting
- Used HashiCorp Vault for centralized SSH key management and certificate signing
- Used JIRA Asset Management to centralize customer configuration, syncing data to MAAS and Vault, among others
- Used AWX to roll out deployments and updates for live systems
- Secured connectivity with WireGuard VPN
- Dockerized a Windows client tool, removing the need for a 24/7 Windows Server
- Enabled Secure Boot on servers running custom Linux kernels
- Used Terraform to manage all the infrastructure components and to set up AWX and Vault
- Created a custom `ssh` tool that looks up customer information, generates a temporary SSH keypair, has it signed by Vault and connects over the VPN to the right machine (a sketch follows below)
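To give an impression of how these pieces fit together, here is a minimal sketch of what such an `ssh` wrapper could look like. It is not the original tool: the inventory file, Vault mount path, role name and principal are assumptions (the real tool looked up customers in JIRA Asset Management), though the Vault SSH-CA signing endpoint itself is the standard one.

```python
#!/usr/bin/env python3
"""Minimal sketch of a customer-ssh helper (illustrative, not the original tool).

Assumes: a local JSON inventory mapping customers to VPN addresses, a Vault
SSH secrets engine mounted at "ssh-client-signer" with a "support" role, and
VAULT_ADDR / VAULT_TOKEN set in the environment.
"""
import json
import os
import subprocess
import sys
import tempfile

import requests


def lookup_customer(name: str) -> dict:
    # Hypothetical inventory file; the real tool queried JIRA Asset Management.
    with open(os.path.expanduser("~/.customer-inventory.json")) as fh:
        return json.load(fh)[name]


def main() -> None:
    customer = lookup_customer(sys.argv[1])

    # Generate a throwaway keypair for this session only.
    key = os.path.join(tempfile.mkdtemp(), "id_ed25519")
    subprocess.run(["ssh-keygen", "-t", "ed25519", "-N", "", "-f", key], check=True)

    # Have Vault's SSH CA sign the public key (standard Vault API; mount and role assumed).
    resp = requests.post(
        f"{os.environ['VAULT_ADDR']}/v1/ssh-client-signer/sign/support",
        headers={"X-Vault-Token": os.environ["VAULT_TOKEN"]},
        json={"public_key": open(key + ".pub").read(),
              "valid_principals": "support"},
    )
    resp.raise_for_status()
    with open(key + "-cert.pub", "w") as fh:
        fh.write(resp.json()["data"]["signed_key"])

    # Connect over the (already established) VPN to the right machine.
    os.execvp("ssh", ["ssh", "-i", key, f"support@{customer['vpn_address']}"])


if __name__ == "__main__":
    main()
```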
Company | Ultimaker |
---|---|
Job Title | DevOps/Cloud Engineer |
Period | February 2019 - January 2021 |
Ultimaker creates the best 3D printers in the world. These printers are manageable through the cloud, letting you monitor, start, stop and duplicate print jobs, and much more, from anywhere. As a Cloud Engineer my main job was to help Ultimaker bring its cloud infrastructure to a higher standard.
- Described the entire Google Cloud Platform infrastructure in Terraform code
- Separated Kubernetes configuration from application code
- Created a uniform building / testing / deploying workflow for all applications
- Created and implemented a custom Helm chart for deploying applications on Kubernetes
- Migrated local MongoDB databases to MongoDB Atlas
- Applied several security / availability improvements (like spanning GKE clusters over several zones)
- Implemented the Istio Service Mesh
- Deployed applications using GitOps (Weaveworks)
- Created testing/lab environments for applications using Google Cloud Run
- Improved Docker builds with caching / stages and BuildKit
Company | Avisi |
---|---|
Job Title | Linux DevOps Engineer |
Period | February 2018 - February 2019 |
In February 2018 I returned to Avisi, in their beautiful new office building, to continue my work for the Atlassian Products & Services team as a Linux DevOps Engineer.
- Updated the AWS infrastructure:
  - Moved to Auto Scaling Groups
  - Modified the Ansible playbooks to automatically create new AMIs
  - Adopted EFS for shared data
In October 2018 I joined the platform team and helped develop their new infrastructure on the AWS platform:
- Used Terraform, Consul and Nomad to deploy Docker containers
- Created Docker images for most of the Atlassian applications: Jira, Confluence, Bitbucket and Bamboo.
- Also created Docker images for SonarQube and Artifactory
- Updated the infrastructure code to make use of Application and Network Load Balancers instead of the Classic Load Balancers.
Company | Avisi |
---|---|
Job Title | Linux DevOps Engineer |
Period | April 2017 - December 2017 |
I returned to Avisi to further support the Hosted Insight platform which was now beginning to gain quite some traction. With the insights gained from the experience of adding new customers to the platform, a number of features were added to the wishlist and it was my responsibility to implement them.
- Hosted Insight platform support/development:
  - Added host-based/path-based routing for customers
  - Switched the backup solution to Restic
  - Migrated various customers to the platform
  - Used PostgreSQL WAL archiving instead of SQL dump files for backups
  - Upgraded all customers’ Atlassian applications
In October 2017 I switched teams to APS (Atlassian Products & Services), where I supported the Atlassian Experts:
- Migrated data from a large MediaWiki instance (2000+ users) to Confluence
- Imported existing AWS hosting environments into Terraform:
  - Modularized for different (production, testing) environments
  - Configuration of Application Load Balancers
  - Various EC2 instance types
  - Full networking configuration
  - Bastion host
  - Security groups
- I wrote various Python tools (a sketch of one follows this list):
  - Convert Atlassian Cloud Tempo worklogs to JIRA Server worklogs
  - Migrate Atlassian Cloud usernames to local JIRA Server accounts
  - Fix timezone differences in PostgreSQL databases
  - Fix hyperlinks from one Confluence base URL to the other
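As an example of the glue code involved, below is a hedged sketch of the Tempo-to-JIRA-Server worklog conversion. The JIRA Server worklog endpoint is the standard REST API; the export file, its field names, the server URL and the credentials are all illustrative assumptions.

```python
"""Illustrative sketch of a Tempo-to-JIRA-Server worklog converter.

Assumes worklogs were first exported from Atlassian Cloud Tempo to a JSON
file; field names in that export are hypothetical.
"""
import json

import requests

JIRA = "https://jira.example.com"   # hypothetical JIRA Server URL
AUTH = ("migration-bot", "secret")  # hypothetical credentials


def push_worklog(issue_key: str, wl: dict) -> None:
    # Standard JIRA Server REST endpoint for adding a worklog to an issue.
    resp = requests.post(
        f"{JIRA}/rest/api/2/issue/{issue_key}/worklog",
        auth=AUTH,
        json={
            "timeSpentSeconds": wl["timeSpentSeconds"],
            "started": wl["started"],  # e.g. "2017-11-03T09:00:00.000+0100"
            "comment": wl.get("comment", ""),
        },
    )
    resp.raise_for_status()


if __name__ == "__main__":
    with open("tempo-export.json") as fh:
        for wl in json.load(fh):
            push_worklog(wl["issueKey"], wl)
```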
Company | IJsvogel Retail |
---|---|
Job Title | DevOps Architect Online |
Period | January 2017 - May 2017 |
As DevOps Architect Online, I supported a number of teams in the IJsvogel Retail organisation, the corporation behind Pets Place and Boerenbond.
- Responsible for day-to-day operations revolving around the e-commerce platform, built on Magento 2
- Troubleshooting issues and reporting them to the appropriate suppliers (payment providers, Magento 2 developers, iOS/Android developers)
- Helped launch the Customer Loyalty program, consisting of iOS and Android mobile apps
- Together with the Online Business team I was responsible for Scrum sprint planning and project prioritization.
Company | Avisi |
---|---|
Job Title | Linux DevOps Engineer |
Period | June 2016 - December 2016 |
As a DevOps engineer, I was tasked with setting up a new, fully automated platform. Customers would be able to request a dedicated, Atlassian-stack-based DTAP environment in a private cloud. Other tools, like Jenkins and Rundeck, could be added to the stack.
- Set up the Hosted Insight platform
- Created and maintained Puppet code for shared services: Puppet masters, SMTP services, DNS, nginx proxies, backup services, etc.
- Quickly and automatically deployed virtual machines for running Atlassian applications using a self-written Python tool (a sketch follows below)
- Made the backup and restore process completely automatic
- Created and maintained documentation for the platform
Other software used:
- Vagrant
- PostgreSQL
- OpenDJ (LDAP server)
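The CV does not record which virtualisation layer backed this platform, so the sketch below assumes libvirt/KVM purely to illustrate how a self-written Python tool can define and boot a virtual machine; the XML template and all names are illustrative.

```python
"""Illustrative sketch of automated VM provisioning from Python (not the
original tool). Assumes libvirt/KVM and a pre-built qcow2 base image."""
import libvirt

# Hypothetical, minimal domain template; real deployments need more devices.
DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>{memory}</memory>
  <vcpu>{vcpus}</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/{name}.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='default'/></interface>
  </devices>
</domain>
"""


def deploy_vm(name: str, memory: int = 4096, vcpus: int = 2) -> None:
    conn = libvirt.open("qemu:///system")
    try:
        # Define the domain from the templated XML, then boot it.
        dom = conn.defineXML(DOMAIN_XML.format(name=name, memory=memory, vcpus=vcpus))
        dom.create()
    finally:
        conn.close()


if __name__ == "__main__":
    deploy_vm("atlassian-jira-t01")  # hypothetical DTAP test node
```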
Company | Exonet bv |
---|---|
Job Title | System Engineer |
Period | May 2015 - June 2016 |
There was already a lot of Ansible knowledge at Exonet when I started working there; however, it was only being used to install specific software on servers once. I helped deploy Ansible Tower. At the time of writing, Tower was being used to apply configurations to more than 150 servers every day. These configurations consist of self-written ‘roles’ as well as the playbooks themselves. I deployed many different server setups, all using Ansible playbooks, such as:
- Magento setup:
  - This setup hosts Magento-based CMS sites.
  - Nginx is optimized with Magento-specific settings.
  - It uses NFS shared storage on a NetApp cluster.
  - Backups are made with Bacula, among others.
  - PHP runs in php-fpm mode.
  - There is a Redis instance per website for caching.
- Trytond setup:
  - This setup uses nginx with gunicorn as backends.
  - PostgreSQL is used as the database backend, and Sphinx / searchd provides the search functionality.
  - Python Trytond is installed into a virtualenv.
  - Redis is used as a caching backend.
  - Every service is controlled with systemd templates.
- Plone CMS setup:
  - This setup is load-balanced using HAProxy over a number of Zope workers and Zope database hosts.
  - Each worker runs Varnish with a number of backends for each site. These are all periodically probed and removed from the pool if they stop responding.
  - The customer requested Ansible playbooks on one of the workers to easily deploy and update sites.
- Docker setup:
  - This customer uses Docker extensively for deploying and automatically scaling websites.
  - Containers are limited to customer-specific networks.
  - Docker networks and nginx are managed with Ansible.
- Elasticsearch / MongoDB / PostgreSQL / RabbitMQ / Redis cluster:
  - This setup contains a number of database servers and worker servers.
  - All services run in either cluster mode or master/slave mode.
  - Workers run apps, written in Go, that are exposed to the internet via nginx.
Besides engineering new setups and clusters, customers often requested a way of testing their software without touching the “live” servers. I used Packer to create Vagrant boxes identical to their production servers.
I also wrote a number of tools in Python:
- server_check: an open source tool that checks whether a DirectAdmin server is still functioning correctly.
- GitHub webhook: receives GitHub payloads and, if certain criteria are met, instructs Ansible Tower to start a new job (a sketch follows this list).
- Migration scripts: collect data, write it to JSON, then read it back and call the DirectAdmin API to create new accounts and email addresses, transfer data, etc.
- Ansible notification callback plugin (email).
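A minimal sketch of such a webhook is shown below, assuming Flask and Ansible Tower's `/api/v2` job-template launch endpoint. The signature header, template ID and branch criterion are illustrative; the original criteria are not recorded here.

```python
"""Illustrative sketch of a GitHub-to-Ansible-Tower webhook (not the original)."""
import hashlib
import hmac
import os

import requests
from flask import Flask, abort, request

app = Flask(__name__)
TOWER = "https://tower.example.com"  # hypothetical Tower URL
TOWER_TOKEN = os.environ["TOWER_TOKEN"]
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()


@app.route("/webhook", methods=["POST"])
def webhook():
    # Verify GitHub's HMAC signature before trusting the payload.
    digest = hmac.new(WEBHOOK_SECRET, request.data, hashlib.sha256).hexdigest()
    sent = request.headers.get("X-Hub-Signature-256", "")
    if not hmac.compare_digest(f"sha256={digest}", sent):
        abort(403)

    payload = request.get_json()
    # Example criterion: only act on pushes to the main branch.
    if payload.get("ref") == "refs/heads/main":
        resp = requests.post(
            f"{TOWER}/api/v2/job_templates/42/launch/",  # hypothetical template ID
            headers={"Authorization": f"Bearer {TOWER_TOKEN}"},
            json={"extra_vars": {"repo": payload["repository"]["full_name"]}},
        )
        resp.raise_for_status()
    return "", 204
```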
Other duties included customer contact via phone and e-mail (third-line support), configuring Cisco / NetApp infrastructure, implementing firewalling and VPN using Juniper and pfSense appliances, and implementing two-factor authentication for SSH.
Company | Totaalnet Internet Works |
---|---|
Job Title | Manager Engineering |
Period | November 2005 - May 2015 |
My job at TIW was twofold: I was a Linux Engineer but also manager of the department. As a Linux engineer, my job mainly consisted of the following:
- Managing 200+ web, mail, dns and database servers for the Shared Hosting platforms.
- Implementing Puppet, DNSSEC, IPv6.
- I created an Asterisk VoIP telephony system.
- Migrated many servers from DirectAdmin, Ensim, cPanel etc to our own Control Panel.
- Managing the network (BGP peerings and transits, IPv6 and uplinks).
- Connecting our network to the AMS-IX and NL-IX.
- Automating the creation of VMware virtual machines using the C# API.
- Engineering and developing the new Shared Hosting platform.
- Developing systems administration scripts and services in Python, C# and Perl.
- Developing and maintaining the Hosting and Domain names Control Panel for customers.
- Third line support.
As Manager Engineering, I was responsible for the following:
- Implement project management using Kanban/Scrum.
- Lead the Engineering team (“Scrum” master).
- Have periodic performance meetings with team members.
- Ensure the company policies were applied and kept to.
- Ensure departmental planning was in line with corporate strategies.
- Create and monitor budgets.
- Measure and report on results of the department.
- Describe, implement and ensure departmental processes.
Company | Rockingstone IT |
---|---|
Job Title | System Engineer |
Period | March 2002 - October 2005 |
At Rockingstone IT I was responsible for:
- Installing and maintaining the company servers
- Registering domain names for customers
- Administrating user, e-mail and FTP accounts
- Managing the company network
- Developing various websites in Perl and PHP.
Company | Radix ICT |
---|---|
Job Title | System Engineer |
Period | December 2001 - March 2002 |
My job at Radix ICT was maintaining and managing SCO UnixWare 7 servers on site at customer locations.
Company | Landis ICT Group |
---|---|
Job Title | Junior Support Engineer |
Period | June 2001 - December 2001 |
My job at Landis ICT Group was certifying for several SCO Unix courses with the intent of becoming a consultant.
Company | Tweakers.net BV |
---|---|
Job Title | Server Administrator |
Period | January 1999 - June 2001 |
At Tweakers.net I installed and was responsible for the servers that kept the site running.
Certifications and courses
- 2023: Codefresh: GitOps at Scale
- 2023: Codefresh: GitOps Fundamentals
- 2022: CKAD: Certified Kubernetes Application Developer
- 2022: HashiCorp Certified: Terraform Associate (002)
- 2019: Certified Kubernetes Administrator (CKA)
- 2016: Amazon Web Services - Certified Solutions Architect - Associate
- 2014: VMware VCP 550
- 2007: RIPE LIR Training Course
- 2006: NL-ix BGP4 Course
- 2001: SCO UnixWare 7 CUSA
- 2001: SCO UnixWare 7 ACE (Shell Programming)
- 2001: SCO UnixWare 7 Master ACE
Expertise
Amazon Web Services
- EC2
- Application / Network Load Balancers
- Lambda
- Security Groups
- Auto Scaling Groups
- Creating custom AMIs
- EFS
- EBS
- S3
- RDS
Linux / Unix
- Ubuntu
- Debian
- Slackware
- RedHat / Fedora Core
- Gentoo
- CentOS
- FreeBSD, OpenBSD, NetBSD
Virtual Infrastructure
- VMware
- Xen / XenServer
- KVM
Configuration Management
- Ansible
- Puppet
- Terraform
Continuous Integration & Deployment
- Jenkins
- Atlassian Bitbucket / Bamboo / Confluence / Jira
- Docker
- Vagrant / Packer
Databases
- MySQL
- PostgreSQL
- MSSQL
- Elasticsearch
- MongoDB
- Redis
- Memcache
- SOLR
Programming
- Python
- Perl
- PHP
- C#
- Java
- Objective-C
- Bash
- Ruby
Server Technologies
- Apache
- nginx
- PHP-FPM
- Varnish
- HAProxy
- Supervisor
- LogStash
- Kibana
- NodeJS / PM2
- RabbitMQ
Backup Technologies
- Veeam
- R1Soft / CDP
- Bacula
- Rsync / rdiff
- Restic