RAMLAB
RAMLAB enables additive manufacturing (3D Printing) using welding robots.
RAMLAB created a system that uses a welding robot to 3D print metal parts on demand. Cool stuff. The hardware needed to support this is powered by in-house developed software.
My job as a Linux Engineer was to enable RAMLAB to roll out these new hardware systems automatically, with as little human interaction as possible. We achieved this by combining MAAS (for Ubuntu server deployments), Ansible (AWX), SSH authentication, JIRA Asset Management and some Google Cloud resources. By connecting all the APIs, we were able to automate installations from beginning to end.
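To sketch how the pieces connect (the actual playbooks aren't shown here, so the AWX URL, the job template ID and the variables below are placeholders), a machine deployed by MAAS can be handed over to AWX by launching a job template through its REST API:

```yaml
# Hedged sketch: hand a freshly MAAS-deployed machine over to AWX for
# configuration by launching a job template via the AWX REST API.
# The URL, the template ID 42 and the variables are placeholders.
- name: Trigger post-deployment configuration in AWX
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch the AWX job template for the new system
      ansible.builtin.uri:
        url: "https://awx.example.com/api/v2/job_templates/42/launch/"
        method: POST
        headers:
          Authorization: "Bearer {{ awx_api_token }}"
        body_format: json
        body:
          extra_vars:
            target_host: "{{ new_machine_fqdn }}"
        status_code: 201
```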
I4Networks is constantly improving their customer services by automating as much as possible (thereby eliminating human error) and by making use of cloud resources wherever possible. By deploying apps serverless (using Google Cloud Run), they can focus on developing applications instead of hosting and maintaining secure environments.
With automation, configuration changes are as easy as creating a Pull Request and merging it.
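The exact tooling isn't spelled out here, so as a minimal sketch, assuming a GitHub Actions pipeline and placeholder project, service and region names, a merge-triggered Cloud Run deployment could look like this:

```yaml
# Hedged example: deploy a container image to Cloud Run whenever a
# change is merged into main. Names and region are placeholders.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - uses: google-github-actions/setup-gcloud@v2
      - name: Deploy to Cloud Run
        run: |
          gcloud run deploy my-app \
            --image gcr.io/my-project/my-app:${{ github.sha }} \
            --region europe-west4 \
            --platform managed
```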
Ultimaker creates the best 3D printers in the world. These printers can be managed through the cloud, letting you monitor, start, stop and duplicate print jobs, and much more, from anywhere. As a Cloud Operations Engineer, my main job was to help Ultimaker bring their cloud infrastructure to a higher standard.
I achieved this by creating an infrastructure-as-code setup and ensuring all Google Cloud Platform resources were described in Terraform code. Next, building applications was streamlined using Docker, docker-compose and GitHub Actions. The resulting Docker images were pushed to the Google Container Registry, after which our GitOps implementation would detect the new images and deploy them to the relevant clusters automatically, using custom Helm charts.
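As a simplified sketch of the build stage (project and image names are placeholders, and the real workflows and Helm charts are Ultimaker-specific), a GitHub Actions job builds the image and pushes it to the Google Container Registry, from where the GitOps tooling picks it up:

```yaml
# Hedged sketch: build a Docker image and push it to GCR so the GitOps
# pipeline can detect and deploy it. Project and image names are placeholders.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - uses: google-github-actions/setup-gcloud@v2
      - name: Configure Docker for GCR
        run: gcloud auth configure-docker --quiet
      - name: Build and push the image
        run: |
          docker build -t gcr.io/my-project/my-service:${{ github.sha }} .
          docker push gcr.io/my-project/my-service:${{ github.sha }}
```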
To be able to replace the entire infrastructure with zero downtime, we migrated local MongoDB databases to MongoDB Atlas to get rid of the last bit of stateful data in the clusters.
I implemented the Istio service mesh to gain detailed insights into the performance of our clusters, to apply rate limiting, and to be able to do canary, A/B and blue-green deployments.
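For example, a canary rollout with Istio comes down to a DestinationRule defining the stable and canary subsets and a VirtualService splitting traffic between them; the service name and the 90/10 split below are purely illustrative:

```yaml
# Illustrative Istio manifests for a 90/10 canary split; the service
# name "orders" and the version labels are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  subsets:
    - name: stable
      labels:
        version: v1
    - name: canary
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: stable
          weight: 90
        - destination:
            host: orders
            subset: canary
          weight: 10
```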
I made it possible for developers to show their new features to stakeholders by launching ‘lab environments’ on Google Cloud Run, using a custom-made GitHub Action.
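The custom Action itself isn't public; conceptually, a workflow along these lines spins up a per-pull-request lab environment (the action path and its inputs are hypothetical):

```yaml
# Hypothetical workflow: deploy a throwaway lab environment to Cloud Run
# for every pull request, using an in-repo composite action as a stand-in
# for the custom GitHub Action.
name: lab-environment
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  lab:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/deploy-lab-environment
        with:
          service_name: lab-pr-${{ github.event.number }}
          image: gcr.io/my-project/my-service:${{ github.sha }}
          region: europe-west4
```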
Refactory is an early adopter of the latest and greatest DevOps technologies. Because of this, they are able to quickly and efficiently help many customers overcome seemingly difficult and complex problems.
I helped Refactory update their Ansible playbooks and roles to new and higher standards, thereby increasing scalability and maintainability. The playbooks are now automatically checked for conformance to the coding style guidelines as well as for valid syntax. A number of custom rules were developed in Python.
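The CI system isn't named here, but the check itself boils down to an ansible-lint run (extended with the custom Python rules) plus a syntax check; a GitHub Actions-style job could look like this:

```yaml
# Hedged sketch: lint Ansible playbooks on every change. The playbook name
# and the rules/ directory for the custom Python rules are placeholders.
name: ansible-lint
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible and ansible-lint
        run: pip install ansible ansible-lint
      - name: Syntax check
        run: ansible-playbook --syntax-check site.yml
      - name: Coding style check with the custom rules enabled
        run: ansible-lint -r rules/ site.yml
```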
I also extended the playbooks to be able to run PHP applications with dedicated user accounts and roll out new websites automatically. Additionally, servers can now be updated automatically and will export statistics to Prometheus.
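As a rough sketch of the pattern (user names, paths and the PHP version below are placeholders rather than the actual Refactory roles), each application gets its own system user and a PHP-FPM pool that runs under that account:

```yaml
# Hedged sketch of two tasks from such a role; variable names, the PHP
# version and the handler are placeholders, not the actual Refactory code.
- name: Create a dedicated system user for the PHP application
  ansible.builtin.user:
    name: "{{ app_user }}"
    home: "/var/www/{{ app_user }}"
    shell: /bin/bash
    state: present

- name: Render a PHP-FPM pool that runs as the dedicated user
  ansible.builtin.template:
    src: php-fpm-pool.conf.j2
    dest: "/etc/php/8.2/fpm/pool.d/{{ app_user }}.conf"
  notify: restart php-fpm   # handler assumed to be defined elsewhere in the role
```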
Xolphin uses a number of different websites to sell their SSL products. They wanted to switch from Apache to Nginx and, in the process, implement a method to maintain and roll out additional websites more easily. I converted their existing Apache configuration to Nginx virtual hosts, which are deployed using Ansible. A simple YAML configuration file now describes each virtual host and, where applicable, any settings that deviate from the defaults.
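The actual schema isn't shown here, so the following is purely illustrative: each virtual host becomes a small YAML entry, and only values that differ from the defaults need to be set:

```yaml
# Illustrative virtual host definition; keys and defaults are assumptions,
# not Xolphin's actual configuration format.
vhosts:
  - name: www.example.com
    aliases:
      - example.com
    # Only settings that deviate from the defaults are listed.
    ssl_certificate: /etc/ssl/certs/example.com.pem
    ssl_certificate_key: /etc/ssl/private/example.com.key
    php: true
```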
Obviously the entire setup can first be tested locally using Vagrant.
The Ansible playbooks will:
Tuxis asked me to develop Ansible playbooks that would roll out and configure Sensu across their various platforms. I developed this locally using Vagrant, starting with a basic server, a client and some basic checks. From there, I expanded the monitoring with more complicated checks and notification methods.
Using the Ansible playbooks, you can easily subscribe to Sensu checks or add your own.
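Conceptually, a host's monitoring is driven by a handful of variables: the subscriptions a client has and the checks defined for those subscriptions. The layout below is an assumed example, not the actual Tuxis variable scheme:

```yaml
# Hypothetical Ansible variables for a Sensu (1.x-style) setup; the check
# command, plugin path and handler names are placeholders.
sensu_client_subscriptions:
  - default
  - webserver

sensu_checks:
  check_nginx:
    command: /etc/sensu/plugins/check-process.rb -p nginx
    subscribers:
      - webserver
    interval: 60
    handlers:
      - mailer
```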
As DevOps Architect Online, I supported a number of teams in the IJsvogel Retail organisation, the corporation behind Pets Place and Boerenbond.
I was responsible for the day-to-day operations revolving around the ecommerce platform built on Magento 2, running on Amazon Web Services behind Varnish caching servers.
This included troubleshooting issues and reporting these to the correct suppliers, such as Payment Service Providers (Buckaroo), Magento 2 developers (50x Solutions), iOS/Android developers (Egeniq) or the Delivery Management Software provider Paazl. Together with the Online Business team I was responsible for Scrum sprint planning and project prioritization.
I also helped introduce a customer loyalty program (VIP Club), consisting of iOS and Android mobile apps built by Egeniq, the ecommerce website and a CRM platform built by The Valley.
As a DevOps Engineer, I was tasked with setting up a new, fully automated platform on which customers can request a dedicated, Atlassian-stack-based DTAP environment in a private cloud. Other tools, such as Jenkins and Rundeck, may be added to the stack.
I developed a Python tool that reads its configuration from Puppet Hiera and then creates the necessary virtual machines in a VMware vCloud environment. It also configures a private LAN for the customer, as well as networking (SNAT) and firewalling rules. The public internet-facing proxy servers, running Nginx, receive a signal to update their configuration. Aside from creating the DTAP environments, documentation and instructions for maintenance and for provisioning new customers also had to be created. A follow-up project consists of migrating current customers to a new, private cloud.
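To give an idea of the input side (the real keys are project-specific, so everything below is a hypothetical example), the tool consumes Hiera data describing a customer's environments and networking, roughly like this:

```yaml
# Hypothetical Hiera data for one customer; key names, addresses and the
# application lists are illustrative only.
customer::name: acme
customer::network:
  lan_cidr: 10.10.42.0/24
  snat_ip: 203.0.113.42
customer::environments:
  - name: development
    applications: [jira, confluence]
  - name: test
    applications: [jira, confluence, bitbucket]
  - name: acceptance
    applications: [jira, confluence, bitbucket]
  - name: production
    applications: [jira, confluence, bitbucket]
    extra_tools: [jenkins, rundeck]
```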
Other software used: