From Application Engineer to Cloud Native Engineer

Photo by engin akyurt on Unsplash

I have been developing software for quite a few years now. I started with small private and school projects, moved on to my first professional work for different companies (mobile apps, VR, real-time rendering, etc.), and later developed for a university in Austria (shout-out to the awesome people on my old team) with an organizational and research focus. Although all these jobs were very different, they had one thing in common: I was a developer, and my job was finished when the feature was merged into the main branch. I had some idea of what happened when the release was built, deployed, and monitored, but most of the time it was not my concern. There were always operations people doing all of that, who only informed me when something went wrong.

Photo by Kenny Eliason on Unsplash

Over a year ago, I started a new job at &amp (andamp.io). I had the chance to work on an entirely new project with an up-to-date stack that actively used DevOps practices. The project we built had a tight schedule, so I was thrown into the middle of it and tried to help where I could. It consists of several microservices that communicate with each other over RabbitMQ and run in a modern cloud-native setup built on Spring Boot, OpenShift, ArgoCD, Grafana, Loki, Prometheus, etc. The idea was that I, the new guy who had always just been developing software, should relieve the team lead of his operational work. When the project started, it was already clear that we wanted to use DevOps principles for the whole process. In case you don't know, DevOps is a set of practices and tools that combines software development and operations to improve the speed and quality of software delivery. In our case, the infrastructure team provided us with a cluster running OpenShift and gave us a basic introduction to it. After that, we were completely independent of them, meaning we could deploy our applications whenever we wanted and install any additional frameworks and tools we needed. We also designed and planned the project with this idea in mind, so that we could deploy fast, test and fix, and then release. As I said, the timeline was really tight, so speed was key.
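To give you a feeling for what "deploying whenever we wanted" looks like in a GitOps setup like ours, here is a minimal sketch of an ArgoCD Application manifest. All names, namespaces, and the repository URL are illustrative placeholders, not our actual configuration; the point is only that ArgoCD watches a Git repository and keeps the cluster in sync with it.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-service            # hypothetical service name
  namespace: argocd               # namespace where ArgoCD itself runs
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/team/deploy-config.git  # illustrative Git repo with the manifests
    targetRevision: main
    path: orders-service
  destination:
    server: https://kubernetes.default.svc   # the cluster ArgoCD deploys into
    namespace: orders
  syncPolicy:
    automated:
      prune: true      # delete cluster resources that were removed from Git
      selfHeal: true   # revert manual changes that drift from the Git state
```

With `automated` sync enabled, merging a change to the manifest repository is all it takes to roll out a new version, which is exactly what makes this workflow feel so fast for a developer.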

As you can imagine, I was not sure whether I could do it, especially given the tight schedule, but I was also really motivated to dive in and support my colleagues. I learned so many new things over the next few weeks, and I was completely hooked. One moment I was developing features, the next I was building and releasing new container images and deploying them to OpenShift. We used GitLab for CI and ArgoCD for CD. For monitoring the application, we used Grafana and Prometheus alerts. Everything was just so practical, and I was in control of everything, which was a good feeling.
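The CI half of that loop can be sketched as a small `.gitlab-ci.yml`. This is a simplified, assumed example (the Maven image, stage names, and the use of Kaniko are my illustration, not our actual pipeline); `CI_PROJECT_DIR`, `CI_REGISTRY_IMAGE`, and `CI_COMMIT_SHORT_SHA` are predefined GitLab CI variables.

```yaml
stages:
  - build
  - image

build:
  stage: build
  image: maven:3.9-eclipse-temurin-17   # assumed Java/Maven toolchain for a Spring Boot service
  script:
    - mvn -B package                    # compile and package the application

container-image:
  stage: image
  image:
    name: gcr.io/kaniko-project/executor:debug   # builds container images without a Docker daemon
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"   # tag the image with the commit SHA
```

From there, updating the image tag in the manifest repository hands the rollout over to the CD side, which ArgoCD handles.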

In parallel, I was reading “The Phoenix Project” (by Gene Kim, Kevin Behr, and George Spafford) and its follow-up, “The DevOps Handbook” (by Gene Kim, Patrick Debois, John Willis, and Jez Humble). Because we were living the DevOps principles in our project, the books resonated with me immediately, and when I read about the “aha moment”, I finally understood it. The books, especially the beginning of “The DevOps Handbook”, describe the moment when a developer discovers the practical side of becoming a DevOps engineer: new ways of simplifying the deployment, release, and monitoring of applications are introduced and made accessible to everyone. After the initial learning curve, most (if not all) developers realize how simple and efficient everything can be, and then they most likely experience the famous aha moment, just like I did.

Conclusion

I wanted to give you a short introduction to my transformation and show you what benefits your engineers and colleagues can gain, even if they are newbies in this area like I was. When planning a new project, always try to consider and apply DevOps principles, even in the first stages of design; you will thank yourself later.

In the next blog post, I will show you how to set up a cluster similar to the one we used, so you can get an even deeper understanding of these processes and maybe even apply them yourself.