The first time I successfully deployed to OpenShift via a Tekton pipeline, the pod immediately crashed with a truststore error. That was the beginning of a longer journey than I expected, one that eventually covered S2I builds, Maven publishing, GitOps configuration, and a failed attempt to force the JFrog CLI into a place it didn’t belong. Here’s what I learned.
Part 1: The Pipeline: From Code to Deployable Image
My first task was to turn a simple Spring Boot/Kafka application into a containerized image. In an OpenShift environment, this means defining a CI/CD pipeline (using Tekton/OpenShift Pipelines) that automates the entire journey from source code to a running pod.
To achieve a deployable image that successfully spins up as a pod, our pipeline required a sequence of specific, high-level tasks, with steps 1 through 3 executed within Tekton and step 4 achieved through OpenShift’s ImageStream:
- Source Checkout: The pipeline starts by reaching out to our Bitbucket repository and cloning the latest commit from the target branch. This ensures every build is tied to a specific, traceable version of the source code.
- Containerization (Source-to-Image / Buildah): This step is handled by our custom s2i-java-certsincluded Tekton task, which runs two internal stages. First, S2I inspects the source tree and generates a Dockerfile, injecting environment variables such as the Artifactory URL so Maven knows where to fetch dependencies when the build runs. Buildah then takes that generated Dockerfile and builds the actual container image: Maven resolves dependencies from our company’s internal Artifactory instance, compiles the application, and packages it into a fat JAR (a single self-contained executable bundling the application, all its dependencies, and an embedded web server). The result is a single, immutable image built on our enterprise-approved base (ubi9-openjdk-21), needing nothing pre-installed beyond a JRE to run.
- Push to Internal Registry: Once the image is built, the pipeline pushes it to OpenShift’s internal image registry. Keeping the image inside the cluster’s internal registry avoids external network hops and authentication headaches.
- Deployment Trigger: When the pipeline pushes the new image to the internal registry, it automatically updates the OpenShift ImageStream tag. Since the Deployment references the ImageStream rather than the registry URL directly, OpenShift immediately detects the change and initiates a rolling update. From that point on, everything is hands-off, no manual deployment step needed.
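The four stages above can be sketched as a Tekton Pipeline. This is a minimal sketch, not our exact manifest: the pipeline, parameter, and workspace names are illustrative (only s2i-java-certsincluded comes from our actual setup), and it assumes the standard git-clone task is installed in the cluster.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: kafka-app-build
spec:
  params:
    - name: git-url
    - name: git-revision
      default: dev
  workspaces:
    - name: source
  tasks:
    # 1. Source checkout from Bitbucket
    - name: fetch-source
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
      workspaces:
        - name: output
          workspace: source
    # 2 + 3. S2I generates a Dockerfile, Buildah builds the image
    # and pushes it to the internal registry (the ImageStream tag)
    - name: build-and-push
      runAfter: ["fetch-source"]
      taskRef:
        name: s2i-java-certsincluded
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/$(context.pipelineRun.namespace)/kafka-app:latest
      workspaces:
        - name: source
          workspace: source
# 4. The Deployment references the ImageStream, so the pushed tag
# triggers a rolling update automatically; no pipeline step needed.
```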
Once the pipeline was running and the image deployed, the application immediately failed to connect to the Kafka cluster. The logs were unambiguous:
java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
The enterprise Kafka cluster uses internal SSL certificates that weren’t present in the JDK’s default truststore inside the container. My first fix was to use a Kubernetes ConfigMap to inject a JAVA_OPTS environment variable pointing the JVM at a custom truststore location. It worked, but it meant every deployment had to carry that flag and know where the truststore lived: fragile and easy to misconfigure. The lower-friction approach was to mount the enterprise certificate bundle directly over Java’s default truststore path:
volumeMounts:
  # Mount directly to the OpenJDK default truststore location
  - name: jks-ca-certs
    mountPath: /usr/lib/jvm/java-21-openjdk/lib/security/cacerts
    subPath: cacerts
This approach was cleaner and easier to maintain because it requires no changes to application code or startup flags. Java simply picks up the certificates automatically on startup and treats them like the certificates shipped with the JDK by default. Once the rolling update triggers, the new pod spins up, the subPath volume mount injects the SSL certificates into the Java truststore, and the application connects to the Kafka brokers successfully.
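For completeness, the mount above needs a matching volume definition in the pod spec. A minimal sketch, assuming the enterprise certificate bundle is shipped in a Secret named jks-ca-certs (the Secret name and key are illustrative):

```yaml
volumes:
  - name: jks-ca-certs
    secret:
      secretName: jks-ca-certs   # holds the enterprise cacerts JKS file under the key "cacerts"
```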
After resolving the certificate issue and setting up the tasks listed above, the process from a git push to a running, securely connected application became entirely hands-off.
Part 2: The Library Challenge: Publishing to Artifactory
With the application successfully containerized and connected, another project was waiting for a pipeline implementation. This time the project didn’t require me to deploy an application; instead, the goal was to make it convenient to publish a JAR to the enterprise Artifactory via an OpenShift pipeline with a simple button click, for other projects to consume.
The Trap: Over-Engineering
My initial approach was to replicate the complex tooling used in other pipelines, involving the JFrog CLI and sophisticated version extraction logic. This led to image pull problems, network restrictions, and authentication failures.
The Solution: Radical Simplification
I took a step back and reconsidered my choice of using JFrog. The limited images available in the cluster and the authentication issues I kept hitting were generating unnecessary overhead.
Given enough time, I could have found a workaround for these issues, but was it even necessary for such a simple publishing task? After quickly searching for alternatives, I realized I could achieve everything with Maven alone.
So I created a simple Tekton Task that generated a settings.xml for Artifactory authentication using OpenShift secrets, and then ran a standard mvn deploy command.
The Simplified Logic:
- Inject Secrets: Use an envFrom reference to retrieve Artifactory credentials.
- Generate Config: Create a temporary settings.xml pointing to the internal Artifactory URL.
- Deploy:
mvn clean package deploy -s "${MAVEN_CONFIG_DIR}/settings.xml" -DskipTests
This method leveraged Maven’s native ability to handle artifact paths, metadata, and snapshot/release toggling, eliminating the need for external scripts. By trusting the pom.xml as the single source of truth, the library publishing pipeline became fast, reliable, and easy to maintain.
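Put together, the publishing Task can be sketched roughly as follows. The secret name, credential variable names, step image, and settings.xml layout are assumptions for illustration, not our exact manifest:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-publish
spec:
  workspaces:
    - name: source
  steps:
    - name: deploy
      image: maven:3.9-eclipse-temurin-21    # any image with mvn and a JDK works
      workingDir: $(workspaces.source.path)
      envFrom:
        - secretRef:
            name: artifactory-credentials    # provides ARTIFACTORY_USER / ARTIFACTORY_TOKEN
      script: |
        #!/bin/sh
        set -e
        MAVEN_CONFIG_DIR=/tmp/maven-config
        mkdir -p "${MAVEN_CONFIG_DIR}"
        # Generate a throwaway settings.xml; the <id> must match the
        # repository id in the pom.xml's <distributionManagement> section.
        cat > "${MAVEN_CONFIG_DIR}/settings.xml" <<EOF
        <settings>
          <servers>
            <server>
              <id>artifactory</id>
              <username>${ARTIFACTORY_USER}</username>
              <password>${ARTIFACTORY_TOKEN}</password>
            </server>
          </servers>
        </settings>
        EOF
        mvn clean package deploy -s "${MAVEN_CONFIG_DIR}/settings.xml" -DskipTests
```

With the pom.xml declaring the target repository URL in distributionManagement, Maven handles paths, metadata, and snapshot/release routing on its own.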
Part 3: The Ecosystem: Strategic Tagging & GitOps
At this point, I had successfully conquered the two core build patterns: building containerized Spring Boot applications and publishing versioned Java libraries. But getting an artifact built is only the first step of the CI/CD journey.
The second piece of the puzzle was managing the lifecycle of these artifacts as they travel toward production. How do we ensure that the exact same application tested in “Dev” is the one running in “Test”? And how do we manage environmental configurations, like different database URLs or Kafka broker addresses, without manually tweaking deployments?
This is where strategic tagging and GitOps came into play.
Strategic Tagging
For applications, we used the following tagging system that allowed us to promote images across environments without ever rebuilding the code:
- :latest : Automatically updated from the dev branch.
- :current : The stable version running in the test environment.
- :v1.0.0 : The frozen production release.
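With this scheme, promoting an image between environments is just retagging inside the ImageStream, never rebuilding. A sketch using the standard oc tag command (namespace and ImageStream names are illustrative):

```shell
# Promote the latest dev build into the test environment
oc tag my-namespace/kafka-app:latest my-namespace/kafka-app:current

# Freeze what is running in test as the production release
oc tag my-namespace/kafka-app:current my-namespace/kafka-app:v1.0.0
```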
GitOps Configuration
To maintain sanity across these multiple environments, we adopted a strict GitOps structure. Configuration (like our ConfigMaps) is completely separated from the application code.
We maintain distinct folders (dev/, test/) in a dedicated configuration Git repository:
kafka-app-openshift/
└── openshift-config/
    ├── dev/
    │   ├── ConfigMaps/
    │   │   └── app-settings.yaml
    │   └── Deployments/
    │       └── kafka-app.yaml      # Uses the :latest tag
    └── test/
        ├── ConfigMaps/
        │   └── app-settings.yaml
        └── Deployments/
            └── kafka-app.yaml      # Uses the :current tag
In our setup, I implemented a push-based GitOps flow. When a change is merged into the configuration repository’s master branch, a Bitbucket Webhook is triggered. This webhook notifies an OpenShift Event Listener, which in turn kicks off a specialized Tekton ‘Apply Pipeline.’ This pipeline automatically applies the updated manifests to the cluster. This ensures that our environments are completely reproducible, configuration drift is eliminated, and we have a full, version-controlled audit trail of every infrastructure change.
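The wiring for this push-based flow can be sketched as a Tekton Triggers EventListener. All resource names here are illustrative, the TriggerBinding and TriggerTemplate bodies are elided, and the CEL filter path assumes Bitbucket Cloud’s push payload shape (Bitbucket Server webhooks use a different JSON structure):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: config-repo-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: on-master-push
      interceptors:
        # Only react to pushes on the master branch of the config repo
        - ref:
            name: cel
          params:
            - name: filter
              value: body.push.changes[0].new.name == 'master'
      bindings:
        - ref: config-repo-binding     # extracts repo URL / revision from the webhook payload
      template:
        ref: apply-pipeline-template   # creates a PipelineRun of the Apply Pipeline
```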

3 Key Lessons Learned
1. The Fix No One Has to Think About
When the JVM couldn’t find the enterprise SSL certificates, the first fix worked, but it pushed the problem into application configuration. Every deployment had to carry a startup flag and know where the truststore lived: fragile and easy to misconfigure. The better fix handled it at the infrastructure layer, where Java could pick it up automatically without any code or flag changes. A volume mount that silently satisfies Java’s expectations beats an explicit flag the deployment has to remember to set. When something feels fragile, ask whether you’re solving the problem at the right layer rather than just patching it at the nearest one.
2. Match the Tool to the Task
The JFrog CLI is a capable tool, but it was the wrong choice here. When a native tool already handles the job (Maven publishing to Artifactory), adding a specialized layer on top creates complexity without adding value. The principle generalizes: before scripting custom logic around a problem, check whether the build tool, runtime, or platform already solves it. The minimum viable pipeline is usually the most maintainable one.
3. Config Belongs in Git, Not in the Cluster
Once you have a working build, the next question is: how does configuration move safely across environments without accumulating drift?
The answer was to stop treating the config repo as a backup and start treating it as the actual control plane. All environment-specific manifests (ConfigMaps, Deployments) live in a dedicated repository under explicit dev/ and test/ folders. Nothing is edited directly in the cluster. The image promotion strategy reinforces this: :latest flows into dev on every successful pipeline run, :current marks what is stable in test, and :v1.0.0 freezes the production release. The same image binary moves across environments: no rebuilds, no drift.
The automation closes the loop: a merge to the config repository triggers a Bitbucket webhook, which hits an OpenShift EventListener, which kicks off a Tekton Apply Pipeline that runs oc apply against the cluster. Every configuration change is version-controlled, peer-reviewed, and applied consistently. When something breaks in test, you have an exact audit trail of what changed and when.
Conclusion
The journey from struggling with SSL certificate truststores to having multi-environment CI/CD pipelines taught me a simple but important lesson: “It doesn’t have to be clever.” The SSL fix was a volume mount. The library pipeline was mvn deploy. The GitOps flow was a webhook and a folder structure. The moments where things got complicated were almost always the moments I was reaching for a tool I didn’t need. If I took one thing from this project, it’s that a boring, minimal pipeline is almost always the right one.