Pipelines Questions
Questions on how to use Serverless Jenkins X Pipelines
Using bash completion really helps when using the jx command line, letting you TAB-complete commands and command line arguments.
To see how to enable bash completion, check out the jx completion command.
Each environment in Jenkins X is defined in a git repository; we use GitOps to manage all changes in each environment, such as:
- which applications (helm charts) are deployed
- which versions of those applications are deployed
- any environment-specific configuration
The first two items are defined in the env/requirements.yaml file in the git repository for your environment. The latter is defined in the env/values.yaml file.
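For illustration, here is a minimal sketch of an environment's env/requirements.yaml; the chart name, repository URL and version are hypothetical:

```yaml
# env/requirements.yaml - lists the charts (and their versions) deployed
# into this environment; name, repository and version are illustrative
dependencies:
- name: foo
  repository: http://jenkins-x-chartmuseum:8080
  version: 1.2.3
```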
Helm charts use a values.yaml file so that you can override any configuration inside your chart, to modify settings such as labels or annotations on any resource, configurations of resources (e.g. replicaCount), or to pass things like environment variables into a Deployment.
So if you wish to change, say, the replicaCount of an app foo in Staging, then find the git repository for the Staging environment via jx get env to find the git URL. Navigate to the env/values.yaml file and add/edit a bit of YAML.
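A sketch of such an override; the app name foo and the replica count are illustrative:

```yaml
# env/values.yaml - override values for the foo chart in this environment
foo:
  replicaCount: 5
```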
Submit that change as a Pull Request so it can go through the CI tests and any peer review/approval required; then when it's merged to master it will modify the replicaCount of the foo application (assuming there's a chart called foo in the env/requirements.yaml file).
You can use vanilla helm to do things like injecting the current namespace if you need that.
To see a more complex example of how you can use a values.yaml file to inject configuration into charts, see how we use these files to configure Jenkins X itself.
See the above question on how to inject environment-specific configuration into environments.
Preview Environments are similar to other environments like Staging and Production, only instead of storing the environment in a separate git repository, the preview environment is defined inside each application's charts/preview folder.
So to inject any custom configuration into your Preview environment, you can modify the charts/preview/values.yaml file in your application's git repository to override any helm template parameters defined in your chart (in the charts/myapp folder), as in the sketch below.
You may need to modify your helm charts to add extra helm configuration if the setting you wish to change is not already exposed via the values.yaml file.
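For example, a hedged sketch of charts/preview/values.yaml, assuming the application chart is aliased as preview in charts/preview/requirements.yaml (the default quickstart layout) and that it exposes replicaCount and an env map:

```yaml
# charts/preview/values.yaml - overrides applied only in Preview
# Environments; keys under 'preview' are passed to your app chart
preview:
  replicaCount: 1
  env:
    LOG_LEVEL: debug
```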
Hashicorp Vault is the preferred way in Jenkins X to manage secrets. For example, the GitHub personal access token generated for the pipeline bot is stored in Vault. Read more about using Vault to manage your secrets with Jenkins X.
In addition, the Jenkins X team are big fans of Kubernetes External Secrets and are developing jx-secret, a small command line tool working with Kubernetes External Secrets.
We have a background garbage collection job which removes Preview Environments after the Pull Request is closed/merged. You can run it any time you like via the jx gc previews command.
You can also view the current previews via jx get previews, and delete a preview by choosing one to delete via jx delete preview.
When you create a Pull Request, by default Jenkins X creates a new Preview Environment. Since this is a new dynamic namespace, you may want to configure additional microservices in the namespace so you can properly test your preview build.
To find out more, see how to add dependent charts, services or configuration to your preview environment.
With Jenkins X you are free to create your own pipeline to do the release if you wish; though doing so means you miss out on our extension model, which lets you easily enable various extension Apps like Governance, Compliance, code quality, code coverage, security scanning, vulnerability testing and various other extensions which are being added all the time through our community.
We built this extension model specifically to minimise the work your teams spend editing and maintaining pipelines across many separate microservices; the idea is that we automate both the pipelines and the extensions to the pipelines so teams can focus on their actual code and less on the CI/CD plumbing, which is pretty much all undifferentiated heavy lifting these days.
We don't use branch patterns with Tekton; they are a Jenkins-specific configuration. For Tekton we use the prow / lighthouse configuration to specify which branches trigger which pipeline contexts.
If you are using boot to install Jenkins X then you can create your own custom Scheduler custom resource in env/templates/myscheduler.yaml based on the default one that is included; e.g. here is how we specify the branches used to create releases.
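A heavily hedged sketch of such a resource; the spec layout below follows our reading of the jenkins.io/v1 Scheduler CRD, but copy the default scheduler shipped with boot and edit that rather than trusting these exact field names:

```yaml
# env/templates/myscheduler.yaml - a hypothetical custom Scheduler that
# runs the release pipeline on pushes to master or release-* branches.
# NOTE: verify the field layout against the default scheduler in your
# cluster; this is an assumption, not an authoritative example.
apiVersion: jenkins.io/v1
kind: Scheduler
metadata:
  name: myscheduler
spec:
  postsubmits:
    entries:
    - name: release
      context: release
      branches:
        entries:
        - master
        - release-.*
```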
You can also create additional pipeline contexts; e.g. here's how we add multiple parallel testing pipelines on the version stream via a custom Scheduler so that we can have many integration tests run in parallel on a single PR. Each named context listed then has an associated jenkins-x-$context.yml file in the source repository to define the pipeline to run, like this example which defines the boot-lh context.
You can then associate your SourceRepository resources with your custom scheduler by:
- editing the spec.scheduler.name property of your SourceRepository (via kubectl edit sr my-repo-name)
- importing your project with jx import --scheduler myname
- setting the default scheduler for your dev Environment at spec.teamSettings.defaultScheduler.name before you import projects

If you are not using boot then you can use kubectl edit cm config and modify the prow configuration by hand - though we highly recommend using boot and GitOps instead; the prow configuration is easy to break if changing it by hand.
The kubernetes resources being deployed are defined as YAML files in the source code of your application in charts/myapp/templates/*.yaml. If you don't specify anything then Jenkins X creates default resources (a Service + Deployment), but you are free to add any k8s resources as YAML into that folder (PVCs, ConfigMaps, Services, etc).
Then the Jenkins X release pipeline automatically tars up the YAML files into an immutable versioned tarball (using the same version number as the docker image, git tag and release notes) and deploys it into a chart repository of your choice (defaults to chartmuseum but you can easily switch that to cloud storage/nexus/whatever) so that the immutable release can be easily used by any promotion.
Promotion in Jenkins X is completely separate from Release, and we support promoting any release that is packaged as a helm chart. Promotion via the jx promote CLI generates a Pull Request in the git repository for an environment (Staging, Canary, Production or whatever). This is GitOps basically - specifying which versions and configurations of which apps are in each environment using a git repository and configuration as code.
The PR triggers a CI pipeline to verify the changes are valid (e.g. the helm chart exists and can be downloaded, the docker images exist etc). Whenever the PR gets merged (could be automatically or may require additional reviews/+1s/JIRA/ServiceNow tickets or whatever) - then another pipeline is triggered to apply the helm charts from the master branch to the destination k8s cluster and namespace.
Jenkins X automates all of the above, but given both these pipelines are defined in the environment's git repository in a Jenkinsfile, you are free to customise them to add your own pre/post steps if you wish; e.g. you could analyse the YAML to pre-provision PVs for any PVCs using some custom disk snapshot tool you may have, or you could do that in a pre or post-install helm hook job. Though we'd prefer these tools to be created as part of the Jenkins X extension model to avoid custom pipeline hacking which could break in future Jenkins X releases - though it's not a huge biggie.
When using a docker registry like gcr.io, the docker image owner (gcr.io/owner/myname:1.2.3) can be different from your git owner/organisation. On Google's GCR this is usually your GCP Project ID, and you can have many different projects to group images together.
There are a few options for defining which docker registry owner to use:
- specify it in your project's jenkins-x.yml file
- specify it on your team's dev Environment at env.spec.teamSettings.dockerRegistryOrg
- specify the DOCKER_REGISTRY_ORG environment variable in your pipelines

If none of those are found then the code defaults to the git repository owner. For more details, the code that resolves it is here.
To help automate CI/CD with GitOps we assume helm charts are created as part of the automated project setup and CI/CD; e.g. just import your source code and a docker image + helm chart will be generated for you - developers don't need to know or care about helm if they don't want to use it.
If a developer wants to create a specific resource (e.g. Secret, ConfigMap, etc.) they can just hack the YAML directly in charts/myapp/templates/*.yaml. Increasingly, most IDEs now have UI wizards for creating + editing kubernetes resources.
By default, things like resource limits are put in values.yaml so it's easy to customise those as needed in different environments (requests/limits, liveness probe timeouts and the like).
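For instance, a hypothetical values.yaml fragment of this kind (all numbers purely illustrative):

```yaml
# charts/myapp/values.yaml - resource settings that each environment's
# env/values.yaml can override per environment
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
livenessProbe:
  initialDelaySeconds: 60
  timeoutSeconds: 1
```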
If you have a developer who is fundamentally opposed to helm's configuration management solution for environment-specific configuration, you can opt out of that and use helm purely as a way to version and download immutable tarballs of YAML, sticking to vanilla YAML files in, say, charts/myapp/templates/deployment.yaml.
Then if you wish to use another configuration management tool you can add it in - e.g. kustomize support.
If you use serverless apps with Knative, we don't use the default exposecontroller mechanism for defaulting the Ingress resources, since knative does not use kubernetes Service resources.
You can work around this by manually editing the knative configuration (the config-domain ConfigMap in the knative-serving namespace). For more help see using a custom domain with knative.
You should be able to use exposecontroller directly in any app you deploy in any environment (e.g. Staging or Production) as we already trigger exposecontroller on each new release.
We use exposecontroller for Jenkins X to handle the generation of Ingress resources, so that we can support wildcard DNS on a domain or automate the setup of HTTPS/TLS, along with injecting external endpoints into applications in ConfigMaps via annotations.
To get exposecontroller to generate the Ingress for a Service, just add the expose marker to your Service, e.g. in your charts/myapp/templates/service.yaml.
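A hedged sketch of what that can look like; Jenkins X quickstart charts typically mark the Service with the fabric8.io/expose annotation, so double-check against a generated quickstart chart:

```yaml
# charts/myapp/templates/service.yaml - the fabric8.io/expose marker tells
# exposecontroller to generate an Ingress for this Service
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    fabric8.io/expose: "true"
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp
```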
If you want to inject the URL or host name of your ingress's external endpoint, just use the exposecontroller injection annotations.
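A hedged example on a ConfigMap; the expose.config.fabric8.io/url-key and host-key annotation names come from the fabric8 exposecontroller, while the WEB_URL / WEB_HOST data key names are hypothetical:

```yaml
# a ConfigMap asking exposecontroller to inject the external URL and host
# of the exposed service into the named data keys
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp
  annotations:
    expose.config.fabric8.io/url-key: WEB_URL
    expose.config.fabric8.io/host-key: WEB_HOST
data:
  WEB_URL: ""
  WEB_HOST: ""
```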
There may be times when you need to add custom annotations to the ingress resources generated by the exposecontroller which jx uses to expose services. You can add a list of annotations to your application's service in its Helm chart, which is found in your app's code repository. A custom annotation may be added to the charts/myapp/values.yaml and may look as follows.
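A hedged example; the fabric8.io/ingress.annotations value is copied by exposecontroller onto the generated Ingress, and the nginx proxy-body-size annotation shown is purely illustrative:

```yaml
# charts/myapp/values.yaml - annotations placed on the Service; the
# fabric8.io/ingress.annotations block is transferred to the Ingress
service:
  annotations:
    fabric8.io/ingress.annotations: |-
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/proxy-body-size: 8m
```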
To see an example of where we add multiple annotations that the exposecontroller adds to generated ingress rules, take a look at this values.yaml.
If you have an existing monorepo you want to import into Jenkins X you can; just be aware that you'll have to create and maintain your own pipelines for your monorepo. So just modify the jenkins-x.yml file after you import your monorepo.
See how to add a custom step to your pipeline.
By default, enabling Vault via jx boot's jx-requirements.yml will only activate it in your pipeline and preview environments, not in staging and production. To also activate it in those environments, simply add a jx-requirements.yml file to the root of their repo, with at least the following content.
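Based on the requirements schema used by jx boot, the minimal content should be along these lines:

```yaml
# jx-requirements.yml at the root of the staging/production repository
secretStorage: vault
```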
Note that the file must be named with .yml, not .yaml, or else the requirements loader cannot load the proper file.
Then, assuming you have a secret in Vault with path secret/path/to/mysecret containing key password, you can inject it into service myapp (for instance, as a PASSWORD environment variable) by adding the following to your staging repo's /env/values.yaml.
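A sketch matching the description in the next paragraph; it assumes your chart passes an env map from its values into the Deployment's environment variables:

```yaml
# env/values.yaml in the staging repository
myapp:
  env:
    PASSWORD: vault:path/to/mysecret:password
```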
Notice the prefixing with the vault: URL scheme, and also that we omit the first path component (secret/), as it gets added automatically. Finally, the key name is separated from the path by a colon (:).
If your secret is not environment-specific, you can also inject it directly into your app's /charts/myapp/values.yaml.
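Under the same env map assumption as above:

```yaml
# charts/myapp/values.yaml
env:
  PASSWORD: vault:path/to/mysecret:password
```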
However, note that this value would be overridden at the environment level if the same key is also present there.
Vault does not need to be explicitly enabled for preview environments. To inject the same secret as above into your preview, simply add the following to your app's /charts/preview/values.yaml.
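Assuming the app chart is aliased as preview in charts/preview/requirements.yaml (the default quickstart layout):

```yaml
# charts/preview/values.yaml
preview:
  env:
    PASSWORD: vault:path/to/mysecret:password
```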
When you inject secrets directly into environment variables, they appear in the Deployment yaml as plain text, which is not advisable. It is recommended instead to inject them into a Secret yaml that will itself be mounted as environment variables.
For example, start by injecting the secret into your staging repo's /env/values.yaml.
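A hypothetical sketch; the mysecrets key name is illustrative and just needs to match what your chart templates reference:

```yaml
# env/values.yaml in the staging repository
myapp:
  mysecrets:
    password: vault:path/to/mysecret:password
```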
Then, in your app's /charts/myapp/templates, create a mysecrets.yaml file in which you refer to the secret you just added.
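A hedged sketch of such a template, consistent with the mysecrets values key used above (the Secret name myapp-mysecrets is hypothetical):

```yaml
# charts/myapp/templates/mysecrets.yaml - a Secret whose value comes from
# values.yaml or the environment's values; b64enc encodes it as Base64
apiVersion: v1
kind: Secret
metadata:
  name: myapp-mysecrets
type: Opaque
data:
  password: {{ .Values.mysecrets.password | b64enc | quote }}
```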
Notice how we encode the secret value in Base64, as this is the format expected in a Secret yaml.
Also, make sure to add a default value for the same key in your app's /charts/myapp/values.yaml.
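Consistent with the template above, a hypothetical default:

```yaml
# charts/myapp/values.yaml - default so that linting can resolve the key
mysecrets:
  password: changeme
```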
That allows Helm to resolve to some value during linting of your mysecrets.yaml, as linting seems not to consider values from the environment. Otherwise, linting may fail with a render error on mysecrets.yaml complaining about the missing value.
Finally, mount the Secret yaml as environment variables in your app's /charts/myapp/templates/deployment.yaml.
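A hedged fragment of the container spec; secretKeyRef pulls the password key of the hypothetical myapp-mysecrets Secret into a PASSWORD environment variable:

```yaml
# charts/myapp/templates/deployment.yaml (fragment) - containers section
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        env:
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-mysecrets
              key: password
```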