[{"data":1,"prerenderedAt":816},["ShallowReactive",2],{"/en-us/blog/migration-from-atlassian-bamboo-server-to-gitlab-ci":3,"navigation-en-us":40,"banner-en-us":449,"footer-en-us":459,"blog-post-authors-en-us-Ivan Lychev":698,"blog-related-posts-en-us-migration-from-atlassian-bamboo-server-to-gitlab-ci":712,"blog-promotions-en-us":753,"next-steps-en-us":806},{"id":4,"title":5,"authorSlugs":6,"body":8,"categorySlug":9,"config":10,"content":14,"description":8,"extension":26,"isFeatured":12,"meta":27,"navigation":28,"path":29,"publishedDate":20,"seo":30,"stem":35,"tagSlugs":36,"__hash__":39},"blogPosts/en-us/blog/migration-from-atlassian-bamboo-server-to-gitlab-ci.yml","Migration From Atlassian Bamboo Server To Gitlab Ci",[7],"ivan-lychev",null,"engineering",{"slug":11,"featured":12,"template":13},"migration-from-atlassian-bamboo-server-to-gitlab-ci",false,"BlogPost",{"title":15,"description":16,"authors":17,"heroImage":19,"date":20,"body":21,"category":9,"tags":22},"How to migrate Atlassian Bamboo Server's CI/CD infrastructure to GitLab CI, part one","Theoretical reasoning and practical proposal on migrating an existing CI/CD infrastructure of some multi-component application from Bamboo Server to GitLab CI",[18],"Ivan Lychev","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749663397/Blog/Hero%20Images/logoforblogpost.jpg","2022-07-06","\n\nWhen I faced a task of migrating from `Atlassian Bamboo Server` to `GitLab CI/CD`, I was not able to find any comprehensive information regarding something similar. So I designed a process on my own. This demo shows how to migrate a CI/CD structure for an existing multi-component application from a discontinued [Atlassian Bamboo Server](https://www.atlassian.com/migration/assess/journey-to-cloud) to [GitLab CI/CD](https://docs.gitlab.com/ee/index.html) (Community Edition).\n\nThe accompanying repository is https://gitlab.com/iLychevAD/ci-cd-for-a-multi-component-app.\n\nIn this first part of a two-part series, you will find a description of the current state of affairs - i.e., how the CI/CD has been organized within Bamboo Server, how the Bamboo Build and Deploy plans are designed for bootstrapping infrastructure and deploying the components of the application, and the architecture of the application itself.\n\nAnd in part two, we'll take a deeper look at the virtues of `GitLab CI/CD`.\n\n## Initial state\n\n(Note: This is not a description of some particular project but more a kind of compilation of several projects I worked on.)\n\nThe application solution allows the client to fulfill a particular business purpose (the nature of which is not relevant here and thus not specified) and consists of more than 50 discrete components (further referred to as `applications` or just `apps` or `components`). I refrain from calling them microservices as each of them looks more like a full-fledged application communicating with other siblings using REST API and messages in Kafka topics. Some of them expose a web UI to external or internal users and some are just utility parts serving the needs of other components or performing internal operations, etc.\n\nCode for each app is stored in its own Git repository (further just `repo`). So, a `multi-repo` approach is used for them. 
Each app may be written in a different language and is packaged as one or several OCI images for deployment.

Each app repo looks like:

```text
📦 <some-app-git-repo>
 ┣ 📂src <-- application source code
 ┣ 📂docker-compose
 ┃ ┗ 📜docker-compose.yml <-- analogue of K8s manifests
 ┗ 📜Dockerfile <-- conventionally, the "Dockerfile" name is used for the OCI image specification file
```

For running the applications, the client uses an outdated orchestration system (one from the pre-Kubernetes epoch), so each app repo contains a Docker Compose-compatible file describing deployment directives for that system (in essence, similar to Kubernetes Deployment manifests).

Atlassian Bamboo Server is used for all of the build and deploy activities.

Some details for those not familiar with Bamboo Server: in an opinionated manner, it explicitly separates so-called `build` pipelines and `deployment` pipelines. The former are supposed to build application code and produce artifacts for further deployment (in our case those artifacts are OCI images uploaded to an OCI registry and docker-compose.yml files referring to those images). The latter take some particular set of artifacts and apply them to some particular `environment`. An `environment` (referred to as `env` below for brevity) here is just an abstract deployment target characterized by a set of environment variables attached to it and exposed to the apps deployed into it. In reality, an `env` is implemented as a set of resources (virtual machines, databases, object storage locations, etc.) required by the applications.

In Bamboo, one `build` pipeline usually corresponds to one `deployment` pipeline, so when the latter is started it simply takes the artifacts from the attached `build` pipeline as input.

The client uses a `production` env, a `preproduction` env, and numerous (up to several hundred) so-called `staging` (short-lived) envs where different development teams and software engineers can test various combinations of the apps (here we assume they have ~80-100 distinct components of the application solution and several hundred software developers, which gives a lot of possible combinations and requires that many `staging` envs).

Roughly, the configuration of a `deploy` pipeline consists of a specification of the source artifacts (which are provided by the attached `build` pipeline as described earlier) and a specification of the set of envs where those artifacts (effectively, an application) can be deployed.

The current installation uses sophisticated dynamic generation of the env set for each app's deployment pipeline. Roughly speaking, there is a central configuration file listing all existing envs and, for each env, the apps allowed to be deployed into it. Each time the file is modified (i.e., an env is created or deleted), the deployment pipelines are automatically updated so that each of them ends up with the list of envs relevant to its app.
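To make this more concrete, here is a minimal sketch of what such a central file could look like. The format and names are invented purely for illustration; the client's actual file is not shown here:

```yaml
# Hypothetical central environments file: each env lists the apps
# that are allowed to be deployed into it.
environments:
  production:
    apps: [app1, app2, app3]
  preproduction:
    apps: [app1, app2, app3]
  staging-team-a:
    apps: [app1, app7]
```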
You will get a better idea of this aspect when you look at the implementation section later.

In the Bamboo UI this looks like:

![envs_list_on_build_result_page](https://about.gitlab.com/images/blogimages/migration-from-atlassian-bamboo-server-to-gitlab-ci/envs_list_on_build_result_page.png)

Here you can see an application build result page where, on the right-hand side under the `Included in deployment project` title, there is a list of envs into which you can deploy the application. (Keep in mind that besides `build` and `deployment` pipelines, Bamboo also uses the notion of `releases` - an intermediate entity that has to be created out of a build result to make it possible to deploy that build into some env.) The `cloud-with-upwards-arrow` button in the `Actions` column starts the corresponding `deploy` pipeline, automatically passing it a link to the build result (in the form of a `release` entity, in Bamboo terminology) and the name of the env next to which the button was clicked (how the list of envs is created for a `deploy` pipeline is described above).

The concept of a `release` is specific to Bamboo Server, though it provides some amenities. For example, on the Release details page you can see a list of envs where a release has been deployed. The `Commits` tab lets you trace a release back to the application code in source control, and the `Issues` tab shows the attached Jira tickets.

![bamboo_release_details](https://about.gitlab.com/images/blogimages/migration-from-atlassian-bamboo-server-to-gitlab-ci/bamboo_release_details.png)
Release details page


An env details page also lists the release history for that env (in the scope of one particular application, though, as an env is specified for each deployment pipeline individually):

![bamboo_env_details](https://about.gitlab.com/images/blogimages/migration-from-atlassian-bamboo-server-to-gitlab-ci/bamboo_env_details.png)
Env details page


And upon clicking the `cloud-with-upwards-arrow` button, Bamboo shows a diff of Jira tickets and commits with respect to the previous `release` (only if both releases are made from artifacts of the same Git branch):

![deploy_launch_page](https://about.gitlab.com/images/blogimages/migration-from-atlassian-bamboo-server-to-gitlab-ci/deploy_launch_page.png)
Deploy launch page


So, in general, the current path from source control to an env for each app looks like:

![svc_to_env_path](https://about.gitlab.com/images/blogimages/migration-from-atlassian-bamboo-server-to-gitlab-ci/svc_to_env_path.png)

The Build plans are triggered automatically upon Git commits or Git tags. Most of the Deployment plans are started manually by project members when needed. Each Deploy plan contains a step that checks whether the user who started the plan has permission to deploy into an env (for example, only members of the team which owns an env are allowed to deploy to that env, and deployment to the production env is allowed only for a set of eligible project members).
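To anticipate the GitLab side of the migration a little: a check like this can be expressed as a guard step in a deploy job. The sketch below is purely illustrative and assumes a hypothetical `deployers.txt` allow-list taken from the environment's configuration; it is not the demo's actual job definition:

```yaml
# Hypothetical GitLab CI guard: fail the deploy job if the user who
# triggered the pipeline is not in the environment's allow-list.
# deployers.txt is a placeholder file name; GITLAB_USER_LOGIN is a
# predefined GitLab CI variable.
check-deploy-permission:
  stage: deploy
  script:
    - echo "Deploy requested by ${GITLAB_USER_LOGIN}"
    - |
      if ! grep -qx "${GITLAB_USER_LOGIN}" deployers.txt; then
        echo "User ${GITLAB_USER_LOGIN} is not allowed to deploy to this env"
        exit 1
      fi
```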
## The task

The task is to migrate the aforementioned design from Bamboo Server to `GitLab` while keeping a similar deployment scheme (leveraging GitLab's `Environments` feature).

The following should also be taken into account:

 - Team members (software engineers, quality assurance specialists) are supposed to be able to manage environments on their own in a user-friendly, self-service manner.
 - There should not be any discrepancy in IaC between different environments (per `12-factor app` best practices), i.e. for any kind of environment, be it a development or a production one, the same set of IaC (here, Terraform files) should be used.
 - The core ideas and workflows established in the previous setup (implemented with Atlassian Bamboo) should be kept to make the migration smoother for the project members (also sometimes referred to simply as users).

## Implementation

### GitLab groups/projects structure

```text
📦 <GitLab root group>
 ┣ 📂 apps GitLab group
 ┃ ┣ 📃 app1 GitLab project
 ┃ ┣  ...
 ┃ ┗ 📃 appN GitLab project
 ┣ 📂 ci GitLab group
 ┃ ┣ 📃 library GitLab project
 ┃ ┗ 📃 oci-registry GitLab project
 ┗ 📂 infra GitLab group
  ┣ 📃 environment-blueprints GitLab project
  ┣ 📃 environment-set GitLab project
  ┗ 📃 k8s-gitops GitLab project
```

*Description*:

The most important content is in the `ci/library` repo (the shared CI configs) and the `environment-set` repo. The other repos don't require much attention: the `k8s-gitops` repo's purpose is not implemented and the repo is empty, the `apps` group just imitates source code for some apps, and `ci/oci-registry` serves as an OCI registry for the solution.

The `apps` GitLab group merely contains the apps' source code per se. Each GitLab project in this group corresponds to one app. Each app repo is expected to contain the source code itself (in the `src` directory, for example), a `k8s` directory with k8s manifests, and an OCI image specification file (traditionally called `Dockerfile`).

The `ci` GitLab group contains the `ci/library` project, which holds shared `.gitlab-ci.yaml` files used by other projects (in a manner similar to Jenkins' shared libraries), and the `ci/oci-registry` project, which serves as an OCI image registry for the various images used by the demo (it also contains a Git repository with gitlab-ci files to build some utility images with tools used in various pipelines). For simplicity, the latter stores all the images for all the projects of the demo, though that is clearly not the best choice for a real-life situation, where separate registries for separate sets of projects should be created.
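As a rough illustration of how an app project could consume the shared configs from `ci/library` (the `include:` keywords are GitLab's, but the ref and file name here are placeholders, not the demo's actual layout):

```yaml
# Hypothetical app .gitlab-ci.yml consuming shared CI configs from ci/library;
# the ref and template file name are placeholders.
include:
  - project: 'multi-component-app-root-group/ci/library'
    ref: main
    file: '/templates/build-and-deploy.yml'

variables:
  APP_NAME: app1
```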
The `infra` group holds the Git repositories related to creating the application infrastructure.

The `infra/k8s-gitops` repo is mostly irrelevant to the topic of this demo. In this demo it's presumed that Kubernetes is used as the computation workload platform, and when a k8s cluster is created for an environment, all the k8s manifests are supposed to be put into this repo (where each branch corresponds to a single environment) to be consumed by a GitOps tool installed into the cluster.

The `infra/environment-blueprints` repo holds parametrized IaC templates describing all the resources required for a full-fledged environment. In this example, Terraform is used as the IaC tool, though the principles are similar for its analogs (CloudFormation, for instance). The blueprints are parametrized in such a manner that their defaults hold sensible values (most likely set differently depending on the kind of environment they were used to bootstrap - for example, a production env versus everything else). It's implied that several versions of the blueprints may coexist (implemented with Git branches or Git tags), so each environment (see the next paragraph about `infra/environment-set`) can explicitly specify which version it wants to use (with Terraform, by specifying a Git reference in the module's `source` field).

Here I would once again like to highlight a digression from best practices. For simplicity, in the `infra/environment-blueprints` repo all the parts of an environment are combined into one single Terraform module (or a workspace, or a Stack in CloudFormation's terminology). That way, all the resources are always updated or changed within a single `terraform apply` command, which is cumbersome for large infrastructures containing a lot of resources. For larger infrastructures it would be more manageable to split into disparate Terraform modules (or CloudFormation Stacks, or Azure ARM Resource Groups) and thus make it possible for the infrastructure to be changed/updated in parts, according to which exact components of it have changed. This raises another question - how to manage dependencies between such parts? For that, we would use some kind of orchestration tool external to the IaC tool itself, like AWS Step Functions... or even GitLab's DAG feature!

Finally, the `infra/environment-set` project represents the actual expected state of resources for each environment (a branch corresponds to an environment). See the README.md file in the Git repo for details. In short, each branch here is meant to contain a `main.tf` file referring to some version of the blueprints in the `infra/environment-blueprints` project, a set of Terraform files with overrides for any default variables set in the blueprint modules, and other utility files, such as a list of users allowed to deploy to the environment (that list is checked by the deployment jobs in the apps' projects).

### **Important!**

While looking at the implementation, keep in mind that this solution deliberately omits some crucial aspects of any project infrastructure, like security and monitoring, just for the sake of keeping the solution manageable and comprehensible. Implementing the security and monitoring aspects would make the solution cumbersome and much longer to prepare. The same is true for the `k8s-gitops` repository - it's implied that in a real-life solution it would actively participate in the deployment process and hold the Kubernetes clusters' state in a GitOps approach, but currently this repo is just a placeholder.

In the practical guide later you will see a description of the process of controlling environments using different branches in the `infra/environment-set` project. Ideally, such a workflow should use Merge Requests, though for simplicity this implementation skips MRs.

Another thing that's possibly not clear in this solution is configuration management, i.e. how configuration settings unique to each environment are provided to the applications inside an environment. Given that our applications run within a Kubernetes cluster and that the cluster state is placed into a dedicated repo (`k8s-gitops` in our case), the configuration settings situation is simple: for each app, the Terraform files in `infra/environment-blueprints` should output all the relevant configuration values for the resources (like S3 bucket names, RDS endpoint URLs, etc.). Then, using Terraform itself or some other tool to create/update an environment, an additional step would collect all those outputs, transform them into k8s ConfigMap manifests, and put them into the GitOps repo.
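For example, such a generated manifest could be as simple as the following (the app name, namespace, keys, and values are invented placeholders, not output of the demo's actual scripts):

```yaml
# Hypothetical ConfigMap produced from Terraform outputs for one app in one env.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app1-config
  namespace: app1
data:
  S3_BUCKET_NAME: staging-team-a-app1-assets
  RDS_ENDPOINT: staging-team-a-db.internal.example.com:5432
```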
For the secrets, we can go several ways. The most simplistic way (though not flexible, and not easy for secret rotation) is to use some kind of encryption at rest, like Mozilla's SOPS, so that the secrets are encrypted when they are put into the GitOps repo and decrypted when deployed into K8s. Another (and better?) way is not to store secrets at rest at all, but to use either a third-party tool like HashiCorp Vault (with dynamic secrets generation) or cloud-native features like [AWS IAM Roles for Service Accounts](https://aws.amazon.com/blogs/containers/diving-into-iam-roles-for-service-accounts/).

## Bootstrap the demo

The accompanying repository, https://gitlab.com/iLychevAD/ci-cd-for-a-multi-component-app, contains Terraform files that let you install a copy of the demo structure into your own GitLab account to see it in action.

The `*.tf` files in the root directory and in the `tf_modules` directory describe the structure and configuration of the GitLab groups and projects. The `repo_content` directory holds the content for the Git repositories in those projects; the repositories are filled with those files by the Terraform scripts.

The demo was tested with GitLab Community Edition `15.0.0-pre revision 4bda1cc84df`. The Terraform scripts do not create any real resources but just imitate them using `null_resource` and `local-exec`.

The bootstrapping process is conducted inside a container (see the steps below), so it's platform-agnostic: in terms of tools, all you need to spin up the demo is some containerization engine installed on your PC (Docker, Podman, etc.).

**Steps**:

1. In the GitLab web UI, manually create a root group to bootstrap the demo into (see `root_gitlab_group.tf` for a link explaining why this can't be automated). Note its ID - you will need to provide it in the next step.

2. Clone this repository.
    Download the official HashiCorp Terraform image and enter its interactive shell.
    All the further commands are supposed to be performed inside that shell:

    ```shell
    docker run --rm -it --name ci-cd-for-a-multi-component-app \
      -e TF_VAR_gitlab_token=\<your GitLab account access token\> \
      -v \<path to a location where to store ssh key-pairs on your PC\>:/deploy-keys \
      -e TF_VAR_deploy_key_readwrite=/deploy-keys/ci-cd-for-a-multi-component-app-deploy-key.pub \
      -e TF_VAR_deploy_key_readonly=/deploy-keys/ci-cd-for-a-multi-component-app-deploy-key.pub \
      -e TF_VAR_root_gitlab_group_id=\<GitLab group ID\> \
      -v \<path to the directory where you cloned the project into\>:/repo -w /repo \
      --entrypoint /bin/sh \
      public.ecr.aws/hashicorp/terraform:1.1.9
    ```

    Explanation:

    `-e TF_VAR_gitlab_token=\<your GitLab account access token\>` - Terraform's `gitlab` provider needs a GitLab access token with sufficient permissions to spin up the demo. Provide it as a Bash environment variable, `TF_VAR_gitlab_token` (see `provider.tf`). It is also used by the `upload_avatar` module.

    `-v \<path to a location where to store ssh key-pairs on your PC\>:/deploy-keys` - on the left-hand side, specify a directory on your local PC where you would like to store the SSH keys needed for deploying the demo. This way they are persisted even if you exit the container. See step `4` for more details.

    `-e TF_VAR_deploy_key_readwrite=/deploy-keys/ci-cd-for-a-multi-component-app-deploy-key` and

    `-e TF_VAR_deploy_key_readonly=/deploy-keys/ci-cd-for-a-multi-component-app-deploy-key` - set the names for the aforementioned keys.

    `-v \<path to the directory where you cloned the project into\>:/repo -w /repo` - mounts the project content from your local PC into the running container. Note that because of this, the Terraform local state file will be stored inside that directory on your PC.

3. Install tools - bash and curl:

    ```shell
    apk add bash curl

    /bin/bash
    ```

4. Upon bootstrapping the demo, the repositories' content is pushed (i.e. restored) from the `repo_content` directory. (When the demo is destroyed, the content of the repositories is automatically pulled (i.e. saved) into the same directory - you probably don't need this, but I implemented it for my own convenience while creating the demo.) We need to create an SSH key pair, and it needs to be the same throughout both phases. In this step we generate it:

    ```shell
    ssh-keygen -t rsa -N '' -f /deploy-keys/ci-cd-for-a-multi-component-app-deploy-key <<< y
    ```

    ```shell
    chmod 0400 /deploy-keys/ci-cd-for-a-multi-component-app-deploy-key
    ```

    A trick used in `tf_modules/gitlab_project_with_restore_backup/main.tf` requires that the host section of the SSH public key specify the location of the private key (in a form like `filename@~/.ssh/<filename>`). Otherwise, `tf_modules/gitlab_project_with_restore_backup` won't work.
    Edit accordingly:

    ```shell
    sed -i -e 's|^\(ssh-rsa .*\) \(.*\)$|\1 ci-cd-for-a-multi-component-app-deploy-key@/deploy-keys/ci-cd-for-a-multi-component-app-deploy-key|' /deploy-keys/ci-cd-for-a-multi-component-app-deploy-key.pub
    ```

Now you can proceed with bootstrapping the demo using Terraform:

Initialize Terraform with `terraform init` so it installs all the providers.

Deploy the demo with `terraform apply`.

**Notice**: During Terraform execution you may see an error:

```text
Error: POST https://gitlab.com/api/v4/projects/multi-component-app-root-group/ci/library/deploy_keys: 400 {message: {deploy_key.fingerprint_sha256: [has already been taken]}}
```

I believe this is a glitch in the GitLab API. To fix it, just run `terraform apply` again until it shows no errors.

After that you should see the following structure in GitLab under the root group:

![gitlab_projects_tree](https://about.gitlab.com/images/blogimages/migration-from-atlassian-bamboo-server-to-gitlab-ci/gitlab_projects_tree.png)

All the projects should be filled with files from the `repo_content` directory.

Do not delete the directory with the cloned project or the files created inside it if you want to clean things up later. See the next section for instructions.

## Cleaning up

Launch a container the same way you did for bootstrapping the demo (see the previous section). It's assumed that you didn't delete any files in `<path to a location where to store ssh key-pairs on your PC>` or `<path to the directory where you cloned the project into>`:

```shell
docker run --rm -it --name ci-cd-for-a-multi-component-app \
  -e TF_VAR_gitlab_token=\<your GitLab account access token\> \
  -v \<path to a location where to store ssh key-pairs on your PC\>:/deploy-keys \
  -e TF_VAR_deploy_key_readwrite=/deploy-keys/ci-cd-for-a-multi-component-app-deploy-key.pub \
  -e TF_VAR_deploy_key_readonly=/deploy-keys/ci-cd-for-a-multi-component-app-deploy-key.pub \
  -e TF_VAR_root_gitlab_group_id=\<GitLab group ID\> \
  -v \<path to the directory where you cloned the project into\>:/repo -w /repo \
  --entrypoint /bin/sh \
  public.ecr.aws/hashicorp/terraform:1.1.9
```

Install curl:

```shell
apk add curl
```

Run `terraform destroy`.

**Notice**: You may see some errors regarding deleting the `oci-registry` project with OCI images.
In that case, just delete the images and remove the project manually, or wait for GitLab to do it itself later.

Now, if you want, you can remove the cloned project directory and the `<path to a location where to store ssh key-pairs on your PC>` directory.

If you would like to deploy the demo again without removing the directory with the cloned repo, don't forget to remove the files created during the previous demo deployment, namely the `terraform.tfstate` files in the root directory and the `.git` directories everywhere in the `repo_content` directory.

In the [second part](/blog/how-to-migrate-atlassians-bamboo-servers-ci-cd-infrastructure-to-gitlab-ci-part-two/) of this tutorial, we'll look at a real-world example of how this can work.
agent platform - product menu",{"text":115,"config":116},"Source Code Management",{"href":117,"dataGaLocation":46,"dataGaName":115},"/solutions/source-code-management/",{"text":119,"config":120},"Automated Software Delivery",{"href":105,"dataGaLocation":46,"dataGaName":121},"Automated software delivery",{"title":123,"description":124,"link":125,"items":130},"Security","Deliver code faster without compromising security",{"config":126},{"href":127,"dataGaName":128,"dataGaLocation":46,"icon":129},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[131,135,140],{"text":132,"config":133},"Application Security Testing",{"href":127,"dataGaName":134,"dataGaLocation":46},"Application security testing",{"text":136,"config":137},"Software Supply Chain Security",{"href":138,"dataGaLocation":46,"dataGaName":139},"/solutions/supply-chain/","Software supply chain security",{"text":141,"config":142},"Software Compliance",{"href":143,"dataGaName":144,"dataGaLocation":46},"/solutions/software-compliance/","software compliance",{"title":146,"link":147,"items":152},"Measurement",{"config":148},{"icon":149,"href":150,"dataGaName":151,"dataGaLocation":46},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[153,157,161],{"text":154,"config":155},"Visibility & Measurement",{"href":150,"dataGaLocation":46,"dataGaName":156},"Visibility and Measurement",{"text":158,"config":159},"Value Stream Management",{"href":160,"dataGaLocation":46,"dataGaName":158},"/solutions/value-stream-management/",{"text":162,"config":163},"Analytics & Insights",{"href":164,"dataGaLocation":46,"dataGaName":165},"/solutions/analytics-and-insights/","Analytics and insights",{"title":167,"items":168},"GitLab for",[169,174,179],{"text":170,"config":171},"Enterprise",{"href":172,"dataGaLocation":46,"dataGaName":173},"/enterprise/","enterprise",{"text":175,"config":176},"Small Business",{"href":177,"dataGaLocation":46,"dataGaName":178},"/small-business/","small business",{"text":180,"config":181},"Public Sector",{"href":182,"dataGaLocation":46,"dataGaName":183},"/solutions/public-sector/","public sector",{"text":185,"config":186},"Pricing",{"href":187,"dataGaName":188,"dataGaLocation":46,"dataNavLevelOne":188},"/pricing/","pricing",{"text":190,"config":191,"link":193,"lists":197,"feature":277},"Resources",{"dataNavLevelOne":192},"resources",{"text":194,"config":195},"View all resources",{"href":196,"dataGaName":192,"dataGaLocation":46},"/resources/",[198,231,249],{"title":199,"items":200},"Getting started",[201,206,211,216,221,226],{"text":202,"config":203},"Install",{"href":204,"dataGaName":205,"dataGaLocation":46},"/install/","install",{"text":207,"config":208},"Quick start guides",{"href":209,"dataGaName":210,"dataGaLocation":46},"/get-started/","quick setup checklists",{"text":212,"config":213},"Learn",{"href":214,"dataGaLocation":46,"dataGaName":215},"https://university.gitlab.com/","learn",{"text":217,"config":218},"Product documentation",{"href":219,"dataGaName":220,"dataGaLocation":46},"https://docs.gitlab.com/","product documentation",{"text":222,"config":223},"Best practice videos",{"href":224,"dataGaName":225,"dataGaLocation":46},"/getting-started-videos/","best practice videos",{"text":227,"config":228},"Integrations",{"href":229,"dataGaName":230,"dataGaLocation":46},"/integrations/","integrations",{"title":232,"items":233},"Discover",[234,239,244],{"text":235,"config":236},"Customer success 
stories",{"href":237,"dataGaName":238,"dataGaLocation":46},"/customers/","customer success stories",{"text":240,"config":241},"Blog",{"href":242,"dataGaName":243,"dataGaLocation":46},"/blog/","blog",{"text":245,"config":246},"Remote",{"href":247,"dataGaName":248,"dataGaLocation":46},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":250,"items":251},"Connect",[252,257,262,267,272],{"text":253,"config":254},"GitLab Services",{"href":255,"dataGaName":256,"dataGaLocation":46},"/services/","services",{"text":258,"config":259},"Community",{"href":260,"dataGaName":261,"dataGaLocation":46},"/community/","community",{"text":263,"config":264},"Forum",{"href":265,"dataGaName":266,"dataGaLocation":46},"https://forum.gitlab.com/","forum",{"text":268,"config":269},"Events",{"href":270,"dataGaName":271,"dataGaLocation":46},"/events/","events",{"text":273,"config":274},"Partners",{"href":275,"dataGaName":276,"dataGaLocation":46},"/partners/","partners",{"backgroundColor":278,"textColor":279,"text":280,"image":281,"link":285},"#2f2a6b","#fff","Insights for the future of software development",{"altText":282,"config":283},"the source promo card",{"src":284},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":286,"config":287},"Read the latest",{"href":288,"dataGaName":289,"dataGaLocation":46},"/the-source/","the source",{"text":291,"config":292,"lists":294},"Company",{"dataNavLevelOne":293},"company",[295],{"items":296},[297,302,308,310,315,320,325,330,335,340,345],{"text":298,"config":299},"About",{"href":300,"dataGaName":301,"dataGaLocation":46},"/company/","about",{"text":303,"config":304,"footerGa":307},"Jobs",{"href":305,"dataGaName":306,"dataGaLocation":46},"/jobs/","jobs",{"dataGaName":306},{"text":268,"config":309},{"href":270,"dataGaName":271,"dataGaLocation":46},{"text":311,"config":312},"Leadership",{"href":313,"dataGaName":314,"dataGaLocation":46},"/company/team/e-group/","leadership",{"text":316,"config":317},"Team",{"href":318,"dataGaName":319,"dataGaLocation":46},"/company/team/","team",{"text":321,"config":322},"Handbook",{"href":323,"dataGaName":324,"dataGaLocation":46},"https://handbook.gitlab.com/","handbook",{"text":326,"config":327},"Investor relations",{"href":328,"dataGaName":329,"dataGaLocation":46},"https://ir.gitlab.com/","investor relations",{"text":331,"config":332},"Trust Center",{"href":333,"dataGaName":334,"dataGaLocation":46},"/security/","trust center",{"text":336,"config":337},"AI Transparency Center",{"href":338,"dataGaName":339,"dataGaLocation":46},"/ai-transparency-center/","ai transparency center",{"text":341,"config":342},"Newsletter",{"href":343,"dataGaName":344,"dataGaLocation":46},"/company/contact/#contact-forms","newsletter",{"text":346,"config":347},"Press",{"href":348,"dataGaName":349,"dataGaLocation":46},"/press/","press",{"text":351,"config":352,"lists":353},"Contact us",{"dataNavLevelOne":293},[354],{"items":355},[356,359,364],{"text":53,"config":357},{"href":55,"dataGaName":358,"dataGaLocation":46},"talk to sales",{"text":360,"config":361},"Support portal",{"href":362,"dataGaName":363,"dataGaLocation":46},"https://support.gitlab.com","support portal",{"text":365,"config":366},"Customer portal",{"href":367,"dataGaName":368,"dataGaLocation":46},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":370,"login":371,"suggestions":378},"Close",{"text":372,"link":373},"To search repositories and projects, login 
to",{"text":374,"config":375},"gitlab.com",{"href":60,"dataGaName":376,"dataGaLocation":377},"search login","search",{"text":379,"default":380},"Suggestions",[381,383,387,389,393,397],{"text":75,"config":382},{"href":80,"dataGaName":75,"dataGaLocation":377},{"text":384,"config":385},"Code Suggestions (AI)",{"href":386,"dataGaName":384,"dataGaLocation":377},"/solutions/code-suggestions/",{"text":23,"config":388},{"href":110,"dataGaName":23,"dataGaLocation":377},{"text":390,"config":391},"GitLab on AWS",{"href":392,"dataGaName":390,"dataGaLocation":377},"/partners/technology-partners/aws/",{"text":394,"config":395},"GitLab on Google Cloud",{"href":396,"dataGaName":394,"dataGaLocation":377},"/partners/technology-partners/google-cloud-platform/",{"text":398,"config":399},"Why GitLab?",{"href":88,"dataGaName":398,"dataGaLocation":377},{"freeTrial":401,"mobileIcon":406,"desktopIcon":411,"secondaryButton":414},{"text":402,"config":403},"Start free trial",{"href":404,"dataGaName":51,"dataGaLocation":405},"https://gitlab.com/-/trials/new/","nav",{"altText":407,"config":408},"Gitlab Icon",{"src":409,"dataGaName":410,"dataGaLocation":405},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":407,"config":412},{"src":413,"dataGaName":410,"dataGaLocation":405},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":415,"config":416},"Get Started",{"href":417,"dataGaName":418,"dataGaLocation":405},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/get-started/","get started",{"freeTrial":420,"mobileIcon":424,"desktopIcon":426},{"text":421,"config":422},"Learn more about GitLab Duo",{"href":80,"dataGaName":423,"dataGaLocation":405},"gitlab duo",{"altText":407,"config":425},{"src":409,"dataGaName":410,"dataGaLocation":405},{"altText":407,"config":427},{"src":413,"dataGaName":410,"dataGaLocation":405},{"button":429,"mobileIcon":434,"desktopIcon":436},{"text":430,"config":431},"/switch",{"href":432,"dataGaName":433,"dataGaLocation":405},"#contact","switch",{"altText":407,"config":435},{"src":409,"dataGaName":410,"dataGaLocation":405},{"altText":407,"config":437},{"src":438,"dataGaName":410,"dataGaLocation":405},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1773335277/ohhpiuoxoldryzrnhfrh.png",{"freeTrial":440,"mobileIcon":445,"desktopIcon":447},{"text":441,"config":442},"Back to pricing",{"href":187,"dataGaName":443,"dataGaLocation":405,"icon":444},"back to pricing","GoBack",{"altText":407,"config":446},{"src":409,"dataGaName":410,"dataGaLocation":405},{"altText":407,"config":448},{"src":413,"dataGaName":410,"dataGaLocation":405},{"title":450,"button":451,"config":456},"See how agentic AI transforms software delivery",{"text":452,"config":453},"Watch GitLab Transcend now",{"href":454,"dataGaName":455,"dataGaLocation":46},"/events/transcend/virtual/","transcend event",{"layout":457,"icon":458,"disabled":28},"release","AiStar",{"data":460},{"text":461,"source":462,"edit":468,"contribute":473,"config":478,"items":483,"minimal":687},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":463,"config":464},"View page source",{"href":465,"dataGaName":466,"dataGaLocation":467},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":469,"config":470},"Edit this 
page",{"href":471,"dataGaName":472,"dataGaLocation":467},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":474,"config":475},"Please contribute",{"href":476,"dataGaName":477,"dataGaLocation":467},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":479,"facebook":480,"youtube":481,"linkedin":482},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[484,531,582,626,653],{"title":185,"links":485,"subMenu":500},[486,490,495],{"text":487,"config":488},"View plans",{"href":187,"dataGaName":489,"dataGaLocation":467},"view plans",{"text":491,"config":492},"Why Premium?",{"href":493,"dataGaName":494,"dataGaLocation":467},"/pricing/premium/","why premium",{"text":496,"config":497},"Why Ultimate?",{"href":498,"dataGaName":499,"dataGaLocation":467},"/pricing/ultimate/","why ultimate",[501],{"title":502,"links":503},"Contact Us",[504,507,509,511,516,521,526],{"text":505,"config":506},"Contact sales",{"href":55,"dataGaName":56,"dataGaLocation":467},{"text":360,"config":508},{"href":362,"dataGaName":363,"dataGaLocation":467},{"text":365,"config":510},{"href":367,"dataGaName":368,"dataGaLocation":467},{"text":512,"config":513},"Status",{"href":514,"dataGaName":515,"dataGaLocation":467},"https://status.gitlab.com/","status",{"text":517,"config":518},"Terms of use",{"href":519,"dataGaName":520,"dataGaLocation":467},"/terms/","terms of use",{"text":522,"config":523},"Privacy statement",{"href":524,"dataGaName":525,"dataGaLocation":467},"/privacy/","privacy statement",{"text":527,"config":528},"Cookie preferences",{"dataGaName":529,"dataGaLocation":467,"id":530,"isOneTrustButton":28},"cookie preferences","ot-sdk-btn",{"title":91,"links":532,"subMenu":541},[533,537],{"text":534,"config":535},"DevSecOps platform",{"href":73,"dataGaName":536,"dataGaLocation":467},"devsecops platform",{"text":538,"config":539},"AI-Assisted Development",{"href":80,"dataGaName":540,"dataGaLocation":467},"ai-assisted development",[542],{"title":543,"links":544},"Topics",[545,549,554,557,562,567,572,577],{"text":546,"config":547},"CICD",{"href":548,"dataGaName":37,"dataGaLocation":467},"/topics/ci-cd/",{"text":550,"config":551},"GitOps",{"href":552,"dataGaName":553,"dataGaLocation":467},"/topics/gitops/","gitops",{"text":24,"config":555},{"href":556,"dataGaName":38,"dataGaLocation":467},"/topics/devops/",{"text":558,"config":559},"Version Control",{"href":560,"dataGaName":561,"dataGaLocation":467},"/topics/version-control/","version control",{"text":563,"config":564},"DevSecOps",{"href":565,"dataGaName":566,"dataGaLocation":467},"/topics/devsecops/","devsecops",{"text":568,"config":569},"Cloud Native",{"href":570,"dataGaName":571,"dataGaLocation":467},"/topics/cloud-native/","cloud native",{"text":573,"config":574},"AI for Coding",{"href":575,"dataGaName":576,"dataGaLocation":467},"/topics/devops/ai-for-coding/","ai for coding",{"text":578,"config":579},"Agentic AI",{"href":580,"dataGaName":581,"dataGaLocation":467},"/topics/agentic-ai/","agentic ai",{"title":583,"links":584},"Solutions",[585,587,589,594,598,601,605,608,610,613,616,621],{"text":132,"config":586},{"href":127,"dataGaName":132,"dataGaLocation":467},{"text":121,"config":588},{"href":105,"dataGaName":106,"dataGaLocation":467},{"text":590,"config":591},"Agile 
development",{"href":592,"dataGaName":593,"dataGaLocation":467},"/solutions/agile-delivery/","agile delivery",{"text":595,"config":596},"SCM",{"href":117,"dataGaName":597,"dataGaLocation":467},"source code management",{"text":546,"config":599},{"href":110,"dataGaName":600,"dataGaLocation":467},"continuous integration & delivery",{"text":602,"config":603},"Value stream management",{"href":160,"dataGaName":604,"dataGaLocation":467},"value stream management",{"text":550,"config":606},{"href":607,"dataGaName":553,"dataGaLocation":467},"/solutions/gitops/",{"text":170,"config":609},{"href":172,"dataGaName":173,"dataGaLocation":467},{"text":611,"config":612},"Small business",{"href":177,"dataGaName":178,"dataGaLocation":467},{"text":614,"config":615},"Public sector",{"href":182,"dataGaName":183,"dataGaLocation":467},{"text":617,"config":618},"Education",{"href":619,"dataGaName":620,"dataGaLocation":467},"/solutions/education/","education",{"text":622,"config":623},"Financial services",{"href":624,"dataGaName":625,"dataGaLocation":467},"/solutions/finance/","financial services",{"title":190,"links":627},[628,630,632,634,637,639,641,643,645,647,649,651],{"text":202,"config":629},{"href":204,"dataGaName":205,"dataGaLocation":467},{"text":207,"config":631},{"href":209,"dataGaName":210,"dataGaLocation":467},{"text":212,"config":633},{"href":214,"dataGaName":215,"dataGaLocation":467},{"text":217,"config":635},{"href":219,"dataGaName":636,"dataGaLocation":467},"docs",{"text":240,"config":638},{"href":242,"dataGaName":243,"dataGaLocation":467},{"text":235,"config":640},{"href":237,"dataGaName":238,"dataGaLocation":467},{"text":245,"config":642},{"href":247,"dataGaName":248,"dataGaLocation":467},{"text":253,"config":644},{"href":255,"dataGaName":256,"dataGaLocation":467},{"text":258,"config":646},{"href":260,"dataGaName":261,"dataGaLocation":467},{"text":263,"config":648},{"href":265,"dataGaName":266,"dataGaLocation":467},{"text":268,"config":650},{"href":270,"dataGaName":271,"dataGaLocation":467},{"text":273,"config":652},{"href":275,"dataGaName":276,"dataGaLocation":467},{"title":291,"links":654},[655,657,659,661,663,665,667,671,676,678,680,682],{"text":298,"config":656},{"href":300,"dataGaName":293,"dataGaLocation":467},{"text":303,"config":658},{"href":305,"dataGaName":306,"dataGaLocation":467},{"text":311,"config":660},{"href":313,"dataGaName":314,"dataGaLocation":467},{"text":316,"config":662},{"href":318,"dataGaName":319,"dataGaLocation":467},{"text":321,"config":664},{"href":323,"dataGaName":324,"dataGaLocation":467},{"text":326,"config":666},{"href":328,"dataGaName":329,"dataGaLocation":467},{"text":668,"config":669},"Sustainability",{"href":670,"dataGaName":668,"dataGaLocation":467},"/sustainability/",{"text":672,"config":673},"Diversity, inclusion and belonging (DIB)",{"href":674,"dataGaName":675,"dataGaLocation":467},"/diversity-inclusion-belonging/","Diversity, inclusion and belonging",{"text":331,"config":677},{"href":333,"dataGaName":334,"dataGaLocation":467},{"text":341,"config":679},{"href":343,"dataGaName":344,"dataGaLocation":467},{"text":346,"config":681},{"href":348,"dataGaName":349,"dataGaLocation":467},{"text":683,"config":684},"Modern Slavery Transparency Statement",{"href":685,"dataGaName":686,"dataGaLocation":467},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency 
statement",{"items":688},[689,692,695],{"text":690,"config":691},"Terms",{"href":519,"dataGaName":520,"dataGaLocation":467},{"text":693,"config":694},"Cookies",{"dataGaName":529,"dataGaLocation":467,"id":530,"isOneTrustButton":28},{"text":696,"config":697},"Privacy",{"href":524,"dataGaName":525,"dataGaLocation":467},[699],{"id":700,"title":18,"body":8,"config":701,"content":703,"description":8,"extension":26,"meta":707,"navigation":28,"path":708,"seo":709,"stem":710,"__hash__":711},"blogAuthors/en-us/blog/authors/ivan-lychev.yml",{"template":702},"BlogAuthor",{"name":18,"config":704},{"headshot":705,"ctfId":706},"","iLychevAD",{},"/en-us/blog/authors/ivan-lychev",{},"en-us/blog/authors/ivan-lychev","9r8u9KqOTqtu_73UkGz5wcH8ZwY-WyHEIE1ulqCYllQ",[713,727,740],{"content":714,"config":725},{"body":715,"title":716,"description":717,"authors":718,"heroImage":720,"date":721,"category":9,"tags":722},"Most CI/CD tools can run a build and ship a deployment. Where they diverge is what happens when your delivery needs get real: a monorepo with a dozen services, microservices spread across multiple repositories, deployments to dozens of environments, or a platform team trying to enforce standards without becoming a bottleneck.\n  \nGitLab's pipeline execution model was designed for that complexity. Parent-child pipelines, DAG execution, dynamic pipeline generation, multi-project triggers, merge request pipelines with merged results, and CI/CD Components each solve a distinct class of problems. Because they compose, understanding the full model unlocks something more than a faster pipeline. In this article, you'll learn about the five patterns where that model stands out, each mapped to a real engineering scenario with the configuration to match.\n  \nThe configs below are illustrative. The scripts use echo commands to keep the signal-to-noise ratio low. Swap them out for your actual build, test, and deploy steps and they are ready to use.\n\n\n## 1. Monorepos: Parent-child pipelines + DAG execution\n\n\nThe problem: Your monorepo has a frontend, a backend, and a docs site. Every commit triggers a full rebuild of everything, even when only a README changed.\n\n\nGitLab solves this with two complementary features: [parent-child pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#parent-child-pipelines) (which let a top-level pipeline spawn isolated sub-pipelines) and [DAG execution via `needs`](https://docs.gitlab.com/ci/yaml/#needs) (which breaks rigid stage-by-stage ordering and lets jobs start the moment their dependencies finish).\n\n\nA parent pipeline detects what changed and triggers only the relevant child pipelines:\n\n```yaml\n# .gitlab-ci.yml\nstages:\n  - trigger\n\ntrigger-services:\n  stage: trigger\n  trigger:\n    include:\n      - local: '.gitlab/ci/api-service.yml'\n      - local: '.gitlab/ci/web-service.yml'\n      - local: '.gitlab/ci/worker-service.yml'\n    strategy: depend\n```\n\n\nEach child pipeline is a fully independent pipeline with its own stages, jobs, and artifacts. The parent waits for all of them via [strategy: depend](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#wait-for-downstream-pipeline-to-complete) so you get a single green/red signal at the top level, with full drill-down into each service's pipeline. 
This organizational separation is the bigger win for large teams: each service owns its pipeline config, changes in one cannot break another, and the complexity stays manageable as the repo grows.

One thing worth knowing: when you pass [multiple files to a single `trigger: include:`](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#combine-multiple-child-pipeline-configuration-files), GitLab merges them into a single child pipeline configuration. This means jobs defined across those files share the same pipeline context and can reference each other with `needs:`, which is what makes the DAG optimization possible. If you split them into separate trigger jobs instead, each would be its own isolated pipeline and cross-file `needs:` references would not work.

Combine this with `needs:` inside each child pipeline and you get DAG execution. Your integration tests can start the moment the build finishes, without waiting for other jobs in the same stage.

```yaml
# .gitlab/ci/api-service.yml
stages:
  - build
  - test

build-api:
  stage: build
  script:
    - echo "Building API service"

test-api:
  stage: test
  needs: [build-api]
  script:
    - echo "Running API tests"
```

Why it matters: Teams with large monorepos typically report significant reductions in pipeline runtime after switching to DAG execution, since jobs no longer wait on unrelated work in the same stage. Parent-child pipelines add the organizational layer that keeps the configuration maintainable as the repo and team grow.

![Local downstream pipelines](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738759/Blog/Imported/hackathon-fake-blog-post-s/image3_vwj3rz.png "Local downstream pipelines")

## 2. Microservices: Cross-repo, multi-project pipelines

The problem: Your frontend lives in one repo, your backend in another. When the frontend team ships a change, they have no visibility into whether it broke the backend integration and vice versa.

GitLab's [multi-project pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#multi-project-pipelines) let one project trigger a pipeline in a completely separate project and wait for the result. The triggering project gets a linked downstream pipeline right in its own pipeline view.

The frontend pipeline builds an API contract artifact and publishes it, then triggers the backend pipeline. The backend fetches that artifact directly using the [Jobs API](https://docs.gitlab.com/ee/api/jobs.html#download-a-single-artifact-file-from-specific-tag-or-branch) and validates it before allowing anything to proceed.
If a breaking change is detected, the backend pipeline fails and the frontend pipeline fails with it.

```yaml
# frontend repo: .gitlab-ci.yml
stages:
  - build
  - test
  - trigger-backend

build-frontend:
  stage: build
  script:
    - echo "Building frontend and generating API contract..."
    - mkdir -p dist
    - |
      echo '{
        "api_version": "v2",
        "breaking_changes": false
      }' > dist/api-contract.json
    - cat dist/api-contract.json
  artifacts:
    paths:
      - dist/api-contract.json
    expire_in: 1 hour

test-frontend:
  stage: test
  script:
    - echo "All frontend tests passed!"

trigger-backend-pipeline:
  stage: trigger-backend
  trigger:
    project: my-org/backend-service
    branch: main
    strategy: depend
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

```yaml
# backend repo: .gitlab-ci.yml
stages:
  - build
  - test

build-backend:
  stage: build
  script:
    - echo "Building backend service"

integration-test:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "pipeline"
  script:
    - echo "Fetching API contract from frontend..."
    - |
      curl --silent --fail \
        --header "JOB-TOKEN: $CI_JOB_TOKEN" \
        --output api-contract.json \
        "${CI_API_V4_URL}/projects/${FRONTEND_PROJECT_ID}/jobs/artifacts/main/raw/dist/api-contract.json?job=build-frontend"
    - cat api-contract.json
    - |
      if grep -q '"breaking_changes": true' api-contract.json; then
        echo "FAIL: Breaking API changes detected - backend integration blocked!"
        exit 1
      fi
      echo "PASS: API contract is compatible!"
```

A few things worth noting in this config. The `integration-test` job uses `$CI_PIPELINE_SOURCE == "pipeline"` to ensure it only runs when triggered by an upstream pipeline, not on a standalone push to the backend repo. The frontend project ID is referenced via `$FRONTEND_PROJECT_ID`, which should be set as a [CI/CD variable](https://docs.gitlab.com/ci/variables/) in the backend project settings to avoid hardcoding it.

Why it matters: Cross-service breakage that previously surfaced in production gets caught in the pipeline instead. The dependency between services stops being invisible and becomes something teams can see, track, and act on.

![Cross-project pipelines](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738762/Blog/Imported/hackathon-fake-blog-post-s/image4_h6mfsb.png "Cross-project pipelines")

## 3. Multi-tenant / matrix deployments: Dynamic child pipelines

The problem: You deploy the same application to 15 customer environments, or three cloud regions, or dev/staging/prod. Updating a deploy stage across all of them one by one is the kind of work that leads to configuration drift. Writing a separate pipeline for each environment is unmaintainable from day one.

GitLab's [dynamic child pipelines](https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#dynamic-child-pipelines) let you generate a pipeline at runtime. A job runs a script that produces a YAML file, and that YAML becomes the pipeline for the next stage.
The pipeline structure itself becomes data.

```yaml
# .gitlab-ci.yml
stages:
  - generate
  - trigger-environments

generate-config:
  stage: generate
  script:
    - |
      # ENVIRONMENTS can be passed as a CI variable or read from a config file.
      # Default to dev, staging, prod if not set.
      ENVIRONMENTS=${ENVIRONMENTS:-"dev staging prod"}
      for ENV in $ENVIRONMENTS; do
        cat > ${ENV}-pipeline.yml << EOF
      stages:
        - deploy
        - verify
      deploy-${ENV}:
        stage: deploy
        script:
          - echo "Deploying to ${ENV} environment"
      verify-${ENV}:
        stage: verify
        script:
          - echo "Running smoke tests on ${ENV}"
      EOF
      done
  artifacts:
    paths:
      - "*.yml"
    exclude:
      - ".gitlab-ci.yml"

.trigger-template:
  stage: trigger-environments
  trigger:
    strategy: depend

trigger-dev:
  extends: .trigger-template
  trigger:
    include:
      - artifact: dev-pipeline.yml
        job: generate-config

trigger-staging:
  extends: .trigger-template
  needs: [trigger-dev]
  trigger:
    include:
      - artifact: staging-pipeline.yml
        job: generate-config

trigger-prod:
  extends: .trigger-template
  needs: [trigger-staging]
  trigger:
    include:
      - artifact: prod-pipeline.yml
        job: generate-config
  when: manual
```

The generation script loops over an `ENVIRONMENTS` variable rather than hardcoding each environment separately. Pass in a different list via a CI variable or read it from a config file and the pipeline adapts without touching the YAML. The trigger jobs use [extends:](https://docs.gitlab.com/ci/yaml/#extends) to inherit shared configuration from `.trigger-template`, so `strategy: depend` is defined once rather than repeated on every trigger job. Add a new environment by updating the variable, not by duplicating pipeline config. Add [when: manual](https://docs.gitlab.com/ci/yaml/#when) to the production trigger and you get a promotion gate baked right into the pipeline graph.

Why it matters: SaaS companies and platform teams use this pattern to manage dozens of environments without duplicating pipeline logic. The pipeline structure itself stays lean as the deployment matrix grows.

![Dynamic pipeline](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738765/Blog/Imported/hackathon-fake-blog-post-s/image7_wr0kx2.png "Dynamic pipeline")

## 4. MR-first delivery: Merge request pipelines, merged results, and workflow routing

The problem: Your pipeline runs on every push to every branch. Expensive tests run on feature branches that will never merge. Meanwhile, you have no guarantee that what you tested is actually what will land on `main` after a merge.

GitLab has three interlocking features that solve this together:

*   [Merge request pipelines](https://docs.gitlab.com/ci/pipelines/merge_request_pipelines/) run only when a merge request exists, not on every branch push. This alone eliminates a significant amount of wasted compute.

*   [Merged results pipelines](https://docs.gitlab.com/ci/pipelines/merged_results_pipelines/) go further. GitLab creates a temporary merge commit (your branch plus the current target branch) and runs the pipeline against that.
You are testing what will actually exist after the merge, not just your branch in isolation.\n\n*   [Workflow rules](https://docs.gitlab.com/ci/yaml/workflow/) let you define exactly which pipeline type runs under which conditions and suppress everything else. The `$CI_OPEN_MERGE_REQUESTS` guard below prevents duplicate pipelines firing for both a branch and its open MR simultaneously.\n\n\nWith those three working together, here is what a tiered pipeline looks like:\n\n```yaml\n# .gitlab-ci.yml\nworkflow:\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS\n      when: never\n    - if: $CI_COMMIT_BRANCH\n    - if: $CI_PIPELINE_SOURCE == \"schedule\"\n\nstages:\n  - fast-checks\n  - expensive-tests\n  - deploy\n\nlint-code:\n  stage: fast-checks\n  script:\n    - echo \"Running linter\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"push\"\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nunit-tests:\n  stage: fast-checks\n  script:\n    - echo \"Running unit tests\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"push\"\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nintegration-tests:\n  stage: expensive-tests\n  script:\n    - echo \"Running integration tests (15 min)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\ne2e-tests:\n  stage: expensive-tests\n  script:\n    - echo \"Running E2E tests (30 min)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"merge_request_event\"\n    - if: $CI_COMMIT_BRANCH == \"main\"\n\nnightly-comprehensive-scan:\n  stage: expensive-tests\n  script:\n    - echo \"Running full nightly suite (2 hours)\"\n  rules:\n    - if: $CI_PIPELINE_SOURCE == \"schedule\"\n\ndeploy-production:\n  stage: deploy\n  script:\n    - echo \"Deploying to production\"\n  rules:\n    - if: $CI_COMMIT_BRANCH == \"main\"\n      when: manual\n```\n\nWith this setup, the pipeline behaves differently depending on context. A push to a feature branch with no open MR runs lint and unit tests only. Once an MR is opened, the workflow rules switch from a branch pipeline to an MR pipeline, and the full integration and E2E suite runs against the merged result. Merging to `main` queues a manual production deployment. A nightly schedule runs the comprehensive scan once, not on every commit.\n\n\nWhy it matters: Teams routinely cut CI costs significantly with this pattern, not by running fewer tests, but by running the right tests at the right time. Merged results pipelines catch the class of bugs that only appear after a merge, before they ever reach `main`.\n\n\n![Conditional pipelines (within a branch with no MR)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738768/Blog/Imported/hackathon-fake-blog-post-s/image6_dnfcny.png \"Conditional pipelines (within a branch with no MR)\")\n\n\n\n![Conditional pipelines (within an MR)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738772/Blog/Imported/hackathon-fake-blog-post-s/image1_wyiafu.png \"Conditional pipelines (within an MR)\")\n\n\n\n![Conditional pipelines (on the main branch)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738774/Blog/Imported/hackathon-fake-blog-post-s/image5_r6lkfd.png \"Conditional pipelines (on the main branch)\")\n\n## 5. 
Governed pipelines: CI/CD Components\n\n\nThe problem: Your platform team has defined the right way to build, test, and deploy. But every team has their own `.gitlab-ci.yml` with subtle variations. Security scanning gets skipped. Deployment standards drift. Audits are painful.\n\n\nGitLab [CI/CD Components](https://docs.gitlab.com/ci/components/) let platform teams publish versioned, reusable pipeline building blocks. Application teams consume them with a single `include:` line and optional inputs — no copy-paste, no drift. Components are discoverable through the [CI/CD Catalog](https://docs.gitlab.com/ci/components/#cicd-catalog), which means teams can find and adopt approved building blocks without needing to go through the platform team directly.\n\n\nHere is a component definition from a shared library:\n\n```yaml\n# templates/deploy.yml\nspec:\n  inputs:\n    stage:\n      default: deploy\n    environment:\n      default: production\n---\ndeploy-job:\n  stage: $[[ inputs.stage ]]\n  script:\n    - echo \"Deploying $APP_NAME to $[[ inputs.environment ]]\"\n    - echo \"Deploy URL: $DEPLOY_URL\"\n  environment:\n    name: $[[ inputs.environment ]]\n```\nAnd here is how an application team consumes it:\n\n```yaml\n# Application repo: .gitlab-ci.yml\nvariables:\n  APP_NAME: \"my-awesome-app\"\n  DEPLOY_URL: \"https://api.example.com\"\n\ninclude:\n  - component: gitlab.com/my-org/component-library/build@v1.0.6\n  - component: gitlab.com/my-org/component-library/test@v1.0.6\n  - component: gitlab.com/my-org/component-library/deploy@v1.0.6\n    inputs:\n      environment: staging\n\nstages:\n  - build\n  - test\n  - deploy\n```\n\nThree lines of `include:` replace hundreds of lines of duplicated YAML. The platform team can push a security fix to `v1.0.7` and teams opt in on their own schedule — or the platform team can pin everyone to a minimum version. Either way, one change propagates everywhere instead of needing to be applied repo by repo.\n\n\nPair this with [resource groups](https://docs.gitlab.com/ci/resource_groups/) to prevent concurrent deployments to the same environment, and [protected environments](https://docs.gitlab.com/ci/environments/protected_environments/) to enforce approval gates - and you have a governed delivery platform where compliance is the default, not the exception.\n\n\nWhy it matters: This is the pattern that makes GitLab CI/CD scale across hundreds of teams. Platform engineering teams enforce compliance without becoming a bottleneck. Application teams get a fast path to a working pipeline without reinventing the wheel.\n\n\n![Component pipeline (imported jobs)](https://res.cloudinary.com/about-gitlab-com/image/upload/v1775738776/Blog/Imported/hackathon-fake-blog-post-s/image2_pizuxd.png \"Component pipeline (imported jobs)\")\n\n## Putting it all together\n\nNone of these features exist in isolation. The reason GitLab's pipeline model is worth understanding deeply is that these primitives compose:\n\n*   A monorepo uses parent-child pipelines, and each child uses DAG execution\n\n*   A microservices platform uses multi-project pipelines, and each project uses MR pipelines with merged results\n\n*   A governed platform uses CI/CD components to standardize the patterns above across every team\n\n\nMost teams discover one of these features when they hit a specific pain point. 
The ones who invest in understanding the full model end up with a delivery system that actually reflects how their engineering organization works, not a pipeline that fights it.\n\n## Other patterns worth exploring\n\n\nThe five patterns above cover the most common structural pain points, but GitLab's pipeline model goes further. A few others worth looking into as your needs grow:\n\n\n*   [Review apps with dynamic environments](https://docs.gitlab.com/ci/environments/) let you spin up a live preview for every feature branch and tear it down automatically when the MR closes. Useful for teams doing frontend work or API changes that need stakeholder sign-off before merging.\n\n*   [Caching and artifact strategies](https://docs.gitlab.com/ci/caching/) are often the fastest way to cut pipeline runtime after the structural work is done. Structuring `cache:` keys around dependency lockfiles and being deliberate about what gets passed between jobs with [artifacts:](https://docs.gitlab.com/ci/yaml/#artifacts) can make a significant difference without changing your pipeline shape at all.\n\n*   [Scheduled and API-triggered pipelines](https://docs.gitlab.com/ci/pipelines/schedules/) are worth knowing about because not everything should run on a code push. Nightly security scans, compliance reports, and release automation are better modeled as scheduled or [API-triggered](https://docs.gitlab.com/ci/triggers/) pipelines with `$CI_PIPELINE_SOURCE` routing the right jobs for each context.\n\n## How to get started\n\nModern software delivery is complex. Teams are managing monorepos with dozens of services, coordinating across multiple repositories, deploying to many environments at once, and trying to keep standards consistent as organizations grow. GitLab's pipeline model was built with all of that in mind.\n\nWhat makes it worth investing time in is how well the pieces fit together. Parent-child pipelines bring structure to large codebases. Multi-project pipelines make cross-team dependencies visible and testable. Dynamic pipelines turn environment management into something that scales gracefully. MR-first delivery with merged results ensures confidence at every step of the review process. And CI/CD Components give platform teams a way to share best practices across an entire organization without becoming a bottleneck.\n\nEach of these features is powerful on its own, and even more so when combined. 
GitLab gives you the building blocks to design a delivery system that fits how your team actually works, and grows with you as your needs evolve.\n\n> [Start a free trial of GitLab Ultimate](https://about.gitlab.com/free-trial/) to use pipeline logic today.\n\n## Read more\n\n*   [Variable and artifact sharing in GitLab parent-child pipelines](https://about.gitlab.com/blog/variable-and-artifact-sharing-in-gitlab-parent-child-pipelines/)\n*   [CI/CD inputs: Secure and preferred method to pass parameters to a pipeline](https://about.gitlab.com/blog/ci-cd-inputs-secure-and-preferred-method-to-pass-parameters-to-a-pipeline/)\n*   [Tutorial: How to set up your first GitLab CI/CD component](https://about.gitlab.com/blog/tutorial-how-to-set-up-your-first-gitlab-ci-cd-component/)\n*   [How to include file references in your CI/CD components](https://about.gitlab.com/blog/how-to-include-file-references-in-your-ci-cd-components/)\n*   [FAQ: GitLab CI/CD Catalog](https://about.gitlab.com/blog/faq-gitlab-ci-cd-catalog/)\n*   [Building a GitLab CI/CD pipeline for a monorepo the easy way](https://about.gitlab.com/blog/building-a-gitlab-ci-cd-pipeline-for-a-monorepo-the-easy-way/)\n*   [A CI/CD component builder's journey](https://about.gitlab.com/blog/a-ci-component-builders-journey/)\n*   [CI/CD Catalog goes GA: No more building pipelines from scratch](https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/)","5 ways GitLab pipeline logic solves real engineering problems","Learn how to scale CI/CD with composable patterns for monorepos, microservices, environments, and governance.",[719],"Omid Khan","https://res.cloudinary.com/about-gitlab-com/image/upload/v1772721753/frfsm1qfscwrmsyzj1qn.png","2026-04-09",[23,723,25,724],"DevOps platform","features",{"featured":28,"template":13,"slug":726},"5-ways-gitlab-pipeline-logic-solves-real-engineering-problems",{"content":728,"config":738},{"title":729,"description":730,"authors":731,"heroImage":733,"date":734,"body":735,"category":9,"tags":736},"How to use GitLab Container Virtual Registry with Docker Hardened Images","Learn how to simplify container image management with this step-by-step guide.",[732],"Tim Rizzi","https://res.cloudinary.com/about-gitlab-com/image/upload/v1772111172/mwhgbjawn62kymfwrhle.png","2026-03-12","If you're a platform engineer, you've probably had this conversation:\n  \n*\"Security says we need to use hardened base images.\"*\n\n*\"Great, where do I configure credentials for yet another registry?\"*\n\n*\"Also, how do we make sure everyone actually uses them?\"*\n\nOr this one:\n\n*\"Why are our builds so slow?\"*\n\n*\"We're pulling the same 500MB image from Docker Hub in every single job.\"*\n\n*\"Can't we just cache these somewhere?\"*\n\nI've been working on [Container Virtual Registry](https://docs.gitlab.com/user/packages/virtual_registry/container/) at GitLab specifically to solve these problems. It's a pull-through cache that sits in front of your upstream registries — Docker Hub, dhi.io (Docker Hardened Images), MCR, and Quay — and gives your teams a single endpoint to pull from. Images get cached on the first pull. Subsequent pulls come from the cache. 
Your developers don't need to know or care which upstream a particular image came from.\n\nThis article shows you how to set up Container Virtual Registry, specifically with Docker Hardened Images in mind, since that's a combination that makes a lot of sense for teams concerned about security and not making their developers' lives harder.\n\n## What problem are we actually solving?\n\nThe Platform teams I usually talk to manage container images across three to five registries:\n\n* **Docker Hub** for most base images\n* **dhi.io** for Docker Hardened Images (security-conscious workloads)\n* **MCR** for .NET and Azure tooling\n* **Quay.io** for Red Hat ecosystem stuff\n* **Internal registries** for proprietary images\n\nEach one has its own:\n\n* Authentication mechanism\n* Network latency characteristics\n* Way of organizing image paths\n\nYour CI/CD configs end up littered with registry-specific logic. Credential management becomes a project unto itself. And every pipeline job pulls the same base images over the network, even though they haven't changed in weeks.\n\nContainer Virtual Registry consolidates this. One registry URL. One authentication flow (GitLab's). Cached images are served from GitLab's infrastructure rather than traversing the internet each time.\n\n## How it works\n\nThe model is straightforward:\n\n```text\nYour pipeline pulls:\n  gitlab.com/virtual_registries/container/1000016/python:3.13\n\nVirtual registry checks:\n  1. Do I have this cached? → Return it\n  2. No? → Fetch from upstream, cache it, return it\n\n```\n\nYou configure upstreams in priority order. When a pull request comes in, the virtual registry checks each upstream until it finds the image. The result gets cached for a configurable period (default 24 hours).\n\n```text\n┌─────────────────────────────────────────────────────────┐\n│                    CI/CD Pipeline                       │\n│                          │                              │\n│                          ▼                              │\n│   gitlab.com/virtual_registries/container/\u003Cid>/image   │\n└─────────────────────────────────────────────────────────┘\n                           │\n                           ▼\n┌─────────────────────────────────────────────────────────┐\n│            Container Virtual Registry                   │\n│                                                         │\n│  Upstream 1: Docker Hub ────────────────┐               │\n│  Upstream 2: dhi.io (Hardened) ────────┐│               │\n│  Upstream 3: MCR ─────────────────────┐││               │\n│  Upstream 4: Quay.io ────────────────┐│││               │\n│                                      ││││               │\n│                    ┌─────────────────┴┴┴┴──┐            │\n│                    │        Cache          │            │\n│                    │  (manifests + layers) │            │\n│                    └───────────────────────┘            │\n└─────────────────────────────────────────────────────────┘\n```\n\n## Why this matters for Docker Hardened Images\n\n[Docker Hardened Images](https://docs.docker.com/dhi/) are great because of the minimal attack surface, near-zero CVEs, proper software bills of materials (SBOMs), and SLSA provenance. 
If you're evaluating base images for security-sensitive workloads, they should be on your list.\n\nBut adopting them creates the same operational friction as any new registry:\n\n* **Credential distribution**: You need to get Docker credentials to every system that pulls images from dhi.io.\n* **CI/CD changes**: Every pipeline needs to be updated to authenticate with dhi.io.\n* **Developer friction**: People need to remember to use the hardened variants.\n* **Visibility gap**: It's difficult to tell if teams are actually using hardened images vs. regular ones.\n\nVirtual registry addresses each of these:\n\n**Single credential**: Teams authenticate to GitLab. The virtual registry handles upstream authentication. You configure Docker credentials once, at the registry level, and they apply to all pulls.\n\n**No CI/CD changes per-team**: Point pipelines at your virtual registry. Done. The upstream configuration is centralized.\n\n**Gradual adoption**: Since images get cached with their full path, you can see in the cache what's being pulled. If someone's pulling `library/python:3.11` instead of the hardened variant, you'll know.\n\n**Audit trail**: The cache shows you exactly which images are in active use. Useful for compliance, useful for understanding what your fleet actually depends on.\n\n## Setting it up\n\nHere's a real setup using the Python client from this demo project.\n\n### Create the virtual registry\n\n```python\nfrom virtual_registry_client import VirtualRegistryClient\n\nclient = VirtualRegistryClient()\n\nregistry = client.create_virtual_registry(\n    group_id=\"785414\",  # Your top-level group ID\n    name=\"platform-images\",\n    description=\"Cached container images for platform teams\"\n)\n\nprint(f\"Registry ID: {registry['id']}\")\n# You'll need this ID for the pull URL\n```\n\n### Add Docker Hub as an upstream\n\nFor official images like Alpine, Python, etc.:\n\n```python\ndocker_upstream = client.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://registry-1.docker.io\",\n    name=\"Docker Hub\",\n    cache_validity_hours=24\n)\n```\n\n### Add Docker Hardened Images (dhi.io)\n\nDocker Hardened Images are hosted on `dhi.io`, a separate registry that requires authentication:\n\n```python\ndhi_upstream = client.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://dhi.io\",\n    name=\"Docker Hardened Images\",\n    username=\"your-docker-username\",\n    password=\"your-docker-access-token\",\n    cache_validity_hours=24\n)\n```\n\n### Add other upstreams\n\n```python\n# MCR for .NET teams\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://mcr.microsoft.com\",\n    name=\"Microsoft Container Registry\",\n    cache_validity_hours=48\n)\n\n# Quay for Red Hat stuff\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://quay.io\",\n    name=\"Quay.io\",\n    cache_validity_hours=24\n)\n```\n\n### Update your CI/CD\n\nHere's a `.gitlab-ci.yml` that pulls through the virtual registry:\n\n```yaml\nvariables:\n  VIRTUAL_REGISTRY_ID: \u003Cyour_virtual_registry_ID>\n\n  \nbuild:\n  image: docker:24\n  services:\n    - docker:24-dind\n  before_script:\n    # Authenticate to GitLab (which handles upstream auth for you)\n    - echo \"${CI_JOB_TOKEN}\" | docker login -u gitlab-ci-token --password-stdin gitlab.com\n  script:\n    # All of these go through your single virtual registry\n    \n    # Official Docker Hub images (use library/ prefix)\n    - docker pull 
gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/library/alpine:latest\n    \n    # Docker Hardened Images from dhi.io (no prefix needed)\n    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/python:3.13\n    \n    # .NET from MCR\n    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/dotnet/sdk:8.0\n```\n\n### Image path formats\n\nDifferent registries use different path conventions:\n\n| Registry | Pull URL Example |\n|----------|------------------|\n| Docker Hub (official) | `.../library/python:3.11-slim` |\n| Docker Hardened Images (dhi.io) | `.../python:3.13` |\n| MCR | `.../dotnet/sdk:8.0` |\n| Quay.io | `.../prometheus/prometheus:latest` |\n\n### Verify it's working\n\nAfter some pulls, check your cache:\n\n```python\nupstreams = client.list_registry_upstreams(registry['id'])\nfor upstream in upstreams:\n    entries = client.list_cache_entries(upstream['id'])\n    print(f\"{upstream['name']}: {len(entries)} cached entries\")\n\n```\n\n## What the numbers look like\n\nI ran tests pulling images through the virtual registry:\n\n| Metric | Without Cache | With Warm Cache |\n|--------|---------------|-----------------|\n| Pull time (Alpine) | 10.3s | 4.2s |\n| Pull time (Python 3.13 DHI) | 11.6s | ~4s |\n| Network roundtrips to upstream | Every pull | Cache misses only |\n\n\n\n\nThe first pull is the same speed (it has to fetch from upstream). Every pull after that, for the cache validity period, comes straight from GitLab's storage. No network hop to Docker Hub, dhi.io, MCR, or wherever the image lives.\n\nFor a team running hundreds of pipeline jobs per day, that's hours of cumulative build time saved.\n\n## Practical considerations\nHere are some considerations to keep in mind:\n\n### Cache validity\n\n24 hours is the default. For security-sensitive images where you want patches quickly, consider 12 hours or less:\n\n```python\nclient.create_upstream(\n    registry_id=registry['id'],\n    url=\"https://dhi.io\",\n    name=\"Docker Hardened Images\",\n    username=\"your-username\",\n    password=\"your-token\",\n    cache_validity_hours=12\n)\n```\n\nFor stable, infrequently-updated images (like specific version tags), longer validity is fine.\n\n### Upstream priority\n\nUpstreams are checked in order. If you have images with the same name on different registries, the first matching upstream wins.\n\n### Limits\n\n* Maximum of 20 virtual registries per group\n* Maximum of 20 upstreams per virtual registry\n\n## Configuration via UI\n\nYou can also configure virtual registries and upstreams directly from the GitLab UI—no API calls required. Navigate to your group's **Settings > Packages and registries > Virtual Registry** to:\n\n* Create and manage virtual registries\n* Add, edit, and reorder upstream registries\n* View and manage the cache\n* Monitor which images are being pulled\n\n## What's next\n\nWe're actively developing:\n\n* **Allow/deny lists**: Use regex to control which images can be pulled from specific upstreams.\n\nThis is beta software. 
It works, people are using it in production, but we're still iterating based on feedback.\n\n## Share your feedback\n\nIf you're a platform engineer dealing with container registry sprawl, I'd like to understand your setup:\n\n* How many upstream registries are you managing?\n* What's your biggest pain point with the current state?\n* Would something like this help, and if not, what's missing?\n\nPlease share your experiences in the [Container Virtual Registry feedback issue](https://gitlab.com/gitlab-org/gitlab/-/work_items/589630).\n## Related resources\n- [New GitLab metrics and registry features help reduce CI/CD bottlenecks](https://about.gitlab.com/blog/new-gitlab-metrics-and-registry-features-help-reduce-ci-cd-bottlenecks/#container-virtual-registry)\n- [Container Virtual Registry documentation](https://docs.gitlab.com/user/packages/virtual_registry/container/)\n- [Container Virtual Registry API](https://docs.gitlab.com/api/container_virtual_registries/)",[25,737,724],"product",{"featured":12,"template":13,"slug":739},"using-gitlab-container-virtual-registry-with-docker-hardened-images",{"content":741,"config":751},{"title":742,"description":743,"authors":744,"heroImage":746,"date":747,"category":9,"tags":748,"body":750},"How IIT Bombay students are coding the future with GitLab","At GitLab, we often talk about how software accelerates innovation. But sometimes, you have to step away from the Zoom calls and stand in a crowded university hall to remember why we do this.",[745],"Nick Veenhof","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099013/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2814%29_6VTUA8mUhOZNDaRVNPeKwl_1750099012960.png","2026-01-08",[261,620,749],"open source","The GitLab team recently had the privilege of judging the **iHack Hackathon** at **IIT Bombay's E-Summit**. The energy was electric, the coffee was flowing, and the talent was undeniable. But what struck us most wasn't just the code — it was the sheer determination of students to solve real-world problems, often overcoming significant logistical and financial hurdles to simply be in the room.\n\n\nThrough our [GitLab for Education program](https://about.gitlab.com/solutions/education/), we aim to empower the next generation of developers with tools and opportunity. Here is a look at what the students built, and how they used GitLab to bridge the gap between idea and reality.\n\n## The challenge: Build faster, build securely\n\nThe premise for the GitLab track of the hackathon was simple: Don't just show us a product; show us how you built it. We wanted to see how students utilized GitLab's platform — from Issue Boards to CI/CD pipelines — to accelerate the development lifecycle.\n\nThe results were inspiring.\n\n## The winners\n\n### 1st place: Team Decode — Democratizing Scientific Research\n\n**Project:** FIRE (Fast Integrated Research Environment)\n\nTeam Decode took home the top prize with a solution that warms a developer's heart: a local-first, blazing-fast data processing tool built with [Rust](https://about.gitlab.com/blog/secure-rust-development-with-gitlab/) and Tauri. They identified a massive pain point for data science students: existing tools are fragmented, slow, and expensive.\n\nTheir solution, FIRE, allows researchers to visualize complex formats (like NetCDF) instantly. What impressed the judges most was their \"hacker\" ethos. 
They didn't just build a tool; they built it to be open and accessible.\n\n**How they used GitLab:** Since the team lived far apart, asynchronous communication was key. They utilized **GitLab Issue Boards** and **Milestones** to track progress and integrated their repo with Telegram to get real-time push notifications. As one team member noted, \"Coordinating all these technologies was really difficult, and what helped us was GitLab... the Issue Board really helped us track who was doing what.\"\n\n![Team Decode](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/epqazj1jc5c7zkgqun9h.jpg)\n\n### 2nd place: Team BichdeHueDost — Reuniting to Solve Payments\n\n**Project:** SemiPay (RFID Cashless Payment for Schools)\n\nThe team name, BichdeHueDost, translates to \"Friends who have been set apart.\" It's a fitting name for a group of friends who went to different colleges but reunited to build this project. They tackled a unique problem: handling cash in schools for young children. Their solution used RFID cards backed by a blockchain ledger to ensure secure, cashless transactions for students.\n\n**How they used GitLab:** They utilized [GitLab CI/CD](https://about.gitlab.com/topics/ci-cd/) to automate the build process for their Flutter application (APK), ensuring that every commit resulted in a testable artifact. This allowed them to iterate quickly despite the \"flaky\" nature of cross-platform mobile development.\n\n![Team BichdeHueDost](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/pkukrjgx2miukb6nrj5g.jpg)\n\n### 3rd place: Team ZenYukti — Agentic Repository Intelligence\n\n**Project:** RepoInsight AI (AI-powered, GitLab-native intelligence platform)\n\nTeam ZenYukti impressed us with a solution that tackles a universal developer pain point: understanding unfamiliar codebases. What stood out to the judges was the tool's practical approach to onboarding and code comprehension: RepoInsight-AI automatically generates documentation, visualizes repository structure, and even helps identify bugs, all while maintaining context about the entire codebase.\n\n**How they used GitLab:** The team built a comprehensive CI/CD pipeline that showcased GitLab's security and DevOps capabilities. They integrated [GitLab's Security Templates](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Security) (SAST, Dependency Scanning, and Secret Detection), and utilized [GitLab Container Registry](https://docs.gitlab.com/user/packages/container_registry/) to manage their Docker images for backend and frontend components. They created an AI auto-review bot that runs on merge requests, demonstrating an \"agentic workflow\" where AI assists in the development process itself.\n\n![Team ZenYukti](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/ymlzqoruv5al1secatba.jpg)\n\n## Beyond the code: A lesson in inclusion\n\nWhile the code was impressive, the most powerful moment of the event happened away from the keyboard.\n\nDuring the feedback session, we learned about the journey Team ZenYukti took to get to Mumbai. They traveled over 24 hours, covering nearly 1,800 kilometers. Because flights were too expensive and trains were booked, they traveled in the \"General Coach,\" a non-reserved, severely overcrowded carriage.\n\nAs one student described it:\n\n*\"You cannot even imagine something like this... there are no seats... people sit on the top of the train. This is what we have endured.\"*\n\nThis hit home. 
[Diversity, Inclusion, and Belonging](https://handbook.gitlab.com/handbook/company/culture/inclusion/) are core values at GitLab. We realized that for these students, the barrier to entry wasn't intellect or skill, it was access.\n\nIn that moment, we decided to break that barrier. We committed to reimbursing the travel expenses for the participants who struggled to get there. It's a small step, but it underlines a massive truth: **talent is distributed equally, but opportunity is not.**\n\n![hackathon class together](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380252/o5aqmboquz8ehusxvgom.jpg)\n\n### The future is bright (and automated)\n\nWe also saw incredible potential in teams like Prometheus, who attempted to build an autonomous patch remediation tool (DevGuardian), and Team Arrakis, who built a voice-first job portal for blue-collar workers using [GitLab Duo](https://about.gitlab.com/gitlab-duo-agent-platform/) to troubleshoot their pipelines.\n\nTo all the students who participated: You are the future. Through [GitLab for Education](https://about.gitlab.com/solutions/education/), we are committed to providing you with the top-tier tools (like GitLab Ultimate) you need to learn, collaborate, and change the world — whether you are coding from a dorm room, a lab, or a train carriage. **Keep shipping.**\n\n> :bulb: Learn more about the [GitLab for Education program](https://about.gitlab.com/solutions/education/).\n",{"slug":752,"featured":12,"template":13},"how-iit-bombay-students-code-future-with-gitlab",{"promotions":754},[755,769,780,792],{"id":756,"categories":757,"header":759,"text":760,"button":761,"image":766},"ai-modernization",[758],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":762,"config":763},"Get your AI maturity score",{"href":764,"dataGaName":765,"dataGaLocation":243},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":767},{"src":768},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":770,"categories":771,"header":772,"text":760,"button":773,"image":777},"devops-modernization",[737,566],"Are you just managing tools or shipping innovation?",{"text":774,"config":775},"Get your DevOps maturity score",{"href":776,"dataGaName":765,"dataGaLocation":243},"/assessments/devops-modernization-assessment/",{"config":778},{"src":779},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":781,"categories":782,"header":784,"text":760,"button":785,"image":789},"security-modernization",[783],"security","Are you trading speed for security?",{"text":786,"config":787},"Get your security maturity score",{"href":788,"dataGaName":765,"dataGaLocation":243},"/assessments/security-modernization-assessment/",{"config":790},{"src":791},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"id":793,"paths":794,"header":797,"text":798,"button":799,"image":804},"github-azure-migration",[795,796],"migration-from-azure-devops-to-gitlab","integrating-azure-devops-scm-and-gitlab","Is your team ready for GitHub's Azure move?","GitHub is already rebuilding around Azure. 
Find out what it means for you.",{"text":800,"config":801},"See how GitLab compares to GitHub",{"href":802,"dataGaName":803,"dataGaLocation":243},"/compare/gitlab-vs-github/github-azure-migration/","github azure migration",{"config":805},{"src":779},{"header":807,"blurb":808,"button":809,"secondaryButton":814},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":810,"config":811},"Get your free trial",{"href":812,"dataGaName":51,"dataGaLocation":813},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":505,"config":815},{"href":55,"dataGaName":56,"dataGaLocation":813},1776449933490]