
Branch checkout times differs between pipelines

We have an issue where two pipelines, one cloned from the other with the same stages, show very different performance using the hosted agent.

Our main Deploy pipeline on the MAIN branch takes ~5min to check out and ~50min to deploy to our UAT environment, regularly hitting the 1hr limit and failing.

Yet a cloned UAT pipeline triggered on the UAT branch, with the exact same steps, takes 44s to check out and ~20min to build.

The two branches are often identical, so it's not the repository contents. I'm wondering what the underlying cause of the performance difference could be. Could branch history be the cause?

Has anyone encountered this? It would be great to have a main deploy pipeline that doesn't time out a third of the time.

Sean T asked Aug 31 '25 01:08
1 Answer

It is unlikely the branch itself has an impact on the build speed; as you've mentioned, it's often pointing at the exact same commit.

But there are a few hidden settings for each pipeline that can influence build speeds drastically:

Check the hidden "Checkout step" configuration to compare a couple of settings.

You can reach this screen by clicking Edit on the build definition, then opening the menu via the ⋮ button in the upper right corner of the screen and picking ⚡Triggers. Once on the trigger configuration page, click YAML and then Get Sources:

Open the hidden configuration panel for the YAML pipeline

Find the Get Sources tab

Shallow fetch and depth - These settings configure how many commits should be retrieved from the repository. The default values for this setting have changed in the past, so it's possible there is a difference between the 2 pipelines. A shallow fetch would drastically reduce the time needed to clone the repository.

Cleanup: true and Clean options - Another option I can think of is that the Cleanup settings for the two builds differ. If one build is configured to clean its working directory, it will always have to fetch everything and rebuild everything. If cleanup is turned off for the other build, it can do an incremental fetch and an incremental build, drastically speeding things up.

You can also explicitly configure these settings in the YAML file by adding an explicit checkout step to your job and configuring the workspace element:

    workspace:
      clean: outputs   # remove the outputs folder before the job runs (other values: resources, all)

    steps:
    - checkout: self
      clean: false     # keep sources between runs, enabling incremental fetches
      fetchDepth: 1    # shallow fetch: only retrieve the latest commit

Another option to check is whether the 2 pipelines are configured to use the same build agent. While I wouldn't expect great performance differences between hosted agents, self-hosted agents can of course vastly differ in terms of underlying hardware and whether the host is busy or not.
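To rule out agent differences, you can pin both pipelines to the same pool explicitly in YAML rather than relying on each pipeline's UI configuration. A minimal sketch (the self-hosted pool name is a placeholder for your own):

    # Microsoft-hosted agent: give both pipelines the same image
    pool:
      vmImage: 'ubuntu-latest'

    # Or, for a self-hosted pool:
    # pool:
    #   name: 'MySelfHostedPool'

With both pipelines declaring the same pool in source control, any remaining timing difference can't be blamed on the agents.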

Note that the default settings for Shallow Fetch and Cleanup have changed over time, which might explain the difference in the default configuration of the original and the cloned pipeline:

⚠️ Important

New pipelines created after the September 2022 Azure DevOps sprint 209 update have Shallow fetch enabled by default and configured with a depth of 1. Previously the default was not to shallow fetch. To check your pipeline, view the Shallow fetch setting in the pipeline settings UI as described in the following section.
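To take the changed default out of the equation, you can set the fetch depth explicitly in both YAML files instead of relying on the UI default. A positive number requests a shallow fetch of that many commits; 0 requests the full history (the pre-sprint-209 behavior):

    steps:
    - checkout: self
      fetchDepth: 1   # shallow fetch: only the latest commit
    # use fetchDepth: 0 instead to fetch the full history

Once both pipelines pin the same value, a difference in checkout time points elsewhere.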

While unlikely, the Checkout files from LFS and Checkout submodules options can of course also add time to the pipeline, but I'd expect the build to fail if these are misconfigured.
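Both of these options also live on the checkout step, so they're easy to compare between the two YAML files. A sketch with both explicitly disabled:

    steps:
    - checkout: self
      lfs: false        # don't download Git LFS files
      submodules: false # don't check out submodules ('recursive' would include nested ones)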

jessehouwing answered Sep 02 '25 17:09