I’ve been using GitHub Actions for years, thinking I had CI/CD figured out. Then I switched to a new project using BitBucket, and suddenly I’m staring at a YAML file that looks familiar but feels… different. Like walking into your friend’s kitchen – you know where the fridge is, but good luck finding the can opener.
BitBucket Pipelines isn’t just “GitHub Actions with different syntax.” It’s got its own personality, quirks, and honestly, some clever features that GitHub could learn from.
Why BitBucket Pipelines Exists
Let me be honest – if you’re perfectly happy with GitHub Actions, you’re not missing out on life-changing features. But here’s the thing: if you’re already using Jira, Confluence, or other Atlassian tools, BitBucket Pipelines slots in like a missing puzzle piece.
The integration is seamless. Jira issues linked to your commits show build and deployment status automatically, and you can wire pipeline failures into Jira automation rules. It’s the kind of workflow integration that makes enterprise teams swoon.
While GitHub Actions feels like a Swiss Army knife with 47 different tools, BitBucket Pipelines is more like a really good chef’s knife – focused, sharp, and gets the job done without unnecessary complexity.
The Pipeline Structure
The basic structure is refreshingly straightforward:
pipelines:
  branches:
    main:
      - step:
          name: Build and Test
          script:
            - echo "Hello from BitBucket!"
            - npm install
            - npm test
Compare that to GitHub Actions’ verbose syntax with its jobs, steps, and actions ecosystem. BitBucket keeps it simple: you have pipelines, they contain steps, steps run scripts. Done.
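For contrast, here’s roughly the same pipeline as a GitHub Actions workflow – a from-memory sketch, but it shows how much ceremony the jobs/steps/actions structure adds:

# .github/workflows/ci.yml – rough equivalent, for comparison
name: Build and Test
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install
      - run: npm test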
Breaking Down the Pipeline
Let me walk you through each piece of this structure, because understanding these components will save you headaches later:
pipelines: This is your root level – think of it as the container for all your CI/CD magic. Everything else lives inside here.
branches: This section defines trigger conditions. You can specify exact branch names (main, develop) or use patterns (feature/*, hotfix/*). It’s way more intuitive than GitHub’s complex trigger syntax.
main: Your actual branch name. This is where the magic happens for your primary branch. You can have multiple branch configurations side by side.
step: Each step is an isolated execution environment. Unlike GitHub Actions, where you might have multiple jobs running in parallel, BitBucket steps run sequentially by default. Want parallel execution? You’ll need to configure it explicitly – see the sketch after this list.
name: Pretty self-explanatory, but don’t underestimate this. A good name helps when you’re debugging failed builds at 2 AM. Trust me on this one.
script: Your actual commands. This is where you put the real work – building, testing, deploying. Each line runs in sequence, and if any command fails, the whole step fails.
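Speaking of parallel execution – when you do need it, BitBucket’s parallel keyword wraps steps that should run simultaneously. A minimal sketch (the scripts here are placeholders):

pipelines:
  branches:
    main:
      - parallel:
          - step:
              name: Lint
              script:
                - npm run lint
          - step:
              name: Unit Tests
              script:
                - npm test
      - step:
          name: Deploy
          script:
            - ./deploy.sh

The two parallel steps run at the same time; the deploy step waits for both to pass before it starts.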
Here’s a more complete example showing different branch behaviors:
pipelines:
  branches:
    main:
      - step:
          name: Production Deploy
          script:
            - npm install
            - npm run build
            - ./deploy.sh production
    develop:
      - step:
          name: Staging Deploy
          script:
            - npm install
            - npm test
            - ./deploy.sh staging
    feature/*:
      - step:
          name: Feature Testing
          script:
            - npm install
            - npm test
            - npm run lint
SCP and SSH Transport: Old School but Gold
Sometimes the fanciest deployment tools aren’t what you need. Sometimes you just want to copy files from your build to a server the old-fashioned way – with SCP and SSH. BitBucket Pipelines makes this surprisingly straightforward, though there are a few gotchas worth knowing about.
🔑 SSH Key Management
The basic pattern looks like this:
pipelines:
  branches:
    main:
      - step:
          name: Build and Deploy
          script:
            - npm run build
            - scp -r dist/* user@server:/var/www/html/
            - ssh user@server 'systemctl restart nginx'
But here’s where it gets interesting – BitBucket automatically mounts your SSH keys into the pipeline environment. No need to manually handle key files or worry about file permissions. Just add your private key to the repository SSH keys settings, and it’s available as $SSH_KEY_FILE.
Here’s a more robust example that handles host key verification and uses proper SSH configuration:
pipelines:
  branches:
    main:
      - step:
          name: Deploy via SCP
          script:
            # Set up SSH config to avoid host key verification issues
            - mkdir -p ~/.ssh
            - echo "StrictHostKeyChecking no" >> ~/.ssh/config
            - chmod 600 ~/.ssh/config
            # Build the application
            - npm install
            - npm run build
            # Copy files to server
            - scp -r dist/* $SSH_USER@$SSH_HOST:$DEPLOY_PATH
            # Run post-deployment commands (double quotes so $DEPLOY_PATH expands locally, not on the remote host)
            - ssh $SSH_USER@$SSH_HOST "cd $DEPLOY_PATH && ./post-deploy.sh"
The beauty of this approach is its simplicity. No complex deployment tools, no orchestration frameworks, no learning curve. Just good old SSH doing what it’s done reliably for decades.
🌍 Environment Variables for SSH
But don’t let the simplicity fool you – you can build pretty sophisticated deployment workflows with SCP and SSH. Want blue-green deployments? Use SSH to swap symlinks. Need to run database migrations? SSH in and run your scripts. Want to check service health after deployment? SSH and curl your health endpoints.
Here’s a more advanced example that implements a basic blue-green deployment:
pipelines:
  branches:
    main:
      - step:
          name: Blue-Green Deploy
          script:
            - npm install && npm run build
            # Determine which environment is currently live
            - LIVE_ENV=$(ssh $SSH_USER@$SSH_HOST 'readlink /var/www/current')
            - if [ "$LIVE_ENV" = "/var/www/blue" ]; then DEPLOY_ENV="green"; else DEPLOY_ENV="blue"; fi
            # Deploy to the inactive environment
            - scp -r dist/* $SSH_USER@$SSH_HOST:/var/www/$DEPLOY_ENV/
            # Test the deployment
            - ssh $SSH_USER@$SSH_HOST "curl -f http://localhost:8080/$DEPLOY_ENV/health"
            # Switch traffic to the new environment
            - ssh $SSH_USER@$SSH_HOST "ln -sfn /var/www/$DEPLOY_ENV /var/www/current"
            # Restart services
            - ssh $SSH_USER@$SSH_HOST 'systemctl reload nginx'
One thing I particularly appreciate about BitBucket’s SSH integration is how it handles multiple keys. You can add different SSH keys for different environments (staging, production, etc.) and reference them by name in your pipeline. No juggling key files or complex authentication workflows.
The main limitation? SCP isn’t great for large files or unreliable networks. But for most web applications, copying a few megabytes of built assets is perfectly fine. And if you need more robustness, you can always add retry logic or use rsync instead of SCP.
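If you go that route, a crude retry loop is enough for most cases. Here’s a sketch of what that might look like as a script line – it assumes rsync is installed in your build image and reuses the $SSH_USER, $SSH_HOST, and $DEPLOY_PATH variables from the earlier examples:

script:
  # Retry the transfer up to 3 times, pausing 10 seconds between attempts
  - for i in 1 2 3; do rsync -az --delete dist/ $SSH_USER@$SSH_HOST:$DEPLOY_PATH/ && break || sleep 10; done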
Sometimes the old ways are the best ways. SCP and SSH might not be the sexiest deployment strategy, but they’re reliable, well-understood, and work everywhere. In a world of complex deployment tools and orchestration platforms, there’s something refreshing about a solution that just works.
What About Runners?
Now, here’s where BitBucket differs from GitHub’s self-hosted runners approach. By default, BitBucket provides shared runners that handle your workload. These are Docker containers that spin up, run your pipeline, and disappear.
But if you need more control – maybe you’re dealing with sensitive data or need specific hardware – you can set up self-hosted runners. BitBucket calls these “Runners” (creative, right?), and they’re surprisingly easy to configure:
pipelines:
  branches:
    main:
      - step:
          name: Custom Runner Task
          runs-on:
            - self.hosted
            - linux
          script:
            - echo "Running on our own hardware!"
The beauty of BitBucket’s approach is that you don’t need to worry about runner management unless you actually need custom environments. Most teams can happily use the default shared runners for years without issues.
But here’s where it gets interesting – BitBucket’s branch-specific configuration is actually more intuitive than GitHub’s workflow triggers. Want different behavior for your main branch versus feature branches? Just define it clearly:
pipelines:
  branches:
    main:
      - step:
          name: Deploy to Production
          script:
            - ./deploy-prod.sh
    feature/*:
      - step:
          name: Run Tests Only
          script:
            - npm test
You know what I love about this? No complex conditionals, no workflow_dispatch triggers, no matrix strategies unless you actually need them. It’s CI/CD for humans.
Manual Pipeline Activation
Here’s something that tripped me up initially – BitBucket Pipelines don’t just work out of the box. You need to enable them manually in your repository settings. Even for specific branches!
This might seem annoying coming from GitHub, where Actions are enabled by default. But honestly? It’s kind of brilliant. No accidental pipeline runs burning through your build minutes. No surprise charges because someone pushed a massive matrix job.
The process is simple enough:
- Go to Repository Settings
- Find “Pipelines” in the sidebar
- Toggle that switch to “On”
- Configure branch permissions if needed
It’s an extra step, sure, but it forces you to be intentional about your CI/CD setup.
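Once the toggle is on, Pipelines reads a bitbucket-pipelines.yml file at the root of your repository. If you want a minimal starting point, the default section runs for any branch that doesn’t have a more specific mapping – something like this (node:18 here is just an example image):

# bitbucket-pipelines.yml – minimal starting point
image: node:18

pipelines:
  default:
    - step:
        name: Install and Test
        script:
          - npm install
          - npm test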
Docker Integration That Just Works
This is where BitBucket really shines. Every pipeline step runs in a Docker container by default. No setup, no configuration, no “actions/checkout@v2” – just specify your image and go:
pipelines:
  branches:
    main:
      - step:
          name: Node.js Build
          image: node:18
          script:
            - npm install
            - npm run build
Coming from GitHub Actions, this felt almost too easy. No wondering about runner environments, no version mismatches between your local setup and CI. Pick a Docker image, and you’re guaranteed consistency.
The default image is atlassian/default-image:3, which includes common tools like Git, Docker, and various language runtimes. But honestly, I recommend specifying your own image – it’s more explicit and you won’t get surprised by updates.
Artifacts
BitBucket’s artifact system is genuinely impressive. Unlike GitHub’s complex Actions artifacts, BitBucket makes it dead simple to pass files between steps:
pipelines:
  branches:
    main:
      - step:
          name: Build
          script:
            - npm run build
          artifacts:
            - dist/**
      - step:
          name: Deploy
          script:
            - aws s3 sync dist/ s3://my-bucket/
Those artifacts automatically carry forward to the next step. No downloading, no complex paths. It just works.
One honest caveat: artifacts live and die with the pipeline that produced them – they expire after 14 days and don’t carry across separate pipeline runs. If you need a build from main available to a hotfix branch later, push it to a proper artifact store (S3, a package registry, whatever fits) as part of the step.
The BitBucket Validator: Your New Best Friend
One tool that deserves serious praise is BitBucket’s pipeline validator at https://bitbucket.org/product/pipelines/validator. This thing has saved me countless hours of “fix YAML, push, wait, repeat” cycles.
Paste your pipeline configuration, and it’ll catch syntax errors, invalid configurations, and even suggest optimizations. It’s like having a linter for your CI/CD pipeline – something I desperately wish GitHub provided.
Conclusion
BitBucket Pipelines won’t revolutionize your development process, but it might just make your CI/CD setup a little more pleasant to work with. Sometimes that’s enough.