AWS Cloud Resume Challenge 3

Welcome to my Cloud Resume Challenge experience! This is the third part of a three-part series where I'll share everything I learned while building a serverless resume website on AWS. If you're thinking about taking on this challenge, I hope my journey helps you navigate through the process.

January 6, 2026
1 min read
DevOps · CI/CD · AWS

My Cloud Resume Challenge Journey - Part 3: DevOps and CI/CD

← Back to Part 2

13. Source Control

I organized the project with clear separation between frontend and backend components:

├── .github/workflows/
│   ├── backend-cicd.yml
│   └── frontend-cicd.yml
├── infra/                 # Terraform infrastructure
│   ├── lambda/
│   └── *.tf files
└── website/               # Static website files
    ├── index.html
    ├── script.js
    └── styles.css

This structure allows for:

  • Independent deployments: Frontend and backend can be deployed separately
  • Targeted CI/CD: Only rebuild what actually changed
  • Clear responsibilities: Infrastructure code vs. application code

14. CI/CD (Back end)

The backend pipeline handles infrastructure deployment and Lambda function testing. Here's the complete workflow:

```yaml
name: Terraform CI/CD

on:
  workflow_dispatch:
  push:
    branches:
      - main # Triggers on pushes to the main branch
    paths: ["infra/**"]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.14"

      - name: Install dependencies
        # Use 'working-directory' to run commands inside the new folder
        working-directory: ./infra/lambda
        run: |
          python -m pip install --upgrade pip
          # Install pytest and other dependencies from requirements.txt (if any)
          pip install pytest
          if [ -f requirements.txt ]; then
            pip install -r requirements.txt
          fi

      - name: Test with pytest
        working-directory: ./infra/lambda
        run: pytest

  terraform:
    needs: test
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./infra
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        # Generates a plan file for review before applying
        run: terraform plan -out=tfplan

      - name: Terraform Apply
        # Applies the changes automatically if the plan succeeds
        run: terraform apply -auto-approve tfplan
```

This pipeline does several important things:

  1. Triggers only on infrastructure changes: Uses paths: ["infra/**"] to avoid unnecessary runs
  2. Tests first: Runs Python tests before deploying infrastructure
  3. Validates Terraform: Ensures configuration is syntactically correct
  4. Plans before applying: Generates a plan file for safer deployments
  5. Outputs API URL: Makes the endpoint available for frontend integration
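That last point is handled by Terraform rather than by a workflow step. A minimal sketch of what such an output could look like (the resource name aws_apigatewayv2_api.visitors is illustrative, not the actual code):

```hcl
# Sketch only — the resource name "aws_apigatewayv2_api.visitors" is assumed.
output "api_endpoint" {
  description = "Invoke URL for the visitor counter API"
  value       = aws_apigatewayv2_api.visitors.api_endpoint
}
```

With an output like this in place, `terraform output -raw api_endpoint` prints the URL the frontend needs.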

15. Remote State

For production Terraform, remote state management is essential:

```hcl
terraform {
  backend "s3" {
    bucket         = "cloudresume-terraform-state"
    key            = "project.tfstate"
    region         = "ap-southeast-1"
    dynamodb_table = "cloudresume-terraform-state"
    encrypt        = true
  }
}
```

This configuration:

  • Stores state in S3: Centralized, versioned, and backed up
  • Uses DynamoDB for locking: Prevents concurrent modifications
  • Encrypts state: Keeps sensitive data secure
  • Enables collaboration: Multiple people can work on the same infrastructure
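One gotcha: the S3 bucket and DynamoDB table that back the state must exist before the first terraform init, because Terraform can't store state in resources it hasn't created yet. A one-time bootstrap could look roughly like this, run locally with admin credentials (the LockID key name is what the S3 backend expects for its lock table):

```shell
# One-time bootstrap for the remote state backend (names match the block above).
aws s3api create-bucket \
  --bucket cloudresume-terraform-state \
  --region ap-southeast-1 \
  --create-bucket-configuration LocationConstraint=ap-southeast-1

# Versioning lets you roll back to earlier state files.
aws s3api put-bucket-versioning \
  --bucket cloudresume-terraform-state \
  --versioning-configuration Status=Enabled

# The S3 backend expects the lock table's partition key to be "LockID" (string).
aws dynamodb create-table \
  --table-name cloudresume-terraform-state \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region ap-southeast-1
```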

16. CI/CD (Front end)

The frontend pipeline focuses on deploying static assets efficiently:

```yaml
name: Upload Website

on:
  workflow_dispatch:
  push:
    branches: [main]
    paths: ["website/**"]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl private --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_Website }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_Website }}
          AWS_REGION: "ap-southeast-1"
          SOURCE_DIR: "website"
```

Key features:

  • Targeted deployment: Only runs when website files change
  • Separate IAM credentials: Uses different credentials with minimal S3 permissions
  • Efficient sync: Only uploads changed files, deletes removed ones
  • Private ACL: Works with CloudFront OAC security model
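One thing the sync step doesn't cover is CloudFront's cache: after an upload, viewers can keep seeing the old files until the cache TTL expires. A step like this could be appended to the job (the CLOUDFRONT_DISTRIBUTION_ID secret name is my own choice — store your distribution's ID there, and note the IAM user would also need cloudfront:CreateInvalidation permission):

```yaml
      - name: Invalidate CloudFront cache
        run: >
          aws cloudfront create-invalidation
          --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
          --paths "/*"
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_Website }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_Website }}
          AWS_DEFAULT_REGION: ap-southeast-1
```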

17. JavaScript

The final piece connects everything together. Here's how the frontend fetches the visitor count:

```javascript
// Initialize when DOM is loaded
document.addEventListener("DOMContentLoaded", function () {
  const apiEndpoint =
    "https://xw6fumewed.execute-api.ap-southeast-1.amazonaws.com/visitors"; // Set to your API endpoint in production
  const visitorCounter = new VisitorCounter(apiEndpoint);
});
```
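For context, here is a sketch of what the VisitorCounter class constructed above might look like. Only the class name comes from the snippet — the { "count": N } response shape, the POST method, and the visitor-count element ID are assumptions on my part:

```javascript
// Hypothetical sketch — response shape, HTTP method, and element ID are assumed.
class VisitorCounter {
  constructor(apiEndpoint) {
    this.apiEndpoint = apiEndpoint;
  }

  // Pull the number out of the API's JSON body; fall back to 0 on anything odd.
  parseCount(body) {
    const n = Number(body && body.count);
    return Number.isFinite(n) ? n : 0;
  }

  // Fetch the count and write it into the page.
  async updateDisplay() {
    const el = document.getElementById("visitor-count");
    if (!el) return; // nothing to update on this page
    try {
      const res = await fetch(this.apiEndpoint, { method: "POST" });
      el.textContent = this.parseCount(await res.json());
    } catch {
      el.textContent = "-"; // fail quietly if the API is unreachable
    }
  }
}
```

In the DOMContentLoaded handler, the constructor call would then be followed by (or would itself trigger) visitorCounter.updateDisplay().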

Resources and Links

  • AWS Cloud Resume Challenge
  • My Resume Website
  • GitHub Repository
  • Terraform Documentation
  • AWS Lambda Documentation
