
Infrastructure as Code - Cloud Deployment

Target Audience: DevOps engineers, cloud architects, contributors
Purpose: Infrastructure provisioning and management using IaC tools
Reading Time: 15 minutes


Table of Contents

  1. Overview
  2. Terraform Configuration
  3. AWS Configuration
  4. Pulumi Configuration
  5. Network Architecture
  6. Database Provisioning
  7. Cost Estimation
  8. Best Practices

Overview

What This Guide Covers

Infrastructure as Code (IaC) templates for deploying CODITECT to production cloud environments:

  • Terraform - HCL-based infrastructure provisioning
  • Pulumi - Multi-language (Python, TypeScript) infrastructure
  • GCP - Google Cloud Platform deployment
  • AWS - Amazon Web Services deployment
  • Azure - Microsoft Azure deployment

Why Infrastructure as Code?

  • Reproducibility - Exact environment replication across environments
  • Version Control - Infrastructure changes tracked in Git
  • Automation - Programmatic provisioning and teardown
  • Documentation - Infrastructure self-documented in code
  • Testing - Infrastructure changes testable before production

Terraform Configuration

Directory Structure

infrastructure/terraform/
├── main.tf               # Primary configuration
├── variables.tf          # Input variables
├── outputs.tf            # Output values
├── providers.tf          # Cloud provider configuration
├── modules/              # Reusable modules
│   ├── network/          # VPC, subnets, firewall
│   ├── compute/          # VM instances, containers
│   ├── database/         # Database instances
│   └── storage/          # Object storage, volumes
└── environments/         # Environment-specific configs
    ├── development/
    ├── staging/
    └── production/
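Assuming the layout above, the skeleton can be scaffolded in one pass (names come straight from the tree; adjust to taste):

```shell
# Create the module and environment directories in one pass
mkdir -p infrastructure/terraform/modules/{network,compute,database,storage} \
         infrastructure/terraform/environments/{development,staging,production}

# Create the empty root configuration files
touch infrastructure/terraform/{main,variables,outputs,providers}.tf
```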

Google Cloud Platform (GCP)

Complete GCP Deployment

# providers.tf
terraform {
  required_version = ">= 1.5.0"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }

  backend "gcs" {
    bucket = "coditect-terraform-state"
    prefix = "production"
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

# variables.tf
variable "project_id" {
  description = "GCP Project ID"
  type        = string
}

variable "region" {
  description = "GCP Region"
  type        = string
  default     = "us-central1"
}

variable "zone" {
  description = "GCP Zone"
  type        = string
  default     = "us-central1-a"
}

variable "environment" {
  description = "Environment (dev, staging, production)"
  type        = string
}

variable "db_password" {
  description = "Database password (prefer Secret Manager in production)"
  type        = string
  sensitive   = true
}

variable "machine_type" {
  description = "Compute Engine machine type"
  type        = string
  default     = "e2-medium"
}

# main.tf
locals {
  app_name = "coditect"
  labels = {
    environment = var.environment
    managed_by  = "terraform"
    application = local.app_name
  }
}

# VPC Network
resource "google_compute_network" "vpc" {
  name                    = "${local.app_name}-vpc-${var.environment}"
  auto_create_subnetworks = false
}

# Subnet
resource "google_compute_subnetwork" "subnet" {
  name          = "${local.app_name}-subnet-${var.environment}"
  ip_cidr_range = "10.0.0.0/24"
  region        = var.region
  network       = google_compute_network.vpc.id

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.1.0.0/24"
  }
}

# Firewall Rules
resource "google_compute_firewall" "allow_http" {
  name    = "${local.app_name}-allow-http-${var.environment}"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["80", "443"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["http-server"]
}

resource "google_compute_firewall" "allow_ssh" {
  name    = "${local.app_name}-allow-ssh-${var.environment}"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"] # Restrict in production!
  target_tags   = ["ssh-access"]
}

# Cloud SQL PostgreSQL
resource "google_sql_database_instance" "postgres" {
  name             = "${local.app_name}-postgres-${var.environment}"
  database_version = "POSTGRES_15"
  region           = var.region

  settings {
    tier              = "db-f1-micro" # Adjust for production
    availability_type = var.environment == "production" ? "REGIONAL" : "ZONAL"
    disk_size         = 10
    disk_type         = "PD_SSD"

    backup_configuration {
      enabled                        = true
      start_time                     = "03:00"
      point_in_time_recovery_enabled = var.environment == "production"
    }

    ip_configuration {
      ipv4_enabled = false
      # Private IP also requires a google_service_networking_connection
      # peering on this VPC (not shown here).
      private_network = google_compute_network.vpc.id
    }
  }

  deletion_protection = var.environment == "production"
}

resource "google_sql_database" "database" {
  name     = "coditect"
  instance = google_sql_database_instance.postgres.name
}

resource "google_sql_user" "user" {
  name     = "coditect_app"
  instance = google_sql_database_instance.postgres.name
  password = var.db_password # Use Secret Manager in production!
}

# Cloud Run Service
resource "google_cloud_run_service" "app" {
  name     = "${local.app_name}-app-${var.environment}"
  location = var.region

  template {
    spec {
      containers {
        image = "gcr.io/${var.project_id}/${local.app_name}:latest"

        env {
          name  = "DATABASE_URL"
          value = "postgresql://${google_sql_user.user.name}:${var.db_password}@${google_sql_database_instance.postgres.private_ip_address}:5432/${google_sql_database.database.name}"
        }

        env {
          name  = "ENVIRONMENT"
          value = var.environment
        }

        resources {
          limits = {
            cpu    = "2"
            memory = "2Gi"
          }
        }
      }

      container_concurrency = 80
    }

    metadata {
      annotations = {
        "autoscaling.knative.dev/minScale" = var.environment == "production" ? "2" : "0"
        "autoscaling.knative.dev/maxScale" = var.environment == "production" ? "10" : "5"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}

# Cloud Run IAM
resource "google_cloud_run_service_iam_member" "public_access" {
  count    = var.environment == "development" ? 1 : 0
  service  = google_cloud_run_service.app.name
  location = google_cloud_run_service.app.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
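Because the allUsers binding above is gated to development, production needs its own invoker grant. A sketch, assuming a hypothetical frontend service account (the account name is illustrative, not part of this project):

```hcl
# Sketch: grant run.invoker to a dedicated caller identity in production
# (the "frontend" service account name is hypothetical)
resource "google_cloud_run_service_iam_member" "invoker" {
  count    = var.environment == "production" ? 1 : 0
  service  = google_cloud_run_service.app.name
  location = google_cloud_run_service.app.location
  role     = "roles/run.invoker"
  member   = "serviceAccount:frontend@${var.project_id}.iam.gserviceaccount.com"
}
```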

# Load Balancer (Production)
resource "google_compute_global_address" "lb_ip" {
  count = var.environment == "production" ? 1 : 0
  name  = "${local.app_name}-lb-ip-${var.environment}"
}

# Cloud Storage Bucket
resource "google_storage_bucket" "app_storage" {
  name          = "${var.project_id}-${local.app_name}-${var.environment}"
  location      = var.region
  force_destroy = var.environment != "production"

  uniform_bucket_level_access = true

  versioning {
    enabled = var.environment == "production"
  }

  lifecycle_rule {
    condition {
      age = 30
    }
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
  }
}

# outputs.tf
output "app_url" {
  description = "Application URL"
  value       = google_cloud_run_service.app.status[0].url
}

output "database_connection_name" {
  description = "Cloud SQL connection name"
  value       = google_sql_database_instance.postgres.connection_name
}

output "storage_bucket_name" {
  description = "Storage bucket name"
  value       = google_storage_bucket.app_storage.name
}

output "load_balancer_ip" {
  description = "Load balancer IP address"
  value       = var.environment == "production" ? google_compute_global_address.lb_ip[0].address : "N/A"
}
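One caveat on the DATABASE_URL interpolation above: it breaks silently if the password contains URL-reserved characters such as @, /, or :. A small Python sketch (function name is illustrative) shows how an application can assemble the URL safely instead of string-splicing:

```python
from urllib.parse import quote

def build_database_url(user: str, password: str, host: str, db: str, port: int = 5432) -> str:
    """Assemble a PostgreSQL URL, percent-encoding credential characters."""
    return f"postgresql://{quote(user)}:{quote(password, safe='')}@{host}:{port}/{db}"

url = build_database_url("coditect_app", "p@ss/word", "10.0.0.5", "coditect")
print(url)  # postgresql://coditect_app:p%40ss%2Fword@10.0.0.5:5432/coditect
```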

Usage

# Initialize Terraform
cd infrastructure/terraform/environments/production
terraform init

# Plan deployment
terraform plan -out=deployment.tfplan \
  -var="project_id=my-gcp-project" \
  -var="environment=production" \
  -var="db_password=$DB_PASSWORD"

# Apply infrastructure
terraform apply deployment.tfplan

# Verify deployment
terraform show

# Destroy (careful! pass the same variables as the plan)
terraform destroy \
  -var="project_id=my-gcp-project" \
  -var="environment=production" \
  -var="db_password=$DB_PASSWORD"

AWS Configuration

Complete AWS Deployment

# providers.tf
terraform {
  required_version = ">= 1.5.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "coditect-terraform-state"
    key    = "production/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = var.region
}

# variables.tf
variable "region" {
  description = "AWS Region"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Environment (dev, staging, production)"
  type        = string
}

variable "db_password" {
  description = "RDS master password (prefer Secrets Manager in production)"
  type        = string
  sensitive   = true
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.medium"
}

# main.tf
locals {
  app_name = "coditect"
  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
    Application = local.app_name
  }
}

# VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = merge(local.tags, {
    Name = "${local.app_name}-vpc-${var.environment}"
  })
}

# Subnets
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.${count.index}.0/24"
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = merge(local.tags, {
    Name = "${local.app_name}-public-subnet-${count.index + 1}"
  })
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index + 10}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = merge(local.tags, {
    Name = "${local.app_name}-private-subnet-${count.index + 1}"
  })
}

# Internet Gateway
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = merge(local.tags, {
    Name = "${local.app_name}-igw"
  })
}

# RDS PostgreSQL
resource "aws_db_subnet_group" "main" {
  name       = "${local.app_name}-db-subnet-group"
  subnet_ids = aws_subnet.private[*].id

  tags = merge(local.tags, {
    Name = "${local.app_name}-db-subnet-group"
  })
}

resource "aws_db_instance" "postgres" {
  identifier              = "${local.app_name}-postgres-${var.environment}"
  engine                  = "postgres"
  engine_version          = "15.3"
  instance_class          = "db.t3.micro"
  allocated_storage       = 20
  storage_type            = "gp3"
  db_name                 = "coditect"
  username                = "coditect_admin"
  password                = var.db_password # Use Secrets Manager!
  db_subnet_group_name    = aws_db_subnet_group.main.name
  vpc_security_group_ids  = [aws_security_group.database.id]
  skip_final_snapshot     = var.environment != "production"
  backup_retention_period = var.environment == "production" ? 7 : 1

  tags = local.tags
}

# Security Groups
resource "aws_security_group" "alb" {
  name        = "${local.app_name}-alb-sg"
  description = "Security group for ALB"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(local.tags, {
    Name = "${local.app_name}-alb-sg"
  })
}

resource "aws_security_group" "app" {
  name        = "${local.app_name}-app-sg"
  description = "Security group for application instances"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 8080 # application port; adjust to your listener
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(local.tags, {
    Name = "${local.app_name}-app-sg"
  })
}

resource "aws_security_group" "database" {
  name        = "${local.app_name}-database-sg"
  description = "Security group for RDS"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]
  }

  tags = merge(local.tags, {
    Name = "${local.app_name}-database-sg"
  })
}

# Application Load Balancer
resource "aws_lb" "main" {
  name               = "${local.app_name}-alb-${var.environment}"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = aws_subnet.public[*].id

  tags = local.tags
}

# S3 Bucket
resource "aws_s3_bucket" "app_storage" {
  bucket = "${local.app_name}-storage-${var.environment}-${data.aws_caller_identity.current.account_id}"

  tags = local.tags
}

resource "aws_s3_bucket_versioning" "app_storage" {
  bucket = aws_s3_bucket.app_storage.id
  versioning_configuration {
    status = var.environment == "production" ? "Enabled" : "Suspended"
  }
}

# Data sources
data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_caller_identity" "current" {}

# outputs.tf
output "alb_dns_name" {
  description = "Application Load Balancer DNS"
  value       = aws_lb.main.dns_name
}

output "database_endpoint" {
  description = "RDS endpoint"
  value       = aws_db_instance.postgres.endpoint
  sensitive   = true
}

output "storage_bucket_name" {
  description = "S3 bucket name"
  value       = aws_s3_bucket.app_storage.bucket
}

Pulumi Configuration

Python-based Infrastructure

# __main__.py
import pulumi
import pulumi_gcp as gcp

# Configuration (provider settings live in the "gcp" namespace)
config = pulumi.Config()
gcp_config = pulumi.Config("gcp")
project_id = gcp_config.require("project")
environment = config.get("environment") or "development"
app_name = "coditect"

# Labels
labels = {
    "environment": environment,
    "managed_by": "pulumi",
    "application": app_name,
}

# VPC Network
network = gcp.compute.Network(
    f"{app_name}-vpc-{environment}",
    auto_create_subnetworks=False,
)

# Subnet
subnet = gcp.compute.Subnetwork(
    f"{app_name}-subnet-{environment}",
    ip_cidr_range="10.0.0.0/24",
    region="us-central1",
    network=network.id,
)

# Cloud SQL PostgreSQL
db_instance = gcp.sql.DatabaseInstance(
    f"{app_name}-postgres-{environment}",
    database_version="POSTGRES_15",
    region="us-central1",
    settings=gcp.sql.DatabaseInstanceSettingsArgs(
        tier="db-f1-micro",
        disk_size=10,
        disk_type="PD_SSD",
        backup_configuration=gcp.sql.DatabaseInstanceSettingsBackupConfigurationArgs(
            enabled=True,
            start_time="03:00",
        ),
        ip_configuration=gcp.sql.DatabaseInstanceSettingsIpConfigurationArgs(
            ipv4_enabled=False,
            private_network=network.id,
        ),
    ),
    deletion_protection=environment == "production",
)

# Cloud Run Service
service = gcp.cloudrun.Service(
    f"{app_name}-app-{environment}",
    location="us-central1",
    template=gcp.cloudrun.ServiceTemplateArgs(
        spec=gcp.cloudrun.ServiceTemplateSpecArgs(
            containers=[gcp.cloudrun.ServiceTemplateSpecContainerArgs(
                image=f"gcr.io/{project_id}/{app_name}:latest",
                resources=gcp.cloudrun.ServiceTemplateSpecContainerResourcesArgs(
                    limits={"cpu": "2", "memory": "2Gi"},
                ),
            )],
        ),
    ),
)

# Exports
pulumi.export("app_url", service.statuses[0].url)
pulumi.export("database_connection", db_instance.connection_name)

Usage

# Install Pulumi
curl -fsSL https://get.pulumi.com | sh

# Initialize project
pulumi new gcp-python
pulumi config set gcp:project my-gcp-project
pulumi config set environment production

# Deploy
pulumi up

# View outputs
pulumi stack output app_url

# Destroy
pulumi destroy

Network Architecture

Production Network Design

┌──────────────────────────────────────────────────────────┐
│                    VPC (10.0.0.0/16)                     │
│                                                          │
│   ┌────────────────────┐      ┌────────────────────┐     │
│   │  Public Subnet 1   │      │  Public Subnet 2   │     │
│   │  10.0.1.0/24       │      │  10.0.2.0/24       │     │
│   │  ┌──────────┐      │      │  ┌──────────┐      │     │
│   │  │ NAT GW 1 │      │      │  │ NAT GW 2 │      │     │
│   │  └──────────┘      │      │  └──────────┘      │     │
│   │  ┌──────────┐      │      │  ┌──────────┐      │     │
│   │  │   ALB    │◄─────┼──────┼─►│   ALB    │      │     │
│   │  └──────────┘      │      │  └──────────┘      │     │
│   └────────────────────┘      └────────────────────┘     │
│                                                          │
│   ┌────────────────────┐      ┌────────────────────┐     │
│   │  Private Subnet 1  │      │  Private Subnet 2  │     │
│   │  10.0.11.0/24      │      │  10.0.12.0/24      │     │
│   │  ┌──────────┐      │      │  ┌──────────┐      │     │
│   │  │ App Tier │      │      │  │ App Tier │      │     │
│   │  └──────────┘      │      │  └──────────┘      │     │
│   └────────────────────┘      └────────────────────┘     │
│                                                          │
│   ┌────────────────────┐      ┌────────────────────┐     │
│   │  Database Subnet 1 │      │  Database Subnet 2 │     │
│   │  10.0.21.0/24      │      │  10.0.22.0/24      │     │
│   │  ┌──────────┐      │      │  ┌──────────┐      │     │
│   │  │ Primary  │◄─────┼──────┼─►│ Standby  │      │     │
│   │  │ Database │      │      │  │ Database │      │     │
│   │  └──────────┘      │      │  └──────────┘      │     │
│   └────────────────────┘      └────────────────────┘     │
└──────────────────────────────────────────────────────────┘
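All the /24 blocks in the diagram are carved from the VPC's /16. Python's standard-library ipaddress module can verify that the layout is containment-correct and conflict-free (CIDRs taken from the diagram):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Subnet CIDRs from the diagram above
subnets = [
    "10.0.1.0/24", "10.0.2.0/24",    # public
    "10.0.11.0/24", "10.0.12.0/24",  # private (app tier)
    "10.0.21.0/24", "10.0.22.0/24",  # database
]
nets = [ipaddress.ip_network(s) for s in subnets]

# Every subnet must fall inside the VPC range...
assert all(n.subnet_of(vpc) for n in nets)

# ...and no two subnets may overlap
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])

print(f"{len(nets)} subnets, {sum(n.num_addresses for n in nets)} addresses reserved")
# 6 subnets, 1536 addresses reserved
```

Running the same check in CI before `terraform plan` catches CIDR collisions long before the cloud provider rejects them.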

Security Best Practices

  1. Network Segmentation

    • Public subnets for load balancers only
    • Private subnets for application tier
    • Database subnets with no internet access
  2. Firewall Rules

    • Deny all by default
    • Explicit allow rules for required traffic
    • Source IP restrictions for management access
  3. NAT Gateways

    • Private instances access the internet via NAT
    • No direct inbound internet access to private subnets
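GCP VPCs already carry an implied deny-all ingress rule, but making the "deny all by default" posture explicit (and logged) is a common hardening step; a sketch:

```hcl
# Sketch: explicit lowest-priority deny-all ingress rule with logging
resource "google_compute_firewall" "deny_all_ingress" {
  name     = "${local.app_name}-deny-all-${var.environment}"
  network  = google_compute_network.vpc.name
  priority = 65534 # evaluated after every allow rule above

  deny {
    protocol = "all"
  }

  source_ranges = ["0.0.0.0/0"]

  log_config {
    metadata = "INCLUDE_ALL_METADATA"
  }
}
```

The explicit rule buys you firewall logs for dropped traffic, which the implied rule does not produce.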

Database Provisioning

High Availability Configuration

# Cloud SQL with high availability
resource "google_sql_database_instance" "postgres_ha" {
  name             = "coditect-postgres-ha"
  database_version = "POSTGRES_15"
  region           = "us-central1"

  settings {
    tier              = "db-custom-4-16384" # 4 vCPU, 16GB RAM
    availability_type = "REGIONAL"          # Multi-zone HA
    disk_size         = 100
    disk_type         = "PD_SSD"
    disk_autoresize   = true

    backup_configuration {
      enabled                        = true
      start_time                     = "03:00"
      point_in_time_recovery_enabled = true
      transaction_log_retention_days = 7
      backup_retention_settings {
        retained_backups = 30
        retention_unit   = "COUNT"
      }
    }

    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.vpc.id
      require_ssl     = true
    }

    database_flags {
      name  = "max_connections"
      value = "200"
    }

    database_flags {
      name  = "shared_buffers"
      value = "524288" # 4GB expressed in 8kB pages
    }

    insights_config {
      query_insights_enabled  = true
      query_string_length     = 1024
      record_application_tags = true
    }
  }

  deletion_protection = true
}
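The shared_buffers flag is set in 8 kB pages, which makes the arithmetic easy to get wrong (4 GB is 524288 pages, not 4194304). A quick converter, as a sanity check (the helper name is illustrative):

```python
def shared_buffers_pages(target_gib: int, page_kib: int = 8) -> int:
    """Convert a shared_buffers target in GiB to PostgreSQL 8 kB pages."""
    return target_gib * 1024 * 1024 // page_kib

pages = shared_buffers_pages(4)
print(pages)  # 524288 -> the flag value for a 4 GiB buffer pool
```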

Database Migration

# Create migration script
cat > migrate.sh <<'EOF'
#!/bin/bash
set -e

DB_HOST=${DB_HOST:-localhost}
DB_NAME=${DB_NAME:-coditect}
DB_USER=${DB_USER:-coditect_admin}

echo "Running database migrations..."

# Apply schema migrations
psql "postgresql://${DB_USER}@${DB_HOST}/${DB_NAME}" \
  -f migrations/001_initial_schema.sql

# Seed data (development only)
if [ "$ENVIRONMENT" = "development" ]; then
  psql "postgresql://${DB_USER}@${DB_HOST}/${DB_NAME}" \
    -f migrations/seed_data.sql
fi

echo "Migrations complete!"
EOF

chmod +x migrate.sh

Cost Estimation

Monthly Cost Breakdown (Production)

GCP Estimate

Service          Tier                     Monthly Cost
Cloud Run        2-10 instances           $50-250
Cloud SQL        db-custom-4-16384 (HA)   $400
Cloud Storage    100GB                    $2
Load Balancer    Standard                 $18
Network Egress   1TB                      $120
Total                                     ~$590-790/month

AWS Estimate

Service          Tier                      Monthly Cost
EC2              t3.medium (2 instances)   $60
RDS              db.t3.medium (Multi-AZ)   $130
S3               100GB                     $2.30
ALB              Standard                  $22
Data Transfer    1TB                       $90
Total                                      ~$304/month

Cost Optimization Tips

  1. Right-size instances - Monitor and adjust based on actual usage
  2. Use committed use discounts - Save 30-50% with 1-3 year commitments
  3. Auto-scaling - Scale down during off-peak hours
  4. Storage tiers - Move cold data to cheaper storage classes
  5. Reserved capacity - For predictable workloads
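The committed-use arithmetic behind tip 2 is easy to sketch. The figures below reuse the GCP estimate's midpoint and an assumed 37% blended one-year discount; actual rates vary by service and region:

```python
# Hypothetical committed-use savings using the GCP estimate above
monthly_low, monthly_high = 590, 790
midpoint = (monthly_low + monthly_high) / 2  # $690/month

discount = 0.37  # assumed blended 1-year committed-use discount
annual_on_demand = midpoint * 12
annual_committed = annual_on_demand * (1 - discount)

print(f"on-demand: ${annual_on_demand:,.0f}/year")  # $8,280/year
print(f"committed: ${annual_committed:,.0f}/year")  # $5,216/year
print(f"savings:   ${annual_on_demand - annual_committed:,.0f}/year")
```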

Best Practices

1. State Management

# Remote state with locking
terraform {
  backend "gcs" {
    bucket = "coditect-terraform-state"
    prefix = "production"
  }
}

2. Variable Management

# Use tfvars files
cat > production.tfvars <<EOF
project_id = "my-gcp-project"
environment = "production"
region = "us-central1"
EOF

# Apply with variables
terraform apply -var-file=production.tfvars

3. Secrets Management

# Use Secret Manager (not variables!)
data "google_secret_manager_secret_version" "db_password" {
  secret = "db-password"
}

resource "google_sql_user" "user" {
  password = data.google_secret_manager_secret_version.db_password.secret_data
}

4. Resource Tagging

# Consistent tagging
locals {
  common_tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
    Application = "coditect"
    CostCenter  = "engineering"
    Owner       = "devops-team"
  }
}

5. Modular Design

# Use modules for reusability
module "network" {
  source      = "./modules/network"
  environment = var.environment
  region      = var.region
}

module "database" {
  source     = "./modules/database"
  network_id = module.network.vpc_id
  subnet_ids = module.network.private_subnet_ids
}


Document Status: Production Ready
Last Validation: December 22, 2025
Next Review: March 2026