<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[martzmakes]]></title><description><![CDATA[I'm an AWS Community Builder that loves CDK, automating things and tinkering in Machine Learning.]]></description><link>https://martzmakes.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 21:31:39 GMT</lastBuildDate><atom:link href="https://martzmakes.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[It's My Birthday and I'm Giving YOU the Gift: Serverless MCP Servers That Cost Less Than Birthday Cake 🎂]]></title><description><![CDATA[How serverless architecture is transforming the way AI assistants connect to your tools and data

It's my birthday! 🎂 To celebrate, I want to teach you about creating SERVERLESS MCPs 💪!
In lieu of a present, I'll accept connections on LinkedIn
Her...]]></description><link>https://martzmakes.com/its-my-birthday-and-im-giving-you-the-gift-serverless-mcp-servers-that-cost-less-than-birthday-cake</link><guid isPermaLink="true">https://martzmakes.com/its-my-birthday-and-im-giving-you-the-gift-serverless-mcp-servers-that-cost-less-than-birthday-cake</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[mcp]]></category><category><![CDATA[llm]]></category><category><![CDATA[claude-code]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Wed, 10 Sep 2025 15:42:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757518745077/ebd694c1-6e4b-4ccc-bbb6-8a068c437536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>How serverless architecture is transforming the way AI assistants connect to your tools and data</em></p>
<hr />
<p>It's my birthday! 🎂 To celebrate, I want to teach you about creating SERVERLESS MCPs 💪!</p>
<p>In lieu of a present, I'll accept connections on <a target="_blank" href="https://linkedin.com/in/martzmakes/">LinkedIn</a>.</p>
<p>Here's what this blog post, when deployed, will look like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757517771527/46bb594d-96ca-43ef-8d6e-eb0da5ccafb3.gif" alt class="image--center mx-auto" /></p>
<p><em>Find the complete code on</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito"><em>GitHub</em></a><em>.</em></p>
<h2 id="heading-the-problem-ai-assistants-need-better-access-to-your-world">The Problem: AI Assistants Need Better Access to Your World</h2>
<p>Picture this: You're using Claude to help with your daily work, but it needs access to your company's internal APIs, databases, or custom tools. Traditional approaches involve:</p>
<ul>
<li><p>Hard-coding integrations that break with every update</p>
</li>
<li><p>Managing always-on servers that sit idle 99% of the time</p>
</li>
<li><p>Complex authentication flows that are either insecure or user-hostile</p>
</li>
<li><p>Paying for infrastructure whether it's used or not</p>
</li>
</ul>
<p><strong>Enter the Model Context Protocol (MCP)</strong> - Anthropic's game-changing open standard that's revolutionizing how AI assistants interact with the digital world. <em>Learn more at</em> <a target="_blank" href="https://modelcontextprotocol.io"><em>modelcontextprotocol.io</em></a></p>
<h2 id="heading-why-serverless-mcp-changes-everything">Why Serverless MCP Changes Everything</h2>
<p>Here's the killer combination: <strong>MCP + Serverless = Magic</strong></p>
<p>MCP servers are naturally request/response based - they wake up when Claude needs something, do their job, and go back to sleep. This is <em>exactly</em> what serverless excels at. You're not paying for idle servers waiting for the occasional request from an AI assistant. You're paying for actual usage - pennies per thousand requests.</p>
<p>But here's where it gets really interesting: <strong>The serverless advantage compounds when you consider the MCP ecosystem</strong>. Imagine thousands of specialized MCP servers, each handling different tools and data sources:</p>
<ul>
<li><p>Your CRM integration MCP server</p>
</li>
<li><p>Your analytics dashboard MCP server</p>
</li>
<li><p>Your code deployment MCP server</p>
</li>
<li><p>Your customer support MCP server</p>
</li>
</ul>
<p>In a traditional architecture, you'd need infrastructure for each. With serverless? They all scale to zero when not in use. <strong>You could have 100 MCP servers and pay nothing when they're idle.</strong></p>
<h2 id="heading-what-were-building-a-production-blueprint">What We're Building: A Production Blueprint</h2>
<p>This isn't another "Hello World" tutorial. We're building a <strong>production-ready, secure, scalable MCP server</strong> that demonstrates enterprise-grade patterns you can use immediately. Our example - a Dog Facts server - is intentionally simple so we can focus on the architecture that matters.</p>
<h3 id="heading-the-three-pillar-architecture">The Three-Pillar Architecture</h3>
<p>We've designed a modular system with three distinct components, each handling a critical piece of the puzzle:</p>
<pre><code class="lang-plaintext">┌─────────────────────┐    ┌─────────────────────┐    ┌─────────────────────┐
│  McpAuthConstruct   │───▶│ McpLambdaConstruct  │───▶│McpApiGatewayConstruct│
│   "The Gatekeeper"  │    │    "The Worker"     │    │   "The Gateway"     │
│                     │    │                     │    │                     │
│ • OAuth 2.0 + PKCE  │    │ • Your Logic Here   │    │ • Auto-discovery    │
│ • Self-registration │    │ • Pay-per-request   │    │ • RFC 9728 Support  │
│ • Enterprise SSO    │    │ • Scales to zero    │    │ • Custom domains    │
└─────────────────────┘    └─────────────────────┘    └─────────────────────┘
</code></pre>
<h2 id="heading-the-authentication-revolution-custom-dynamic-client-registration">The Authentication Revolution: Custom Dynamic Client Registration</h2>
<p>Here's something most developers don't know exists: <strong>Dynamic Client Registration (DCR)</strong>. It's OAuth 2.0's best-kept secret and it's perfect for the MCP ecosystem.</p>
<p>Instead of manually configuring OAuth clients for every tool that wants to connect to your MCP server, DCR allows clients to register themselves programmatically. Claude Code (an MCP client in Anthropic's tooling) can literally say "Hi, I'd like to access this MCP server" and get its own OAuth credentials automatically.</p>
<p><strong>Critical Implementation Detail</strong>: AWS Cognito doesn't natively implement RFC 7591 Dynamic Client Registration. What we're building is a <strong>custom DCR-compatible endpoint</strong> that uses API Gateway and VTL templates to call Cognito's <code>CreateUserPoolClient</code> API. This gives us the DCR workflow while working within AWS's authentication infrastructure, but with limitations: our endpoint is an API Gateway facade over CreateUserPoolClient, not full RFC 7591 registration semantics.</p>
<h3 id="heading-why-this-matters-for-mcp">Why This Matters for MCP</h3>
<p>Traditional OAuth flow:</p>
<ol>
<li><p>Developer manually creates OAuth client in console</p>
</li>
<li><p>Copies client ID and secret</p>
</li>
<li><p>Configures application</p>
</li>
<li><p>Hopes nothing breaks</p>
</li>
</ol>
<p>DCR-enabled MCP flow:</p>
<ol>
<li><p>Claude Code discovers your MCP server</p>
</li>
<li><p>Registers itself automatically</p>
</li>
<li><p>Starts using your tools immediately</p>
</li>
</ol>
<p><strong>This is the difference between minutes and days of setup time.</strong></p>
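<p>On the wire, that self-registration is a single unauthenticated POST. Here's a hypothetical example (the domain and redirect URI are illustrative placeholders; the <code>/connect/register</code> path comes from the RFC 8414 metadata shown later in this post):</p>
<pre><code class="lang-typescript">// Hypothetical DCR registration call (illustrative values, not the repo's)
const res = await fetch("https://mcp-example.yourdomain.com/connect/register", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    client_name: "claude-code",
    redirect_uris: ["http://localhost:1234/callback"], // placeholder
  }),
});
const { client_id } = await res.json(); // no client_secret: PKCE public client
</code></pre>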
<h3 id="heading-the-implementation-secret">The Implementation Secret</h3>
<p>Here's a crucial implementation detail that took me longer than I'd like to admit to figure out:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// IMPORTANT: Must use CLASSIC_HOSTED_UI for DCR support</span>
managedLoginVersion: ManagedLoginVersion.CLASSIC_HOSTED_UI,
</code></pre>
<p>AWS Cognito's newer managed login UI doesn't support dynamically registered clients because each client requires a theme configuration that doesn't get created automatically. This is a practical gotcha discovered during implementation. Use <a target="_blank" href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-app-ui-customization.html">Classic Hosted UI</a> for DCR compatibility until AWS guarantees branding defaults for dynamic clients. See <a target="_blank" href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-hosted-ui.html">Managed Login</a> documentation for the newer option's requirements.</p>
<h2 id="heading-the-serverless-advantage-in-action">The Serverless Advantage in Action</h2>
<p>Let's talk real numbers. A typical MCP server handling 1,000 requests per day:</p>
<p><strong>Traditional Server (t3.micro)</strong>:</p>
<ul>
<li><p>Monthly cost: ~$7.50*</p>
</li>
<li><p>Always running, mostly idle</p>
</li>
<li><p>Fixed capacity</p>
</li>
<li><p>Maintenance overhead</p>
</li>
</ul>
<p><strong>Serverless MCP</strong>:</p>
<ul>
<li><p>Monthly cost: ~$0.11**</p>
</li>
<li><p>Scales to zero</p>
</li>
<li><p>Scales automatically within account concurrency and per-function limits</p>
</li>
<li><p>Minimal ops overhead (soft limit defaults, request increases available via <a target="_blank" href="https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html">AWS Service Quotas</a>)</p>
</li>
</ul>
<p>* <em>EC2 instance cost only, excluding storage, data transfer, load balancer (~$16-25/month)</em><br />** <em>Lambda: 30,000 invocations × 100ms avg at 2048MB = 6,000 GB-seconds × $0.0000166667/GB-second ≈ $0.10/month + $0.006 requests ≈ $0.11/month</em> (<a target="_blank" href="https://aws.amazon.com/lambda/pricing/">AWS Lambda Pricing</a>)</p>
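<p>If you want to sanity-check these numbers yourself, here's a minimal cost calculator (it assumes the x86 GB-second rate quoted above; ARM64 is slightly cheaper):</p>
<pre><code class="lang-typescript">const GB_SECOND = 0.0000166667; // USD per GB-second (x86 rate)
const PER_REQUEST = 0.20 / 1_000_000; // USD per invocation

const lambdaMonthlyCost = (invocations: number, avgMs: number, memoryMb: number) =&gt;
  invocations * (avgMs / 1000) * (memoryMb / 1024) * GB_SECOND +
  invocations * PER_REQUEST;

console.log(lambdaMonthlyCost(30_000, 100, 2048).toFixed(2)); // "0.11"
</code></pre>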
<p>But here's the real kicker: <strong>Most MCP servers will handle far fewer than 1,000 requests per day</strong>. Your internal tool integration might get 10 requests a week. With serverless, you're paying fractions of a penny.</p>
<p>More importantly... if you practice Domain-Driven Design and end up with an MCP server per knowledge domain, a traditional server works out to ~$7.50 per knowledge domain. If you're an indie dev and want to incorporate MCPs into your side projects, that could add up quickly OR you have to combine them / manage them more carefully. If you're practicing with <a target="_blank" href="https://martzmakes.com/destroy-their-stacks-ephemeral-cdk-stacks-as-a-service">Ephemeral CDK Stacks</a>, maybe that's not a big deal though.</p>
<h2 id="heading-building-your-first-serverless-mcp-server">Building Your First Serverless MCP Server</h2>
<p>Let's build something real. Here's our complete Dog Facts MCP server:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> DogFactsServer <span class="hljs-keyword">implements</span> IMCPServer {
  initialize(): MCPInitializeResult {
    <span class="hljs-keyword">return</span> {
      protocolVersion: <span class="hljs-string">"2025-06-18"</span>,  <span class="hljs-comment">// Latest MCP specification</span>
      capabilities: { tools: {} },
      serverInfo: { name: <span class="hljs-string">"dog-facts-server"</span>, version: <span class="hljs-string">"1.0.0"</span> }
    };
  }

  listTools(): MCPToolsListResult {
    <span class="hljs-keyword">return</span> {
      tools: [{
        name: <span class="hljs-string">"getDogFacts"</span>,
        description: <span class="hljs-string">"Get random facts about dogs"</span>,
        inputSchema: {
          <span class="hljs-keyword">type</span>: <span class="hljs-string">"object"</span>,
          properties: {
            limit: {
              <span class="hljs-keyword">type</span>: <span class="hljs-string">"number"</span>,
              description: <span class="hljs-string">"Maximum facts to return (1-10)"</span>,
              minimum: <span class="hljs-number">1</span>,
              maximum: <span class="hljs-number">10</span>,
              <span class="hljs-keyword">default</span>: <span class="hljs-number">5</span>
            }
          }
        }
      }]
    };
  }

  <span class="hljs-keyword">async</span> callTool(params: MCPToolCallParams): <span class="hljs-built_in">Promise</span>&lt;MCPToolCallResult&gt; {
    <span class="hljs-keyword">const</span> { name, <span class="hljs-built_in">arguments</span>: args } = params;

    <span class="hljs-keyword">if</span> (name === <span class="hljs-string">"getDogFacts"</span>) {
      <span class="hljs-keyword">const</span> limit = <span class="hljs-built_in">Math</span>.min(<span class="hljs-built_in">Math</span>.max((args?.limit <span class="hljs-keyword">as</span> <span class="hljs-built_in">number</span>) || <span class="hljs-number">5</span>, <span class="hljs-number">1</span>), <span class="hljs-number">10</span>);
      <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">`https://dogapi.dog/api/v2/facts?limit=<span class="hljs-subst">${limit}</span>`</span>);
      <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> response.json();
      <span class="hljs-keyword">const</span> facts = data.data.map(<span class="hljs-function"><span class="hljs-params">fact</span> =&gt;</span> fact.attributes.body);

      <span class="hljs-keyword">return</span> {
        content: [{
          <span class="hljs-keyword">type</span>: <span class="hljs-string">"text"</span>,
          text: facts.map(<span class="hljs-function">(<span class="hljs-params">fact, i</span>) =&gt;</span> <span class="hljs-string">`<span class="hljs-subst">${i + <span class="hljs-number">1</span>}</span>. <span class="hljs-subst">${fact}</span>`</span>).join(<span class="hljs-string">'\n\n'</span>)
        }]
      };
    }

    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">`Unknown tool: <span class="hljs-subst">${name}</span>`</span>);
  }
}
</code></pre>
<p>This simple tool (<code>getDogFacts</code>) just makes a fetch request to an external service. Instead, you could integrate it with DynamoDB queries, add a knowledge base, or wire in ReAct agents... you can do pretty much anything here.</p>
<p>This is all the code you need. The framework handles:</p>
<ul>
<li><p>OAuth authentication</p>
</li>
<li><p>JSON-RPC protocol</p>
</li>
<li><p>Error handling</p>
</li>
<li><p>CORS</p>
</li>
<li><p>Scaling</p>
</li>
<li><p>Monitoring</p>
</li>
</ul>
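<p>Under the hood, "handles the JSON-RPC protocol" boils down to a small dispatcher. Here's a hypothetical sketch of what the framework's Lambda entry point does (names and shapes are illustrative, not the repo's actual code):</p>
<pre><code class="lang-typescript">import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const server = new DogFactsServer();

export const handler = async (event: APIGatewayProxyEvent): Promise&lt;APIGatewayProxyResult&gt; =&gt; {
  const rpc = JSON.parse(event.body ?? "{}");
  let result;
  switch (rpc.method) {
    case "initialize": result = server.initialize(); break;
    case "tools/list": result = server.listTools(); break;
    case "tools/call": result = await server.callTool(rpc.params); break;
    default:
      return respond({ jsonrpc: "2.0", id: rpc.id, error: { code: -32601, message: "Method not found" } });
  }
  return respond({ jsonrpc: "2.0", id: rpc.id, result });
};

const respond = (body: object): APIGatewayProxyResult =&gt; ({
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body),
});
</code></pre>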
<h2 id="heading-advanced-patterns-for-production">Advanced Patterns for Production</h2>
<h3 id="heading-pattern-1-multi-tool-servers">Pattern 1: Multi-Tool Servers</h3>
<p>Don't create separate MCP servers for related functionality or knowledge domains. Bundle them:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> CompanyToolsServer <span class="hljs-keyword">implements</span> IMCPServer {
  listTools() {
    <span class="hljs-keyword">return</span> {
      tools: [
        { name: <span class="hljs-string">"searchEmployees"</span>, <span class="hljs-comment">/* ... */</span> },
        { name: <span class="hljs-string">"getRoomAvailability"</span>, <span class="hljs-comment">/* ... */</span> },
        { name: <span class="hljs-string">"submitExpenseReport"</span>, <span class="hljs-comment">/* ... */</span> },
        { name: <span class="hljs-string">"checkDeploymentStatus"</span>, <span class="hljs-comment">/* ... */</span> }
      ]
    };
  }
}
</code></pre>
<p>One Lambda, multiple tools, single authentication flow.</p>
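<p>Inside that class, <code>callTool</code> can route across the bundled tools with a handler map; a sketch (the tool methods are the hypothetical ones from the listing above):</p>
<pre><code class="lang-typescript">async callTool(params: MCPToolCallParams): Promise&lt;MCPToolCallResult&gt; {
  const handlers: Record&lt;string, (args: unknown) =&gt; Promise&lt;MCPToolCallResult&gt;&gt; = {
    searchEmployees: (args) =&gt; this.searchEmployees(args),
    getRoomAvailability: (args) =&gt; this.getRoomAvailability(args),
    submitExpenseReport: (args) =&gt; this.submitExpenseReport(args),
    checkDeploymentStatus: (args) =&gt; this.checkDeploymentStatus(args),
  };
  const handler = handlers[params.name];
  if (!handler) throw new Error(`Unknown tool: ${params.name}`);
  return handler(params.arguments);
}
</code></pre>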
<p>Since these are also packaged as CDK constructs, it would be easy to create an inner-sourced construct library of MCPs and include them as default items in your stacks... each stack could automatically get its own MCP! To be clear... I would centralize Cognito, so that you only have to manage one user pool and then you'd get access to all of the MCPs in your AWS account.</p>
<h3 id="heading-pattern-2-async-operations-with-step-functions">Pattern 2: Async Operations with Step Functions</h3>
<p>For long-running operations, combine MCP with Step Functions:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> callTool(params) {
  <span class="hljs-keyword">if</span> (params.name === <span class="hljs-string">"generateReport"</span>) {
    <span class="hljs-comment">// Start Step Function execution</span>
    <span class="hljs-keyword">const</span> executionArn = <span class="hljs-keyword">await</span> startReportGeneration(params);

    <span class="hljs-keyword">return</span> {
      content: [{
        <span class="hljs-keyword">type</span>: <span class="hljs-string">"text"</span>,
        text: <span class="hljs-string">`Report generation started. Check status with execution ID: <span class="hljs-subst">${executionArn}</span>`</span>
      }]
    };
  }
}
</code></pre>
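<p>The <code>startReportGeneration</code> helper isn't shown above; one way to implement it with the AWS SDK v3 Step Functions client (this assumes a <code>STATE_MACHINE_ARN</code> environment variable, which is not part of the example repo):</p>
<pre><code class="lang-typescript">import { SFNClient, StartExecutionCommand } from "@aws-sdk/client-sfn";

const sfn = new SFNClient({});

const startReportGeneration = async (params: MCPToolCallParams): Promise&lt;string&gt; =&gt; {
  const { executionArn } = await sfn.send(
    new StartExecutionCommand({
      stateMachineArn: process.env.STATE_MACHINE_ARN!,
      input: JSON.stringify(params.arguments ?? {}),
    })
  );
  return executionArn!;
};
</code></pre>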
<h2 id="heading-the-enterprise-story">The Enterprise Story</h2>
<p>Imagine you're a Fortune 500 company with hundreds of internal tools. Traditional approach:</p>
<ul>
<li><p>Months of integration work per tool</p>
</li>
<li><p>Expensive API gateway infrastructure</p>
</li>
<li><p>Complex authentication federation</p>
</li>
<li><p>Ongoing maintenance nightmare</p>
</li>
</ul>
<p>With serverless MCP:</p>
<ol>
<li><p><strong>Week 1</strong>: Deploy MCP framework</p>
</li>
<li><p><strong>Week 2</strong>: First 10 tools integrated</p>
</li>
<li><p><strong>Month 1</strong>: 50 tools available to Claude</p>
</li>
<li><p><strong>Month 2</strong>: Entire organization using AI-powered tools</p>
</li>
</ol>
<p>Cost for 50 MCP servers handling 100,000 requests/month total: <strong>~$1/month</strong>*</p>
<p>Cost for traditional infrastructure: <strong>~$375/month plus maintenance</strong></p>
<p>* <em>100,000 Lambda invocations × 150ms avg at 2048MB = 30,000 GB-seconds × $0.0000166667/GB-second ≈ $0.50/month + $0.02 requests + API Gateway (REST, $3.50/million requests) ≈ $0.35/month</em> (<a target="_blank" href="https://aws.amazon.com/lambda/pricing/">AWS Lambda Pricing</a>)</p>
<h2 id="heading-security-enterprise-grade-by-default">Security: Enterprise-Grade by Default</h2>
<p>Our implementation includes:</p>
<ul>
<li><p><strong>OAuth 2.0 with PKCE</strong>: No client secrets, perfect for public clients. <em>Note: Refresh tokens for public clients have shorter lifespans and require proper rotation handling</em></p>
</li>
<li><p><strong>Cognito User Pools</strong>: Enterprise SSO ready</p>
</li>
<li><p><strong>Scope-based authorization</strong>: Fine-grained access control</p>
</li>
<li><p><strong>API Gateway throttling</strong>: Rate limiting and AWS Shield Standard reduce volumetric risk (organizations should add WAF and rate plans for comprehensive protection)</p>
</li>
<li><p><strong>CloudWatch integration</strong>: Full audit trail</p>
</li>
<li><p><strong>VPC options</strong>: Private connectivity when needed</p>
</li>
</ul>
<h2 id="heading-deploy-your-own-in-10-minutes">Deploy Your Own in ~10 Minutes*</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># Clone the repository</span>
git <span class="hljs-built_in">clone</span> https://github.com/martzmakes/mcp-cdk-lambda-cognito
<span class="hljs-built_in">cd</span> mcp-cdk-lambda-cognito

<span class="hljs-comment"># Install dependencies</span>
npm install

<span class="hljs-comment"># Deploy to AWS (custom domain required for .well-known endpoints)</span>
npx cdk deploy

<span class="hljs-comment"># Done! Your MCP server URL will be in the outputs</span>
</code></pre>
<p>* <em>Custom domain is</em> <strong><em>required</em></strong> <em>because Claude Code and other MCP clients use the base domain for</em> <code>.well-known</code> paths, which bypasses API Gateway's stage path. ACM DNS validation adds 2-10 minutes for certificate creation.</p>
<h2 id="heading-what-you-can-build-real-world-examples">What You Can Build: Real-World Examples</h2>
<h3 id="heading-customer-support-mcp">Customer Support MCP</h3>
<pre><code class="lang-typescript">tools: [
  <span class="hljs-string">"searchTickets"</span>,
  <span class="hljs-string">"updateTicketStatus"</span>, 
  <span class="hljs-string">"getCustomerHistory"</span>,
  <span class="hljs-string">"escalateToManager"</span>
]
</code></pre>
<p>Let Claude handle tier-1 support with access to your ticketing system.</p>
<h3 id="heading-devops-mcp">DevOps MCP</h3>
<pre><code class="lang-typescript">tools: [
  <span class="hljs-string">"checkDeploymentStatus"</span>,
  <span class="hljs-string">"rollbackRelease"</span>,
  <span class="hljs-string">"queryMetrics"</span>,
  <span class="hljs-string">"pageOnCall"</span>
]
</code></pre>
<p>Claude becomes your intelligent ops assistant.</p>
<h3 id="heading-analytics-mcp">Analytics MCP</h3>
<pre><code class="lang-typescript">tools: [
  <span class="hljs-string">"runSQLQuery"</span>,
  <span class="hljs-string">"generateReport"</span>,
  <span class="hljs-string">"exportDashboard"</span>,
  <span class="hljs-string">"scheduleAlert"</span>
]
</code></pre>
<p>Natural language to insights, instantly.</p>
<h2 id="heading-the-future-is-serverless-mcp">The Future is Serverless MCP</h2>
<p>We're in the midst of a revolution. With major platforms like OpenAI, Google DeepMind, and Microsoft adopting MCP in 2025, the serverless advantage becomes overwhelming:</p>
<ul>
<li><p><strong>Ecosystem explosion</strong>: Thousands of MCP servers, all scaling independently</p>
</li>
<li><p><strong>Universal adoption</strong>: <a target="_blank" href="https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/">OpenAI adoption March 26, 2025</a>, <a target="_blank" href="https://x.com/demishassabis/status/1910107859041271977">Google DeepMind April 2025</a>, <a target="_blank" href="https://www.microsoft.com/en-us/microsoft-copilot/blog/copilot-studio/model-context-protocol-mcp-is-now-generally-available-in-microsoft-copilot-studio/">Microsoft Copilot Studio GA May 29, 2025</a></p>
</li>
<li><p><strong>Cost approaching zero</strong>: Pay only for actual AI assistance</p>
</li>
<li><p><strong>Instant integration</strong>: Tools become AI-ready in minutes, not months</p>
</li>
<li><p><strong>Standardization wins</strong>: The <a target="_blank" href="https://modelcontextprotocol.io/specification/2025-06-18/">MCP specification 2025-06-18</a> is now the widely adopted open standard</p>
</li>
</ul>
<h2 id="heading-conclusion-why-this-matters">Conclusion: Why This Matters</h2>
<p>MCP has fundamentally changed how we think about AI integration. With 2025's universal adoption across all major AI platforms, serverless makes it economically viable at any scale. Together, they're democratizing AI-powered automation.</p>
<p>The architecture we've built here isn't just another example - it's a <strong>production blueprint</strong> you can use today to start building the AI-integrated future. Whether you're a startup looking to give Claude access to your tools or an enterprise wanting to modernize your AI strategy, serverless MCP is your answer.</p>
<p>With OpenAI, Google, and Microsoft all standardizing on <a target="_blank" href="https://modelcontextprotocol.io">MCP</a>, <strong>the question isn't whether to build MCP servers. It's how many you'll build this month.</strong></p>
<hr />
<h2 id="heading-next-steps">Next Steps</h2>
<ol>
<li><p><strong>Deploy the example</strong>: Get hands-on with the code</p>
</li>
<li><p><strong>Build your first custom server</strong>: Start with one internal tool</p>
</li>
<li><p><strong>Share with the community</strong>: The MCP ecosystem grows with every contribution</p>
</li>
</ol>
<p><em>Ready to dive deeper into serverless patterns? Check out my post on</em> <a target="_blank" href="https://martzmakes.com/secure-your-serverless-app-with-cognitos-managed-login-pages"><em>securing serverless apps with Cognito's managed login pages</em></a><em>.</em></p>
<p><em>Find the complete code on</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito"><em>GitHub</em></a><em>.</em></p>
<hr />
<h2 id="heading-technical-deep-dive-implementation-details">Technical Deep Dive: Implementation Details</h2>
<h3 id="heading-the-three-construct-architecture-deep-dive">The Three-Construct Architecture: Deep Dive</h3>
<p>Our modular approach separates concerns into three specialized constructs, each with a clean interface and specific responsibility. Here's how intermediate CDK developers can leverage this pattern:</p>
<h4 id="heading-mcpauthconstruct-the-oauth-foundation">McpAuthConstruct: The OAuth Foundation</h4>
<p><em>Full implementation:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/constructs/mcp-auth-construct.ts"><code>lib/constructs/mcp-auth-construct.ts</code></a></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> McpAuthConstructProps {
  serverName: <span class="hljs-built_in">string</span>;  <span class="hljs-comment">// Only required input</span>
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> McpAuthResult {
  userPool: UserPool;
  resourceServer: UserPoolResourceServer;
  oauthScopes: OAuthScope[];
  clientId: <span class="hljs-built_in">string</span>;
  authUrl: <span class="hljs-built_in">string</span>;
  tokenUrl: <span class="hljs-built_in">string</span>;
  oauthScope: <span class="hljs-built_in">string</span>;
  signInUrl: <span class="hljs-built_in">string</span>;
}
</code></pre>
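<p>Instantiating it is a one-liner; this matches the stack composition shown later in this post:</p>
<pre><code class="lang-typescript">const authConstruct = new McpAuthConstruct(this, "Auth", { serverName: "dog-facts" });
// authConstruct.result exposes the userPool, clientId, authUrl, tokenUrl, etc.
</code></pre>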
<p><strong>The Critical DCR Implementation Detail</strong>: Here's the configuration that enables Dynamic Client Registration:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// IMPORTANT: Must use CLASSIC_HOSTED_UI for DCR support</span>
<span class="hljs-keyword">const</span> userPoolDomain = userPool.addDomain(<span class="hljs-string">"McpAuthUserPoolDomain"</span>, {
  cognitoDomain: {
    domainPrefix: <span class="hljs-string">`mcp-<span class="hljs-subst">${serverName}</span>-<span class="hljs-subst">${domainHash}</span>`</span>,
  },
  managedLoginVersion: ManagedLoginVersion.CLASSIC_HOSTED_UI, <span class="hljs-comment">// 🔑 KEY!</span>
});
</code></pre>
<p><em>Source:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/constructs/mcp-auth-construct.ts#L65-L67"><code>mcp-auth-construct.ts:65-67</code></a></p>
<p><strong>Why this matters</strong>: AWS Cognito's newer managed login UI doesn't support dynamically registered clients because each client requires a theme configuration that doesn't get created automatically. This single line is the difference between DCR working and spending hours debugging OAuth flows.</p>
<p><strong>Domain Collision Prevention</strong>: Notice the <code>domainHash</code> pattern:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> domainHash = <span class="hljs-built_in">this</span>.node.addr.substring(<span class="hljs-number">0</span>, <span class="hljs-number">8</span>);
<span class="hljs-keyword">const</span> domainPrefix = <span class="hljs-string">`mcp-<span class="hljs-subst">${serverName}</span>-<span class="hljs-subst">${domainHash}</span>`</span>;
</code></pre>
<p><em>Source:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/constructs/mcp-auth-construct.ts#L60-L63"><code>mcp-auth-construct.ts:60-63</code></a></p>
<p>This uses CDK's internal node addressing to create unique domain prefixes, preventing conflicts when multiple developers deploy the same stack.</p>
<h4 id="heading-mcplambdaconstruct-optimized-for-performance">McpLambdaConstruct: Optimized for Performance</h4>
<p><em>Full implementation:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/constructs/mcp-lambda-construct.ts"><code>lib/constructs/mcp-lambda-construct.ts</code></a></p>
<pre><code class="lang-typescript"><span class="hljs-built_in">this</span>.lambdaFunction = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"function"</span>, {
  entry: path.join(__dirname, <span class="hljs-string">"../lambda/mcp.ts"</span>),
  functionName: <span class="hljs-string">`mcp-server-<span class="hljs-subst">${serverName}</span>`</span>,
  memorySize: <span class="hljs-number">2048</span>,                    <span class="hljs-comment">// Sweet spot for CPU allocation</span>
  timeout: Duration.seconds(<span class="hljs-number">29</span>),       <span class="hljs-comment">// Just under API Gateway's 30s limit*</span>
  architecture: Architecture.ARM_64,   <span class="hljs-comment">// 20% better price/performance</span>
  runtime: Runtime.NODEJS_22_X,       <span class="hljs-comment">// Latest runtime for better cold starts</span>
  environment: {
    LOG_LEVEL: logLevel,
  },
  bundling: {
    format: OutputFormat.ESM,          <span class="hljs-comment">// Modern modules, better tree-shaking</span>
    mainFields: [<span class="hljs-string">"module"</span>, <span class="hljs-string">"main"</span>],    <span class="hljs-comment">// Prioritize ES modules</span>
  },
});
</code></pre>
<p><em>Source:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/constructs/mcp-lambda-construct.ts#L38-L53"><code>mcp-lambda-construct.ts:38-53</code></a></p>
<p><strong>Performance Optimization Notes</strong>:</p>
<ul>
<li><p><strong>ARM64</strong>: Graviton2/3 processors offer significantly better price/performance</p>
</li>
<li><p><strong>2048MB</strong>: This memory allocation provides optimal CPU-to-memory ratio for most workloads</p>
</li>
<li><p><strong>ESM Format</strong>: Modern bundling reduces cold start times and bundle size</p>
</li>
<li><p><strong>29-second timeout</strong>: Keeps us safely under API Gateway's 30-second limit*</p>
</li>
</ul>
<p>* <em>Since mid-2024, some REST API timeouts can be increased via</em> <a target="_blank" href="https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html"><em>quota requests</em></a><em>, but 29s remains the safe default for user-facing requests</em></p>
<h4 id="heading-mcpapigatewayconstruct-the-complex-integration-layer">McpApiGatewayConstruct: The Complex Integration Layer</h4>
<p><em>Full implementation:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/constructs/mcp-api-gateway-construct.ts"><code>lib/constructs/mcp-api-gateway-construct.ts</code></a></p>
<p>This is where the CDK complexity really shows its value. Here's the DCR endpoint implementation:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">private</span> createDcrEndpoint(api: RestApi, userPool: UserPool, serverName: <span class="hljs-built_in">string</span>) {
  <span class="hljs-comment">// Create IAM role for API Gateway to call Cognito</span>
  <span class="hljs-keyword">const</span> cognitoIntegrationRole = <span class="hljs-keyword">new</span> Role(<span class="hljs-built_in">this</span>, <span class="hljs-string">"CognitoIntegrationRole"</span>, {
    assumedBy: <span class="hljs-keyword">new</span> ServicePrincipal(<span class="hljs-string">"apigateway.amazonaws.com"</span>),
    inlinePolicies: {
      CognitoAccess: <span class="hljs-keyword">new</span> PolicyDocument({
        statements: [
          <span class="hljs-keyword">new</span> PolicyStatement({
            effect: Effect.ALLOW,
            actions: [<span class="hljs-string">"cognito-idp:CreateUserPoolClient"</span>],
            resources: [userPool.userPoolArn],
          }),
        ],
      }),
    },
  });

  <span class="hljs-comment">// AWS Integration directly with Cognito API</span>
  <span class="hljs-comment">// Note: This is a custom DCR implementation, not native RFC 7591 support</span>
  <span class="hljs-keyword">const</span> dcrIntegration = <span class="hljs-keyword">new</span> AwsIntegration({
    service: <span class="hljs-string">"cognito-idp"</span>,
    action: <span class="hljs-string">"CreateUserPoolClient"</span>,
    options: {
      credentialsRole: cognitoIntegrationRole,
      requestTemplates: {
        <span class="hljs-string">"application/json"</span>: <span class="hljs-string">`#set($rawName = $input.path('$.client_name'))
#if(!$rawName || $rawName == "")
  #set($rawName = "client")
#end
#set($name1 = $rawName.trim())
#set($name2 = $name1.replaceAll("[^\\\\w\\\\s+=,.@-]", ""))
#set($safeName = $util.escapeJavaScript($name2))
#if($safeName.length() &gt; 128)
  #set($safeName = $safeName.substring(0,128))
#end

#set($cb = $input.json('$.redirect_uris'))
#if(!$cb) #set($cb = '[]') #end
{
  "UserPoolId": "<span class="hljs-subst">${userPool.userPoolId}</span>",
  "ClientName": "$safeName",
  "CallbackURLs": $cb,
  "AllowedOAuthFlows": ["code"],
  "AllowedOAuthFlowsUserPoolClient": true,
  "AllowedOAuthScopes": ["mcp-<span class="hljs-subst">${serverName}</span>/<span class="hljs-subst">${serverName}</span>", "openid", "email", "profile"],
  "SupportedIdentityProviders": ["COGNITO"],
  "GenerateSecret": false  // 🔑 This enables PKCE!
}`</span>
      }
    }
  });
}
</code></pre>
<p><em>Source:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/constructs/mcp-api-gateway-construct.ts#L315-L398"><code>mcp-api-gateway-construct.ts:315-398</code></a></p>
<p><strong>VTL Template Deep Dive</strong>: This Velocity Template Language (VTL) template is doing critical work:</p>
<ol>
<li><p><strong>Input Sanitization</strong>: Removes dangerous characters from client names</p>
</li>
<li><p><strong>Length Validation</strong>: Ensures client names don't exceed Cognito's 128-char limit</p>
</li>
<li><p><strong>PKCE Configuration</strong>: <code>"GenerateSecret": false</code> is what makes PKCE work</p>
</li>
<li><p><strong>Scope Assignment</strong>: Automatically assigns the correct MCP scope</p>
</li>
</ol>
<p><strong>The Response Transformation</strong>:</p>
<pre><code class="lang-typescript">responseTemplates: {
  <span class="hljs-string">"application/json"</span>: <span class="hljs-string">`{
    "client_id": $input.json('$.UserPoolClient.ClientId'),
    "client_name": $input.json('$.UserPoolClient.ClientName'),
    "redirect_uris": $input.json('$.UserPoolClient.CallbackURLs'),
    "response_types": ["code"],
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "none",  // PKCE indicator
    "scope": "mcp-<span class="hljs-subst">${serverName}</span>/<span class="hljs-subst">${serverName}</span> openid email profile"
  }`</span>
}
</code></pre>
<p>This transforms Cognito's API response into RFC 7591-compliant DCR response format.</p>
<h3 id="heading-oauth-metadata-endpoints-rfc-compliance-made-easy">OAuth Metadata Endpoints: RFC Compliance Made Easy</h3>
<p><strong>OAuth Protected Resource Metadata (</strong><a target="_blank" href="https://datatracker.ietf.org/doc/rfc9728/"><strong>RFC 9728, published April 2025</strong></a><strong>)</strong>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> metadataIntegration = <span class="hljs-keyword">new</span> MockIntegration({
  integrationResponses: [{
    statusCode: <span class="hljs-string">"200"</span>,
    responseTemplates: {
      <span class="hljs-string">"application/json"</span>: <span class="hljs-built_in">JSON</span>.stringify({
        resource_name: <span class="hljs-string">`<span class="hljs-subst">${serverName}</span> MCP Server`</span>,
        resource: finalApiUrl,
        authorization_servers: [
          <span class="hljs-string">`https://<span class="hljs-subst">${customDomain.customDomainName}</span>/.well-known/oauth-authorization-server`</span>
        ],
        scopes_supported: [<span class="hljs-string">`mcp-<span class="hljs-subst">${serverName}</span>/<span class="hljs-subst">${serverName}</span>`</span>],
        bearer_methods_supported: [<span class="hljs-string">"header"</span>],
      }, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>)
    }
  }]
});
</code></pre>
<p><strong>RFC 8414 Authorization Server Metadata</strong>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> authServerMetadataIntegration = <span class="hljs-keyword">new</span> MockIntegration({
  integrationResponses: [{
    responseTemplates: {
      <span class="hljs-string">"application/json"</span>: <span class="hljs-built_in">JSON</span>.stringify({
        issuer: oauthConfig.authUrl.split(<span class="hljs-string">"/oauth2/authorize"</span>)[<span class="hljs-number">0</span>],
        authorization_endpoint: oauthConfig.authUrl,
        token_endpoint: oauthConfig.tokenUrl,
        registration_endpoint: <span class="hljs-string">`https://<span class="hljs-subst">${customDomain.customDomainName}</span>/connect/register`</span>,
        response_types_supported: [<span class="hljs-string">"code"</span>],
        grant_types_supported: [<span class="hljs-string">"authorization_code"</span>, <span class="hljs-string">"client_credentials"</span>],
        code_challenge_methods_supported: [<span class="hljs-string">"S256"</span>],
        token_endpoint_auth_methods_supported: [<span class="hljs-string">"client_secret_post"</span>, <span class="hljs-string">"client_secret_basic"</span>, <span class="hljs-string">"none"</span>]
      }, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>)
    }
  }]
});
</code></pre>
<p>Using <code>MockIntegration</code> here is a CDK pattern that lets us return static JSON without invoking Lambda, keeping costs near zero for metadata requests.</p>
<p><strong>Critical Architecture Note</strong>: Custom domains are <strong>required</strong> for MCP servers because clients like Claude Code make <code>.well-known</code> requests to the base domain (e.g., <code>https://example.com/.well-known/oauth-protected-resource</code>), which bypasses API Gateway's stage path entirely. Without a custom domain, these discovery endpoints would return 404s.</p>
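<p>Concretely, discovery looks like this (using this post's example domain; note the client never sees the <code>/prod</code> stage path):</p>
<pre><code class="lang-typescript">// MCP clients resolve OAuth metadata against the *base* domain
const meta = await fetch(
  "https://mcp-dogfacts.martzmakes.com/.well-known/oauth-protected-resource"
).then((r) =&gt; r.json());
console.log(meta.authorization_servers); // points at the RFC 8414 metadata above
</code></pre>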
<h3 id="heading-stack-orchestration-the-78-line-marvel">Stack Orchestration: The 78-Line Marvel</h3>
<p><em>Full implementation:</em> <a target="_blank" href="https://github.com/martzmakes/mcp-cdk-lambda-cognito/blob/main/lib/mcp-cdk-lambda-cognito-stack.ts"><code>lib/mcp-cdk-lambda-cognito-stack.ts</code></a></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> McpCdkLambdaCognitoStack <span class="hljs-keyword">extends</span> cdk.Stack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: McpCdkLambdaCognitoProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);

    <span class="hljs-keyword">const</span> serverName = <span class="hljs-string">"dog-facts"</span>;
    <span class="hljs-keyword">const</span> customDomainName = <span class="hljs-string">`mcp-dogfacts.martzmakes.com`</span>;

    <span class="hljs-comment">// Certificate and DNS setup</span>
    <span class="hljs-keyword">const</span> hostedZone = HostedZone.fromLookup(<span class="hljs-built_in">this</span>, <span class="hljs-string">"HostedZone"</span>, {
      domainName: <span class="hljs-string">"martzmakes.com"</span>,
    });

    <span class="hljs-keyword">const</span> certificate = <span class="hljs-keyword">new</span> Certificate(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Certificate"</span>, {
      domainName: customDomainName,
      validation: CertificateValidation.fromDns(hostedZone),
    });

    <span class="hljs-comment">// Three-construct composition</span>
    <span class="hljs-keyword">const</span> authConstruct = <span class="hljs-keyword">new</span> McpAuthConstruct(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Auth"</span>, { serverName });
    <span class="hljs-keyword">const</span> lambdaConstruct = <span class="hljs-keyword">new</span> McpLambdaConstruct(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Lambda"</span>, { serverName });
    <span class="hljs-keyword">const</span> apiGatewayConstruct = <span class="hljs-keyword">new</span> McpApiGatewayConstruct(<span class="hljs-built_in">this</span>, <span class="hljs-string">"ApiGateway"</span>, {
      serverName,
      lambdaFunction: lambdaConstruct.lambdaFunction,
      userPool: authConstruct.result.userPool,
      resourceServer: authConstruct.result.resourceServer,
      oauthScopes: authConstruct.result.oauthScopes,
      oauthConfig: {
        clientId: authConstruct.result.clientId,
        authUrl: authConstruct.result.authUrl,
        tokenUrl: authConstruct.result.tokenUrl,
        scope: authConstruct.result.oauthScope,
      },
      customDomain: { customDomainName, certificate, hostedZone },
    });
  }
}
</code></pre>
<p><strong>Pattern Highlights for CDK Users</strong>:</p>
<ol>
<li><p><strong>Construct Dependency Flow</strong>: Auth → Lambda → API Gateway, with clean interfaces</p>
</li>
<li><p><strong>Custom Domain Integration</strong>: ACM certificate with DNS validation</p>
</li>
<li><p><strong>Result Object Pattern</strong>: Each construct exposes a clean <code>result</code> interface</p>
</li>
<li><p><strong>Resource Naming</strong>: Consistent <code>serverName</code>-based naming throughout</p>
</li>
</ol>
<h3 id="heading-advanced-cdk-patterns-in-action">Advanced CDK Patterns in Action</h3>
<p><strong>Gateway Responses for OAuth Compliance</strong>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> GatewayResponse(<span class="hljs-built_in">this</span>, <span class="hljs-string">"UnauthorizedResponse"</span>, {
  restApi: api,
  <span class="hljs-keyword">type</span>: ResponseType.UNAUTHORIZED,
  statusCode: <span class="hljs-string">"401"</span>,
  responseHeaders: {
    <span class="hljs-string">"WWW-Authenticate"</span>: <span class="hljs-string">"'Bearer realm=\"MCP Server\", error=\"invalid_request\"'"</span>,
  },
});
</code></pre>
<p>This ensures proper HTTP error responses that comply with OAuth 2.0 Bearer Token specification.</p>
<p><strong>CORS Configuration for MCP Clients</strong>:</p>
<pre><code class="lang-typescript">defaultCorsPreflightOptions: {
  allowOrigins: Cors.ALL_ORIGINS,
  allowMethods: Cors.ALL_METHODS,
  allowHeaders: Cors.DEFAULT_HEADERS.concat([<span class="hljs-string">"Authorization"</span>]),
}
</code></pre>
<p>Essential for browser-based MCP clients that need to make cross-origin requests.</p>
<h3 id="heading-the-type-system-that-saves-you">The Type System That Saves You</h3>
<p>Our construct interfaces enforce compile-time correctness:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> McpApiGatewayConstructProps {
  serverName: <span class="hljs-built_in">string</span>;
  lambdaFunction: NodejsFunction;
  userPool: UserPool;
  resourceServer: UserPoolResourceServer;
  oauthScopes: OAuthScope[];
  oauthConfig: {
    clientId: <span class="hljs-built_in">string</span>;
    authUrl: <span class="hljs-built_in">string</span>;
    tokenUrl: <span class="hljs-built_in">string</span>;
    scope: <span class="hljs-built_in">string</span>;
  };
  customDomain: {
    customDomainName: <span class="hljs-built_in">string</span>;
    certificate: Certificate;
    hostedZone: IHostedZone;
  };
}
</code></pre>
<p>You literally cannot wire constructs incorrectly. TypeScript prevents entire categories of deployment failures.</p>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757517479844/d0f7f129-c30d-425a-bb5d-21b78bffd2b4.jpeg" alt="dog tax from my late pup astro" class="image--center mx-auto" /></p>
<p>(dog tax from my late pup Astro)</p>
<p><em>Let's build the future of AI integration together. One serverless function at a time.</em></p>
]]></content:encoded></item><item><title><![CDATA[One Client, Two Worlds: Building a Type-Safe API and ReAct Agent Interface with Zod, LangGraph, and AWS]]></title><description><![CDATA[In this post, I’ll show you how I built a single TypeScript framework for creating robust, schema-validated API clients that serve both humans and ReAct agents without duplication. Using Zod, AWS Serverless tech, and a sprinkle of LangGraph, this pro...]]></description><link>https://martzmakes.com/one-client-two-worlds-building-a-type-safe-api-and-react-agent-interface-with-zod-langgraph-and-aws</link><guid isPermaLink="true">https://martzmakes.com/one-client-two-worlds-building-a-type-safe-api-and-react-agent-interface-with-zod-langgraph-and-aws</guid><category><![CDATA[zod]]></category><category><![CDATA[AWS]]></category><category><![CDATA[CDK]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[langchain]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Sat, 26 Apr 2025 18:55:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745691488727/bbf36142-d062-4139-9f0b-bd4c784af821.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, I’ll show you how I built a single TypeScript framework for creating <strong>robust, schema-validated API clients that serve both humans and ReAct agents without duplication</strong>. Using <a target="_blank" href="https://zod.dev/">Zod</a>, AWS Serverless tech, and a sprinkle of <a target="_blank" href="https://www.langchain.com/langgraph">LangGraph</a>, this project creates a unified client that's consumable out of the box — for apps, Lambdas, and LLMs alike. No translation layers. No schema drift. Just seamless, scalable type safety.</p>
<p>If you’ve ever stitched together a backend client and an LLM agent and wondered why you’ve got two subtly divergent ways of calling your API, this one's for you.</p>
<p>The code for this post is here: <a target="_blank" href="https://github.com/martzmakes/cdk-zod-agent">https://github.com/martzmakes/cdk-zod-agent</a></p>
<h2 id="heading-why-aws-cdk-and-serverless">Why AWS, CDK, and Serverless?</h2>
<p>As an AWS Community Builder, I’m passionate about leveraging AWS’s powerful cloud-native services to build scalable, reliable, and cost-effective solutions. This project is built from the ground up using AWS CDK (Cloud Development Kit) to define infrastructure as code, and it’s fully serverless—every API endpoint is powered by AWS Lambda, with DynamoDB as the persistent data store.</p>
<h3 id="heading-aws-cdk-infrastructure-as-code-supercharged">AWS CDK: Infrastructure as Code, Supercharged</h3>
<p>The AWS CDK lets you define your entire cloud architecture in TypeScript, making it easy to version, review, and evolve your infrastructure alongside your application code. In this project, the <code>lib/constructs/</code> directory contains reusable CDK constructs for DynamoDB tables, Lambda functions, and internal API routing. The stack is defined in <code>lib/cdk-zod-agent-stack.ts</code>, and you can deploy the whole system with a single command.</p>
<p>If you’d like to learn more about using AWS CDK, I put together a crash course of it for freeCodeCamp: <a target="_blank" href="https://www.youtube.com/watch?v=T-H4nJQyMig">https://www.youtube.com/watch?v=T-H4nJQyMig</a></p>
<h2 id="heading-why-zod">Why Zod?</h2>
<p><a target="_blank" href="https://zod.dev/">Zod</a> lets you define schemas for your data and validate at runtime, giving you the confidence of TypeScript types with the safety net of runtime checks. In this project, every API endpoint is defined with Zod schemas for both requests and responses. That means whether you’re adding a new hero or logging a daring rescue, you know your data is always in tip-top shape.</p>
<p>Here’s the core insight: define your endpoints and schemas once with Zod, and you get <strong>two clients for the price of one</strong>—your human backend code and your ReAct agent both use the same strongly-typed, runtime-validated API client.</p>
<p>No discrepancies. No parallel validation stacks. No translation layers. Just one clean, unified source of truth.</p>
<h2 id="heading-the-api-client-framework-your-hero-utility-belt">The API Client Framework: Your Hero Utility Belt</h2>
<p>Inside the <code>package/</code> folder, you’ll find the core of the framework:</p>
<ul>
<li><p><strong>Endpoint Definitions</strong>: Each endpoint is defined with <code>defineZodEndpoint</code>, specifying the path, HTTP method, and Zod schemas for request/response.</p>
</li>
<li><p><strong>Client Generator</strong>: <code>createApiClient</code> takes these definitions and generates a fully-typed client. Each method validates input/output with Zod.</p>
</li>
<li><p><strong>Path Parameter Magic</strong>: The framework auto-generates Zod schemas for path parameters.</p>
</li>
</ul>
<p>The real magic happens in <code>helpers/endpoints.ts</code>:</p>
<ul>
<li><p><strong>TypeScript Path Parameter Extraction</strong>: Recursively parses a path like <code>/heroes/{hero}/rescues/{rescueId}</code> into <code>{ hero: string; rescueId: string }</code>.</p>
</li>
<li><p><strong>Zod Path Parameter Schemas</strong>: Extracts <code>{param}</code> segments from paths and builds runtime validation schemas.</p>
</li>
<li><p><strong>Endpoint Definition</strong>: Validates both request and response payloads at runtime and compile time.</p>
</li>
</ul>
<p>Example:</p>
<pre><code class="lang-typescript">addHero: defineZodEndpoint({
  path: <span class="hljs-string">"/heroes"</span>,
  method: <span class="hljs-string">"POST"</span>,
  schemas: {
    request: AddHeroRequestSchema,
    response: AddHeroResponseSchema,
  },
  description: <span class="hljs-string">"Add a new hero to the system."</span>
})
</code></pre>
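<p>Under the hood, the path parameter extraction is a recursive template-literal type. A minimal sketch (the version in <code>helpers/endpoints.ts</code> is more complete):</p>
<pre><code class="lang-typescript">type ExtractPathParams&lt;Path extends string&gt; =
  Path extends `${infer _Pre}{${infer Param}}${infer Rest}`
    ? { [K in Param]: string } &amp; ExtractPathParams&lt;Rest&gt;
    : {};

type RescueParams = ExtractPathParams&lt;"/heroes/{hero}/rescues/{rescueId}"&gt;;
// =&gt; { hero: string } &amp; { rescueId: string }
</code></pre>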
<p>You get full request/response validation, path parameter type inference—everything you’d expect for an internal client. But—crucially—those very same schemas and definitions are directly leveraged to generate callable, Zod-validated “tools” for your ReAct agent.</p>
<h3 id="heading-how-zod-parsing-works-with-the-api-client">How Zod Parsing Works with the API Client</h3>
<p>When using <code>createApiClient</code>:</p>
<ul>
<li><p>Validates the request body before API call.</p>
</li>
<li><p>Validates the response after API call.</p>
</li>
<li><p>Validates path parameters.</p>
</li>
</ul>
<p>The <strong>same client instance</strong> can be:</p>
<ul>
<li><p>Imported into Lambda handlers and backend services.</p>
</li>
<li><p>Wrapped into agent tools for LLM workflows.</p>
</li>
</ul>
<p>One source of truth. One validation story.</p>
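<p>For illustration, here's a hypothetical call shape mirroring the <code>{ pathParameters, body }</code> structure used by <code>generateTools</code> later in this post (the field names are made up; the real schemas live in the repo):</p>
<pre><code class="lang-typescript">const client = heroClient(process.env.API_ID!);
// body is validated against AddHeroRequestSchema before the request is sent
await client.addHero({ body: { name: "Krypto" } });
// path parameters are typed and validated, too
await client.listHeroRescues({ pathParameters: { hero: "Krypto" } });
</code></pre>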
<h2 id="heading-lambda-handlers-fortress-of-validation-and-serverless-power">Lambda Handlers: Fortress of Validation (and Serverless Power)</h2>
<p>Lambda handlers use the same Zod schemas for validation. <code>initApiHandler</code> wraps your handler to:</p>
<ul>
<li><p>Parse and validate the incoming event body.</p>
</li>
<li><p>Validate the outgoing response.</p>
</li>
</ul>
<p>Example:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = initApiHandler({
  apiHandler,
  inputSchema: AddHeroRequestSchema,
  outputSchema: AddHeroResponseSchema,
});
</code></pre>
<p>While creating the apiHandler for the addHero endpoint, you get a pre-typed body:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745689747520/cf8047c8-b9b9-49fa-a913-7a28ea6cf539.png" alt class="image--center mx-auto" /></p>
<p>Similarly… the listHeroRescues endpoint’s path parameters are also typed:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745689838196/819f6bc7-5bd4-48c8-94db-6439aabd1f24.png" alt class="image--center mx-auto" /></p>
<p>When the LLM chooses an action, it’s calling the same code-path as your backend logic. This makes logs, errors, and even side effects far more predictable when debugging.</p>
<h2 id="heading-deep-dive-the-react-agent-in-action">Deep Dive: The ReAct Agent in Action</h2>
<p><img src="https://github.com/martzmakes/cdk-zod-agent/blob/main/superman-example.gif?raw=true" alt class="image--center mx-auto" /></p>
<h3 id="heading-1-auto-generating-tools-from-the-api-client">1. Auto-Generating Tools from the API Client</h3>
<p>Every endpoint becomes a tool validated with Zod:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> generateTools = &lt;T <span class="hljs-keyword">extends</span> Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">any</span>&gt;&gt;<span class="hljs-function">(<span class="hljs-params">client: T</span>) =&gt;</span> {
  <span class="hljs-keyword">return</span> <span class="hljs-built_in">Object</span>.keys(client)
    .map(<span class="hljs-function">(<span class="hljs-params">key</span>) =&gt;</span> { <span class="hljs-comment">/* generates validated tools */</span> })
    .filter(<span class="hljs-function">(<span class="hljs-params">tool</span>) =&gt;</span> tool !== <span class="hljs-literal">null</span>);
};
</code></pre>
<p>These tools are <strong>directly generated from the same client</strong> you use in your backend.</p>
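<p>The elided body is roughly one LangChain structured tool per endpoint; a sketch using <code>DynamicStructuredTool</code> (the repo's implementation details may differ):</p>
<pre><code class="lang-typescript">import { DynamicStructuredTool } from "@langchain/core/tools";

new DynamicStructuredTool({
  name: key,
  description: endpoint.description,
  schema: combinedSchema, // the { pathParameters, body } Zod object shown later
  func: async (input) =&gt; JSON.stringify(await (client as any)[key](input)),
});
</code></pre>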
<h3 id="heading-2-making-tools-available-to-the-agent">2. Making Tools Available to the Agent</h3>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> tools = [...generateTools(heroClient(process.env.API_ID!))];
</code></pre>
<p>As soon as a new endpoint is defined, it's available to the agent—no custom adapter required.</p>
<h3 id="heading-3-customizing-the-agents-prompt">3. Customizing the Agent's Prompt</h3>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> prompt = <span class="hljs-function">(<span class="hljs-params">state, config</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> userName = config.configurable?.userName || <span class="hljs-string">"Human"</span>;
  <span class="hljs-keyword">return</span> [{ role: <span class="hljs-string">"system"</span>, content: <span class="hljs-string">`You are a helpful assistant. Address the user as <span class="hljs-subst">${userName}</span>.`</span> }, ...state.messages];
};
</code></pre>
<h3 id="heading-4-running-the-agent">4. Running the Agent</h3>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> agent.invoke({
  messages: [<span class="hljs-keyword">new</span> HumanMessage(<span class="hljs-string">`What can you do?`</span>)],
}, {
  configurable: { userName: <span class="hljs-string">"Matt"</span>, userId: <span class="hljs-string">"9efe72ed-b182-46b1-bc96-f125b7042599"</span> }
});
</code></pre>
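<p>The <code>agent</code> itself comes from LangGraph's prebuilt ReAct helper; a sketch (option names vary slightly across LangGraph versions, and any LangChain chat model works here):</p>
<pre><code class="lang-typescript">import { createReactAgent } from "@langchain/langgraph/prebuilt";

const agent = createReactAgent({
  llm: model, // any LangChain chat model instance
  tools,
  prompt,
});
</code></pre>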
<h3 id="heading-5-why-this-approach-is-super">5. Why This Approach is Super</h3>
<ul>
<li><p><strong>Safety</strong>: Zod validation prevents malformed requests.</p>
</li>
<li><p><strong>Flexibility</strong>: New endpoints become instantly usable by both humans and agents.</p>
</li>
<li><p><strong>Observability</strong>: Unified logs and error handling across both human and LLM calls.</p>
</li>
</ul>
<h3 id="heading-how-the-react-agent-knows-exactly-how-to-format-api-calls">How the ReAct Agent Knows Exactly How to Format API Calls</h3>
<p>Every tool generated from the API client is backed by a <strong>Zod schema</strong> that defines the expected structure for inputs (path parameters and body) and outputs.</p>
<p>When a ReAct agent is reasoning about which tool to call, it uses the tool's schema to automatically <strong>format requests correctly</strong> — including nesting, field names, required parameters, and types — without needing any extra code or custom adapters.</p>
<p>Because the tool's input schema is a real Zod object, the agent (through LangChain) can:</p>
<ul>
<li><p><strong>Auto-suggest</strong> the correct fields during planning.</p>
</li>
<li><p><strong>Validate</strong> its proposed actions before sending them.</p>
</li>
<li><p><strong>Receive clear, structured error messages</strong> if something goes wrong.</p>
</li>
</ul>
<p>In short: the Zod schema acts like an instruction manual and a safety net at the same time — making sure the agent speaks the API’s language natively.</p>
<p>Example from <code>generateTools</code>:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">const</span> combinedSchema = z.object({
  pathParameters: pathParametersSchema || z.object({}),
  body: requestSchema || z.null(),
});
</code></pre>
<p>This means every tool knows exactly what shape of data to expect, and the agent never has to guess.</p>
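<p>To make that concrete, here’s a hedged sketch of how a schema like this can back a LangChain tool. The <code>getHero</code> endpoint and <code>callEndpoint</code> transport are hypothetical stand-ins, not the repo’s actual code:</p>
<pre><code class="lang-typescript">import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical transport -- the real client wraps fetch with auth, logging, etc.
const callEndpoint = async (method: string, path: string) =&gt;
  (await fetch(`https://example.execute-api.us-east-1.amazonaws.com${path}`, { method })).json();

// Stands in for one generated entry: path params are typed, this endpoint has no body
const combinedSchema = z.object({
  pathParameters: z.object({ heroId: z.string() }),
  body: z.null(),
});

const getHeroTool = new DynamicStructuredTool({
  name: "getHero",
  description: "Fetch a hero by id",
  schema: combinedSchema,
  // LangChain validates the LLM's proposed arguments against the Zod schema before this runs
  func: async ({ pathParameters }) =&gt;
    JSON.stringify(await callEndpoint("GET", `/heroes/${pathParameters.heroId}`)),
});
</code></pre>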
<h2 id="heading-practical-takeaways">Practical Takeaways</h2>
<ul>
<li><p>Share definition and validation logic between humans and agents as a first principle.</p>
</li>
<li><p>If it isn’t easy for a human to call, it’s probably a minefield for the LLM.</p>
</li>
<li><p>Avoid translation adapters and parallel validation stacks wherever possible.</p>
</li>
<li><p>Build one honest client and let both humans and AI use it safely.</p>
</li>
</ul>
<hr />
<h2 id="heading-up-up-and-away">Up, Up, and Away!</h2>
<p>By combining AWS, CDK, Serverless, Zod, TypeScript, and LangGraph, you get a framework that’s type-safe, runtime-safe, and <strong>seamlessly integrated</strong> with ReAct agents. Whether you’re building for superheroes or just want to avoid villainous bugs, this approach keeps your API and clients in perfect harmony—for both humans and machines.</p>
<p>Check out the code, try it out, and let me know how you’re using AWS and Zod to save your own day!</p>
]]></content:encoded></item><item><title><![CDATA[Crafting the Ultimate Serverless Discord Slash Bot with AWS Lambda]]></title><description><![CDATA[Unleash the power of serverless architecture by building a Discord bot that elegantly processes slash commands using AWS Lambda and AWS CDK. We’ll tap into the power of constructs from @martzmakes/constructs to streamline our infrastructure managemen...]]></description><link>https://martzmakes.com/crafting-the-ultimate-serverless-discord-slash-bot-with-aws-lambda</link><guid isPermaLink="true">https://martzmakes.com/crafting-the-ultimate-serverless-discord-slash-bot-with-aws-lambda</guid><category><![CDATA[discord]]></category><category><![CDATA[serverless]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[lambda]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Sat, 22 Feb 2025 20:30:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740256187222/9f5c63da-5e68-4a27-9cff-1061b1dbdfec.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Unleash the power of serverless architecture by building a Discord bot that elegantly processes slash commands using AWS Lambda and AWS CDK. We’ll tap into the power of constructs from <code>@martzmakes/constructs</code> to streamline our infrastructure management, ensuring your bot is scalable, cost-effective, and easy to maintain—all without requiring you to manage any servers.</p>
<p>The code for this blog post is located at: <a target="_blank" href="https://github.com/martzmakes/discord-lambda">https://github.com/martzmakes/discord-lambda</a></p>
<p>The CDK Construct library I used for this project is located at: <a target="_blank" href="https://github.com/martzmakes/constructs">https://github.com/martzmakes/constructs</a></p>
<h2 id="heading-the-power-of-serverless-why-aws-lambda-is-your-best-bet">The Power of Serverless: Why AWS Lambda Is Your Best Bet</h2>
<p>AWS Lambda presents a compelling case for bot creators:</p>
<ul>
<li><p><strong>Cost Efficiency</strong>: Pay only for computing time used—no need to maintain costly idle servers.</p>
</li>
<li><p><strong>Scalability on Demand</strong>: Effortlessly manage traffic surges with Lambda’s auto-scaling capabilities.</p>
</li>
<li><p><strong>Simplicity</strong>: Focus on building features while AWS manages the infrastructure.</p>
</li>
</ul>
<p>Unlike an ECS-based bot that could rack up recurring costs due to constant server activity, Lambda charges only during actual code execution, saving you money when demand is low.</p>
<h2 id="heading-getting-started-setting-up-your-discord-bot">Getting Started: Setting Up Your Discord Bot</h2>
<p>First, lay the groundwork for your Discord bot:</p>
<ol>
<li><p>Create a new app in the <a target="_blank" href="https://discord.com/developers/applications">Discord Developer Portal</a>.</p>
</li>
<li><p>Build your bot under the Bot section and note down your bot token and application ID.</p>
</li>
<li><p>Store these credentials securely in AWS Secrets Manager:</p>
</li>
</ol>
<pre><code class="lang-typescript">{
  <span class="hljs-string">"BOT_TOKEN"</span>: <span class="hljs-string">"&lt;tokenhere&gt;"</span>,
  <span class="hljs-string">"APPLICATION_ID"</span>: <span class="hljs-string">"&lt;application id&gt;"</span>,
  <span class="hljs-string">"DISCORD_PUBLIC_KEY"</span>: <span class="hljs-string">"&lt;public key&gt;"</span>
}
</code></pre>
<ol start="4">
<li><p>Add the secret ARN to the project’s bin file (it becomes the <code>discordSecretArn</code> prop in the code below).</p>
</li>
<li><p>Deploy your CDK project… the remaining steps need the deployed endpoint, so skip ahead to the CDK sections below and come back here once the deployment finishes.</p>
</li>
<li><p>Add your interactions endpoint to your bot’s configuration</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740255926897/d84f029e-0469-4325-bfc4-0aed4bcfaa9a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add the permissions you need</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740255605075/23aabd14-8f71-4d84-95c2-46ad7a0b91aa.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Use the OAuth2 URL from the Developer Portal to add your bot to the server.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740255702690/c63ff9d1-da6e-4891-a5fa-8be6c144183b.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h2 id="heading-understanding-the-cdk-structure">Understanding the CDK Structure</h2>
<p>Your project begins with setting up the entry point (<code>discord-lambda.ts</code>):</p>
<pre><code class="lang-typescript"><span class="hljs-meta">#!/usr/bin/env node</span>
<span class="hljs-keyword">import</span> { App } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib'</span>;
<span class="hljs-keyword">import</span> { DiscordLambdaStack } <span class="hljs-keyword">from</span> <span class="hljs-string">'../lib/discord-lambda-stack'</span>;

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> App();
<span class="hljs-keyword">new</span> DiscordLambdaStack(app, <span class="hljs-string">'DiscordLambdaStack'</span>, {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
  envName: <span class="hljs-string">'main'</span>,
  eventSource: <span class="hljs-string">'discordLambda'</span>,
  discordSecretArn: <span class="hljs-string">'put your discord secret arn here'</span>,
  domainName: <span class="hljs-string">'martzmakes.com'</span>,
});
</code></pre>
<p>This initializes a <code>DiscordLambdaStack</code> for organizing and managing resources, leveraging constructs for seamless AWS integration.</p>
<h3 id="heading-leveraging-martzmakesconstructs">Leveraging <code>@martzmakes/constructs</code></h3>
<p>Explore the potency of these constructs:</p>
<ul>
<li><p><strong>Discord Construct</strong>: Simplifies setup involving API Gateway and command registration.</p>
</li>
<li><p><strong>Lambda Construct</strong>: Ensures optimal Lambda configuration with added flexibility.</p>
</li>
</ul>
<h2 id="heading-the-build-process-from-stack-to-lambda-setup">The Build Process: From Stack to Lambda Setup</h2>
<p>Dive deeper into resource configuration in the stack (<code>discord-lambda-stack.ts</code>):</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { MMStackProps } <span class="hljs-keyword">from</span> <span class="hljs-string">"@martzmakes/constructs/cdk/interfaces/MMStackProps"</span>;
<span class="hljs-keyword">import</span> { MMStack } <span class="hljs-keyword">from</span> <span class="hljs-string">"@martzmakes/constructs/cdk/stacks/MMStack"</span>;
<span class="hljs-keyword">import</span> { Discord } <span class="hljs-keyword">from</span> <span class="hljs-string">"@martzmakes/constructs/cdk/constructs/discord"</span>;
<span class="hljs-keyword">import</span> { Lambda } <span class="hljs-keyword">from</span> <span class="hljs-string">"@martzmakes/constructs/cdk/constructs/lambda"</span>;
<span class="hljs-keyword">import</span> { join } <span class="hljs-keyword">from</span> <span class="hljs-string">"path"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> DiscordLambdaStackProps <span class="hljs-keyword">extends</span> MMStackProps {
  discordSecretArn: <span class="hljs-built_in">string</span>;
  domainName: <span class="hljs-built_in">string</span>;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> DiscordLambdaStack <span class="hljs-keyword">extends</span> MMStack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: DiscordLambdaStackProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);

    <span class="hljs-keyword">const</span> registerCommandLambda = <span class="hljs-keyword">new</span> Lambda(<span class="hljs-built_in">this</span>, <span class="hljs-string">'register'</span>, {
      entry: join(__dirname, <span class="hljs-string">`./fns/register-command.ts`</span>)
    });

    <span class="hljs-keyword">const</span> interactionsLambda = <span class="hljs-keyword">new</span> Lambda(<span class="hljs-built_in">this</span>, <span class="hljs-string">'interactions'</span>, {
      entry: join(__dirname, <span class="hljs-string">`./fns/interactions.ts`</span>)
    });

    <span class="hljs-keyword">new</span> Discord(<span class="hljs-built_in">this</span>, <span class="hljs-string">'Discord'</span>, {
      discordSecretArn: props.discordSecretArn,
      domainName: props.domainName,
      interactionsLambda,
      registerCommandLambda,
    });
  }
}
</code></pre>
<p>Within this stack, you set up Lambda functions to handle command registrations and Discord interactions. The <code>Discord</code> construct takes care of API Gateway integration and domain configuration.</p>
<h2 id="heading-mastering-command-registration-and-interactions">Mastering Command Registration and Interactions</h2>
<h3 id="heading-command-registration-simplified">Command Registration Simplified</h3>
<p>Commands are registered with the Discord API through a custom resource that uses the bot’s credentials stored securely in Secrets Manager:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { initHandler } <span class="hljs-keyword">from</span> <span class="hljs-string">"@martzmakes/constructs/lambda/handlers/initHandler"</span>;
<span class="hljs-keyword">import</span> { registerCommands } <span class="hljs-keyword">from</span> <span class="hljs-string">"./utils/registerCommands"</span>;

<span class="hljs-keyword">const</span> main = <span class="hljs-keyword">async</span> (event: <span class="hljs-built_in">any</span>) =&gt; {
  <span class="hljs-keyword">await</span> registerCommands({});
  <span class="hljs-keyword">return</span> {
    PhysicalResourceId: event.PhysicalResourceId,
  }
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = initHandler({ handler: main });
</code></pre>
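<p>The <code>registerCommands</code> utility lives in the repo; as a rough sketch of the idea (the env var name and command payload here are my assumptions), it pulls the bot credentials from Secrets Manager and bulk-overwrites the application’s global slash commands via the Discord API:</p>
<pre><code class="lang-typescript">import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

export const registerCommands = async (_args: {} = {}) =&gt; {
  // Assumed env var pointing at the secret created earlier
  const sm = new SecretsManagerClient({});
  const { SecretString } = await sm.send(
    new GetSecretValueCommand({ SecretId: process.env.DISCORD_SECRET_ARN })
  );
  const { BOT_TOKEN, APPLICATION_ID } = JSON.parse(SecretString!);

  // Bulk overwrite of global slash commands (Discord API v10); type 3 = STRING option
  const res = await fetch(
    `https://discord.com/api/v10/applications/${APPLICATION_ID}/commands`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bot ${BOT_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify([
        {
          name: "roll",
          description: "Roll some dice",
          options: [
            { name: "input", description: "Dice like 2d20", type: 3, required: true },
          ],
        },
      ]),
    }
  );
  if (!res.ok) throw new Error(`Command registration failed: ${res.status}`);
};
</code></pre>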
<h3 id="heading-interaction-handling-magic">Interaction Handling Magic</h3>
<p>In the <code>interactions.ts</code>, manage how your bot will respond to commands:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> {
  DiscordInteractionHandler,
  initDiscordInteractionHandler,
} <span class="hljs-keyword">from</span> <span class="hljs-string">"@martzmakes/constructs/lambda/handlers/initDiscordInteractionsHandler"</span>;
<span class="hljs-keyword">import</span> { roll } <span class="hljs-keyword">from</span> <span class="hljs-string">"./utils/roll"</span>;

<span class="hljs-keyword">const</span> slashHandler: DiscordInteractionHandler&lt;<span class="hljs-built_in">any</span>, <span class="hljs-built_in">any</span>&gt; = <span class="hljs-keyword">async</span> ({
  body,
}) =&gt; {
  <span class="hljs-keyword">switch</span> (body.data.name) {
    <span class="hljs-keyword">case</span> <span class="hljs-string">"roll"</span>:
      <span class="hljs-keyword">return</span> {
        statusCode: <span class="hljs-number">200</span>,
        data: {
          <span class="hljs-keyword">type</span>: <span class="hljs-number">4</span>,
          data: {
            content: <span class="hljs-keyword">await</span> roll({ input: body.data.options[<span class="hljs-number">0</span>].value }),
          },
        },
      };
    <span class="hljs-keyword">default</span>:
      <span class="hljs-keyword">return</span> {
        statusCode: <span class="hljs-number">200</span>,
        data: {
          content: <span class="hljs-string">"Unknown command"</span>,
        },
      };
  }
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = initDiscordInteractionHandler({
  slashHandler,
});
</code></pre>
<p>Discord interactions span several categories, but our focus here is on handling application commands.</p>
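<p>The <code>roll</code> helper itself isn’t shown here; a minimal sketch of a dice roller like it might look as follows (the <code>NdM</code> input format, e.g. <code>2d20</code>, is my assumption):</p>
<pre><code class="lang-typescript">// Parses input like "2d20", rolls the dice, and returns a human-readable result
export const roll = async ({ input }: { input: string }): Promise&lt;string&gt; =&gt; {
  const match = input.trim().match(/^(\d+)d(\d+)$/i);
  if (!match) return `I couldn't parse "${input}" -- try something like 2d20`;
  const [, count, sides] = match.map(Number);
  const rolls = Array.from({ length: count }, () =&gt; 1 + Math.floor(Math.random() * sides));
  const total = rolls.reduce((a, b) =&gt; a + b, 0);
  return `🎲 ${input}: [${rolls.join(", ")}] = ${total}`;
};
</code></pre>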
<h2 id="heading-example-use-cases">Example Use Cases</h2>
<p>Want your bot to stand out? Here are some ideas:</p>
<ul>
<li><p><strong>Gaming Assistance</strong>: Command your bot to manage scores or deliver game statistics.</p>
</li>
<li><p><strong>Community Manager</strong>: Automate server moderation tasks with slash commands. Kick off processes to archive a channel's contents or perform LLM analysis.</p>
</li>
<li><p><strong>Content Curation</strong>: Use advanced AI to summarize key discussion topics and insights from the server.</p>
</li>
</ul>
<h3 id="heading-dungeons-and-dragons-bot-idea">Dungeons and Dragons Bot Idea</h3>
<p>Imagine a bot dedicated to Dungeons and Dragons that can help settle arguments between players (or players and the DM)… an impartial <em>Rule Judge,</em> if you will. I created this using event-driven architecture and LangChain to resolve game queries fairly and efficiently. If you’d like to learn more about RuleJudge, let me know.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740255444500/824e0d30-4ed0-4710-bf6b-cad4ed532451.webp" alt class="image--center mx-auto" /></p>
<h2 id="heading-limitations">Limitations</h2>
<p>While this approach is powerful, it has its boundaries:</p>
<ul>
<li><p><strong>Limited Interaction</strong>: Bots can't be directly DM'd or mentioned with <code>@</code>; command use is via slash commands, buttons, and modals only.</p>
</li>
<li><p><strong>Latency</strong>: Cold starts and network delays can affect response times.</p>
</li>
</ul>
<p>It's actually really disappointing that <code>@</code> mentioning the bot isn't considered an interaction. By contrast, Slack's API does treat <code>@</code> mentions as interactions.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>By adopting this serverless approach, you open up a multitude of possibilities for your Discord bot. Imagine seamlessly handling various slash command requests, orchestrating server activities, or even integrating AI features—all while enjoying the benefits of reduced costs and easy scalability that AWS Lambda offers. Your bot could become a versatile tool for communities, provide insightful server analytics, or even automate routine tasks for server admins. The combination of serverless architecture and Discord's interaction model empowers you to build a bot that is not only functional but also transformative in its capabilities.</p>
<p>Now, it's time for you to transform your Discord ideas into reality. With this guide, you have the blueprint to create a sophisticated, serverless bot ready to meet and exceed the needs of any community. Start experimenting, share your creations, and see how your bot can make a difference in the Discord ecosystem. Let's see what you build!</p>
]]></content:encoded></item><item><title><![CDATA[Secure Your Serverless App with Cognito’s Managed Login Pages]]></title><description><![CDATA[When you think about user authentication, you might picture wrestling with OAuth flows, wrestling with JWT tokens, or setting up a dozen redirects just to log a user in. But what if you could let AWS handle the heavy lifting for you? Enter Cognito’s ...]]></description><link>https://martzmakes.com/secure-your-serverless-app-with-cognitos-managed-login-pages</link><guid isPermaLink="true">https://martzmakes.com/secure-your-serverless-app-with-cognitos-managed-login-pages</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[lambda]]></category><category><![CDATA[Cognito]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[aws-apigateway]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Tue, 04 Feb 2025 03:59:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738641398247/c71c6e0f-37fa-4b80-b873-51e61b20eb5f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you think about user authentication, you might picture wrestling with OAuth flows, wrestling with JWT tokens, or setting up a dozen redirects just to log a user in. But what if you could let AWS handle the heavy lifting for you? Enter Cognito’s managed login pages—a slick, customizable way to authenticate users without a headache.</p>
<p>In this post, we’ll explore how to set up AWS Cognito’s managed login pages using AWS CDK to host a lambda-rendered site. We’ll also dive into cookie-based authorization using a Lambda Authorizer, allowing you to protect specific routes without exposing any backend code. If you’re looking for a hands-on example, you can check out the code here: <a target="_blank" href="https://github.com/martzmakes/cognito-hosted">https://github.com/martzmakes/cognito-hosted</a>.</p>
<h2 id="heading-hosting-the-website-with-lambda-rendering-and-api-proxying">Hosting the Website with Lambda Rendering and API Proxying</h2>
<p>One unique part of my setup is that the website is Lambda-rendered, with all non-/api paths proxied to the rendering Lambda. This means that all API requests happen on the same domain name via the /api resource, allowing the front-end to make relative calls to the backend without needing to define different API domains. This simplifies deployment and makes for easy blog examples that don’t require any knowledge of frontend frameworks.</p>
<p>The non-/api routes on the API Gateway trigger Lambda functions that return vanilla HTML with everything needed to render the site.</p>
<p>I am NOT recommending this for large-scale production apps, but it’s a great way to prototype serverless apps without needing a separate frontend stack. That said, frameworks like Next.js that support server-side rendering on Lambda could make use of this pattern to keep everything under the same API Gateway domain for a more seamless deployment. Being able to protect certain paths with the cookie-validating Lambda adds another layer of protection on top.</p>
<h2 id="heading-why-use-cognito-managed-login-pages">Why Use Cognito Managed Login Pages?</h2>
<p>AWS Cognito offers a fully managed user authentication and authorization service. By leveraging its hosted UI, you get:</p>
<ul>
<li><p><strong>Ease of Setup:</strong> No need to build your own login pages or OAuth flows.</p>
</li>
<li><p><strong>Security:</strong> AWS handles security best practices out of the box.</p>
</li>
<li><p><strong>Customization:</strong> Tailor the look and feel to match your application’s branding.</p>
</li>
</ul>
<h2 id="heading-initial-setup-with-cdk">Initial Setup with CDK</h2>
<p>When I was first creating the code for this blog post, I was using the regular <code>CognitoUserPoolsAuthorizer</code> construct. This construct expects a header with the Bearer <strong><em>id</em></strong> token that you get back from Cognito. By default it goes in the <code>Authorization</code> header, but you can change it with properties if you want. I started out with this code which is largely pulled <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_apigateway-readme.html#cognito-user-pools-authorizer">from the documentation</a>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> userPool = <span class="hljs-keyword">new</span> UserPool(<span class="hljs-built_in">this</span>, <span class="hljs-string">"UserPool"</span>);
<span class="hljs-keyword">const</span> domainPrefix = <span class="hljs-string">"martzmakes-example"</span>;
<span class="hljs-keyword">const</span> domain = userPool.addDomain(<span class="hljs-string">"CognitoDomainWithBlandingDesignManagedLogin"</span>,
  {
    cognitoDomain: { domainPrefix },
    managedLoginVersion: ManagedLoginVersion.NEWER_MANAGED_LOGIN,
  }
);

<span class="hljs-keyword">const</span> homeUrl = <span class="hljs-string">`https://<span class="hljs-subst">${clientBaseDomain}</span>/home`</span>;
<span class="hljs-keyword">const</span> client = <span class="hljs-keyword">new</span> UserPoolClient(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Client"</span>, {
  userPool,
  oAuth: {
    flows: {
      implicitCodeGrant: <span class="hljs-literal">true</span>,
    },
    callbackUrls: [<span class="hljs-string">`https://<span class="hljs-subst">${clientBaseDomain}</span>`</span>, homeUrl],
  },
});

domain.signInUrl(client, { redirectUri: homeUrl });
</code></pre>
<p>I create the <code>userPool</code>, add a <code>cognitoDomain</code> to it with <code>ManagedLoginVersion.NEWER_MANAGED_LOGIN</code> (which is the newer managed login page). <code>cognitoDomain</code> means that the domain will be hosted on an AWS URL and not your own. Then we create the <code>UserPoolClient</code> that the domain attaches to.</p>
<p>Out-of-the-box this seems like it should work… but it DOES NOT. If you deploy this as-is, opening the login page gives you a nondescript error saying there was a system error and to contact the administrator… which is rich since you ARE the admin. 🤦‍♂️</p>
<p>At first, I thought AWS was just messing with me. Everything should have been working. But after some furious clicking through the console, I stumbled upon the ‘Styles’ section of the App Client. Turns out, AWS simply refuses to render the login pages unless you define a branding resource—because why not add one more undocumented requirement? 😅</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738638781863/463066c1-51ed-4310-9ad9-9246b2908844.png" alt class="image--center mx-auto" /></p>
<p>I manually created a style (just to see if this worked) and that was indeed what was missing. After troubleshooting, I found that the issue stemmed from the absence of <code>CfnManagedLoginBranding</code>. This resource is essential because it defines the branding for Cognito’s managed login UI—without it, AWS simply refuses to render the login pages. While my initial setup wasn’t based on <a target="_blank" href="https://sbstjn.com/blog/aws-cdk-cognito-managed-login/">Sebastian Sturm's post</a>, his blog helped me identify this missing component, which ultimately resolved my issue.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// all I was missing was this...</span>
<span class="hljs-keyword">new</span> CfnManagedLoginBranding(<span class="hljs-built_in">this</span>, <span class="hljs-string">"ManagedLoginBranding"</span>, {
  userPoolId: userPool.userPoolId,
  clientId: client.userPoolClientId,
  returnMergedResources: <span class="hljs-literal">true</span>,
  useCognitoProvidedValues: <span class="hljs-literal">true</span>,
});
</code></pre>
<p>With the default styles in place I have a pretty nice looking login UI:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738638949944/30e59f6e-e15e-4ddb-937f-1036bf0dac9f.png" alt class="image--center mx-auto" /></p>
<p>Next, we can create the authorizer and attach it to a RestApi:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> restApi = <span class="hljs-keyword">new</span> RestApi(<span class="hljs-built_in">this</span>, <span class="hljs-string">`Api`</span>, {
  defaultCorsPreflightOptions: {
    allowOrigins: Cors.ALL_ORIGINS,
  },
  endpointConfiguration: {
    types: [EndpointType.REGIONAL],
  },
});
restApi.addDomainName(<span class="hljs-string">"domain"</span>, {
  domainName: clientBaseDomain,
  certificate,
});

<span class="hljs-keyword">new</span> ARecord(<span class="hljs-built_in">this</span>, <span class="hljs-string">"ARecord"</span>, { zone: hostedZone, recordName: clientBaseDomain, target: RecordTarget.fromAlias(<span class="hljs-keyword">new</span> ApiGateway(restApi)) });

<span class="hljs-keyword">const</span> apiFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"api"</span>, {
  entry: join(__dirname, <span class="hljs-string">"fns/api.ts"</span>),
  runtime: Runtime.NODEJS_LATEST,
  logGroup: <span class="hljs-keyword">new</span> LogGroup(<span class="hljs-built_in">this</span>, <span class="hljs-string">`/<span class="hljs-subst">${id}</span>ApiLogs`</span>, { logGroupName: <span class="hljs-string">`/<span class="hljs-subst">${id}</span>-api`</span>, removalPolicy: RemovalPolicy.DESTROY }),
  architecture: Architecture.ARM_64,
});

<span class="hljs-keyword">const</span> authorizer = <span class="hljs-keyword">new</span> CognitoUserPoolsAuthorizer(<span class="hljs-built_in">this</span>, <span class="hljs-string">`<span class="hljs-subst">${id}</span>UserPoolAuthorizer`</span>, { cognitoUserPools: [userPool], });

<span class="hljs-keyword">const</span> apiResource = restApi.root.addResource(<span class="hljs-string">"api"</span>);
apiResource.addProxy({
  anyMethod: <span class="hljs-literal">true</span>,
  defaultIntegration: <span class="hljs-keyword">new</span> LambdaIntegration(apiFn, { proxy: <span class="hljs-literal">true</span> }),
  defaultMethodOptions: {
    authorizer,
    authorizationType: AuthorizationType.COGNITO,
  },
});

<span class="hljs-keyword">const</span> fn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"site"</span>, {
  entry: join(__dirname, <span class="hljs-string">"fns/site.ts"</span>),
  runtime: Runtime.NODEJS_LATEST,
  logGroup: <span class="hljs-keyword">new</span> LogGroup(<span class="hljs-built_in">this</span>, <span class="hljs-string">`/<span class="hljs-subst">${id}</span>SiteLogs`</span>, { logGroupName: <span class="hljs-string">`/<span class="hljs-subst">${id}</span>-site`</span>, removalPolicy: RemovalPolicy.DESTROY }),
  architecture: Architecture.ARM_64,
  environment: {
    AUTH_PREFIX: domainPrefix,
    BASE_DOMAIN: clientBaseDomain,
    USER_POOL_CLIENT_ID: client.userPoolClientId,
    USER_POOL_ID: userPool.userPoolId,
  }
});

<span class="hljs-keyword">const</span> siteProxy = restApi.root.addProxy({
  anyMethod: <span class="hljs-literal">false</span>, <span class="hljs-comment">// Disables automatic handling of all methods</span>
});
siteProxy.addMethod(<span class="hljs-string">"GET"</span>, <span class="hljs-keyword">new</span> LambdaIntegration(fn));
</code></pre>
<p>The key parts of the above code: initially I use the <code>CognitoUserPoolsAuthorizer</code> and create a proxy on the <code>api</code> resource that uses it. That means all requests going to <code>/api</code> on the RestApi require the Authorization header with the Cognito id token. Finally, we create the lambda-rendered site proxy, which handles non-/api paths; requests to <code>/</code> and <code>/home</code>, for example, invoke the site lambda. Later in this blog post we’ll carve out a <code>/protected</code> path that requires authentication in order to display.</p>
<p>💡 <strong><em>Why the id token and not the access token?</em></strong> The <strong>Cognito User Pool Authorizer</strong> in API Gateway expects the <strong>ID token</strong> instead of the <strong>access token</strong> because it is designed primarily for authenticating users, <strong><em>NOT</em></strong> authorizing API access. You can get more fine grained with <em>authorization</em> by using a lambda authorizer.</p>
<h2 id="heading-lambda-based-site-rendering">Lambda-based Site Rendering</h2>
<p>My site lambda in this case is pretty simple. It always returns HTML in the response. The HTML includes hard-coded JavaScript (via script tags) that checks to see if a cookie has been set. If it hasn’t, it redirects to the Managed Auth page. When a user successfully signs in via the Managed Auth page, it redirects to the <code>/home</code> path and includes the Cognito id token in the URL hash fragment. We take that token and store it in a cookie for use with the fetch request. It sounds like a lot but it’s fairly simple JavaScript.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-keyword">return</span> {
    statusCode: <span class="hljs-number">200</span>,
    headers: {
      <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"text/html"</span>, <span class="hljs-comment">// required for proper browser rendering</span>
      <span class="hljs-string">"Access-Control-Allow-Origin"</span>: <span class="hljs-string">"*"</span>, <span class="hljs-comment">// Required for CORS support to work</span>
    },
    body: <span class="hljs-string">`&lt;!DOCTYPE html&gt;
&lt;html lang="en"&gt;
&lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt;
    &lt;title&gt;Lambda Page&lt;/title&gt;
    &lt;!-- styles removed for brevity --&gt;
    &lt;script type="module"&gt;
        function setCookie(name, value, days) {
            let expires = "";
            if (days) {
                const date = new Date();
                date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
                expires = "; expires=" + date.toUTCString();
            }
            document.cookie = name + "=" + value + "; path=/" + expires;
        }

        function getCookie(name) {
            const match = document.cookie.match(new RegExp('(^| )' + name + '=([^;]+)'));
            return match ? match[2] : null;
        }

        function parseHashParams() {
            const hash = window.location.hash.substring(1);
            const params = new URLSearchParams(hash);
            const idToken = params.get("id_token");
            const expiresIn = params.get("expires_in");

            if (idToken) {
                setCookie("CognitoIdToken", idToken, expiresIn / 86400);
                window.location.hash = "";
            }
        }

        function updateUI() {
            const idToken = getCookie("CognitoIdToken");
            document.getElementById("signIn").classList.toggle("hidden", !!idToken);
            document.getElementById("signOut").classList.toggle("hidden", !idToken);
        }

        async function fetchAPIData() {
            const idToken = getCookie("CognitoIdToken");
            if (!idToken) return;
            try {
                const response = await fetch('/api/hello', {
                    headers: { 'Authorization': \`Bearer \${idToken}\` }
                });
                const data = await response.json();
                document.getElementById("api-response").textContent = JSON.stringify(data, null, 2);
            } catch (error) {
                document.getElementById("api-response").textContent = "Error fetching API.";
            }
        }

        window.onload = () =&gt; {
            parseHashParams();
            updateUI();
            fetchAPIData();
        }

        window.signIn = function () {
            window.location.href = "https://<span class="hljs-subst">${process.env.AUTH_PREFIX}</span>.auth.us-east-1.amazoncognito.com/login?client_id=<span class="hljs-subst">${process.env.USER_POOL_CLIENT_ID}</span>&amp;response_type=token&amp;scope=aws.cognito.signin.user.admin+email+openid+phone+profile&amp;redirect_uri=https%3A%2F%2F<span class="hljs-subst">${process.env.BASE_DOMAIN}</span>%2Fhome";
        }

        window.signOut = function () {
            document.cookie = "CognitoIdToken=; expires=Thu, 01 Jan 1970 00:00:00 UTC; path=/;";
            updateUI();
            window.location.href = "https://<span class="hljs-subst">${process.env.AUTH_PREFIX}</span>.auth.us-east-1.amazoncognito.com/logout?client_id=<span class="hljs-subst">${process.env.USER_POOL_CLIENT_ID}</span>&amp;response_type=token&amp;scope=aws.cognito.signin.user.admin+email+openid+phone+profile&amp;redirect_uri=https%3A%2F%2F<span class="hljs-subst">${process.env.BASE_DOMAIN}</span>";
        }
    &lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;div class="container"&gt;
        &lt;h1&gt;Welcome to a Lambda-Powered Page&lt;/h1&gt;
        &lt;button id="signIn" class="button sign-in" onclick="signIn()"&gt;Sign In&lt;/button&gt;
        &lt;button id="signOut" class="button sign-out hidden" onclick="signOut()"&gt;Sign Out&lt;/button&gt;
        &lt;h2&gt;API Response&lt;/h2&gt;
        &lt;pre id="api-response"&gt;No data yet&lt;/pre&gt;
    &lt;/div&gt;
&lt;/body&gt;
&lt;/html&gt;`</span>,
  };
};
</code></pre>
<p>Note: In <code>fetchAPIData</code> we retrieve the cookie from the document and include that as the Authorization header in the request.</p>
<h2 id="heading-moving-to-lambda-based-authorization">Moving to Lambda-based Authorization</h2>
<p>While the Cognito User Pool Authorizer works well for API authentication, it relies on ID tokens in headers—which doesn’t help when protecting UI routes. Browsers don’t automatically send Authorization headers in GET requests, but they do send cookies. That’s why switching to a Lambda Authorizer with cookie-based authentication makes for a much smoother experience.</p>
<p>By switching to a Lambda-based Authorizer with cookie-based authentication, I could:</p>
<ul>
<li><p>Protect sub-routes across multiple Lambda-rendered pages.</p>
</li>
<li><p>Use the same cookies for seamless user experience.</p>
</li>
<li><p>Avoid exposing any backend code directly.</p>
</li>
</ul>
<p>Here’s an example of how the Lambda Authorizer is set up:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> authFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"auth"</span>, {
  entry: join(__dirname, <span class="hljs-string">"fns/auth.ts"</span>),
  runtime: Runtime.NODEJS_LATEST,
  logGroup: <span class="hljs-keyword">new</span> LogGroup(<span class="hljs-built_in">this</span>, <span class="hljs-string">`/<span class="hljs-subst">${id}</span>authLogs`</span>, { logGroupName: <span class="hljs-string">`/<span class="hljs-subst">${id}</span>-auth`</span>, removalPolicy: RemovalPolicy.DESTROY }),
  architecture: Architecture.ARM_64,
  environment: {
    AUTH_PREFIX: domainPrefix,
    BASE_DOMAIN: clientBaseDomain,
    USER_POOL_CLIENT_ID: client.userPoolClientId,
    USER_POOL_ID: userPool.userPoolId,
  },
});
<span class="hljs-keyword">const</span> authorizerFn = <span class="hljs-keyword">new</span> RequestAuthorizer(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Authorizer"</span>, {
  handler: authFn,
  identitySources: [<span class="hljs-string">"method.request.header.Cookie"</span>],
});
</code></pre>
<p>The Lambda function verifies the cookie and extracts the user's identity, allowing for seamless authentication without exposing tokens.</p>
<p>The JWT validation is similar to what I did in my old blog post on <a target="_blank" href="https://martzmakes.com/creating-verifiable-json-web-tokens-jwts-with-aws-cdk">https://martzmakes.com/creating-verifiable-json-web-tokens-jwts-with-aws-cdk</a> … If you’re more of a visual person, I gave a talk on this at <a target="_blank" href="https://www.youtube.com/watch?v=v9116ZS3QPc">CDK Day 2021 here</a> (yeah… I know it’s 2025… but it holds up). I’m not going to go into the specifics, but the code for the authorizer function is here: <a target="_blank" href="https://github.com/martzmakes/cognito-hosted/blob/main/lib/fns/auth.ts">https://github.com/martzmakes/cognito-hosted/blob/main/lib/fns/auth.ts</a></p>
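<p>For the impatient, here’s a minimal sketch of a cookie-validating request authorizer using the <code>aws-jwt-verify</code> library. It mirrors the idea of the linked code rather than reproducing it; the cookie name matches the site script above:</p>
<pre><code class="lang-typescript">import { CognitoJwtVerifier } from "aws-jwt-verify";
import type {
  APIGatewayRequestAuthorizerEvent,
  APIGatewayAuthorizerResult,
} from "aws-lambda";

const verifier = CognitoJwtVerifier.create({
  userPoolId: process.env.USER_POOL_ID!,
  clientId: process.env.USER_POOL_CLIENT_ID!,
  tokenUse: "id", // the site stores the id token, not the access token
});

// Standard API Gateway authorizer response shape
const policy = (
  effect: "Allow" | "Deny",
  resource: string,
  principalId: string
): APIGatewayAuthorizerResult =&gt; ({
  principalId,
  policyDocument: {
    Version: "2012-10-17",
    Statement: [{ Action: "execute-api:Invoke", Effect: effect, Resource: resource }],
  },
});

export const handler = async (
  event: APIGatewayRequestAuthorizerEvent
): Promise&lt;APIGatewayAuthorizerResult&gt; =&gt; {
  try {
    // Pull the CognitoIdToken value out of the Cookie header
    const token = (event.headers?.Cookie ?? "").match(/CognitoIdToken=([^;]+)/)?.[1];
    if (!token) throw new Error("missing cookie");
    const payload = await verifier.verify(token);
    return policy("Allow", event.methodArn, payload.sub);
  } catch {
    return policy("Deny", event.methodArn, "anonymous");
  }
};
</code></pre>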
<p>Since we’re already storing the id token as a cookie we can modify the <code>fetchAPIData</code> function in the site’s script to not include the header:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fetchAPIData</span>(<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">'/api/hello'</span>);
        <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> response.json();
        <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">"api-response"</span>).textContent = <span class="hljs-built_in">JSON</span>.stringify(data, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>);
    } <span class="hljs-keyword">catch</span> (error) {
        <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">"api-response"</span>).textContent = <span class="hljs-string">"Error fetching API."</span>;
    }
}
</code></pre>
<p>Furthermore we can add a <code>/protected</code> route on the API gateway for a lambda-rendered page that requires a user to be logged in.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> protectedFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"protected"</span>, {
  entry: join(__dirname, <span class="hljs-string">"fns/protected.ts"</span>),
  runtime: Runtime.NODEJS_LATEST,
  logGroup: <span class="hljs-keyword">new</span> LogGroup(<span class="hljs-built_in">this</span>, <span class="hljs-string">`/<span class="hljs-subst">${id}</span>ProtectedLogs`</span>, { logGroupName: <span class="hljs-string">`/<span class="hljs-subst">${id}</span>-protected`</span>, removalPolicy: RemovalPolicy.DESTROY }),
  architecture: Architecture.ARM_64,
  environment: {
    AUTH_PREFIX: domainPrefix,
    BASE_DOMAIN: clientBaseDomain,
    USER_POOL_CLIENT_ID: client.userPoolClientId,
    USER_POOL_ID: userPool.userPoolId,
  },
});
restApi.root
  .addResource(<span class="hljs-string">"protected"</span>)
  .addMethod(<span class="hljs-string">"GET"</span>, <span class="hljs-keyword">new</span> LambdaIntegration(protectedFn), {
    authorizer: authorizerFn,
  });
</code></pre>
<h2 id="heading-demo">Demo</h2>
<p>With all of that in place, we can open up the site and see that the API Response is “Unauthorized” because we haven’t logged in.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738640622584/dc729519-1bc2-4ccb-abb1-4fdfa8e2bea8.png" alt class="image--center mx-auto" /></p>
<p>If we go to the <code>PROTECTED</code> link, we see that we get an Unauthorized response.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738640674756/cf224a35-eba0-4719-aed6-c04f1729ce8e.png" alt class="image--center mx-auto" /></p>
<p>After signing in, we get redirected to the home route where the Cookie is stored, and the API request returns successful:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738640716414/5f5cb6ea-261c-4d72-8c19-d1498a7639f2.png" alt class="image--center mx-auto" /></p>
<p>And finally… if we click the <code>PROTECTED</code> link, we see the users-only page because that lambda-rendered route is protected with the cookie authorizer:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738640764432/8fdb89c5-10d6-4679-a033-a293af6855b8.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-wrapping-it-up"><strong>Wrapping It Up</strong></h2>
<p>AWS Cognito’s managed login pages offer a powerful, no-fuss way to authenticate users while offloading the complexity of OAuth flows and security best practices to AWS. But by combining Cognito with Lambda-based site rendering and a custom Lambda Authorizer, you get fine-grained control over authentication, seamless cookie-based authorization, and a flexible way to protect both API and UI routes—without exposing backend logic.</p>
<p>This setup isn’t just a cool experiment; it’s a practical approach for serverless applications that need secure, scalable authentication without introducing unnecessary client-side complexity. Whether you’re looking to lock down API routes, protect UI pages, or simply avoid dealing with OAuth headaches, this method offers a clean, effective solution.</p>
<p>Want to dive deeper? Grab the full implementation and try it yourself on GitHub: <a target="_blank" href="https://github.com/martzmakes/cognito-hosted">martzmakes/cognito-hosted</a>. Got questions or ideas? Let’s chat in the comments or on <a target="_blank" href="https://linkedin.com/in/martzmakes">LinkedIn</a>! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Rendering Diagrams with AWS Lambda]]></title><description><![CDATA[Diagrams are the unsung heroes of technical projects. Whether you’re sketching out the architecture of your latest application or untangling the web of nodes in your LLM agents, a good visual can make all the difference. But what if you could dynamic...]]></description><link>https://martzmakes.com/rendering-diagrams-with-aws-lambda</link><guid isPermaLink="true">https://martzmakes.com/rendering-diagrams-with-aws-lambda</guid><category><![CDATA[serverless]]></category><category><![CDATA[CDK]]></category><category><![CDATA[architecture]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Sat, 21 Dec 2024 15:07:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734793512763/02dae45d-641e-4dae-a00e-6b08a3c3d390.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Diagrams are the unsung heroes of technical projects. Whether you’re sketching out the architecture of your latest application or untangling the web of nodes in your LLM agents, a good visual can make all the difference. But what if you could dynamically render those diagrams on the fly, straight from Mermaid or <a target="_blank" href="http://Draw.io">Draw.io</a> formats, and store them in S3 for easy access? That’s exactly what we’ll explore today.</p>
<p>In this post, I’ll show you how I built a serverless solution using AWS Lambda, Puppeteer, and CDK to do just that. The solution consists of two Lambda functions working in tandem to render PNG images and serve them up via pre-signed S3 URLs. Along the way, we’ll talk about the infrastructure, the code, and some of the lessons learned.</p>
<p>The code for this project is available on <a target="_blank" href="https://github.com/martzmakes/lambda-diagram">GitHub</a>.</p>
<h2 id="heading-the-problem-and-the-solution">The Problem and The Solution</h2>
<p>Let’s set the stage. Imagine you’re working on a project that needs to dynamically render diagrams—whether from <strong>Mermaid markdown</strong> or <a target="_blank" href="http://Draw.io"><strong>Draw.io</strong></a> <strong>XML</strong>. Once rendered, these diagrams must be stored securely as PNG images in <strong>S3</strong> for easy sharing and integration, with access provided via <strong>pre-signed URLs</strong>.</p>
<p>Sounds simple enough, but if you’ve ever tried rendering diagrams programmatically, you know it’s a bit like herding cats—except these cats are diagrams, and they bite. That’s where the power of <strong>AWS Lambda</strong>, <strong>Puppeteer</strong>, and the <strong>AWS CDK</strong> comes in. Together, they tame the chaos and streamline the process.</p>
<p>Here’s how the solution works:</p>
<ol>
<li><p><strong>Mermaid Renderer Lambda</strong>: Accepts Mermaid markdown as input, uses Puppeteer to generate the PNG, and uploads the result to S3.</p>
</li>
<li><p><a target="_blank" href="http://Draw.io"><strong>Draw.io</strong></a> <strong>Renderer Lambda</strong>: Does the same, but for <a target="_blank" href="http://Draw.io">Draw.io</a> XML files.</p>
</li>
<li><p><strong>S3 Bucket with Pre-Signed URLs</strong>: A secure storage and sharing mechanism for the rendered diagrams.</p>
</li>
<li><p><strong>Infrastructure with CDK</strong>: Custom constructs tie everything together in a clean and reusable way.</p>
</li>
</ol>
<h3 id="heading-why-puppeteer">Why Puppeteer?</h3>
<p>Puppeteer, a Node.js library, provides a high-level API to control Chromium or Chrome browsers, making it perfect for rendering web-based content like Mermaid and <a target="_blank" href="http://Draw.io">Draw.io</a> diagrams. However, running Puppeteer in Lambda isn’t plug-and-play. Lambda’s execution environment requires headless browser support, which can be tricky to set up.</p>
<p>To solve this, I used <strong>@sparticuz/chromium-min</strong>, a prebuilt Chromium layer optimized for AWS Lambda. It ensures that Puppeteer runs seamlessly within Lambda's constraints, handling the rendering process efficiently and reliably.</p>
<h2 id="heading-infrastructure-laying-the-groundwork-with-cdk-and-custom-constructs">Infrastructure: Laying the Groundwork with CDK and Custom Constructs</h2>
<p>To power our diagram-rendering solution, we need a robust and flexible infrastructure. Enter AWS CDK, with a sprinkle of customization courtesy of my <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a> library—a set of open-source constructs designed to simplify and enhance AWS projects. Let’s break down the infrastructure code for this project.</p>
<h3 id="heading-buckets-lambdas-and-puppeteer-oh-my">Buckets, Lambdas, and Puppeteer, Oh My!</h3>
<p>The heart of the infrastructure is the <code>LambdaDiagramStack</code> class. It uses CDK constructs and custom opinions from <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a> to define two Lambda functions (Mermaid and <a target="_blank" href="http://Draw.io">Draw.io</a>) for the rendering work.</p>
<p>Here’s the full code for the stack:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> LambdaDiagramStack <span class="hljs-keyword">extends</span> MMStack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: MMStackProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);
    <span class="hljs-keyword">const</span> diagramBucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">"DiagramBucket"</span>, {
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
      removalPolicy: RemovalPolicy.DESTROY,
      autoDeleteObjects: <span class="hljs-literal">true</span>,
      eventBridgeEnabled: <span class="hljs-literal">true</span>,
      objectOwnership: ObjectOwnership.BUCKET_OWNER_ENFORCED,
      lifecycleRules: [
        {
          expiration: Duration.days(<span class="hljs-number">1</span>),
          enabled: <span class="hljs-literal">true</span>,
        },
      ],
    });

    <span class="hljs-keyword">new</span> Lambda(<span class="hljs-built_in">this</span>, <span class="hljs-string">"MermaidLambda"</span>, {
      entry: join(__dirname, <span class="hljs-string">`./fns/mermaid.ts`</span>),
      eventPattern: {
        source: [<span class="hljs-built_in">this</span>.eventSource],
        detailType: [<span class="hljs-string">"mermaid"</span>],
      },
      name: <span class="hljs-string">"mermaid"</span>,
      architecture: Architecture.X86_64, <span class="hljs-comment">// puppeteer needs x86_64</span>
      bundling: {
        externalModules: [],
      },
      memorySize: <span class="hljs-number">10240</span>,
      buckets: {
        BUCKET_NAME: { bucket: diagramBucket, access: <span class="hljs-string">"rw"</span> },
      },
    });

    <span class="hljs-keyword">new</span> Lambda(<span class="hljs-built_in">this</span>, <span class="hljs-string">"DrawIOLambda"</span>, {
      entry: join(__dirname, <span class="hljs-string">`./fns/drawio.ts`</span>),
      eventPattern: {
        source: [<span class="hljs-built_in">this</span>.eventSource],
        detailType: [<span class="hljs-string">"drawio"</span>],
      },
      bundling: {
        externalModules: [],
      },
      name: <span class="hljs-string">"drawio"</span>,
      architecture: Architecture.X86_64, <span class="hljs-comment">// puppeteer needs x86_64</span>
      memorySize: <span class="hljs-number">10240</span>,
      buckets: {
        BUCKET_NAME: { bucket: diagramBucket, access: <span class="hljs-string">"rw"</span> },
      },
    });
  }
}
</code></pre>
<h3 id="heading-key-components">Key Components</h3>
<ol>
<li><p><strong>The S3 Bucket</strong>:</p>
<ul>
<li><p><strong>Block Public Access</strong>: Ensures the bucket is private and secure.</p>
</li>
<li><p><strong>Lifecycle Rules</strong>: Automatically cleans up old files after one day, keeping things tidy.</p>
</li>
<li><p><strong>Ownership Enforced</strong>: Simplifies managing access policies.</p>
</li>
</ul>
</li>
<li><p><strong>Custom Lambda Constructs</strong>:</p>
<ul>
<li><p><strong>MermaidLambda</strong> and <strong>DrawIOLambda</strong> are created using the <code>Lambda</code> construct from <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a>.</p>
</li>
<li><p>Both Lambdas are configured with:</p>
<ul>
<li><p><strong>X86_64 architecture</strong>: Puppeteer requires this architecture to work correctly.</p>
</li>
<li><p><strong>Generous memory allocation</strong>: 10 GB ensures Puppeteer runs smoothly.</p>
</li>
<li><p><strong>Bucket access</strong>: Enables each Lambda to read from and write to the S3 bucket.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>CDK Opinions</strong>:</p>
<ul>
<li>The <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a> library simplifies common patterns while enabling observability. For instance, the <code>Lambda</code> construct extends CDK’s <code>NodejsFunction</code> to handle bundling, memory configuration, and event patterns.</li>
</ul>
</li>
</ol>
<h3 id="heading-puppeteer-and-the-x8664-architecture">Puppeteer and the X86_64 Architecture</h3>
<p>A critical note for Puppeteer users: it won’t run on ARM-based architectures like Graviton. By explicitly setting the Lambdas to use <code>Architecture.X86_64</code>, we sidestep potential compatibility issues. While this may sacrifice some cost or performance benefits of ARM, it ensures Puppeteer operates reliably.</p>
<h2 id="heading-lambda-functions-rendering-diagrams-with-puppeteer-and-s3-integration">Lambda Functions: Rendering Diagrams with Puppeteer and S3 Integration</h2>
<p>Now that we’ve set up the infrastructure, let’s dive into how the Lambda functions actually work. Both the Mermaid and <a target="_blank" href="http://Draw.io">Draw.io</a> rendering Lambdas follow a similar pattern, leveraging Puppeteer for rendering and S3 for storing the output. They’re wrapped with helper methods from <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a>, ensuring observability and seamless integration.</p>
<h3 id="heading-common-patterns-in-both-lambdas">Common Patterns in Both Lambdas</h3>
<ol>
<li><p><strong>Initialization with</strong> <code>initEventHandler</code>: Each Lambda uses <code>initEventHandler</code> from <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a> to:</p>
<ul>
<li><p>Set up AWS X-Ray tracing for better observability using Lambda Powertools.</p>
</li>
<li><p>Emit an architecture event to infer system architecture for future observability enhancements.</p>
</li>
</ul>
</li>
<li><p><strong>Chromium Setup with</strong> <code>@sparticuz/chromium-min</code>: The <code>@sparticuz/chromium-min</code> library provides a prebuilt Chromium binary tailored for AWS Lambda. It’s essential for rendering web-based diagrams within Puppeteer.</p>
</li>
<li><p><strong>S3 Upload with Pre-Signed URL</strong>: The rendered diagrams are uploaded to S3 using <code>uploadImageToS3AndGetPresignedUrl</code> from <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a>. This helper function simplifies the process of generating a pre-signed URL after uploading an image (see the sketch after this list).</p>
</li>
</ol>
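<p>As a rough sketch of what that helper does under the hood (the bucket env var and one-hour expiry are assumptions on my part), it’s a put followed by a pre-signed get:</p>
<pre><code class="lang-typescript">import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

export const uploadImageToS3AndGetPresignedUrl = async ({
  key,
  buffer,
}: {
  key: string;
  buffer: Buffer;
}): Promise&lt;string&gt; =&gt; {
  // Upload the rendered PNG to the diagram bucket (BUCKET_NAME is injected by the Lambda construct)
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.BUCKET_NAME,
      Key: key,
      Body: buffer,
      ContentType: "image/png",
    })
  );
  // Hand back a time-limited download link
  return getSignedUrl(s3, new GetObjectCommand({ Bucket: process.env.BUCKET_NAME, Key: key }), {
    expiresIn: 3600,
  });
};
</code></pre>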
<h3 id="heading-mermaid-lambda">Mermaid Lambda</h3>
<p>Here’s how the Mermaid Lambda works:</p>
<ul>
<li><p><strong>Input</strong>: Takes a <code>title</code> and a <code>mermaidCode</code> (the diagram definition in Mermaid markdown).</p>
</li>
<li><p><strong>Process</strong>:</p>
<ol>
<li><p>Generates an HTML template embedding the Mermaid diagram.</p>
</li>
<li><p>Launches Puppeteer using the prebuilt Chromium binary.</p>
</li>
<li><p>Sets the HTML content and waits for the diagram to render.</p>
</li>
<li><p>Captures the SVG element as a PNG image buffer.</p>
</li>
</ol>
</li>
<li><p><strong>Output</strong>: Uploads the buffer to S3 and returns a pre-signed URL.</p>
</li>
</ul>
<p>Here’s the core implementation:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> mermaidToImageBuffer = <span class="hljs-keyword">async</span> ({
  title,
  mermaidCode,
}: {
  title: <span class="hljs-built_in">string</span>;
  mermaidCode: <span class="hljs-built_in">string</span>;
}): <span class="hljs-built_in">Promise</span>&lt;Buffer&gt; =&gt; {
  <span class="hljs-comment">// HTML template to render the Mermaid diagram</span>
  <span class="hljs-keyword">const</span> htmlTemplate = <span class="hljs-string">`
      &lt;!DOCTYPE html&gt;
      &lt;html lang="en"&gt;
      &lt;head&gt;
          &lt;meta charset="UTF-8"&gt;
          &lt;title&gt;<span class="hljs-subst">${title}</span>&lt;/title&gt;
          &lt;script src="https://cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.js"&gt;&lt;/script&gt;
          &lt;style&gt;
              body {
                  margin: 0;
                  display: flex;
                  justify-content: center;
                  align-items: center;
                  height: 100vh;
                  background: white;
              }
          &lt;/style&gt;
      &lt;/head&gt;
      &lt;body&gt;
          &lt;div id="mermaidContainer"&gt;
              &lt;div class="mermaid"&gt;<span class="hljs-subst">${mermaidCode}</span>&lt;/div&gt;
          &lt;/div&gt;
          &lt;script&gt;
              mermaid.initialize({ startOnLoad: true });
          &lt;/script&gt;
      &lt;/body&gt;
      &lt;/html&gt;
  `</span>;

  <span class="hljs-comment">// Launch Puppeteer</span>
  chromium.setHeadlessMode = <span class="hljs-literal">true</span>;
  chromium.setGraphicsMode = <span class="hljs-literal">true</span>;
  <span class="hljs-keyword">const</span> browser = <span class="hljs-keyword">await</span> puppeteer.launch({
    args: [
      ...chromium.args,
    ],
    defaultViewport: {
      width: <span class="hljs-number">1920</span>,
      height: <span class="hljs-number">1080</span>,
      deviceScaleFactor: <span class="hljs-number">3</span>,
    },
    executablePath: <span class="hljs-keyword">await</span> chromium.executablePath(
      <span class="hljs-string">"https://github.com/Sparticuz/chromium/releases/download/v119.0.2/chromium-v119.0.2-pack.tar"</span>
    ),
    headless: <span class="hljs-literal">false</span>,
  });
  <span class="hljs-keyword">const</span> page = <span class="hljs-keyword">await</span> browser.newPage();

  <span class="hljs-comment">// Set content</span>
  <span class="hljs-keyword">await</span> page.setContent(htmlTemplate, { waitUntil: <span class="hljs-string">"networkidle0"</span> });

  <span class="hljs-comment">// Wait for the diagram to render</span>
  <span class="hljs-keyword">await</span> page.waitForSelector(<span class="hljs-string">".mermaid &gt; svg"</span>);

  <span class="hljs-comment">// Select the SVG element and capture as an image</span>
  <span class="hljs-keyword">const</span> element = <span class="hljs-keyword">await</span> page.$(<span class="hljs-string">".mermaid &gt; svg"</span>);
  <span class="hljs-keyword">if</span> (!element) {
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">"Failed to render Mermaid diagram"</span>);
  }

  <span class="hljs-comment">// Scale bounding box dimensions by the deviceScaleFactor</span>
  <span class="hljs-keyword">const</span> buffer = <span class="hljs-keyword">await</span> element.screenshot({
    <span class="hljs-keyword">type</span>: <span class="hljs-string">"png"</span>,
  });

  <span class="hljs-keyword">await</span> browser.close();
  <span class="hljs-keyword">return</span> Buffer.from(buffer);
};

<span class="hljs-keyword">const</span> eventHandler: EventHandler&lt;{
  title: <span class="hljs-built_in">string</span>;
  mermaidCode: <span class="hljs-built_in">string</span>;
}&gt; = <span class="hljs-keyword">async</span> ({ data }) =&gt; {
  <span class="hljs-keyword">const</span> { title, mermaidCode } = data;
  <span class="hljs-keyword">const</span> buffer = <span class="hljs-keyword">await</span> mermaidToImageBuffer({
    title,
    mermaidCode,
  });

  <span class="hljs-keyword">const</span> key = <span class="hljs-string">`<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>-<span class="hljs-subst">${title}</span>.png`</span>;
  <span class="hljs-keyword">const</span> presignedUrl = <span class="hljs-keyword">await</span> uploadImageToS3AndGetPresignedUrl({ key, buffer });
  <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">JSON</span>.stringify({ presignedUrl }));
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = initEventHandler({ eventHandler });
</code></pre>
<h3 id="heading-drawiohttpdrawio-lambda"><a target="_blank" href="http://Draw.io">Draw.io</a> Lambda</h3>
<p>The <a target="_blank" href="http://Draw.io">Draw.io</a> Lambda follows a similar flow but uses a <a target="_blank" href="http://Draw.io">Draw.io</a>-specific rendering process:</p>
<ul>
<li><p><strong>Input</strong>: Takes a <code>title</code> and a <code>drawioXml</code> (the diagram definition in <a target="_blank" href="http://Draw.io">Draw.io</a> XML).</p>
</li>
<li><p><strong>Process</strong>:</p>
<ol>
<li><p>Navigates Puppeteer to the <a target="_blank" href="http://Draw.io">Draw.io</a> export URL.</p>
</li>
<li><p>Uses page scripts to render the <a target="_blank" href="http://Draw.io">Draw.io</a> XML into an SVG.</p>
</li>
<li><p>Captures the SVG as a PNG image buffer.</p>
</li>
</ol>
</li>
<li><p><strong>Output</strong>: Same as Mermaid Lambda—uploads to S3 and generates a pre-signed URL.</p>
</li>
</ul>
<p>Here’s the core implementation:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> drawioToImageBuffer = <span class="hljs-keyword">async</span> ({
  drawioXml,
}: {
  drawioXml: <span class="hljs-built_in">string</span>;
}): <span class="hljs-built_in">Promise</span>&lt;Buffer&gt; =&gt; {
  <span class="hljs-comment">// Launch Puppeteer</span>
  chromium.setHeadlessMode = <span class="hljs-literal">true</span>;
  chromium.setGraphicsMode = <span class="hljs-literal">true</span>;
  <span class="hljs-keyword">const</span> width = <span class="hljs-number">2</span> * <span class="hljs-number">1920</span>;
  <span class="hljs-keyword">const</span> height = <span class="hljs-number">2</span> * <span class="hljs-number">1080</span>;
  <span class="hljs-keyword">const</span> browser = <span class="hljs-keyword">await</span> puppeteer.launch({
    args: [...chromium.args],
    defaultViewport: {
      width,
      height,
      deviceScaleFactor: <span class="hljs-number">3</span>,
    },
    executablePath: <span class="hljs-keyword">await</span> chromium.executablePath(
      <span class="hljs-string">"https://github.com/Sparticuz/chromium/releases/download/v119.0.2/chromium-v119.0.2-pack.tar"</span>
    ),
  headless: chromium.headless,
  });
  <span class="hljs-keyword">const</span> page = <span class="hljs-keyword">await</span> browser.newPage();
  <span class="hljs-keyword">await</span> page.goto(<span class="hljs-string">"https://www.draw.io/export3.html"</span>, {
    waitUntil: <span class="hljs-string">"networkidle0"</span>,
  });

  <span class="hljs-keyword">await</span> page.evaluate(
    <span class="hljs-function">(<span class="hljs-params">obj</span>) =&gt;</span> {
      <span class="hljs-keyword">return</span> (<span class="hljs-built_in">window</span> <span class="hljs-keyword">as</span> <span class="hljs-built_in">any</span>).render({
        h: obj.height,
        w: obj.width,
        xml: obj.drawioXml,
      });
    },
    { drawioXml, width, height }
  );
  <span class="hljs-keyword">await</span> page.waitForSelector(<span class="hljs-string">"#LoadingComplete"</span>);
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Loading complete"</span>);

  <span class="hljs-comment">// Select the SVG element and capture as an image</span>
  <span class="hljs-keyword">const</span> element = <span class="hljs-keyword">await</span> page.$(<span class="hljs-string">"#graph &gt; svg"</span>);
  <span class="hljs-keyword">if</span> (!element) {
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">"Failed to render Mermaid diagram"</span>);
  }

  <span class="hljs-comment">// Scale bounding box dimensions by the deviceScaleFactor</span>
  <span class="hljs-keyword">const</span> buffer = <span class="hljs-keyword">await</span> element.screenshot({
    <span class="hljs-keyword">type</span>: <span class="hljs-string">"png"</span>,
    fullPage: <span class="hljs-literal">true</span>,
  });

  <span class="hljs-keyword">await</span> browser.close();
  <span class="hljs-keyword">return</span> Buffer.from(buffer);
};

<span class="hljs-keyword">const</span> eventHandler: EventHandler&lt;{
  title: <span class="hljs-built_in">string</span>;
  drawioXml: <span class="hljs-built_in">string</span>;
}&gt; = <span class="hljs-keyword">async</span> ({ data }) =&gt; {
  <span class="hljs-keyword">const</span> { title, drawioXml } = data;
  <span class="hljs-keyword">const</span> buffer = <span class="hljs-keyword">await</span> drawioToImageBuffer({
    drawioXml,
  });

  <span class="hljs-keyword">const</span> key = <span class="hljs-string">`<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>-<span class="hljs-subst">${title}</span>.png`</span>;
  <span class="hljs-keyword">const</span> presignedUrl = <span class="hljs-keyword">await</span> uploadImageToS3AndGetPresignedUrl({ key, buffer });
  <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">JSON</span>.stringify({ presignedUrl }));
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = initEventHandler({ eventHandler });
</code></pre>
<h3 id="heading-observability-and-future-enhancements">Observability and Future Enhancements</h3>
<p>Both Lambdas emit architecture events for observability—a feature I’ll explore in a future post. This ensures we can visualize architecture changes and interactions using the rendered diagrams. For now, the event data helps lay the groundwork for a deeper understanding of your system.</p>
<h2 id="heading-putting-it-all-together-seamless-serverless-diagram-rendering">Putting It All Together: Seamless Serverless Diagram Rendering</h2>
<p>Here’s how the system operates end-to-end:</p>
<ol>
<li><p><strong>Trigger</strong>:</p>
<ul>
<li><p>A client sends an event to the EventBridge source configured for either the Mermaid or <a target="_blank" href="http://Draw.io">Draw.io</a> Lambda. A minimal sketch of such a trigger follows this list.</p>
</li>
<li><p>The event payload includes the diagram’s title and its respective code or XML.</p>
</li>
</ul>
</li>
<li><p><strong>Processing</strong>:</p>
<ul>
<li><p>The appropriate Lambda function is triggered based on the event type (<code>mermaid</code> or <code>drawio</code>).</p>
</li>
<li><p>The Lambda:</p>
<ul>
<li><p>Processes the input using Puppeteer to render the diagram.</p>
</li>
<li><p>Uploads the generated PNG to the S3 bucket.</p>
</li>
<li><p>Retrieves a pre-signed URL for accessing the image.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Response</strong>:</p>
<ul>
<li>The Lambda returns the pre-signed URL to the client, making the image available for secure, time-bound access.</li>
</ul>
</li>
<li><p><strong>Lifecycle Management</strong>:</p>
<ul>
<li>A lifecycle policy on the S3 bucket ensures that old diagrams are automatically deleted after one day, maintaining a clean and cost-effective storage environment.</li>
</ul>
</li>
</ol>
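<p>To make the trigger concrete, here’s a minimal sketch of a client putting a Mermaid render request on the bus. The <code>Source</code> and <code>DetailType</code> values are illustrative; match them to whatever your EventBridge rules filter on:</p>
<pre><code class="lang-typescript">import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const eb = new EventBridgeClient({});

// Hypothetical trigger: Source/DetailType are illustrative names only
export const requestMermaidDiagram = async () =&gt;
  eb.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: "default", // the post only uses the default bus
          Source: "martzmakes.diagrams",
          DetailType: "mermaid",
          Detail: JSON.stringify({
            title: "order-flow",
            mermaidCode: "graph TD; A--&gt;B;",
          }),
        },
      ],
    })
  );
</code></pre>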
<p>By leveraging <a target="_blank" href="https://github.com/martzmakes/constructs">@martzmakes/constructs</a>, the system can provide insights:</p>
<ul>
<li><p><strong>Tracing with Lambda Powertools</strong>:</p>
<ul>
<li><p>End-to-end tracing via AWS X-Ray gives a detailed view of the rendering process.</p>
</li>
<li><p>Helps identify bottlenecks, such as rendering delays or S3 upload issues.</p>
</li>
</ul>
</li>
<li><p><strong>Architecture Events</strong>:</p>
<ul>
<li><p>Emitted by the <code>initEventHandler</code>, these events are invaluable for mapping how diagrams relate to system architecture.</p>
</li>
<li><p>In future posts, we’ll use this data to visualize system interactions and changes.</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-whats-next">What’s Next?</h2>
<p>In future posts, I’ll show you how to use this solution for:</p>
<ul>
<li><p>Visualizing advanced multi-node LLM agents and their decisions using LangGraph and LangChain with Mermaid.</p>
</li>
<li><p>Observing architectural changes in your projects with <a target="_blank" href="http://Draw.io">Draw.io</a>.</p>
</li>
<li><p>Generating visuals of upstream/downstream error propagation and sending them to Slack.</p>
</li>
</ul>
<p>Stay tuned for those deep dives, but in the meantime, feel free to tinker with the code, experiment with new diagramming formats, or even extend the solution for other use cases.</p>
<p>Serverless rendering of diagrams might not save the world, but it can definitely save your sanity when managing complex systems. By combining the power of AWS Lambda, Puppeteer, and S3, we’ve created a flexible and scalable solution for rendering and sharing diagrams.</p>
<p>What’s the most creative way you can think of to use this? Let me know in the comments!</p>
]]></content:encoded></item><item><title><![CDATA[The "Backend For Frontend" (BFF) Pattern]]></title><description><![CDATA[It’s exhausting, isn’t it? Crafting this pristine backend, where services are finely tuned with domain logic, validation, and business rules, all while your poor frontend is left out in the cold, making endless HTTP requests to multiple services, pie...]]></description><link>https://martzmakes.com/the-backend-for-frontend-bff-pattern</link><guid isPermaLink="true">https://martzmakes.com/the-backend-for-frontend-bff-pattern</guid><category><![CDATA[AWS]]></category><category><![CDATA[CDK]]></category><category><![CDATA[APIs]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Mon, 16 Sep 2024 13:00:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726452603053/2c5669d2-59ab-420d-b0c4-c90ac9aeec27.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It’s exhausting, isn’t it? Crafting this pristine backend, where services are finely tuned with domain logic, validation, and business rules, all while your poor frontend is left out in the cold, making endless HTTP requests to multiple services, piecing it all together—like some ill-fated scavenger hunt.</p>
<p>Enter the <strong>Backend for Frontend (BFF)</strong> pattern—the unsung hero in this story. In today’s post, we’ll look at why BFFs work so well in tandem with <strong>Domain-Driven Design (DDD)</strong>, and why they should be an integral part of your application architecture. For a little bonus, we’ll sprinkle in some practical examples with <strong>AWS CDK</strong> (including a neat workaround for attaching usage plans and API keys via CDK Aspects).</p>
<h2 id="heading-why-bff-and-ddd-form-a-winning-partnership">Why BFF and DDD Form a Winning Partnership</h2>
<p>As your system grows—more domains, more APIs, more everything—the complexities on the frontend will increase accordingly. Domain services might be neatly separated at the backend, but the burden of aggregating and juggling their data will inevitably fall on the frontend developers. It's a bit unfair—and terribly inefficient. That's where BFFs come in, acting as a dedicated middle layer, transforming backend complexity into frontend simplicity.</p>
<p>Let's break it down.</p>
<h3 id="heading-1-streamlining-frontend-requests">1. Streamlining Frontend Requests</h3>
<p><strong>The Problem Without BFF</strong>: When your frontend directly interacts with various domain services, you quickly end up with a mess of HTTP calls, each dealing with its own service. Imagine the complexity of interacting with a dozen APIs from a simple React app—the requests, the retry logic, the sequence of events.</p>
<p><strong>With BFF</strong>: The frontend doesn’t need to know or care about the multitude of services lurking in the depths of your backend kingdom. The BFF abstracts all that, providing a single API that interacts with the necessary services and returns a nicely structured response.</p>
<p>Here’s a simple example of how BFF comes to the rescue using <strong>AWS API Gateway</strong> and <strong>AWS Lambda</strong>. Your frontend can gracefully call a single endpoint to retrieve detailed information about, say, a user or a product. The BFF takes on the dirty work—fetching data from multiple services like <code>Users</code>, <code>Inventory</code>, and <code>Recommendations</code>.</p>
<h3 id="heading-2-keeping-domain-responsibilities-where-they-belong">2. Keeping Domain Responsibilities Where They Belong</h3>
<p><strong>The Problem Without BFF</strong>: Let’s say you’re building the frontend for an e-commerce platform. You need product details, which means calling the <code>Pricing</code>, <code>Inventory</code>, and <code>Reviews</code> domain services, among others.</p>
<p>What happens? Your frontend is now responsible for making all these calls, sequencing them appropriately, aggregating the results, and dealing with all sorts of edge cases—timeouts, failed calls, out-of-sync responses. Complexity creeps in fast, and your frontend is now shouldering work it was never meant to handle.</p>
<p><strong>With BFF</strong>: The BFF can take responsibility for orchestrating these calls. It doesn't merely proxy the requests—it performs the necessary API calls to gather the data, handles errors when they arise, and packages everything as an elegant response for the frontend.</p>
<p>A classic BFF endpoint would look something like <code>/product/{productId}/details</code>. The API call behind the scenes automatically handles pricing, availability, reviews, and product images. The frontend, blissfully unaware of the chaos happening below the surface, receives a neatly compiled JSON response.</p>
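<p>From the frontend’s perspective, that whole orchestration collapses into a single call. A hypothetical example (the hostname and response shape are assumptions; <code>x-api-key</code> is API Gateway’s standard API key header):</p>
<pre><code class="lang-typescript">// Hypothetical frontend call: one request to the BFF instead of many
const getProductDetails = async (productId: string, apiKey: string) =&gt; {
  const res = await fetch(`https://bff.example.com/product/${productId}/details`, {
    headers: { "x-api-key": apiKey },
  });
  if (!res.ok) throw new Error(`BFF request failed: ${res.status}`);
  // Pricing, availability, and reviews arrive pre-aggregated
  return res.json();
};
</code></pre>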
<h3 id="heading-3-performance-and-security-benefits">3. Performance and Security Benefits</h3>
<p><strong>Without BFF</strong>: More direct API calls from the frontend mean more round-trips, more latency, and the potential for increased attack surfaces. You’re also pushing API key management onto the frontend, and that’s just bad practice.</p>
<p><strong>With BFF</strong>: The BFF optimizes how backend calls are handled. It can batch and parallelize requests, reducing latency by minimizing unnecessary round-trips. By serving as a single point of contact for all frontend calls, the BFF reduces the risk of exposing sensitive backend services directly to the public.</p>
<p>And yes, in <strong>AWS</strong>, you can manage API Gateway’s keys and usage plans from the BFF side, ensuring that only appropriate calls are routed, secured, and handled efficiently.</p>
<h3 id="heading-putting-it-all-together-a-practical-example-with-aws-cdk">Putting It All Together: A Practical Example with AWS CDK</h3>
<p>Building a BFF can seem daunting, but thankfully the <strong>AWS Cloud Development Kit (CDK)</strong> comes to the rescue. Let’s walk through how you can deploy a BFF using <strong>API Gateway</strong>, <strong>Lambda</strong>, and some CloudFormation magic.</p>
<h4 id="heading-setting-up-the-basics">Setting Up The Basics</h4>
<p>You’ll need the usual suspects:</p>
<ul>
<li><p><strong>API Gateway</strong> to expose your BFF.</p>
</li>
<li><p><strong>AWS Lambda</strong> to process the backend logic and communicate with other domain services.</p>
</li>
<li><p><strong>API Keys and Usage Plans</strong>, because we do care about rate-limiting and access control.</p>
</li>
</ul>
<p>Here’s where things get a bit tricky—out of the box, CDK’s API Gateway constructs don’t play nicely when you try to link APIs from other projects to a usage plan. Thankfully, you can use CDK <strong>Aspects</strong> to work around this.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> AddApisToUsagePlanAspect <span class="hljs-keyword">implements</span> IAspect {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> externalApiIds: <span class="hljs-built_in">string</span>[]</span>) {}

  <span class="hljs-keyword">public</span> visit(node: IConstruct): <span class="hljs-built_in">void</span> {
    <span class="hljs-keyword">if</span> (node <span class="hljs-keyword">instanceof</span> UsagePlan) {
      <span class="hljs-keyword">const</span> usagePlan = node <span class="hljs-keyword">as</span> UsagePlan;
      <span class="hljs-keyword">const</span> cfnUsagePlan = usagePlan.node.defaultChild <span class="hljs-keyword">as</span> CfnUsagePlan;
      cfnUsagePlan.apiStages = <span class="hljs-built_in">this</span>.externalApiIds.map(<span class="hljs-function"><span class="hljs-params">apiId</span> =&gt;</span> ({
        apiId, stage: <span class="hljs-string">"prod"</span>,
      })) <span class="hljs-keyword">as</span> <span class="hljs-built_in">any</span>;
    }
  }
}
</code></pre>
<p>Let’s break this down so it doesn't feel like we’re just throwing jargon your way and calling it a day:</p>
<ol>
<li><p><strong>Aspect Class</strong>: We created an <strong>Aspect</strong>, which is just a bit of code that inspects each construct in your CDK stack. In our case, we looked for the <strong>UsagePlan</strong> constructs.</p>
</li>
<li><p><strong>CloudFormation Hackery</strong>: We dive into the underlying <strong>CloudFormation representation</strong> of the usage plan (i.e., <code>CfnUsagePlan</code>) and set its <code>apiStages</code> to our list of <strong>API IDs</strong> (the BFF's own plus the external ones). This way, AWS knows that these external APIs should be attached and governed by the same usage plan.</p>
</li>
<li><p><strong>Attach External APIs</strong>: CDK didn’t natively allow us to link multiple external APIs under a single usage plan the way we needed. Thanks to CDK Aspects, we were able to surgically inject the correct <strong>API IDs</strong> into the API Gateway definition, ensuring all external API calls route through the correct usage plan and API key management without manually editing the generated CloudFormation.</p>
</li>
</ol>
<h4 id="heading-want-to-go-deeper-into-cdk-aspects"><strong>Want to Go Deeper into CDK Aspects?</strong></h4>
<p>If this introduced you to the wondrous world of CDK Aspects but left you wanting <em>more</em>, you're in luck! I go <strong>waaaaay deeper</strong> into the theory and practical use cases of Aspects in my other blog post, <a target="_blank" href="https://matt.martz.codes/breaking-bad-practices-with-cdk-aspects"><strong>Breaking Bad Practices with CDK Aspects</strong></a>. Whether you're struggling with modifying CDK constructs mid-flight, dealing with compliance checks, or just looking for creative ways to improve your CDK game, that post has got you covered. Go peek at it! I promise you’ll walk away inspired.</p>
<h4 id="heading-with-the-cdk-aspect-out-of-the-way"><strong>With the CDK Aspect out of the way…</strong></h4>
<p>Here’s a simplified example of what your <strong>BFF Stack</strong> might look like:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> BlogBffStack <span class="hljs-keyword">extends</span> Stack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: BlogBffStackProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);

    <span class="hljs-keyword">const</span> apiKey = <span class="hljs-keyword">new</span> ApiKey(<span class="hljs-built_in">this</span>, <span class="hljs-string">'ApiKey'</span>, { apiKeyName: <span class="hljs-string">'BFFApiKey'</span> });

    <span class="hljs-keyword">const</span> api = <span class="hljs-keyword">new</span> RestApi(<span class="hljs-built_in">this</span>, <span class="hljs-string">'BFFApi'</span>, {
      description: <span class="hljs-string">'Backend for Frontend API'</span>,
      endpointConfiguration: { types: [EndpointType.REGIONAL] },
    });

    <span class="hljs-keyword">const</span> usagePlan = <span class="hljs-keyword">new</span> UsagePlan(<span class="hljs-built_in">this</span>, <span class="hljs-string">'UsagePlan'</span>);
    usagePlan.addApiKey(apiKey);

    Aspects.of(<span class="hljs-built_in">this</span>).add(<span class="hljs-keyword">new</span> AddApisToUsagePlanAspect([api.restApiId, ...Object.values(props.externalApis)]));

    <span class="hljs-built_in">Object</span>.entries(props.externalApis).forEach(<span class="hljs-function">(<span class="hljs-params">[apiName, apiId]</span>) =&gt;</span> {
      <span class="hljs-keyword">const</span> integration = <span class="hljs-keyword">new</span> HttpIntegration(<span class="hljs-string">`https://<span class="hljs-subst">${apiId}</span>.execute-api.us-east-1.amazonaws.com/prod/{proxy}`</span>);
      api.root.addResource(apiName.toLowerCase()).addProxy({ anyMethod: <span class="hljs-literal">true</span>, defaultIntegration: integration });
    });
  }
}
</code></pre>
<p>This will give you a properly configured BFF API, safely hidden behind an API Gateway, and all access controlled via API keys. The frontend, in turn, gets a consistent and reliable interface without ever knowing what’s happening behind the scenes.</p>
<h3 id="heading-advanced-orchestration-in-the-bff">Advanced Orchestration in the BFF</h3>
<p><em>Note: My GitHub repo doesn't actually cover this portion directly, but this is how a BFF endpoint could work in practice to merge multiple sources of information on the backend.</em></p>
<p>Your BFF doesn’t need to merely pass requests through—it can take on roles like service aggregation and orchestration. Here’s a sample Lambda function that aggregates product data from various domain services:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { APIGatewayProxyEvent, APIGatewayProxyResult } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-lambda'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> (event: APIGatewayProxyEvent): <span class="hljs-built_in">Promise</span>&lt;APIGatewayProxyResult&gt; =&gt; {
  <span class="hljs-keyword">const</span> productId = event.pathParameters?.productId;
  <span class="hljs-keyword">const</span> externalApis = <span class="hljs-built_in">JSON</span>.parse(process.env.EXTERNAL_APIS!);

  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> [priceResp, inventoryResp, reviewsResp] = <span class="hljs-keyword">await</span> <span class="hljs-built_in">Promise</span>.all([
      fetch(<span class="hljs-string">`https://<span class="hljs-subst">${externalApis[<span class="hljs-string">"Pricing"</span>]}</span>.execute-api.us-east-1.amazonaws.com/prod/price/<span class="hljs-subst">${productId}</span>`</span>),
      fetch(<span class="hljs-string">`https://<span class="hljs-subst">${externalApis[<span class="hljs-string">"Inventory"</span>]}</span>.execute-api.us-east-1.amazonaws.com/prod/inventory/<span class="hljs-subst">${productId}</span>`</span>),
      fetch(<span class="hljs-string">`https://<span class="hljs-subst">${externalApis[<span class="hljs-string">"Reviews"</span>]}</span>.execute-api.us-east-1.amazonaws.com/prod/reviews/<span class="hljs-subst">${productId}</span>`</span>)
    ]);

    <span class="hljs-keyword">const</span> [priceData, inventoryData, reviewsData] = <span class="hljs-keyword">await</span> <span class="hljs-built_in">Promise</span>.all([
      priceResp.json(),
      inventoryResp.json(),
      reviewsResp.json()
    ]);

    <span class="hljs-keyword">const</span> responseData = {
      productId,
      pricing: priceData,
      inventory: inventoryData,
      reviews: reviewsData,
    };

    <span class="hljs-keyword">return</span> {
      statusCode: <span class="hljs-number">200</span>,
      body: <span class="hljs-built_in">JSON</span>.stringify(responseData),
    };
  } <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-keyword">return</span> {
      statusCode: <span class="hljs-number">500</span>,
      body: <span class="hljs-built_in">JSON</span>.stringify({ message: <span class="hljs-string">'Failed to retrieve product details.'</span> }),
    };
  }
};
</code></pre>
<p>This Lambda function executes parallel requests to the <code>Pricing</code>, <code>Inventory</code>, and <code>Reviews</code> services, compiles the data, and returns it as a single response, reducing latency and simplifying frontend calls.</p>
<h3 id="heading-full-code-available-on-github"><strong>Full Code Available on GitHub 💾</strong></h3>
<p>All code featured in this article—including the BFF stack, the proxy configuration, the <strong>AddApisToUsagePlanAspect</strong>, and Lambda functions—is available at this GitHub repository:</p>
<blockquote>
<p><a target="_blank" href="https://github.com/martzcodes/blog-bff"><strong><em>https://github.com/martzcodes/blog-bff</em></strong></a></p>
</blockquote>
<p>So feel free to clone it, test it, and even poke around to see how it all works together in glorious AWS harmony. 🎶</p>
<h2 id="heading-wrapping-up-why-you-need-a-bff">Wrapping Up: Why You Need a BFF</h2>
<p>Using a BFF in conjunction with Domain-Driven Design isn't just a nice-to-have; it’s essential. You not only reduce the burden on your frontend but also streamline backend complexity into manageable, secure, and high-performing APIs.</p>
<p>Here’s what you gain:</p>
<ol>
<li><p><strong>Reduced complexity</strong> for your frontend with a single call to a BFF that handles all orchestration.</p>
</li>
<li><p><strong>Optimized performance</strong> via parallel tasks, batching, and reduced round-trips.</p>
</li>
<li><p><strong>Enhanced security</strong> by hiding your backend services behind one simplified API surface controlled through API keys.</p>
</li>
</ol>
<p>And with powerful tools like <strong>AWS CDK</strong>, building and deploying a BFF is no longer a headache—you can set up everything automatically while ensuring you have tight monitoring, access control, and scalability.</p>
<p>In short, your frontend will breathe a sigh of relief, and your backend will perform smoothly without bogging down the user experience. Really, what's not to love?</p>
<p>May your backend architecture remain clean, and your frontend blissfully unaware.</p>
<p>Happy coding—and may your BFF always have your frontend’s back! 👊</p>
]]></content:encoded></item><item><title><![CDATA[Supercharging a Serverless Slackbot with Amazon Bedrock]]></title><description><![CDATA[In the dynamic world of software development, staying abreast of changes and deployments is crucial for team collaboration and efficiency. In my previous post, From Code to Conversation: Bridging GitHub Actions and Slack with CDK, I introduced a solu...]]></description><link>https://martzmakes.com/cdk-slackbot-bedrock</link><guid isPermaLink="true">https://martzmakes.com/cdk-slackbot-bedrock</guid><category><![CDATA[AWS]]></category><category><![CDATA[Amazon Bedrock]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[Devops]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Tue, 07 Nov 2023 13:30:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1699333568991/a44a3f95-4662-4068-8b7e-0ddbf246ba7c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the dynamic world of software development, staying abreast of changes and deployments is crucial for team collaboration and efficiency. In my previous post, <a target="_blank" href="https://matt.martz.codes/cdk-slackbot">From Code to Conversation: Bridging GitHub Actions and Slack with CDK</a>, I introduced a solution that used AWS Cloud Development Kit (CDK) to deploy a Lambda and DynamoDB-powered Slack App that gave teams push-button deployments between environments from Slack. Building on that foundation, this follow-up article delves into a significant enhancement — the integration of Amazon Bedrock, AWS's generative AI service, to revolutionize how we handle commit logs and release summaries.</p>
<p>The updated Deployer Bot is not just smarter; it's designed to be more responsive and informative by utilizing an event-driven architecture that streamlines notifications and summaries. By tapping into the power of generative AI, the bot now offers concise, human-readable summaries of commits and release notes, making it easier for teams to grasp the impact of their work at a glance.</p>
<p>As an example: when I added this to my team's internal Slack Bot (based on the previous post's work), Bedrock provided this commit summary: <strong>"The commit enables the bot to summarize code changes and releases using AI via AWS Bedrock, including analyzing commits for risks and generating release prep recommendations between environments."</strong> My commit message was only "<em>bedrock... not handling brext w/ios though</em>". 🤯</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699327753715/1216c1ee-6a6c-409b-a51e-82312e9f27b1.png" alt class="image--center mx-auto" /></p>
<p>Commits trigger webhooks that flow through a series of AWS Lambda functions, orchestrating the process from commit tracking to AI-powered summarization, culminating in neatly packaged per-environment releases communicated via Slack. As part of the commit analysis and summarization, we get results like this on a completed deployment:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699328051716/9fb1595a-f969-46c3-9d7b-64d26953edeb.png" alt="An example output of bedrock that does not recommend promoting code to the next environment because of a possibly breaking change." class="image--center mx-auto" /></p>
<p>In this article, we'll explore the rationale behind each architectural decision, the process of incorporating Amazon Bedrock into our bot, and the benefits that an event-driven model brings to our CI/CD pipelines. For the DevOps enthusiasts and the code-curious alike, the complete codebase is accessible on GitHub at <a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases">https://github.com/martzcodes/blog-cicd-bedrock-releases</a>.</p>
<p><em>A quick note on costs: Amazon Bedrock and the models they run are NOT free. In my case, I've restricted the analysis to a limit of 10000 "tokens" so the max cost of processing a large commit or release for my case is about $0.10 - $0.20. Claude's upper token limit is 100k tokens which would have a max cost of $1-2 per model invocation.</em></p>
<h2 id="heading-revamping-the-architecture-embracing-event-driven-design">Revamping the Architecture: Embracing Event-Driven Design</h2>
<p>The transformation of the Deployer Bot’s architecture to an event-driven model marks a significant enhancement from its original design. This section will explore the rationale behind adopting an event-driven approach, the benefits it offers, and how it is implemented within the context of the Deployer Bot integrated with Amazon Bedrock for AI-powered commit summarization and release management.</p>
<h3 id="heading-understanding-event-driven-architecture-eda"><strong>Understanding Event-Driven Architecture (EDA)</strong></h3>
<p>Event-Driven Architecture (EDA) is a design paradigm centered around the production, detection, consumption, and reaction to events. An event is any significant state change that is of interest to a system or component. EDA allows for highly reactive systems that are more flexible, scalable, and capable of handling complex workflows. It is particularly well-suited for asynchronous data flow and microservices patterns, often found in cloud-native environments.</p>
<h3 id="heading-why-event-driven"><strong>Why Event-Driven?</strong></h3>
<p>The original Deployer Bot followed a more traditional request/response model, where actions were triggered by direct requests. While functional, this approach had limitations in terms of scalability and real-time responsiveness. The integration of Amazon Bedrock and the necessity to process and summarize commit data presented an opportunity to redesign the architecture to be more reactive and efficient.</p>
<h3 id="heading-benefits-of-event-driven-architecture"><strong>Benefits of Event-Driven Architecture</strong></h3>
<ol>
<li><p><strong>Scalability</strong>: EDA allows each component to operate independently, scaling up or down as needed without impacting the entire system.</p>
</li>
<li><p><strong>Resilience</strong>: The decoupled nature of services in EDA results in a system that is less prone to failures. If one service goes down, the rest can continue to operate.</p>
</li>
<li><p><strong>Real-Time Processing</strong>: Events can be processed as soon as they occur, providing immediate feedback and actions, which is crucial for CI/CD workflows.</p>
</li>
<li><p><strong>Flexibility</strong>: New event consumers can be added to the architecture without impacting existing workflows, allowing for easier updates and enhancements.</p>
</li>
</ol>
<h3 id="heading-implementing-eda-in-deployer-bot"><strong>Implementing EDA in Deployer Bot</strong></h3>
<p>The integration of EDA into the Deployer Bot involves several key components working in tandem:</p>
<ol>
<li><p><strong>Event Sources</strong>: These are the triggers for the workflow, such as GitHub webhooks for commits and deployments that initiate the process.</p>
</li>
<li><p><strong>Event Bus</strong>: AWS services like Amazon EventBridge can serve as the backbone of EDA, routing events to the appropriate services. A wiring sketch follows this list.</p>
</li>
<li><p><strong>Lambda Functions</strong>: Serverless functions respond to events, such as fetching, processing, and summarizing commit data, and orchestrating the workflow.</p>
</li>
<li><p><strong>DynamoDB</strong>: Acts as the storage mechanism, logging events, and maintaining state where necessary.</p>
</li>
<li><p><strong>Amazon Bedrock</strong>: Provides AI-powered summarization of commits and releases.</p>
</li>
</ol>
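<p>To give a flavor of the wiring, here’s a minimal CDK sketch of routing one event type to its consumer. The names are illustrative, not the repo’s actual values:</p>
<pre><code class="lang-typescript">import { Rule } from "aws-cdk-lib/aws-events";
import { LambdaFunction } from "aws-cdk-lib/aws-events-targets";

// Inside a Stack constructor, given an existing `processCommitFn` Lambda:
// route "process-commit" events on the default bus to that function.
new Rule(this, "ProcessCommitRule", {
  eventPattern: {
    source: ["github.webhook"],
    detailType: ["process-commit"],
  },
  targets: [new LambdaFunction(processCommitFn)],
});
</code></pre>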
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699329015941/def24ba1-d103-49d8-bf60-d42689f58408.png" alt class="image--center mx-auto" /></p>
<p>In the original architecture (above), everything happened synchronously, driven by webhooks. A GitHub Actions CI/CD deployment would post a message in Slack with a button. If the approve button was clicked, it would create the next environment's deployment in GitHub. There was no tracking of commits (or even of what was in a release), and as a user of this system for several months, I often found it hard to link the Slack messages back to the actual code being deployed (despite the commit SHAs being there). There was a lot of mental overhead. 🥵</p>
<p>In the new architecture (below) we expand on this by isolating responsibilities between lambdas, making them simpler (do less) and tracking new information asynchronously.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699329180985/c9ed58f5-8efa-4d76-8b99-3bf2c96c2e68.png" alt class="image--center mx-auto" /></p>
<p>Two parallel/asynchronous paths happen as part of the deployments in GitHub. The first path relates to the commit and the second relates to the deployment.</p>
<p>Some notes on the architecture diagram:</p>
<ul>
<li><p>The red lines are for visibility only (to help highlight the paths when lines cross each other).</p>
</li>
<li><p>We're only using the Default Event Bus</p>
</li>
<li><p>Lambdas F, B and 2 are API Driven and in the Nested API Stack</p>
</li>
<li><p>The rest of the Lambdas are Event Driven and in the Nested Event Stack</p>
</li>
</ul>
<p>For the first path with the GitHub Commit Webhook...</p>
<ol>
<li><p>GitHub's commit webhook sends the push event to the <code>/github/commit</code> endpoint</p>
</li>
<li><p>The lambda makes sure that it was a commit to the main branch for a project we care about. It forwards an event to the event bus to process the commit message with the minimum information we need, then quickly responds back to GitHub with an OK status. (If we waited for the commit fetching/analysis via bedrock, sometimes the lambda wouldn't respond quickly enough and GitHub would think it failed.) A minimal sketch of this receiver follows the list.</p>
</li>
<li><p>Asynchronously the process-commit lambda fetches the actual FULL commit from GitHub which includes the patches made in this commit.</p>
</li>
<li><p>The commit w/patches are sent to Bedrock where we use <a target="_blank" href="https://www.anthropic.com/">Anthropic's Claude v2</a> LLM to summarize the commit into 1-2 sentences for a target audience of developers or product managers.</p>
</li>
<li><p>The commit (without the patches) and summary are then stored in DynamoDB for later release-querying.</p>
</li>
</ol>
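<p>Here’s a minimal sketch of that receiving lambda, under some assumptions: the payload fields come from GitHub’s standard push event, and the source/detail-type names are placeholders rather than the repo’s actual values:</p>
<pre><code class="lang-typescript">import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const eb = new EventBridgeClient({});

// Forward the minimum commit info to the bus, then return 200 right away
// so GitHub doesn't mark the webhook delivery as failed.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise&lt;APIGatewayProxyResult&gt; =&gt; {
  const push = JSON.parse(event.body ?? "{}");
  if (push.ref === "refs/heads/main") {
    await eb.send(
      new PutEventsCommand({
        Entries: [
          {
            Source: "github.webhook",
            DetailType: "process-commit",
            Detail: JSON.stringify({
              repo: push.repository?.full_name,
              sha: push.after,
              message: push.head_commit?.message,
            }),
          },
        ],
      })
    );
  }
  return { statusCode: 200, body: "ok" };
};
</code></pre>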
<p>The Commit path takes on the order of 30 seconds (frequently less) to complete. Meanwhile, since a commit occurred on the main branch and triggered GitHub Actions... the pipeline should be testing/deploying in the background. Once the GitHub Actions pipeline is complete it will send a deployment webhook:</p>
<p>A. The GitHub Actions pipeline completes sending a deployment webhook to the API</p>
<p>B. The Lambda that receives this webhook stores the deployment information in DynamoDB and emits two EventBridge events. One to send a message to the deployment channel in Slack and another to summarize the release.</p>
<p>C. The track-release lambda fetches all of the commits that occurred in the environment since the last release. Here a release is considered a group of commits that were newly deployed in an environment. The dev environment releases are (usually) single-commits. Ideally test and prod would follow this pattern but frequently there's some lag and the test/prod releases end up being larger. <em>Note: this lambda also fetches the NEXT higher environment's commits (a sort of "draft" release) and those also get summarized. I should have spun this out into a separate lambda, but I'll leave that for future-me.</em></p>
<p>D. With the release commits fetched, they're all sent together to Bedrock to be summarized. For a larger release, it ends up summarizing multiple commit summaries.</p>
<p>E. With the release summary and next-env summary the track-release lambda stores the release notes in DynamoDB and sends an event to update the deployment's message with this new information.</p>
<p>F. (Arguably this could be a third path... 😅) Once the user clicks the approve button, the <code>/slack/interactive</code> endpoint emits an event to deploy the next environment.</p>
<p>G. A lambda receives that event and triggers the GitHub Actions pipeline for the next environment.</p>
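<p>Step E’s message update boils down to a Slack <code>chat.update</code> call. Here’s a sketch, assuming the bot token has already been fetched from Secrets Manager and that the original message’s <code>channel</code> and <code>ts</code> were stored in DynamoDB:</p>
<pre><code class="lang-typescript">// Sketch of step E: rewrite the original deployment message in place.
// The channel/ts/summary values come from DynamoDB in the real flow.
export const updateDeploymentMessage = async ({
  botToken,
  channel,
  ts,
  summary,
}: {
  botToken: string;
  channel: string;
  ts: string;
  summary: string;
}) =&gt; {
  const res = await fetch("https://slack.com/api/chat.update", {
    method: "POST",
    headers: {
      "Content-Type": "application/json; charset=utf-8",
      Authorization: `Bearer ${botToken}`,
    },
    body: JSON.stringify({ channel, ts, text: summary }),
  });
  const json = await res.json();
  if (!json.ok) throw new Error(`Slack error: ${json.error}`);
};
</code></pre>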
<h3 id="heading-code-structure">Code Structure</h3>
<p>I'm not going to do a complete walkthrough of the code... because there is a lot of it. Instead I will highlight particular files of interest at a high level. Feel free to reach out on socials or in the comments if you'd like something explained more in-depth.</p>
<ul>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/blog-cicd-bedrock-releases-stack.ts">blog-cicd-bedrock-releases-stack.ts</a> - This stack creates the DynamoDB table and two Nested Stacks</p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/nested-api-stack.ts">nested-api-stack.ts</a> - Creates a RestAPI backed by Lambdas for the Webhook Endpoints</p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/routes/webhooks.ts">routes/webhooks.ts</a> - Defines the Endpoint structure for the lambdas and what permissions they should have via a common interface (nested-api-stack uses this to build the lambdas)</p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/constructs/api.ts">constructs/api.ts</a> - Creates the actual API and Lambdas w/their permissions and paths based on the <code>routes/webhooks.ts</code> file</p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/nested-event-stack.ts">nested-event-stack.ts</a> - Creates Event-Driven Lambdas and their Rules</p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/routes/events.ts">routes/events.ts</a> - Similar to <code>routes/webhooks.ts</code> this defines the Lambdas with their corresponding Rules and Permissions.</p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/tree/main/lib/lambda">lambda/</a> - This folder contains all of the lambda runtime eligible code. Files from inside of here should not be making imports outside of this file structure (other than external npm libraries). This is to help isolate the code and ensure we aren't accidentally bundling things into our lambdas we don't need. I've seen this a lot in teams that make heavy use of <code>index.ts</code> files for imports 🤢</p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/lambda/common/bedrock.ts">lambda/common/bedrock.ts</a> - The Bedrock Helper file which has the functions to do the various summaries.</p>
</li>
</ul>
<p><em>😍 I've been using a similar pattern at work using Nested API and Nested EventBridge Stacks and am loving it. If you'd be interested in a dedicated post on that let me know in the comments!</em></p>
<h2 id="heading-prompt-engineering-for-commit-and-release-summaries">Prompt-Engineering for Commit and Release Summaries</h2>
<p>The integration of generative AI into the Deployer Bot's operations involved precise prompt engineering to ensure that commit and release summaries are informative and accessible. The focus was on creating concise yet comprehensive summaries tailored to the needs of both developers and product managers. The following discussion dives into how the code facilitates this process.</p>
<p>The below prompts are in the lambda helper methods at <a target="_blank" href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/lambda/common/bedrock.ts">https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/lambda/common/bedrock.ts</a></p>
<h3 id="heading-commit-summaries">Commit Summaries</h3>
<p>For commit summaries, I designed a prompt to guide the AI to provide succinct summaries that highlight the purpose and potential impact of the changes, particularly emphasizing backward compatibility and flagging possible breaking changes. The following TypeScript excerpt outlines this process:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Function to generate a summary for a single commit</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> summarizeCommit = <span class="hljs-keyword">async</span> (commit: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">string</span>&gt; =&gt; {
  ...
  <span class="hljs-keyword">const</span> prompt = <span class="hljs-string">`...Provide a 1-2 sentence summary of the commit that would be useful for developers and product managers...`</span>;
  ...
};
</code></pre>
<p>In the function <code>summarizeCommit</code>, the prompt specifically instructs the AI to focus on a summary that is relevant to both technical stakeholders and decision-makers. This helps ensure that any non-backwards compatible changes are prominently reported, which is crucial for maintaining the integrity of the API.</p>
<h3 id="heading-release-summaries">Release Summaries</h3>
<p>The task of summarizing releases brings together multiple commits into a narrative that outlines the key developments and their implications. The <code>summarizeRelease</code> function employs a carefully designed prompt to distill this information:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Function to create a summary for a release</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> summarizeRelease = <span class="hljs-keyword">async</span> (release: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">string</span>&gt; =&gt; {
  ...
  <span class="hljs-keyword">const</span> prompt = <span class="hljs-string">`...You will create a 1-4 sentence summary of the release below...`</span>;
  ...
};
</code></pre>
<p>Here, the prompt emphasizes not only the inclusion of changes but also highlights the importance of metrics, contributions, and cadence—all of which are critical for assessing the release's impact.</p>
<h3 id="heading-environment-comparison-summaries">Environment Comparison Summaries</h3>
<p>When preparing to promote changes from one environment to another, it's vital to understand the differences. The <code>prepRelease</code> function encapsulates this through its prompt, which is structured to provide a recommendation based on the commits analyzed:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Function to summarize differences between environments and provide a release recommendation</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> prepRelease = <span class="hljs-keyword">async</span> ({
  ...
}): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">string</span>&gt; =&gt; {
  ...
  <span class="hljs-keyword">const</span> prompt = <span class="hljs-string">`...Make a recommendation for whether to promote or not...`</span>;
  ...
};
</code></pre>
<p>In this function, the AI is tasked not just with summarizing the technical changes but also with evaluating the suitability of promoting the release, incorporating a strategic aspect into the summary.</p>
<h3 id="heading-utilizing-amazon-bedrock-runtime">Utilizing Amazon Bedrock Runtime</h3>
<p>All these prompts are then passed to the Amazon Bedrock Runtime, invoking the model through <code>InvokeModelCommand</code> with an input that defines the parameters of the AI's generation process, including token limits and stop sequences. These configurations are essential for controlling costs and ensuring the responses are concise:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> command = <span class="hljs-keyword">new</span> InvokeModelCommand(input);
<span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> client.send(command);
...
</code></pre>
<p>This snippet is a crucial part of the process, as it executes the command and handles the response from the Bedrock AI, translating it into a usable summary.</p>
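<p>For context, here’s what a fleshed-out invocation can look like. This is a sketch rather than the repo’s exact helper: the prompt argument is illustrative, and the 10000-token cap mirrors the cost ceiling mentioned earlier:</p>
<pre><code class="lang-typescript">import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({});

// Sketch of a Claude v2 call on Bedrock; the prompt argument is illustrative
// and the 10000-token cap mirrors the cost ceiling mentioned earlier.
export const invokeClaude = async (prompt: string): Promise&lt;string&gt; =&gt; {
  const command = new InvokeModelCommand({
    modelId: "anthropic.claude-v2",
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      // Claude v2's text-completion API expects the Human/Assistant framing
      prompt: `\n\nHuman: ${prompt}\n\nAssistant:`,
      max_tokens_to_sample: 10000,
      temperature: 0.5,
      stop_sequences: ["\n\nHuman:"],
    }),
  });
  const response = await client.send(command);
  const decoded = JSON.parse(new TextDecoder().decode(response.body));
  return decoded.completion;
};
</code></pre>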
<p><strong>ONE IMPORTANT NOTE:</strong> At the time of this writing (11/6/2023) the AWS Lambda Runtime for NodeJS (18) does NOT bundle <code>@aws-sdk/client-bedrock</code>. As an added "bonus", the CDK's <code>NodejsFunction</code> construct (which uses <code>esbuild</code>) by default marks <code>@aws-sdk/*</code> as external modules. This means that <code>@aws-sdk/client-bedrock</code> ends up NOT being bundled into the lambda. To get around this, I needed to override the <code>NodejsFunction</code> bundling props. I also had to give the lambdas IAM access to invoke bedrock models, which can be done by adding an initial policy to the lambda:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> fn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">`<span class="hljs-subst">${endpoint.lambda}</span>Fn`</span>, {
  <span class="hljs-comment">// ...</span>
  bundling: {
    <span class="hljs-comment">// Nodejs function excludes aws-sdk v3 by default because it is included in the lambda runtime</span>
    <span class="hljs-comment">// but bedrock is not built into the lambda runtime so we need to override the @aws-sdk/* exclusions</span>
    externalModules: [
      <span class="hljs-string">"@aws-sdk/client-dynamodb"</span>,
      <span class="hljs-string">"@aws-sdk/client-eventbridge"</span>,
      <span class="hljs-string">"@aws-sdk/client-secrets-manager"</span>,
      <span class="hljs-string">"@aws-sdk/lib-dynamodb"</span>,
    ],
  },
  ...(endpoint.bedrock &amp;&amp; {
    initialPolicy: [
      <span class="hljs-keyword">new</span> PolicyStatement({
        effect: Effect.ALLOW,
        actions: [<span class="hljs-string">"bedrock:InvokeModel"</span>],
        resources: [<span class="hljs-string">"*"</span>],
      }),
    ],
  }),
});
</code></pre>
<h3 id="heading-continual-evolution-of-prompts">Continual Evolution of Prompts</h3>
<p>It's important to note that these prompts are not static. They are subject to continuous evaluation and iteration, ensuring that the summaries remain pertinent and value-adding as the project and AI capabilities evolve.</p>
<p>By embedding such targeted prompts into the Deployer Bot's workflow, the DevOps team ensures that the summaries generated are not only informative but also actionable, fostering a deeper understanding and facilitating informed decision-making throughout the development process.</p>
<h2 id="heading-now-lets-try-it">Now let's try it!</h2>
<p>In order to test this and iterate on my prompt templates, I created a CICD-example project: <a target="_blank" href="https://github.com/martzcodes/cicd-example">https://github.com/martzcodes/cicd-example</a></p>
<p>This project uses OIDC and GitHub Actions to deploy the stack. In this case, I'm just deploying the same stack with an "environment"-specific name to the same account. To reset I would stash my changes and force-push to an earlier state and re-apply the stashes.</p>
<p>Backwards compatibility is really important in software engineering, so one of the first things I wanted to focus on was that. I created a simple RestApi with a single endpoint pointed at <code>/dummy</code>. On my first attempt at prompt engineering, I included a statement like <code>APIs must be backwards compatible, if they are not make a note of it.</code></p>
<p>I then did a deployment where I renamed the <code>/dummy</code> endpoint to <code>/something</code> (creative, I know). The response from bedrock specifically said this was backwards compatible / not a breaking change:</p>
<blockquote>
<p>The release for cicd-example in prod environment contains 5 commits:<br />- Adds an API Gateway API with Lambda endpoint<br />- Defines the Lambda handler function<br />- Updates API path from /dummy to /example (committed twice)<br />- Updates API Gateway path resource from dummy to example<br />No breaking changes or bugs were noted. The API update is backwards compatible.</p>
</blockquote>
<p>A renamed endpoint could absolutely be breaking. After a few iterations, I settled on a prompt line like: <code>APIs must be backwards compatible which includes path changes, if they are not it should be highlighted in the summary.</code> After that, I got much more reliable callouts for path changes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699333938913/e9f8a6d5-414d-48ff-837f-6771847f1380.png" alt class="image--center mx-auto" /></p>
<p>I also tried to get bedrock/Claude to detect misspellings in code. For example, I defined an endpoint called <code>/soemthing</code> ... I could not find a prompt combination that would identify that misspelling... and in fact, in the summaries, it actually <em>corrected</em> it (which is VERY BAD) 😬</p>
<blockquote>
<p>The release adds a new API and Lambda function. A new /something endpoint was added to the API without breaking backwards compatibility.</p>
</blockquote>
<p>After installing it at work and running it for a day I asked my colleagues for feedback on the accuracy... and the feedback was very positive but it wasn't perfect.</p>
<p>For example...</p>
<blockquote>
<p>The XXXXXX repo in the dev environment released changes on 2023-11-06T22:26:41.517Z. It includes 1 commit which adds a new '/cognito/revoke' endpoint that could break backwards compatibility if clients are not updated. No other major changes or risks noted.</p>
</blockquote>
<p>New endpoints are rarely a breaking change. Back to the drawing board, I guess 😅</p>
<h2 id="heading-future-directions-and-conclusion">Future Directions and Conclusion</h2>
<p>As we continue to explore the intersection of generative AI and DevOps, I can see a lot of potential for a GenerativeAI+Serverless Deployer Bot. The integration of AI-driven summaries for commits and releases is just the beginning. The future is poised for a host of innovative features that could transform CI/CD pipelines and development workflows, making them more efficient and intelligent.</p>
<h3 id="heading-expanding-ai-capabilities-in-devops">Expanding AI Capabilities in DevOps</h3>
<ul>
<li><p><strong>Automated Code Review Assistance</strong>: By refining our prompts, we could extend the Deployer Bot's functionality to include automated code reviews, where the bot could provide preliminary feedback on pull requests, analyzing code for style, complexity, and even security vulnerabilities.</p>
</li>
<li><p><strong>Dynamic Troubleshooting Guides</strong>: Generative AI could be harnessed to create real-time troubleshooting guides based on the errors and logs encountered during builds or deployments, providing developers with immediate, context-specific solutions.</p>
</li>
<li><p><strong>Predictive Analytics for CI/CD</strong>: Leveraging historical data, the bot could predict potential bottlenecks and suggest optimizations in the CI/CD pipeline, leading to preemptive resource management and smoother release cycles.</p>
</li>
<li><p><strong>Personalized Developer Assistance</strong>: AI could be programmed to learn individual developer preferences and work patterns, offering customized tips, reminders, and resources to enhance productivity.</p>
</li>
<li><p><strong>Enhanced Onboarding</strong>: For new team members, the Deployer Bot could become an on-demand mentor, explaining CI/CD processes, and codebase navigation, and providing answers to common questions through an interactive AI-driven chat.</p>
</li>
<li><p><strong>AI-Powered Testing and Quality Assurance</strong>: Integrating AI to analyze test results could lead to quicker identification of flaky tests and provide insights on test coverage and quality, potentially predicting which parts of the code are most likely to fail.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>The integration of generative AI into the Deployer Bot represents a significant leap forward for DevOps teams. It is a testament to the transformative potential of AI when applied with precision and creativity. The Deployer Bot, once a mere facilitator of notifications, has evolved into a sophisticated assistant that enhances decision-making and streamlines workflows. Looking ahead, I am excited about the prospect of a more proactive, AI-powered assistant that not only informs but also predicts and strategizes, becoming an indispensable ally in the fast-paced world of software development.</p>
<p>The current capabilities of the Deployer Bot lay the foundation for these advancements. I'm sure this will not be my last post on the matter. My recent trip to EDA Day in Nashville gave me a lot of inspiration.</p>
]]></content:encoded></item><item><title><![CDATA[From Code to Conversation: Bridging GitHub Actions and Slack with CDK]]></title><description><![CDATA[In this post, we're diving into the powerful world of automation using the AWS Cloud Development Kit (CDK) to create a serverless-backed Slack App. The goal? Seamlessly managing application deployments through GitHub Deployments. Here's a glimpse int...]]></description><link>https://martzmakes.com/cdk-slackbot</link><guid isPermaLink="true">https://martzmakes.com/cdk-slackbot</guid><category><![CDATA[aws-cdk]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[slack]]></category><category><![CDATA[Devops]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Mon, 21 Aug 2023 14:13:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1692566797026/c49a3a53-fcab-4879-8261-e40faf88b0ff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, we're diving into the powerful world of automation using the AWS Cloud Development Kit (CDK) to create a serverless-backed Slack App. The goal? Seamlessly managing application deployments through <a target="_blank" href="https://docs.github.com/en/free-pro-team@latest/rest/deployments/deployments?apiVersion=2022-11-28">GitHub Deployments</a>. Here's a glimpse into how it all ties together:</p>
<ul>
<li><p><strong>GitHub Deployments</strong> notifies our serverless app, which consists of an AWS API Gateway Rest API backed by multiple lambda functions. One of these lambdas gets invoked by the GitHub Deployments webhook to handle deployment status updates.</p>
</li>
<li><p>Once informed, our bot relays these updates to a Slack channel.</p>
</li>
<li><p>Upon a successful deployment, the bot prompts users with, "Do you want to deploy this to the next environment?"</p>
</li>
<li><p>Users have the option to greenlight this deployment or reject it.</p>
</li>
<li><p>And when it comes to the all-important production environment, only selected approvers can make the final decision. Others can voice their views through a "vote".</p>
</li>
</ul>
<p>The best part? <strong><em>Although our demonstration employs CDK, this framework isn't limited to CDK deployments.</em></strong> You can adapt it for any project deployable via GitHub Actions, making it a flexible tool for various deployment needs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692631031067/737541c9-7ed3-4f80-ba3b-924c2d8652c5.png" alt class="image--center mx-auto" /></p>
<p>Here's a structured flow of our approach:</p>
<ol>
<li><p>Setting up a placeholder for secret tokens.</p>
</li>
<li><p>Constructing the CDK App with key components: RestApi, DynamoDB, and several lambdas.</p>
</li>
<li><p>Safeguarding the GitHub access token and Slack Bot token inside our placeholder.</p>
</li>
<li><p>Integrating the GitHub Deployment Webhook.</p>
</li>
<li><p>Ensuring seamless Slack interactions.</p>
</li>
<li><p>Incorporating GitHub Deployments within our GitHub Actions workflows.</p>
</li>
</ol>
<p>Eager to dive into the code? Check out the project on <a target="_blank" href="https://github.com/martzcodes/blog-cicd-slackbot">GitHub: https://github.com/martzcodes/blog-cicd-slackbot</a>.</p>
<p><strong>Note</strong>: This project has multiple layers, and we won't be delving into each line of code. If you find any part lacking, don't be shy. Drop your questions in the comments or catch me on Mastodon at <a target="_blank" href="https://awscommunity.social/@martzcodes">https://awscommunity.social/@martzcodes</a>.</p>
<h2 id="heading-create-a-placeholder-secret">Create a Placeholder Secret</h2>
<p><strong>Why a Placeholder?</strong>: Before diving in, you might wonder why we're setting placeholders. When working with AWS CDK, there's a tendency to automate the creation of secrets. However, I've found this method can sometimes reset these secrets unintentionally. And while we could define the values directly in CloudFormation, that would expose them in plaintext in the synthesized template. This is where manually set placeholders come in handy — they ensure our secrets remain secret.</p>
<p>Let's get started:</p>
<ol>
<li><p>Head over to the AWS Console and <a target="_blank" href="https://us-east-1.console.aws.amazon.com/secretsmanager/newsecret?region=us-east-1">create a new Secret</a>.</p>
</li>
<li><p>Choose the "Other type of secret" option.</p>
</li>
<li><p>Toggle over to the <code>Plaintext</code> option under the Key/value pairs tab.</p>
</li>
<li><p>For the value, enter: <code>{"SLACK_TOKEN":"xoxb-placeholder","GITHUB_TOKEN":"github_pat_placeholder"}</code>.</p>
</li>
<li><p>Click on "Next".</p>
</li>
<li><p>Name your secret <code>slackbot-deployer</code>.</p>
</li>
<li><p>Proceed by clicking "Next".</p>
</li>
<li><p>Opt for "No rotation" and click "Next".</p>
</li>
<li><p>Finally, click on "Store" to save the secret.</p>
</li>
</ol>
<p>Once saved, refreshing the secrets list will show you the secret's details. Look out for the secret ARN, which will appear something like this: <code>arn:aws:secretsmanager:us-east-1:123456789012:secret:slackbot-deployer-XYZ123</code>. Keep this ARN handy, as we'll integrate it into our CDK App in the subsequent steps.</p>
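<p>If you'd rather script the placeholder than click through the console, here's a minimal sketch using the AWS SDK for JavaScript v3 (assuming Node 18+ and credentials for the target account):</p>
<pre><code class="lang-typescript">import {
  SecretsManagerClient,
  CreateSecretCommand,
} from "@aws-sdk/client-secrets-manager";

const sm = new SecretsManagerClient({});
const res = await sm.send(
  new CreateSecretCommand({
    Name: "slackbot-deployer",
    // placeholder values only; we'll paste the real tokens in later
    SecretString: JSON.stringify({
      SLACK_TOKEN: "xoxb-placeholder",
      GITHUB_TOKEN: "github_pat_placeholder",
    }),
  })
);
console.log(res.ARN); // this is the ARN to keep handy for the CDK app
</code></pre>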
<h2 id="heading-deploying-the-cdk-app"><strong>Deploying the CDK App</strong></h2>
<p>Time to create our CDK App! If you're looking to speed things up, grab the <a target="_blank" href="https://github.com/martzcodes/blog-cicd-slackbot">example code from my GitHub</a>. Alternatively, for the DIY enthusiasts, start a CDK project from the ground up. Navigate to your desired project folder and initiate the project with:</p>
<pre><code class="lang-bash">npx cdk init --language typescript
</code></pre>
<p>This command spins up a CDK version 2 project. If you've been working with a globally installed version 1 CDK, it's time to uninstall it and stick with <code>npx</code>.</p>
<p>Once our app is initialized, we're going to make changes to the <code>bin/blog-cicd-slackbot.ts</code> file to introduce necessary configurations.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> oidcs: Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">string</span>&gt; = {
  test: <span class="hljs-string">"arn:aws:iam::922113822777:role/GitHubOidcRole"</span>,
  prod: <span class="hljs-string">"arn:aws:iam::349520124959:role/GitHubOidcRole"</span>,
};
<span class="hljs-keyword">const</span> nextEnvs: Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">string</span>&gt; = {
  dev: <span class="hljs-string">"test"</span>,
  test: <span class="hljs-string">"prod"</span>,
};

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> cdk.App();
<span class="hljs-keyword">new</span> BlogCicdSlackbotStack(app, <span class="hljs-string">"BlogCicdSlackbotStack"</span>, {
  nextEnvs,
  oidcs,
  secretArn: <span class="hljs-string">"YOUR SECRET ARN HERE"</span>,
});
</code></pre>
<p>This configuration sets the stage for our environment flow and the OIDC Roles, enabling GitHub Actions to deploy CDK applications without saving secrets in GitHub. For a hands-on demonstration of crafting these OIDC roles with CDK, check out the Construct in <code>lib/github-oidc.ts</code>. My roles are provisioned externally, so that construct is commented out in the repo, but it should offer you a practical reference.</p>
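<p>For a rough idea of what that Construct involves, here's a hedged sketch of a GitHub OIDC provider and role in CDK. The <code>repo:your-org/*</code> trust condition is an assumption for illustration; scope it to your own org and repos, and treat <code>lib/github-oidc.ts</code> as the authoritative version:</p>
<pre><code class="lang-typescript">import {
  OpenIdConnectProvider,
  Role,
  WebIdentityPrincipal,
} from "aws-cdk-lib/aws-iam";

// inside a Stack or Construct; a sketch, not the repo's exact code
const provider = new OpenIdConnectProvider(this, "GithubOidcProvider", {
  url: "https://token.actions.githubusercontent.com",
  clientIds: ["sts.amazonaws.com"],
});
new Role(this, "GitHubOidcRole", {
  roleName: "GitHubOidcRole",
  assumedBy: new WebIdentityPrincipal(provider.openIdConnectProviderArn, {
    StringEquals: {
      "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
    },
    StringLike: {
      // assumption: restrict which repos may assume the role
      "token.actions.githubusercontent.com:sub": "repo:your-org/*",
    },
  }),
  // attach whatever deploy permissions your stacks actually need here
});
</code></pre>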
<h3 id="heading-create-a-restapi"><strong>Create a RestAPI</strong></h3>
<p>Got the starter code? Jump ahead to "Deploy the App". If not, keep reading.</p>
<p>Let's tweak the <code>lib/blog-cicd-slackbot-stack.ts</code> file, first defining the Stack's Prop interface to resonate with the bin file. Here, we'll amplify the default <code>StackProps</code> interface by integrating three additional attributes:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> BlogCicdSlackbotStackProps <span class="hljs-keyword">extends</span> cdk.StackProps {
  nextEnvs: Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">string</span>&gt;;
  oidcs: Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">string</span>&gt;;
  secretArn: <span class="hljs-built_in">string</span>;
}
</code></pre>
<p>Then, modify the constructor like so:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: BlogCicdSlackbotStackProps</span>) {
</code></pre>
<p>Our next move is forging the RestApi and carving out a <code>slack/</code> resource:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> api = <span class="hljs-keyword">new</span> RestApi(<span class="hljs-built_in">this</span>, <span class="hljs-string">"BlogCicdSlackbotApi"</span>, {
  deployOptions: {
    dataTraceEnabled: <span class="hljs-literal">true</span>,
    tracingEnabled: <span class="hljs-literal">true</span>,
    metricsEnabled: <span class="hljs-literal">true</span>,
  },
  description: <span class="hljs-string">`API for BlogCicdSlackbotApi`</span>,
  endpointConfiguration: {
    types: [EndpointType.REGIONAL],
  },
});
<span class="hljs-keyword">const</span> slackResource = api.root.addResource(<span class="hljs-string">"slack"</span>);
</code></pre>
<p>Let's establish our default lambda environment and properties (the <code>table</code> and <code>secret</code> referenced here are created in the webhook section later in this post):</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> environment = {
  OIDCS: <span class="hljs-built_in">JSON</span>.stringify(oidcs),
  SECRET_ARN: secret.secretArn,
  NEXT_ENVS: <span class="hljs-built_in">JSON</span>.stringify(nextEnvs),
  TABLE_NAME: table.tableName,
};

<span class="hljs-keyword">const</span> lambdaProps = {
  runtime: Runtime.NODEJS_18_X,
  memorySize: <span class="hljs-number">1024</span>,
  timeout: cdk.Duration.seconds(<span class="hljs-number">30</span>),
  environment,
};
</code></pre>
<p><strong>Remember</strong>: Not all our lambda functions will harness the secret/table and related functionalities. We're only granting access to lambdas if they genuinely require it. Merely knowing the SECRET_ARN is harmless, though; reading the secret's value still requires IAM permissions.</p>
<p>Wrap it up by creating the lambda and its endpoint:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> slackAction = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"SlackActionFn"</span>, {
  entry: <span class="hljs-string">"lib/lambda/api/slack-action.ts"</span>,
  ...lambdaProps,
});
slackResource.addResource(<span class="hljs-string">"action"</span>).addMethod(
  <span class="hljs-string">"POST"</span>,
  <span class="hljs-keyword">new</span> LambdaIntegration(slackAction)
);
</code></pre>
<p>Peek into my lambda's handler code present in <code>lib/lambda/api/slack-action.ts</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { APIGatewayEvent } <span class="hljs-keyword">from</span> <span class="hljs-string">"aws-lambda"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> (event: APIGatewayEvent) =&gt; {
  <span class="hljs-keyword">const</span> body = <span class="hljs-built_in">JSON</span>.parse(event.body || <span class="hljs-string">"{}"</span>);
  <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">JSON</span>.stringify({body}, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>));

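  <span class="hljs-comment">// Slack's Events API URL verification sends a one-time "challenge" value;</span>
  <span class="hljs-comment">// echoing it back in the response body completes the endpoint handshake</span>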
  <span class="hljs-keyword">return</span> {
    statusCode: <span class="hljs-number">200</span>,
    body: body.challenge,
  };
};
</code></pre>
<p><strong>Heads Up!</strong>: It's essential to have <code>esbuild</code> and <code>aws-lambda</code> types added to your project. Here's how:</p>
<pre><code class="lang-bash">npm i --save-dev esbuild @types/aws-lambda
</code></pre>
<p>This ensures CDK avoids invoking a Docker container for lambda bundling.</p>
<h3 id="heading-deploy-the-app"><strong>Deploy the App</strong></h3>
<p>Everything's set? Deploy the app! Once done, you'll receive the API Url as an Output resembling:</p>
<pre><code class="lang-bash">Outputs:
BlogCicdSlackbotStack.BlogCicdSlackbotApiEndpointCDDA7E36 = https://xicnr82c7a.execute-api.us-east-1.amazonaws.com/prod/
</code></pre>
<p>Jot down this URL (inclusive of <code>/prod/</code>) – we'll integrate it into the Slack App's manifest soon.</p>
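<p>As an optional smoke test before wiring up Slack, you can POST a fake challenge to the endpoint (a quick sketch using Node 18's built-in <code>fetch</code>; substitute your own API URL):</p>
<pre><code class="lang-typescript">// replace with the URL from your stack's output
const apiUrl = "https://xicnr82c7a.execute-api.us-east-1.amazonaws.com/prod";

const res = await fetch(`${apiUrl}/slack/action`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ type: "url_verification", challenge: "test-123" }),
});
console.log(res.status, await res.text()); // expect: 200 test-123
</code></pre>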
<h2 id="heading-gathering-the-secrets"><strong>Gathering the Secrets</strong></h2>
<p>Before diving deep, let's ensure we have all the secrets we need for this project safely tucked away in our placeholder Secret.</p>
<p>We'll retrieve two primary secrets:</p>
<ol>
<li><p><strong>GitHub Personal Access Token</strong>: Enables our application to interact with GitHub Actions.</p>
</li>
<li><p><strong>Slack App Bot Token</strong>: Helps us create and edit Slack messages.</p>
</li>
</ol>
<h3 id="heading-github-personal-access-token"><strong>GitHub Personal Access Token</strong></h3>
<p>For this, we'll employ GitHub's fine-grained tokens, which offer precise control over permissions.</p>
<ol>
<li><p>Head over to the <a target="_blank" href="https://github.com/settings/tokens?type=beta">GitHub tokens page</a>.</p>
</li>
<li><p>Remember, these tokens come with an expiration which you can set for up to a year.</p>
</li>
<li><p>If you're working within an organization like me, set the "Resource owner" to your organization.</p>
</li>
<li><p>Grant access to <strong>All repositories</strong> (both public and private).</p>
</li>
<li><p>Then, ensure you grant the following repository-level access:</p>
<ul>
<li><p><strong>Actions</strong>: Read &amp; Write</p>
</li>
<li><p><strong>Deployments</strong>: Read</p>
</li>
<li><p><strong>Environments</strong>: Read</p>
</li>
<li><p><strong>Metadata</strong>: Read (<em>Mandatory</em>)</p>
</li>
</ul>
</li>
</ol>
<p>Once you've completed these steps, you can copy your access token. It'll look something like this: <code>github_pat_BLAH</code>. Keep this safe – we'll need it shortly.</p>
<h3 id="heading-creating-a-slack-app"><strong>Creating a Slack App</strong></h3>
<p>Time to nab the Slack App's Bot token:</p>
<ol>
<li><p>Start by visiting the <a target="_blank" href="https://api.slack.com/apps">Slack API page</a>.</p>
</li>
<li><p>Click on "Create New App".</p>
</li>
<li><p>Opt to create an app using a manifest:</p>
</li>
</ol>
<pre><code class="lang-json">{
    <span class="hljs-attr">"display_information"</span>: {
        <span class="hljs-attr">"name"</span>: <span class="hljs-string">"deploy-bot"</span>
    },
    <span class="hljs-attr">"features"</span>: {
        <span class="hljs-attr">"bot_user"</span>: {
            <span class="hljs-attr">"display_name"</span>: <span class="hljs-string">"deployer"</span>,
            <span class="hljs-attr">"always_online"</span>: <span class="hljs-literal">true</span>
        },
        <span class="hljs-attr">"slash_commands"</span>: [
            {
                <span class="hljs-attr">"command"</span>: <span class="hljs-string">"/deployer_add_auth"</span>,
                <span class="hljs-attr">"url"</span>: <span class="hljs-string">"REPLACE THIS WITH YOUR APIGW URL/slack/add-approver"</span>,
                <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Add an approver for Deployments"</span>,
                <span class="hljs-attr">"usage_hint"</span>: <span class="hljs-string">"@user"</span>,
                <span class="hljs-attr">"should_escape"</span>: <span class="hljs-literal">true</span>
            },
            {
                <span class="hljs-attr">"command"</span>: <span class="hljs-string">"/deployer_list_auth"</span>,
                <span class="hljs-attr">"url"</span>: <span class="hljs-string">"REPLACE THIS WITH YOUR APIGW URL/slack/list-approvers"</span>,
                <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Show who can approve deployments"</span>,
                <span class="hljs-attr">"should_escape"</span>: <span class="hljs-literal">false</span>
            },
            {
                <span class="hljs-attr">"command"</span>: <span class="hljs-string">"/deployer_remove_auth"</span>,
                <span class="hljs-attr">"url"</span>: <span class="hljs-string">"REPLACE THIS WITH YOUR APIGW URL/slack/remove-approver"</span>,
                <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Remove someone as an approver"</span>,
                <span class="hljs-attr">"usage_hint"</span>: <span class="hljs-string">"@user"</span>,
                <span class="hljs-attr">"should_escape"</span>: <span class="hljs-literal">true</span>
            }
        ]
    },
    <span class="hljs-attr">"oauth_config"</span>: {
        <span class="hljs-attr">"scopes"</span>: {
            <span class="hljs-attr">"user"</span>: [
                <span class="hljs-string">"users.profile:read"</span>
            ],
            <span class="hljs-attr">"bot"</span>: [
                <span class="hljs-string">"app_mentions:read"</span>,
                <span class="hljs-string">"channels:history"</span>,
                <span class="hljs-string">"chat:write"</span>,
                <span class="hljs-string">"chat:write.customize"</span>,
                <span class="hljs-string">"chat:write.public"</span>,
                <span class="hljs-string">"emoji:read"</span>,
                <span class="hljs-string">"groups:history"</span>,
                <span class="hljs-string">"groups:read"</span>,
                <span class="hljs-string">"groups:write"</span>,
                <span class="hljs-string">"im:history"</span>,
                <span class="hljs-string">"im:read"</span>,
                <span class="hljs-string">"im:write"</span>,
                <span class="hljs-string">"incoming-webhook"</span>,
                <span class="hljs-string">"pins:read"</span>,
                <span class="hljs-string">"pins:write"</span>,
                <span class="hljs-string">"reactions:read"</span>,
                <span class="hljs-string">"reactions:write"</span>,
                <span class="hljs-string">"users:read"</span>,
                <span class="hljs-string">"users.profile:read"</span>,
                <span class="hljs-string">"commands"</span>
            ]
        }
    },
    <span class="hljs-attr">"settings"</span>: {
        <span class="hljs-attr">"event_subscriptions"</span>: {
            <span class="hljs-attr">"request_url"</span>: <span class="hljs-string">"REPLACE THIS WITH YOUR APIGW URL/slack/action"</span>,
            <span class="hljs-attr">"bot_events"</span>: [
                <span class="hljs-string">"app_mention"</span>
            ]
        },
        <span class="hljs-attr">"interactivity"</span>: {
            <span class="hljs-attr">"is_enabled"</span>: <span class="hljs-literal">true</span>,
            <span class="hljs-attr">"request_url"</span>: <span class="hljs-string">"REPLACE THIS WITH YOUR APIGW URL/slack/interaction"</span>
        },
        <span class="hljs-attr">"org_deploy_enabled"</span>: <span class="hljs-literal">false</span>,
        <span class="hljs-attr">"socket_mode_enabled"</span>: <span class="hljs-literal">false</span>,
        <span class="hljs-attr">"token_rotation_enabled"</span>: <span class="hljs-literal">false</span>
    }
}
</code></pre>
<ol>
<li><p>Once the app's up and running, install it in your workspace.</p>
</li>
<li><p>Navigate to the <code>OAuth &amp; Permissions</code> page to fetch the <code>Bot User OAuth Token</code>.</p>
</li>
</ol>
<h3 id="heading-update-the-placeholder-secret"><strong>Update the Placeholder Secret</strong></h3>
<p>Armed with both tokens, revisit your placeholder Secret on the AWS Console. Here's what you do:</p>
<ol>
<li><p>Click on <code>Retrieve secret value</code>.</p>
</li>
<li><p>Choose <code>Edit</code>.</p>
</li>
<li><p>Enter the tokens in their respective key-value fields.</p>
</li>
<li><p><strong>Don't forget to hit "Save"</strong> after inputting both tokens.</p>
</li>
</ol>
<p>Get ready, the exciting part is about to start!</p>
<h2 id="heading-setting-up-the-github-deployment-webhook-integration">Setting Up the GitHub Deployment Webhook Integration</h2>
<p>Integrating the GitHub Deployment Status webhook can be a game-changer. Not only does it ensure timely Slack notifications, but it also helps maintain a reliable deployment history in DynamoDB.</p>
<h3 id="heading-1-provisioning-a-dynamodb-table"><strong>1. Provisioning a DynamoDB Table</strong></h3>
<p>Firstly, we need to create a table where deployments will be recorded:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> table = <span class="hljs-keyword">new</span> Table(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Table"</span>, {
  partitionKey: { name: <span class="hljs-string">"pk"</span>, <span class="hljs-keyword">type</span>: AttributeType.STRING },
  sortKey: { name: <span class="hljs-string">"sk"</span>, <span class="hljs-keyword">type</span>: AttributeType.STRING },
  billingMode: BillingMode.PAY_PER_REQUEST,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  timeToLiveAttribute: <span class="hljs-string">"ttl"</span>,
});
</code></pre>
<p>This table is straightforward. Given that our traffic isn't predictable, opting for the Pay Per Request model ensures cost-effectiveness.</p>
<h3 id="heading-2-incorporating-the-secret"><strong>2. Incorporating the Secret</strong></h3>
<p>Before we proceed, it's essential to incorporate the secret saved from the earlier stages:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> secret = Secret.fromSecretCompleteArn(<span class="hljs-built_in">this</span>, <span class="hljs-string">`BlogCicdSlackbotSecret`</span>, secretArn);
</code></pre>
<h3 id="heading-3-lambda-andamp-api-gateway-endpoint"><strong>3. Lambda &amp; API Gateway Endpoint</strong></h3>
<p>As there’s only one GitHub endpoint, bundling the creation of both the Lambda function and its associated resource streamlines the process. Make sure the Lambda function can access both DynamoDB and Secrets Manager:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> githubWebhookFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"GithubWebhookFn"</span>, {
  entry: <span class="hljs-string">"lib/lambda/api/github-webhook.ts"</span>,
  ...lambdaProps,
});
table.grantReadWriteData(githubWebhookFn);
secret.grantRead(githubWebhookFn);
api.root
  .addResource(<span class="hljs-string">"github"</span>)
  .addMethod(<span class="hljs-string">"POST"</span>, <span class="hljs-keyword">new</span> LambdaIntegration(githubWebhookFn));
</code></pre>
<p>Dive into <code>lib/lambda/api/github-webhook.ts</code> to examine the logic. While the file may seem hefty, the bulk of it centers around Slack message formatting.</p>
<h4 id="heading-storing-deployment-details">- Storing Deployment Details:</h4>
<p>We extract vital details from the event to be logged in DynamoDB:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> {
  state: status,
  environment: env,
  created_at: createdAt,
  updated_at: updatedAt,
  target_url: url,
} = body.deployment_status;
<span class="hljs-keyword">const</span> { id: deploymentId, ref: branch, sha } = body.deployment;
<span class="hljs-keyword">const</span> repo = body.repository.name;
<span class="hljs-keyword">const</span> author = body.deployment_status.creator.login;
<span class="hljs-keyword">const</span> owner = body.repository.owner.login;
</code></pre>
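<p>For reference, those destructured fields sit in the webhook body roughly like this (a trimmed illustration with made-up values; real payloads carry many more fields):</p>
<pre><code class="lang-typescript">// trimmed shape of a deployment_status webhook body (illustrative only)
const body = {
  deployment_status: {
    state: "success", // becomes `status`
    environment: "dev", // becomes `env`
    created_at: "2023-08-20T12:00:00Z",
    updated_at: "2023-08-20T12:05:00Z",
    target_url: "https://github.com/martzcodes/blog-cicd-slackbot/actions",
    creator: { login: "martzcodes" },
  },
  deployment: { id: 123456789, ref: "main", sha: "abc123" },
  repository: { name: "blog-cicd-slackbot", owner: { login: "martzcodes" } },
};
</code></pre>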
<h4 id="heading-retrieving-slack-token">- Retrieving Slack Token:</h4>
<p>Fetch the Slack token securely from Secrets Manager:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> secret = <span class="hljs-keyword">await</span> sm.send(
  <span class="hljs-keyword">new</span> GetSecretValueCommand({
    SecretId: process.env.SECRET_ARN,
  })
);
<span class="hljs-keyword">const</span> slackToken = <span class="hljs-built_in">JSON</span>.parse(secret.SecretString || <span class="hljs-string">""</span>).SLACK_TOKEN;
</code></pre>
<h4 id="heading-managing-deployment-statuses">- Managing Deployment Statuses:</h4>
<p>When a new deployment status hits, it's crucial to determine its relative standing. We do this by fetching the most recent deployment status and comparing the <code>deploymentId</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> pk = <span class="hljs-string">`REPO#<span class="hljs-subst">${repo}</span>#ENV#<span class="hljs-subst">${env}</span>`</span>.toUpperCase();
<span class="hljs-keyword">const</span> sk = <span class="hljs-string">"LATEST"</span>;

<span class="hljs-comment">// get deployment status from dynamodb</span>
<span class="hljs-keyword">const</span> ddbRes = <span class="hljs-keyword">await</span> ddbDocClient.send(
  <span class="hljs-keyword">new</span> GetCommand({
    TableName: process.env.TABLE_NAME,
    Key: {
      pk,
      sk,
    },
  })
);
</code></pre>
<p>If the <code>deploymentId</code> from the incoming webhook doesn't match the last "LATEST" item, it implies a newer deployment has superseded it. As a result, we should:</p>
<ol>
<li><p>Update the previous Slack message by removing its action buttons.</p>
</li>
<li><p>Append a note indicating <code>Automatic rejection by subsequent deployment</code>.</p>
</li>
<li><p>Archive the last item under a unique Sort Key, ensuring it’s retrievable but not in the immediate queue.</p>
</li>
</ol>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> slackRes = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"https://slack.com/api/chat.update"</span>, {
  method: <span class="hljs-string">"POST"</span>,
  body: <span class="hljs-built_in">JSON</span>.stringify({
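    <span class="hljs-comment">// note: the Slack channel ID is hardcoded in this example; substitute your own</span>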
    channel: <span class="hljs-string">"C04KW81UAAV"</span>,
    ts: existingItem.slackTs,
    blocks: oldBlocks,
  }),
  headers: {
    <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span>,
    Authorization: <span class="hljs-string">`Bearer <span class="hljs-subst">${slackToken}</span>`</span>,
  },
});
<span class="hljs-keyword">await</span> slackRes.json();
<span class="hljs-keyword">await</span> ddbDocClient.send(
  <span class="hljs-keyword">new</span> PutCommand({
    TableName: process.env.TABLE_NAME,
    Item: {
      ...existingItem,
      sk: <span class="hljs-string">`DEPLOYMENT#<span class="hljs-subst">${existingItem.deploymentId}</span>`</span>.toUpperCase(),
      blocks: <span class="hljs-built_in">JSON</span>.stringify(oldBlocks),
    },
  })
);
</code></pre>
<p>New deployments, on the other hand, engage the <code>chat.postMessage</code> Slack API, furnishing essential deployment details.</p>
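<p>That path boils down to something like this sketch (the channel ID and message text are stand-ins; the real handler builds full Block Kit blocks):</p>
<pre><code class="lang-typescript">// sketch of posting a brand-new deployment message
const postRes = await fetch("https://slack.com/api/chat.postMessage", {
  method: "POST",
  body: JSON.stringify({
    channel: "C04KW81UAAV", // stand-in channel ID
    text: `${repo} deployment to ${env}: ${status}`,
  }),
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${slackToken}`,
  },
});
const { ts } = await postRes.json(); // message timestamp, kept for later chat.update edits
</code></pre>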
<p>For deployments matching the "LATEST" <code>deploymentId</code>, it’s crucial to ensure the incoming status isn’t redundant. Successful deployments headed for another environment get approve/reject action buttons. This updated deployment, now the newest, gets stored under the "LATEST" sort key, paired with the Slack message timestamp for subsequent edits.</p>
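<p>In rough form, reusing names from the snippets above (a hypothetical sketch, not the repo's exact code), the guard and the "LATEST" write might look like:</p>
<pre><code class="lang-typescript">// skip duplicate webhook deliveries of a status we've already recorded
const existingItem = ddbRes.Item;
if (existingItem?.deploymentId === deploymentId &amp;&amp; existingItem?.status === status) {
  return { statusCode: 200, body: "" };
}

// store this deployment as the new "LATEST", with the Slack ts for later edits
await ddbDocClient.send(
  new PutCommand({
    TableName: process.env.TABLE_NAME,
    Item: { pk, sk: "LATEST", deploymentId, status, repo, env, branch, slackTs: ts },
  })
);
</code></pre>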
<p>Lastly, successful deployments are logged in a meta item, an inventory of all triumphant deployments segregated by environment:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> envLatest = <span class="hljs-keyword">await</span> ddbDocClient.send(
    <span class="hljs-keyword">new</span> GetCommand({
      TableName: process.env.TABLE_NAME,
      Key: {
        pk: <span class="hljs-string">`LATEST`</span>,
        sk: <span class="hljs-string">`<span class="hljs-subst">${nextEnvs[env]}</span>`</span>,
      },
    })
  );
  <span class="hljs-keyword">const</span> updatedRepo = {
    url: item.url,
    sha: item.sha,
    deploymentId: item.deploymentId,
    deployedAt: <span class="hljs-built_in">Date</span>.now(),
    branch: item.branch,
    owner: item.owner,
  };
  <span class="hljs-keyword">if</span> (!envLatest.Item) {
    <span class="hljs-keyword">await</span> ddbDocClient.send(
      <span class="hljs-keyword">new</span> PutCommand({
        TableName: process.env.TABLE_NAME,
        Item: {
          pk: <span class="hljs-string">`LATEST`</span>,
          sk: <span class="hljs-string">`<span class="hljs-subst">${nextEnvs[env]}</span>`</span>,
          repos: {
            [repo]: updatedRepo,
          },
        },
      })
    );
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-comment">// Update existingItem by replacing the repo</span>
    <span class="hljs-keyword">await</span> ddbDocClient.send(
      <span class="hljs-keyword">new</span> UpdateCommand({
        TableName: process.env.TABLE_NAME,
        Key: {
          pk: <span class="hljs-string">`LATEST`</span>,
          sk: <span class="hljs-string">`<span class="hljs-subst">${nextEnvs[env]}</span>`</span>,
        },
        <span class="hljs-comment">// update the repos attribute</span>
        UpdateExpression: <span class="hljs-string">"SET repos.#repo = :repo"</span>,
        ExpressionAttributeNames: {
          <span class="hljs-string">"#repo"</span>: repo,
        },
        ExpressionAttributeValues: {
          <span class="hljs-string">":repo"</span>: updatedRepo,
        },
      })
    );
  }
</code></pre>
<p>The structuring of this meta item is worth noting. The deployment details nestle within a <code>repos</code> map in DynamoDB. Leveraging <code>UpdateCommand</code> helps pinpoint updates to specific repositories, ensuring that new build data doesn't accidentally overwrite unrelated data due to race conditions.</p>
<h3 id="heading-4-integrating-the-github-webhook"><strong>4. Integrating the GitHub Webhook</strong></h3>
<p>Navigate to your GitHub Organization's settings. Create a fresh webhook with the following attributes:</p>
<ul>
<li><p><strong>Payload URL</strong>: <code>your-apigateway-url/github</code></p>
</li>
<li><p><strong>Content type</strong>: <code>application/json</code></p>
</li>
<li><p><strong>SSL Verification</strong>: Enabled</p>
</li>
<li><p><strong>Events</strong>: Specifically opt for <code>Deployment statuses</code></p>
</li>
<li><p><strong>Status</strong>: Active</p>
</li>
</ul>
<p>Finalize your configurations. Now, every GitHub Deployment activates the Lambda, meticulously processing deployment data in line with your design.</p>
<h2 id="heading-integrating-slack-interactions">Integrating Slack Interactions</h2>
<p>Let's enhance the workflow and user experience by setting up Slack interactions for our deployment notifications.</p>
<h3 id="heading-1-establishing-the-interaction-endpoint"><strong>1. Establishing the Interaction Endpoint</strong></h3>
<p>To catch Slack interactions like button presses, we need a dedicated endpoint. Update the <code>lib/blog-cicd-slackbot-stack.ts</code> to introduce this Lambda function and endpoint:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> slackInteractiveFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"SlackInteractiveFn"</span>, {
  entry: <span class="hljs-string">"lib/lambda/api/slack-interactive.ts"</span>,
  ...lambdaProps,
});
table.grantReadWriteData(slackInteractiveFn);
secret.grantRead(slackInteractiveFn);
slackResource
  .addResource(<span class="hljs-string">"interaction"</span>)
  .addMethod(<span class="hljs-string">"POST"</span>, <span class="hljs-keyword">new</span> LambdaIntegration(slackInteractiveFn));
</code></pre>
<p>Though the handler for this function (<code>lib/lambda/api/slack-interactive.ts</code>) is extensive, it's primarily devoted to processing Slack messages and extracting information from the event.</p>
<h4 id="heading-decoding-the-slack-payload">- Decoding the Slack Payload:</h4>
<p>Given Slack's use of the x-www-form-urlencoded content type, manual decoding becomes imperative. Note that <code>decodeURIComponent</code> leaves the <code>+</code> characters that form-encoding uses for spaces, which is why the extraction below splits on <code>+</code> tokens:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> decodedString = <span class="hljs-built_in">decodeURIComponent</span>(event.body!);
<span class="hljs-keyword">const</span> jsonString = decodedString.replace(<span class="hljs-string">"payload="</span>, <span class="hljs-string">""</span>);
<span class="hljs-keyword">const</span> jsonObject = <span class="hljs-built_in">JSON</span>.parse(jsonString);
<span class="hljs-keyword">const</span> message = jsonObject.message;
<span class="hljs-keyword">const</span> approved = jsonObject.actions[<span class="hljs-number">0</span>].value === <span class="hljs-string">"approved"</span>;
<span class="hljs-keyword">const</span> repo = jsonObject.message.text.split(<span class="hljs-string">"Repo:*\n"</span>)[<span class="hljs-number">1</span>].split(<span class="hljs-string">"+"</span>)[<span class="hljs-number">0</span>];
<span class="hljs-keyword">const</span> env = jsonObject.message.text
  .split(<span class="hljs-string">"+deployment+to+"</span>)[<span class="hljs-number">1</span>]
  .split(<span class="hljs-string">"+by+"</span>)[<span class="hljs-number">0</span>];
<span class="hljs-keyword">const</span> authority = jsonObject.user.name; <span class="hljs-comment">// user who did the interaction</span>
<span class="hljs-keyword">const</span> branch = jsonObject.message.text.split(<span class="hljs-string">"Branch:*\n"</span>)[<span class="hljs-number">1</span>].split(<span class="hljs-string">"+"</span>)[<span class="hljs-number">0</span>];
</code></pre>
<h4 id="heading-fetching-tokens">- Fetching Tokens:</h4>
<p>This Lambda could require tokens for both GitHub and Slack. Thus, let’s retrieve them:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> secret = <span class="hljs-keyword">await</span> sm.send(
  <span class="hljs-keyword">new</span> GetSecretValueCommand({
    SecretId: process.env.SECRET_ARN,
  })
);
<span class="hljs-keyword">const</span> slackToken = <span class="hljs-built_in">JSON</span>.parse(secret.SecretString || <span class="hljs-string">""</span>).SLACK_TOKEN;
<span class="hljs-keyword">const</span> githubToken = <span class="hljs-built_in">JSON</span>.parse(secret.SecretString || <span class="hljs-string">""</span>).GITHUB_TOKEN;
</code></pre>
<h4 id="heading-retrieving-users-image">- Retrieving User's Image:</h4>
<p>To enrich our Slack messages with user details, fetch the user's profile picture via the Slack API:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> slackAuthorityRes = <span class="hljs-keyword">await</span> fetch(
  <span class="hljs-string">`https://slack.com/api/users.profile.get?user=<span class="hljs-subst">${jsonObject.user.id}</span>`</span>,
  {
    method: <span class="hljs-string">"GET"</span>,
    headers: {
      <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span>,
      Authorization: <span class="hljs-string">`Bearer <span class="hljs-subst">${slackToken}</span>`</span>,
    },
  }
);
<span class="hljs-keyword">const</span> slackAuthority = <span class="hljs-keyword">await</span> slackAuthorityRes.json();
<span class="hljs-keyword">const</span> userImg = slackAuthority.profile.image_24;
</code></pre>
<h4 id="heading-processing-slack-messages">- Processing Slack Messages:</h4>
<p>A series of operations then follows:</p>
<ol>
<li><p>Embed the user's profile picture, corresponding to their action (approve/reject).</p>
</li>
<li><p>Handle vote changes to ensure clarity in responses.</p>
</li>
<li><p>For deployments to "prod", cross-verify the user's authority against a list of approved users (see the sketch after this list).</p>
</li>
</ol>
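<p>That authority check might look roughly like the following hypothetical sketch; it assumes approvers are stored under a shared <code>APPROVERS</code> partition key, which is a guess at the table layout rather than the repo's exact shape:</p>
<pre><code class="lang-typescript">import { QueryCommand } from "@aws-sdk/lib-dynamodb";

// hypothetical: gate prod approvals on a stored approver list
if (env === "prod") {
  const approverRes = await ddbDocClient.send(
    new QueryCommand({
      TableName: process.env.TABLE_NAME,
      KeyConditionExpression: "pk = :pk",
      ExpressionAttributeValues: { ":pk": "APPROVERS" },
    })
  );
  const isApprover = (approverRes.Items || []).some(
    (item) =&gt; item.name === authority
  );
  if (!isApprover) {
    // leave the action buttons in place and annotate the message instead
  }
}
</code></pre>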
<p>On gaining approval:</p>
<ol>
<li><p>Update the LATEST deployment item status</p>
</li>
<li><p>Retrieve the workflow list for the repository, enabling us to identify the correct <code>workflowId</code></p>
</li>
<li><p>Initiate the deployment for the succeeding environment</p>
</li>
</ol>
<pre><code class="lang-typescript"><span class="hljs-keyword">await</span> ddbDocClient.send(
  <span class="hljs-keyword">new</span> PutCommand({ TableName: process.env.TABLE_NAME, Item: existingItem })
);
<span class="hljs-keyword">const</span> githubListWorkflowsRes = <span class="hljs-keyword">await</span> fetch(
  <span class="hljs-string">`https://api.github.com/repos/<span class="hljs-subst">${existingItem.owner}</span>/<span class="hljs-subst">${repo}</span>/actions/workflows`</span>,
  {
    method: <span class="hljs-string">"GET"</span>,
    headers: {
      Accept: <span class="hljs-string">"application/vnd.github+json"</span>,
      <span class="hljs-string">"X-GitHub-Api-Version"</span>: <span class="hljs-string">"2022-11-28"</span>,
      Authorization: <span class="hljs-string">`Bearer <span class="hljs-subst">${githubToken}</span>`</span>,
    },
  }
);
<span class="hljs-keyword">const</span> { workflows } = <span class="hljs-keyword">await</span> githubListWorkflowsRes.json();
<span class="hljs-keyword">const</span> workflow = workflows.find(
  <span class="hljs-function">(<span class="hljs-params">workflow: <span class="hljs-built_in">any</span></span>) =&gt;</span> workflow.name === <span class="hljs-string">"deploy-to-env"</span>
);
<span class="hljs-keyword">await</span> fetch(
  <span class="hljs-string">`https://api.github.com/repos/<span class="hljs-subst">${existingItem.owner}</span>/<span class="hljs-subst">${repo}</span>/actions/workflows/<span class="hljs-subst">${workflow.id}</span>/dispatches`</span>,
  {
    method: <span class="hljs-string">"POST"</span>,
    body: <span class="hljs-built_in">JSON</span>.stringify({
      ref: branch,
      inputs: {
        deploy_env: nextEnvs[env],
        oidc_role: oidcs[nextEnvs[env]],
      },
    }),
    headers: {
      Accept: <span class="hljs-string">"application/vnd.github+json"</span>,
      <span class="hljs-string">"X-GitHub-Api-Version"</span>: <span class="hljs-string">"2022-11-28"</span>,
      Authorization: <span class="hljs-string">`Bearer <span class="hljs-subst">${githubToken}</span>`</span>,
    },
  }
);
</code></pre>
<p>Concluding this segment, the Slack message is refreshed, and the updated information is stored in DynamoDB.</p>
<h3 id="heading-2-managing-approvers"><strong>2. Managing Approvers</strong></h3>
<p>The next phase involves facilitating the management of deployment approvers via Slack slash commands. Three distinct Lambdas will handle these functionalities – addition, removal, and listing of approvers.</p>
<p>The first approver bootstraps the list; after that, adding or removing approvers happens at the discretion of existing approvers. To glean details about an approver, like their name, email, or profile picture, the <code>add-approver</code> endpoint uses the Slack Token.</p>
<p>For those eager to dive into the implementation, the code is hosted in the following:</p>
<ul>
<li><p><code>lib/lambda/api/slack-add-approver.ts</code></p>
</li>
<li><p><code>lib/lambda/api/slack-list-approvers.ts</code></p>
</li>
<li><p><code>lib/lambda/api/slack-remove-approver.ts</code></p>
</li>
</ul>
<p>Though we're summarizing this section, remember that having a robust system of approval is paramount, especially for production deployments.</p>
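<p>As a rough illustration (hypothetical field names; the real shape lives in the files above), the <code>add-approver</code> handler might persist a record like:</p>
<pre><code class="lang-typescript">// hypothetical approver item in the same single-table design
await ddbDocClient.send(
  new PutCommand({
    TableName: process.env.TABLE_NAME,
    Item: {
      pk: "APPROVERS",
      sk: `USER#${slackUserId}`, // the @user passed to /deployer_add_auth
      name: profile.real_name, // pulled via the Slack users.profile.get API
      img: profile.image_24,
      addedBy: authority,
    },
  })
);
</code></pre>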
<h2 id="heading-setting-up-github-deployments-with-github-actions">Setting Up GitHub Deployments with GitHub Actions</h2>
<p>To harness the power of GitHub Deployments, let's configure some environments for your project.</p>
<h4 id="heading-1-configuring-environments"><strong>1. Configuring Environments:</strong></h4>
<p>First, head to your repository's settings page and open the Environments tab. For instance, the URL might resemble: <a target="_blank" href="https://github.com/martzcodes/blog-cicd-slackbot/settings/environments"><code>https://github.com/martzcodes/blog-cicd-slackbot/settings/environments</code></a>. Here, introduce a fresh environment for each stage you aim to monitor. Ensure their names correspond with your <code>nextEnvs</code> configuration object. For example, my setup included <code>dev</code>, <code>test</code>, and <code>prod</code>.</p>
<h4 id="heading-2-configuring-github-actions"><strong>2. Configuring GitHub Actions:</strong></h4>
<p>Now, ensure your deployment-focused GitHub Actions are set to employ GitHub Deployments. Some sample workflows are available at: <a target="_blank" href="https://github.com/martzcodes/blog-cicd-slackbot/tree/main/workflows">GitHub Sample Workflows</a>.</p>
<p>The <code>pipeline.yml</code> file, which springs into action upon commits to the <code>main</code> branch, facilitates continuous deployment to the dev environment. This action sets the stage for our Slack integration. Notably, this pipeline is named <code>Deploy</code> - a detail the GitHub webhook Lambda verifies.</p>
<p>The <code>deploy-to-env.yml</code> workflow is tailored to accept matching inputs. Both its <code>workflow_call</code> trigger (used by the <code>pipeline.yml</code> workflow) and its <code>workflow_dispatch</code> trigger (used by the bot) accept these inputs:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">inputs:</span>
  <span class="hljs-attr">deploy_env:</span>
    <span class="hljs-attr">description:</span> <span class="hljs-string">'Environment to deploy to'</span>
    <span class="hljs-attr">required:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-attr">oidc_role:</span>
    <span class="hljs-attr">description:</span> <span class="hljs-string">'OIDC Role to assume for deployment'</span>
    <span class="hljs-attr">required:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
</code></pre>
<p>Although your deployment steps could differ, it's crucial to encapsulate your deployment within status update stages:</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">start</span> <span class="hljs-string">deployment</span>
    <span class="hljs-attr">uses:</span> <span class="hljs-string">bobheadxi/deployments@v1.2.0</span>
    <span class="hljs-attr">id:</span> <span class="hljs-string">deployment</span>
    <span class="hljs-attr">with:</span>
      <span class="hljs-attr">step:</span> <span class="hljs-string">start</span>
      <span class="hljs-attr">env:</span> <span class="hljs-string">${{</span> <span class="hljs-string">inputs.deploy_env</span> <span class="hljs-string">}}</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">${{</span> <span class="hljs-string">inputs.deploy_env</span> <span class="hljs-string">}}</span> <span class="hljs-string">deploy</span>
    <span class="hljs-attr">env:</span>
      <span class="hljs-attr">DEPLOY_ENV:</span> <span class="hljs-string">${{</span> <span class="hljs-string">inputs.deploy_env</span> <span class="hljs-string">}}</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">npx</span> <span class="hljs-string">cdk</span> <span class="hljs-string">deploy</span> <span class="hljs-string">--ci</span> <span class="hljs-string">--require-approval</span> <span class="hljs-string">never</span> <span class="hljs-string">--concurrency</span> <span class="hljs-number">5</span> <span class="hljs-string">-v</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">update</span> <span class="hljs-string">deployment</span> <span class="hljs-string">status</span>
    <span class="hljs-attr">uses:</span> <span class="hljs-string">bobheadxi/deployments@v1.2.0</span>
    <span class="hljs-attr">with:</span>
      <span class="hljs-attr">step:</span> <span class="hljs-string">finish</span>
      <span class="hljs-attr">status:</span> <span class="hljs-string">${{</span> <span class="hljs-string">job.status</span> <span class="hljs-string">}}</span>
      <span class="hljs-attr">env:</span> <span class="hljs-string">${{</span> <span class="hljs-string">inputs.deploy_env</span> <span class="hljs-string">}}</span>
      <span class="hljs-attr">deployment_id:</span> <span class="hljs-string">${{</span> <span class="hljs-string">steps.deployment.outputs.deployment_id</span> <span class="hljs-string">}}</span>
</code></pre>
<p>The central idea here is that the <code>bobheadxi/deployments</code> action communicates with GitHub's API to register a deployment for the relevant environment. For a clearer perspective, a live example resides here: <a target="_blank" href="https://github.com/aws-community-projects/cicd/deployments">Live GitHub Example</a>.</p>
<h2 id="heading-live-demonstration">Live Demonstration</h2>
<p>Let's observe this integration in its full glory.</p>
<p><strong>1 - Initialization:</strong> Using the Slack slash command <code>/deployer_list_auth</code>, I'll confirm our approver list starts empty:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565275731/ece4617b-d28c-4f5f-8e79-92392fffca67.png" alt class="image--center mx-auto" /></p>
<p><strong>2 - Commencing a Deployment:</strong> I'll initiate a deployment in my dev environment, and after a brief wait an <code>in_progress</code> message surfaces:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565302536/7c035f0d-d855-426e-937f-91b2c65cc142.png" alt class="image--center mx-auto" /></p>
<p><strong>3 - Deployment Completion:</strong> On the successful completion of the deployment, our message updates, introducing action buttons. Given our next environment, "test", no approvers are mandated:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565321230/e5d99f2d-6a4a-48ee-8e0b-0ee23a9e9dc7.png" alt class="image--center mx-auto" /></p>
<p><strong>4 - Deployment Approval:</strong> Tapping the "Approve" button results in another message transformation, indicating approval along with the removal of the action buttons:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565333551/29578b18-0261-4153-ada8-8a21243c74e3.png" alt class="image--center mx-auto" /></p>
<p><strong>5 - Test Environment Deployment:</strong> Shortly, the <code>in_progress</code> message for the test environment arrives:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565346615/72c650d7-89ad-4f19-8a0a-122890832e2f.png" alt class="image--center mx-auto" /></p>
<p><strong>6 - Rejection Attempt:</strong> After the test environment deploys successfully and the buttons appear, I try to reject it. As an outsider to the approver list, my "Reject" action prompts an updated message that retains the buttons:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565357257/0dfab636-96f5-4807-9e9c-3257061ee206.png" alt class="image--center mx-auto" /></p>
<p><strong>7 - Adding to the Approver List:</strong> I'll employ the <code>/deployer_add_auth</code> command to add myself to the approver list:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565369329/1eb6c1d4-4dee-4bef-958d-5d5738bf069c.png" alt class="image--center mx-auto" /></p>
<p><strong>8 - Final Rejection:</strong> After clicking the "Reject" button again, the deployment is successfully rejected, with our message updated accordingly:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692565378150/554b05c4-3757-4fb5-b371-40e012266ef9.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The intertwining of continuous integration and deployment with communication tools, as we've explored in this journey, opens a myriad of possibilities. Not only does it seamlessly merge developer actions with team-wide notifications, but it also introduces an additional layer of transparency and control over our deployment processes. The adaptability of the systems we've integrated, namely GitHub and Slack, offers a rich tapestry of features to build upon, as demonstrated by our use of GitHub Deployments and their potential with GitHub Actions.</p>
<p>It's worth noting the original plan was to leverage <a target="_blank" href="https://docs.github.com/en/actions/deployment/protecting-deployments/creating-custom-deployment-protection-rules">GitHub Deployment Protection rules</a>. These rules present a solid framework for controlling deployments in a more granular way. However, a significant limitation arose: their availability is restricted to either public repositories or those operating under GitHub Enterprise. This limitation led to a more creative approach, embedding the essence of what these rules offer, but in a broader context suitable for various repository types.</p>
<p>To conclude, technology continues to provide tools and platforms, ripe with features and functionalities, waiting to be moulded and interconnected in ways that best suit our needs. This exploration was just a glimpse into the vast world of CI/CD and team communication integrations. As you embark on your own integrative ventures, remember that while off-the-shelf solutions are great, sometimes thinking outside the box— or outside the repository, in this case— can lead to even more robust and tailor-made solutions for your team.</p>
]]></content:encoded></item><item><title><![CDATA[Amplifying AWS Tutorials: Building a Social Notes App with "Sign in with Apple" and AWS Pinpoint Analytics]]></title><description><![CDATA[For the 2023 AWS Amplify + Hashnode Hackathon, I wanted to take a closer look into iOS development and take the introductory iOS notes app from AWS Amplify's tutorial to the next level by incorporating powerful features like federated login via Apple...]]></description><link>https://martzmakes.com/amplifying-aws-tutorials-building-a-social-notes-app-with-sign-in-with-apple-and-aws-pinpoint-analytics</link><guid isPermaLink="true">https://martzmakes.com/amplifying-aws-tutorials-building-a-social-notes-app-with-sign-in-with-apple-and-aws-pinpoint-analytics</guid><category><![CDATA[AWS Amplify]]></category><category><![CDATA[AWS Amplify Hackathon]]></category><category><![CDATA[iOS]]></category><category><![CDATA[AWSCommunity]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Tue, 01 Aug 2023 00:24:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/mRB1Ws_6FsQ/upload/3b7805b14f79ed210f59fdbfed39b124.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For the 2023 <a target="_blank" href="https://aws.amazon.com/pm/amplify/?sc_channel=el&amp;trk=bc603709-686b-4e27-b79f-07e5de3686ec">AWS Amplify</a> + <a target="_blank" href="https://hashnode.com/?source=aws-amplify-2023">Hashnode</a> Hackathon, I wanted to take a closer look into iOS development and take the <a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/build-ios-app-amplify/">introductory iOS notes app from AWS Amplify's tutorial</a> to the next level by incorporating powerful features like federated login via <a target="_blank" href="https://aws.amazon.com/blogs/mobile/federating-users-using-sign-in-with-apple-and-aws-amplify-for-swift/">Apple's "Sign in with Apple"</a> Then we'll add some easy Analytics to our app with AWS Pinpoint.</p>
<p>While AWS's initial tutorial covered basic username/password authentication... users appreciate convenience and security. To support this, we'll add "Sign in with Apple" support, which offers users a seamless and privacy-focused login option. With "Sign in with Apple," users can authenticate with their Apple ID and stay in control of their personal information (including giving them the ability to hide their email).</p>
<p>A successful app requires understanding user behavior and optimizing user experiences. That's where AWS Pinpoint Analytics comes into play. I'll demonstrate how to integrate AWS Pinpoint into the notes app, enabling us to collect critical user engagement data. With this newfound insight, we can analyze user interactions, monitor feature adoption, and make data-driven decisions to enhance the app's performance.</p>
<p><em>AWS Amplify is a comprehensive development platform offered by Amazon Web Services (AWS) that simplifies the process of building web and mobile applications. It provides developers with a set of tools, services, and libraries to accelerate the development of cloud-powered applications.</em></p>
<p>This tutorial assumes you are familiar with the basics of AWS Amplify, including authentication setup and working with the Amplify Storage component. <strong>If you're new to these concepts, I recommend checking out the</strong> <a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/build-ios-app-amplify"><strong>original AWS Amplify tutorial</strong></a> <strong>to get up to speed.</strong></p>
<p>The code for this project is located here: <a target="_blank" href="https://github.com/martzcodes/blog-amplifyhackathon-ios-2023">https://github.com/martzcodes/blog-amplifyhackathon-ios-2023</a></p>
<h2 id="heading-updating-the-app-to-use-sign-in-with-apple">Updating the App to use "Sign in with Apple"</h2>
<p>Amplify and Apple make it really easy to incorporate "Sign in with Apple" (SIWA) into your applications. SIWA lets you retrieve an identity token from Apple and federate the user into an Amazon Cognito identity pool using the AWS Amplify Libraries for Swift. Federating a user into an identity pool grants them temporary AWS IAM credentials based on the identity token Apple provides, and we can use those credentials to access other services like Amazon S3, AppSync, DynamoDB, and Pinpoint.</p>
<p>First, we'll update amplify's auth to use Apple's provider. Below is the full list of answers when running <code>amplify update auth</code>:</p>
<pre><code class="lang-plaintext">$ amplify update auth
 What do you want to do? Walkthrough all the auth configurations
 Select the authentication/authorization services that you want to use: User Sign-Up, Sign-In, connected with AWS IAM controls (Enables per-user Storage features for images or other content, Analytics, and more)
 Allow unauthenticated logins? (Provides scoped down permissions that you can control via AWS IAM) Yes
 Do you want to enable 3rd party authentication providers in your identity pool? Yes
 Select the third party identity providers you want to configure for your identity pool: Apple

 You've opted to allow users to authenticate via Sign in with Apple. If you haven't already, you'll need to go to https://developer.apple.com/account/#/welcome and configure Sign in with Apple.

 Enter your Bundle Identifier for your identity pool:  &lt;your app bundle id from apple&gt;
 Do you want to add User Pool Groups? No
 Do you want to add an admin queries API? No
 Multifactor authentication (MFA) user login options: OFF
 Email based user registration/forgot password: Enabled (Requires per-user email entry at registration)
 Specify an email verification subject: Your verification code
 Specify an email verification message: Your verification code is {####}
 Do you want to override the default password policy for this User Pool? No
 Specify the app's refresh token expiration period (in days): 30
 Do you want to specify the user attributes this app can read and write? No
 Do you want to enable any of the following capabilities?
 Do you want to use an OAuth flow? No
? Do you want to configure Lambda Triggers for Cognito? Yes
? Which triggers do you want to enable for Cognito
</code></pre>
<p>Identity pools do NOT use Cognito User Groups... they use AWS IAM-based access. In order to make use of these for our app, we need to switch AWS AppSync to use IAM auth instead of Cognito User Groups. Let's run <code>amplify update api</code>. Below are the full answers for this section:</p>
<pre><code class="lang-plaintext">amplify update api
? Select from one of the below mentioned services: GraphQL
...
? Select a setting to edit Authorization modes
? Choose the default authorization type for the API IAM
? Configure additional auth types? No
</code></pre>
<p>Before we can push these changes, we need to update our <code>schema.graphql</code>. It was using auth based on Cognito User Pools, but we need to switch it to IAM-based auth. For the demo's sake, we also want to allow guests (unauthenticated users) to read notes. We'll update the <code>schema.graphql</code> file to:</p>
<pre><code class="lang-graphql"><span class="hljs-keyword">type</span> NoteData
<span class="hljs-meta">@model</span>
<span class="hljs-meta">@auth</span>(
    <span class="hljs-symbol">rules:</span> [
      { <span class="hljs-symbol">allow:</span> public, <span class="hljs-symbol">provider:</span> iam, <span class="hljs-symbol">operations:</span> [read] }
      { <span class="hljs-symbol">allow:</span> private, <span class="hljs-symbol">provider:</span> iam, <span class="hljs-symbol">operations:</span> [read, create, update, delete] }
    ]
  ) {
    <span class="hljs-symbol">id:</span> ID!
    <span class="hljs-symbol">name:</span> String!
    <span class="hljs-symbol">description:</span> String
    <span class="hljs-symbol">image:</span> String
}
</code></pre>
<ul>
<li><p><code>{ allow: public, provider: iam, operations: [read] }</code> gives guests read access</p>
</li>
<li><p><code>{ allow: private, provider: iam, operations: [read, create, update, delete] }</code> gives logged in users full CRUD access</p>
</li>
</ul>
<p>With the <code>schema.graphql</code> updated, we can run <code>amplify codegen models</code> to regenerate the APIs.</p>
<p>With these changes, we can run <code>amplify push</code> and our backend will update to reflect these new changes. Next, we'll need to modify some of the swift code.</p>
<p>Instead of using <code>Amplify.Auth.signInWithWebUI</code>, we'll use the <code>SignInWithApple</code> button and then call <code>federateToIdentityPool</code>. <code>SignInWithApple</code> is a capability provided by Apple; once the user signs in, it returns an identity token that <code>federateToIdentityPool</code> consumes.</p>
<p>In <code>ContentView.swift</code>:</p>
<pre><code class="lang-swift"><span class="hljs-comment">// add this to the top of the file</span>
<span class="hljs-keyword">import</span> AuthenticationServices

<span class="hljs-comment">// In the ContentView view:</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">configureRequest</span><span class="hljs-params">(<span class="hljs-number">_</span> request: ASAuthorizationAppleIDRequest)</span></span> {
    request.requestedScopes = [.email]
}
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleResult</span><span class="hljs-params">(<span class="hljs-number">_</span> result: Result&lt;ASAuthorization, Error&gt;)</span></span> {
    <span class="hljs-keyword">switch</span> result {
    <span class="hljs-keyword">case</span> .success(<span class="hljs-keyword">let</span> authorization):
        <span class="hljs-keyword">guard</span> <span class="hljs-keyword">let</span> credential = authorization.credential <span class="hljs-keyword">as</span>? <span class="hljs-type">ASAuthorizationAppleIDCredential</span>,
                <span class="hljs-keyword">let</span> identityToken = credential.identityToken <span class="hljs-keyword">else</span> {
                    <span class="hljs-keyword">return</span>
                }
        <span class="hljs-keyword">guard</span> <span class="hljs-keyword">let</span> tokenString = <span class="hljs-type">String</span>(data: identityToken, encoding: .utf8) <span class="hljs-keyword">else</span> {
            <span class="hljs-keyword">return</span>
        }
        <span class="hljs-type">Backend</span>.shared.federateToIdentityPools(with: tokenString)
        <span class="hljs-keyword">self</span>.userData.isSignedIn = <span class="hljs-literal">true</span>;
    <span class="hljs-keyword">case</span> .failure(<span class="hljs-keyword">let</span> error):
        <span class="hljs-built_in">print</span>(error)
    }
}

<span class="hljs-comment">// replace the original sign in button with:</span>
<span class="hljs-type">SignInWithAppleButton</span>(
    onRequest: configureRequest,
    onCompletion: handleResult
)
.frame(maxWidth: <span class="hljs-number">300</span>, maxHeight: <span class="hljs-number">45</span>)
</code></pre>
<p><code>SignInWithAppleButton</code> triggers Apple's SIWA flow. <code>configureRequest</code> ensures that the email is included in the scope of the <code>identityToken</code>, and <code>handleResult</code> then parses the <code>identityToken</code> and sends it to our <code>Backend.swift</code> service.</p>
<p>In <code>Backend.swift</code>, we retrieve the <code>AWSCognitoAuthPlugin</code> from <code>Amplify.Auth</code> and call <code>federateToIdentityPool</code> on it:</p>
<pre><code class="lang-swift"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">federateToIdentityPools</span><span class="hljs-params">(with tokenString: String)</span></span> {
    <span class="hljs-keyword">guard</span>
        <span class="hljs-keyword">let</span> plugin = <span class="hljs-keyword">try</span>? <span class="hljs-type">Amplify</span>.<span class="hljs-type">Auth</span>.getPlugin(<span class="hljs-keyword">for</span>: <span class="hljs-string">"awsCognitoAuthPlugin"</span>) <span class="hljs-keyword">as</span>? <span class="hljs-type">AWSCognitoAuthPlugin</span>
    <span class="hljs-keyword">else</span> { <span class="hljs-keyword">return</span> }

    <span class="hljs-type">Task</span> {
        <span class="hljs-keyword">do</span> {
            <span class="hljs-keyword">let</span> result = <span class="hljs-keyword">try</span> await plugin.federateToIdentityPool(
                withProviderToken: tokenString,
                <span class="hljs-keyword">for</span>: .apple
            )
            <span class="hljs-built_in">print</span>(<span class="hljs-string">"Successfully federated user to identity pool with result:"</span>, result)
        } <span class="hljs-keyword">catch</span> {
            <span class="hljs-built_in">print</span>(<span class="hljs-string">"Failed to federate to identity pool with error:"</span>, error)
        }
    }
}
</code></pre>
<p>In this version of our code, we also have an auth listener that triggers UI updates based on session events. The auth listener in our <code>Backend.swift</code> file looks like this now:</p>
<pre><code class="lang-swift"><span class="hljs-keyword">public</span> <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">listenAuthUpdate</span><span class="hljs-params">()</span></span> async -&gt; <span class="hljs-type">AsyncStream</span>&lt;<span class="hljs-type">AuthStatus</span>&gt; { 
    <span class="hljs-keyword">return</span> <span class="hljs-type">AsyncStream</span> { continuation <span class="hljs-keyword">in</span>
        continuation.onTermination = { @<span class="hljs-type">Sendable</span> status <span class="hljs-keyword">in</span>
                   <span class="hljs-built_in">print</span>(<span class="hljs-string">"[BACKEND] streaming auth status terminated with status : \(status)"</span>)
        }

        <span class="hljs-comment">// listen to auth events.</span>
        <span class="hljs-comment">// see https://github.com/aws-amplify/amplify-ios/blob/master/Amplify/Categories/Auth/Models/AuthEventName.swift</span>
        <span class="hljs-keyword">let</span> <span class="hljs-number">_</span>  = <span class="hljs-type">Amplify</span>.<span class="hljs-type">Hub</span>.listen(to: .auth) { payload <span class="hljs-keyword">in</span>            
            <span class="hljs-built_in">print</span>(payload.eventName)
            <span class="hljs-keyword">switch</span> payload.eventName {
            <span class="hljs-keyword">case</span> <span class="hljs-string">"Auth.federatedToIdentityPool"</span>:
                <span class="hljs-built_in">print</span>(<span class="hljs-string">"User federated, update UI"</span>)
                continuation.yield(<span class="hljs-type">AuthStatus</span>.signedIn)
                <span class="hljs-type">Task</span> {
                    await <span class="hljs-keyword">self</span>.updateUserData(withSignInStatus: <span class="hljs-literal">true</span>)
                }
            <span class="hljs-keyword">case</span> <span class="hljs-string">"Auth.federationToIdentityPoolCleared"</span>:
                <span class="hljs-built_in">print</span>(<span class="hljs-string">"User unfederated, update UI"</span>)
                continuation.yield(<span class="hljs-type">AuthStatus</span>.signedOut)
                <span class="hljs-type">Task</span> {
                    await <span class="hljs-keyword">self</span>.updateUserData(withSignInStatus: <span class="hljs-literal">false</span>)
                }
            <span class="hljs-keyword">case</span> <span class="hljs-type">HubPayload</span>.<span class="hljs-type">EventName</span>.<span class="hljs-type">Auth</span>.sessionExpired:
                <span class="hljs-built_in">print</span>(<span class="hljs-string">"Session expired, show sign in aui"</span>)
                continuation.yield(<span class="hljs-type">AuthStatus</span>.sessionExpired)
                <span class="hljs-type">Task</span> {
                    await <span class="hljs-keyword">self</span>.updateUserData(withSignInStatus: <span class="hljs-literal">false</span>)
                }
            <span class="hljs-keyword">default</span>:
                <span class="hljs-built_in">print</span>(<span class="hljs-string">"\(payload)"</span>)
                <span class="hljs-keyword">break</span>
            }
        }
    }
}
</code></pre>
<p>For federation events, the events end up being <code>Auth.federatedToIdentityPool</code> and <code>Auth.federationToIdentityPoolCleared</code>.</p>
<h2 id="heading-adding-aws-pinpoint-for-app-analytics">Adding AWS Pinpoint for App Analytics</h2>
<p>Next, we want to see what our users are doing in our Notes app and whether they're really making use of the "new" image upload feature.</p>
<p>First, we'll need to update amplify by running <code>amplify add analytics</code>:</p>
<pre><code class="lang-bash">amplify add analytics
? Select an Analytics provider Amazon Pinpoint
✔ Provide your pinpoint resource name: · amplifypushup
</code></pre>
<p>Then we'll run <code>amplify push</code>.</p>
<p>We'll need to <a target="_blank" href="https://docs.amplify.aws/lib/analytics/getting-started/q/platform/ios/#view-analytics-console">add the Amplify Analytics libraries to our app</a> by making sure the <code>AWSPinpointAnalyticsPlugin</code> is installed, and then add the initialization in our <code>Backend.swift</code> file:</p>
<pre><code class="lang-swift"><span class="hljs-keyword">private</span> <span class="hljs-keyword">init</span>() {
  <span class="hljs-comment">// initialize amplify</span>
  <span class="hljs-keyword">do</span> {
      <span class="hljs-keyword">try</span> <span class="hljs-type">Amplify</span>.add(plugin: <span class="hljs-type">AWSCognitoAuthPlugin</span>())
      <span class="hljs-keyword">try</span> <span class="hljs-type">Amplify</span>.add(plugin: <span class="hljs-type">AWSAPIPlugin</span>(modelRegistration: <span class="hljs-type">AmplifyModels</span>()))
      <span class="hljs-keyword">try</span> <span class="hljs-type">Amplify</span>.add(plugin: <span class="hljs-type">AWSS3StoragePlugin</span>())
      <span class="hljs-keyword">try</span> <span class="hljs-type">Amplify</span>.add(plugin: <span class="hljs-type">AWSPinpointAnalyticsPlugin</span>()) <span class="hljs-comment">// &lt;-- add this</span>
      <span class="hljs-keyword">try</span> <span class="hljs-type">Amplify</span>.configure()
      <span class="hljs-built_in">print</span>(<span class="hljs-string">"Initialized Amplify"</span>);
  } <span class="hljs-keyword">catch</span> {
    <span class="hljs-built_in">print</span>(<span class="hljs-string">"Could not initialize Amplify: \(error)"</span>)
  }
}
</code></pre>
<p>If we want to track how popular our image upload feature is, we can add an event:</p>
<pre><code class="lang-swift"><span class="hljs-keyword">let</span> properties: <span class="hljs-type">AnalyticsProperties</span> = [
    <span class="hljs-string">"eventPropertyStringKey"</span>: <span class="hljs-string">"eventPropertyStringValue"</span>,
    <span class="hljs-string">"eventPropertyIntKey"</span>: <span class="hljs-number">123</span>,
    <span class="hljs-string">"eventPropertyDoubleKey"</span>: <span class="hljs-number">12.34</span>,
    <span class="hljs-string">"eventPropertyBoolKey"</span>: <span class="hljs-literal">true</span>
]

<span class="hljs-keyword">let</span> event = <span class="hljs-type">BasicAnalyticsEvent</span>(
    name: <span class="hljs-string">"imageUploaded"</span>,
    properties: properties
)

<span class="hljs-keyword">try</span> <span class="hljs-type">Amplify</span>.<span class="hljs-type">Analytics</span>.record(event: event)
</code></pre>
<p>You can also record standard authentication events that Pinpoint automatically recognizes and tracks:</p>
<ul>
<li><p><code>_userauth.sign_in</code></p>
</li>
<li><p><code>_userauth.sign_up</code></p>
</li>
<li><p><code>_userauth.auth_fail</code></p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In my quest to elevate the capabilities of AWS Amplify tutorials, I ventured into iOS development for the 2023 <a target="_blank" href="https://aws.amazon.com/pm/amplify/?sc_channel=el&amp;trk=bc603709-686b-4e27-b79f-07e5de3686ec">AWS Amplify</a> + <a target="_blank" href="https://hashnode.com/?source=aws-amplify-2023">Hashnode</a> Hackathon. Building upon the introductory iOS notes app tutorial provided by AWS Amplify, I took the app to the next level by incorporating sought-after features like "Sign in with Apple" and AWS Pinpoint Analytics.</p>
<p>The inclusion of "Sign in with Apple" as a federated login option significantly enhances the app's authentication experience. Users can now seamlessly log in with their Apple ID, ensuring both convenience and privacy. This advanced authentication option empowers users to stay in control of their personal information, including the ability to hide their email.</p>
<p>In addition to empowering users with a better way to sign in, I integrated AWS Pinpoint Analytics into the notes app. By leveraging AWS Pinpoint's analytical capabilities, we gained valuable insights into user engagement and behavior. This data-driven approach allows us to analyze user interactions, monitor feature adoption, and make informed decisions to optimize the app's performance and user experience.</p>
<p>As you explore the code and implement these advanced features, I encourage you to further experiment and customize the app to suit your specific use case and preferences. Remember to refer to the <a target="_blank" href="https://github.com/martzcodes/blog-amplifyhackathon-ios-2023">code repository</a> for detailed implementations.</p>
<p>Thank you for joining me on this amplified journey to enhance AWS tutorials. With "Sign in with Apple" and AWS Pinpoint Analytics at your fingertips, you're equipped to build robust and engaging mobile apps that connect users with seamless authentication and actionable insights.</p>
]]></content:encoded></item><item><title><![CDATA[Leveraging CDK and Serverless for Bluesky Feed Generation]]></title><description><![CDATA[In the dynamic world of social media, Bluesky is carving out a niche with its innovative approach to content curation. With its unique "My Feeds" feature, Bluesky empowers users to customize their social media experience by choosing from a variety of...]]></description><link>https://martzmakes.com/leveraging-cdk-and-serverless-for-bluesky-feed-generation</link><guid isPermaLink="true">https://martzmakes.com/leveraging-cdk-and-serverless-for-bluesky-feed-generation</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[Bluesky]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Wed, 05 Jul 2023 19:04:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688583575052/73696073-6688-46c6-aba7-26f46b84e9b4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the dynamic world of social media, Bluesky is <a target="_blank" href="https://www.wired.com/story/bluesky-my-feeds-custom-algorithms/">carving out a niche</a> with its innovative approach to content curation. With its unique "My Feeds" feature, Bluesky empowers users to customize their social media experience by choosing from a variety of feeds, each powered by a different algorithm.</p>
<p>Creating these diverse feeds, however, requires a <a target="_blank" href="https://github.com/bluesky-social/feed-generator">Bluesky feed generator</a>, a tool that necessitates some technical know-how. This is where AWS services come into the picture, simplifying the deployment and operation of a Bluesky feed generator.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688582974217/b6f82195-78d2-494a-b18f-829e6f912441.png" alt class="image--center mx-auto" /></p>
<p>Our proposed architecture leverages AWS Fargate, AWS Lambda, and Aurora Serverless. AWS Fargate runs a container that parses the Bluesky event stream, capturing only the relevant data, which is then stored in an Aurora Serverless database. AWS Lambda serves the feed itself, working in tandem with AWS Fargate to reduce operational overhead and let us focus on core functionality. This combination also serves as our "feed algorithm".</p>
<p>Deploying the Bluesky feed generator as an AWS CDK project not only streamlines the process of feed creation but also democratizes it, making it accessible to a wider audience. It facilitates easy scaling of resources, and efficient cost management, and enables us to focus more on developing unique, user-centric feeds rather than managing the underlying infrastructure.</p>
<p>In the upcoming sections of this blog post, we'll delve deeper into the technical aspects of deploying a Bluesky feed generator using AWS services. We'll provide a step-by-step guide to help you get started, including creating a <a target="_blank" href="https://bsky.app/profile/did:plc:a62mzn6xxxxwktpdprw2lvnc/feed/aws-community">feed of "skeets" from AWS Employees, AWS Heroes, and AWS Community Builders</a>.</p>
<p><em>PSST... feel free to</em> <a target="_blank" href="https://bsky.app/profile/martz.codes"><em>follow me on Bluesky too</em></a></p>
<h2 id="heading-the-bluesky-provided-feed-generator">The Bluesky-provided Feed Generator</h2>
<p><a target="_blank" href="https://github.com/bluesky-social/feed-generator">Bluesky provides a basic feed generator on their GitHub</a>, but it's a bit like having a camera without the right settings - it lacks the necessary architecture to capture the perfect shot.</p>
<p>At its heart, the service provided by Bluesky's repo performs three key functions:</p>
<ol>
<li><p>It latches onto the Bluesky Websocket stream and filters events into a database. It's like adjusting the focus on your camera, ensuring only the relevant subjects are in clear view <a target="_blank" href="https://github.com/bluesky-social/feed-generator/blob/main/src/subscription.ts#L8">1</a>.</p>
</li>
<li><p>It features a feed endpoint that cherry-picks relevant rows from the database. Think of it as the photographer, selecting the best shots for the final album <a target="_blank" href="https://github.com/bluesky-social/feed-generator/blob/main/src/algos/whats-alf.ts#L8">2</a>.</p>
</li>
<li><p>It includes a static endpoint that "registers" the feed. This is like the metadata of a photo, providing all the necessary information about the shot <a target="_blank" href="https://github.com/bluesky-social/feed-generator/blob/main/src/well-known.ts#L11">3</a>.</p>
</li>
</ol>
<p>To register the feed service, a script is run that's a bit like the final editing process before the photos are published. It connects to Bluesky and registers the feed name and service URL, making sure everything is picture-perfect <a target="_blank" href="https://github.com/bluesky-social/feed-generator/blob/main/scripts/publishFeedGen.ts">4</a>.</p>
<p>Now, here's where we bring in the big guns - AWS CDK. We're going to give this feed generator a major upgrade.</p>
<p>Firstly, we'll move the WebSocket stream connection to a Fargate service. This is akin to upgrading from a manual focus to an automatic one - it's faster, more efficient, and doesn't require as much manual effort.</p>
<p>Next, we'll transform the feed endpoint into an AWS Lambda function. This is like having an automated photo editor - it's more efficient, scalable, and doesn't require constant supervision.</p>
<p>The static endpoint will be relocated to a simple MockIntegration in the APIGateway. This is like moving your photo metadata management to a digital platform - it's more efficient, reliable, and easily accessible.</p>
<p>Lastly, we'll shift the feed registration script to be run by a CustomResource-invoked Lambda. This is like automating your final photo editing process - it's more reliable, efficient, and doesn't require constant attention.</p>
<p>In essence, we're taking the basic structure provided by Bluesky and supercharging it with the power of AWS CDK.</p>
<h2 id="heading-crafting-the-bluesky-database">Crafting The Bluesky Database</h2>
<p>First up on our agenda is the creation of the database that will serve as the meeting point for our parser and feed. This setup bears a resemblance to my recent post about <a target="_blank" href="https://matt.martz.codes/creating-an-aurora-mysql-database-and-setting-up-a-kinesis-cdc-stream-with-aws-cdk">Creating an Aurora MySQL Database and Setting Up a Kinesis CDC Stream with AWS CDK</a>. However, there's a twist - we won't be needing the stream this time, and we'll be employing Aurora Serverless instead.</p>
<p>Our game plan involves crafting a CDK Construct that houses the database and a CustomResource-invoked Lambda that will lay the groundwork for the database schema. Let's dive into the creation of the <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-db.ts">database</a>:</p>
<pre><code class="lang-typescript"><span class="hljs-built_in">this</span>.db = <span class="hljs-keyword">new</span> ServerlessCluster(<span class="hljs-built_in">this</span>, <span class="hljs-string">'cluster'</span>, {
  clusterIdentifier: <span class="hljs-string">`bluesky`</span>,
  credentials: Credentials.fromGeneratedSecret(<span class="hljs-string">'admin'</span>),
  defaultDatabaseName: dbName,
  engine: DatabaseClusterEngine.AURORA_MYSQL,
  removalPolicy: RemovalPolicy.DESTROY,
  enableDataApi: <span class="hljs-literal">true</span>,
  vpc,
});
</code></pre>
<p>Next, we'll whip up the lambda function and the custom resource:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> dbInitFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"dbInitFn"</span>, {
  functionName: <span class="hljs-string">"bluesky-db-init"</span>,
  entry: join(__dirname, <span class="hljs-string">"lambda/db-init.ts"</span>),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.minutes(<span class="hljs-number">15</span>),
  tracing: Tracing.ACTIVE,
  environment: {
    DB_NAME: dbName,
    CLUSTER_ARN: <span class="hljs-built_in">this</span>.db.clusterArn,
    SECRET_ARN: <span class="hljs-built_in">this</span>.db.secret?.secretArn || <span class="hljs-string">''</span>,
  },
});
<span class="hljs-built_in">this</span>.db.grantDataApiAccess(dbInitFn);

<span class="hljs-keyword">const</span> initProvider = <span class="hljs-keyword">new</span> Provider(<span class="hljs-built_in">this</span>, <span class="hljs-string">`init-db-provider`</span>, {
  onEventHandler: dbInitFn,
});

<span class="hljs-keyword">new</span> CustomResource(<span class="hljs-built_in">this</span>, <span class="hljs-string">`init-db-resource`</span>, {
  serviceToken: initProvider.serviceToken,
});
</code></pre>
<p>Our lambda's <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/lambda/db-init.ts">handler</a> will be interacting with the database using the RDS Data API, taking advantage of the secrets that CloudFormation has created for the database:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">if</span> (event.RequestType === <span class="hljs-string">"Create"</span>) {
    <span class="hljs-keyword">await</span> client.send(cmd(<span class="hljs-string">`select 1`</span>));
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"db-init: create tables"</span>);
    <span class="hljs-keyword">await</span> client.send(
      cmd(
        <span class="hljs-string">`CREATE TABLE IF NOT EXISTS post (uri VARCHAR(255) NOT NULL, cid VARCHAR(255) NOT NULL, author VARCHAR(255) NOT NULL, replyParent VARCHAR(255), replyRoot VARCHAR(255), indexedAt DATETIME NOT NULL, PRIMARY KEY (uri));`</span>
      )
    );
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"created post table"</span>);
    <span class="hljs-keyword">await</span> client.send(cmd(<span class="hljs-string">`CREATE TABLE IF NOT EXISTS sub_state (service VARCHAR(255) NOT NULL, cursor_value INT NOT NULL, PRIMARY KEY (service));`</span>))
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"created sub_state table"</span>);
}
</code></pre>
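<p>The <code>cmd</code> helper and <code>client</code> used above aren't shown in the snippet. Since the cluster has the Data API enabled, a minimal sketch of them (assuming the <code>@aws-sdk/client-rds-data</code> package and the environment variables wired up by the construct) might look like this:</p>
<pre><code class="lang-typescript">import {
  ExecuteStatementCommand,
  RDSDataClient,
  SqlParameter,
} from "@aws-sdk/client-rds-data";

const client = new RDSDataClient({});

// Wrap a SQL string (and optional named parameters) in a Data API command,
// reusing the cluster/secret ARNs passed in as environment variables.
const cmd = (sql: string, parameters?: SqlParameter[]) =&gt;
  new ExecuteStatementCommand({
    resourceArn: process.env.CLUSTER_ARN,
    secretArn: process.env.SECRET_ARN,
    database: process.env.DB_NAME,
    sql,
    parameters,
  });
</code></pre>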
<p>Since we want the tables to be created only when the stack is first deployed, we filter on <code>event.RequestType === "Create"</code>. However, to err on the side of caution, we've also included <code>IF NOT EXISTS</code> in the SQL commands. Better safe than sorry!</p>
<h2 id="heading-constructing-the-parser-connecting-to-the-bluesky-event-stream">Constructing the Parser: Connecting to the Bluesky Event Stream</h2>
<p>In the next phase of our process, we're going to construct the parser that connects to the publicly available Bluesky WebSocket event stream. It's like the lens of our camera, capturing the events that we're interested in and focusing them into a coherent image.</p>
<p>Let's dive into the <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-parser.ts">code</a> and understand the key parts:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> cluster = <span class="hljs-keyword">new</span> Cluster(<span class="hljs-built_in">this</span>, <span class="hljs-string">"bluesky-feed-generator-cluster"</span>, {
  vpc,
  enableFargateCapacityProviders: <span class="hljs-literal">true</span>,
});
</code></pre>
<p>Here, we're setting up our Fargate cluster. This is akin to positioning our camera on a tripod, providing a stable platform for our operations.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> taskDefinition = <span class="hljs-keyword">new</span> FargateTaskDefinition(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">"bluesky-feed-generator-task"</span>,
  {
    runtimePlatform: {
      cpuArchitecture: CpuArchitecture.ARM64,
    },
    memoryLimitMiB: <span class="hljs-number">1024</span>,
    cpu: <span class="hljs-number">512</span>,
  }
);
db.grantDataApiAccess(taskDefinition.taskRole);
</code></pre>
<p>Next, we're defining a Fargate task. This is like adjusting the camera's settings, such as aperture and shutter speed, to ensure we capture the best possible shot. We're also granting this task access to our database, much like giving our camera the ability to store the photos it captures.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> logging = <span class="hljs-keyword">new</span> AwsLogDriver({
  logRetention: RetentionDays.ONE_DAY,
  streamPrefix: <span class="hljs-string">"bluesky-feed-generator"</span>,
});
</code></pre>
<p>We're setting up a log driver here, which is akin to the camera's viewfinder, allowing us to monitor and review our operations.</p>
<pre><code class="lang-typescript">taskDefinition.addContainer(<span class="hljs-string">"bluesky-feed-parser"</span>, {
  logging,
  image: ContainerImage.fromDockerImageAsset(
    <span class="hljs-keyword">new</span> DockerImageAsset(<span class="hljs-built_in">this</span>, <span class="hljs-string">"bluesky-feed-parser-img"</span>, {
      directory: join(__dirname, <span class="hljs-string">".."</span>),
      platform: Platform.LINUX_ARM64,
    })
  ),
  environment: {
    DB_NAME: dbName,
    CLUSTER_ARN: db.clusterArn,
    SECRET_ARN: db.secret?.secretArn || <span class="hljs-string">''</span>,
  }
});
</code></pre>
<p>Here, we're adding a container to our task definition. This is like attaching a lens to our camera, defining what it will capture. We're also specifying the environment variables, which are akin to the camera's internal settings.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> FargateService(<span class="hljs-built_in">this</span>, <span class="hljs-string">"bluesky-feed-generator"</span>, {
  cluster,
  taskDefinition,
  enableExecuteCommand: <span class="hljs-literal">true</span>,
  <span class="hljs-comment">// fargate service needs to select subnets with the NAT in order to access AWS services</span>
  vpcSubnets: {
    subnetType: SubnetType.PRIVATE_WITH_EGRESS,
  },
  securityGroups: [securityGroup]
});
</code></pre>
<p>Finally, we're creating a new Fargate service, which is like pressing the camera's shutter button, setting everything into motion. We're specifying that the service should be able to execute commands and that it should select subnets with NAT to access AWS services, ensuring our camera can communicate with the outside world.</p>
<h3 id="heading-the-service-code">The Service Code</h3>
<p>The <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/tree/main/bluesky-feed-parser">code</a> for our Fargate Service is heavily inspired by the feed generator provided by Bluesky. However, we've stripped out all the unnecessary parts (like express, etc). All we need to do is connect to the WebSocket stream and save relevant events to the already-created database.</p>
<p>When we created the <code>DockerImageAsset</code> above, we pointed it to our root directory, which contains a <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/Dockerfile"><code>Dockerfile</code></a>. The <code>Dockerfile</code> installs the dependencies and runs the app, which <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/bluesky-feed-parser/app.ts">creates a FirehoseSubscription</a>.</p>
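<p>The entry point itself is tiny. Here's a hedged sketch of what it roughly looks like, paraphrasing Bluesky's template; the websocket endpoint and reconnect delay below are assumptions, so check the repo for the exact values:</p>
<pre><code class="lang-typescript">// Hypothetical app.ts entry point, adapted from Bluesky's feed-generator template:
// connect to the public firehose and keep the subscription running.
import { FirehoseSubscription } from "./subscription";

const run = async () =&gt; {
  const subscription = new FirehoseSubscription(
    "wss://bsky.social" // firehose endpoint (assumed)
  );
  await subscription.run(3000); // retry with a 3s delay on disconnect (assumed)
};

run();
</code></pre>
<p>Inside the subscription's event handler, we then filter each batch of operations down to the posts we care about:</p>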
<pre><code class="lang-typescript"><span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> post <span class="hljs-keyword">of</span> ops.posts.creates) {
  <span class="hljs-keyword">if</span> (awsCommunityDids.includes(post.author)) {
    <span class="hljs-keyword">const</span> user = awsCommunityDidsToKeys[post.author];
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`<span class="hljs-subst">${user}</span> posted <span class="hljs-subst">${post.record.text}</span>`</span>);
    postsToCreate.push({
      uri: post.uri,
      cid: post.cid,
      author: user,
      replyParent: post.record?.reply?.parent.uri ?? <span class="hljs-literal">null</span>,
      replyRoot: post.record?.reply?.root.uri ?? <span class="hljs-literal">null</span>,
      indexedAt: <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>().toISOString().slice(<span class="hljs-number">0</span>, <span class="hljs-number">19</span>).replace(<span class="hljs-string">"T"</span>, <span class="hljs-string">" "</span>),
    });
  }
}

<span class="hljs-keyword">if</span> (postsToCreate.length &gt; <span class="hljs-number">0</span>) {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">JSON</span>.stringify({ postsToCreate }));
  <span class="hljs-keyword">const</span> insertSql = <span class="hljs-string">`INSERT INTO post (uri, cid, author, replyParent, replyRoot, indexedAt) VALUES <span class="hljs-subst">${postsToCreate
    .map(
      () =&gt; <span class="hljs-string">"(:uri, :cid, :author, :replyParent, :replyRoot, :indexedAt)"</span>
    )
    .join(<span class="hljs-string">", "</span>)}</span> ON DUPLICATE KEY UPDATE uri = uri`</span>;

  <span class="hljs-keyword">const</span> insertParams = postsToCreate.flatMap(<span class="hljs-function">(<span class="hljs-params">post</span>) =&gt;</span> [
    { name: <span class="hljs-string">"uri"</span>, value: { stringValue: post.uri } },
    { name: <span class="hljs-string">"cid"</span>, value: { stringValue: post.cid } },
    { name: <span class="hljs-string">"author"</span>, value: { stringValue: post.author } },
    {
      name: <span class="hljs-string">"replyParent"</span>,
      value: post.replyParent
        ? { stringValue: post.replyParent }
        : { isNull: <span class="hljs-literal">true</span> },
    },
    {
      name: <span class="hljs-string">"replyRoot"</span>,
      value: post.replyRoot
        ? { stringValue: post.replyRoot }
        : { isNull: <span class="hljs-literal">true</span> },
    },
    { name: <span class="hljs-string">"indexedAt"</span>, value: { stringValue: post.indexedAt } },
  ]);

  <span class="hljs-keyword">const</span> insertCmd = cmd(insertSql, insertParams);
  <span class="hljs-keyword">await</span> client.send(insertCmd);
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Created <span class="hljs-subst">${postsToCreate.length}</span> posts`</span>);
}
</code></pre>
<p>This is akin to the post-processing phase in photography. We're selecting the best shots (or in this case, posts) based on our predefined criteria, and storing them in our database. We're also ensuring that we handle deletions appropriately, keeping our collection of shots up-to-date.</p>
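<p>The snippet above only shows the create path; deletions follow the same pattern. Here's a sketch of what that plausibly looks like, reusing the same <code>cmd</code> helper and mirroring how Bluesky's template exposes <code>ops.posts.deletes</code>:</p>
<pre><code class="lang-typescript">// Posts the firehose reports as deleted get removed from our table.
const postsToDelete = ops.posts.deletes.map((del) =&gt; del.uri);
if (postsToDelete.length &gt; 0) {
  // index each placeholder so the parameter names stay unique
  const deleteSql = `DELETE FROM post WHERE uri IN (${postsToDelete
    .map((_, i) =&gt; `:uri${i}`)
    .join(", ")})`;
  const deleteParams = postsToDelete.map((uri, i) =&gt; ({
    name: `uri${i}`,
    value: { stringValue: uri },
  }));
  await client.send(cmd(deleteSql, deleteParams));
  console.log(`Deleted ${postsToDelete.length} posts`);
}
</code></pre>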
<h2 id="heading-the-bluesky-feed">The Bluesky Feed</h2>
<p>The <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-feed.ts">final CDK Construct</a> creates an API Gateway with two endpoints, and the CustomResource that registers the feed.</p>
<h3 id="heading-creating-the-feed">Creating the Feed</h3>
<p>This section of the code is primarily concerned with setting up the API Gateway and the DNS records for the service. Here are the key parts:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> hostedzone = HostedZone.fromLookup(<span class="hljs-built_in">this</span>, <span class="hljs-string">"hostedzone"</span>, {
  domainName: zoneDomain,
});
<span class="hljs-keyword">const</span> certificate = <span class="hljs-keyword">new</span> Certificate(<span class="hljs-built_in">this</span>, <span class="hljs-string">"certificate"</span>, {
  domainName,
  validation: CertificateValidation.fromDns(hostedzone),
});
</code></pre>
<p>Here, we're looking up the hosted zone for our domain and creating a certificate for it. This is like setting up the address and credentials for our online photo gallery.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> api = <span class="hljs-keyword">new</span> RestApi(<span class="hljs-built_in">this</span>, <span class="hljs-string">"RestApi"</span>, {
  defaultMethodOptions: {
    methodResponses: [{ statusCode: <span class="hljs-string">"200"</span> }],
  },
  deployOptions: {
    tracingEnabled: <span class="hljs-literal">true</span>,
    metricsEnabled: <span class="hljs-literal">true</span>,
    dataTraceEnabled: <span class="hljs-literal">true</span>,
  },
  endpointConfiguration: {
    types: [EndpointType.REGIONAL],
  },
});
</code></pre>
<p>Next, we're setting up the REST API. This is like setting up the interface for our online gallery, defining how users will interact with it.</p>
<p>Of course, let's not forget the MockIntegration:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> didIntegration = <span class="hljs-keyword">new</span> MockIntegration(didOptions);
<span class="hljs-keyword">const</span> didResource = api.root
  .addResource(<span class="hljs-string">".well-known"</span>)
  .addResource(<span class="hljs-string">"did.json"</span>);
didResource.addMethod(<span class="hljs-string">"GET"</span>, didIntegration, {
  methodResponses: [
    {
    statusCode: <span class="hljs-string">"200"</span>,
    },
  ],
});
</code></pre>
<p>Here, we're setting up a MockIntegration. In the context of AWS API Gateway, a MockIntegration is a type of integration that allows you to simulate API behavior without implementing any backend logic. It's like a placeholder or a dummy that returns pre-configured responses to requests.</p>
<p>In this case, we're using it to serve a static JSON response for the did.json endpoint under the .well-known path. This endpoint is typically used to provide a standard way to discover information about the domain, and in this case, it's providing information about the Bluesky feed generator service.</p>
<p>This is akin to having a static information page in our photo gallery that provides details about the gallery itself. It doesn't change or interact with the visitor but provides essential information for anyone who asks.</p>
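<p>The <code>didOptions</code> object isn't shown above; a sketch of what it plausibly contains is below. It's a canned 200 response whose body is the service's DID document (the field names follow Bluesky's <code>did:web</code> conventions for feed generators, so treat the exact shape as illustrative):</p>
<pre><code class="lang-typescript">// Hypothetical didOptions: a static DID document served by the Mock integration.
const didOptions = {
  requestTemplates: {
    "application/json": '{ "statusCode": 200 }',
  },
  integrationResponses: [
    {
      statusCode: "200",
      responseTemplates: {
        "application/json": JSON.stringify({
          "@context": ["https://www.w3.org/ns/did/v1"],
          id: `did:web:${domainName}`,
          service: [
            {
              id: "#bsky_fg",
              type: "BskyFeedGenerator",
              serviceEndpoint: `https://${domainName}`,
            },
          ],
        }),
      },
    },
  ],
};
</code></pre>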
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> feedFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"feed"</span>, {
  functionName: <span class="hljs-string">"bluesky-feed"</span>,
  entry: join(__dirname, <span class="hljs-string">"lambda/feed.ts"</span>),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.seconds(<span class="hljs-number">30</span>),
  tracing: Tracing.ACTIVE,
  environment: {
    DB_NAME: dbName,
    CLUSTER_ARN: db.clusterArn,
    SECRET_ARN: db.secret?.secretArn || <span class="hljs-string">""</span>,
  },
});
db.grantDataApiAccess(feedFn);
</code></pre>
<p>Here, we're defining a new Node.js function that will serve as the feed for our service. This is like setting up the mechanism that will display the photos in our gallery.</p>
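<p>The snippet below wires that function into the API via <code>feedIntegration</code>, which isn't defined in the excerpt. Presumably it's just a plain Lambda integration, something like:</p>
<pre><code class="lang-typescript">import { LambdaIntegration } from "aws-cdk-lib/aws-apigateway";

// Assumption: the feed endpoint proxies requests straight to the feed Lambda.
const feedIntegration = new LambdaIntegration(feedFn);
</code></pre>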
<pre><code class="lang-typescript">api.root
  .addResource(<span class="hljs-string">"xrpc"</span>)
  .addResource(<span class="hljs-string">"app.bsky.feed.getFeedSkeleton"</span>)
  .addMethod(<span class="hljs-string">"GET"</span>, feedIntegration);
</code></pre>
<p>This is where we're defining the endpoint for our feed. This is like setting up the URL where users can view our photo gallery.</p>
<pre><code class="lang-typescript">api.addDomainName(<span class="hljs-string">`Domain`</span>, {
  domainName,
  certificate,
  securityPolicy: SecurityPolicy.TLS_1_2,
});
<span class="hljs-keyword">new</span> ARecord(scope, <span class="hljs-string">`ARecord`</span>, {
  zone: hostedzone,
  recordName: domainName,
  target: RecordTarget.fromAlias(<span class="hljs-keyword">new</span> ApiGateway(api)),
});
</code></pre>
<p>Finally, we're associating our domain name with our API and creating an A record for it. This is like linking our online gallery to our chosen web address, making it accessible to the public.</p>
<h3 id="heading-registering-the-feed">Registering the feed</h3>
<p>We create another lambda and custom resource with:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> publishSecret = Secret.fromSecretCompleteArn(<span class="hljs-built_in">this</span>, <span class="hljs-string">"publish-secret"</span>, props.publishFeed.blueskySecretArn);
<span class="hljs-keyword">const</span> publishFeedFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"publish-feed"</span>, {
  functionName: <span class="hljs-string">"bluesky-publish-feed"</span>,
  entry: join(__dirname, <span class="hljs-string">"lambda/publish-feed.ts"</span>),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.seconds(<span class="hljs-number">30</span>),
  tracing: Tracing.ACTIVE,
  environment: {
    HANDLE: props.publishFeed.handle,
    SECRET_ARN: props.publishFeed.blueskySecretArn,
    FEEDGEN_HOSTNAME: domainName,
    FEEDS: <span class="hljs-built_in">JSON</span>.stringify(props.publishFeed.feeds),
  },
});
publishSecret.grantRead(publishFeedFn);

<span class="hljs-keyword">const</span> publishProvider = <span class="hljs-keyword">new</span> Provider(<span class="hljs-built_in">this</span>, <span class="hljs-string">`publish-feed-provider`</span>, {
  onEventHandler: publishFeedFn,
});

<span class="hljs-keyword">new</span> CustomResource(<span class="hljs-built_in">this</span>, <span class="hljs-string">`publish-feed-resource`</span>, {
  serviceToken: publishProvider.serviceToken,
  properties: {
    Version: <span class="hljs-string">`<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
  },
});
</code></pre>
<p>Here we make sure that it has access to our Bluesky App password (created on our Bluesky account's settings page).</p>
<p>The handler code follows the same process as Bluesky's own <a target="_blank" href="https://github.com/bluesky-social/feed-generator/blob/main/scripts/publishFeedGen.ts">publishFeedGen.ts</a> script, except we get the password from the secrets manager first. This code runs with every deployment and supports registering multiple feeds pointing at the same lambda.</p>
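<p>For reference, here's a sketch of the heart of that handler, modeled on Bluesky's script; names like <code>handle</code>, <code>appPassword</code>, <code>shortName</code>, and <code>feedGenHostname</code> stand in for values pulled from the environment and Secrets Manager:</p>
<pre><code class="lang-typescript">import { AtpAgent } from "@atproto/api";

// Log in with the Bluesky app password and upsert one
// app.bsky.feed.generator record per configured feed.
const agent = new AtpAgent({ service: "https://bsky.social" });
await agent.login({ identifier: handle, password: appPassword });

await agent.api.com.atproto.repo.putRecord({
  repo: agent.session?.did ?? "",
  collection: "app.bsky.feed.generator",
  rkey: shortName, // e.g. "aws-community"
  record: {
    did: `did:web:${feedGenHostname}`, // the DID served at /.well-known/did.json
    displayName,
    description,
    createdAt: new Date().toISOString(),
  },
});
</code></pre>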
<h2 id="heading-tying-it-all-together-deploying-the-stack">Tying It All Together: Deploying the Stack</h2>
<p>Having created the individual constructs, we now need to assemble them into a <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-feed-generator-stack.ts">cohesive stack</a>. This is akin to putting together our camera, lens, and tripod into a complete photography setup.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> vpc = <span class="hljs-keyword">new</span> Vpc(<span class="hljs-built_in">this</span>, <span class="hljs-string">"vpc"</span>);
<span class="hljs-keyword">const</span> securityGroup = <span class="hljs-keyword">new</span> SecurityGroup(<span class="hljs-built_in">this</span>, <span class="hljs-string">"security-group"</span>, {
  vpc,
  allowAllOutbound: <span class="hljs-literal">true</span>,
});

<span class="hljs-keyword">const</span> dbName = <span class="hljs-string">'bluesky'</span>;
<span class="hljs-keyword">const</span> { db } = <span class="hljs-keyword">new</span> BlueskyDb(<span class="hljs-built_in">this</span>, <span class="hljs-string">'bluesky-db'</span>, {
  dbName,
  vpc,
});

<span class="hljs-keyword">const</span> domainName = <span class="hljs-string">'martz.codes'</span>;

<span class="hljs-keyword">new</span> BlueskyParser(<span class="hljs-built_in">this</span>, <span class="hljs-string">'bluesky-parser'</span>, {
  db,
  dbName,
  securityGroup,
  vpc,
});

<span class="hljs-keyword">new</span> BlueskyFeed(<span class="hljs-built_in">this</span>, <span class="hljs-string">'bluesky-feed'</span>, {
  db,
  dbName,
  domainName,
  publishFeed,
});
</code></pre>
<p>In the code above, we're first setting up a Virtual Private Cloud (VPC) and a security group. This is like choosing a location for our photo shoot and setting up the necessary security measures.</p>
<p>Next, we're creating our Bluesky database within this VPC. This is akin to setting up our storage system for the photos we'll capture.</p>
<p>We then instantiate our Bluesky parser and feed constructs, passing in the necessary parameters such as the database, domain name, and security group. This is like setting up our camera and lens, ready to start capturing photos.</p>
<p>In the <a target="_blank" href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/bin/bluesky-feed-generator.ts">bin file</a>, we include the required <code>publishFeed</code> properties:</p>
<pre><code class="lang-typescript">publishFeed: {
    handle: <span class="hljs-string">'martz.codes'</span>,
    blueskySecretArn: <span class="hljs-string">"arn:aws:secretsmanager:us-east-1:359317520455:secret:bluesky-rQXJxQ"</span>,
    feeds: {
      <span class="hljs-string">"aws-community"</span>: {
        displayName: <span class="hljs-string">"AWS Community"</span>,
        description: <span class="hljs-string">"This is a test feed served from an AWS Lambda. It is a list of AWS Employees, AWS Heroes and AWS Community Builders"</span>,
      }
    },
  },
</code></pre>
<p>This is like setting up the details of our photo gallery, including the name, description, and access credentials.</p>
<p><strong><em>It's worth noting that this stack was built iteratively. While it should work as expected, there may be a missing dependency that could affect the deployment order for the CustomResources.</em></strong></p>
<p>Once deployed, we can visit the feed's page on Bluesky: <a target="_blank" href="https://bsky.app/profile/did:plc:a62mzn6xxxxwktpdprw2lvnc/feed/aws-community">https://bsky.app/profile/did:plc:a62mzn6xxxxwktpdprw2lvnc/feed/aws-community</a></p>
<p>The URL structure is as follows: <code>https://bsky.app/profile/&lt;owner's DID&gt;/feed/&lt;short-name&gt;</code>. When this URL is loaded in Bluesky, the Bluesky service makes a call to the feed URL. The feed replies with the URIs of the posts it has selected, which the Bluesky service then hydrates into full posts. This is like visiting our online photo gallery, where the service fetches and displays the photos based on the visitor's request.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>And there you have it! We've walked through the process of deploying a Bluesky feed generator using AWS services, specifically AWS CDK, Fargate, Lambda, and Aurora Serverless. This setup allows us to parse the Bluesky event stream, store relevant events in a database, and serve the feed using a serverless architecture. It's like setting up a fully automated photography studio that captures, stores, and displays photos based on specific criteria.</p>
<p>But this is just the beginning. There are countless ways you could expand on this setup to suit your specific needs or explore new possibilities. Here are a few ideas to get you started:</p>
<ol>
<li><p><strong>Customize Your Feed Algorithm:</strong> The feed algorithm we used in this example is relatively simple, focusing on posts from specific authors. You could expand on this by incorporating more complex criteria, such as keywords, hashtags, or even sentiment analysis. This would be like using advanced filters or editing techniques to select and enhance your photos.</p>
</li>
<li><p><strong>Integrate with Other Services:</strong> You could integrate your feed generator with other AWS services or third-party APIs to add more functionality. For example, you could use AWS Comprehend to analyze the sentiment of posts, AWS Translate to support multiple languages, or AWS SNS to send notifications when new posts are added to the feed.</p>
</li>
<li><p><strong>Create a User Interface:</strong> While Bluesky provides a platform to view the feeds, you could also create your own user interface to display the feeds in a unique way. This could be a web app, a mobile app, or even an Alexa skill. This would be like creating your own online gallery or photo app to showcase your photos.</p>
</li>
<li><p><strong>Scale Up:</strong> Our setup is designed to be scalable, but you could take this further by implementing more advanced scaling strategies. For example, you could use AWS Auto Scaling to automatically adjust the capacity of your Fargate service based on demand, or AWS ElastiCache to improve the performance of your database.</p>
</li>
<li><p><strong>Secure Your Setup:</strong> While we've implemented basic security measures, there's always more you can do to protect your data and your users. You could use AWS Shield for DDoS protection, AWS WAF for web application firewall, or AWS Macie to discover, classify, and protect sensitive data.</p>
</li>
</ol>
<p>Remember, the sky's the limit when it comes to what you can achieve with AWS services and Bluesky. So don't be afraid to experiment, innovate, and create something truly unique.</p>
]]></content:encoded></item><item><title><![CDATA[Destroy THEIR Stacks - Ephemeral CDK Stacks as a Service]]></title><description><![CDATA[In this post, we will enhance our ephemeral stack architecture by consolidating the destruction process to a central service. We will utilize a stack lifetime tag in conjunction with the MakeDestroyable aspect from the @aws-community/ephemeral npm li...]]></description><link>https://martzmakes.com/destroy-their-stacks-ephemeral-cdk-stacks-as-a-service</link><guid isPermaLink="true">https://martzmakes.com/destroy-their-stacks-ephemeral-cdk-stacks-as-a-service</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[Devops]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Thu, 29 Jun 2023 15:53:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688053922496/edf9776f-0a60-4728-84c4-73b25b7e0672.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, we will enhance our ephemeral stack architecture by consolidating the destruction process to a central service. We will utilize a stack lifetime tag in conjunction with the <code>MakeDestroyable</code> aspect from the <a target="_blank" href="https://www.npmjs.com/package/@aws-community/ephemeral">@aws-community/ephemeral</a> npm library.</p>
<p>Ephemeral stacks are temporary stacks in AWS that are designed to exist for a short period of time. This is particularly useful in development environments where you want to test something but don’t need the stack to be up indefinitely.</p>
<p>This article is a follow-up to two previous posts on the topic of ephemeral stacks:</p>
<ul>
<li><p><a target="_blank" href="https://matt.martz.codes/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction">Say Goodbye to Your CDK Stacks: A Guide to Self-Destruction</a></p>
</li>
<li><p><a target="_blank" href="https://matt.martz.codes/blink-and-its-gone-embracing-ephemeral-cdk-stacks-for-efficient-devops">Blink and It's Gone: Embracing Ephemeral CDK Stacks for Efficient DevOps</a></p>
</li>
</ul>
<h2 id="heading-the-value-of-ephemeral-stacks">The Value of Ephemeral Stacks</h2>
<p>But why would you want to use ephemeral stacks?</p>
<ol>
<li><p><strong>Cost Savings 💰</strong>: By using resources only for the time needed, you can significantly reduce costs. You no longer have to worry about unused resources accumulating costs because the stacks self-terminate after the stipulated period.</p>
</li>
<li><p><strong>Efficient Resource Allocation 🔄</strong>: In fast-paced development environments, resources are constantly being allocated and deallocated. Ephemeral stacks make this process more efficient, ensuring that resources are available when needed and are released when no longer in use.</p>
</li>
<li><p><strong>Reduced Complexity 🧠</strong>: Keeping track of which resources are actively being used can be a complex task. By using ephemeral stacks, you know that any active resource is being used for a good reason. This reduces the complexity of managing your infrastructure.</p>
</li>
<li><p><strong>Enhanced Security 🔒</strong>: Minimizing the lifespan of your stacks reduces the exposure window for potential security vulnerabilities. By limiting the duration a resource is up, you inherently limit the time it can be exploited.</p>
</li>
<li><p><strong>Realistic Testing Environments</strong> 🧪: Ephemeral stacks are great for simulating production environments without the permanence. They allow you to conduct realistic tests and experiments, enabling you to glean insights and identify issues that might not be evident in traditional development environments.</p>
</li>
<li><p><strong>Simplified Clean-Up</strong> 🧹: Forget the days of manually cleaning up resources post-testing. With the self-destruction aspect of ephemeral stacks, the clean-up is automatic. This not only saves time but also ensures that no remnants are left behind that can cause clutter or additional costs.</p>
</li>
<li><p><strong>Easy Scalability for Temporary Needs</strong> ⚖️: Sometimes you need to scale resources quickly to meet a temporary need (e.g., a one-time data processing job). Ephemeral stacks allow for such scalability without the long-term commitment.</p>
</li>
</ol>
<p>Armed with these benefits, it’s clear that ephemeral stacks are an incredibly powerful tool for optimizing AWS resource management, especially in development environments. Let's dive into how we can further improve the architecture by consolidating the destruction process.</p>
<h2 id="heading-understanding-the-key-components">Understanding the Key Components</h2>
<p>We will go through the changes to the <a target="_blank" href="https://www.npmjs.com/package/@aws-community/ephemeral">@aws-community/ephemeral</a> npm library and demonstrate how it can be used.</p>
<p>The code for the <a target="_blank" href="https://www.npmjs.com/package/@aws-community/ephemeral">@aws-community/ephemeral</a> npm library is here: <a target="_blank" href="https://github.com/aws-community-projects/ephemeral">https://github.com/aws-community-projects/ephemeral</a></p>
<p>The example project that uses it is here: <a target="_blank" href="https://github.com/martzcodes/blog-ephemeral">https://github.com/martzcodes/blog-ephemeral</a></p>
<h3 id="heading-the-destroyme-stack-and-construct">The DestroyMe Stack and Construct</h3>
<p>The <code>DestroyMeConstruct</code> uses the <code>SelfDestructAspect</code> from the previous posts to ensure that all of the AWS resources in the stack are set to a <code>DESTROY</code> removal policy. Additionally, it sets a <code>STACK_LIFE</code> tag on the stack, which indicates how long the stack should live if it receives no further updates. This tag is what the central service uses to pick the stack up and schedule it for destruction. Here's the code snippet for this part:</p>
<pre><code class="lang-typescript">Tags.of(Stack.of(<span class="hljs-built_in">this</span>)).add(<span class="hljs-string">'STACK_LIFE'</span>, duration.toSeconds().toString());
Aspects.of(Stack.of(<span class="hljs-built_in">this</span>)).add(<span class="hljs-keyword">new</span> SelfDestructAspect());
</code></pre>
<p><code>DestroyMeStack</code> is a higher-level construct that simply includes <code>DestroyMeConstruct</code>, making it convenient to extend.</p>
<h3 id="heading-the-destroyer-stack">The Destroyer Stack</h3>
<p>The <code>DestroyerStack</code> is the core of this enhancement. Instead of having each stack deploy a step function that will self-destroy, which could lead to conflicts or complications, we centralize the destruction process.</p>
<p><code>DestroyerStack</code> uses <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event-list.html">AWS Service Events</a> from <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks-event-bridge.html">CloudFormation</a> to detect stacks that have the <code>STACK_LIFE</code> tag. Every time a CDK stack deploys, CloudFormation emits Stack Status Change events. From such an event, we can fetch the stack details, including its tags, and determine whether we should track the stack for deletion.</p>
<p>If the stack has the <code>STACK_LIFE</code> tag, we add an entry into a DynamoDB table with a <code>TimeToLive</code> (TTL) property. This TTL is the sum of the current time and the stack life duration. When DynamoDB removes the item due to TTL expiration, we trigger a Lambda function to delete the stack.</p>
<p>Here's how the table is created:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> tableName = <span class="hljs-string">'destroyer'</span>;
<span class="hljs-keyword">const</span> table = <span class="hljs-keyword">new</span> Table(<span class="hljs-built_in">this</span>, tableName, {
  tableName,
  partitionKey: {
    name: <span class="hljs-string">'pk'</span>,
    <span class="hljs-keyword">type</span>: AttributeType.STRING,
  },
  billingMode: BillingMode.PAY_PER_REQUEST,
  removalPolicy: RemovalPolicy.DESTROY,
  timeToLiveAttribute: <span class="hljs-string">'ttl'</span>,
  stream: StreamViewType.NEW_AND_OLD_IMAGES,
});
</code></pre>
<p>In case the stack deletion fails, we also track the <code>DELETE_FAILED</code> status and send notifications to an SNS Topic for manual intervention.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> failTopic = <span class="hljs-keyword">new</span> Topic(<span class="hljs-built_in">this</span>, <span class="hljs-string">'fail-topic'</span>);
<span class="hljs-keyword">new</span> Rule(<span class="hljs-built_in">this</span>, <span class="hljs-string">'delete-failed-rule'</span>, {
  eventPattern: {
    source: [<span class="hljs-string">'aws.cloudformation'</span>],
    detailType: [<span class="hljs-string">'CloudFormation Stack Status Change'</span>],
    detail: {
      resourceStatus: [<span class="hljs-string">'DELETE_FAILED'</span>],
    },
  },
  targets: [<span class="hljs-keyword">new</span> SnsTopic(failTopic)],
});
</code></pre>
<h3 id="heading-cloudformation-event-function">CloudFormation Event Function</h3>
<p>This Lambda function is triggered by AWS Service Events. It retrieves information from the CloudFormation service and writes to the DynamoDB table.</p>
<p>Here's how the Lambda function is configured:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> cloudformationFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">'fn-cloudformation'</span>, {
  runtime: Runtime.NODEJS_18_X,
  memorySize: <span class="hljs-number">1024</span>,
  timeout: Duration.minutes(<span class="hljs-number">5</span>),
  entry: join(__dirname, local ? <span class="hljs-string">'destroyer-stack.fn-cloudformation.ts'</span> : <span class="hljs-string">'destroyer-stack.fn-cloudformation.js'</span>),
  initialPolicy: [
    <span class="hljs-keyword">new</span> PolicyStatement({
      effect: Effect.ALLOW,
      actions: [
        <span class="hljs-string">'cloudformation:Describe*'</span>,
        <span class="hljs-string">'cloudformation:Get*'</span>,
        <span class="hljs-string">'cloudformation:List*'</span>,
      ],
      resources: [<span class="hljs-string">'*'</span>],
    }),
  ],
});
table.grantReadWriteData(cloudformationFn);
cloudformationFn.addEnvironment(<span class="hljs-string">'DESTROY_TABLE_NAME'</span>, table.tableName);
</code></pre>
<p>And then we trigger the lambda on those AWS Service Events:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> Rule(<span class="hljs-built_in">this</span>, <span class="hljs-string">'cloudformation-rule'</span>, {
  eventPattern: {
    source: [<span class="hljs-string">'aws.cloudformation'</span>],
    detailType: [<span class="hljs-string">'CloudFormation Stack Status Change'</span>],
  },
  targets: [<span class="hljs-keyword">new</span> LambdaFunction(cloudformationFn)],
});
</code></pre>
<p>In our case, we don't really care whether the stack deployed successfully or not: we reset the ttl with every deployment, failures included. Either way, a developer is actively working on the project, so we don't want to delete it.</p>
<p>The lambda handler code simply describes the stack and, if the <code>STACK_LIFE</code> tag exists, puts an item into DynamoDB with the stack name as the partition key.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> StackName = event.detail[<span class="hljs-string">'stack-id'</span>];
<span class="hljs-keyword">const</span> describeCommand = <span class="hljs-keyword">new</span> DescribeStacksCommand({
  StackName,
});
<span class="hljs-keyword">const</span> stacks = <span class="hljs-keyword">await</span> cf.send(describeCommand);
<span class="hljs-keyword">const</span> stack = stacks.Stacks?.[<span class="hljs-number">0</span>];
<span class="hljs-keyword">const</span> stackLife = stack?.Tags?.find(<span class="hljs-function">(<span class="hljs-params">tag</span>) =&gt;</span> tag.Key === <span class="hljs-string">'STACK_LIFE'</span>)?.Value;
<span class="hljs-keyword">if</span> (stackLife) {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">await</span> ddbDocClient.send(
        <span class="hljs-keyword">new</span> PutCommand({
          TableName: process.env.DESTROY_TABLE_NAME,
          Item: {
            pk: stack.StackName,
            ttl: <span class="hljs-built_in">Math</span>.ceil(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>().getTime() / <span class="hljs-number">1000</span> + <span class="hljs-built_in">Number</span>(stackLife)),
          },
        }),
      );
    } <span class="hljs-keyword">catch</span> (e) {
      <span class="hljs-built_in">console</span>.log(e);
    }
}
</code></pre>
<h3 id="heading-destroy-function">Destroy Function</h3>
<p>The destroy function operates similarly. We make sure that it is triggered from the DynamoDB Stream and that it has access to <code>cloudformation:DeleteStack</code>.</p>
<pre><code class="lang-typescript">destroyFn.addEventSource(
  <span class="hljs-keyword">new</span> DynamoEventSource(table, {
    startingPosition: StartingPosition.LATEST,
  }),
);
destroyFn.addToRolePolicy(
  <span class="hljs-keyword">new</span> PolicyStatement({
    actions: [<span class="hljs-string">'cloudformation:DeleteStack'</span>],
    resources: [<span class="hljs-string">'*'</span>],
    effect: Effect.ALLOW,
  }),
);
</code></pre>
<p>The destroy function handler code filters the DynamoDB stream records to make sure that the item is being removed and that its ttl has actually expired. The ttl check doubles as a safety escape hatch: if you remove an item from DynamoDB before its expiration, the stack won't be deleted.</p>
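<p>The removal check itself isn't shown below, but a minimal sketch could look like this (<code>REMOVE</code> is the event name DynamoDB Streams emits for both TTL expirations and manual deletes):</p>
<pre><code class="lang-typescript">import type { DynamoDBStreamEvent } from 'aws-lambda';

export const handler = async (event: DynamoDBStreamEvent) =&gt; {
  // Only act on records where DynamoDB removed the item (TTL expiry or manual delete)
  const removals = event.Records.filter(
    (record) =&gt; record.eventName === 'REMOVE' &amp;&amp; record.dynamodb?.OldImage
  );
  // ... the ttl check and stack deletion follow
};
</code></pre>
<p>For each removed item, the handler then verifies that the ttl actually expired:</p>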
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> currentTimeInSeconds = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>().getTime() / <span class="hljs-number">1000</span>;
<span class="hljs-keyword">if</span> (item.ttl &gt; currentTimeInSeconds) {
  <span class="hljs-comment">// item was manually removed and not expired</span>
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'item was manually removed and not expired'</span>, currentTimeInSeconds, item.ttl);
  <span class="hljs-keyword">return</span> [...p];
}
</code></pre>
<p>Then it deletes the expired stacks:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> client = <span class="hljs-keyword">new</span> CloudFormationClient({});
<span class="hljs-keyword">await</span> <span class="hljs-built_in">Promise</span>.all(
  stacksToDestroy.map(
    <span class="hljs-keyword">async</span> (stackName) =&gt;
      <span class="hljs-keyword">await</span> client.send(
        <span class="hljs-keyword">new</span> DeleteStackCommand({
          StackName: stackName,
        }),
      ),
  ),
);
</code></pre>
<h2 id="heading-how-to-use-aws-communityephemeral">How to Use @aws-community/ephemeral</h2>
<p>Example Code for this section is located here: <a target="_blank" href="https://github.com/martzcodes/blog-ephemeral">https://github.com/martzcodes/blog-ephemeral</a></p>
<p>First, you need to deploy the DestroyerStack. You can do this in a separate project or by manually deploying the stack with <code>npx cdk deploy DestroyerStack</code>.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { DestroyerStack } <span class="hljs-keyword">from</span> <span class="hljs-string">'@aws-community/ephemeral'</span>;

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> cdk.App();
<span class="hljs-keyword">new</span> DestroyerStack(app, <span class="hljs-string">'DestroyerStack'</span>);
</code></pre>
<p>Once the DestroyerStack is in place and monitoring the AWS Service Events, you can make any of your stacks ephemeral by extending the <code>DestroyMeStack</code> or adding the <code>DestroyMeConstruct</code>.</p>
<p>Here, we have extended the stack:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { DestroyMeStack, DestroyMeStackProps } <span class="hljs-keyword">from</span> <span class="hljs-string">'@aws-community/ephemeral'</span>;
<span class="hljs-keyword">import</span> { Construct } <span class="hljs-keyword">from</span> <span class="hljs-string">'constructs'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> BlogEphemeralStack <span class="hljs-keyword">extends</span> DestroyMeStack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: DestroyMeStackProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);
    <span class="hljs-comment">// your stuff here</span>
  }
}
</code></pre>
<p>and then we can deploy it using <code>npx cdk deploy EphemeralStack</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> BlogEphemeralStack(app, <span class="hljs-string">'EphemeralStack'</span>, {
  destroyMeEnable: <span class="hljs-literal">true</span>,
  destroyMeDuration: cdk.Duration.minutes(<span class="hljs-number">3</span>),
});
</code></pre>
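<p>Alternatively, if your stack already extends another base class, you can add the <code>DestroyMeConstruct</code> directly. This is only a sketch; the prop names are assumptions modeled on the stack-level props above:</p>
<pre><code class="lang-typescript">import { DestroyMeConstruct } from '@aws-community/ephemeral';

// Inside an existing Stack's constructor (prop names assumed, not verified):
new DestroyMeConstruct(this, 'destroy-me', {
  duration: cdk.Duration.minutes(3),
});
</code></pre>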
<p><a target="_blank" href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html"><strong><em>It is important to note that the Dynamo TTL timing is NOT exact</em></strong></a></p>
<blockquote>
<p>TTL typically deletes expired items within a few days. Depending on the size and activity level of a table, the actual delete operation of an expired item can vary. Because TTL is meant to be a background process, the nature of the capacity used to expire and delete items via TTL is variable (but free of charge).</p>
</blockquote>
<p>If you need to delete the stack sooner, you can delete it manually, or delete the item from DynamoDB once its ttl has passed; the DynamoDB stream will then trigger the destroy function without waiting for the TTL sweep.</p>
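<p>Here's a minimal sketch of forcing an immediate cleanup from a script, using a hypothetical <code>forceDestroy</code> helper and assuming the default <code>destroyer</code> table name:</p>
<pre><code class="lang-typescript">import { CloudFormationClient, DeleteStackCommand } from '@aws-sdk/client-cloudformation';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DeleteCommand, DynamoDBDocumentClient } from '@aws-sdk/lib-dynamodb';

const cf = new CloudFormationClient({});
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

const forceDestroy = async (stackName: string) =&gt; {
  // Delete the stack directly rather than waiting for the TTL sweep
  await cf.send(new DeleteStackCommand({ StackName: stackName }));
  // Remove the tracking item; if its ttl is still in the future, the
  // destroy function's escape hatch ignores the removal, so nothing double-fires
  await ddb.send(
    new DeleteCommand({ TableName: 'destroyer', Key: { pk: stackName } })
  );
};
</code></pre>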
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this blog post, we dove into enhancing our ephemeral stack architecture by centralizing the stack destruction process and employing a stack life tag via the <code>DestroyMe</code> constructs from the <code>@aws-community/ephemeral</code> npm library. This approach ensures that all of the AWS resources in the stack are set to a DESTROY removal policy and also sets a <code>STACK_LIFE</code> tag, indicating the lifetime of the stack in the absence of updates.</p>
<p>To summarize the key enhancements:</p>
<ol>
<li><p><strong>Centralized Destruction Service</strong>: The centralization of destruction using the <code>DestroyerStack</code> minimizes the risks of conflicts and complications, making it more efficient.</p>
</li>
<li><p><strong>AWS Service Events</strong>: Utilizing AWS Service Events to detect stacks with the <code>STACK_LIFE</code> tag enables automation and efficiency in monitoring and managing the lifetime of resources.</p>
</li>
<li><p><strong>Automated Cleanup</strong>: The architecture now has an automated cleanup mechanism, which will be triggered based on the <code>STACK_LIFE</code> tag, and if there's a failure in the cleanup process, you will be notified via SNS.</p>
</li>
<li><p><strong>Enhanced Resource Management</strong>: With this setup, resources can be more efficiently managed, particularly during development stages where resource provisioning might be ephemeral.</p>
</li>
</ol>
<p>This enhancement is particularly beneficial for DevOps environments, where teams frequently create and destroy resources for testing and development purposes. By automating the destruction of temporary resources, teams can ensure that only necessary resources are retained, leading to cost savings and more manageable infrastructure.</p>
<p>However, do remember that the timing for deletion with DynamoDB's TTL is not precise. If you require more exact timing for resource cleanup, additional manual steps may be necessary.</p>
<p>By integrating these enhancements into your ephemeral stack architecture, you’ll enable more streamlined, automated, and efficient resource management within your AWS environment.</p>
]]></content:encoded></item><item><title><![CDATA[Creating an Aurora MySQL Database and Setting Up a Kinesis CDC Stream with AWS CDK]]></title><description><![CDATA[Welcome to this comprehensive guide where we will be using the AWS Cloud Development Kit (CDK) to create an Aurora MySQL Database, initialize it using Custom Resources, and set up a Change Data Capture (CDC) Stream with Amazon Data Migration Service ...]]></description><link>https://martzmakes.com/creating-an-aurora-mysql-database-and-setting-up-a-kinesis-cdc-stream-with-aws-cdk</link><guid isPermaLink="true">https://martzmakes.com/creating-an-aurora-mysql-database-and-setting-up-a-kinesis-cdc-stream-with-aws-cdk</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[event-driven-architecture]]></category><category><![CDATA[advanced]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Mon, 26 Jun 2023 15:39:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/vd8DbM-5pDg/upload/4c20f5efac9d02532d2ac934e9fe4d70.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to this comprehensive guide where we will be using the AWS Cloud Development Kit (CDK) to create an Aurora MySQL Database, initialize it using Custom Resources, and set up a Change Data Capture (CDC) Stream with Amazon Data Migration Service (DMS) and Kinesis.</p>
<p>This post builds upon the concepts introduced in <a target="_blank" href="https://matt.martz.codes/how-to-use-binlogs-to-make-an-aurora-mysql-event-stream">How to Use BinLogs to Make an Aurora MySQL Event Stream</a>. Instead of relying on a lambda to parse the BinLog periodically, we'll be leveraging the capabilities of AWS DMS. The future integration of <a target="_blank" href="https://aws.amazon.com/blogs/aws/new-aws-dms-serverless-automatically-provisions-and-scales-capacity-for-migration-and-data-replication/">Serverless DMS</a> with CloudFormation promises to further enhance this system.</p>
<p>You can find the code for this project on <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream">GitHub</a>.</p>
<p>To ensure clarity and organization, our project will be structured into two separate stacks: the DatabaseStack and the DMS Stack. The DatabaseStack includes the VPC, the Aurora MySQL database, and the lambdas responsible for initializing and seeding the database. The DMS Stack encompasses DMS, the CustomResources that manage the DMS Replication Task, and the target Kinesis Stream.</p>
<p>This division allows us to accommodate those who already have a VPC and Aurora MySQL database in place. If you fall into this category, you can easily integrate your existing resources into the DMS stack.</p>
<h2 id="heading-step-1-creating-the-vpc-and-aurora-mysql-database">Step 1: Creating the VPC and Aurora MySQL Database</h2>
<p>Our first step is to create the DatabaseStack, which involves setting up the Aurora MySQL database and the VPC. You can find the code for this step <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/db-stack.ts">here</a>.</p>
<p>When creating the VPC, it's crucial to ensure that it includes a NAT Gateway. This gateway allows instances in a private subnet to connect to the internet or other AWS Services while preventing inbound connections from the internet. This is essential because our resources within the VPC need to communicate with AWS Services. Fortunately, the CDK <code>Vpc</code> construct provisions NAT Gateways by default.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> vpc = <span class="hljs-keyword">new</span> Vpc(<span class="hljs-built_in">this</span>, <span class="hljs-string">"vpc"</span>, {
  maxAzs: <span class="hljs-number">2</span>,
});
</code></pre>
<p>Next, we create the Aurora MySQL database cluster:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> db = <span class="hljs-keyword">new</span> DatabaseCluster(<span class="hljs-built_in">this</span>, <span class="hljs-string">"db"</span>, {
  clusterIdentifier: <span class="hljs-string">`db`</span>,
  credentials: Credentials.fromGeneratedSecret(<span class="hljs-string">"admin"</span>),
  defaultDatabaseName: dbName,
  engine: DatabaseClusterEngine.auroraMysql({
    version: AuroraMysqlEngineVersion.VER_3_03_0,
  }),
  iamAuthentication: <span class="hljs-literal">true</span>,
  instanceProps: {
    instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.MEDIUM),
    vpc,
    vpcSubnets: {
      onePerAz: <span class="hljs-literal">true</span>,
    },
  },
  removalPolicy: RemovalPolicy.DESTROY,
  parameters: {
    binlog_format: <span class="hljs-string">"ROW"</span>,
    log_bin_trust_function_creators: <span class="hljs-string">"1"</span>,
    <span class="hljs-comment">// https://aws.amazon.com/blogs/database/introducing-amazon-aurora-mysql-enhanced-binary-log-binlog/</span>
    aurora_enhanced_binlog: <span class="hljs-string">"1"</span>,
    binlog_backup: <span class="hljs-string">"0"</span>,
    binlog_replication_globaldb: <span class="hljs-string">"0"</span>
  },
});
db.connections.allowDefaultPortInternally();
</code></pre>
<p>Now that the database is created, we need to initialize the schema. We achieve this by utilizing a <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html">CustomResource</a>, which triggers actions during a CloudFormation stack deployment. In our case, we'll trigger a lambda function that connects to the database and creates a table. This CustomResource can also be used to create users or seed the database with data, but for now, we'll focus on creating an empty table.</p>
<p>The first step in this process is to create the lambda. We ensure that the lambda has access to the database's secret (which is automatically created) with <code>db.secret?.grantRead(initFn)</code>. This secret contains the credentials that the lambda needs to connect to the database.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> initFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">`db-init`</span>, {
  ...lambdaProps,
  entry: join(__dirname, <span class="hljs-string">"lambda/db-init.ts"</span>),
  environment: {
    SECRET_ARN: secret.secretArn,
    DB_NAME: dbName,
    TABLE_NAME: tableName,
  },
  vpc,
  vpcSubnets: {
    onePerAz: <span class="hljs-literal">true</span>,
  },
  securityGroups: db.connections.securityGroups,
});
db.secret?.grantRead(initFn);
initFn.node.addDependency(db);
</code></pre>
<p>The <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/db-init.ts">lambda handler code</a> is responsible for creating the table, and we ensure that this action is carried out when the stack is created:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> <span class="hljs-keyword">type</span> {
  CloudFormationCustomResourceEvent,
  CloudFormationCustomResourceFailedResponse,
  CloudFormationCustomResourceSuccessResponse,
} <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-lambda'</span>;

<span class="hljs-keyword">import</span> { getConnectionPool } <span class="hljs-keyword">from</span> <span class="hljs-string">'./utils/connection'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> (
  event: CloudFormationCustomResourceEvent,
): <span class="hljs-built_in">Promise</span>&lt;CloudFormationCustomResourceSuccessResponse | CloudFormationCustomResourceFailedResponse&gt; =&gt; {
  <span class="hljs-keyword">switch</span> (event.RequestType) {
    <span class="hljs-keyword">case</span> <span class="hljs-string">'Create'</span>:
      <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">const</span> connection = <span class="hljs-keyword">await</span> getConnectionPool();

        <span class="hljs-keyword">await</span> connection.query(
          <span class="hljs-string">"CALL mysql.rds_set_configuration('binlog retention hours', 24);"</span>
        );

        <span class="hljs-keyword">await</span> connection.query(<span class="hljs-string">`DROP TABLE IF EXISTS <span class="hljs-subst">${process.env.DB_NAME}</span>.<span class="hljs-subst">${process.env.TABLE_NAME}</span>;`</span>);
        <span class="hljs-keyword">await</span> connection.query(<span class="hljs-string">`CREATE TABLE <span class="hljs-subst">${process.env.DB_NAME}</span>.<span class="hljs-subst">${process.env.TABLE_NAME}</span> (id INT NOT NULL AUTO_INCREMENT, example VARCHAR(255) NOT NULL, PRIMARY KEY (id));`</span>);

        <span class="hljs-keyword">return</span> { ...event, PhysicalResourceId: <span class="hljs-string">`init-db`</span>, Status: <span class="hljs-string">'SUCCESS'</span> };
      } <span class="hljs-keyword">catch</span> (e) {
        <span class="hljs-built_in">console</span>.error(<span class="hljs-string">`initialization failed!`</span>, e);
        <span class="hljs-keyword">return</span> { ...event, PhysicalResourceId: <span class="hljs-string">`init-db`</span>, Reason: (e <span class="hljs-keyword">as</span> <span class="hljs-built_in">Error</span>).message, Status: <span class="hljs-string">'FAILED'</span> };
      }
    <span class="hljs-keyword">default</span>:
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'No op for'</span>, event.RequestType);
      <span class="hljs-keyword">return</span> { ...event, PhysicalResourceId: <span class="hljs-string">'init-db'</span>, Status: <span class="hljs-string">'SUCCESS'</span> };
  }
};
</code></pre>
<p>To ensure that the lambda is invoked as part of the stack deployment, we <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.custom_resources-readme.html">create a provider for the lambda and a CustomResource</a>. The provider specifies the lambda to be invoked, and the CustomResource triggers the invocation when the stack is deployed. This ensures that the database initialization is fully integrated into the stack deployment process.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> initProvider = <span class="hljs-keyword">new</span> Provider(<span class="hljs-built_in">this</span>, <span class="hljs-string">`init-db-provider`</span>, {
  onEventHandler: initFn,
});

<span class="hljs-keyword">new</span> CustomResource(<span class="hljs-built_in">this</span>, <span class="hljs-string">`init-db-resource`</span>, {
  serviceToken: initProvider.serviceToken,
});
</code></pre>
<h2 id="heading-step-2-understanding-cloudformation-and-dms-streams">Step 2: Understanding CloudFormation and DMS Streams</h2>
<p>DMS Change Data Capture replication relies on MySQL's Binlog. To enable DMS, binlog must be enabled in MySQL. When creating the database in the previous step, we included parameters that enable Aurora's enhanced binlog for improved performance. More information about this feature can be found <a target="_blank" href="https://aws.amazon.com/blogs/database/introducing-amazon-aurora-mysql-enhanced-binary-log-binlog/">here</a>.</p>
<pre><code class="lang-bash">binlog_format: <span class="hljs-string">"ROW"</span>,
log_bin_trust_function_creators: <span class="hljs-string">"1"</span>,
aurora_enhanced_binlog: <span class="hljs-string">"1"</span>,
binlog_backup: <span class="hljs-string">"0"</span>,
binlog_replication_globaldb: <span class="hljs-string">"0"</span>
</code></pre>
<p>Moving on, we can now create the stack that contains the DMS Replication Task and Kinesis stream. You can access the relevant code <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/blog-dms-stream-stack.ts">here</a>.</p>
<p>First, we create the Kinesis stream that will serve as the target for the events.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> dbStream = <span class="hljs-keyword">new</span> Stream(<span class="hljs-built_in">this</span>, <span class="hljs-string">`db-stream`</span>, {
  streamName: <span class="hljs-string">`db-stream`</span>,
  streamMode: StreamMode.ON_DEMAND,
});
</code></pre>
<p>DMS requires a role called <code>dms-vpc-role</code> to function correctly, and the one it would create on its own lacks the necessary permissions. Therefore, we need to create this role manually.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> dmsRole = <span class="hljs-keyword">new</span> Role(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-role`</span>, {
  roleName: <span class="hljs-string">`dms-vpc-role`</span>, <span class="hljs-comment">// need the name for this one</span>
  assumedBy: <span class="hljs-keyword">new</span> ServicePrincipal(<span class="hljs-string">"dms.amazonaws.com"</span>),
});
dmsRole.addManagedPolicy(
  ManagedPolicy.fromManagedPolicyArn(<span class="hljs-built_in">this</span>, <span class="hljs-string">`AmazonDMSVPCManagementRole`</span>, <span class="hljs-string">`arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole`</span>)
);
</code></pre>
<p>Next, we create the Replication Subnet Group that DMS will use to connect to the database. Since the subnet group would otherwise attempt to create the <code>dms-vpc-role</code> itself with the wrong permissions, we need to ensure that it uses the existing role we created. This requires adding a dependency between the two resources.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> dmsSubnet = <span class="hljs-keyword">new</span> CfnReplicationSubnetGroup(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-subnet`</span>, {
  replicationSubnetGroupDescription: <span class="hljs-string">"DMS Subnet"</span>,
  subnetIds: vpc.selectSubnets({
    onePerAz: <span class="hljs-literal">true</span>,
  }).subnetIds,
});
dmsSubnet.node.addDependency(dmsRole);
</code></pre>
<p>Now we can create the replication instance itself, utilizing the subnet we just created. For simplicity, we are using the smallest instance class, although ideally, we would support using <a target="_blank" href="https://aws.amazon.com/blogs/aws/new-aws-dms-serverless-automatically-provisions-and-scales-capacity-for-migration-and-data-replication/">Serverless DMS</a>. Unfortunately, CloudFormation does not yet provide support for this.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> dmsRep = <span class="hljs-keyword">new</span> CfnReplicationInstance(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-replication`</span>, {
  replicationInstanceClass: <span class="hljs-string">"dms.t2.micro"</span>,
  multiAz: <span class="hljs-literal">false</span>,
  publiclyAccessible: <span class="hljs-literal">false</span>,
  replicationSubnetGroupIdentifier: dmsSubnet.ref,
  vpcSecurityGroupIds: securityGroups.map(
    <span class="hljs-function">(<span class="hljs-params">sg</span>) =&gt;</span> sg.securityGroupId
  ),
});
</code></pre>
<p>To enable DMS to connect to Aurora, we need to grant it access to the secret created by the database. We accomplish this by manually creating a Role, granting it permission to read the database's secret, and providing it as part of the source endpoint for DMS.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> dmsSecretRole = <span class="hljs-keyword">new</span> Role(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-secret-role`</span>, {
  assumedBy: <span class="hljs-keyword">new</span> ServicePrincipal(
    <span class="hljs-string">`dms.<span class="hljs-subst">${Stack.<span class="hljs-keyword">of</span>(<span class="hljs-built_in">this</span>).region}</span>.amazonaws.com`</span>
  ),
});
secret.grantRead(dmsSecretRole);

<span class="hljs-keyword">const</span> source = <span class="hljs-keyword">new</span> CfnEndpoint(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-source-endpoint`</span>, {
  endpointType: <span class="hljs-string">"source"</span>,
  engineName: <span class="hljs-string">"aurora"</span>,
  mySqlSettings: {
    secretsManagerAccessRoleArn: dmsSecretRole.roleArn,
    secretsManagerSecretId: secret.secretName,
  },
});
</code></pre>
<p>Since our target is Kinesis, we also need to create a "target" endpoint and assign a role that has access to put records on the Kinesis stream.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> streamWriterRole = <span class="hljs-keyword">new</span> Role(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-stream-role`</span>, {
  assumedBy: <span class="hljs-keyword">new</span> ServicePrincipal(
    <span class="hljs-string">`dms.<span class="hljs-subst">${Stack.<span class="hljs-keyword">of</span>(<span class="hljs-built_in">this</span>).region}</span>.amazonaws.com`</span>
  ),
});

streamWriterRole.addToPolicy(
  <span class="hljs-keyword">new</span> PolicyStatement({
    resources: [dbStream.streamArn],
    actions: [
      <span class="hljs-string">"kinesis:DescribeStream"</span>,
      <span class="hljs-string">"kinesis:PutRecord"</span>,
      <span class="hljs-string">"kinesis:PutRecords"</span>,
    ],
  })
);

<span class="hljs-keyword">const</span> target = <span class="hljs-keyword">new</span> CfnEndpoint(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-target-endpoint`</span>, {
  endpointType: <span class="hljs-string">"target"</span>,
  engineName: <span class="hljs-string">"kinesis"</span>,
  kinesisSettings: {
    messageFormat: <span class="hljs-string">"JSON"</span>,
    streamArn: dbStream.streamArn,
    serviceAccessRoleArn: streamWriterRole.roleArn,
  },
});
</code></pre>
<p>Finally, we create the replication task itself. We provide a generic table mapping that emits events for changes to any table. It's worth noting that wildcards can be used to restrict the mapping to specific tables if desired. For more information about table mappings, take a look at the <a target="_blank" href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html">docs</a>.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> dmsTableMappings = {
  rules: [
    {
      <span class="hljs-string">"rule-type"</span>: <span class="hljs-string">"selection"</span>,
      <span class="hljs-string">"rule-id"</span>: <span class="hljs-string">"1"</span>,
      <span class="hljs-string">"rule-name"</span>: <span class="hljs-string">"1"</span>,
      <span class="hljs-string">"object-locator"</span>: {
        <span class="hljs-string">"schema-name"</span>: dbName,
        <span class="hljs-string">"table-name"</span>: <span class="hljs-string">"%"</span>,
        <span class="hljs-string">"table-type"</span>: <span class="hljs-string">"table"</span>,
      },
      <span class="hljs-string">"rule-action"</span>: <span class="hljs-string">"include"</span>,
      filters: [],
    },
  ],
};
<span class="hljs-keyword">const</span> task = <span class="hljs-keyword">new</span> CfnReplicationTask(<span class="hljs-built_in">this</span>, <span class="hljs-string">`dms-stream-rep`</span>, {
  replicationInstanceArn: dmsRep.ref,
  migrationType: <span class="hljs-string">"cdc"</span>,
  sourceEndpointArn: source.ref,
  targetEndpointArn: target.ref,
  tableMappings: <span class="hljs-built_in">JSON</span>.stringify(dmsTableMappings),
  replicationTaskSettings: <span class="hljs-built_in">JSON</span>.stringify({
    BeforeImageSettings: {
      EnableBeforeImage: <span class="hljs-literal">true</span>,
      FieldName: <span class="hljs-string">"before"</span>,
      ColumnFilter: <span class="hljs-string">"all"</span>,
    }
  }),
});
</code></pre>
<p>Additionally, we provide <a target="_blank" href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.BeforeImage.html"><code>BeforeImageSettings</code></a> in the <code>replicationTaskSettings</code>, which enables us to include a before image for row updates, allowing us to infer deltas on table row updates. For Change Data Capture, we are using the migration type <code>cdc</code> since we are not migrating existing data.</p>
<h3 id="heading-important-considerations-for-cloudformation-and-dms">Important Considerations for CloudFormation and DMS</h3>
<p>CloudFormation can NOT update DMS tasks that are actively running, and it does not automatically start a DMS Replication Task as part of the deployment. To get around this, we'll set up two CustomResources and enforce their ordering: one runs before any DMS changes, and one runs after.</p>
<p>The <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/dms-pre.ts">"pre" lambda</a> checks whether the CloudFormation change set contains changes to DMS. If it does and the replication task is currently running, the lambda stops the task and waits for it to fully stop before responding.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> StackName = <span class="hljs-string">`<span class="hljs-subst">${process.env.STACK_NAME}</span>`</span>;
<span class="hljs-keyword">if</span> (!ReplicationTaskArn) {
  ReplicationTaskArn = <span class="hljs-keyword">await</span> getDmsTask({ cf, StackName });
}
<span class="hljs-keyword">const</span> status = <span class="hljs-keyword">await</span> getDmsStatus({ dms, ReplicationTaskArn });
<span class="hljs-keyword">if</span> (status === <span class="hljs-string">'running'</span>) {
  <span class="hljs-keyword">if</span> (event.RequestType === <span class="hljs-string">'Delete'</span> || <span class="hljs-keyword">await</span> hasDmsChanges({ cf, StackName })) {
    <span class="hljs-comment">// pause task</span>
    <span class="hljs-keyword">const</span> stopCmd = <span class="hljs-keyword">new</span> StopReplicationTaskCommand({
      ReplicationTaskArn,
    });
    <span class="hljs-keyword">await</span> dms.send(stopCmd);
    <span class="hljs-comment">// wait for task to be fully paused</span>
    <span class="hljs-keyword">await</span> waitForDmsStatus({ dms, ReplicationTaskArn, targetStatus: <span class="hljs-string">'stopped'</span> });
  }
}
</code></pre>
<p>On the other end, the <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/dms-post.ts">"post" lambda</a> will do the opposite. It will start (or resume) the DMS replication and wait for it to finish spinning up.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">const</span> startCmd = <span class="hljs-keyword">new</span> StartReplicationTaskCommand({
    ReplicationTaskArn,
    StartReplicationTaskType: <span class="hljs-string">"resume-processing"</span>,
  });
  <span class="hljs-keyword">await</span> dms.send(startCmd);
  <span class="hljs-keyword">await</span> waitForDmsStatus({
    dms,
    ReplicationTaskArn,
    targetStatus: <span class="hljs-string">"running"</span>,
  });
</code></pre>
<p>Additionally, we set up a lambda function that consumes the Kinesis stream by adding the stream as an event source:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> kinesisFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">`stream-kinesis`</span>, {
  ...lambdaProps,
  entry: join(__dirname, <span class="hljs-string">"lambda/stream-subscriber.ts"</span>),
  tracing: Tracing.ACTIVE,
});

kinesisFn.addEventSource(
  <span class="hljs-keyword">new</span> KinesisEventSource(dbStream, {
    batchSize: <span class="hljs-number">100</span>, <span class="hljs-comment">// default</span>
    startingPosition: StartingPosition.LATEST,
    filters: [
      { pattern: <span class="hljs-built_in">JSON</span>.stringify({ partitionKey: [<span class="hljs-string">`<span class="hljs-subst">${dbName}</span>.<span class="hljs-subst">${tableName}</span>`</span>] }) },
    ],
  })
);
</code></pre>
<h2 id="heading-step-3-testing-the-event-stream">Step 3: Testing the Event Stream</h2>
<p>With both stacks deployed, we can now test DMS. To facilitate this process, a lambda function has been created, and its code can be accessed <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/db-seed.ts">here</a>. You can invoke this function using test events via the AWS console.</p>
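<p>If you'd rather script the invocation than click through the console, something like this works (the function name here is a placeholder for whatever your stack names the seed lambda):</p>
<pre><code class="lang-typescript">import { InvokeCommand, LambdaClient } from '@aws-sdk/client-lambda';

const lambda = new LambdaClient({});
// 'db-seed' is a placeholder; use your deployed seed function's name
await lambda.send(new InvokeCommand({ FunctionName: 'db-seed' }));
</code></pre>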
<p>By logging in to the DMS console, we can observe that the replication task is already running, thanks to the CustomResources.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687793566192/0263bf18-e2e7-42c0-bfc8-a68814f3ed39.png" alt class="image--center mx-auto" /></p>
<p>Viewing the table statistics for the task, we can see that our schema has been identified. If there were additional tables, they would also be listed here, but in our case, we only have the <code>examples</code> table.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687793594551/fdd316f3-2f9f-4811-9bcb-0a5f224c8e14.png" alt class="image--center mx-auto" /></p>
<p>Invoking our seed lambda function will insert a row into the table. After a short time, the table statistics page will reflect the insert operation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687793604762/69d2e5bd-6f36-42ad-9ee3-7cb888070c6c.png" alt class="image--center mx-auto" /></p>
<p>When the Kinesis stream invokes the <a target="_blank" href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/stream-subscriber.ts">subscriber lambda</a>, it receives the following event:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"record"</span>: {
        <span class="hljs-attr">"kinesis"</span>: {
            <span class="hljs-attr">"kinesisSchemaVersion"</span>: <span class="hljs-string">"1.0"</span>,
            <span class="hljs-attr">"partitionKey"</span>: <span class="hljs-string">"blog.examples"</span>,
            <span class="hljs-attr">"sequenceNumber"</span>: <span class="hljs-string">"49642050638404656466522708801490648817992453925189451794"</span>,
            <span class="hljs-attr">"data"</span>: <span class="hljs-string">"ewoJImRhdGEiOgl7CgkJImlkIjoJMSwKCQkiZXhhbXBsZSI6CSJoZWxsbyA2MzkiCgl9LAoJIm1ldGFkYXRhIjoJewoJCSJ0aW1lc3RhbXAiOgkiMjAyMy0wNi0yNVQxNToyMToxNC4wNTUxMzdaIiwKCQkicmVjb3JkLXR5cGUiOgkiZGF0YSIsCgkJIm9wZXJhdGlvbiI6CSJpbnNlcnQiLAoJCSJwYXJ0aXRpb24ta2V5LXR5cGUiOgkic2NoZW1hLXRhYmxlIiwKCQkic2NoZW1hLW5hbWUiOgkiYmxvZyIsCgkJInRhYmxlLW5hbWUiOgkiZXhhbXBsZXMiLAoJCSJ0cmFuc2FjdGlvbi1pZCI6CTEyODg0OTAyNjA5Cgl9Cn0="</span>,
            <span class="hljs-attr">"approximateArrivalTimestamp"</span>: <span class="hljs-number">1687706474.102</span>
        },
        <span class="hljs-attr">"eventSource"</span>: <span class="hljs-string">"aws:kinesis"</span>,
        <span class="hljs-attr">"eventVersion"</span>: <span class="hljs-string">"1.0"</span>,
        <span class="hljs-attr">"eventID"</span>: <span class="hljs-string">"shardId-000000000001:49642050638404656466522708801490648817992453925189451794"</span>,
        <span class="hljs-attr">"eventName"</span>: <span class="hljs-string">"aws:kinesis:record"</span>,
        <span class="hljs-attr">"invokeIdentityArn"</span>: <span class="hljs-string">"arn:aws:iam::359317520455:role/BlogDmsStreamStack-streamkinesisServiceRole6A79529-7U8Q9JUVULLO"</span>,
        <span class="hljs-attr">"awsRegion"</span>: <span class="hljs-string">"us-east-1"</span>,
        <span class="hljs-attr">"eventSourceARN"</span>: <span class="hljs-string">"arn:aws:kinesis:us-east-1:359317520455:stream/db-stream"</span>
    }
}
</code></pre>
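<p>Kinesis Base64-encodes the record payload, so the subscriber has to decode the <code>data</code> field before it can act on it. A minimal sketch:</p>
<pre><code class="lang-typescript">import type { KinesisStreamEvent } from 'aws-lambda';

export const handler = async (event: KinesisStreamEvent) =&gt; {
  for (const record of event.Records) {
    // Each record's data field is a Base64-encoded JSON string
    const decoded = JSON.parse(
      Buffer.from(record.kinesis.data, 'base64').toString('utf-8')
    );
    console.log(decoded.metadata.operation, decoded.data);
  }
};
</code></pre>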
<p>Decoding the event's <code>data</code> field yields the following:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"data"</span>: {
        <span class="hljs-attr">"id"</span>: <span class="hljs-number">1</span>,
        <span class="hljs-attr">"example"</span>: <span class="hljs-string">"hello 639"</span>
    },
    <span class="hljs-attr">"metadata"</span>: {
        <span class="hljs-attr">"timestamp"</span>: <span class="hljs-string">"2023-06-25T15:21:14.055137Z"</span>,
        <span class="hljs-attr">"record-type"</span>: <span class="hljs-string">"data"</span>,
        <span class="hljs-attr">"operation"</span>: <span class="hljs-string">"insert"</span>,
        <span class="hljs-attr">"partition-key-type"</span>: <span class="hljs-string">"schema-table"</span>,
        <span class="hljs-attr">"schema-name"</span>: <span class="hljs-string">"blog"</span>,
        <span class="hljs-attr">"table-name"</span>: <span class="hljs-string">"examples"</span>,
        <span class="hljs-attr">"transaction-id"</span>: <span class="hljs-number">12884902609</span>
    }
}
</code></pre>
<p>By examining the decoded event, we can determine that it was an "insert" operation, and the "data" field contains the full row. In this case, since it was an insert, there is no "before" image.</p>
<p>If a row is updated, the event also includes a <code>before</code> image capturing the previous values:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"parsed"</span>: {
        <span class="hljs-attr">"data"</span>: {
            <span class="hljs-attr">"id"</span>: <span class="hljs-number">1</span>,
            <span class="hljs-attr">"example"</span>: <span class="hljs-string">"hello 297"</span>
        },
        <span class="hljs-attr">"before"</span>: {
            <span class="hljs-attr">"id"</span>: <span class="hljs-number">1</span>,
            <span class="hljs-attr">"example"</span>: <span class="hljs-string">"hello 639"</span>
        },
        <span class="hljs-attr">"metadata"</span>: {
            <span class="hljs-attr">"timestamp"</span>: <span class="hljs-string">"2023-06-25T15:50:51.449661Z"</span>,
            <span class="hljs-attr">"record-type"</span>: <span class="hljs-string">"data"</span>,
            <span class="hljs-attr">"operation"</span>: <span class="hljs-string">"update"</span>,
            <span class="hljs-attr">"partition-key-type"</span>: <span class="hljs-string">"schema-table"</span>,
            <span class="hljs-attr">"schema-name"</span>: <span class="hljs-string">"blog"</span>,
            <span class="hljs-attr">"table-name"</span>: <span class="hljs-string">"examples"</span>,
            <span class="hljs-attr">"transaction-id"</span>: <span class="hljs-number">12884903827</span>
        }
    }
}
</code></pre>
<p>From there you could calculate a diff to see that the <code>example</code> column went from <code>hello 639</code> to <code>hello 297</code>.</p>
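<p>A minimal sketch of computing that diff, assuming the decoded event shape shown above:</p>
<pre><code class="lang-typescript">interface CdcRecord {
  data: Record&lt;string, unknown&gt;;
  before?: Record&lt;string, unknown&gt;;
  metadata: { operation: string };
}

// Returns the columns that changed in an update, keyed by column name
const diffUpdate = (event: CdcRecord) =&gt; {
  const changes: Record&lt;string, { from: unknown; to: unknown }&gt; = {};
  if (event.metadata.operation !== 'update' || !event.before) return changes;
  for (const [column, after] of Object.entries(event.data)) {
    const before = event.before[column];
    if (before !== after) changes[column] = { from: before, to: after };
  }
  return changes;
};

// diffUpdate(parsed) =&gt; { example: { from: 'hello 639', to: 'hello 297' } }
</code></pre>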
<h2 id="heading-conclusion">Conclusion</h2>
<p>In conclusion, this comprehensive guide has provided you with the knowledge and steps necessary to create an Aurora MySQL Database, initialize it using Custom Resources, and set up a Change Data Capture (CDC) Stream with AWS CDK, AWS Data Migration Service (DMS), and Kinesis. By leveraging AWS DMS and event-driven architecture principles, you can unlock the full potential of real-time data replication and event streaming.</p>
<p>As you move forward, there are several ways you can expand on the concepts and ideas covered in this guide. Here are a few suggestions:</p>
<ol>
<li><p><strong>Explore Advanced CDC Stream Configurations</strong>: Dive deeper into the configuration options available with DMS CDC streams. Experiment with table mappings, filtering options, and advanced settings to tailor the replication process to your specific use cases.</p>
</li>
<li><p><strong>Integrate Additional AWS Services</strong>: Consider integrating other AWS services into your event-driven architecture. For example, you could explore using AWS Lambda to process the replicated events, Amazon S3 for data storage, or AWS Glue for data cataloging and ETL operations.</p>
</li>
<li><p><strong>Implement Event-Driven Microservices</strong>: Build event-driven microservices that consume the CDC stream events to trigger actions or updates across different systems. Explore how you can use services like AWS Step Functions or AWS EventBridge to orchestrate complex workflows based on the captured events.</p>
</li>
<li><p><strong>Scale and Optimize</strong>: Experiment with scaling and optimizing your CDC stream setup. Explore strategies for handling high-velocity data streams, optimizing performance, and implementing fault-tolerant architectures.</p>
</li>
<li><p><strong>Monitor and Analyze</strong>: Set up monitoring and analytics solutions to gain insights into your event-driven system. Utilize services like Amazon CloudWatch, AWS X-Ray, or AWS AppSync to track and analyze the performance, reliability, and usage patterns of your CDC stream and associated components.</p>
</li>
</ol>
<p>By expanding on the ideas presented in this guide, you can harness the full potential of a Change Data Capture stream in an event-driven architecture. This approach allows you to build scalable, real-time systems that react to changes in your data and drive intelligent decision-making in your applications. The possibilities for innovation and optimization are vast, so take this foundation and continue exploring the exciting world of event-driven architectures.</p>
]]></content:encoded></item><item><title><![CDATA[Blink and It's Gone: Embracing Ephemeral CDK Stacks for Efficient DevOps]]></title><description><![CDATA[I'm excited to announce that I'll be speaking at AWS Summit Washington, DC on June 8th, 2023, at 2:15PM (DEV206). My DevChat will discuss the benefits of ephemeral CDK Stacks for development workflows and CI/CD pipelines. If you're attending, I'd lov...]]></description><link>https://martzmakes.com/blink-and-its-gone-embracing-ephemeral-cdk-stacks-for-efficient-devops</link><guid isPermaLink="true">https://martzmakes.com/blink-and-its-gone-embracing-ephemeral-cdk-stacks-for-efficient-devops</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[Devops]]></category><category><![CDATA[cloudformation]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Tue, 30 May 2023 18:27:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/od287vQyufw/upload/2289f440ddfe7a37933f061d5bc844e7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I'm excited to announce that I'll be speaking at AWS Summit Washington, DC on June 8th, 2023, at 2:15PM (DEV206). My DevChat will discuss the benefits of ephemeral CDK Stacks for development workflows and CI/CD pipelines. If you're attending, I'd love to see you there and answer any questions you might have.</em></p>
<p>This post serves as a supplement to my talk (spoilers), providing more insights into ephemeral CDK Stacks, their implementation, and best practices. If you're exploring the topic or haven't yet decided on attending the AWS Summit, I hope this post sparks your interest and encourages you to join me for an engaging discussion. See you there! 🎉</p>
<p>In the fast-paced DevOps world, managing and cleaning up temporary cloud resources can be challenging. Forgotten stacks from testing or PoC stages lead to resource wastage and inflated cloud bills. To address this, ephemeral AWS CDK Stacks are a game changer.</p>
<p>This blog post explores integrating ephemeral CDK Stacks into CI/CD pipelines and development workflows. Powered by Self-Destructing Constructs and CDK Aspects, they automate the cleanup process, ensuring a tidy AWS environment and resource savings.</p>
<p>Ephemeral stacks are a simple yet powerful addition to your DevOps toolkit, streamlining resource management and reducing costs. Let's dive into leveraging ephemeral stacks, saving you from manual cleanup and the perils of forgotten stacks.</p>
<h2 id="heading-self-destructing-construct-and-cdk-aspects">Self-Destructing Construct and CDK Aspects</h2>
<p>Ephemeral CDK Stacks are made possible by combining the concepts from two of my previous posts on Self-Destructing Constructs and CDK Aspects.</p>
<p><strong>Self-Destructing Constructs</strong>: As discussed in my previous post, <a target="_blank" href="https://matt.martz.codes/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction">Say Goodbye to Your CDK Stacks: A Guide to Self-Destruction</a>, these are unique AWS CDK constructs that employ Step Functions to automatically delete the stack after a defined duration. This construct helps ensure that unnecessary resources aren't lingering around, reducing costs and freeing up space in your AWS account.</p>
<p><strong>CDK Aspects</strong>: My last post, <a target="_blank" href="https://matt.martz.codes/breaking-bad-practices-with-cdk-aspects">Breaking Bad Practices with CDK Aspects</a>, explained how CDK Aspects allow for implementing cross-cutting concerns across your stack, such as applying an aggressive removal policy to all resources. When paired with self-destructing stacks, this ensures a clean slate post-destruction, leaving no stray resources behind.</p>
<h2 id="heading-integrating-ephemeral-stacks-with-cicd-pipelines">Integrating Ephemeral Stacks with CI/CD Pipelines</h2>
<p>CI/CD pipelines, a cornerstone of modern DevOps practices, can greatly benefit from ephemeral CDK Stacks. In a typical CI/CD pipeline, each code commit triggers a process that includes building, testing, and deploying the application. This often involves deploying a stack and, once the tests are executed, cleaning up the resources.</p>
<p>However, sometimes stacks are left behind due to failures in the pipeline or prematurely terminated tests. These forgotten stacks lead to unnecessary costs and clutter. Ephemeral stacks resolve this by auto-deleting after a specific duration, ensuring a clean AWS environment and reducing the wastage of resources.</p>
<p>To illustrate this, let's compare the sequence of events in a typical CI/CD pipeline and one enhanced with ephemeral stacks.</p>
<h3 id="heading-typical-cicd-pipeline">Typical CI/CD pipeline:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685470449704/d0d2004e-866c-452e-b63c-dba22aa5546d.png" alt class="image--center mx-auto" /></p>
<p>In the traditional CI/CD pipeline, the stack is explicitly deleted as part of the pipeline, which could fail or be skipped, resulting in lingering stacks.</p>
<h3 id="heading-cicd-pipeline-with-ephemeral-stacks">CI/CD pipeline with Ephemeral Stacks:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685470477278/10e06a95-e00f-4558-b6e1-deaa2ca8754b.png" alt class="image--center mx-auto" /></p>
<p>With ephemeral stacks, the stack deletion is automated and independent of the CI/CD pipeline, ensuring that stacks are always cleaned up.</p>
<p>This approach reduces the complexity of your CI/CD pipeline and ensures cleaner resource management in your AWS environment. Additionally, you can set up an aggressive removal policy using CDK Aspects for a more comprehensive cleanup of all resources associated with the stack.</p>
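<p>The aspect itself can be tiny. Here's a minimal sketch of an aggressive removal policy aspect (see the CDK Aspects post for the full treatment):</p>
<pre><code class="lang-typescript">import { Aspects, CfnResource, IAspect, RemovalPolicy } from 'aws-cdk-lib';
import { IConstruct } from 'constructs';

// Forces every resource in the construct tree to delete with the stack
class DestroyEverythingAspect implements IAspect {
  visit(node: IConstruct): void {
    if (CfnResource.isCfnResource(node)) {
      node.applyRemovalPolicy(RemovalPolicy.DESTROY);
    }
  }
}

// Apply to a stack instance (here, `stack`) at synth time
Aspects.of(stack).add(new DestroyEverythingAspect());
</code></pre>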
<p><em>It doesn't sound like a lot, but in practice, it can save hours (over the course of weeks) while ensuring AWS resources (and money) aren't being wasted. 🚀</em></p>
<h2 id="heading-implementing-ephemeral-stacks-in-development-workflows">Implementing Ephemeral Stacks in Development Workflows</h2>
<p>Developers often deploy proof of concept (PoC) stacks to test new features, validate concepts, or debug issues. These stacks provide a sandboxed environment to experiment without affecting the production infrastructure. However, once the purpose is served, these stacks often become "forgotten" entities in the cloud environment. They no longer serve a meaningful purpose and, over time, contribute to clutter and unnecessary costs. Implementing ephemeral stacks in development workflows can be an excellent solution to this common oversight.</p>
<h3 id="heading-the-problem-of-forgotten-stacks">The Problem of Forgotten Stacks</h3>
<p>In the energetic world of development work, the adrenaline rush of solving complex problems or moving onto the next exhilarating task can often eclipse the essential, albeit less exciting, cleanup step. This oversight, often spurred by the thrill of innovation 💡 or the satisfaction of squashing a bug 🐛, can lead to the pile-up of forgotten temporary stacks. Over time, these lingering stacks clutter your environment and inflate your cloud bill 💸.</p>
<p>Consider some common scenarios where these temporary stacks crop up:</p>
<ul>
<li><p><strong>Proof of Concept Testing:</strong> During the innovation process, PoC stacks are often crafted to assess feasibility or demonstrate the practicality of a concept. Upon approval or rejection of the concept, these stacks fulfill their purpose and, ideally, should vanish 🗑️.</p>
</li>
<li><p><strong>Feature Testing:</strong> To ensure isolated and accurate testing, developers frequently deploy separate stacks for new features. Once validated and merged into the primary codebase, these stacks have served their purpose and become redundant 🔄.</p>
</li>
<li><p><strong>Debugging:</strong> In the process of troubleshooting intricate issues, developers may create replica stacks to isolate and understand the problem. After resolving the issue, these stacks lose their relevance and ought to be removed 🔎.</p>
</li>
</ul>
<p>Let me illustrate this with a personal anecdote to drive the point home:</p>
<p>In my role, I once assumed the responsibility of cleaning up unused stacks in our development accounts. On this particular day, I found myself navigating through and deleting over 200 of these forgotten stacks. As a serverless-first shop, this exercise didn't incur exorbitant costs, but it did consume resources and impacted our resource quotas ⏳. This served as a stark reminder of the importance of efficient stack management, and how easily these forgotten stacks can clutter our environment 🧹.</p>
<p>Ephemeral stacks can be the silver bullet to these common oversights, introducing an automated cleanup process to ensure your development environment stays neat, tidy and cost-effective 💰.</p>
<h3 id="heading-the-ephemeral-stack-solution">The Ephemeral Stack Solution</h3>
<p>Ephemeral stacks can serve as a fail-safe, ensuring that even if a developer forgets to delete a stack, it won't linger indefinitely. By setting a self-destruct timer at the time of stack creation, developers can rest easy knowing the stack will automatically clean itself up after a specific period.</p>
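<p>Concretely, setting that timer can be a couple of lines at synth time. Here's a sketch using the <code>@aws-community/ephemeral</code> library (any self-destructing construct with a configurable duration works the same way):</p>
<pre><code class="lang-typescript">import { DestroyMeStack } from '@aws-community/ephemeral';
import * as cdk from 'aws-cdk-lib';

const app = new cdk.App();
// PoC stack that cleans itself up a day after its last deployment
new DestroyMeStack(app, 'PocStack', {
  destroyMeEnable: true,
  destroyMeDuration: cdk.Duration.days(1),
});
</code></pre>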
<p>Here's how this can be integrated into the common scenarios mentioned above:</p>
<ul>
<li><p><strong>Proof of Concept Testing</strong>: Set the stack to auto-delete after the meeting where the PoC will be demonstrated. This way, if the concept is rejected, the stack is automatically cleaned up. If the concept is approved, the stack can be manually preserved or redeployed as a more permanent resource.</p>
</li>
<li><p><strong>Feature Testing</strong>: Set a short lifespan for the stack — perhaps a few hours or a day — to allow for feature validation. After that period, the stack, if not needed, will self-destruct.</p>
</li>
<li><p><strong>Debugging</strong>: Given the unpredictable nature of debugging, a slightly longer lifespan could be set. Once the issue is resolved, if the developer forgets to clean up, the stack will still auto-delete after the set duration.</p>
</li>
</ul>
<p>Integrating ephemeral stacks into your development workflow can lead to more efficient resource utilization, a cleaner cloud environment, and cost savings. It's a practical way to safeguard against human error without adding an extra burden on the developers.</p>
<h2 id="heading-best-practices-and-practical-insights">Best Practices and Practical Insights</h2>
<p>Incorporating ephemeral stacks into your development workflow requires careful planning and adherence to best practices. Here are some insights to help you avoid common pitfalls and get the most out of self-destructing stacks.</p>
<h3 id="heading-set-a-reasonable-lifespan">Set a Reasonable Lifespan</h3>
<p>While the primary goal is to prevent forgotten stacks from lingering, setting the self-destruction timer too short might interrupt development or testing work. Consider the typical time required for the task at hand and add some buffer to determine the optimal lifespan.</p>
<h3 id="heading-integrate-with-cicd-pipeline">Integrate with CI/CD Pipeline</h3>
<p>To get the most out of ephemeral stacks, they should be integrated with your CI/CD pipeline. This ensures that the stacks are created as part of your automated testing and deployment process and that they clean up after themselves when no longer needed.</p>
<h3 id="heading-manage-permissions-carefully">Manage Permissions Carefully</h3>
<p>Remember, self-destructing stacks need permission to delete resources. Ensure that these permissions are granted judiciously, keeping in line with the principle of least privilege. Be especially cautious when dealing with production environments.</p>
<h3 id="heading-clearly-label-ephemeral-stacks">Clearly Label Ephemeral Stacks</h3>
<p>To avoid confusion and potential mishaps, it's important to clearly label ephemeral stacks. This could be through a naming convention or tagging. Clear labels also make it easier to locate and manage these stacks in the AWS Management Console.</p>
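<p>As a sketch, tagging at the stack level with CDK propagates the label to every resource in the stack (the naming convention and tag keys here are just examples):</p>
<pre><code class="lang-typescript">import { App, Stack, Tags } from "aws-cdk-lib";

const app = new App();
// A naming convention makes ephemeral stacks easy to spot at a glance...
const stack = new Stack(app, "ephemeral-feature-x-jdoe");
// ...and tags make them easy to filter in the console or via the API
Tags.of(stack).add("ephemeral", "true");
Tags.of(stack).add("owner", "jdoe");
</code></pre>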
<h3 id="heading-employ-a-monitoring-system">Employ a Monitoring System</h3>
<p>While the self-destruction mechanism should work reliably, it's a good idea to have a monitoring system in place. This will alert you to any stacks that didn't delete as expected or if there are any issues with the self-destruction process.</p>
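<p>One way to approach this (a sketch, assuming your account receives CloudFormation Stack Status Change events in EventBridge) is a rule that notifies you whenever any stack deletion fails:</p>
<pre><code class="lang-typescript">import { Rule } from "aws-cdk-lib/aws-events";
import { SnsTopic } from "aws-cdk-lib/aws-events-targets";
import { Topic } from "aws-cdk-lib/aws-sns";
import { Construct } from "constructs";

export class StackDeletionMonitor extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const topic = new Topic(this, "DeletionFailures");

    // Alert whenever a stack ends up in DELETE_FAILED, so a self-destruct
    // that silently failed doesn't linger unnoticed
    new Rule(this, "DeleteFailedRule", {
      eventPattern: {
        source: ["aws.cloudformation"],
        detailType: ["CloudFormation Stack Status Change"],
        detail: { "status-details": { status: ["DELETE_FAILED"] } },
      },
    }).addTarget(new SnsTopic(topic));
  }
}
</code></pre>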
<h3 id="heading-consider-exceptions">Consider Exceptions</h3>
<p>Not every stack should be ephemeral. Some stacks may need to persist for longer periods for certain tasks, like long-running data processing or scenarios where manual intervention is required. Ensure that your system allows for such exceptions.</p>
<h3 id="heading-educate-your-team">Educate Your Team</h3>
<p>Finally, make sure that your development team is fully aware of the ephemeral stack concept and its implications. They should understand the purpose, the lifespan of these stacks, and what they can expect during the self-destruction process.</p>
<p>Implementing ephemeral stacks requires more than just technical setup — it's a shift in the development mindset. With proper planning and adherence to these best practices, you can seamlessly integrate ephemeral stacks into your development workflow, leading to more efficient resource utilization and cost savings.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In the realm of DevOps, resource management efficiency is critical, and ephemeral CDK Stacks provide a practical, automated solution to a common problem. By incorporating these self-destructing constructs into your CI/CD pipelines and development workflows, you can ensure the timely cleanup of temporary stacks, reducing resource wastage and maintaining a cleaner AWS environment.</p>
<p>Ephemeral stacks offer a powerful combination of convenience and economy, alleviating developers from the burden of manual cleanup and saving valuable time and costs. They also guard against human error and oversight, a common cause of lingering, unnecessary cloud resources.</p>
<p>Remember, the effective implementation of ephemeral stacks involves more than just technical setup. It also requires a shift in mindset, careful planning, and adherence to best practices. But with these in place, ephemeral stacks can become a vital part of your DevOps toolkit, promoting efficiency, tidiness, and cost-effectiveness in your cloud journey.</p>
]]></content:encoded></item><item><title><![CDATA[Breaking Bad Practices with CDK Aspects]]></title><description><![CDATA[In the ever-evolving landscape of cloud infrastructure, AWS Cloud Development Kit (CDK) continues to stand as a groundbreaking tool, simplifying the process of defining cloud resources. Within this universe, Aspects—a feature within CDK—hold a distin...]]></description><link>https://martzmakes.com/breaking-bad-practices-with-cdk-aspects</link><guid isPermaLink="true">https://martzmakes.com/breaking-bad-practices-with-cdk-aspects</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Wed, 17 May 2023 13:58:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/yplNhhXxBtM/upload/77d68bddd10f217a35ef8c0992efe9d1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the ever-evolving landscape of cloud infrastructure, AWS Cloud Development Kit (CDK) continues to stand as a groundbreaking tool, simplifying the process of defining cloud resources. Within this universe, Aspects—a feature within CDK—hold a distinctive position. <a target="_blank" href="https://docs.aws.amazon.com/cdk/v2/guide/aspects.html">Aspects</a> act as autonomous agents within your CDK constructs, systematically traversing and applying consistent modifications.</p>
<p>This article aims to dissect the intricacies of CDK Aspects, shedding light on their fundamental purpose, operation, and advanced use cases. Aspects, in many ways, embody the balance between consistency and flexibility in cloud infrastructure development, a concept that's growing increasingly important in this era of complex, scalable applications.</p>
<p>Whether you are a seasoned AWS CDK user or a newcomer looking to expand your cloud development toolkit, this deep dive into CDK Aspects will provide valuable insights into this powerful feature. As we peel back the layers, you'll discover how Aspects can enhance resource management, improve security protocols, and promote code efficiency. Let's 'cook' up some knowledge on CDK Aspects.</p>
<p><img src="https://media.giphy.com/media/yE72eDy7lj3JS/giphy.gif" alt /></p>
<p>The example code for this article is located here: https://github.com/martzcodes/blog-aspects</p>
<p>This article by <a target="_blank" href="https://hashnode.com/@JannikWempe">@JannikWempe</a> is another great resource: <a target="_blank" href="https://aws.hashnode.com/the-power-of-aws-cdk-aspects">https://aws.hashnode.com/the-power-of-aws-cdk-aspects</a></p>
<h2 id="heading-understanding-cdk-aspects-the-basics">Understanding CDK Aspects - The Basics</h2>
<p>At its core, the AWS Cloud Development Kit (CDK) is a software development framework that allows developers to define cloud infrastructure in code. This is where CDK Aspects come into play. Aspects are a feature within CDK that act like intelligent filters, traversing your code, identifying specific constructs, and applying modifications to them.</p>
<p>Consider this simple TypeScript code snippet of an AWS S3 bucket defined using CDK:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Stack, StackProps } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib'</span>;
<span class="hljs-keyword">import</span> { Bucket } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib/aws-s3'</span>;
<span class="hljs-keyword">import</span> { Construct } <span class="hljs-keyword">from</span> <span class="hljs-string">'constructs'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> MyCDKStack <span class="hljs-keyword">extends</span> Stack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props?: StackProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);

    <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyBucket'</span>, {
      versioned: <span class="hljs-literal">false</span>,
    });
  }
}
</code></pre>
<p>Now, let's say you want to ensure that every S3 bucket in your CDK application has versioning enabled. With CDK Aspects you can do this in two ways.</p>
<ol>
<li><p>Validation - Create a CDK Aspect to validate all Buckets in the Stack and throw an Error if it is not set</p>
</li>
<li><p>Modification - Create a CDK Aspect to automatically set the versioned property on all buckets</p>
</li>
</ol>
<p>In this context, your CDK code is like a complex chemistry experiment that needs careful management to prevent unwanted reactions.</p>
<ol>
<li><p><strong>Validation (the Chemistry Professor)</strong>: In the validation role, the Chemistry Professor is like a vigilant observer, watching you perform your experiment. If they notice you're about to mix incompatible substances or your measurements are incorrect, they intervene immediately (throw an error) to prevent a potential disaster and ensure the effectiveness of your experiment.</p>
</li>
<li><p><strong>Modification (the Chemistry Assistant)</strong>: In the lab, an Assistant stands by to help with the experiment, offering a slight adjustment to ensure the reactions go as planned. They don't conduct the experiment for you but provide just enough assistance to keep you on track. This is akin to the modification role of Aspects, which scan your constructs and make slight but important tweaks to ensure consistency and conformity to standards.</p>
</li>
</ol>
<p>To draw the parallel back to our CDK Aspects, the Aspect could, like a Chemistry Assistant, automatically adjust certain aspects of your resources (like enabling versioning for all S3 buckets), ensuring your infrastructure maintains the proper 'formula' throughout its configuration.</p>
<p><img src="https://media.giphy.com/media/73j8OT8DqHGyQ/giphy.gif" alt /></p>
<p>Here's how you might define that Aspect:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// For validation</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> ValidateVersioningAspect <span class="hljs-keyword">implements</span> IAspect {
  <span class="hljs-keyword">public</span> visit(node: IConstruct): <span class="hljs-built_in">void</span> {
    <span class="hljs-keyword">if</span> (node <span class="hljs-keyword">instanceof</span> CfnBucket) {
      <span class="hljs-keyword">if</span> (!node.versioningConfiguration
        || (!Tokenization.isResolvable(node.versioningConfiguration)
            &amp;&amp; node.versioningConfiguration.status !== <span class="hljs-string">'Enabled'</span>)) {
              Annotations.of(node).addError(<span class="hljs-string">'Bucket versioning is not enabled'</span>);
      }
    }
  }
}

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> App();
<span class="hljs-keyword">const</span> stack = <span class="hljs-keyword">new</span> MyCDKStack(app, <span class="hljs-string">'MyStack'</span>);
Aspects.of(stack).add(<span class="hljs-keyword">new</span> ValidateVersioningAspect());
</code></pre>
<p><a target="_blank" href="https://github.com/martzcodes/blog-aspects/blob/main/lib/ValidateVersioningAspect.ts">In this code</a>, the <code>ValidateVersioningAspect</code> Aspect will add an error <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions.Annotations.html">Annotation</a> if it finds an S3 bucket with versioning disabled, ensuring that all S3 buckets comply with the requirement for versioning to be enabled. On synth, the error would look like this:</p>
<pre><code class="lang-bash">[Error at /MyTestStack/MyBucket/Resource] Bucket versioning is not enabled

Found errors
</code></pre>
<p><a target="_blank" href="https://github.com/martzcodes/blog-aspects/blob/main/test/validate.test.ts">Annotations can also be tested</a> and have support from CDK's built-in assertions: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions.Annotations.html</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// For modification</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> EnableVersioningAspect <span class="hljs-keyword">implements</span> IAspect {
  <span class="hljs-keyword">public</span> visit(node: IConstruct): <span class="hljs-built_in">void</span> {
    <span class="hljs-keyword">if</span> (node <span class="hljs-keyword">instanceof</span> CfnBucket) {
      <span class="hljs-keyword">if</span> (!node.versioningConfiguration
        || (!Tokenization.isResolvable(node.versioningConfiguration)
            &amp;&amp; node.versioningConfiguration.status !== <span class="hljs-string">'Enabled'</span>)) {
              node.versioningConfiguration = {
                status: <span class="hljs-string">'Enabled'</span>
              };
      }
    }
  }
}

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> App();
<span class="hljs-keyword">const</span> stack = <span class="hljs-keyword">new</span> MyCDKStack(app, <span class="hljs-string">'MyStack'</span>);
Aspects.of(stack).add(<span class="hljs-keyword">new</span> EnableVersioningAspect());
</code></pre>
<p><a target="_blank" href="https://github.com/martzcodes/blog-aspects/blob/main/lib/EnableVersioningAspect.ts">In this code</a>, the EnableVersioningAspect class defines an Aspect that will "visit" every construct in the stack. If the construct is an instance of the Bucket class, the Aspect will set its versioned property to true, effectively enabling versioning for every bucket in the stack.</p>
<h2 id="heading-deep-dive-into-cdk-aspects">Deep Dive into CDK Aspects</h2>
<p>In this section, we'll delve deeper into the inner workings of CDK Aspects and uncover the magic behind the scenes. We'll explain the crucial role of the <code>visit</code> method, discuss the <code>Aspects.of(scope).add(aspect)</code> pattern, and illustrate how Aspects interact with the CDK's synthesis process. When using Aspects, it's also important to be aware of how the AWS CDK uses <em>tokenization</em> to manage resources' properties.</p>
<h3 id="heading-the-visit-method">The <code>visit</code> Method</h3>
<p>At the heart of any CDK Aspect is the <code>visit</code> method. As an implementer of the <code>IAspect</code> interface, this method is called when the Aspect is applied to a construct. The <code>visit</code> method takes a single argument—<code>IConstruct</code>—which is the construct the Aspect is visiting. What you do inside this method is the meat of your Aspect: whether you choose to throw an error for validation, or modify a property of the construct.</p>
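<p>Stripped to its essentials, an Aspect is just a class with a <code>visit</code> method. Here's a minimal sketch that logs the path of every construct it visits:</p>
<pre><code class="lang-typescript">import { IAspect } from "aws-cdk-lib";
import { IConstruct } from "constructs";

export class LoggingAspect implements IAspect {
  public visit(node: IConstruct): void {
    // Called once for every construct in the scope the Aspect is applied to
    console.log(node.node.path);
  }
}
</code></pre>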
<h3 id="heading-the-intricacies-of-tokenization-in-aws-cdk">The Intricacies of Tokenization in AWS CDK</h3>
<p>Tokens are placeholders used by the CDK to represent values that are not known until deployment time. For example, if you create an S3 bucket without specifying a bucket name, CDK generates a unique name and represents it with a token in your code.</p>
<p><img src="https://media.giphy.com/media/NUBp5KcV0PJBe/giphy.gif" alt /></p>
<p>When you inspect the <code>bucketName</code> property during the <code>visit</code> method, you might expect to see an actual bucket name. However, you'll instead see a token, something like <code>${Token[TOKEN.12]}</code>.</p>
<p>The tokenization system can lead to unexpected results when using Aspects. For instance, if you attempt to modify a property that uses a token, your Aspect might not behave as expected. This is because tokens aren't resolved until the CDK synthesizes your app into a CloudFormation template.</p>
<p><a target="_blank" href="https://github.com/martzcodes/blog-aspects/blob/main/lib/TokenAwareAspect.ts">Here's an example</a>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> TokenAwareAspect <span class="hljs-keyword">implements</span> IAspect {
  visit(node: IConstruct): <span class="hljs-built_in">void</span> {
    <span class="hljs-keyword">if</span> (node <span class="hljs-keyword">instanceof</span> Bucket) {
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Bucket name is <span class="hljs-subst">${node.bucketName}</span>`</span>);
    }
  }
}

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> App();
<span class="hljs-keyword">const</span> stack = <span class="hljs-keyword">new</span> Stack(app, <span class="hljs-string">'MyStack'</span>);
<span class="hljs-keyword">new</span> Bucket(stack, <span class="hljs-string">'MyBucket'</span>);
Aspects.of(stack).add(<span class="hljs-keyword">new</span> TokenAwareAspect());
</code></pre>
<p>In the console output, you'll see a token as the bucket name, not a real bucket name. Keep this in mind when designing your Aspects!</p>
<h3 id="heading-applying-aspects">Applying Aspects</h3>
<p>The <code>Aspects.of(scope).add(aspect)</code> pattern is the standard way to apply an Aspect to a construct. In this pattern, <code>Aspects.of(scope)</code> returns an <code>Aspects</code> object associated with a construct, and <code>add(aspect)</code> adds an Aspect to this object. The <code>scope</code> here could be any construct to which you want to apply the Aspect—typically an instance of a Stack or an App.</p>
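<p>Because the scope can be any construct, you control the blast radius of an Aspect. For example (a sketch reusing the classes from earlier), applying once at the App level covers every stack:</p>
<pre><code class="lang-typescript">const app = new App();
const stackA = new MyCDKStack(app, "StackA");
const stackB = new MyCDKStack(app, "StackB");

// Applied at the App scope, the Aspect visits every construct in both stacks
Aspects.of(app).add(new EnableVersioningAspect());
</code></pre>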
<h3 id="heading-aspects-and-the-synthesis-process">Aspects and the Synthesis Process</h3>
<p>CDK Aspects play a crucial role during the CDK's synthesis process. The synthesis process is a multi-stage operation where CDK translates your code into a CloudFormation template, which AWS can understand. During this process, Aspects are invoked after the construct tree has been fully initialized, but before synthesis. This allows Aspects to validate or modify constructs right before the CloudFormation templates are generated.</p>
<p>Just as Skyler White had to understand the sequence of money laundering, let's delve deeper into the sequence of CDK Aspects with a diagram.</p>
<pre><code class="lang-typescript">sequenceDiagram
    participant User
    participant CDK App
    participant Aspect
    participant CloudFormation
    User-&gt;&gt;CDK App: Runs CDK Synth
    CDK App-&gt;&gt;CDK App: Initializes Construct Tree
    CDK App-&gt;&gt;Aspect: Invokes Aspects
    Aspect--&gt;&gt;CDK App: Validates/Modifies Constructs
    CDK App-&gt;&gt;CloudFormation: Generates CloudFormation Template
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684329634714/09c818e4-28c4-4a6d-9e7b-9f14f2499d76.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-cdk-aspects-in-action-an-architecture-diagram-generator">CDK Aspects in Action: An Architecture Diagram Generator</h2>
<p>In this section, we'll explore a concrete example of using CDK Aspects in a real-world scenario. We'll delve into the internals of a recently published npm library, <a target="_blank" href="https://github.com/aws-community-projects/arch-dia"><code>@aws-community/arch-dia</code></a>, which uses a CDK Aspect to generate a pseudo-architecture diagram of a project. Not only does it visualize your AWS infrastructure, but it also tracks changes between synthesis stages, providing a visual diff.</p>
<h3 id="heading-the-architecture-diagram-aspect">The Architecture Diagram Aspect</h3>
<p>The key component in <a target="_blank" href="https://github.com/aws-community-projects/arch-dia"><code>@aws-community/arch-dia</code></a> is the <a target="_blank" href="https://github.com/aws-community-projects/arch-dia/blob/main/src/architecture-diagram.ts"><code>ArchitectureDiagramAspect</code></a>, an implementation of the <code>IAspect</code> interface. This Aspect traverses the constructs in a given Stack to generate a Mermaid diagram representing the architecture of your AWS resources. Here's an overview of the code:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> ArchitectureDiagramAspect <span class="hljs-keyword">implements</span> IAspect {
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> mermaidDiagram: <span class="hljs-built_in">string</span>[];
  <span class="hljs-keyword">private</span> stackName = <span class="hljs-string">''</span>;

  <span class="hljs-keyword">constructor</span> (<span class="hljs-params"></span>) {
    <span class="hljs-built_in">this</span>.mermaidDiagram = [];
  }

  visit (node: IConstruct): <span class="hljs-built_in">void</span> {
    <span class="hljs-keyword">if</span> (node <span class="hljs-keyword">instanceof</span> Stack) {
      <span class="hljs-built_in">this</span>.stackName = node.stackName;
      <span class="hljs-built_in">this</span>.traverseConstruct(node, <span class="hljs-string">''</span>);
    }
  }
  ...
}
</code></pre>
<p>This Aspect, like all Aspects, has a <code>visit</code> method. It checks if the visited construct is an instance of the Stack class. If it is, it initiates a traversal of the constructs in that stack.</p>
<p>The <code>traverseConstruct</code> method iteratively visits all children of a given construct, building up a Mermaid diagram string in the process:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">private</span> traverseConstruct (construct: IConstruct, parentPath: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">void</span> {
  ...
  construct.node.children.forEach(<span class="hljs-function">(<span class="hljs-params">child</span>) =&gt;</span> {
    <span class="hljs-built_in">this</span>.traverseConstruct(child, currentPath);
  });
}
</code></pre>
<h3 id="heading-generating-and-comparing-diagrams">Generating and Comparing Diagrams</h3>
<p>Once all constructs have been visited, the Aspect can generate a Mermaid diagram of the entire Stack using the <code>generateDiagram</code> method. This method also handles comparing the newly generated diagram with the previous one, if it exists, to create a visual diff:</p>
<pre><code class="lang-typescript">generateDiagram (): <span class="hljs-built_in">string</span> {
  ...
  <span class="hljs-keyword">const</span> addedElements = [...newElements].filter(<span class="hljs-function">(<span class="hljs-params">e</span>) =&gt;</span> !oldElements.has(e));
  <span class="hljs-keyword">const</span> removedElements = [...oldElements].filter(<span class="hljs-function">(<span class="hljs-params">e</span>) =&gt;</span> !newElements.has(e));
  <span class="hljs-keyword">const</span> added = <span class="hljs-built_in">this</span>.mermaidDiagram.filter(<span class="hljs-function">(<span class="hljs-params">line</span>) =&gt;</span> !old.includes(line));
  <span class="hljs-keyword">const</span> removed = old.filter(<span class="hljs-function">(<span class="hljs-params">line</span>) =&gt;</span> !<span class="hljs-built_in">this</span>.mermaidDiagram.includes(line));
  ...
}
</code></pre>
<p>This visual diff highlights the changes between the old and new architectures, providing a clear visualization of how your resources have evolved.</p>
<p>By traversing the constructs in a Stack, <code>@aws-community/arch-dia</code> can generate a visual representation of your AWS resources and track changes over time. This not only aids in understanding and documenting your infrastructure but can also serve as a powerful tool for communicating changes to stakeholders.</p>
<h2 id="heading-best-practices-and-tips-using-cdk-aspects-effectively">Best Practices and Tips: Using CDK Aspects Effectively</h2>
<p>Once you've got a handle on the basics of CDK Aspects, here are a few additional best practices and tips to help you use them more effectively.</p>
<h3 id="heading-1-use-aspects-for-cross-cutting-concerns">1. Use Aspects for Cross-Cutting Concerns</h3>
<p>CDK Aspects are ideal for applying changes or enforcing rules that cut across different layers or types of resources in your infrastructure. Consider using Aspects when you want to apply a consistent policy or setting across multiple resources, especially when they are of different types.</p>
<h3 id="heading-2-be-cognizant-of-the-construct-tree-traversal">2. Be Cognizant of the Construct Tree Traversal</h3>
<p>CDK Aspects traverse the construct tree using a depth-first approach, and the <code>visit</code> method is invoked on a construct before it is invoked on any of its children (a pre-order traversal). In certain scenarios, you may need to be aware of this order of traversal to achieve the desired results.</p>
<h3 id="heading-3-account-for-cdk-tokenization">3. Account for CDK Tokenization</h3>
<p>As we discussed earlier, the CDK uses tokenization to handle values that aren't known until deployment time. Be aware of this while designing your Aspects, especially when inspecting or modifying properties that might be tokenized. If necessary, consider using the <code>Token.isUnresolved</code> method to check if a value is a token.</p>
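<p>Here's a small sketch of a token-aware check, guarding with <code>Token.isUnresolved</code> before acting on a possibly-tokenized value:</p>
<pre><code class="lang-typescript">import { IAspect, Token } from "aws-cdk-lib";
import { CfnBucket } from "aws-cdk-lib/aws-s3";
import { IConstruct } from "constructs";

export class TokenSafeAspect implements IAspect {
  public visit(node: IConstruct): void {
    if (node instanceof CfnBucket &amp;&amp; node.bucketName) {
      // bucketName may still be a deploy-time token; skip it rather than
      // comparing against a placeholder like ${Token[TOKEN.12]}
      if (Token.isUnresolved(node.bucketName)) {
        return;
      }
      console.log(`Concrete bucket name: ${node.bucketName}`);
    }
  }
}
</code></pre>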
<h3 id="heading-4-avoid-making-changes-outside-the-visit-method">4. Avoid Making Changes Outside the Visit Method</h3>
<p>The <code>visit</code> method is the only place where you should make changes to constructs when using Aspects. While it might be technically possible to modify constructs outside this method, doing so can lead to unexpected behavior and hard-to-debug issues.</p>
<h3 id="heading-5-test-your-aspects">5. Test Your Aspects</h3>
<p>As with any code, you should thoroughly test your Aspects. Given that Aspects can modify constructs across your app, a small error in an Aspect can have a broad impact. Consider using the AWS CDK's built-in testing tools, like the <code>aws-cdk-lib/assertions</code> library, to write unit tests for your Aspects.</p>
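<p>For instance, a Jest test for the <code>ValidateVersioningAspect</code> from earlier could look something like this (a sketch; the import path assumes the repo layout linked below):</p>
<pre><code class="lang-typescript">import { App, Aspects, Stack } from "aws-cdk-lib";
import { Annotations, Match } from "aws-cdk-lib/assertions";
import { Bucket } from "aws-cdk-lib/aws-s3";
import { ValidateVersioningAspect } from "../lib/ValidateVersioningAspect";

test("flags unversioned buckets", () =&gt; {
  const stack = new Stack(new App(), "TestStack");
  new Bucket(stack, "MyBucket", { versioned: false });
  Aspects.of(stack).add(new ValidateVersioningAspect());

  // The assertion triggers synthesis, which is when Aspects are invoked
  Annotations.fromStack(stack).hasError(
    "/TestStack/MyBucket/Resource",
    Match.stringLikeRegexp("versioning is not enabled")
  );
});
</code></pre>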
<p>My example code includes tests for the aspects: <a target="_blank" href="https://github.com/martzcodes/blog-aspects/tree/main/test">https://github.com/martzcodes/blog-aspects/tree/main/test</a> as does the <a target="_blank" href="https://github.com/aws-community-projects/arch-dia"><code>@aws-community/arch-dia</code></a> library</p>
<p>CDK Aspects offer a powerful way to enforce consistency and automate modifications across your cloud infrastructure code. With these best practices and tips, you'll be well-equipped to use Aspects effectively in your AWS CDK applications. 🚀</p>
<h2 id="heading-wrap-up">Wrap Up</h2>
<p>To wrap it up, here are some final thoughts:</p>
<ol>
<li><p><strong>Harnessing the Power of Aspects</strong>: CDK Aspects are a powerful tool for AWS developers, offering a way to automate cross-cutting concerns and enforce consistency across an entire application. While they may seem complex at first, understanding how they work and how to use them effectively can greatly enhance your AWS CDK toolkit.</p>
</li>
<li><p><strong>Exploration and Experimentation</strong>: Don't be afraid to explore and experiment with Aspects. Whether you're trying to create an architecture diagram generator or a recursive Aspect, there's a lot of potential for creative and effective solutions.</p>
</li>
<li><p><strong>Caution and Diligence</strong>: Despite their power, it's important to be cautious when working with Aspects. Be aware of potential pitfalls, such as tokenization and the performance implications of using Aspects. Always test your Aspects thoroughly to avoid introducing broad-reaching errors into your application.</p>
</li>
</ol>
<p><img src="https://media.giphy.com/media/xT8qBpPTFsLrkrZahO/giphy.gif" alt /></p>
<p>In the end, CDK Aspects can be your 'Heisenberg' in managing cloud infrastructure - they have the potential to be influential, powerful, and transforming. They provide a way to simplify and automate many tasks that would otherwise require manual, error-prone work.</p>
]]></content:encoded></item><item><title><![CDATA[Say Goodbye to Your CDK Stacks: A Guide to Self-Destruction]]></title><description><![CDATA[Are you tired of constantly managing your CDK Stacks and dealing with the associated costs? If so, self-destructing CDK Stacks might be the solution you've been looking for. With the ability to automatically delete themselves after a set time, these ...]]></description><link>https://martzmakes.com/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction</link><guid isPermaLink="true">https://martzmakes.com/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[serverless]]></category><category><![CDATA[stepfunction]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Wed, 22 Feb 2023 20:27:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/YndHL7gQIJE/upload/d9390343ea8c33e212e2dedbc248d9c9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Are you tired of constantly managing your CDK Stacks and dealing with the associated costs? If so, self-destructing CDK Stacks might be the solution you've been looking for. With the ability to automatically delete themselves after a set time, these stacks can help free up resources and streamline your development process.</p>
<p>In this guide, we'll show you how to set up self-destructing CDK Stacks and integrate them into your CI/CD pipeline. By doing so, you can reduce costs and improve the efficiency of your development process. We'll also share some best practices and tips to help you make the most out of this feature. So, if you're ready to optimize your development process, read on to learn how to implement self-destructing CDK Stacks! 🤯</p>
<p>Code: <a target="_blank" href="https://github.com/martzcodes/blog-cdk-self-destruct">https://github.com/martzcodes/blog-cdk-self-destruct</a></p>
<h2 id="heading-what-will-we-make">What Will We Make?</h2>
<p>We'll create a Step Function that will be executed during the deployment of a Stack and will wait for a specified period of time. Since Step Functions are charged based on state transitions, and not the duration of the run, this will not result in additional costs. Additionally, Standard Step Functions can run for up to a year, providing us with plenty of flexibility. Once the Wait period is over, the Step Function will use the AWS SDK to automatically delete the Stack. 🗑️</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677094877291/40308f6b-9c74-4d10-a1df-93b3a02dc09b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-creating-a-selfdestruct-construct">Creating a SelfDestruct Construct</h2>
<p>We're going to get started by creating a new CDK Construct that can be used in any project. The only property input this Construct needs is the Duration after which the stack should destroy itself.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> SelfDestructProps {
  duration: Duration;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> SelfDestruct <span class="hljs-keyword">extends</span> Construct {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: SelfDestructProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id);
    <span class="hljs-keyword">const</span> { duration } = props;
  }
}
</code></pre>
<p>From here, we're going to want the Step Function to handle a few things. It should:</p>
<ol>
<li><p>Re-execute the Step Function on every Stack deployment</p>
</li>
<li><p>Close out old executions on new deployments (only have one execution running at any given time)</p>
</li>
<li><p>Wait for a pre-defined duration</p>
</li>
<li><p>Delete the Stack after the Wait period</p>
</li>
</ol>
<h3 id="heading-list-already-running-step-functions">List Already Running Step Functions</h3>
<p>First, we need to get the list of running executions of this Step Function. We can do that with the <a target="_blank" href="https://docs.aws.amazon.com/step-functions/latest/apireference/API_ListExecutions.html"><code>states:ListExecutions</code></a> SDK Command.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> listExecutions = <span class="hljs-keyword">new</span> CallAwsService(<span class="hljs-built_in">this</span>, <span class="hljs-string">`ListExecutions`</span>, {
  action: <span class="hljs-string">"listExecutions"</span>,
  iamAction: <span class="hljs-string">"states:ListExecutions"</span>,
  iamResources: [<span class="hljs-string">"*"</span>],
  parameters: {
    <span class="hljs-string">"StateMachineArn.$"</span>: <span class="hljs-string">"$$.StateMachine.Id"</span>,
    StatusFilter: <span class="hljs-string">"RUNNING"</span>,
  },
  service: <span class="hljs-string">"sfn"</span>,
});
</code></pre>
<p>🏃‍♂️ We pass in the <code>StatusFilter: "RUNNING"</code> to make sure we only get back executions that are still in the RUNNING state. Typically, there should only be one of these (from the last deployment).</p>
<h3 id="heading-stop-other-executions">Stop Other Executions</h3>
<p>Next we'll want to <code>Map</code> over the returned Executions. <code>Map</code> states are effectively Step Functions for-loops: they run a set of states once for each item in an input array.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> executionsMap = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Map</span>(<span class="hljs-built_in">this</span>, <span class="hljs-string">`ExecutionsMap`</span>, {
  inputPath: <span class="hljs-string">"$.Executions"</span>,
});
</code></pre>
<p>In this loop, we want to make sure that the execution isn't going to kill itself (not yet, at least). We do this by comparing each mapped item's <code>ExecutionArn</code> against the ARN of the currently running execution:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> stopExecution = <span class="hljs-keyword">new</span> CallAwsService(<span class="hljs-built_in">this</span>, <span class="hljs-string">`StopExecution`</span>, {
  action: <span class="hljs-string">"stopExecution"</span>,
  iamAction: <span class="hljs-string">"states:StopExecution"</span>,
  iamResources: [<span class="hljs-string">"*"</span>],
  parameters: {
    Cause: <span class="hljs-string">"Superceded"</span>,
    <span class="hljs-string">"ExecutionArn.$"</span>: <span class="hljs-string">"$.ExecutionArn"</span>,
  },
  service: <span class="hljs-string">"sfn"</span>,
});

executionsMap.iterator(
  <span class="hljs-keyword">new</span> Choice(<span class="hljs-built_in">this</span>, <span class="hljs-string">"NotSelf?"</span>)
    .when(
      Condition.not(
        Condition.stringEqualsJsonPath(<span class="hljs-string">"$.ExecutionArn"</span>, <span class="hljs-string">"$$.Execution.Id"</span>)
      ),
      stopExecution
    )
    .otherwise(<span class="hljs-keyword">new</span> Pass(<span class="hljs-built_in">this</span>, <span class="hljs-string">"self"</span>))
);
</code></pre>
<p><code>$.ExecutionArn</code> refers to the mapped execution item, while <code>$$.Execution.Id</code> refers to the ARN of the currently running execution... that is, <code>$$</code> escapes to the Step Functions Context Object, which holds metadata about the execution itself.</p>
<h3 id="heading-check-and-wait-to-delete">Check and Wait to Delete</h3>
<p>Next, we can check the execution input to make sure this run wasn't triggered by a Stack that is already destroying itself. If it was, we can exit immediately. This works out nicely: since we just stopped the other executions, we've also tied up loose ends from previous deployments by making sure there won't be any stale executions left running.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> wait = <span class="hljs-keyword">new</span> Wait(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Wait"</span>, {
  time: WaitTime.duration(duration),
});
<span class="hljs-keyword">const</span> wasDelete = <span class="hljs-keyword">new</span> Choice(<span class="hljs-built_in">this</span>, <span class="hljs-string">"WasDelete?"</span>)
  .when(
    Condition.stringEquals(<span class="hljs-string">"$$.Execution.Input.Action"</span>, <span class="hljs-string">"Delete"</span>),
    <span class="hljs-keyword">new</span> Succeed(<span class="hljs-built_in">this</span>, <span class="hljs-string">"DeleteSuccess"</span>)
  )
  .otherwise(wait);
</code></pre>
<p>As part of this, we end up Waiting the duration we set. This could be anywhere from seconds to days (up to 1 year).</p>
<p>After the Wait is over, we need to delete the stack:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> deleteStack = <span class="hljs-keyword">new</span> CallAwsService(<span class="hljs-built_in">this</span>, <span class="hljs-string">`DeleteStack`</span>, {
  action: <span class="hljs-string">"deleteStack"</span>,
  iamAction: <span class="hljs-string">"cloudformation:DeleteStack"</span>,
  iamResources: [<span class="hljs-string">"*"</span>],
  parameters: {
    <span class="hljs-string">"StackName.$"</span>: <span class="hljs-string">"$$.Execution.Input.StackName"</span>,
  },
  service: <span class="hljs-string">"cloudformation"</span>,
});
</code></pre>
<p>This is done by an AWS SDK Call <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DeleteStack.html"><code>cloudformation:DeleteStack</code></a>.</p>
<h3 id="heading-creating-the-state-machine">Creating the State Machine</h3>
<p>With all the steps created, we can tie them together to create the actual Step Function:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> finished = <span class="hljs-keyword">new</span> Succeed(<span class="hljs-built_in">this</span>, <span class="hljs-string">`Finished`</span>);

listExecutions.next(executionsMap);
executionsMap.next(wasDelete);
wait.next(deleteStack);
deleteStack.next(finished);

<span class="hljs-keyword">const</span> sm = <span class="hljs-keyword">new</span> StateMachine(<span class="hljs-built_in">this</span>, <span class="hljs-string">`SelfDestructMachine`</span>, {
  definition: listExecutions,
});
</code></pre>
<h3 id="heading-running-the-step-function-with-every-deployment">Running the Step Function with Every Deployment</h3>
<p>This construct is only useful if it is consistently run with Stack Deployments. So, let's add a Custom Resource that executes the Step Function as part of the Deployment. We can do this with an <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.custom_resources.AwsCustomResource.html"><code>AwsCustomResource</code></a> construct:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> AwsCustomResource(<span class="hljs-built_in">this</span>, <span class="hljs-string">`SelfDestructCR`</span>, {
  onCreate: {
    action: <span class="hljs-string">"startExecution"</span>,
    parameters: {
      input: <span class="hljs-built_in">JSON</span>.stringify({
        Action: <span class="hljs-string">"Create"</span>,
        StackArn: Stack.of(<span class="hljs-built_in">this</span>).stackId,
        StackName: Stack.of(<span class="hljs-built_in">this</span>).stackName,
      }),
      stateMachineArn: sm.stateMachineArn,
    },
    physicalResourceId: PhysicalResourceId.of(<span class="hljs-string">"SelfDestructCR"</span>),
    service: <span class="hljs-string">"StepFunctions"</span>,
  },
  onDelete: {
    action: <span class="hljs-string">"startExecution"</span>,
    parameters: {
      input: <span class="hljs-built_in">JSON</span>.stringify({
        Action: <span class="hljs-string">"Delete"</span>,
        StackArn: Stack.of(<span class="hljs-built_in">this</span>).stackId,
        StackName: Stack.of(<span class="hljs-built_in">this</span>).stackName,
      }),
      stateMachineArn: sm.stateMachineArn,
    },
    physicalResourceId: PhysicalResourceId.of(<span class="hljs-string">"SelfDestructCR"</span>),
    service: <span class="hljs-string">"StepFunctions"</span>,
  },
  onUpdate: {
    action: <span class="hljs-string">"startExecution"</span>,
    parameters: {
      input: <span class="hljs-built_in">JSON</span>.stringify({
        Action: <span class="hljs-string">"Update"</span>,
        Version: <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>().getTime().toString(),
        StackArn: Stack.of(<span class="hljs-built_in">this</span>).stackId,
        StackName: Stack.of(<span class="hljs-built_in">this</span>).stackName,
      }),
      stateMachineArn: sm.stateMachineArn,
    },
    physicalResourceId: PhysicalResourceId.of(<span class="hljs-string">"SelfDestructCR"</span>),
    service: <span class="hljs-string">"StepFunctions"</span>,
  },
  policy: AwsCustomResourcePolicy.fromSdkCalls({
    resources: [sm.stateMachineArn],
  }),
});
</code></pre>
<p>When the Stack deploys, it makes a different SDK call based on the type of Stack operation (Create, Update, Delete). Custom Resources only re-execute when their input parameters change. <code>onCreate</code> and <code>onDelete</code> always run since the stack is being created or destroyed, but to make sure the <code>onUpdate</code> call happens on every deployment we have to touch an input parameter within it. That's why we set the <code>Version</code> to the current time.</p>
<h2 id="heading-tips-for-self-destruction">Tips for Self-Destruction</h2>
<p>💡 Did you notice that the code above didn't explicitly set any IAM permissions? CDK + Step Functions handle all of that for you. By defining the action, iamAction, and service as part of the CallAwsService and AwsCustomResource constructs, CDK automatically infers the IAM permissions and makes sure they're attached to the Resources that need them!</p>
<h3 id="heading-creating-a-developerstack">Creating a DeveloperStack</h3>
<p>For a better DevEx you could create a standardized Stack template that includes the self-destruct Construct by default. For example, you could publish <code>BlogCdkSelfDestructStack</code> as your common stack in an npm library:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> BlogCdkSelfDestructStack <span class="hljs-keyword">extends</span> cdk.Stack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props?: cdk.StackProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);

    <span class="hljs-keyword">new</span> SelfDestruct(<span class="hljs-built_in">this</span>, <span class="hljs-string">`SelfDestruct`</span>, {
      duration: Duration.minutes(<span class="hljs-number">3</span>),
    });
  }
}
</code></pre>
<p>When teams create new projects, instead of basing their stacks on <code>cdk.Stack</code>, they would base them on <code>BlogCdkSelfDestructStack</code>, which has self-destruction built in!</p>
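<p>In a project's bin file, that might look something like this (the library name here is just a placeholder):</p>
<pre><code class="lang-typescript">import { App } from "aws-cdk-lib";
// Hypothetical shared npm library that publishes the base stack
import { BlogCdkSelfDestructStack } from "@your-org/cdk-stacks";

const app = new App();
// Self-destruction comes along for free with the shared base stack
new BlogCdkSelfDestructStack(app, "my-feature-test-stack");
</code></pre>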
<h3 id="heading-automatically-detecting-temporary-stacks">Automatically Detecting Temporary Stacks</h3>
<p>Clearly, you don't want your production stacks to delete themselves. Another tip is to introduce a property into your base stack that indicates whether it should self-destruct. You could do this via stack naming conventions, or with explicit developer and CI/CD properties. For example:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> BlogCdkSelfDestructStackProps <span class="hljs-keyword">extends</span> cdk.StackProps {
  cicd?: <span class="hljs-built_in">boolean</span>;
  developer?: <span class="hljs-built_in">boolean</span>;
  production?: <span class="hljs-built_in">boolean</span>;
}
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> BlogCdkSelfDestructStack <span class="hljs-keyword">extends</span> cdk.Stack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: BlogCdkSelfDestructStackProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);
    <span class="hljs-keyword">const</span> { cicd, developer, production } = props;

    <span class="hljs-keyword">if</span> (developer &amp;&amp; production) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">"Don't use developer stacks in production"</span>);
    }

    <span class="hljs-keyword">if</span> (!production &amp;&amp; (developer || cicd)) {
      <span class="hljs-keyword">new</span> SelfDestruct(<span class="hljs-built_in">this</span>, <span class="hljs-string">`SelfDestruct`</span>, {
        duration: Duration.minutes(<span class="hljs-number">3</span>),
      });
    }
  }
}
</code></pre>
<p>And then in your bin file, you would pass in the appropriate properties (which could come from node config, environment variables, etc.). <a target="_blank" href="https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html">CDK Best Practices</a> recommend <code>Configure with properties and methods, not environment variables</code>, which is why you would resolve these values in your bin file rather than inside the stack.</p>
<p>Many CI/CD systems have pre-defined environment variables, and those can be used to automatically detect a CI/CD context and enable self-destruction. For example, you could create a namespaced Stack that gets deployed as part of an automated PR integration check. Then, whether the check succeeds or fails, the stack will automatically clean up after itself without CI/CD having to do it.</p>
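<p>As a sketch, a bin file could detect GitHub Actions via its predefined <code>GITHUB_ACTIONS</code> environment variable (other CI systems expose similar variables, e.g. <code>CODEBUILD_BUILD_ID</code> on CodeBuild):</p>
<pre><code class="lang-typescript">#!/usr/bin/env node
import { App } from "aws-cdk-lib";
import { BlogCdkSelfDestructStack } from "../lib/blog-cdk-self-destruct-stack";

const app = new App();
// GITHUB_ACTIONS is set to "true" on GitHub-hosted runners
const cicd = !!process.env.GITHUB_ACTIONS;

new BlogCdkSelfDestructStack(app, "pr-check-stack", {
  cicd,
  developer: !cicd,
});
</code></pre>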
<h3 id="heading-can-i-extend-the-wait-without-re-deploying">Can I Extend the Wait Without Re-Deploying?</h3>
<p>Absolutely! Simply re-execute the Step Function. The new execution stops the previous one and starts a fresh Wait, resetting the timer and giving you more time if you need it.</p>
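<p>For example, with the AWS SDK for JavaScript v3 (the ARN and stack name here are placeholders):</p>
<pre><code class="lang-typescript">import { SFNClient, StartExecutionCommand } from "@aws-sdk/client-sfn";

// Starting a fresh execution stops the previous one (the state machine
// stops its sibling executions on startup) and restarts the Wait from zero
const sfn = new SFNClient({});
await sfn.send(
  new StartExecutionCommand({
    stateMachineArn:
      "arn:aws:states:us-east-1:123456789012:stateMachine:SelfDestructMachine",
    input: JSON.stringify({
      Action: "Update",
      StackName: "my-feature-test-stack",
    }),
  })
);
</code></pre>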
<h2 id="heading-conclusion">Conclusion</h2>
<p>And just like that, you're a self-destructing CDK Stack pro! You can now confidently say "adios" to stacks that are taking up too much space and draining your resources.</p>
<p>With this newfound knowledge, you can save on infrastructure costs and keep your AWS account looking fresh and tidy. Plus, you'll have the satisfaction of knowing that you're incorporating a little excitement and danger into your development process.</p>
<p>Just remember, with great power comes great responsibility. Be sure to set a reasonable Wait period and test your code thoroughly before deploying. And don't worry, we won't tell anyone if you shed a tear or two as your stacks go boom.</p>
]]></content:encoded></item><item><title><![CDATA[Improving a Serverless App To Cross-Post Blogs]]></title><description><![CDATA[Allen Helton is an AWS Hero and absolute LEGEND. In December he wrote a post titled "I Built a Serverless App To Cross-Post My Blogs" and after some begging from some AWS Community Builders he published his code to our shiny new AWS Community Project...]]></description><link>https://martzmakes.com/improving-a-serverless-app-to-cross-post-blogs</link><guid isPermaLink="true">https://martzmakes.com/improving-a-serverless-app-to-cross-post-blogs</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[serverless]]></category><category><![CDATA[community]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Tue, 21 Feb 2023 15:36:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/CrDnEQE_9vY/upload/7c7bd8de2e5175f2ac3b2646a6e730fc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Allen Helton is an AWS Hero and absolute LEGEND. In December he wrote a post titled "<a target="_blank" href="https://www.readysetcloud.io/blog/allen.helton/how-i-built-a-serverless-automation-to-cross-post-my-blogs/"><strong>I Built a Serverless App To Cross-Post My Blogs</strong></a>" and after some begging from some <a target="_blank" href="https://aws.amazon.com/developer/community/community-builders/">AWS Community Builders</a> he published his code to our shiny new <a target="_blank" href="https://github.com/aws-community-projects/blog-crossposting-automation">AWS Community Projects</a> GitHub Org.</p>
<p>Allen is quite a prolific writer and publishes his articles in (at least) four places. He has a self-hosted static blog built with Hugo and hosted on Amplify, and he also publishes to <a target="_blank" href="http://dev.to">dev.to</a>, Hashnode, and Medium. His self-hosted blog on his personal domain is his primary platform, and the <a target="_blank" href="http://dev.to">dev.to</a>, Hashnode, and Medium copies all get canonical URLs pointing back to it for SEO purposes. 🌟</p>
<p>While Allen's code is great, it does have some <a target="_blank" href="https://github.com/aws-community-projects/blog-crossposting-automation#limitations">limitations</a>. For instance, it's written using SAM/yaml, requires a Hugo/Amplify built blog, effectively has no optional features, and he still manually uploads image assets to S3 for all of his articles. 😱</p>
<p>In this article, we'll go over my fork of Allen's code, where I have:</p>
<ul>
<li><p>Converted the project to use AWS CDK</p>
</li>
<li><p>Made Hugo/Amplify and most of the other platforms optional 🚀</p>
</li>
<li><p>Added a direct (private) GitHub webhook integration</p>
</li>
<li><p>Automatically parsed images committed to GitHub and uploaded them to a public S3 Bucket (and updated the content to use them)</p>
</li>
</ul>
<p>I'm excited to share these improvements with you and hope you find them useful!</p>
<p>Code: <a target="_blank" href="https://github.com/martzcodes/blog-crossposting-automation">https://github.com/martzcodes/blog-crossposting-automation</a></p>
<h2 id="heading-converting-the-project-to-cdk">Converting the Project to CDK</h2>
<p>Let's talk about converting projects from a format, like SAM, to CDK. It can be a bit tricky, but the easiest way is to focus on the architecture. Get the architecture skeleton right first, and everything else will fall into place. 💪</p>
<p>So, looking at Allen's project structure, we can see that he has one DynamoDB table, five lambda functions, and a step function. One lambda is triggered by an Amplify EventBridge Event. That lambda then triggers a step function where the other four lambdas are used. 🤓</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676928569736/9309ed78-c0d2-44cd-bb0d-4323ed1197f5.png" alt="Allen's architecture invokes a lambda from an Amplify status event which triggers a step function that stores publish status in DynamoDB as it posts to the target services. Image assets are manually stored in S3" class="image--center mx-auto" /></p>
<p>To improve this, we're going to make Amplify optional and add the ability to pull images used in GitHub and re-store them in S3, bringing the S3 bucket into our CloudFormation Stack. 🚀</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676928729665/4a7dae29-f57e-4ff6-a772-e960d6c00240.png" alt="Changes in Red.  Make Amplify Optional, Bring the bucket into the stack and have the Ingest Lambda re-store images from GitHub to our bucket." class="image--center mx-auto" /></p>
<p>We can start by creating a Construct for the DynamoDB Table <a target="_blank" href="https://github.com/martzcodes/blog-crossposting-automation/blob/main/lib/dyanmo.ts"><code>lib/dynamo.ts</code></a></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> DynamoDb <span class="hljs-keyword">extends</span> Construct {
  table: Table;
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span></span>) {
    <span class="hljs-built_in">super</span>(scope, id);
    <span class="hljs-built_in">this</span>.table = <span class="hljs-keyword">new</span> Table(<span class="hljs-built_in">this</span>, <span class="hljs-string">`ActivityPubTable`</span>, {
      partitionKey: { name: <span class="hljs-string">"pk"</span>, <span class="hljs-keyword">type</span>: AttributeType.STRING },
      sortKey: { name: <span class="hljs-string">"sk"</span>, <span class="hljs-keyword">type</span>: AttributeType.STRING },
      billingMode: BillingMode.PAY_PER_REQUEST,
      timeToLiveAttribute: <span class="hljs-string">"ttl"</span>,
      removalPolicy: RemovalPolicy.DESTROY,
    });

    <span class="hljs-built_in">this</span>.table.addGlobalSecondaryIndex({
      indexName: <span class="hljs-string">"GSI1"</span>,
      partitionKey: { name: <span class="hljs-string">"GSI1PK"</span>, <span class="hljs-keyword">type</span>: AttributeType.STRING },
      sortKey: { name: <span class="hljs-string">"GSI1SK"</span>, <span class="hljs-keyword">type</span>: AttributeType.STRING },
      projectionType: ProjectionType.ALL,
    });
  }
}
</code></pre>
<p>We create the secret via CDK (and then manually put the secrets into it):</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> secret = Secret.fromSecretNameV2(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">`CrosspostSecrets`</span>,
  secretName
);
</code></pre>
<p>Allen had a single lambda do the data transformations for the three blog services... I opted to split that up for better traceability. My architecture ends up having somewhere between 3 and 7 lambdas, depending on which options you turn on. The lambdas are only created if you pass in the corresponding properties. They're all created the same general way (<em>side note... I also updated Allen's code from JavaScript to TypeScript #scopecreep</em>):</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> lambdaProps: NodejsFunctionProps = {
  architecture: Architecture.ARM_64,
  memorySize: <span class="hljs-number">1024</span>,
  timeout: Duration.minutes(<span class="hljs-number">5</span>),
  runtime: Runtime.NODEJS_18_X,
  environment: {
    TABLE_NAME: table.tableName,
    SECRET_ID: secret.secretName,
  },
};

<span class="hljs-keyword">const</span> sendApiRequestFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">`SendApiRequestFn`</span>, {
  ...lambdaProps,
  entry: join(__dirname, <span class="hljs-string">`../functions/send-api-request.ts`</span>),
});
sendApiRequestFn.addEnvironment(<span class="hljs-string">"DRY_RUN"</span>, dryRun ? <span class="hljs-string">"1"</span> : <span class="hljs-string">"0"</span>);
secret.grantRead(sendApiRequestFn);
</code></pre>
<p>Finally, we need to create the Step Function. Step Functions are notoriously hard to code since they use Amazon States Language (ASL) to define all of the steps. I created a separate <a target="_blank" href="https://github.com/martzcodes/blog-crossposting-automation/blob/main/lib/step-function.ts">CrossPostStepFunction</a> construct.</p>
<p>My step function adds the ability to pick which service creates the Canonical URL and it will first post to that service... get the canonical URL and use that in the subsequent services. There's also a lot of logic to remove things from the State Machine if properties weren't configured... which makes this very flexible.</p>
<p>We were able to abstract out the process for posting to a service to a <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_stepfunctions.StateMachineFragment.html">State Machine Fragment</a>. This fragment is a CDK construct that allows us to re-use a lot of the underlying logic used in the parallel paths for posting to the services. When I configured my stack to not send status emails, not use Hugo and have Hashnode be the primary blog post we get a Step Function that looks like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676930084900/a4614f2c-d760-4468-8716-958acd95dbd9.png" alt class="image--center mx-auto" /></p>
<p>A lot of this is 1:1 with what Allen had in his <a target="_blank" href="https://github.com/aws-community-projects/blog-crossposting-automation/blob/main/workflows/cross-post.asl.json">ASL json file</a>. One interesting fact is that Allen's JSON is 953 lines while my two files of TypeScript code that make up the Step Function end up being 596 lines (431 + 165)... so <em>almost</em> half, while adding a few additional features.</p>
<h2 id="heading-adding-a-direct-github-webhook-integration">Adding a Direct GitHub Webhook Integration</h2>
<p>For our next trick, we will use GitHub Webhook events to trigger our cross-posting, instead of Amplify Events. We can do this by adding a Function URL to the Identify Content Lambda:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> fnUrl = identifyNewContentFn.addFunctionUrl({
  authType: FunctionUrlAuthType.NONE,
  cors: {
    allowedOrigins: [<span class="hljs-string">"*"</span>],
  },
});
<span class="hljs-keyword">new</span> CfnOutput(<span class="hljs-built_in">this</span>, <span class="hljs-string">`GithubWebhook`</span>, { value: fnUrl.url });
</code></pre>
<p>We can then enter this URL into our GitHub repo's webhook settings so that GitHub calls it for Push events. This lets us skip some of the identify lambda's code... since the push event fires for every commit and already includes the list of files that were added.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> (event: <span class="hljs-built_in">any</span>) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">await</span> initializeOctokit();

    <span class="hljs-keyword">let</span> newContent: { fileName: <span class="hljs-built_in">string</span>; commit: <span class="hljs-built_in">string</span> }[] = [];
    <span class="hljs-keyword">if</span> (event.body) {
      <span class="hljs-keyword">const</span> body = <span class="hljs-built_in">JSON</span>.parse(event.body);
      <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">JSON</span>.stringify({ body }, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>));
      <span class="hljs-keyword">if</span> (body.commits) {
        newContent = body.commits.reduce(
          <span class="hljs-function">(<span class="hljs-params">
            p: { fileName: <span class="hljs-built_in">string</span>; commit: <span class="hljs-built_in">string</span> }[],
            commit: {
              id: <span class="hljs-built_in">string</span>;
              added: <span class="hljs-built_in">string</span>[];
              modified: <span class="hljs-built_in">string</span>[];
              // ... there is more stuff here, but <span class="hljs-built_in">this</span> is all we need
            }
          </span>) =&gt;</span> {
            <span class="hljs-keyword">const</span> addedFiles = commit.added.filter(
              <span class="hljs-function">(<span class="hljs-params">addedFile: <span class="hljs-built_in">string</span></span>) =&gt;</span>
                (!blogPathDefined ||
                  addedFile.startsWith(<span class="hljs-string">`<span class="hljs-subst">${process.env.BLOG_PATH}</span>/`</span>)) &amp;&amp;
                addedFile.endsWith(<span class="hljs-string">".md"</span>)
            );
            <span class="hljs-keyword">return</span> [
              ...p,
              ...addedFiles.map(<span class="hljs-function">(<span class="hljs-params">addedFile</span>) =&gt;</span> ({
                fileName: addedFile,
                commit: commit.id,
              })),
            ];
          },
          [] <span class="hljs-keyword">as</span> { fileName: <span class="hljs-built_in">string</span>; commit: <span class="hljs-built_in">string</span> }[]
        );
      } <span class="hljs-keyword">else</span> {
        <span class="hljs-keyword">const</span> recentCommits = <span class="hljs-keyword">await</span> getRecentCommits();
        <span class="hljs-keyword">if</span> (recentCommits.length) {
          newContent = <span class="hljs-keyword">await</span> getNewContent(recentCommits);
        }
      }
    }
    <span class="hljs-keyword">if</span> (newContent.length) {
      <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> getContentData(newContent);
      <span class="hljs-keyword">const</span> imagesProcessed = <span class="hljs-keyword">await</span> saveImagesToS3(data);
      <span class="hljs-keyword">await</span> processNewContent(imagesProcessed);
    }
  } <span class="hljs-keyword">catch</span> (err) {
    <span class="hljs-built_in">console</span>.error(err);
  }
};
</code></pre>
<p>The webhook's event body includes a list of the commits. The Amplify event doesn't have this same list, so we can save some GitHub API calls here. I <em>think</em> this would also be compatible with Allen's code (just in case he wants to switch to that 😈).</p>
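<p>For reference, here's a sketch of the subset of GitHub's push payload that the handler above actually reads (the real payload has many more fields):</p>
<pre><code class="lang-typescript">// Minimal shape of the GitHub push webhook body used by the handler
interface PushEventBody {
  commits: {
    id: string; // commit SHA, kept so file contents can be fetched later
    added: string[]; // paths added by the commit (filtered for .md files)
    modified: string[]; // present in the payload, unused above
  }[];
}
</code></pre>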
<h2 id="heading-parse-and-store-images-in-s3">Parse and Store Images in S3</h2>
<p>But what do we do about images? My overall idea here (for my personal use) was to use a private GitHub repo to store these posts (to avoid SEO shenanigans) and just use relative image linking within the repo for the draft images... that way I could use VS Code's Markdown Preview or <a target="_blank" href="https://obsidian.md/">Obsidian.md</a> to draft my posts. I asked Allen what he does and was surprised to hear that he hasn't automated this part yet... and as part of his writing he manually uploads images to S3.</p>
<p>So, I got a little creative with some Regular Expressions and parsed out any embedded markdown images... which are formatted as an exclamation point in front of a markdown link (ironically, I can't post an example here because my RegExp would incorrectly ingest it 😅)</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> contentData: {
  fileName: <span class="hljs-built_in">string</span>;
  commit: <span class="hljs-built_in">string</span>;
  content: <span class="hljs-built_in">string</span>;
  sendStatusEmail: <span class="hljs-built_in">boolean</span>;
}[] = [];
<span class="hljs-keyword">const</span> imgRegex = <span class="hljs-regexp">/!\[(.*?)\]\((.*?)\)/g</span>;
<span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> j = <span class="hljs-number">0</span>; j &lt; newContent.length; j++) {
  <span class="hljs-keyword">const</span> workingContent = { ...newContent[j] };
  <span class="hljs-keyword">const</span> imageSet = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Set</span>&lt;<span class="hljs-built_in">string</span>&gt;([]);
  <span class="hljs-keyword">let</span> match;
  <span class="hljs-keyword">while</span> ((match = imgRegex.exec(newContent[j].content)) !== <span class="hljs-literal">null</span>) {
    imageSet.add(match[<span class="hljs-number">2</span>]);
  }
  <span class="hljs-keyword">const</span> images = [...imageSet];
  <span class="hljs-keyword">if</span> (images.length === <span class="hljs-number">0</span>) {
    <span class="hljs-comment">// no images in the post... passthrough</span>
    contentData.push(newContent[j]);
    <span class="hljs-keyword">continue</span>;
  }
  <span class="hljs-keyword">const</span> blogFile = newContent[j].fileName;
  <span class="hljs-keyword">const</span> blogSplit = <span class="hljs-string">`<span class="hljs-subst">${blogFile}</span>`</span>.split(<span class="hljs-string">"/"</span>);
  blogSplit.pop();
  <span class="hljs-keyword">const</span> blogBase = blogSplit.join(<span class="hljs-string">"/"</span>);
  <span class="hljs-keyword">const</span> s3Mapping: Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">string</span>&gt; = {};
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> k = <span class="hljs-number">0</span>; k &lt; images.length; k++) {
    <span class="hljs-keyword">const</span> image = images[k];
    <span class="hljs-keyword">const</span> githubPath = join(blogBase, image);
    <span class="hljs-keyword">const</span> imageSplit = image.split(<span class="hljs-string">"."</span>);
    <span class="hljs-keyword">const</span> imageExtension = imageSplit[imageSplit.length - <span class="hljs-number">1</span>];
    <span class="hljs-keyword">const</span> s3Path = <span class="hljs-string">`<span class="hljs-subst">${blogFile}</span>/<span class="hljs-subst">${k}</span>.<span class="hljs-subst">${imageExtension}</span>`</span>.replace(<span class="hljs-regexp">/\ /g</span>, <span class="hljs-string">"-"</span>);
    <span class="hljs-keyword">const</span> s3Url = <span class="hljs-string">`https://s3.amazonaws.com/<span class="hljs-subst">${process.env.MEDIA_BUCKET}</span>/<span class="hljs-subst">${s3Path}</span>`</span>;
    <span class="hljs-keyword">const</span> postContent = <span class="hljs-keyword">await</span> octokit.request(
      <span class="hljs-string">"GET /repos/{owner}/{repo}/contents/{path}"</span>,
      {
        owner: <span class="hljs-string">`<span class="hljs-subst">${process.env.OWNER}</span>`</span>,
        repo: <span class="hljs-string">`<span class="hljs-subst">${process.env.REPO}</span>`</span>,
        path: githubPath,
      }
    );

    <span class="hljs-keyword">const</span> buffer = Buffer.from((postContent.data <span class="hljs-keyword">as</span> <span class="hljs-built_in">any</span>).content, <span class="hljs-string">"base64"</span>);

    <span class="hljs-comment">// upload images to s3</span>
    <span class="hljs-keyword">const</span> putImage = <span class="hljs-keyword">new</span> PutObjectCommand({
      Bucket: <span class="hljs-string">`<span class="hljs-subst">${process.env.MEDIA_BUCKET}</span>`</span>,
      Key: s3Path,
      Body: buffer,
    });
    <span class="hljs-keyword">await</span> s3.send(putImage);

    s3Mapping[image] = s3Url;
  }
  <span class="hljs-keyword">const</span> rewriteLink = <span class="hljs-function">(<span class="hljs-params">match: <span class="hljs-built_in">string</span>, text: <span class="hljs-built_in">string</span>, url: <span class="hljs-built_in">string</span></span>) =&gt;</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-string">`![<span class="hljs-subst">${text}</span>](<span class="hljs-subst">${s3Mapping[url]}</span>)`</span>;
  }
  workingContent.content = workingContent.content.replace(imgRegex, rewriteLink);
  contentData.push(workingContent);
}
<span class="hljs-keyword">return</span> contentData;
</code></pre>
<p>This code parses out the image links, fetches the images from GitHub, uploads them to S3, and replaces the links with public S3 URLs in the blog post before proceeding. One caveat: the images have to live in the GitHub repo... if they don't, things will break. That would be an easy thing for someone to fix/make more flexible 😉</p>
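<p>For instance, a minimal (hypothetical) guard could leave absolute URLs untouched and only fetch/upload relative, in-repo paths:</p>
<pre><code class="lang-typescript">// Hypothetical fix: only process relative paths; absolute URLs are
// already hosted somewhere and can pass through unchanged.
const isExternal = (url: string) =&gt; /^https?:\/\//i.test(url);
const repoImages = images.filter((image) =&gt; !isExternal(image));
</code></pre>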
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>That was quite the journey! Although this project took me longer than expected, it was still a lot of fun to work on. I had a tough time stopping myself from adding more and more features. 😂</p>
<p>After spending some time with it, I'm not entirely convinced that the return on investment is worth it <em>for my specific needs</em>. I only post to two platforms, Hashnode and <a target="_blank" href="http://dev.to">dev.to</a>; it's simple enough for me to copy and paste from one to the other and add the canonical URL to the <a target="_blank" href="http://dev.to">dev.to</a> metadata. In fact, the two platforms even have an integration that might let me skip the copy/paste step entirely. 🤔</p>
<p>But even though I may not use this stack myself, I do hope that it showcases the power and flexibility of creating with CDK. In comparing SAM to CDK... the CDK code clocked in at 907 lines of <strong><em>code</em></strong> (<em>including the Step Function + additional features)</em> while the SAM YAML + ASL JSON came in at 1259 lines of <strong><em>configuration</em></strong>.</p>
<p>This project would have been much quicker to build if I already had a Hugo/Amplify setup or if I hadn't converted everything to TypeScript or added all the other features. 😅</p>
<p>What do you think? Have you ever worked on a project that ended up taking longer than you expected? Did you find it hard to limit yourself from adding more and more features? What do you think about the differences between SAM and CDK here? Let's chat about it! 💬</p>
]]></content:encoded></item><item><title><![CDATA[Core Web Vitals, CDK Constructs and YOU!]]></title><description><![CDATA[As a web developer, you know the importance of delivering a fast and smooth user experience. But with the constantly evolving web landscape, it can be challenging to keep up with the latest best practices. In this blog post, I'll take you through the...]]></description><link>https://martzmakes.com/core-web-vitals-cdk-constructs-and-you</link><guid isPermaLink="true">https://martzmakes.com/core-web-vitals-cdk-constructs-and-you</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[UX]]></category><category><![CDATA[Core Web Vitals]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Wed, 01 Feb 2023 22:58:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/QsBfOwMoPNY/upload/ea15bea36c30d4f066fb40f77c18162b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a web developer, you know the importance of delivering a fast and smooth user experience. But with the constantly evolving web landscape, it can be challenging to keep up with the latest best practices. In this blog post, I'll take you through the ins and outs of integrating Core Web Vitals into your development projects. With its focus on real-world user experience, Core Web Vitals is quickly becoming an essential part of any web development process. So buckle up, grab a coffee and let's dive in together to see how you can elevate your website's performance and provide a top-notch user experience for your visitors!</p>
<h2 id="heading-what-are-core-web-vitals">What are Core Web Vitals?</h2>
<p>Core Web Vitals are a set of metrics defined by Google to measure the user experience on the web. They focus on the key aspects of website performance that directly impact user experience, such as loading speed, interactivity, and visual stability. The basics of Core Web Vitals are:</p>
<ol>
<li><p>Largest Contentful Paint (LCP): measures loading performance and is calculated as the time it takes for the largest content element on the page (e.g. an image or text block) to load and become visible to the user.</p>
</li>
<li><p>First Input Delay (FID): measures interactivity and is calculated as the time from a user's first interaction with the page (e.g. clicking a button) to when the browser is actually able to begin processing that interaction.</p>
</li>
<li><p>Cumulative Layout Shift (CLS): measures visual stability and is calculated as the cumulative score of unexpected layout shifts that occur over the life of the page.</p>
</li>
</ol>
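<p>If you want to see these numbers for yourself before wiring up any infrastructure, here's a minimal sketch that logs them in the browser using the open-source <code>web-vitals</code> package (an assumption on my part... the CloudWatch RUM setup below collects them for you):</p>
<pre><code class="lang-typescript">// Log each Core Web Vital in the console as it becomes available
import { onCLS, onFID, onLCP } from "web-vitals";

const report = (metric: { name: string; value: number }) =&gt;
  console.log(`${metric.name}: ${metric.value}`);

onCLS(report);
onFID(report);
onLCP(report);
</code></pre>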
<p>These metrics are considered important because they directly impact user experience and are tied to search ranking factors. Websites that score well on Core Web Vitals are likely to have better search engine rankings and provide a better user experience. In other words, they lead to:</p>
<ol>
<li><p>Happy Users: By optimizing Core Web Vitals, your website will load faster, be more interactive, and have less visual instability. This leads to a better overall user experience, which means visitors will stay on your site longer and be more likely to return in the future.</p>
</li>
<li><p>Better SEO: Google uses Core Web Vitals as part of its ranking algorithm, so websites that score well on these metrics are likely to have better search engine rankings. That means more visibility, more traffic, and more potential customers!</p>
</li>
<li><p>Increased Conversions: A great user experience leads to higher engagement and increased conversions. By optimizing Core Web Vitals, you'll give your visitors a smooth and seamless experience that will keep them coming back for more.</p>
</li>
<li><p>Industry Standard: Core Web Vitals are becoming the industry standard for measuring website performance and user experience. By optimizing these metrics, you'll ensure that your website is up-to-date and providing the best possible experience for your visitors.</p>
</li>
</ol>
<h2 id="heading-real-user-monitoring-with-cloudwatch-rum-and-cdk">Real User Monitoring with CloudWatch RUM and CDK</h2>
<p>Let's get started with Core Web Vitals by writing some reusable components: a CDK Construct and a TypeScript snippet so that we can add CloudWatch RUM (Real User Monitoring) to our application.</p>
<p>The code for this section is located at <a target="_blank" href="http://github.com/martzcodes/blog-cdk-rum">http://github.com/martzcodes/blog-cdk-rum</a></p>
<p>We're going to start with a baseline project that includes a simple website deployed to S3 and hosted by a CloudFront distribution. This is similar to the site we created in my article about how to <a target="_blank" href="https://matt.martz.codes/protect-a-static-site-with-auth0-using-lambdaedge-and-cloudfront">Protect a Static Site with Auth0 Using Lambda@Edge and CloudFront</a></p>
<p>We're going to focus on the 3 most important files:</p>
<ol>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-construct.ts"><code>/lib/rum-runner-construct.ts</code> - CDK Construct</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-fn.ts"><code>/lib/rum-runner-fn.ts</code> - Custom Resource Function</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/martzcodes/blog-cdk-rum/blob/main/ui/rum.ts"><code>/ui/rum.ts</code> - Typescript Snippet to add to our front-end code</a></p>
</li>
</ol>
<h3 id="heading-cdk-construct">CDK Construct</h3>
<p>Our <code>RumRunnerConstruct</code>, <a target="_blank" href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-construct.ts"><code>/lib/rum-runner-construct.ts</code></a>, will take in two properties to its interface:</p>
<ul>
<li><p><code>bucket</code> - the UI's deployment bucket</p>
</li>
<li><p><code>cloudFront</code> - the UI's CloudFront distribution</p>
</li>
</ul>
<p>CloudWatch RUM requires a Cognito Identity Pool to allow unauthenticated access for the CloudWatch RUM web client to publish events.</p>
<p>We create the Identity Pool:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> cwRumIdentityPool = <span class="hljs-keyword">new</span> CfnIdentityPool(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">"cw-rum-identity-pool"</span>,
  { allowUnauthenticatedIdentities: <span class="hljs-literal">true</span> }
);
</code></pre>
<p>A role for unauthenticated users to use:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> cwRumUnauthenticatedRole = <span class="hljs-keyword">new</span> Role(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">"cw-rum-unauthenticated-role"</span>,
  {
    assumedBy: <span class="hljs-keyword">new</span> FederatedPrincipal(
      <span class="hljs-string">"cognito-identity.amazonaws.com"</span>,
      {
        StringEquals: {
          <span class="hljs-string">"cognito-identity.amazonaws.com:aud"</span>: cwRumIdentityPool.ref,
        },
        <span class="hljs-string">"ForAnyValue:StringLike"</span>: {
          <span class="hljs-string">"cognito-identity.amazonaws.com:amr"</span>: <span class="hljs-string">"unauthenticated"</span>,
        },
      },
      <span class="hljs-string">"sts:AssumeRoleWithWebIdentity"</span>
    ),
  }
);
</code></pre>
<p>And we make sure that role has access to put events into CloudWatch RUM:</p>
<pre><code class="lang-typescript">cwRumUnauthenticatedRole.addToPolicy(
  <span class="hljs-keyword">new</span> PolicyStatement({
    effect: Effect.ALLOW,
    actions: [<span class="hljs-string">"rum:PutRumEvents"</span>],
    resources: [
      <span class="hljs-string">`arn:aws:rum:<span class="hljs-subst">${Stack.<span class="hljs-keyword">of</span>(<span class="hljs-built_in">this</span>).region}</span>:<span class="hljs-subst">${
        Stack.<span class="hljs-keyword">of</span>(<span class="hljs-built_in">this</span>).account
      }</span>:appmonitor/<span class="hljs-subst">${cloudFront.distributionDomainName}</span>`</span>,
    ],
  })
);
</code></pre>
<p>Then we attach the role to the unauthenticated users:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> CfnIdentityPoolRoleAttachment(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">"cw-rum-identity-pool-role-attachment"</span>,
  {
    identityPoolId: cwRumIdentityPool.ref,
    roles: {
      unauthenticated: cwRumUnauthenticatedRole.roleArn,
    },
  }
);
</code></pre>
<p>Next, we need to create the app monitor, which we do by using the Level 1 CDK Construct <code>CfnAppMonitor</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> cwRumAppMonitor = <span class="hljs-keyword">new</span> CfnAppMonitor(<span class="hljs-built_in">this</span>, <span class="hljs-string">"cw-rum-app-monitor"</span>, {
  domain: cloudFront.distributionDomainName,
  name: cloudFront.distributionDomainName,
  appMonitorConfiguration: {
    allowCookies: <span class="hljs-literal">true</span>,
    enableXRay: <span class="hljs-literal">false</span>,
    sessionSampleRate: <span class="hljs-number">1</span>,
    telemetries: [<span class="hljs-string">"errors"</span>, <span class="hljs-string">"performance"</span>, <span class="hljs-string">"http"</span>],
    identityPoolId: cwRumIdentityPool.ref,
    guestRoleArn: cwRumUnauthenticatedRole.roleArn,
  },
  cwLogEnabled: <span class="hljs-literal">true</span>,
});
</code></pre>
<p>Now, to automatically wire the newly created app monitor into our app... we need to take a few extra steps in our Construct. We'll run a Custom Resource to fetch some metadata and store it in the UI's bucket for the UI to load. The CloudWatch RUM web client needs the App Monitor Id, the Role ARN and the Identity Pool Id.</p>
<p>We create the NodeJS Function:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> rumRunnerFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">"rum-runner"</span>, {
  runtime: Runtime.NODEJS_18_X,
  environment: {
    BUCKET_NAME: bucket.bucketName,
    RUM_APP: cloudFront.distributionDomainName,
    GUEST_ROLE_ARN: cwRumUnauthenticatedRole.roleArn,
    IDENTITY_POOL_ID: cwRumIdentityPool.ref,
  },
  timeout: Duration.seconds(<span class="hljs-number">30</span>),
  entry: join(__dirname, <span class="hljs-string">"./rum-runner-fn.ts"</span>),
});
</code></pre>
<p>And give it the right permissions:</p>
<pre><code class="lang-typescript">bucket.grantWrite(rumRunnerFn);
rumRunnerFn.addToRolePolicy(
  <span class="hljs-keyword">new</span> PolicyStatement({
    effect: Effect.ALLOW,
    resources: [
      <span class="hljs-string">`arn:aws:rum:<span class="hljs-subst">${Stack.<span class="hljs-keyword">of</span>(<span class="hljs-built_in">this</span>).region}</span>:<span class="hljs-subst">${
        Stack.<span class="hljs-keyword">of</span>(<span class="hljs-built_in">this</span>).account
      }</span>:appmonitor/<span class="hljs-subst">${cloudFront.distributionDomainName}</span>`</span>,
    ],
    actions: [<span class="hljs-string">"rum:GetAppMonitor"</span>],
  })
);
</code></pre>
<p>Then we create the Custom Resource, with a dependency on the App Monitor so that it runs AFTER the App Monitor is created:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> rumRunnerProvider = <span class="hljs-keyword">new</span> Provider(<span class="hljs-built_in">this</span>, <span class="hljs-string">"rum-runner-provider"</span>, {
  onEventHandler: rumRunnerFn,
});

<span class="hljs-keyword">const</span> customResource = <span class="hljs-keyword">new</span> CustomResource(<span class="hljs-built_in">this</span>, <span class="hljs-string">"rum-runner-resource"</span>, {
  serviceToken: rumRunnerProvider.serviceToken,
  properties: {
    <span class="hljs-comment">// Bump to force an update</span>
    Version: <span class="hljs-string">"2"</span>,
  },
});

customResource.node.addDependency(cwRumAppMonitor);
</code></pre>
<h3 id="heading-custom-resource-function">Custom Resource Function</h3>
<p><a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html">CloudFormation Custom Resources</a> are a way to write some provisioning logic that is executed as part of a CDK (CloudFormation) deployment. The Custom Resource will invoke our lambda (<a target="_blank" href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-fn.ts"><code>/lib/rum-runner-fn.ts</code></a> ) which will use the AWS SDK to fetch the App Monitor Id after it's created and store it in S3, along with the Role ARN and Identity Pool Id.</p>
<p>We import and create two clients:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { S3Client, PutObjectCommand } <span class="hljs-keyword">from</span> <span class="hljs-string">"@aws-sdk/client-s3"</span>;
<span class="hljs-keyword">import</span> { RUMClient, GetAppMonitorCommand } <span class="hljs-keyword">from</span> <span class="hljs-string">"@aws-sdk/client-rum"</span>;

<span class="hljs-keyword">const</span> s3 = <span class="hljs-keyword">new</span> S3Client({});
<span class="hljs-keyword">const</span> rum = <span class="hljs-keyword">new</span> RUMClient({});
</code></pre>
<p>Fetch the App Monitor config using the RUMClient:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-keyword">const</span> app = <span class="hljs-keyword">await</span> rum.send(
    <span class="hljs-keyword">new</span> GetAppMonitorCommand({ Name: <span class="hljs-string">`<span class="hljs-subst">${process.env.RUM_APP}</span>`</span> })
  );
};
</code></pre>
<p>And then upload them to the UI bucket:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-comment">// ...</span>
  <span class="hljs-keyword">const</span> command = <span class="hljs-keyword">new</span> PutObjectCommand({
    Key: <span class="hljs-string">"rum.json"</span>,
    Bucket: process.env.BUCKET_NAME,
    Body: <span class="hljs-built_in">JSON</span>.stringify({
      APPLICATION_ID: <span class="hljs-string">`<span class="hljs-subst">${app?.AppMonitor?.Id}</span>`</span>,
      guestRoleArn: <span class="hljs-string">`<span class="hljs-subst">${process.env.GUEST_ROLE_ARN}</span>`</span>,
      identityPoolId: <span class="hljs-string">`<span class="hljs-subst">${process.env.IDENTITY_POOL_ID}</span>`</span>,
    }),
  });
  <span class="hljs-keyword">await</span> s3.send(command);
};
</code></pre>
<h3 id="heading-typescript-snippet">Typescript Snippet</h3>
<p>The final piece of the puzzle is injecting this into our front end. The front-end will need to include our TypeScript snippet: <a target="_blank" href="https://github.com/martzcodes/blog-cdk-rum/blob/main/ui/rum.ts"><code>/ui/rum.ts</code></a></p>
<p>This snippet extends the official boilerplate snippet that you would download from the CloudWatch RUM Console by adding some code to fetch the metadata we stored with the Custom Resource above.</p>
<p>We <em>fetch</em> the configuration by calling:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">'/rum.json'</span>);
<span class="hljs-keyword">const</span> rum = <span class="hljs-keyword">await</span> res.json();
</code></pre>
<p>And then we use the fetched metadata to configure the web client:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> config: AwsRumConfig = {
  sessionSampleRate: <span class="hljs-number">1</span>,
  guestRoleArn: <span class="hljs-string">`<span class="hljs-subst">${rum?.guestRoleArn}</span>`</span>,
  identityPoolId: <span class="hljs-string">`<span class="hljs-subst">${rum?.identityPoolId}</span>`</span>,
  endpoint: <span class="hljs-string">"https://dataplane.rum.us-east-1.amazonaws.com"</span>,
  telemetries: [<span class="hljs-string">"errors"</span>,<span class="hljs-string">"performance"</span>,<span class="hljs-string">"http"</span>],
  allowCookies: <span class="hljs-literal">true</span>,
  enableXRay: <span class="hljs-literal">false</span>
};

<span class="hljs-keyword">const</span> APPLICATION_ID: <span class="hljs-built_in">string</span> = <span class="hljs-string">`<span class="hljs-subst">${rum?.APPLICATION_ID}</span>`</span>;
<span class="hljs-keyword">const</span> APPLICATION_VERSION: <span class="hljs-built_in">string</span> = <span class="hljs-string">"1.0.0"</span>;
<span class="hljs-keyword">const</span> APPLICATION_REGION: <span class="hljs-built_in">string</span> = <span class="hljs-string">"us-east-1"</span>;

<span class="hljs-keyword">if</span> (APPLICATION_ID) {
  <span class="hljs-keyword">const</span> awsRum: AwsRum = <span class="hljs-keyword">new</span> AwsRum(
    APPLICATION_ID,
    APPLICATION_VERSION,
    APPLICATION_REGION,
    config
  );
  <span class="hljs-built_in">console</span>.log(awsRum);
}
</code></pre>
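<p>In the repo, the snippet above lives inside a single exported function... roughly this shape (a sketch that matches the import below):</p>
<pre><code class="lang-typescript">// ui/rum.ts exports one entry point the front-end calls on startup
export const rumRunner = async () =&gt; {
  const res = await fetch("/rum.json");
  const rum = await res.json();
  // ... build the AwsRumConfig and instantiate AwsRum as shown above
};
</code></pre>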
<p>Then we only need to make sure we import and run the <code>rumRunner</code> function at the start of our front-end code (<code>ui/main.ts</code>):</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { rumRunner } <span class="hljs-keyword">from</span> <span class="hljs-string">"./rum"</span>;

rumRunner();
</code></pre>
<h2 id="heading-deploying-and-integrating-with-other-web-applications">Deploying and Integrating with Other Web Applications</h2>
<p>By deploying this code we can start to gain insights from our applications including Core Web Vitals and other performance metrics.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675292026155/d20757a3-b374-4e47-8c70-ec2230aa69b6.png" alt class="image--center mx-auto" /></p>
<p>By packaging this as a CDK Construct and TypeScript snippet... we could easily add this to other CDK projects by adding in FOUR lines of code 🤯</p>
<p>Two lines in your CDK App:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { RumRunnerConstruct } <span class="hljs-keyword">from</span> <span class="hljs-string">'./rum-runner-construct'</span>;
<span class="hljs-keyword">new</span> RumRunnerConstruct(<span class="hljs-built_in">this</span>, <span class="hljs-string">`Rum`</span>, { bucket, cloudFront, });
</code></pre>
<p>And two lines in your front-end app:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { rumRunner } <span class="hljs-keyword">from</span> <span class="hljs-string">"./rum"</span>;
rumRunner();
</code></pre>
<p>(and I'm being generous with the line counts)</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Measuring the quality of user experience is key to delivering a seamless and enjoyable experience for your website visitors. Core Web Vitals play a crucial role in this measurement, providing objective metrics to assess the performance of your website. To make the most of these metrics, it's important to consider Core Web Vitals early on in the development process. One way to do this is by using reusable CDK constructs, which can help you identify areas for improvement and optimize the user experience of your website.</p>
]]></content:encoded></item><item><title><![CDATA[Automate Documenting EventBridge Schemas in EventCatalog]]></title><description><![CDATA[In this series we're going to SUPERCHARGE developer experience by implementing Event Driven Documentation.  In part 1 we used CDK to deploy EventCatalog to a custom domain using CloudFront and S3.  In part 2 we used AWS Service Events from CloudForma...]]></description><link>https://martzmakes.com/automate-documenting-eventbridge-schemas-in-eventcatalog</link><guid isPermaLink="true">https://martzmakes.com/automate-documenting-eventbridge-schemas-in-eventcatalog</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[documentation]]></category><category><![CDATA[event-driven-architecture]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Thu, 27 Oct 2022 12:46:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/MkaA7QrPLjU/upload/v1666789613180/rJ5HFmWDD.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this series we're going to SUPERCHARGE developer experience by implementing <em>Event Driven Documentation</em>.  In <a target="_blank" href="https://matt.martz.codes/using-aws-cdk-to-deploy-eventcatalog">part 1</a> we used CDK to deploy <a target="_blank" href="https://eventcatalog.dev">EventCatalog</a> to a custom domain using CloudFront and S3.  In <a target="_blank" href="https://matt.martz.codes/automate-documenting-api-gateways-in-eventcatalog">part 2</a> we used <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html">AWS Service Events</a> from CloudFormation to detect when an API Gateway has deployed and export the <a target="_blank" href="https://www.openapis.org">OpenAPI</a> spec from AWS to bundle it in our EventCatalog.  In this post, we'll export the JSONSchema of EventBridge Events using schema discovery and bundle them into the EventCatalog.</p>
<p>🛑 <em>Not sure where to start with CDK? See my <a target="_blank" href="https://youtu.be/T-H4nJQyMig">CDK Crash Course on freeCodeCamp</a></em></p>
<p>The architecture we'll be deploying with CDK is:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666473368096/jTQ0lrEnP.png" alt="Dev Portal - Blog Arch.png" /></p>
<p>In this part we'll focus on the final bit of architecture for subscribing to EventBridge Schema Registry Events and bootstrapping them into the EventCatalog.  We'll also talk about strategies for integrating this into CI/CD to make it fully automated.</p>
<p>💻 The code for this series is published here: <a target="_blank" href="https://github.com/martzcodes/blog-event-driven-documentation">https://github.com/martzcodes/blog-event-driven-documentation</a></p>
<p>🤔 If you have any architecture or post questions/feedback... feel free to hit me up on Twitter <a target="_blank" href="https://twitter.com/martzcodes">@martzcodes</a>.</p>
<h1 id="heading-eventbridge-schema-discovery">EventBridge Schema Discovery</h1>
<p>Amazon EventBridge offers a <a target="_blank" href="https://aws.amazon.com/blogs/compute/introducing-amazon-eventbridge-schema-registry-and-discovery-in-preview/">Schema Registry and Discovery</a> feature.  This feature monitors event traffic and creates JSON Schemas based on the events it sees.  The awesome thing about this is that every time it creates a new schema or updates an existing one... it emits an AWS Event that we can trigger off of!  We'll use these events to export the discovered event's schema and bundle it into EventCatalog, similar to how we did with API Gateways in part 2.</p>
<p>⚠️ <em>If you have inconsistent Event Schemas (schemas with "optional" fields) a new version will be created every time the optional fields appear/disappear.  <strong>A best practice for Event Schemas would be to make sure the event interfaces stay consistent (no optional fields and try not to use objects with changing keys).</strong></em></p>
<h2 id="heading-enabling-schema-discovery">Enabling Schema Discovery</h2>
<p>First, we'll create a new construct for our Account Stack:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> EventsConstructProps {
  bus: IEventBus;
  specBucket: Bucket;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> EventsConstruct <span class="hljs-keyword">extends</span> Construct {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: EventsConstructProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id);
    <span class="hljs-keyword">const</span> { bus, specBucket } = props;

    <span class="hljs-keyword">new</span> CfnDiscoverer(<span class="hljs-built_in">this</span>, <span class="hljs-string">`Discoverer`</span>, {
      sourceArn: bus.eventBusArn,
      description: <span class="hljs-string">"Schema Discoverer"</span>,
      crossAccount: <span class="hljs-literal">false</span>,
    });
  }
}
</code></pre>
<p>This construct uses CDK's level 1 construct called <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eventschemas.CfnDiscoverer.html"><code>CfnDiscoverer</code></a>.  We provide it with our default bus and tell it not to track events that came from outside of the account we're currently in (that could get noisy).</p>
<p>🌈✨ <em>Level 1 Constructs are 1:1 mappings with the equivalent CloudFormation resources (e.g. <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eventschemas-discoverer.html">CloudFormation</a> vs <a target="_blank" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eventschemas.CfnDiscoverer.html">CDK L1</a>)</em></p>
<h2 id="heading-exporting-event-schemas">Exporting Event Schemas</h2>
<h3 id="heading-creating-the-infrastructure">Creating the Infrastructure</h3>
<p>With Schema Discovery enabled, we can create our lambda and invoke that lambda based on the AWS Service Events.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> eventsFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">`eventsFn`</span>, {
  runtime: Runtime.NODEJS_16_X,
  entry: join(__dirname, <span class="hljs-string">`./events-lambda.ts`</span>),
  logRetention: RetentionDays.ONE_DAY,
  initialPolicy: [
    <span class="hljs-keyword">new</span> PolicyStatement({
      effect: Effect.ALLOW,
      actions: [<span class="hljs-string">"schemas:*"</span>],
      resources: [<span class="hljs-string">"*"</span>],
    }),
  ],
});
specBucket.grantReadWrite(eventsFn);
eventsFn.addEnvironment(<span class="hljs-string">"SPEC_BUCKET"</span>, specBucket.bucketName);
bus.grantPutEventsTo(eventsFn);
</code></pre>
<p>We grant the lambda the right permissions (read/write on the bucket and putEvents on the default bus) and pass the bucket name in as an environment variable.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> Rule(<span class="hljs-built_in">this</span>, <span class="hljs-string">`eventsRule`</span>, {
  eventBus: props.bus,
  eventPattern: {
    source: [<span class="hljs-string">"aws.schemas"</span>],
    detailType: [<span class="hljs-string">"Schema Created"</span>, <span class="hljs-string">"Schema Version Created"</span>],
  },
  targets: [<span class="hljs-keyword">new</span> LambdaFunction(eventsFn)],
});
</code></pre>
<p>AWS Schema events offer two detail types: "Schema Created" and "Schema Version Created".  You can see the contents of these on the <a target="_blank" href="https://us-east-1.console.aws.amazon.com/events/home?region=us-east-1#/explore">Explore page in the EventBridge console</a>.  We invoke our lambda using these detail types.</p>
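<p>For reference, here's a sketch of the fields our handler reads off these service events (inferred from the handler code below... not the full event schema):</p>
<pre><code class="lang-typescript">// Minimal shape of an aws.schemas service event as consumed here
interface SchemaDiscoveryEvent {
  source: "aws.schemas";
  "detail-type": "Schema Created" | "Schema Version Created";
  detail: {
    RegistryName: string; // e.g. "discovered-schemas"
    SchemaName: string;
    Version: string;
    CreationDate: string;
  };
}
</code></pre>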
<h3 id="heading-processing-the-events">Processing the Events</h3>
<p>Unlike Part 2... processing these events is a lot easier, because everything we need arrives on the event itself and we only need to make one aws-sdk call.  The event includes the Schema Name and Version, which we use to export the JSONSchema via the AWS SDK:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> RegistryName = event.detail!.RegistryName;
<span class="hljs-keyword">const</span> SchemaName = event.detail!.SchemaName;
<span class="hljs-keyword">const</span> SchemaVersion = event.detail!.Version;
<span class="hljs-keyword">const</span> SchemaDate = event.detail!.CreationDate;

<span class="hljs-keyword">const</span> exportSchemaCommand = <span class="hljs-keyword">new</span> ExportSchemaCommand({
  RegistryName,
  SchemaName,
  Type: <span class="hljs-string">"JSONSchemaDraft4"</span>,
});
<span class="hljs-keyword">const</span> schemaResponse = <span class="hljs-keyword">await</span> schemasClient.send(exportSchemaCommand);
</code></pre>
<p>From there we put it in our spec bucket:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> schema = <span class="hljs-built_in">JSON</span>.parse(schemaResponse.Content);

<span class="hljs-keyword">const</span> fileLoc = {
  Bucket: process.env.SPEC_BUCKET,
  Key: <span class="hljs-string">`events/<span class="hljs-subst">${SchemaName}</span>/spec.json`</span>,
};

<span class="hljs-keyword">const</span> putObjectCommand = <span class="hljs-keyword">new</span> PutObjectCommand({
  ...fileLoc,
  Body: <span class="hljs-built_in">JSON</span>.stringify(schema),
});
<span class="hljs-keyword">await</span> s3.send(putObjectCommand);
</code></pre>
<p>And emit the event with our presigned URL:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> getObjectCommand = <span class="hljs-keyword">new</span> GetObjectCommand({
  ...fileLoc,
});
<span class="hljs-keyword">const</span> url = <span class="hljs-keyword">await</span> getSignedUrl(s3, getObjectCommand, { expiresIn: <span class="hljs-number">60</span> * <span class="hljs-number">60</span> });

<span class="hljs-keyword">const</span> eventDetail: EventSchemaEvent = {
  SchemaName,
  SchemaVersion,
  RegistryName,
  SchemaDate,
  url,
};

<span class="hljs-keyword">const</span> putEvent = <span class="hljs-keyword">new</span> PutEventsCommand({
  Entries: [
    {
      Source,
      DetailType: BlogDetailTypes.EVENT,
      Detail: <span class="hljs-built_in">JSON</span>.stringify(eventDetail),
    },
  ],
});
<span class="hljs-keyword">await</span> eb.send(putEvent);
</code></pre>
<h3 id="heading-updating-the-watcher-to-copy-the-schemas">Updating the Watcher to Copy the Schemas</h3>
<p>In Part 2 we added a utility method to our spec construct that creates a lambda with a rule.  We need to use that here to add a lambda for these Event schemas:</p>
<pre><code class="lang-typescript"><span class="hljs-built_in">this</span>.addRule({
  detailType: BlogDetailTypes.EVENT,
  lambdaName: <span class="hljs-string">`eventWatcher`</span>,
});
</code></pre>
<p>This lambda simply copies the spec files using a certain S3 Key naming convention:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">async</span> (
  event: EventBridgeEvent&lt;<span class="hljs-built_in">string</span>, EventSchemaEvent&gt;
) =&gt; {
  <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(event.detail.url);
  <span class="hljs-keyword">const</span> spec = (<span class="hljs-keyword">await</span> res.json()) <span class="hljs-keyword">as</span> Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">any</span>&gt;;

  <span class="hljs-keyword">const</span> fileLoc = {
    Bucket: process.env.SPEC_BUCKET,
    Key: <span class="hljs-string">`events/<span class="hljs-subst">${event.account}</span>/<span class="hljs-subst">${event.detail.SchemaName}</span>/<span class="hljs-subst">${event.detail.SchemaVersion}</span>.json`</span>,
  };

  <span class="hljs-keyword">const</span> putObjectCommand = <span class="hljs-keyword">new</span> PutObjectCommand({
    ...fileLoc,
    Body: <span class="hljs-built_in">JSON</span>.stringify(spec, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>),
  });
  <span class="hljs-keyword">await</span> s3.send(putObjectCommand);
};
</code></pre>
<h2 id="heading-bootstrapping-the-markdown-files-for-eventcatalog">Bootstrapping the Markdown files for EventCatalog</h2>
<p>Now that we have our event JSONSchemas stored in our Watcher's Spec Bucket, we can update our prepare scripts to pull the files and bootstrap them (similar to how we did the API Gateway files in Part 2).  One notable difference is that EventCatalog's Event interface offers <a target="_blank" href="https://www.eventcatalog.dev/docs/events/consumers-and-producers">"Consumers and Producers"</a> and <a target="_blank" href="https://www.eventcatalog.dev/docs/events/versioning">Event Versioning</a>.  We're going to create a pseudo-service that represents our Account's EventBus and specify that as the Producer for these events.  This is kind of a hack, but it's a useful one.  We're also going to create the files needed to version our events.</p>
<p>The folder structure for a domain will end up looking like this:</p>
<pre><code>acct-&lt;account&gt;/
┣ events/
┃ ┣ blog.dev.catalog@Spec.event/
┃ ┃ ┣ index.md
┃ ┃ ┗ schema.json
┃ ┣ blog.dev.catalog@Spec.openapi/
┃ ┃ ┣ versioned/
┃ ┃ ┃ ┣ 1/
┃ ┃ ┃ ┃ ┣ changelog.md
┃ ┃ ┃ ┃ ┣ index.md
┃ ┃ ┃ ┃ ┗ schema.json
┃ ┃ ┃ ┗ 2/
┃ ┃ ┃   ┣ changelog.md
┃ ┃ ┃   ┣ index.md
┃ ┃ ┃   ┗ schema.json
┃ ┃ ┣ index.md
┃ ┃ ┗ schema.json
┣ services/
┃ ┣&lt;account&gt;-bus/
┃ ┃ ┣ index.md
┃ ┃ ┗ openapi.json
┃ ┗ iam-backed-api/
┃   ┣ index.md
┃   ┗ openapi.json
┗ index.md
</code></pre><h3 id="heading-fetch-the-events">Fetch the Events</h3>
<p>To fetch the events we use aws-sdk's <code>ListObjectsCommand</code> to get the files prefixed with <code>events/</code>.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> listBucketObjectsCommand = <span class="hljs-keyword">new</span> ListObjectsCommand({
  Bucket,
  Prefix: <span class="hljs-string">"events/"</span>,
});
<span class="hljs-keyword">const</span> bucketObjects = <span class="hljs-keyword">await</span> s3Client.send(listBucketObjectsCommand);
<span class="hljs-keyword">const</span> specs = bucketObjects.Contents!.reduce(<span class="hljs-function">(<span class="hljs-params">p, c</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> key: <span class="hljs-built_in">string</span> = c.Key!;
  <span class="hljs-keyword">const</span> splitKey = key.split(<span class="hljs-string">"/"</span>);
  <span class="hljs-keyword">const</span> account = splitKey[<span class="hljs-number">1</span>];
  <span class="hljs-keyword">const</span> schemaName = splitKey[<span class="hljs-number">2</span>];
  <span class="hljs-keyword">const</span> schemaVersion = splitKey[<span class="hljs-number">3</span>].split(<span class="hljs-string">"."</span>)[<span class="hljs-number">0</span>];
  <span class="hljs-keyword">if</span> (!<span class="hljs-built_in">Object</span>.keys(p).includes(<span class="hljs-string">`<span class="hljs-subst">${account}</span>-<span class="hljs-subst">${schemaName}</span>`</span>)) {
    <span class="hljs-keyword">return</span> {
      ...p,
      [<span class="hljs-string">`<span class="hljs-subst">${account}</span>-<span class="hljs-subst">${schemaName}</span>`</span>]: {
        key,
        account,
        schemaName,
        schemaVersion,
        versions: [{ schemaVersion, key }],
      },
    };
  }
  p[<span class="hljs-string">`<span class="hljs-subst">${account}</span>-<span class="hljs-subst">${schemaName}</span>`</span>].versions.push({ schemaVersion, key });
  <span class="hljs-keyword">return</span> p;
}, {} <span class="hljs-keyword">as</span> Record&lt;<span class="hljs-built_in">string</span>, { key: <span class="hljs-built_in">string</span>; account: <span class="hljs-built_in">string</span>; schemaName: <span class="hljs-built_in">string</span>; schemaVersion: <span class="hljs-built_in">string</span>; versions: { schemaVersion: <span class="hljs-built_in">string</span>; key: <span class="hljs-built_in">string</span> }[] }&gt;);
</code></pre>
<p>We store these S3 keys in an object so that we can determine the latest version of each spec, and we process them by schema.</p>
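<p>For a schema with two discovered versions, the reduced object looks something like this (hypothetical account and schema names):</p>
<pre><code class="lang-typescript">// Hypothetical example of the reduced `specs` object
const exampleSpecs = {
  "123456789012-blog.dev.catalog@Spec.event": {
    key: "events/123456789012/blog.dev.catalog@Spec.event/1.json",
    account: "123456789012",
    schemaName: "blog.dev.catalog@Spec.event",
    schemaVersion: "1",
    versions: [
      { schemaVersion: "1", key: "events/123456789012/blog.dev.catalog@Spec.event/1.json" },
      { schemaVersion: "2", key: "events/123456789012/blog.dev.catalog@Spec.event/2.json" },
    ],
  },
};
</code></pre>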
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> specKeys = <span class="hljs-built_in">Object</span>.keys(specs);
<span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> j = <span class="hljs-number">0</span>; j &lt; specKeys.length; j++) {
  <span class="hljs-keyword">const</span> specMeta = specs[specKeys[j]];
  <span class="hljs-keyword">const</span> versionInfo = {
    schemaVersion: <span class="hljs-number">0</span>,
    key: <span class="hljs-string">""</span>,
    index: <span class="hljs-number">-1</span>,
  };
  specMeta.versions.forEach(<span class="hljs-function">(<span class="hljs-params">version, versionInd</span>) =&gt;</span> {
    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">Number</span>(version.schemaVersion) &gt; versionInfo.schemaVersion) {
      versionInfo.schemaVersion = <span class="hljs-built_in">Number</span>(version.schemaVersion);
      versionInfo.key = version.key;
      versionInfo.index = versionInd;
    }
  });
  <span class="hljs-keyword">if</span> (versionInfo.index &gt; <span class="hljs-number">-1</span>) {
    specMeta.key = versionInfo.key;
    specMeta.schemaVersion = <span class="hljs-string">`latest`</span>;
    specMeta.versions.splice(versionInfo.index, <span class="hljs-number">1</span>);
  }

  <span class="hljs-keyword">const</span> getSpecCommand = <span class="hljs-keyword">new</span> GetObjectCommand({
    Bucket,
    Key: specMeta.key,
  });

  <span class="hljs-keyword">const</span> specObj = <span class="hljs-keyword">await</span> s3Client.send(getSpecCommand);
  <span class="hljs-keyword">const</span> spec = <span class="hljs-keyword">await</span> streamToString(specObj.Body <span class="hljs-keyword">as</span> Readable);
  <span class="hljs-comment">// ...</span>
}
</code></pre>
<h3 id="heading-ensure-the-domain-folder-exists">Ensure the Domain folder exists</h3>
<p>In Part 2 we created a <code>makeDomain</code> shared method.  To ensure the domain folder exists we just need to call it:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> domainPath = makeDomain(specMeta.account);
</code></pre>
<h3 id="heading-create-the-pseudo-bus-service">Create the Pseudo Bus Service</h3>
<p>Next, we create the pseudo service:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> pseudoServiceName = <span class="hljs-string">`<span class="hljs-subst">${specMeta.account}</span>-bus`</span>;
<span class="hljs-keyword">const</span> pseudoServicePath = join(
  domainPath,
  <span class="hljs-string">`./services/<span class="hljs-subst">${pseudoServiceName}</span>`</span>
);
mkdirSync(pseudoServicePath, { recursive: <span class="hljs-literal">true</span> });
<span class="hljs-keyword">const</span> apiMd = [
  <span class="hljs-string">`---`</span>,
  <span class="hljs-string">`name: <span class="hljs-subst">${pseudoServiceName}</span>`</span>,
  <span class="hljs-string">`summary: |`</span>,
  <span class="hljs-string">`  This is a pseudo-service that represents the Default Event Bus in the AWS Account.  It isn't a real service.`</span>,
  <span class="hljs-string">`owners:`</span>,
  <span class="hljs-string">`  - martzcodes`</span>,
  <span class="hljs-string">`badges:`</span>,
  <span class="hljs-string">`  - content: EventBus`</span>,
  <span class="hljs-string">`    backgroundColor: red`</span>,
  <span class="hljs-string">`    textColor: red`</span>,
  <span class="hljs-string">`---`</span>,
];
writeFileSync(join(pseudoServicePath, <span class="hljs-string">`./index.md`</span>), apiMd.join(<span class="hljs-string">"\n"</span>));
</code></pre>
<h3 id="heading-create-the-events">Create the Events</h3>
<p>We create the latest (parent) event:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> eventPath = join(domainPath, <span class="hljs-string">`./events/<span class="hljs-subst">${specMeta.schemaName}</span>`</span>);
mkdirSync(eventPath, { recursive: <span class="hljs-literal">true</span> });
writeFileSync(join(eventPath, <span class="hljs-string">`./schema.json`</span>), spec);
<span class="hljs-keyword">if</span> (!existsSync(join(eventPath, <span class="hljs-string">`./index.md`</span>))) {
  <span class="hljs-keyword">const</span> apiMd = [
    <span class="hljs-string">`---`</span>,
    <span class="hljs-string">`name: <span class="hljs-subst">${specMeta.schemaName}</span>`</span>,
    <span class="hljs-string">`version: latest`</span>,
    <span class="hljs-string">`summary: |`</span>,
    <span class="hljs-string">`  This is the automatically stubbed documentation for the <span class="hljs-subst">${specMeta.schemaName}</span> Event in the <span class="hljs-subst">${specMeta.account}</span> AWS Account.`</span>,
    <span class="hljs-string">`producers:`</span>,
    <span class="hljs-string">`  - <span class="hljs-subst">${pseudoServiceName}</span>`</span>,
    <span class="hljs-string">`owners:`</span>,
    <span class="hljs-string">`  - martzcodes`</span>,
    <span class="hljs-string">`---`</span>,
    <span class="hljs-string">``</span>,
    <span class="hljs-string">`&lt;Schema /&gt;`</span>,
  ];
  writeFileSync(join(eventPath, <span class="hljs-string">`./index.md`</span>), apiMd.join(<span class="hljs-string">"\n"</span>));
}
</code></pre>
<h3 id="heading-add-versioning">Add Versioning</h3>
<p>And finally, the older versions:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> k = <span class="hljs-number">0</span>; k &lt; specMeta.versions.length; k++) {
  <span class="hljs-keyword">const</span> specMetaVersion = specMeta.versions[k];

  <span class="hljs-keyword">const</span> getSpecVersionCommand = <span class="hljs-keyword">new</span> GetObjectCommand({
    Bucket,
    Key: specMetaVersion.key, <span class="hljs-comment">// fetch this specific version, not the latest</span>
  });

  <span class="hljs-keyword">const</span> specVersionObj = <span class="hljs-keyword">await</span> s3Client.send(getSpecVersionCommand);
  <span class="hljs-keyword">const</span> specVersion = <span class="hljs-keyword">await</span> streamToString(
    specVersionObj.Body <span class="hljs-keyword">as</span> Readable
  );

  <span class="hljs-keyword">const</span> versionPath = join(
    eventPath,
    <span class="hljs-string">`./versioned/<span class="hljs-subst">${specMetaVersion.schemaVersion}</span>`</span>
  );
  mkdirSync(versionPath, { recursive: <span class="hljs-literal">true</span> });
  writeFileSync(join(versionPath, <span class="hljs-string">`./schema.json`</span>), specVersion);
  <span class="hljs-keyword">const</span> apiMd = [
    <span class="hljs-string">`---`</span>,
    <span class="hljs-string">`name: <span class="hljs-subst">${specMeta.schemaName}</span>`</span>,
    <span class="hljs-string">`version: <span class="hljs-subst">${specMetaVersion.schemaVersion}</span>`</span>,
    <span class="hljs-string">`summary: |`</span>,
    <span class="hljs-string">`  This is the automatically stubbed documentation for the <span class="hljs-subst">${specMeta.schemaName}</span> Event in the <span class="hljs-subst">${specMeta.account}</span> AWS Account.  This is an old version of the spec.`</span>,
    <span class="hljs-string">`producers:`</span>,
    <span class="hljs-string">`  - <span class="hljs-subst">${pseudoServiceName}</span>`</span>,
    <span class="hljs-string">`owners:`</span>,
    <span class="hljs-string">`  - martzcodes`</span>,
    <span class="hljs-string">`---`</span>,
    <span class="hljs-string">``</span>,
    <span class="hljs-string">`&lt;Schema /&gt;`</span>,
  ];
  writeFileSync(join(versionPath, <span class="hljs-string">`./index.md`</span>), apiMd.join(<span class="hljs-string">"\n"</span>));

  <span class="hljs-keyword">const</span> changelog = [<span class="hljs-string">`### Changes`</span>];
  writeFileSync(
    join(versionPath, <span class="hljs-string">`./changelog.md`</span>),
    changelog.join(<span class="hljs-string">"\n"</span>)
  );
}
</code></pre>
<h1 id="heading-the-final-result">The Final Result</h1>
<p>⚡️ You can see this in action at <a target="_blank" href="https://docs.martz.dev">docs.martz.dev</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666874635458/BbQTiM-bV.png" alt="Screenshot 2022-10-27 at 8.43.51 AM.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666874617476/wea9f32wu.png" alt="Screenshot 2022-10-27 at 8.43.33 AM.png" /></p>
<h1 id="heading-cicd-strategies">CI/CD Strategies</h1>
<p>There are a few factors when determining your own CI/CD strategy for this.  Right now everything is automatically updated when we do CDK deploys... but the CDK deploys themselves aren't automated.</p>
<p>The big factor is how many things you're tracking.  At work we monitor 30+ AWS accounts used by &gt; 100 developers.  At that scale, having every watched event kick off a deployment pipeline risks being too much... so instead we'll likely use a scheduled CI/CD build to periodically update the documentation.</p>
<p>For CI/CD you could:</p>
<ul>
<li>Create a CodeBuild/CodePipeline project to automatically deploy the EventCatalog based on "watched" events.</li>
<li>Connect your normal CI/CD up to a schedule (maybe you have a lot of events from many accounts and only want to update documentation every hour or so).</li>
<li>Continue manually deploying it <em>(which is what I'll do for my personal account since I don't deploy there often)</em></li>
</ul>
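<p>As a rough sketch of the scheduled option (the <code>catalogDeployProject</code> CodeBuild project here is hypothetical, not part of this repo), an EventBridge scheduled Rule can kick off a build that runs <code>cdk deploy</code> every hour:</p>
<pre><code class="lang-typescript">import { Duration } from "aws-cdk-lib";
import { Project } from "aws-cdk-lib/aws-codebuild";
import { Rule, Schedule } from "aws-cdk-lib/aws-events";
import { CodeBuildProject } from "aws-cdk-lib/aws-events-targets";

// Hypothetical CodeBuild project whose buildspec runs `cdk deploy`
declare const catalogDeployProject: Project;

// Rebuild the documentation on a fixed schedule instead of per-event
new Rule(this, `ScheduledCatalogDeploy`, {
  schedule: Schedule.rate(Duration.hours(1)),
  targets: [new CodeBuildProject(catalogDeployProject)],
});
</code></pre>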
<h1 id="heading-whats-next">What's Next?</h1>
<p><a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html">AWS Service Events</a> offer a lot of useful insights in to your applications deployed to AWS.</p>
<p>💡Want to see what other Service Events are available?  <a target="_blank" href="https://us-east-1.console.aws.amazon.com/events/home?region=us-east-1#/explore">Check out the EventBridge "Explore" page in the console</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666791166348/HV_rRLNrc.png" alt="Screenshot 2022-10-26 at 9.32.41 AM.png" /></p>
<p>From here you could:</p>
<ul>
<li>Extend the schemas by <a target="_blank" href="https://matt.martz.codes/improving-eventbridge-schema-discovery">Improving EventBridge Schema Discovery</a></li>
<li>Store additional information from GitHub webhooks in DynamoDB</li>
<li>Track EventBridge Rule changes via CloudFormation deployments</li>
</ul>
<p><em>What would you do next?</em></p>
<p>🙌 If anything wasn't clear or if you want to be notified on future posts... feel free to hit me up on Twitter <a target="_blank" href="https://twitter.com/martzcodes">@martzcodes</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Automate Documenting API Gateways in EventCatalog]]></title><description><![CDATA[In this series we're going to SUPERCHARGE developer experience by implementing Event Driven Documentation.  In part 1 we used CDK to deploy EventCatalog to a custom domain using CloudFront and S3.  In this post we'll use AWS Service Events from Cloud...]]></description><link>https://martzmakes.com/automate-documenting-api-gateways-in-eventcatalog</link><guid isPermaLink="true">https://martzmakes.com/automate-documenting-api-gateways-in-eventcatalog</guid><category><![CDATA[AWS]]></category><category><![CDATA[documentation]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[event-driven-architecture]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Wed, 26 Oct 2022 12:43:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/d4zUOOkm3ko/upload/v1666579919025/e91rSczW1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this series we're going to SUPERCHARGE developer experience by implementing <em>Event Driven Documentation</em>.  In <a target="_blank" href="https://matt.martz.codes/using-aws-cdk-to-deploy-eventcatalog">part 1</a> we used CDK to deploy <a target="_blank" href="https://eventcatalog.dev">EventCatalog</a> to a custom domain using CloudFront and S3.  In this post we'll use <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html">AWS Service Events</a> from CloudFormation to detect when an API Gateway has deployed and export the <a target="_blank" href="https://www.openapis.org">OpenAPI</a> spec from AWS to bundle it in our EventCatalog.  In Part 3 we'll export the JSONSchema of EventBridge Events using schema discovery and bundle them into the EventCatalog.</p>
<p>🛑 <em>Not sure where to start with CDK? See my <a target="_blank" href="https://youtu.be/T-H4nJQyMig">CDK Crash Course on freeCodeCamp</a></em></p>
<p>The architecture we'll be deploying with CDK is:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666473368096/jTQ0lrEnP.png" alt="Dev Portal - Blog Arch.png" /></p>
<p>We'll focus on creating an "Account Stack" that will subscribe to CloudFormation Events within that stack and forward them to a central stack (possibly in another account) to be included in the EventCatalog.</p>
<p>💻 The code for this series is published here: <a target="_blank" href="https://github.com/martzcodes/blog-event-driven-documentation">https://github.com/martzcodes/blog-event-driven-documentation</a></p>
<p>🤔 If you have any architecture or post questions/feedback... feel free to hit me up on Twitter <a target="_blank" href="https://twitter.com/martzcodes">@martzcodes</a>.</p>
<h1 id="heading-wait-account-stack">Wait... Account Stack?</h1>
<p>This architecture is designed to be modular.  If you only have a single AWS account you could install all of the constructs from this series in a single stack and still get the same result.</p>
<p>At my work we practice Domain Driven Design.  Because of this we end up having over 30 AWS Accounts in use by 10+ teams.  There are plenty of cases where these domains need to interact with each other via API gateway and EventBridge events, so we wanted a single place to hold the documentation for all of them.</p>
<p>We do this by having a central "Watcher" stack and deploy an "Account" stack to each AWS Account.  CDK can handle these with a single <code>cdk deploy</code> command (provided you have the right permissions / have <a target="_blank" href="https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html">bootstrapped the trusts</a>).  In practice, I deploy these Account Stacks in 3 waves (one for each environment type: dev -&gt; qa -&gt; prod).</p>
<p>The Account Stack is responsible for monitoring the API gateway deployments (and in Part 3... the EventBridge schemas) within its domain.  When something we care about changes in an account, the Account Stack forwards it to the central AWS Account with the "Watcher" stack where it gets processed.</p>
<h1 id="heading-creating-the-account-stack">Creating the Account Stack</h1>
<p>Our "Account" stack will only need two main things: A bucket to store our account artifacts and a lambda to process AWS Service events and fetch the API gateway spec.  The components of this stack will live in the <a target="_blank" href="https://github.com/martzcodes/blog-event-driven-documentation/tree/main/lib/account">./lib/account</a> project folder.</p>
<h2 id="heading-spec-bucket-and-event-bus">Spec Bucket and Event Bus</h2>
<p>💡Looking ahead... we can re-use the spec bucket for both API Gateway specs and EventBridge schemas.  We'll create this on the stack itself so we can share it with the CloudFormation listener construct as well as the construct we make in part 3.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> specBucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">`AccountSpecBucket`</span>, {
  removalPolicy: RemovalPolicy.DESTROY,
  blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
  objectOwnership: ObjectOwnership.BUCKET_OWNER_ENFORCED,
  lifecycleRules: [ { expiration: Duration.days(<span class="hljs-number">7</span>), }, ],
  autoDeleteObjects: <span class="hljs-literal">true</span>,
});
</code></pre>
<p>The bucket follows my "standard" Bucket template with the addition of setting lifecycle rules on the objects.  This will expire the objects after 7 days.  <em>This is probably unnecessary since the bucket will never grow too large in size, but it doesn't hurt to have.</em></p>
<p>We also need access to the default Event Bus that we'll be using to grant permissions to our lambda to put events with.  We can import that within the stack too:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> bus = EventBus.fromEventBusName(<span class="hljs-built_in">this</span>, <span class="hljs-string">`bus`</span>, <span class="hljs-string">"default"</span>);
</code></pre>
<h2 id="heading-cloudformation-listener-construct">CloudFormation Listener Construct</h2>
<p>Next, we'll create a construct.  When I make constructs I like to start out with a simple template where I define the construct along with some props:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> CloudFormationListenerProps {
  bus: IEventBus;
  specBucket: Bucket;
}
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> CFListener <span class="hljs-keyword">extends</span> Construct {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">
    scope: Construct,
    id: <span class="hljs-built_in">string</span>,
    props: CloudFormationListenerProps
  </span>) {
    <span class="hljs-built_in">super</span>(scope, id);
    <span class="hljs-keyword">const</span> { bus, specBucket } = props;
  }
}
</code></pre>
<p>Our construct only needs to 1) create the lambda and 2) create a rule to trigger the lambda.</p>
<p>Our lambda will be using <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html">AWS Service events</a> by subscribing to CloudFormation service events.  Any time a CloudFormation (CDK) stack deploys with changes to an API Gateway we'll want to export that gateway's OpenAPI Spec.</p>
<p>😢 <em>Unfortunately there aren't any service events for API Gateway deployments, which is why we're using CloudFormation Events.</em></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> cfFn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, <span class="hljs-string">`cfStatusFn`</span>, {
    runtime: Runtime.NODEJS_16_X,
    entry: join(__dirname, <span class="hljs-string">`./cf-listener-lambda.ts`</span>),
    logRetention: RetentionDays.ONE_DAY,
    initialPolicy: [
      <span class="hljs-keyword">new</span> PolicyStatement({
        effect: Effect.ALLOW,
        actions: [<span class="hljs-string">'cloudformation:Describe*'</span>, <span class="hljs-string">'cloudformation:Get*'</span>, <span class="hljs-string">'cloudformation:List*'</span>, <span class="hljs-string">'apigateway:Get*'</span>],
        resources: [<span class="hljs-string">'*'</span>]
      }),
    ],
});
</code></pre>
<p>Here we grant the lambda access to some CloudFormation actions that will be used to fetch information on the stack, and we also include <code>apigateway:Get*</code> since we'll use that to get the OpenAPI Spec export.</p>
<p>Our lambda needs to be able to read and write from our spec bucket, and to putEvents onto the event bus:</p>
<pre><code class="lang-typescript">specBucket.grantReadWrite(cfFn);
cfFn.addEnvironment(<span class="hljs-string">'SPEC_BUCKET'</span>, specBucket.bucketName);
bus.grantPutEventsTo(cfFn);
</code></pre>
<p>Finally we create the rule to invoke the lambda:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> Rule(<span class="hljs-built_in">this</span>, <span class="hljs-string">`cfRule`</span>, {
  eventBus: props.bus,
  eventPattern: {
    source: [<span class="hljs-string">"aws.cloudformation"</span>],
    detailType: [<span class="hljs-string">"CloudFormation Stack Status Change"</span>],
    detail: {
      <span class="hljs-string">"status-details"</span>: {
        status: [<span class="hljs-string">"CREATE_COMPLETE"</span>, <span class="hljs-string">"UPDATE_COMPLETE"</span>, <span class="hljs-string">"IMPORT_COMPLETE"</span>,],
      },
    },
  },
  targets: [<span class="hljs-keyword">new</span> LambdaFunction(cfFn)]
});
</code></pre>
<p>⚠️ We only care about events where the stack was <em>successfully</em> updated.  We don't want to re-export the OpenAPI spec for a failed deployment.</p>
<h2 id="heading-cloudformation-listener-lambda">CloudFormation Listener Lambda</h2>
<p>Our lambda is actually slightly complicated.  Each CloudFormation SDK API returns a slightly different shape (even the property names change), and none of them directly expose API Gateway Ids.  We're going to end up having to:</p>
<ol>
<li>Describe the Stack</li>
<li>Get (and paginate through) the CloudFormation ChangeSet</li>
<li>Get the "processed" Stack Template</li>
<li>List all of the API Gateways and find the ones that match our Stack Id</li>
<li>Export the OpenAPI specs from the API Gateway to S3</li>
<li>Emit an Event that OpenAPI Specs were exported</li>
</ol>
<p>We'll be using the AWS-SDK v3 clients to do all of this.</p>
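<p>For reference, here's a minimal sketch of the v3 clients the handler needs (the actual repo may organize these differently):</p>
<pre><code class="lang-typescript">import { CloudFormationClient } from "@aws-sdk/client-cloudformation";
import { APIGatewayClient } from "@aws-sdk/client-api-gateway";
import { S3Client } from "@aws-sdk/client-s3";
import { EventBridgeClient } from "@aws-sdk/client-eventbridge";

// Instantiate once, outside the handler, so warm invocations reuse them
const cf = new CloudFormationClient({});
const api = new APIGatewayClient({});
const s3 = new S3Client({});
const eb = new EventBridgeClient({});
</code></pre>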
<h3 id="heading-describe-the-stack">Describe the Stack</h3>
<p>We want the "friendly" stack name as part of our output.  We get this by describing the stack:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> StackName = event.detail[<span class="hljs-string">"stack-id"</span>];
<span class="hljs-keyword">const</span> describeCommand = <span class="hljs-keyword">new</span> DescribeStacksCommand({ StackName });
<span class="hljs-keyword">const</span> stacks = <span class="hljs-keyword">await</span> cf.send(describeCommand);
<span class="hljs-keyword">const</span> stack = stacks.Stacks?.[<span class="hljs-number">0</span>];
</code></pre>
<h3 id="heading-get-the-changesets">Get the ChangeSets</h3>
<p>Next, we only care about these events if the ChangeSet includes API Gateway changes.  We'll paginate through the ChangeSet with the SDK and stop as soon as we see an API Gateway:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> ChangeSetName = stack?.ChangeSetId;
<span class="hljs-keyword">const</span> getChangeSets = <span class="hljs-keyword">async</span> (NextToken?: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">boolean</span>&gt; =&gt; {
  <span class="hljs-keyword">const</span> changeSet = <span class="hljs-keyword">await</span> cf.send(
    <span class="hljs-keyword">new</span> DescribeChangeSetCommand({
      StackName,
      ChangeSetName,
      NextToken,
    })
  );
  <span class="hljs-keyword">const</span> apiChanged =
    (changeSet.Changes || []).filter(<span class="hljs-function">(<span class="hljs-params">change</span>) =&gt;</span>
      change.ResourceChange?.ResourceType?.startsWith(<span class="hljs-string">"AWS::ApiGateway"</span>)
    ).length !== <span class="hljs-number">0</span>;
  <span class="hljs-keyword">if</span> (apiChanged) {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
  }
  <span class="hljs-keyword">if</span> (changeSet.NextToken) {
    <span class="hljs-keyword">return</span> getChangeSets(changeSet.NextToken);
  }
  <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
};

<span class="hljs-keyword">const</span> apiChanged = <span class="hljs-keyword">await</span> getChangeSets();
</code></pre>
<p>We use <code>change.ResourceChange?.ResourceType?.startsWith("AWS::ApiGateway")</code> to detect if API Gateways had any changes.</p>
<h3 id="heading-get-the-template">Get the Template</h3>
<p>Next we get the "processed" CloudFormation template.  "Processed" means it contains the resolved ARNs after everything is deployed.  We need this so we can get the names of the API Gateway stages (which unfortunately aren't available from any other query).</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> getTemplateCommand = <span class="hljs-keyword">new</span> GetTemplateCommand({
  StackName,
  TemplateStage: TemplateStage.Processed,
});
<span class="hljs-keyword">const</span> template = <span class="hljs-keyword">await</span> cf.send(getTemplateCommand);
<span class="hljs-keyword">if</span> (!template.TemplateBody) {
  <span class="hljs-keyword">return</span>;
}
<span class="hljs-keyword">const</span> resources: Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">any</span>&gt; = <span class="hljs-built_in">JSON</span>.parse(
  template.TemplateBody
)?.Resources;
<span class="hljs-keyword">const</span> apiStages = <span class="hljs-built_in">Object</span>.values(resources).filter(
  <span class="hljs-function">(<span class="hljs-params">res: <span class="hljs-built_in">any</span></span>) =&gt;</span> res.Type === <span class="hljs-string">"AWS::ApiGateway::Stage"</span>
);
</code></pre>
<h3 id="heading-find-our-api-gateways">Find our API Gateways</h3>
<p>🤢 This part sucks.  There's no way to query API Gateways for a particular stack... so you have to list them all and filter for the API Gateways tagged with a CloudFormation stack id:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> getApis = <span class="hljs-keyword">new</span> GetRestApisCommand({
  limit: <span class="hljs-number">500</span>,
});
<span class="hljs-keyword">const</span> apiRes = <span class="hljs-keyword">await</span> api.send(getApis);
<span class="hljs-keyword">if</span> (!apiRes.items) {
  <span class="hljs-keyword">return</span>;
}
<span class="hljs-keyword">const</span> apis = apiRes.items.reduce(<span class="hljs-function">(<span class="hljs-params">p, c</span>) =&gt;</span> {
  <span class="hljs-keyword">if</span> (c.tags?.[<span class="hljs-string">"aws:cloudformation:stack-id"</span>] !== StackName) {
    <span class="hljs-keyword">return</span> p;
  }
  <span class="hljs-keyword">if</span> (c.tags?.[<span class="hljs-string">"aws:cloudformation:logical-id"</span>]) {
    <span class="hljs-keyword">return</span> { ...p, [c.tags?.[<span class="hljs-string">"aws:cloudformation:logical-id"</span>]]: c.id! };
  }
  <span class="hljs-keyword">return</span> p;
}, {} <span class="hljs-keyword">as</span> Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">string</span>&gt;);
</code></pre>
<h3 id="heading-export-the-openapi-specs-to-s3">Export the OpenAPI Specs to S3</h3>
<p>Now that we know what APIs are in our stack, we can export the OpenAPI Specs:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> j = <span class="hljs-number">0</span>; j &lt; apiStages.length; j++) {
  <span class="hljs-keyword">const</span> restApiId = apis[apiStages[j].Properties.RestApiId.Ref];
  <span class="hljs-keyword">const</span> stageName = apiStages[j].Properties.StageName;
  <span class="hljs-keyword">const</span> exportCommand = <span class="hljs-keyword">new</span> GetExportCommand({
    accepts: <span class="hljs-string">"application/json"</span>,
    exportType: <span class="hljs-string">"oas30"</span>,
    restApiId,
    stageName,
  });
  <span class="hljs-keyword">const</span> exportRes = <span class="hljs-keyword">await</span> api.send(exportCommand);
  <span class="hljs-keyword">const</span> oas = Buffer.from(exportRes.body!.buffer).toString();
  apiSpecs[<span class="hljs-string">`<span class="hljs-subst">${restApiId}</span>-<span class="hljs-subst">${stageName}</span>`</span>] = <span class="hljs-built_in">JSON</span>.parse(oas);
}
</code></pre>
<p>And upload them to our Account's Spec Bucket:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> fileLoc = {
  Bucket: process.env.SPEC_BUCKET,
  Key: <span class="hljs-string">`openapi/<span class="hljs-subst">${stack.StackName}</span>/specs.json`</span>,
};

<span class="hljs-keyword">const</span> putObjectCommand = <span class="hljs-keyword">new</span> PutObjectCommand({
  ...fileLoc,
  Body: <span class="hljs-built_in">JSON</span>.stringify(apiSpecs),
});
<span class="hljs-keyword">await</span> s3.send(putObjectCommand);
</code></pre>
<h3 id="heading-emit-the-event">Emit the Event</h3>
<p>Finally, we can emit the Event to the default Event Bus.  We get a pre-signed URL and send that to EventBridge with the "friendly" stack name and the number of API Specs contained within:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> getObjectCommand = <span class="hljs-keyword">new</span> GetObjectCommand({
  ...fileLoc,
});
<span class="hljs-keyword">const</span> url = <span class="hljs-keyword">await</span> getSignedUrl(s3, getObjectCommand, { expiresIn: <span class="hljs-number">60</span> * <span class="hljs-number">60</span> });

<span class="hljs-keyword">const</span> eventDetail: OpenApiEvent = {
  stackName: stack.StackName!,
  apiSpecs: <span class="hljs-built_in">Object</span>.keys(apiSpecs).length,
  url,
};

<span class="hljs-keyword">const</span> putEvent = <span class="hljs-keyword">new</span> PutEventsCommand({
  Entries: [
    {
      Source,
      DetailType: BlogDetailTypes.OPEN_API,
      Detail: <span class="hljs-built_in">JSON</span>.stringify(eventDetail),
    },
  ],
});
<span class="hljs-keyword">await</span> eb.send(putEvent);
</code></pre>
<p>We use pre-signed URLs because they're easy and flexible.  We could also grant read access to the Watcher Stack's Account Principal.  The spec files are generally too large to include in the event body.</p>
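<p>If you preferred that principal-based approach instead, a minimal sketch (back in the Account Stack, using the <code>watcherAccount</code> id we'll see in the next section) would be:</p>
<pre><code class="lang-typescript">import { AccountPrincipal } from "aws-cdk-lib/aws-iam";

// Alternative to pre-signed URLs: let the Watcher account read the spec bucket directly
if (watcherAccount &amp;&amp; watcherAccount !== Stack.of(this).account) {
  specBucket.grantRead(new AccountPrincipal(watcherAccount));
}
</code></pre>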
<h2 id="heading-forward-events">Forward Events</h2>
<p>⚠️ For single-account setups, this step isn't necessary.  But for cross-account setups you'd need to forward your events from your account's bus to the target bus:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">if</span> (watcherAccount &amp;&amp; watcherAccount !== Stack.of(<span class="hljs-built_in">this</span>).account) {
  <span class="hljs-keyword">new</span> Rule(<span class="hljs-built_in">this</span>, <span class="hljs-string">`WatcherFwd`</span>, {
    eventPattern: {
      source: [Source],
    },
    targets: [<span class="hljs-keyword">new</span> EventBusTarget(
      EventBus.fromEventBusArn(<span class="hljs-built_in">this</span>, <span class="hljs-string">`watcher-bus`</span>, <span class="hljs-string">`arn:aws:events:<span class="hljs-subst">${Stack.<span class="hljs-keyword">of</span>(<span class="hljs-built_in">this</span>).region}</span>:<span class="hljs-subst">${watcherAccount}</span>:event-bus/default`</span>)
    )]
  });
}
</code></pre>
<p>This rule forwards anything with our "shared" Source to the "Watcher" account's default bus.</p>
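<p>The <code>Source</code> referenced here (and in the earlier PutEvents call) is a shared constant that both stacks import, so the rule pattern and the emitted events always agree.  The exact value is arbitrary; a sketch (the value is assumed, not from the repo):</p>
<pre><code class="lang-typescript">// Shared between the Account and Watcher stacks (e.g. in a common constants file);
// the value is arbitrary as long as emitters and rules agree
export const Source = "event-driven-documentation";
</code></pre>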
<h1 id="heading-updating-the-watcher-stack">Updating the Watcher Stack</h1>
<p>Now we have events "flowing" to our "Watcher" account and we need to use them.  The next part of our architecture will subscribe to these events, copy the specs to our Watcher Bucket and bootstrap the needed markdown files for EventCatalog to use them.</p>
<h2 id="heading-watcher-spec-construct">Watcher Spec Construct</h2>
<p>The Watcher spec construct will grant cross-account permissions (if in a multi-account situation) and create a new bucket for the specs.</p>
<pre><code class="lang-typescript"><span class="hljs-built_in">this</span>.bus = EventBus.fromEventBusName(<span class="hljs-built_in">this</span>, <span class="hljs-string">`bus`</span>, <span class="hljs-string">"default"</span>);
<span class="hljs-keyword">const</span> { watchedAccounts = [] } = props;
watchedAccounts.forEach(<span class="hljs-function">(<span class="hljs-params">watchedAccount</span>) =&gt;</span> {
  <span class="hljs-keyword">if</span> (watchedAccount !== Stack.of(<span class="hljs-built_in">this</span>).account) {
    <span class="hljs-built_in">this</span>.bus.grantPutEventsTo(<span class="hljs-keyword">new</span> AccountPrincipal(watchedAccount));
  }
});
</code></pre>
<p>In a cross-account situation, a bus needs to grant putEvents access to the other accounts' principals so that their forwarding rules can deliver events to it.  We didn't do this in the "Account" stack because the Watcher stack doesn't emit events back to the watched accounts.</p>
<p>And then we create the bucket.</p>
<pre><code class="lang-typescript"><span class="hljs-built_in">this</span>.specBucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">`WatcherSpecBucket`</span>, {
  removalPolicy: RemovalPolicy.DESTROY,
  blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
  objectOwnership: ObjectOwnership.BUCKET_OWNER_ENFORCED,
  autoDeleteObjects: <span class="hljs-literal">true</span>,
});
<span class="hljs-keyword">new</span> CfnOutput(<span class="hljs-built_in">this</span>, <span class="hljs-string">`WatcherSpecBucketOutput`</span>, {
  value: <span class="hljs-built_in">this</span>.specBucket.bucketName,
});
</code></pre>
<p>Here the <code>CfnOutput</code> is useful because later we'll need the bucket's name for our post-processing script.</p>
<h3 id="heading-add-rule-method">Add Rule Method</h3>
<p>Next we need to create a lambda and invoke it via a rule.  We know we're going to use a similar pattern in part 3... so we can abstract this a little bit.  Both will need a lambda invoked by a rule... so we'll create an <code>addRule</code> method on our Construct:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> fn = <span class="hljs-keyword">new</span> NodejsFunction(<span class="hljs-built_in">this</span>, lambdaName, {
  runtime: Runtime.NODEJS_16_X,
  entry: join(__dirname, <span class="hljs-string">`./<span class="hljs-subst">${lambdaName}</span>.ts`</span>),
  logRetention: RetentionDays.ONE_DAY,
});
<span class="hljs-built_in">this</span>.specBucket.grantWrite(fn);
fn.addEnvironment(<span class="hljs-string">"SPEC_BUCKET"</span>, <span class="hljs-built_in">this</span>.specBucket.bucketName);

<span class="hljs-keyword">new</span> Rule(<span class="hljs-built_in">this</span>, <span class="hljs-string">`<span class="hljs-subst">${lambdaName}</span>Rule`</span>, {
  eventBus: <span class="hljs-built_in">this</span>.bus,
  eventPattern: {
    source: [Source],
    detailType: [detailType],
  },
  targets: [<span class="hljs-keyword">new</span> LambdaFunction(fn)],
});
</code></pre>
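<p>The snippet above is the body of that method; the wrapper isn't shown here, but a plausible signature (parameter names are my assumption, inferred from the body) looks like:</p>
<pre><code class="lang-typescript">// Hypothetical wrapper on the Watcher spec construct; the body is the snippet above
addRule(lambdaName: string, detailType: string): void {
  // ...create the NodejsFunction, grant bucket access, and wire up the Rule
  // exactly as shown above...
}
</code></pre>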
<h2 id="heading-copy-the-specs">Copy the Spec(s)</h2>
<p>To reduce the number of file transfers, our Account Stack combined (potentially) multiple OpenAPI specs into a single file.  We'll grab that file and split it up.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(event.detail.url);
<span class="hljs-keyword">const</span> specs = (<span class="hljs-keyword">await</span> res.json()) <span class="hljs-keyword">as</span> Record&lt;<span class="hljs-built_in">string</span>, <span class="hljs-built_in">any</span>&gt;;

<span class="hljs-keyword">const</span> gateways = <span class="hljs-built_in">Object</span>.keys(specs);
<span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> j = <span class="hljs-number">0</span>; j &lt; gateways.length; j++) {
  <span class="hljs-keyword">const</span> fileLoc = {
    Bucket: process.env.SPEC_BUCKET,
    Key: <span class="hljs-string">`openapi/<span class="hljs-subst">${event.account}</span>/<span class="hljs-subst">${event.detail.stackName}</span>/<span class="hljs-subst">${gateways[j]}</span>/openapi.json`</span>,
  };

  <span class="hljs-keyword">const</span> putObjectCommand = <span class="hljs-keyword">new</span> PutObjectCommand({
    ...fileLoc,
    Body: <span class="hljs-built_in">JSON</span>.stringify(specs[gateways[j]], <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>),
  });
  <span class="hljs-keyword">await</span> s3.send(putObjectCommand);
}
</code></pre>
<p>We use the pre-signed URL to fetch the combined file and then store each spec within it as its own file.  We store each file under a templated path that we'll be able to extract information from later.</p>
<h2 id="heading-bootstrap-the-specs-with-markdown-into-eventcatalog">Bootstrap the Spec(s) with Markdown into EventCatalog</h2>
<p>Unfortunately with EventCatalog you can't just throw it a bunch of spec files and have everything work.  It's a static site, and those spec files need some markdown in a particular folder structure in order to work.  It does have a handy <em>domain</em> concept though, and we're going to leverage that to split things out by AWS Account.</p>
<pre><code class="lang-bash">documentation
├── domains
│   ├──AWS Account A
│   │     ├──index.md
│   │     ├──services
│   │     │  └──API Gateway
│   │     │     └──index.md
│   │     │     └──schema.json
│   ├──AWS Account B
│   │     ├──index.md
│   │     ├──services
│   │     │  └──API Gateway
│   │     │     └──index.md
│   │     │     └──schema.json
│   │     ├──events
├── eventcatalog.config.js
├── package.json
├── README.md
└── yarn.lock
</code></pre>
<p>So we need to run a script that pulls the S3 files and processes them into this structure, creating the files as-needed.  We'll need to run this with an AWS Profile that has access to read from the bucket.</p>
<p>Locally, we're going to store this in a <a target="_blank" href="https://github.com/martzcodes/blog-event-driven-documentation/tree/main/lib/prepare">./lib/prepare</a> subfolder and we'll add a script to our <code>package.json</code> to run it: <code>"prepare:catalog": "ts-node ./lib/prepare $SPEC_BUCKET",</code></p>
<p>We also use the <code>WatcherSpecBucketOutput</code> value from the <code>CfnOutput</code> we made when creating the bucket: <code>export SPEC_BUCKET=&lt;WatcherSpecBucketOutput&gt;</code>.</p>
<h3 id="heading-list-the-files">List the files</h3>
<p>We'll use the v3 aws-sdk client to list the objects with the <code>openapi/</code> prefix.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> listBucketObjectsCommand = <span class="hljs-keyword">new</span> ListObjectsCommand({
  Bucket,
  Prefix: <span class="hljs-string">"openapi/"</span>,
});
<span class="hljs-keyword">const</span> bucketObjects = <span class="hljs-keyword">await</span> s3Client.send(listBucketObjectsCommand);
<span class="hljs-keyword">const</span> specs = bucketObjects.Contents!.map(<span class="hljs-function">(<span class="hljs-params">content</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> key = content.Key!;
  <span class="hljs-keyword">const</span> splitKey = key.split(<span class="hljs-string">"/"</span>);
  <span class="hljs-keyword">const</span> account = splitKey[<span class="hljs-number">1</span>];
  <span class="hljs-keyword">const</span> stack = splitKey[<span class="hljs-number">2</span>];
  <span class="hljs-keyword">const</span> apiId = splitKey[<span class="hljs-number">3</span>].split(<span class="hljs-string">"."</span>)[<span class="hljs-number">0</span>];
  <span class="hljs-keyword">return</span> { key, account, stack, apiId };
});
</code></pre>
<p>From there we map each object, splitting the useful information out of the key name.</p>
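<p>⚠️ One caveat: <code>ListObjectsCommand</code> returns at most 1,000 keys per call.  If your catalog grows past that, you'd want to paginate; the v3 SDK ships a paginator for the V2 listing API, sketched here:</p>
<pre><code class="lang-typescript">import { paginateListObjectsV2, S3Client } from "@aws-sdk/client-s3";

const s3Client = new S3Client({});

// Collect every key under the openapi/ prefix, however many pages it takes
const keys: string[] = [];
for await (const page of paginateListObjectsV2(
  { client: s3Client },
  { Bucket, Prefix: "openapi/" }
)) {
  keys.push(...(page.Contents ?? []).map((content) =&gt; content.Key!));
}
</code></pre>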
<h3 id="heading-create-the-domain-folder">Create the Domain folder</h3>
<p>Next, we need to make sure the "Domain" folder exists and if not, create it with the markdown files.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> domain = <span class="hljs-string">`acct-<span class="hljs-subst">${account}</span>`</span>;
<span class="hljs-keyword">const</span> domainPath = join(__dirname, <span class="hljs-string">`../../catalog/domains/<span class="hljs-subst">${domain}</span>/`</span>);
mkdirSync(domainPath, { recursive: <span class="hljs-literal">true</span> });
<span class="hljs-keyword">if</span> (!existsSync(join(domainPath, <span class="hljs-string">`./index.md`</span>))) {
  <span class="hljs-keyword">const</span> domainMd = [
    <span class="hljs-string">`---`</span>,
    <span class="hljs-string">`name: <span class="hljs-subst">${domain}</span>`</span>,
    <span class="hljs-string">`summary: |`</span>,
    <span class="hljs-string">`  This is the automatically stubbed documentation. Please replace this by clicking the edit button above.`</span>,
    <span class="hljs-string">`owners:`</span>,
    <span class="hljs-string">`  - martzcodes`</span>,
    <span class="hljs-string">`---`</span>,
  ];
  writeFileSync(join(domainPath, <span class="hljs-string">`./index.md`</span>), domainMd.join(<span class="hljs-string">"\n"</span>));
}
</code></pre>
<p>It's a very simple stub that just says it was automatically stubbed.</p>
<p>⚡️You can add yourself as a contributor to the <code>./catalog/eventcatalog.config.js</code> file and then list yourself as an owner in the markdown, and you'll be displayed on the page.</p>
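<p>A minimal sketch of what that contributor entry might look like (field names per EventCatalog's docs at the time; treat the <code>avatarUrl</code> here as a placeholder):</p>
<pre><code class="lang-javascript">// catalog/eventcatalog.config.js (excerpt)
module.exports = {
  // ...existing config...
  users: [
    {
      id: 'martzcodes',
      name: 'Matt Martz',
      avatarUrl: 'https://avatars.githubusercontent.com/martzcodes', // placeholder
      role: 'Owner',
    },
  ],
};
</code></pre>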
<h3 id="heading-create-the-service-folder">Create the Service Folder</h3>
<p>Next, we need to create the service folder.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> basePath = join(domainPath, <span class="hljs-string">`./services/<span class="hljs-subst">${apiName}</span>`</span>);
mkdirSync(basePath, { recursive: <span class="hljs-literal">true</span> });
writeFileSync(join(basePath, <span class="hljs-string">`./openapi.json`</span>), spec);
<span class="hljs-keyword">if</span> (!existsSync(join(basePath, <span class="hljs-string">`./index.md`</span>))) {
  <span class="hljs-keyword">const</span> apiMd = [
    <span class="hljs-string">`---`</span>,
    <span class="hljs-string">`name: <span class="hljs-subst">${apiName}</span>`</span>,
    <span class="hljs-string">`summary: |`</span>,
    <span class="hljs-string">`  This is the automatically stubbed documentation for the <span class="hljs-subst">${apiName}</span> API (<span class="hljs-subst">${specMeta.apiId}</span>) in the <span class="hljs-subst">${specMeta.stack}</span> stack. Please replace this.`</span>,
    <span class="hljs-string">`owners:`</span>,
    <span class="hljs-string">`  - martzcodes`</span>,
    <span class="hljs-string">`---`</span>,
    <span class="hljs-string">``</span>,
    <span class="hljs-string">`&lt;OpenAPI /&gt;`</span>,
  ];
  writeFileSync(join(basePath, <span class="hljs-string">`./index.md`</span>), apiMd.join(<span class="hljs-string">"\n"</span>));
}
</code></pre>
<p>Of note here is that we're NOT re-stubbing the file if the file already exists.  The idea here is that developers would come into the catalog project and update their documentation via commit.  Spec/Schema discovery will be automated and from that point it's easily extended by the devs. 💪</p>
<h2 id="heading-update-the-bucketdeployment">Update the BucketDeployment</h2>
<p>Last (but not least)... we want to update our BucketDeployment from Part 1 to run the <code>prepare:catalog</code> script as part of the cdk deployment.  This means that any time the watcher stack is deployed, it'll pull the files from S3 and bootstrap any that weren't already defined!  The asset generation then becomes:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> bundle = Source.asset(uiPath, {
  assetHash: <span class="hljs-string">`<span class="hljs-subst">${<span class="hljs-built_in">Date</span>.now()}</span>`</span>,
  bundling: {
    command: [<span class="hljs-string">"sh"</span>, <span class="hljs-string">"-c"</span>, <span class="hljs-string">'echo "Not Used"'</span>],
    image: DockerImage.fromRegistry(<span class="hljs-string">"alpine"</span>), <span class="hljs-comment">// required but not used</span>
    local: {
      tryBundle(outputDir: <span class="hljs-built_in">string</span>) {
        execSync(<span class="hljs-string">"npm run prepare:catalog"</span>); <span class="hljs-comment">// &lt;-- added this</span>
        execSync(<span class="hljs-string">"cd catalog &amp;&amp; npm i"</span>);
        execSync(<span class="hljs-string">"cd catalog &amp;&amp; npm run build"</span>);
        copySync(uiPath, outputDir, {
          ...execOptions,
          recursive: <span class="hljs-literal">true</span>,
        });
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
      },
    },
  },
});
</code></pre>
<h1 id="heading-see-it-in-action">See It In Action</h1>
<p>🙈 <em>SPOILER ALERT: You can see this in action at <a target="_blank" href="https://docs.martz.dev">docs.martz.dev</a> which will be the result of this series</em>🤫</p>
<p>I went ahead and deployed my blog post <a target="_blank" href="https://matt.martz.codes/create-a-cross-account-iam-authorized-apigateway-with-cdk">Create a Cross-Account IAM Authorized APIGateway with CDK</a> in order to get some files to work with and then I CDK deployed this project.</p>
<p>The domain was created:
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666624849031/pHXx6Wr3R.png" alt="Screen Shot 2022-10-24 at 11.20.26 AM.png" /></p>
<p>And the Service was created with the exported OpenAPI spec!
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666624860114/Yzw4ldFTG.png" alt="Screen Shot 2022-10-24 at 11.20.19 AM.png" /></p>
<h1 id="heading-whats-next">What's Next?</h1>
<p>Now that we have automatic API Gateway specs we're only missing the EventBridge Event Schemas... and that's what we'll be doing in Part 3.</p>
<p>🙌 If anything wasn't clear or if you want to be notified on when I post part 3... feel free to hit me up on Twitter <a target="_blank" href="https://twitter.com/martzcodes">@martzcodes</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Using AWS CDK to Deploy EventCatalog]]></title><description><![CDATA[In this series we're going to SUPERCHARGE developer experience by implementing Event Driven Documentation.  In this post, we'll start by using CDK to deploy EventCatalog to a custom domain using CloudFront and S3.  In Part 2 we'll use AWS Service Eve...]]></description><link>https://martzmakes.com/using-aws-cdk-to-deploy-eventcatalog</link><guid isPermaLink="true">https://martzmakes.com/using-aws-cdk-to-deploy-eventcatalog</guid><category><![CDATA[AWS]]></category><category><![CDATA[documentation]]></category><category><![CDATA[aws-cdk]]></category><category><![CDATA[event-driven-architecture]]></category><dc:creator><![CDATA[Matt Martz]]></dc:creator><pubDate>Mon, 24 Oct 2022 13:05:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/Oaqk7qqNh_c/upload/v1666579856103/W7rZisdZC.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this series we're going to SUPERCHARGE developer experience by implementing <strong><em>Event Driven Documentation</em></strong>.  In this post, we'll start by using CDK to deploy <a target="_blank" href="https://eventcatalog.dev">EventCatalog</a> to a custom domain using CloudFront and S3.  In Part 2 we'll use <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html">AWS Service Events</a> from CloudFormation to detect when an API Gateway has deployed and export the <a target="_blank" href="https://www.openapis.org">OpenAPI</a> spec from AWS to bundle it in our EventCatalog.  In Part 3 we'll export the JSONSchema of EventBridge Events using schema discovery and bundle them into the EventCatalog.</p>
<p>🛑 <em>Not sure where to start with CDK? See my <a target="_blank" href="https://youtu.be/T-H4nJQyMig">CDK Crash Course on freeCodeCamp</a></em></p>
<p>The architecture we'll be deploying with CDK is:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666473368096/jTQ0lrEnP.png" alt="Dev Portal - Blog Arch.png" /></p>
<p>In this post we'll be focusing on creating the "Watcher Stack" that creates the EventCatalog UI Bucket along with the CloudFront distribution.  I'll be going into more detail on our target architecture in the follow-on posts.</p>
<p>💻 The code for this series is published here: <a target="_blank" href="https://github.com/martzcodes/blog-event-driven-documentation">https://github.com/martzcodes/blog-event-driven-documentation</a></p>
<p>🤔 If you have any architecture or post questions/feedback... feel free to hit me up on Twitter <a target="_blank" href="https://twitter.com/martzcodes">@martzcodes</a>.</p>
<h1 id="heading-what-is-eventcatalog">What is EventCatalog?</h1>
<p><a target="_blank" href="https://eventcatalog.dev">EventCatalog</a> is an <em>awesome</em> Open Source project built by <a target="_blank" href="https://twitter.com/boyney123">David Boyne</a> that helps you document your events, services and domains.  It ingests a combination of markdown files, OpenAPI specs and EventBridge event schemas (or AsyncAPI specs) to build a static documentation site.</p>
<p>🙈 <em>SPOILER ALERT: You can see this in action at <a target="_blank" href="https://docs.martz.dev">docs.martz.dev</a> which will be the result of this series</em>🤫</p>
<h1 id="heading-deploying-eventcatalog">Deploying EventCatalog</h1>
<p>Our <a target="_blank" href="https://eventcatalog.dev">EventCatalog</a> is going to be stored in S3 and hosted via CloudFront.  To do that we're going to create a Level 3 CDK Construct that will:</p>
<ul>
<li>Create the UI Bucket</li>
<li>Create the CloudFront Distribution that hosts the contents of the UI Bucket</li>
<li>Use a BucketDeployment resource to upload the EventCatalog assets to the UI Bucket</li>
</ul>
<p>After initializing a CDK project, we can install EventCatalog using: <code>npx @eventcatalog/create-eventcatalog@latest catalog</code> and it will go into a <code>catalog</code> subfolder in our project.</p>
<h2 id="heading-create-the-ui-bucket">Create the UI Bucket</h2>
<p>Creating Buckets with CDK is fairly simple.  We'll use the L2 Bucket Construct in our L3 Catalog construct to create the bucket... and then we'll give it some sensible defaults.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> CatalogOne <span class="hljs-keyword">extends</span> Construct {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span></span>) {
    <span class="hljs-built_in">super</span>(scope, id);
    <span class="hljs-keyword">const</span> destinationBucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">`EventCatalogBucket`</span>, {
      removalPolicy: RemovalPolicy.DESTROY,
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
      objectOwnership: ObjectOwnership.BUCKET_OWNER_ENFORCED,
      autoDeleteObjects: <span class="hljs-literal">true</span>,
    });
  }
}
</code></pre>
<ul>
<li><code>removalPolicy: RemovalPolicy.DESTROY</code> will delete the Bucket if the Stack is destroyed.  In order to do this, we need to make sure the bucket is empty. <code>autoDeleteObjects: true</code> creates a CustomResource that will empty the bucket if the Stack is destroyed.</li>
<li><code>blockPublicAccess: BlockPublicAccess.BLOCK_ALL</code> will prevent users from directly retrieving files from S3 (forcing them to go through CloudFront).</li>
<li><code>objectOwnership: ObjectOwnership.BUCKET_OWNER_ENFORCED</code> enforces normal IAM permissions on the bucket (instead of the hard-to-use ACL-based permissions that S3 started with long ago...)</li>
</ul>
<h2 id="heading-create-the-cloudfront-distribution">Create the CloudFront Distribution</h2>
<p>In order for CloudFront to access the files in the S3 bucket, we need to grant it read access.  We do this by creating an <code>OriginAccessIdentity</code> (emphasis on <strong><em>IDENTITY</em></strong>) and using the bucket's <code>grantRead</code> method to grant the access.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> originAccessIdentity = <span class="hljs-keyword">new</span> cloudfront.OriginAccessIdentity(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">`OriginAccessIdentity`</span>
);
destinationBucket.grantRead(originAccessIdentity);
</code></pre>
<p>We then create the CloudFront Distribution with an S3 Origin that uses the identity.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> distribution = <span class="hljs-keyword">new</span> cloudfront.Distribution(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">`EventCatalogDistribution`</span>,
  {
    defaultRootObject: <span class="hljs-string">"index.html"</span>,
    defaultBehavior: {
      origin: <span class="hljs-keyword">new</span> S3Origin(destinationBucket, { originAccessIdentity }),
    },
  }
);
</code></pre>
<p>For convenience, we'll create a CloudFormation Output that has the Catalog's CloudFront-hosted URL.  This will be logged out as part of the deployment.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> CfnOutput(<span class="hljs-built_in">this</span>, <span class="hljs-string">`CatalogUrl`</span>, {
  value: <span class="hljs-string">`https://<span class="hljs-subst">${distribution.distributionDomainName}</span>`</span>,
});
</code></pre>
<h2 id="heading-use-a-bucketdeployment-to-upload-assets">Use a BucketDeployment to Upload Assets</h2>
<p>Finally, we need something to actually host.  We can use S3 Deployment constructs to upload our EventCatalog's output and deploy it to S3.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> execOptions: ExecSyncOptions = {
  stdio: [<span class="hljs-string">"ignore"</span>, process.stderr, <span class="hljs-string">"inherit"</span>],
};
<span class="hljs-keyword">const</span> uiPath = join(__dirname, <span class="hljs-string">`../../../catalog/out`</span>);
<span class="hljs-keyword">const</span> bundle = Source.asset(uiPath, {
  bundling: {
    command: [<span class="hljs-string">"sh"</span>, <span class="hljs-string">"-c"</span>, <span class="hljs-string">'echo "Not Used"'</span>],
    image: DockerImage.fromRegistry(<span class="hljs-string">"alpine"</span>), <span class="hljs-comment">// required but not used</span>
    local: {
      tryBundle(outputDir: <span class="hljs-built_in">string</span>) {
        execSync(<span class="hljs-string">"cd catalog &amp;&amp; npm i"</span>);
        execSync(<span class="hljs-string">"cd catalog &amp;&amp; npm run build"</span>);
        copySync(uiPath, outputDir, {
          ...execOptions,
          recursive: <span class="hljs-literal">true</span>,
        });
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
      },
    },
  },
});
</code></pre>
<p><code>Source.asset</code> can accept commands to locally bundle things.  Normally it tries to use docker to do the bundling, but it will accept a local override.  The <code>command</code> and <code>image</code> are used as the fallback in case <code>tryBundle</code> returns falsy.  Within <code>tryBundle</code> we can use any commands we need to create the output.</p>
<ul>
<li><code>cd catalog &amp;&amp; npm i</code> changes directory into our catalog folder and makes sure the dependencies are installed</li>
<li><code>cd catalog &amp;&amp; npm run build</code> runs the build script for EventCatalog</li>
<li><code>copySync(...</code> recursively copies the output folder of EventCatalog as an S3 Asset</li>
</ul>
<p>Next we use that S3 Asset in a Bucket Deployment:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> BucketDeployment(<span class="hljs-built_in">this</span>, <span class="hljs-string">`DeployCatalog`</span>, {
  destinationBucket,
  distribution,
  sources: [bundle],
  prune: <span class="hljs-literal">true</span>,
  memoryLimit: <span class="hljs-number">1024</span>,
});
</code></pre>
<p>Here, we specify our UI Bucket, CloudFront distribution and S3 Asset.  We include <code>prune: true</code> to ensure old versions of the static site's assets get removed on subsequent deployments, and we bumped the memory limit of the BucketDeployment lambda so it's a little faster.  In the background this <code>BucketDeployment</code> construct uses a lambda to do the S3 upload.  By setting the <code>memoryLimit</code> we're setting the memory of that lambda.</p>
<p>💡 If you use BucketDeployments for other things and run into issues with slowness or failures... try increasing the memory.  The default memory is only 128 MB.</p>
<p>If we deploy our EventCatalog now and go to the Catalog's output URL in the deployment log... we'll see our EventCatalog!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666475735453/Qcq7loZla.png" alt="Screen Shot 2022-10-22 at 5.54.45 PM.png" /></p>
<p>EventCatalog includes some Examples built-in!  BUT if we reload any page, we'll get a <code>NoSuchKey</code> error 😱</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666475800575/Zd54D5I0x.png" alt="Screen Shot 2022-10-21 at 10.22.48 AM.png" /></p>
<p>We get this because EventCatalog requests a path that doesn't map directly to an S3 object key, and nothing knows to add <code>index.html</code> to the end of it.  For Single Page Apps built with UI frameworks like React or Angular this is less of a problem, because you can route misses to the root index.html and let the framework handle it.  But this isn't a single page app.  It's a static site!</p>
<h1 id="heading-fixing-the-lack-of-cloudfront-url-rewrites">Fixing the lack of CloudFront URL rewrites</h1>
<p>We can use an edge lambda to do this rewrite for us.  <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/example-function-add-index.html">CloudFront has an example of how to do this</a>.  Using CDK I'll create an Edge Function to do the rewriting.  First we need the lambda:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> edgeFn = <span class="hljs-keyword">new</span> cloudfront.experimental.EdgeFunction(
  <span class="hljs-built_in">this</span>,
  <span class="hljs-string">`EdgeRedirect`</span>,
  {
    code: Code.fromInline(
      <span class="hljs-string">'"use strict";var n=Object.defineProperty;var u=Object.getOwnPropertyDescriptor;var c=Object.getOwnPropertyNames;var d=Object.prototype.hasOwnProperty;var a=(e,r)=&gt;{for(var i in r)n(e,i,{get:r[i],enumerable:!0})},o=(e,r,i,s)=&gt;{if(r&amp;&amp;typeof r=="object"||typeof r=="function")for(let t of c(r))!d.call(e,t)&amp;&amp;t!==i&amp;&amp;n(e,t,{get:()=&gt;r[t],enumerable:!(s=u(r,t))||s.enumerable});return e};var f=e=&gt;o(n({},"__esModule",{value:!0}),e);var l={};a(l,{handler:()=&gt;h});module.exports=f(l);var h=async e=&gt;{let r=e.Records[0].cf.request;return r.uri!=="/"&amp;&amp;(r.uri.endsWith("/")||r.uri.lastIndexOf(".")&lt;r.uri.lastIndexOf("/"))&amp;&amp;(r.uri=r.uri.concat(`${r.uri.endsWith("/")?"":"/"}index.html`)),r};0&amp;&amp;(module.exports={handler});'</span>
    ),
    handler: <span class="hljs-string">"index.handler"</span>,
    runtime: Runtime.NODEJS_16_X,
    logRetention: RetentionDays.ONE_DAY,
  }
);
</code></pre>
<p>Since Edge Lambda Functions don't support automatic bundling with esbuild like regular lambdas do, we use the <code>Code.fromInline</code> method to upload the inline code as our Lambda source.</p>
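<p>For readability, here's the same logic un-minified (a sketch typed with the <code>aws-lambda</code> package's <code>CloudFrontRequestHandler</code>, not the repo's actual source file):</p>
<pre><code class="lang-typescript">import { CloudFrontRequestHandler } from "aws-lambda";

// Append index.html to any URI that looks like a folder rather than a file
export const handler: CloudFrontRequestHandler = async (event) =&gt; {
  const request = event.Records[0].cf.request;
  const { uri } = request;
  const looksLikeFolder =
    uri.endsWith("/") || uri.lastIndexOf(".") &lt; uri.lastIndexOf("/");
  if (uri !== "/" &amp;&amp; looksLikeFolder) {
    request.uri = uri.concat(`${uri.endsWith("/") ? "" : "/"}index.html`);
  }
  return request;
};
</code></pre>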
<p>Next, we can update our CloudFront Distribution's props to include the edge function:</p>
<pre><code class="lang-typescript">{
  defaultRootObject: <span class="hljs-string">"index.html"</span>,
  defaultBehavior: {
    origin: <span class="hljs-keyword">new</span> S3Origin(destinationBucket, { originAccessIdentity }),
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    edgeLambdas: [
      {
        functionVersion: edgeFn.currentVersion,
        eventType: cloudfront.LambdaEdgeEventType.VIEWER_REQUEST,
      },
    ],
  },
}
</code></pre>
<p>⚡️ <strong>Are you making an internal (private) documentation page?</strong>  <em>See my last post on how to <a target="_blank" href="https://matt.martz.codes/protect-a-static-site-with-auth0-using-lambdaedge-and-cloudfront">Protect a Static Site with Auth0 Using Lambda@Edge and CloudFront</a></em></p>
<p>⚠️ When you use Lambda@Edge functions... you need to specify the region in your stack.  If you don't, you'll get an error like this <code>Error: stacks which use EdgeFunctions must have an explicitly set region</code> when you try to deploy.  Set your region like this:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> BlogDevCatalogStack(app, <span class="hljs-string">'BlogDevCatalogWatcherStack'</span>, {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT
  }
});
</code></pre>
<p>Now when we deploy, we can refresh our pages (or go directly to pages) without having <code>NoSuchKey</code> errors!</p>
<h1 id="heading-deploying-to-a-custom-domain">Deploying to a Custom Domain</h1>
<p>That's great but <code>&lt;random cloudfront url&gt;</code> is pretty boring.  AWS hosts domains too!  If you purchased a domain and have a hosted zone set up, you can have CloudFront use it.</p>
<p>First we look up the HostedZone by the domain name.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> domainName = <span class="hljs-string">`docs.<span class="hljs-subst">${hostDomain}</span>`</span>;
<span class="hljs-keyword">const</span> hostedZone = HostedZone.fromLookup(<span class="hljs-built_in">this</span>, <span class="hljs-string">`UIZone`</span>, {
  domainName: hostDomain,
});
</code></pre>
<p>Then we create a DNS Certificate (so we can use https):</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> certificate = <span class="hljs-keyword">new</span> DnsValidatedCertificate(<span class="hljs-built_in">this</span>, <span class="hljs-string">`EventCatalogCert`</span>, {
  domainName,
  hostedZone,
});
</code></pre>
<p>Then we pass these in to our CloudFront Distribution props:</p>
<pre><code class="lang-typescript">{
  defaultRootObject: <span class="hljs-string">"index.html"</span>,
  certificate, <span class="hljs-comment">// &lt;--</span>
  domainNames: [domainName], <span class="hljs-comment">// &lt;--</span>
  <span class="hljs-comment">// ...</span>
}
</code></pre>
<p>And finally we create an <code>A Record</code> where the target is for our CloudFront Distribution:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> ARecord(<span class="hljs-built_in">this</span>, <span class="hljs-string">`ARecord`</span>, {
  zone: hostedZone,
  recordName: domainName,
  target: RecordTarget.fromAlias(<span class="hljs-keyword">new</span> CloudFrontTarget(distribution)),
});
</code></pre>
<p>Putting it all together, I can host my personal documentation at <a target="_blank" href="https://docs.martz.dev">docs.martz.dev</a>!</p>
<p>⚠️ When using hosted zones... you need to specify the account in your stack.  If you don't, you'll get an error like this <code>Error: Cannot retrieve value from context provider hosted-zone since account/region are not specified at the stack level.</code> when you try to deploy.  Set your account like this:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">new</span> BlogDevCatalogStack(app, <span class="hljs-string">'BlogDevCatalogWatcherStack'</span>, {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT
  }
});
</code></pre>
<h1 id="heading-whats-next">What's Next?</h1>
<p>Hosting a static site is great, but we haven't even scratched the surface of <em>Event Driven Documentation</em> yet.  In parts 2 and 3 we'll automatically fetch API Gateway OpenAPI specs  + EventBridge Event Schemas and bundle them into our EventCatalog!</p>
<p>🙌 If anything wasn't clear or if you want to be notified on when I post parts 2 and 3... feel free to hit me up on Twitter <a target="_blank" href="https://twitter.com/martzcodes">@martzcodes</a>.</p>
]]></content:encoded></item></channel></rss>