In today's rapidly evolving tech landscape, cloud computing has become an essential skill for aspiring and seasoned professionals alike. To bridge the gap between theory and practical application, I embarked on the Cloud Resume Challenge (CRC), a hands-on project designed by Forrest Brazeal. Through this blog post, I will share my journey, the steps I took, and the lessons I learned along the way.
What is the Cloud Resume Challenge?
The Cloud Resume Challenge (CRC) is a unique project that aims to deepen understanding of cloud technologies and DevOps practices through a comprehensive, step-by-step approach.
The purpose of the CRC is to help individuals build a cloud-based resume website from scratch, leveraging various cloud services and infrastructure-as-code practices. This journey involves a series of tasks that progressively increase in complexity, ensuring a well-rounded learning experience. Participants start by creating a static HTML resume and end by deploying it on the cloud using automation and CI/CD pipelines. The steps include:
Writing and formatting a static HTML/CSS resume.
Hosting the resume on a cloud storage service.
Implementing HTTPS using a CDN.
Creating a serverless backend to count page views.
Integrating the backend with a database.
Automating infrastructure deployment using infrastructure-as-code tools.
Setting up CI/CD pipelines to streamline updates and deployments.
Writing tests to ensure the functionality of the code.
Why I Took on the Cloud Resume Challenge: A Journey into DevOps
I decided to take on the Cloud Resume Challenge because of my growing interest in DevOps, automation, and integration. Despite having limited experience with cloud technologies, I was eager to gain hands-on practice to enhance my skills. The CRC provided a structured approach to learning and applying real-world cloud technologies and best practices, focusing on the process rather than just the end result.
Amazon Web Services
I chose AWS for this project because of its extensive range of services, robust infrastructure, and leading position in cloud computing. AWS's detailed documentation and active community support were invaluable resources throughout this journey.
My Solution
Repositories and Final Product
Front-end repo: disaa0/web-resume-front
Back-end repo: disaa0/web-resume-back
Final product: disaa.dev
Architecture diagram:
Setting up AWS
Secure Access Management with AWS IAM Identity Center
To securely manage access to my AWS resources, I set up the AWS IAM Identity Center. This service simplifies access management by allowing centralized control over permissions and roles. Here’s how I set it up:
Enable IAM Identity Center: I started by enabling IAM Identity Center in the AWS Management Console.
Create Users and Groups: Next, I created the users and groups needed for this project within IAM Identity Center. This step involved defining roles and assigning appropriate permissions so that each user has access to only the resources they need.
Assign Permissions: I assigned specific AWS IAM roles to these users and groups. By doing so, I ensured that access to AWS services and resources was controlled and monitored.
Configure SSO: Finally, I configured Single Sign-On (SSO) settings to streamline the login process. This configuration lets me access the AWS account and its applications with a single set of credentials, simplifying access management and improving the sign-in experience.
Setting up IAM Identity Center not only enhanced the security of the AWS environment but also provided a scalable and manageable way to control access as my project evolves.
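For day-to-day command-line access, the AWS CLI v2 can authenticate through IAM Identity Center as well. The snippet below is a minimal sketch of how such a profile might be configured and used; the profile name is a placeholder, not a value from my actual setup.

```bash
# Configure a CLI profile backed by IAM Identity Center (interactive prompts
# ask for the SSO start URL, SSO region, AWS account, and permission set).
aws configure sso --profile crc-admin

# Log in through the browser-based SSO flow to obtain temporary credentials.
aws sso login --profile crc-admin

# Verify that the temporary credentials work.
aws sts get-caller-identity --profile crc-admin
```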
Infrastructure as Code
What is IaC?
Infrastructure as Code (IaC) is a modern approach to managing and provisioning computing infrastructure through machine-readable configuration files rather than physical hardware configuration or interactive configuration tools. This method ensures consistent, repeatable, and automated deployment of infrastructure resources, significantly enhancing efficiency and reliability.
Terraform
Terraform, developed by HashiCorp, is a powerful IaC tool that allows users to define and provision data center infrastructure using a high-level configuration language. With Terraform, you can manage a wide range of resources across multiple cloud providers using a single configuration language, making it incredibly versatile and efficient.
While AWS SAM (Serverless Application Model) is a great tool for building serverless applications on AWS, I chose to use Terraform for this project. I made this decision to gain more experience with a versatile IaC tool that supports multiple cloud providers.
As part of the challenge, I embarked on learning Terraform from scratch. I started by familiarizing myself with the basics of Terraform syntax and concepts, including providers, resources, modules, and state management. Online tutorials, official documentation, and community forums were invaluable in helping me understand how to write and apply Terraform configurations effectively.
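To give a flavour of what a Terraform configuration looks like, here is a minimal, hypothetical example that declares the AWS provider and a single S3 bucket; the region and bucket name are placeholders, not values from this project.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

# A single resource: Terraform creates, updates, or destroys this bucket
# so that the real infrastructure matches the configuration.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-resume-bucket" # placeholder name; must be globally unique
}
```

Running terraform plan previews the changes Terraform would make, and terraform apply carries them out.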
Using an S3 Bucket as Terraform Back-end
One important part of using Terraform is managing its state file, which tracks the resources Terraform manages. For this project, I configured Terraform to use an S3 bucket as the back-end for state management, so the infrastructure state can be accessed remotely rather than living only on my local machine.
Here’s a brief overview of the steps I took to configure Terraform to use an S3 bucket as the back-end:
Create an S3 Bucket: I created an S3 bucket in AWS to store the Terraform state file.
Configure the Backend in Terraform: I updated my Terraform configuration to specify the S3 bucket as the back-end. This involved adding a `backend` block inside the `terraform` block of my configuration file, as shown in the sketch below.
Initialize Terraform: After configuring the backend, I ran `terraform init` to initialize the configuration and migrate the state file to the S3 bucket.
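For reference, a backend configuration along these lines is what the terraform block ends up looking like; the bucket name, key, and region are placeholders rather than my real values.

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state-bucket"      # placeholder: the state bucket created earlier
    key     = "cloud-resume/terraform.tfstate" # placeholder path of the state file inside the bucket
    region  = "us-east-1"
    encrypt = true # encrypt the state file at rest
  }
}
```

When terraform init is run after adding this block, Terraform offers to copy any existing local state into the bucket.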
By setting up Terraform with an S3 back-end, I ensured that my infrastructure state is stored securely and can be accessed and managed remotely, providing a robust foundation for managing my cloud resources.
Setting up the Website in AWS
Creating the Website
To simplify the maintenance and updating process of my website, I developed a template-based resume builder using Python and Jinja2. This approach allows me to update my resume by editing a YAML text file, eliminating the need to manually alter the HTML each time. The YAML file contains all the necessary information, such as my personal details, work experience, education, skills, and projects. When I make changes to the YAML file, the Python script reads the data and uses Jinja2 templates to generate the HTML for my resume automatically. This method not only saves time but also ensures consistency across different sections of the resume.
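My builder has more moving parts, but the core idea can be sketched in a few lines of Python. The file names below (resume.yaml, resume.html.j2, dist/index.html) are illustrative, not the actual names in my repository.

```python
import yaml
from jinja2 import Environment, FileSystemLoader

# Load the resume data from a YAML file (personal details, experience, skills, ...).
with open("resume.yaml", encoding="utf-8") as f:
    resume = yaml.safe_load(f)

# Render the HTML template with that data.
env = Environment(loader=FileSystemLoader("templates"), autoescape=True)
template = env.get_template("resume.html.j2")
html = template.render(resume=resume)

# Write the generated resume to the site's output directory.
with open("dist/index.html", "w", encoding="utf-8") as f:
    f.write(html)
```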
Additionally, I have included features like PDF generation and GitHub Pages deployment, although we won't be using them for this project. These features make the resume more flexible and adaptable for different job applications. The entire project is well-documented, with clear instructions on how to set it up and use it. You can check out the code for this project in my GitHub repository, where I have also provided examples and templates to help you get started. This project has been a great learning experience, and I hope it can be useful for others looking to streamline their resume updating process.
S3 Bucket Static Website
To make my resume accessible online, I uploaded the static website files to an Amazon S3 bucket. First, I created a new S3 bucket and uploaded the HTML, CSS, and other necessary files.
I then configured the S3 bucket for static website hosting, setting the index document and error document. The index document is the main HTML file, and the error document handles errors like a 404 page not found.
Next, I set the permissions to make the content publicly accessible by modifying the bucket policy. I ensured the policy allowed public read access while maintaining security.
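A bucket policy along these lines grants public read access to every object in the bucket; the bucket name below is a placeholder for the real one.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-resume-bucket/*"
    }
  ]
}
```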
Finally, I tested the setup by accessing the website URL provided by S3 to ensure everything worked correctly. The website loaded properly, and all links and assets functioned as expected.
Securing the Website with HTTPS and CloudFront
I began by creating a CloudFront distribution to act as a content delivery network (CDN), distributing my static website files globally. I set my S3 bucket as the origin, allowing CloudFront to fetch the website files from this bucket.
To ensure secure data transfer, I generated an SSL/TLS certificate through AWS Certificate Manager (ACM). I requested a new certificate for my domain and validated it using DNS validation. Once the certificate was issued, I associated it with my CloudFront distribution, ensuring the website was served over a secure HTTPS connection to protect user data.
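In Terraform terms, requesting a DNS-validated certificate can look roughly like the sketch below; the domain is a placeholder. One detail worth noting is that certificates used with CloudFront must be issued in us-east-1, which is why the provider alias pins that region.

```hcl
# CloudFront only accepts ACM certificates issued in us-east-1.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "site" {
  provider          = aws.us_east_1
  domain_name       = "example.dev" # placeholder domain
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

# The certificate becomes usable once the DNS validation records published by
# ACM have been created at the DNS provider and the certificate is issued.
```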
Finally, I tested the entire setup by accessing the website through the CloudFront URL. I confirmed that the website loaded correctly over HTTPS and that all links and assets worked as expected.
Point domain name to CloudFront
After configuring CloudFront, I updated my DNS provider settings to point my domain to the CloudFront distribution. This involved modifying the DNS records to use the CloudFront-provided domain name, ensuring that all traffic to my website is routed through the CDN. I also set up automatic invalidation of cached content whenever updates are made to the website, ensuring that users always receive the most up-to-date version of the site. We will discuss this in more detail later.
Back-End API
Building a Serverless Back-end: DynamoDB, API Gateway, and Lambda
The next part of the challenge involves creating a visitor counter for the website. To do this, I used Amazon DynamoDB, API Gateway, and AWS Lambda.
The back-end is serverless: instead of managing servers, the infrastructure automatically scales up or down based on demand. With a serverless architecture, you only pay for the compute time you use, and there's no need to worry about server maintenance or capacity planning.
DynamoDB:
First, I set up a DynamoDB table to store the visitor count. DynamoDB is a fully managed NoSQL database service provided by AWS that offers fast and predictable performance along with seamless scalability. To begin, I created a table specifically designed to hold the visitor count data. Then, I configured the table with the necessary read and write capacity units to handle the expected traffic.
Overall, setting up the DynamoDB table was straightforward, thanks to AWS's comprehensive documentation. With the table in place, I was ready to move on to the next step: integrating it with API Gateway and AWS Lambda to create a fully functional visitor counter for the website.
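As a rough sketch, the Terraform definition of such a table might look like the following; the table name, key, and capacity values are illustrative rather than the ones I actually used.

```hcl
resource "aws_dynamodb_table" "visitor_count" {
  name           = "visitor-count" # placeholder table name
  billing_mode   = "PROVISIONED"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "id"

  # Only the key attribute is declared up front; DynamoDB is schemaless for the rest.
  attribute {
    name = "id"
    type = "S"
  }
}
```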
API Gateway:
Next, I configured API Gateway to act as a front door for the visitor counter service. API Gateway allows us to create, publish, maintain, monitor, and secure APIs at any scale. I created a RESTful API with endpoints that would interact with the DynamoDB table. These endpoints included methods for incrementing the visitor count and retrieving the current count.
Using JavaScript, I crafted functions to send HTTP requests to these API endpoints. These requests included methods such as GET for retrieving data and POST for sending data updates.
- Handling Responses: Upon receiving responses from the API, I implemented logic to handle and display data on the front end accordingly. This involved parsing JSON data and dynamically updating elements on the web page based on the retrieved information.
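A stripped-down version of that front-end logic might look like the snippet below. The API URL, element ID, and the count field in the response body are placeholders and assumptions for illustration, not the exact names used in my repository.

```javascript
// Placeholder endpoint; the real URL comes from the API Gateway stage.
const API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/count";

async function updateVisitorCount() {
  try {
    // Increment the counter, then read the current value back.
    await fetch(API_URL, { method: "POST" });
    const response = await fetch(API_URL, { method: "GET" });
    const data = await response.json();

    // Update the element on the page that displays the count.
    document.getElementById("visitor-count").textContent = data.count;
  } catch (err) {
    console.error("Could not update visitor count:", err);
  }
}

updateVisitorCount();
```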
Lambda:
To handle the logic behind these API calls, I wrote AWS Lambda functions employing Python as the runtime environment. Lambda is a serverless compute service that runs code in response to events and automatically manages the underlying compute resources. I created a Lambda function to increment the visitor count in DynamoDB each time the API endpoint is called. Another Lambda function was created to retrieve the current visitor count from the database.
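The actual functions live in the back-end repository; purely as an illustration of the pattern, a minimal increment handler could look like this. The table name, key, and attribute names are placeholders.

```python
import json
import os

import boto3

# Table name comes from an environment variable (placeholder default for illustration).
TABLE_NAME = os.environ.get("TABLE_NAME", "visitor-count")

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    # Atomically add 1 to the counter item and return the new value.
    # "count" is a DynamoDB reserved word, hence the expression attribute name.
    result = table.update_item(
        Key={"id": "visitor_count"},
        UpdateExpression="ADD #count :inc",
        ExpressionAttributeNames={"#count": "count"},
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    new_count = int(result["Attributes"]["count"])

    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # let the website's origin call the API
        "body": json.dumps({"count": new_count}),
    }
```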
- Local Testing of Python Code: To make sure my Python code worked before deployment, I tested it locally. I used AWS mocking with `moto` to create a simulated AWS environment, which helped me thoroughly test and debug the Lambda functions. This way, I could check the logic and performance of my code in a controlled setting and catch any issues early. I also used unit tests to verify each part of the code, ensuring everything worked as expected. Once the local tests were successful, I deployed the Lambda functions to AWS, confident they would perform well.
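A pytest test along these lines is roughly what that looks like. It assumes moto 5.x (which exposes the mock_aws decorator) and imports the hypothetical handler module sketched above.

```python
import boto3
from moto import mock_aws


@mock_aws
def test_increment_visitor_count(monkeypatch):
    # Dummy credentials and region so boto3 never reaches out to real AWS.
    monkeypatch.setenv("AWS_ACCESS_KEY_ID", "testing")
    monkeypatch.setenv("AWS_SECRET_ACCESS_KEY", "testing")
    monkeypatch.setenv("AWS_DEFAULT_REGION", "us-east-1")
    monkeypatch.setenv("TABLE_NAME", "visitor-count")

    # Create the mocked table that the handler expects.
    dynamodb = boto3.resource("dynamodb")
    dynamodb.create_table(
        TableName="visitor-count",
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )

    # Import inside the test so the handler's boto3 client is created under the mock.
    from increment_handler import lambda_handler  # hypothetical module name

    first = lambda_handler({}, None)
    second = lambda_handler({}, None)

    assert first["statusCode"] == 200
    assert '"count": 1' in first["body"]
    assert '"count": 2' in second["body"]
```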
CI/CD Implementation
What is CI/CD?
Continuous Integration and Continuous Deployment (CI/CD) are principles and practices that help teams deliver code changes more frequently and reliably. CI/CD automates building, testing, and deploying applications, reducing manual errors, providing consistent feedback, and enabling faster iterations. By regularly integrating code changes and deploying them automatically, teams can ensure their software is always ready to be deployed.
A pipeline in the context of CI/CD is a series of automated processes that enable the continuous delivery of software. It typically includes stages such as:
Source Code Management: This stage involves version control systems like Git, where code changes are tracked and managed.
Build: The code is compiled and built into an executable format.
Testing: Automated tests are run to ensure the code is functioning as expected. This can include unit tests, integration tests, and other types of testing.
Deployment: The application is deployed to a staging or production environment.
Monitoring and Feedback: The deployed application is monitored for performance and errors, and feedback is provided to the development team for further improvements.
Implementing GitHub Actions
To automate the CI/CD pipelines, I utilized GitHub Actions, a powerful automation tool provided by GitHub. GitHub Actions allows us to define workflows in YAML files within our repository. These workflows are triggered by various events such as code pushes, pull requests, or scheduled intervals. By leveraging GitHub Actions, we can automate repetitive tasks, enforce code quality standards, and streamline our development process.
For both back-end and front-end repositories, I created YAML files to define the workflow for our CI/CD pipelines. These files specify the different jobs that need to be executed, such as building the application, running tests, and deploying the code. Each job consists of multiple steps, which can include checking out the code, setting up the environment, installing dependencies, and running scripts.
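A trimmed-down version of such a workflow might look like the following; the file paths and Python version are illustrative rather than taken from my repositories.

```yaml
name: back-end CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run unit tests
        run: pytest
```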
Setting up AWS Credentials using OIDC
For secure AWS access within our GitHub Actions workflows, I configured AWS credentials using GitHub's OpenID Connect (OIDC) provider for AWS. OIDC provides a secure way to obtain temporary AWS credentials without embedding long-lived credentials in our repository. This ensures that our workflows have the necessary permissions to interact with AWS services while maintaining a high level of security. By using OIDC, we can dynamically generate credentials that are scoped to the specific actions being performed, reducing the risk of unauthorized access.
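In practice this comes down to granting the workflow permission to request an ID token and letting the official credentials action exchange it for a short-lived role session. The sketch below shows the general shape; the role ARN is a placeholder.

```yaml
name: deploy

on:
  push:
    branches: [main]

permissions:
  id-token: write   # allow the job to request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy # placeholder role ARN
          aws-region: us-east-1
```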
Back-End Pipeline
Testing:
The GitHub Actions workflow runs tests for the Lambda functions whenever code changes are pushed to the repository. These tests ensure that new code does not break existing functionality. The tests are executed in a simulated AWS environment using tools like `moto`, which allows for local mocking of AWS services. This automated testing helps catch potential issues early in the development process.
Deployment:
Upon successful testing, the infrastructure is deployed to AWS using Terraform. The deployment process begins with Terraform initializing the configuration files and downloading the necessary provider plugins via `terraform init`. Next, Terraform applies the changes with `terraform apply`, provisioning the necessary AWS resources such as Lambda functions, API Gateway endpoints, DynamoDB tables, and other services required by the application.
Front-End Pipeline
Testing:
Unit tests for the front-end are executed automatically whenever changes are pushed to the repository. These tests verify that the individual components of our front-end application exist and are valid. By running these tests as part of our CI/CD pipeline, we can ensure that our front-end code is robust and free from regressions.
Build:
If the tests pass, the website is built as part of the workflow. The build step runs the resume builder to generate the front-end site from the base template and also generates a PDF version of it.
Deployment:
After building, the website is uploaded to an S3 bucket using AWS CLI commands. Once the website is uploaded, a CloudFront cache invalidation is triggered to ensure that the latest version of the website is served to users. By invalidating the cache, we can ensure that users always see the most recent version of our website.
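Conceptually, that deployment step boils down to two CLI calls like the ones below; the bucket name and distribution ID are placeholders that the real workflow pulls from configuration.

```bash
# Sync the freshly built site to the S3 bucket, removing files that no longer exist.
aws s3 sync ./dist "s3://my-example-resume-bucket" --delete

# Invalidate the CloudFront cache so visitors get the new version immediately.
aws cloudfront create-invalidation \
  --distribution-id E1234567890ABC \
  --paths "/*"
```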
Final thoughts
The Cloud Resume Challenge has been a transformative experience, pushing me to dive deep into cloud technologies and DevOps practices. It not only helped me build a cloud-based resume website but also significantly boosted my confidence in working with cloud environments.
The journey was filled with learning moments, from setting up secure access controls with AWS IAM Identity Center to mastering Terraform for Infrastructure as Code, and implementing CI/CD pipelines using GitHub Actions. These skills are now invaluable tools in my software engineering arsenal, providing a strong foundation for future projects.
For anyone passionate about cloud technologies and looking to level up their skills, I wholeheartedly recommend embarking on the Cloud Resume Challenge. It's more than a project—it's an opportunity to grow, learn, and prepare for the evolving demands of the tech industry.
If you have any questions or would like to discuss the Cloud Resume Challenge further, feel free to contact me. I'm always open to connecting with fellow tech enthusiasts and sharing insights from this journey.