
Deploying Web Applications at Full Sail University
A retrospective on WDV463 Deployment of Web Applications, the course where I learned to take a project from a local development environment all the way to a live, authenticated, multi-platform web application, built around a personal migraine tracking tool.
I built a migraine tracker for this course because I actually needed one. That turned out to matter more than I expected. Every deployment problem I hit in week three was worth solving because I wanted the app to exist, not because a milestone required it. Deployment of Web Applications was the first course where I was shipping something I would still have open in a browser tab after the grade was in.
Week 1: Deploying a Static Site and Making It Publicly Accessible
My thoughts at the time
The first week set the context for everything that followed. The assignment was to build and deploy a static website that was accessible from any device with an internet connection. The technical bar for a static deployment is lower than what came later in the course, but that simplicity made it a useful starting point. For the first time I was not running a local server and pretending it was production. I was putting something on the internet.
Getting the migraine log's static front-facing layer deployed early also gave me a foundation to iterate on. Seeing the project live, even in its most basic form, changed how I thought about the remaining weeks. Every addition would need to actually work in a real environment, not just on localhost.
Retrospective insight
This week reframed how I think about development environments. The difference between something that works locally and something that works in production is not always small, and the earlier you expose that gap by deploying, the less painful it is to close. I also gained a better understanding of what static hosting actually means from an infrastructure perspective, separating it cleanly in my mind from server-rendered or dynamically served applications. That distinction has come up repeatedly in conversations about architecture and cost tradeoffs since.
Week 2: API First Development and Multi-Device Deployment
My thoughts at the time
Week two introduced API First development as a design philosophy, which reframed how I thought about building the migraine log's backend. Rather than treating the API as a detail attached to the frontend, I started thinking about it as the core of the application that the frontend would consume. Creating a secondary interface for the application made that separation concrete. The migraine log needed to accept and return data in a way that worked regardless of what was consuming it.
The multi-device deployment theme added a practical constraint that raised the quality of the work. Knowing that the application needed to function properly across different screen sizes and network conditions pushed me to think more carefully about how data was fetched, how errors were handled, and how the interface adapted.
Retrospective insight
API First thinking is one of the more durable shifts in perspective that came out of this course. It is easy to build a tightly coupled application where the frontend and backend assumptions are baked into each other in ways that make future changes painful. Designing the API as a clean contract first makes everything downstream more flexible. For the migraine log specifically, this meant the data structure I defined in week two stayed stable through the rest of the course even as the frontend and authentication layer changed around it.
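A contract like that can be made concrete as a shared validation function. This is an illustrative sketch only, not the actual schema from the course; the field names (`date`, `severity`, `notes`) are assumptions:

```javascript
// Hypothetical sketch of an entry contract for a migraine log.
// Any consumer (web frontend, secondary interface) validates
// against the same rules, so the API stays the source of truth.
function validateEntry(entry) {
  const errors = []
  if (typeof entry.date !== 'string' || Number.isNaN(Date.parse(entry.date))) {
    errors.push('date must be an ISO 8601 string')
  }
  if (!Number.isInteger(entry.severity) || entry.severity < 1 || entry.severity > 10) {
    errors.push('severity must be an integer from 1 to 10')
  }
  if (entry.notes !== undefined && typeof entry.notes !== 'string') {
    errors.push('notes, if present, must be a string')
  }
  return { valid: errors.length === 0, errors }
}

console.log(validateEntry({ date: '2024-03-01', severity: 7 }))
// → { valid: true, errors: [] }
```

Once a contract like this exists, frontend and backend can evolve independently as long as both keep honoring it.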
Week 3: Building the Full Stack and Connecting All Three Layers
My thoughts at the time
Week three was the most technically dense week of the course. The goal was a fully deployed web application using modern client, server, and database technologies together. For the migraine log this meant the frontend interface, the Node.js backend, and the MongoDB database all needed to be running and connected in a live environment: not just on localhost, but actually deployed and accessible.
Getting all three layers working together in production surfaced issues that local development had hidden. Environment variables had to be configured correctly on the host, database connection strings had to point to a real instance rather than localhost, and error handling that had never been triggered locally suddenly mattered. Working through those issues on a project I genuinely cared about made the troubleshooting feel less frustrating and more purposeful.
Retrospective insight
This week gave me the most complete picture of full stack deployment I had encountered in the program to that point. Understanding that each layer of an application has its own deployment concerns, and that they interact in ways that require deliberate configuration, is knowledge that does not fully transfer from reading. You have to go through the process of something breaking in a production environment and then figure out why. The migraine log gave me a real enough context that the problems felt worth solving carefully, and the solutions stuck in a way that they might not have with a generic assignment.
Week 4: Adding User Authentication and Finishing the Application
My thoughts at the time
The final week added user authentication to the deployed application, which was the feature that made the migraine log feel like a real product rather than a demonstration. Without authentication, the data belonged to whoever was looking at the screen. With it, the log became something that could plausibly be used by a real person to track their own health data privately over time.
Implementing authentication in a deployed environment also introduced a category of security considerations that local development had completely obscured. Session persistence, token handling, secure routes, and protecting data so that one user cannot access another person's records all required more careful thought than any frontend or API work I had done before.
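The last of those concerns, keeping one user out of another's records, comes down to scoping every query by the authenticated identity. A minimal sketch of the idea, with an in-memory array standing in for MongoDB and illustrative user ids (this is not the course's actual code):

```javascript
// Stand-in for the database; in production these would be MongoDB documents.
const entries = [
  { userId: 'alice', severity: 6 },
  { userId: 'bob', severity: 3 },
]

// The handler never trusts a userId from the request body or query string.
// It filters by the id the auth layer attached after verifying the token,
// so a user can only ever see their own records.
function listEntriesFor(authenticatedUserId) {
  return entries.filter((entry) => entry.userId === authenticatedUserId)
}
```

The same principle applies to updates and deletes: the ownership check lives in the query itself, not in the client.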
Retrospective insight
Authentication was the last piece that made the application coherent as a product. It also introduced the most direct overlap with security concerns I have encountered in any course in the program. Understanding how user sessions work, how authentication state is maintained across requests in a stateless protocol, and what can go wrong when those pieces are implemented carelessly has influenced how I evaluate authentication solutions in every project since. The migraine log ended the course as something I was genuinely proud of, a working, deployed, authenticated application built around a real personal need.
Closing Thoughts
Deployment of Web Applications was one of the most perspective-shifting courses in the program. It took skills I had been building for months and asked a different question: not whether I could build something, but whether I could ship it. Learning to close the gap between local development and a live production environment, deploy across multiple devices, architect around an API First approach, and implement user authentication in a real deployed context all compounded into a significantly more complete understanding of what modern web development actually involves. The migraine log was a surprisingly fitting vehicle for all of it.
Where I Use This Now
Typing Force and this portfolio are both deployed on Heroku using the pattern I first learned here: environment variables in the host config, zero secrets in the repository, and a deployment triggered by a git push. The API First principle also shaped how I built Echo Effect: the API contract was defined before the frontend was built, which made both layers cleaner.
Code: Environment Config and Auth Patterns
The environment variable discipline that keeps credentials out of code:
// config.js - load from environment, never hardcode
const config = {
  port: process.env.PORT || 3000,
  mongoUri: process.env.MONGO_URI,
  jwtSecret: process.env.JWT_SECRET,
  nodeEnv: process.env.NODE_ENV || 'development',
}

if (!config.mongoUri || !config.jwtSecret) {
  throw new Error('Required environment variables are missing')
}

module.exports = config
And the basic shape of a protected API endpoint:
// middleware/auth.js
const jwt = require('jsonwebtoken')
const { jwtSecret } = require('../config')

function requireAuth(req, res, next) {
  const token = req.headers.authorization?.split(' ')[1]
  if (!token) return res.status(401).json({ error: 'No token provided' })
  try {
    req.user = jwt.verify(token, jwtSecret)
    next()
  } catch {
    res.status(401).json({ error: 'Invalid or expired token' })
  }
}

module.exports = { requireAuth }
FAQ
What is the difference between a development environment and a production environment? Development runs locally with verbose error output, hot reloading, and debug tools enabled. Production runs on a remote server with errors logged but not exposed to users, performance optimization on, and secrets managed separately from the codebase. Many bugs only appear in production because the two environments make different assumptions.
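One place that difference shows up concretely is error handling. A hedged sketch, not code from the course: the same formatter returns full detail in development and a generic message in production, which is exactly the kind of behavioral fork that makes some bugs production-only.

```javascript
// Illustrative: format an error differently per environment.
// env is passed explicitly here so the behavior is easy to see;
// a real app would read process.env.NODE_ENV once at startup.
function formatError(err, env = process.env.NODE_ENV) {
  return env === 'production'
    ? { error: 'Something went wrong' } // generic message; details go to server logs
    : { error: err.message, stack: err.stack } // full detail for local debugging
}
```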
What does API First mean as a development strategy? Design and agree on the API contract before building the frontend or the backend. This decouples the two sides so they can be developed in parallel, makes the data structure explicit from the start, and produces cleaner interfaces on both ends.
Why do authentication tokens expire? Expiring tokens limit the window of exposure if a token is stolen. A token that lives forever and is compromised gives an attacker permanent access. A short-lived token reduces the damage to a bounded window of time and forces clients to refresh regularly, which provides a natural checkpoint to revoke access.
What is Heroku and when should you use it? Heroku is a platform-as-a-service that manages server provisioning, scaling, and deployment infrastructure so you can focus on the application code. It is a good choice for small to medium applications where the control you would gain by managing your own servers is not worth the operational overhead. For high-traffic or cost-sensitive workloads, cloud infrastructure like AWS is usually the better fit.
Credits and Collaboration
A huge thank you to Esther Allin for designing the blog banner art! If you're looking for a professional digital media specialist, connect with her on LinkedIn!
Ryan VerWey
Full-stack developer, Army veteran, and founder of Echo Effect LLC. Currently serving as CTO at Ratespedia and building enterprise software for USSOCOM. Nearly two decades of shipping real products across defense, fintech, and the open web.