
Database Systems at Full Sail University
A retrospective on CTI3622 Database Systems, the course where I worked with MySQL, MongoDB, and AWS to understand how scalable, secure databases are built, managed, and protected in real cloud environments.
Before Database Systems, a database was where data lived. I knew how to query it, roughly, and I knew not to put secrets in the code. What I did not know was how fragile that mental model was. The week I SSHed into an actual remote server to back up a live database, I realized I had never had to think about where the data physically lived, who could access it, or what happened if the machine stopped responding.
Week 1: Cloud Infrastructure and Getting AWS Under Control
My thoughts at the time
Week one dropped me directly into AWS, which was equal parts exciting and disorienting. The task was straightforward on paper: restore a previously backed-up virtual machine from a machine image, what AWS calls an AMI, and confirm the resulting EC2 instance was configured correctly. The concept was not complicated. The execution, however, required navigating an AWS console that felt like a cockpit the first time you sit in it.
Every panel had options I did not fully understand yet, and every action felt more permanent than clicking undo in a text editor ever does. I had to slow down, read carefully, and actually think through each step before committing to it. That deliberateness was uncomfortable at first, but I quickly realized it was the right instinct for infrastructure work.
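The console was where I did the work, but the same restore can be sketched with the AWS CLI. This is a minimal illustration, not the exact steps from the course; the AMI ID and key name are placeholders, and a real run needs configured AWS credentials.

```shell
# Placeholder AMI ID and key name; assumes AWS credentials are configured.
# Launch a new EC2 instance from the restored machine image:
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name course-key

# Confirm the instance came up and is running:
aws ec2 describe-instances \
  --filters "Name=image-id,Values=ami-0123456789abcdef0" \
  --query "Reservations[].Instances[].State.Name"
```

Seeing the same operation as two explicit commands is part of why the console stopped feeling like a cockpit: every panel maps to an API call.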
Retrospective insight
This week permanently changed how I think about where web applications actually live. Before this course, the server was an abstraction. After configuring an EC2 instance by hand, it became a real machine with a real operating system that I had real responsibility for. Learning about AMIs and snapshot restores also introduced me to disaster recovery as a genuine engineering concern rather than something buried in documentation I would never read. Every time I hear someone mention cloud infrastructure in a technical conversation now, I have hands-on context to anchor the discussion, and that is worth a lot.
Week 2: CRUD, MySQL, MongoDB, and the Relational vs. Non-Relational Question
My thoughts at the time
Week two got into the databases themselves, and it started by answering a question I had been vaguely curious about for a long time: why do some applications use SQL and others use MongoDB? Reading about the difference abstractly never fully clicked for me. Actually writing CRUD operations in both MySQL and MongoDB during the same week made the tradeoffs tangible in a way that no article had managed to do.
I could feel the difference. MySQL wanted a defined schema, consistent structure, and relationships expressed through foreign keys. MongoDB let me throw a document in and figure out the shape later. Each approach had moments where it felt clearly right and moments where it created friction. Sitting with both of them side by side made the choice between them feel like an actual engineering decision rather than a preference.
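The friction difference shows up even in the smallest possible operation. Here is the same "create" step in both systems, as I might run it from a shell; the database, table, collection, and credentials are placeholder names, not the course's actual setup.

```shell
# Placeholder database, user, and field names throughout.
# MySQL: the row must match a schema that already exists:
mysql -u app_user -p shop_db \
  -e "INSERT INTO users (name, active) VALUES ('Ada', TRUE);"

# MongoDB: the document's shape is decided at write time:
mongosh "mongodb://localhost:27017/shop_db" \
  --eval "db.users.insertOne({ name: 'Ada', active: true })"
```

MySQL rejects the insert if the table or columns do not exist; MongoDB happily creates the collection on first write. That single difference is most of the schema-first versus schema-later tradeoff in miniature.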
Retrospective insight
This week gave me a framework I still use. Before, I treated the SQL versus NoSQL choice as something more senior developers made. After this course, I understood what each system optimizes for and where each one starts to struggle. Understanding CRUD at the database level also made me more confident debugging data issues in later projects because I was not relying entirely on an ORM to hide what was happening underneath. When something goes wrong in a query or a document lookup, knowing what the database is actually doing is the difference between a fast fix and a long investigation.
Week 3: SSH, Backups, and the Concepts Behind Replication and Sharding
My thoughts at the time
Week three was the most operationally dense week of the course. SSHing into an AWS server to perform live database backups and restores felt like the first time I was doing something that directly mirrored what a backend or DevOps engineer does on the job. The import and export workflows for both MySQL and MongoDB had just enough friction to make the process stick rather than feel like a checkbox exercise.
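The backup workflow looked roughly like the following sketch. Hostnames, key paths, and database names are placeholders, not the course's real infrastructure.

```shell
# Placeholder host, key path, and database names throughout.
# Dump MySQL over SSH and keep the compressed dump locally:
ssh -i ~/.ssh/course-key.pem ubuntu@ec2-host \
  "mysqldump --single-transaction shop_db | gzip" > shop_db.sql.gz

# MongoDB equivalent, archived to a single compressed file:
ssh -i ~/.ssh/course-key.pem ubuntu@ec2-host \
  "mongodump --db shop_db --archive --gzip" > shop_db.archive.gz

# Restores mirror the dumps:
gunzip < shop_db.sql.gz | mysql shop_db
mongorestore --archive=shop_db.archive.gz --gzip
```

The piping matters: the dump never sits uncompressed on the remote disk, and the only thing crossing the network is the encrypted SSH stream.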
Replication and sharding came toward the end of the week and were the most conceptually challenging part of the course for me. I understood what they accomplished at a high level, distributing data across multiple nodes for redundancy and performance, but reasoning through how a write propagates across replicas or how a sharding key determines where data lives took more mental effort than I expected. I wished the curriculum had spent more time on sharding key selection specifically, since choosing the wrong key is one of the most costly mistakes you can make in a production database and there was not much guidance on how to reason through that decision.
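Sharding finally clicked for me when I thought of the shard key as a deterministic function from key to node. This toy bash sketch is my own illustration, not MongoDB's actual hash function; it only shows the core idea that hashing a key and taking a modulo always routes the same key to the same shard.

```shell
# Toy illustration of hashed shard key routing -- NOT MongoDB's real
# hashing, just the idea that a key deterministically selects a node.
shard_for() {
  local key="$1" shard_count="$2" sum=0 i code
  # Crude "hash": sum the character codes of the key
  for ((i = 0; i < ${#key}; i++)); do
    printf -v code '%d' "'${key:i:1}"
    sum=$((sum + code))
  done
  # Modulo maps the hash onto one of the available shards
  echo $((sum % shard_count))
}

shard_for "user_42" 4   # → 0 (same key always routes to the same shard)
shard_for "user_43" 4   # → 1 (a different key may land elsewhere)
```

The toy also hints at why key selection is costly to get wrong: if every write shares the same key prefix and the hash is not uniform, one shard absorbs all the traffic while the others idle.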
Retrospective insight
This week shifted my mental model of databases from passive storage buckets to infrastructure components with real operational requirements. Replication and sharding are concepts that come up constantly when reading about system design, scaling decisions, and database architecture. Having hands-on experience with what those terms actually mean made later reading significantly easier. The backup and restore workflow also gave me genuine respect for operational discipline. Backups feel like an afterthought until you have personally restored a machine from a snapshot and understood exactly what would have been lost without it.
Week 4: User Permissions, Security Testing, and AWS RDS
My thoughts at the time
The final week brought security and cloud-hosted databases together in a way that felt very close to how real systems are managed. Configuring user permissions in both MongoDB and MySQL forced me to think carefully about who should have access to what and why. When you are the only person working on a local database, it is easy to default to giving everything administrative access and moving on. This week made it clear how dangerous that habit becomes the moment more than one person, or more than one service, is touching the same system.
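Scoped users look like this in both systems. This is a generic least-privilege sketch, not the course's exact accounts; user names, database names, and passwords are placeholders.

```shell
# Placeholder user names, database names, and passwords throughout.
# MySQL: a read-only user scoped to a single database:
mysql -u root -p -e "
  CREATE USER 'report_reader'@'%' IDENTIFIED BY 'change-me';
  GRANT SELECT ON shop_db.* TO 'report_reader'@'%';
"

# MongoDB: the same idea using a built-in role:
mongosh admin --eval "
  db.createUser({
    user: 'report_reader',
    pwd: 'change-me',
    roles: [{ role: 'read', db: 'shop_db' }]
  })
"
```

Either way, if those credentials leak, the attacker can read one database and nothing else, which is the entire point.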
Creating an AWS RDS instance and connecting it to the EC2 VM from week one was a satisfying way to close the course. The two tools I had been working with across the month came together into something that actually resembled a deployed, production-adjacent architecture. It was one of those rare moments in a course where the pieces visibly locked into place.
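Provisioning that RDS piece from the CLI looks roughly like this. Instance identifiers, credentials, and the endpoint hostname are placeholders, and a real run also needs security groups that allow the EC2 instance through.

```shell
# Placeholder identifiers and credentials; assumes a configured VPC
# and security groups that permit traffic from the EC2 instance.
aws rds create-db-instance \
  --db-instance-identifier course-mysql \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'change-me'

# From the EC2 instance, connect using the endpoint RDS reports:
mysql -h course-mysql.abc123.us-east-1.rds.amazonaws.com -u admin -p
```

The managed-versus-self-hosted tradeoff is visible right in those commands: RDS handles the OS, patching, and backups, and in exchange you never SSH into the database machine at all.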
Retrospective insight
The security portion of this week has had more lasting impact than almost anything else in the course. Understanding the principle of least privilege at the database level translated directly into how I think about access control in every system I have worked on since. The AWS RDS work was equally valuable because managed database services are the default in most production environments I have encountered. Knowing how to provision one, configure connectivity, and reason through the tradeoffs between a managed cloud database and a self-hosted instance is knowledge that applies immediately in a professional context. It made conversations about infrastructure in internship and freelance work feel like familiar territory rather than something I was catching up on.
Closing Thoughts
Database Systems was one of the more grounding courses in the program. It pulled me out of the application layer and into the systems that applications depend on to function. Working with real cloud infrastructure, comparing database paradigms side by side, performing backups and restores over SSH, and locking down permissions gave me a much more complete picture of what building something production-ready actually involves. MySQL, MongoDB, and AWS all showed up again in real projects not long after this course ended, and having a hands-on foundation made the learning curve noticeably shorter each time.
Where I Use This Now
The principle of least privilege from week four is part of every database I configure now. PUG Empire uses Supabase, which has its own row-level security model, and the discipline of thinking carefully about who can read or write what came directly from this course. The SQL versus NoSQL mental model is also a real evaluation I run before starting new projects: Cairo Photography needed structured, relational data; other projects need document flexibility. Knowing the difference before picking a tool matters.
Code: Querying Both Paradigms
The SQL pattern for structured, relational queries:
-- Get all orders placed by active users in the last 30 days
SELECT
  users.name,
  orders.created_at,
  orders.total
FROM orders
JOIN users ON orders.user_id = users.id
WHERE users.active = TRUE
  AND orders.created_at >= NOW() - INTERVAL 30 DAY
ORDER BY orders.created_at DESC;
The MongoDB equivalent for document-oriented queries:
// Same query logic, document-oriented
const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)
const results = await db.collection('orders').aggregate([
  {
    $lookup: {
      from: 'users',
      localField: 'userId',
      foreignField: '_id',
      as: 'user'
    }
  },
  {
    $match: {
      'user.active': true,
      createdAt: { $gte: thirtyDaysAgo }
    }
  },
  { $sort: { createdAt: -1 } }
]).toArray()
FAQ
When should you use a SQL database versus MongoDB? Use SQL when your data has clear, stable relationships and you need complex queries across multiple tables. Use MongoDB when your data structure varies between records, you need to iterate on the schema quickly, or you are storing documents that do not fit naturally into rows and columns. Both are good tools for different problems.
What is the principle of least privilege in database security? Every user, service, or application connecting to a database should have access only to the specific data it needs and no more. An API that reads blog posts should not have write access. An analytics service should not be able to delete records. Scoping permissions tightly limits the damage if credentials are compromised.
What is SSH and why do backend developers use it? SSH (Secure Shell) is an encrypted protocol for connecting to remote servers. Backend and DevOps engineers use it to configure servers, transfer files, run database commands, and troubleshoot production systems without exposing credentials over an unencrypted connection.
What is database replication and why does it matter? Replication is the process of maintaining duplicate copies of a database on multiple servers. If the primary server fails, a replica can take over without data loss. It also allows read traffic to be distributed across replicas to reduce load on the primary.
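In MongoDB terms, standing up a minimal replica set looks like the sketch below. This is a generic local illustration, not anything from the course; ports, data paths, and the set name are placeholders, and a production set would run each member on a separate host.

```shell
# Placeholder ports, paths, and set name; a real deployment puts each
# member on its own host. Start each mongod with the same replica set name:
mongod --replSet rs0 --port 27017 --dbpath /data/node1 --fork --logpath /data/node1.log
mongod --replSet rs0 --port 27018 --dbpath /data/node2 --fork --logpath /data/node2.log
mongod --replSet rs0 --port 27019 --dbpath /data/node3 --fork --logpath /data/node3.log

# Then initiate the set once, from any member:
mongosh --eval "
  rs.initiate({
    _id: 'rs0',
    members: [
      { _id: 0, host: 'localhost:27017' },
      { _id: 1, host: 'localhost:27018' },
      { _id: 2, host: 'localhost:27019' }
    ]
  })
"
```

Once initiated, the members elect a primary among themselves, and writes to the primary propagate to the secondaries automatically.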
Credits and Collaboration
A huge thank you to Esther Allin for designing the blog banner art! If you're looking for a professional digital media specialist, connect with her on LinkedIn!
Ryan VerWey
Full-stack developer, Army veteran, and founder of Echo Effect LLC. Currently serving as CTO at Ratespedia and building enterprise software for USSOCOM. Nearly two decades of shipping real products across defense, fintech, and the open web. More about Ryan or see the work.