Tuesday, July 1, 2025

CST 499 Week 1

Since my last update (the status update at the end of CST489), I have thankfully made significant progress on the project. In fact, the core deliverables outlined in my project scope are essentially complete.

As a brief reminder, I'm building, as a solo project, an online staff portal for Nyamboyo Technical School (NTS), an African nonprofit organization that strives to teach rural Kenyan youth and young adults modern technical and vocational skills.

Here is the progress I made since the previous update:

• I finally deployed the portal via Render as a single, monolithic service, as planned. I had been experiencing significant challenges configuring my project structure in a way that wouldn't trigger Render's error for Node applications. Specifically, Render expects the 'app' folder to be in the same directory as the Next.js installation, which wasn't really feasible for me given the custom structure I had used from the beginning of development. Even after refactoring everything into a more Render-friendly structure, the deployment still failed. That's when I realized I could containerize the project with Docker and deploy it to Render as a Docker app rather than a Node app, sidestepping Render's strict structural expectations. After containerizing the project with Docker, I successfully deployed it to Render.
• I also completed and deployed the first version of the third and final core deliverable for my capstone, the student health application. It was successfully plugged into the portal's authentication system and deployed to the live service after preliminary testing.
• In deploying the student health app, I was also able to establish a proper continuous deployment pipeline for myself and future developers who work with NTS, solving the maintainability issue that NTS' original staff portal had (more details of this are in my proposal). Future developers will now simply branch off of main to build their features and configure the environment variables. Then, when done with development and testing, they merge their branch into main, which automatically deploys the changes to the live service.
• The portal was of course tested further to confirm security, stability and functionality with the two live apps.
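
The containerization fix might look something like the following minimal Dockerfile. To be clear, the base image, commands, and port here are illustrative rather than my exact configuration, but with a file of this shape at the repo root, Render builds the project as a Docker app and its Node structural expectations no longer apply:

```dockerfile
# Hypothetical Dockerfile for a Next.js app (versions and commands are illustrative)
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci            # install exact dependencies from the lockfile
COPY . .
RUN npm run build     # build the Next.js production bundle
EXPOSE 3000
CMD ["npm", "start"]  # serve the built app
```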
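
The branch-and-merge workflow described above can be simulated in a throwaway local repo (the path and branch name are illustrative, and there is no remote here, so nothing actually deploys):

```shell
# Simulate the pipeline in a disposable repo under /tmp
rm -rf /tmp/nts-demo
git init -q /tmp/nts-demo && cd /tmp/nts-demo
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial commit on the default branch"

git checkout -q -b feature/health-app       # branch off for the new feature
echo "health app" > app.txt && git add app.txt
git commit -q -m "Add student health app"   # develop, configure env vars, test...

git checkout -q -                           # back to the default branch
git merge -q feature/health-app             # merging is what triggers the auto-deploy
```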

Looking forward, the next stage of my capstone is extensive user testing, debugging, and improvements to UI/UX. I know for certain that the portal can benefit from more features, such as expanding those in the profile page and perhaps adding specialized features for admins (for instance, to manage staff accounts).

I am currently in contact with my supervisor (the 'client') to set up a live training session with school staff in Kenya, to help them learn how to use the portal and discuss further improvements.

Wednesday, April 23, 2025

CST438 Week 8

This course was a ton of work, but also very fulfilling and rewarding in terms of knowledge and skills gained. If I had to decide on my five favorite new things I learned, they would be as follows.

1. Git 

I believe that Git is criminally underrated among the skills software engineers should master. Most people only know the very basics, such as git push, pull, commit, add, and fetch, without understanding the more complex features. When I first came into the course, I was in that boat, and it was a blast running into errors with Git and learning the skills necessary to debug them. The crazy part is that Git goes so deep that the course only took me a little further down; I will continue to learn more about it.

2. Agile Method

From what I have heard, Agile is popular and very commonly seen in startups, which means that if I were to be hired at such a company, having the basics down already will put me in a great spot. I also think it's fun to create epics and stories and complete them during sprints. I am naturally extroverted, so I might also be biased in favor of scrum meetings and teams with tighter communication.

3. React Basics

I am choosing this item because of how popular React is in today's tech industry. If I am not mistaken, it is the most popular frontend library, which means we are almost certain to see it again as industry engineers. Being able to customize our project's React frontend was a thrill, and I like the dark theme that I installed on it. I know that the skills I improved on in this course will almost certainly carry over well to industry work.

4. Service Oriented Architecture

Splitting our backend repo in two and having each part handle a different service was really cool. I especially like that this is something real, large companies do all the time, although apparently to far greater degrees (understandably). I'm not going to lie, it was confusing at first to wrap my head around the concept, but programming it directly really helped me to grasp the idea and now I am quite comfortable with the philosophy of and reasons for microservices. 

5. AWS Deployment

Admittedly, I thought that this was going to be complicated and possibly even dangerous considering the potential of being overcharged by AWS for unaccounted compute on our deployed project. However, as I worked through the assignment, re-read the document, and re-watched Prof. Wisneski's video walkthrough, the process became clear and understandable. Furthermore, my application thankfully deployed successfully after debugging the manifest file error, which only took a few extra minutes. Last but not least, cleaning up was easier than I thought, and should prevent us from getting overcharged, which I'd wager is a great skill on its own. I am now open to learning more about AWS, especially since it's widely used in the industry these days and no doubt a great tool for contemporary software engineers.

Tuesday, April 22, 2025

CST 438 Week 7

Agile and Plan-and-Document, also known as Waterfall, are two software development processes with differing philosophies and organizational structures. Waterfall is a linear, sequential process where each stage—gathering requirements, designing, implementing, testing, deploying, and maintaining—is completed before the next begins. The process relies on detailed documentation and up-front planning, and so is ideal for projects whose requirements and scope are clear and not expected to change. For example, the defense, aerospace, and manufacturing industries, which need predictability, safety, and compliance, are likely to employ the Waterfall model.

Agile is the reverse: flexible and iterative. It breaks work down into small, manageable increments delivered in sprints, which typically last two to four weeks. Agile requires ongoing feedback from users and stakeholders so that the product can be altered while it is still being developed, and it emphasizes working software and collaboration over documentation and rigid processes. Agile is common where change is rapid, as with technology startups, digital product companies, and organizations focused on user experience or quick innovation. These companies are often forced to move at high speed because of market changes, and Agile provides the structure to do this without being bogged down by too much process. In general, Waterfall functions best in situations where requirements cannot change, timelines are fixed, and ambiguity cannot be tolerated. Agile, on the other hand, performs better in high-change environments where customer requirements and market demands will vary.

The choice between the two generally depends upon the project type, corporate culture, and how much adaptability the team is going to need to deal with change. A few contemporary organizations even employ hybrid strategies, using the planned design components of Waterfall and marrying them with the flexibility of Agile in order to fit more effectively with their own needs.

Monday, April 14, 2025

CST 438 Week 6

This week, we discovered a great deal about a new method of structuring services in software architecture. Before week 6, the project that we had been working on had a monolithic architecture—i.e., the backend was one app where all the services (such as the registrar, gradebook, and any future features) were bundled together and run inside the same server.

On the other hand, this week we studied microservices architecture, where each service is developed and deployed independently, typically on its own server or container. Each microservice typically has its own database, sometimes with additional tables specific to its function. There are several reasons to use a microservices approach, including increased scalability—because each service can be scaled independently as necessary—and fault isolation, in that if one microservice fails, it does not necessarily bring down the whole system.

Another basic concept of microservices architecture is inter-service communication. Since each microservice is a separate application and often on a different server or container, it is not possible to use direct function calls or shared memory (as in monolithic applications). Microservices communicate over a network instead, using protocols like HTTP or gRPC. This is called inter-service communication. Services normally expose APIs—typically RESTful endpoints—that other services invoke to send or receive data. In more complex systems, communication is asynchronous and goes through message brokers like RabbitMQ or Apache Kafka, where services publish and subscribe to events. Asynchronous messaging further decouples services and allows them to continue working even when part of the system is slow or temporarily unavailable. The decision between synchronous and asynchronous styles is use-case dependent, trading off performance, reliability, and complexity. Funnily enough, we got to work with RabbitMQ in our own assignment, and it was an absolute blast to directly program an implementation of service-oriented architecture using our pre-existing monolithic project.
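
As a small illustration of the synchronous style (the endpoint, port, and JSON payload here are invented for the example, not from our actual project), here are two "services" in plain Java talking over HTTP instead of calling each other's methods directly:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InterServiceDemo {
    // "Gradebook" service: exposes a single REST endpoint on its own port.
    static HttpServer startGradebook(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/grades/42", exchange -> {
            byte[] body = "{\"studentId\":42,\"grade\":\"A\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        return server;
    }

    // "Registrar" service: it cannot share memory or call the gradebook's
    // methods directly, so it fetches the grade over the network.
    static String fetchGrade(int port) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/grades/42")).build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpServer gradebook = startGradebook(8081);
        System.out.println(fetchGrade(8081));  // the JSON traveled over HTTP
        gradebook.stop(0);
    }
}
```

In a real system the two halves would run in separate containers; a broker like RabbitMQ replaces the direct HTTP call when you want the asynchronous style.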

Another advantage of microservices is that they are technology-agnostic. Because each service is decoupled, developers can choose the most suitable programming language, framework, or database for the problem that service is trying to solve. This can lead to better productivity and performance, as teams are not forced into a single tech stack. For example, one can construct a Python-based machine learning microservice using TensorFlow in one instance, and a high-performance user authentication service in Go. Provided that services adhere to pre-defined communication protocols—e.g., REST or gRPC—they can coexist beautifully in the overall system. This autonomy also enables parallel development across teams along with simpler integration of future technologies down the line.

A real-world example of microservice-oriented software is Netflix, which was originally a monolithic app, just like our own project. Today Netflix runs hundreds of microservices that handle everything from video encoding and user recommendations to billing. Each microservice is deployed separately, communicates through APIs, and can be scaled based on usage patterns. This allows Netflix to achieve high availability, deploy frequently without downtime, and continue innovating at high velocity. Their use of asynchronous messaging and advanced DevOps tools also manages the complexity involved in such a large microservices ecosystem.

Sunday, April 6, 2025

CST 438 Week 5

This week, we learned some fundamental concepts in the plan-and-document method for software engineering, also known as the waterfall method. As opposed to agile development, where teams have rough, generalized objectives (typically represented as stories) which are iterated upon at cyclical intervals, the waterfall method involves a very thorough planning phase where software requirements are extensively documented and followed through the entire development process, with perhaps only small adjustments here and there. We learned about the software requirements specification (SRS) document, which is a sort of blueprint for the software being developed, and how to write key sections of a typical SRS. It was fun learning all about UML use case diagrams, use case documentation, and how to properly document database requirements. It's cool to know that the SRS, because it is indifferent to exact implementation details, can serve as a type of generalized recipe for the software you are developing. For example, in our project, we do not (and should not) mention exactly what type of entry form we use for finding a student's schedule by year and semester; rather, we simply mention that the student can select year and semester, and the frontend liberties are fully granted to the developer(s). I hope we can continue to train with the waterfall method and the agile method, and perhaps other methods in the future.

Tuesday, April 1, 2025

CST 438 Week 4

Admittedly, I was initially reluctant to learn about code reviews, because I thought they were too time-consuming. However, after having read about them in our textbook and especially collaborating with others in our course to accomplish software development tasks, I have grown very fond of them and would consider them to be my favorite thing learned so far, although I have much training to do myself before I become adept with them. The best part is that code reviews are a sort of safety mechanism to help ensure that your code works properly. Nobody is perfect, and we can certainly use a second set of eyes on occasion to keep our work in check. 

Another great benefit of code reviews is that they help you maintain best practices and good code readability. A seasoned developer once told me that your code should be as simple and as readable as possible, rather than overly fancy or gimmicky. I certainly see the value in his advice now that we are working in a small team. I can only imagine how crazy it must be to work with very large teams, and how important code reviews are in those settings. Another benefit of code reviews is that they provide avenues to learn from more experienced developers. For instance, I was fairly rusty with Java after not using it for a while, but early on, my teammates were able to create some good examples in our code that helped me get caught up to speed.

Code reviews are so important that apparently they are a quintessential part of Google's culture, which is a key reason as to why their code is stable and maintainable in the long term. I hope we can continue practicing this process even after our course concludes.

Friday, March 21, 2025

CST 438 Week 3

    React is like a whole new world compared to what I've been used to. I'd mostly been doing HTML, CSS, and vanilla JavaScript before diving into React, and to be honest it's been a blast learning the framework.

    First, I learned that React is all about components. Everything becomes a component, and our team noticed this when we were building our frontend components for our web app. If you've ever built a site with vanilla JS, you might’ve had to deal with a bunch of DOM manipulation, manually updating elements, and keeping track of which parts of the page need to change. With React, you basically just make these self-contained components and React does a lot of the heavy lifting for you. If something changes in your data, React re-renders only the parts of the page that need to update, which is way easier than trying to manually update stuff with JS. I like that React uses JSX, which is kinda like mixing HTML with JavaScript. It feels like I'm writing HTML, but I can also use JavaScript in the same file. It’s cool because you can easily write logic for rendering parts of the page based on the data.

    But I’ll be real, React can be a bit overwhelming at first. You’ve got things like state, props, hooks, and context to learn. If you're used to just sticking everything in a single HTML file and throwing JavaScript in there, it can feel like you’re juggling a lot of new concepts. I struggled a bit with how to manage state in React at first—it’s not like vanilla JS where you just use global variables or something. React projects can also get pretty messy if you don’t manage your code properly: since you’re building your app with a bunch of components, things can get a little chaotic if you don’t structure your project well from the start. And React’s learning curve can be steep for newcomers—especially with things like hooks and state management.

    Overall, I’m liking React so far, but it definitely feels like a whole new way of thinking about building web apps. Though it can get confusing sometimes, it is very useful and powerful for web developers. I'm looking forward to leveling up my React knowledge as we build more of our CST 438 project.

Saturday, March 15, 2025

CST 438 Week 2

A mock is basically a fake version of something your code depends on, like a database or an API, that you use in unit testing. Instead of calling the real thing, which might be slow or unpredictable, a mock just returns whatever you tell it to. This helps keep tests simple, fast, and consistent because you control exactly how the mock behaves.

Mocks are super useful because they let you test one piece of code without worrying about the rest of the system. They make tests run faster since you're not waiting on real databases or external services. Plus, they help you check if your code handles different situations properly, like errors or missing data. In JUnit, a tool like Mockito makes it easy to create and use mocks, so you can focus on testing just what matters.

Let’s say you’re building a shopping app, and there’s a method that calculates the total price of a customer’s order. This method needs to fetch product prices from a database. But in a unit test, you don’t want to actually connect to the database because it’s slow and could change over time. Instead, you use a mock to fake the database and return fixed prices.

For example, if a customer buys two items, the mock can be set up to return $10 for one and $20 for the other, so you know the total should be $30. This way, you can test if your calculation method works correctly without depending on a real database. Plus, you can easily test different cases, like what happens if an item is missing or if the database throws an error. This makes your tests faster, more reliable, and easier to control.
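
As a sketch of that shopping-app example in Java (the class and method names here are made up for illustration), the mock can even be hand-written; Mockito's when(...).thenReturn(...) just generates this kind of fake object for you:

```java
import java.util.Map;

// Hypothetical dependency: in production this would query a real database.
interface PriceRepository {
    double priceOf(String productId);
}

// The method under test: totals an order by looking up each item's price.
class OrderService {
    private final PriceRepository prices;
    OrderService(PriceRepository prices) { this.prices = prices; }

    double total(String... productIds) {
        double sum = 0;
        for (String id : productIds) sum += prices.priceOf(id);
        return sum;
    }
}

public class OrderServiceDemo {
    public static void main(String[] args) {
        // Hand-written mock: fixed prices, no database, fully predictable.
        PriceRepository mockRepo = id -> Map.of("book", 10.0, "lamp", 20.0).get(id);

        OrderService service = new OrderService(mockRepo);
        System.out.println("total = " + service.total("book", "lamp"));  // prints: total = 30.0
    }
}
```

Because the mock is under your control, testing the error cases (a missing item, a thrown exception) is just a matter of swapping in a different fake.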


Tuesday, March 11, 2025

CST 438 Week 1

This week in CST 438, we were introduced to various basic software engineering concepts that more or less confirmed my preconceptions regarding the course's content. 

Before the beginning of our course term, I had already heard from a few prior students what we might learn: namely, the basics of Git version control, the Agile method, and the software development lifecycle, including collaborating in a team environment. It turns out we are in fact going to learn these techniques in our course, CST 438, which thrills me, especially since these are real-world industry skills. I will admit that most of our academic CS curriculum has been overly theoretical thus far (probably the status quo opinion of anyone working in the current tech industry), so being able to learn practical skills such as testing, Agile, scrum meetings, Git projects, and behavior-driven development, while also bolstering our knowledge of modern technologies such as React, Java, and AWS, among others, is an absolute gift. I'm very much looking forward to training and improving alongside my team during this term.

Thursday, December 12, 2024

CST 462S: Service Learning Reflections

    I have just completed CST 462S, which includes a service learning portion in which students provide service to others via technical skills, such as software development, tutoring, and more. I was specifically assigned to develop the website and database of a small nonprofit organization in Kenya named Nyamboyo Technical School, or NTS. The mission of NTS is to empower local youth with modernized skills, including vocational and computer literacy skills, in order to help them attain a higher quality of life. 

    Overall, the experience went surprisingly well, especially since I dove right in with little web development skills, having not yet taken internet programming. From the very first meeting with my supervisor, I was highly motivated to complete this project and help the school establish an official student and grant database along with an interface for staff to interact with. Although the project is ongoing, I have vowed to continue serving NTS until they no longer need me, which will help everybody - I can continue to train and improve on multiple professional facets while NTS gains better software for various essential operations. We are currently quite close to completing the student application, which will serve as a great skeleton for the grant database, so all in all we are almost complete with the core deliverables. 

    For future students, I would recommend that you choose a service learning site that you already have the skills for, especially if you are short on time. Even though I was taking two courses this term, I was still able to put in extra time learning fundamental web dev skills, but I can certainly see how most students would not have the time for this. In addition, always prioritize the needs of the client and users; they will drive your development toward perfection more than anything else. Constant feedback from your supervisors is highly beneficial. Good luck!

Wednesday, August 7, 2024

CST 334 Week 8 Report

This week, we delved more deeply into the essentials of persistence in operating systems. We began by examining the fundamental interactions between the OS and hardware devices. Efficient communication with a device relies on two key components: the hardware interface and its internal organization. To minimize CPU load, three main techniques are used: interrupt-driven I/O, programmed I/O, and direct memory access (DMA). DMA is particularly advantageous for systems handling large volumes of data, especially with frequent transactions, as it reduces the need for constant CPU involvement during data transfers. We also studied the ways the OS interacts with devices, focusing on explicit I/O instructions and memory-mapped I/O. Additionally, we learned that device drivers play a crucial role in abstracting device operations from the OS, through software that defines the device's functionality. Our exploration continued with the basics of hard disk drives. Modern hard disks feature a platter and spindle, with data stored in concentric circles called tracks on each surface. A disk head and arm are used to read this data. Various disk I/O scheduling algorithms were discussed, ranging from simple methods like first-come, first-served to more advanced ones such as budget fair queuing. In the realm of file systems, we covered persistent storage devices like HDDs and SSDs. We focused on the core abstractions of storage: files and directories, which are fundamental to data persistence in the OS. We explored the file system interface, including file creation, access, and deletion. We concluded the week by implementing a basic file system using vsfs (Very Simple File System), a simplified model of a typical UNIX file system. Key takeaways included understanding the structure and access methods of file systems, learning about the Inode (index node) for file metadata, and exploring multi-level indexing, directory organization, and free space management. 
Overall, it was a productive week of learning, and I look forward to building on these foundational concepts.

When it comes to persistence of personal character, I learned quite a bit over the course. I learned that even when I am confused during an assignment, I can simply sit with the challenge and continuously examine it until it truly sinks into my understanding. I also learned that I can sometimes rely on others to clarify things for me instead of trying to brute-force a solution on my own. Thankfully, Dr. Ogden was helpful in Slack and cleared up any confusions. I learned that by developing more resilience and discipline, I can accomplish any task as long as I stay focused. Thanks.

Thursday, August 1, 2024

CST 334 Week 7 Report

 This week, we learned a ton about the fundamentals of persistence in operating systems. We started by looking at basic device interactions with the operating system. The device itself requires two parts to make interaction efficient: the hardware interface and its internal structure. There are three main mechanisms employed to reduce CPU overhead: interrupt-driven I/O, programmed I/O, and direct memory access (DMA). For systems with larger volumes of data transactions, DMA is superior, especially if data transactions are frequent, because the CPU does not have to be constantly used during transfers. When it comes to how the OS interacts with the device, there are two primary methods: one is to have explicit I/O instructions, and the second is known as memory-mapped I/O. Finally, the device driver is what specifically abstracts the device function away from the OS by way of a piece of software that details how a device works. Afterwards, we learned about the basics of hard disk drives. Modern disks have a platter and a spindle. Data is encoded on each surface in concentric circles of sectors that are called tracks. We read data from the surface with a disk head and arm. There are numerous disk I/O scheduling algorithms that can be employed, from more basic ones like first come, first served to modernized algorithms like budget fair queuing. When it comes to file systems, we learned about persistent storage and devices like HDDs and SSDs. The two basic abstractions developed regarding storage are files and directories, which comprise the bread and butter of persistence in the OS. We explored the file system interface including creating, accessing, and deleting files. We wrapped up our week by learning a simple file system implementation through the vsfs (very simple file system), which is a simplified version of a typical UNIX file system. 
When thinking of file systems, we should be thinking about two primary aspects: the data structures of the file system and the access methods required to actually do things with data. We learned about the Inode, or index node which is the structure that holds metadata for a given file. We were able to learn more about multi-level indexing, directory organization, and free space management within a file system. All in all, it was a good week of learning and I hope we can continue to build on these basic concepts.

Saturday, July 27, 2024

CST334 Week 6 Report

 Concurrency: Part II

This week, we learned a ton more about concurrency in operating systems. Notably, the main topic was semaphores, which are essentially an upgraded version of our previous basic locks and condition variables which we can use to improve system performance, especially in multi-threading applications. Our book specifically defines a semaphore as an object with an integer value that we can manipulate with two routines, which in the posix standard are sem_wait() and sem_post(). It is important to remember that the initival value of a semaphore defines its behavior, so it must first be initialized to some value. The first type of semaphore we studied is the binary semaphore, used as a lock. Next, we learned how to implement semaphores for ordering events in a concurrent program. These semaphores can be very useful to use when a thread which is waiting for a list to become non-empty, so it can delete an element from it. Specifically, the semaphore would signal when another thread has completed, in order for the waiting thread to awaken into action, much like how condition variables work. We also learned about the producer/consumer problems and dining philosopher problems as means to understand semaphores on a deeper level. Avoiding concurrency bugs, including deadlock, was very helpful especially since we may implement semaphores ourselves in the future.