Tuesday, April 1, 2025

CST 438 Week 4

Admittedly, I was initially reluctant to learn about code reviews, because I thought they were too time-consuming. However, after having read about them in our textbook and especially collaborating with others in our course to accomplish software development tasks, I have grown very fond of them and would consider them to be my favorite thing learned so far, although I have much training to do myself before I become adept with them. The best part is that code reviews are a sort of safety mechanism to help ensure that your code works properly. Nobody is perfect, and we can certainly use a second set of eyes on occasion to keep our work in check. 

Another great benefit of code reviews is that they help you maintain best practices and good code readability. A seasoned developer once told me that your code should be as simple and as readable as possible, rather than being overly fancy or gimmicky. I certainly see the value in his advice now that we are working in a small team. I can only imagine how hectic it must be to work on very large teams, and how important code reviews are in those settings. Another benefit of code reviews is that they provide avenues to learn from more experienced developers. For instance, I was fairly rusty with Java after not using it for a while, but early on, my teammates were able to create some good examples in our codebase which helped me get back up to speed. 

Code reviews are so important that they are apparently a quintessential part of Google's culture, which is a key reason why their code is stable and maintainable in the long term. I hope we can continue practicing this process even after our course concludes.

Friday, March 21, 2025

CST 438 Week 3

    React is like a whole new world compared to what I've been used to. I’ve mostly been doing HTML, CSS, and vanilla JavaScript before diving into React, and to be honest it's been a blast learning the framework.

    First, I learned that React is all about components. Everything becomes a component, and our team noticed this when we were building our frontend components for our web app. If you've ever built a site with vanilla JS, you might’ve had to deal with a bunch of DOM manipulation, manually updating elements, and keeping track of which parts of the page need to change. With React, you basically just make these self-contained components and React does a lot of the heavy lifting for you. If something changes in your data, React re-renders only the parts of the page that need to update, which is way easier than trying to manually update stuff with JS. I like that React uses JSX, which is kinda like mixing HTML with JavaScript. It feels like I'm writing HTML, but I can also use JavaScript in the same file. It’s cool because you can easily write logic for rendering parts of the page based on the data.

    But I’ll be real, React can be a bit overwhelming at first. You’ve got things like state, props, hooks, and context to learn. If you're used to just sticking everything in a single HTML file and throwing JavaScript in there, it can feel like you’re juggling a lot of new concepts. I struggled a bit with how to manage state in React at first, since it's not like vanilla JS where you just use global variables or something. React projects can also get pretty messy if you don’t manage your code properly. Since you’re building your app out of a bunch of components, things can get a little chaotic if you don’t structure your project well from the start.

    Overall, I’m liking React so far, but it definitely feels like a whole new way of thinking about building web apps. Though it can get confusing sometimes, it is very useful and powerful for web developers. I'm looking forward to leveling up my React knowledge as we build more of our CST 438 project.

Saturday, March 15, 2025

CST 438 Week 2

A Mock is basically a fake version of something your code depends on, like a database or an API, that you use in unit testing. Instead of calling the real thing, which might be slow or unpredictable, a mock just returns whatever you tell it to. This helps keep tests simple, fast, and consistent because you control exactly how the mock behaves.

Mocks are super useful because they let you test one piece of code without worrying about the rest of the system. They make tests run faster since you're not waiting on real databases or external services. Plus, they help you check if your code handles different situations properly, like errors or missing data. In JUnit, a tool like Mockito makes it easy to create and use mocks, so you can focus on testing just what matters.

Let’s say you’re building a shopping app, and there’s a method that calculates the total price of a customer’s order. This method needs to fetch product prices from a database. But in a unit test, you don’t want to actually connect to the database because it’s slow and could change over time. Instead, you use a mock to fake the database and return fixed prices.

For example, if a customer buys two items, the mock can be set up to return $10 for one and $20 for the other, so you know the total should be $30. This way, you can test if your calculation method works correctly without depending on a real database. Plus, you can easily test different cases, like what happens if an item is missing or if the database throws an error. This makes your tests faster, more reliable, and easier to control.
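To make this concrete, here's roughly what that shopping-app test could look like in Java. In practice Mockito's mock() and when().thenReturn() would generate the fake object for you; the sketch below hand-writes the mock instead so it runs with no libraries, and all of the names (PriceRepository, orderTotal, the item IDs) are made up for illustration:

```java
import java.util.Map;

public class OrderTotalExample {
    // The dependency the code under test needs: normally backed by a real database.
    interface PriceRepository {
        double priceOf(String itemId);
    }

    // Code under test: sums the price of each item in the order.
    static double orderTotal(PriceRepository repo, String... itemIds) {
        double total = 0.0;
        for (String id : itemIds) {
            total += repo.priceOf(id);
        }
        return total;
    }

    // A hand-rolled mock: returns fixed prices instead of hitting a database.
    // Mockito generates this kind of object with mock() and when().thenReturn().
    static PriceRepository fixedPrices() {
        Map<String, Double> prices = Map.of("book", 10.0, "lamp", 20.0);
        return itemId -> prices.getOrDefault(itemId, 0.0);
    }

    public static void main(String[] args) {
        // Two items at $10 and $20 should total $30, with no database involved.
        System.out.println(orderTotal(fixedPrices(), "book", "lamp")); // 30.0
    }
}
```

Because the mock's behavior is fixed, the test is fast and deterministic, and you can just as easily have it return 0.0 for a missing item or throw an exception to test the error path.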

 

Tuesday, March 11, 2025

CST 438 Week 1

This week in CST 438, we were introduced to various basic software engineering concepts that more or less confirmed my preconceptions regarding the course's content. 

Before the beginning of our course term, I had already heard from a few prior students what we might learn: namely, the basics of Git version control, agile methods, and the software development lifecycle, including collaborating in a team environment. It turns out we are in fact going to learn these techniques in our course, CST 438, which thrills me, especially since these are real-world industry skills. I will admit that most of our academic CS curriculum has been overly theoretical thus far (and indeed this is probably the status quo opinion of someone working in the current tech industry), so being able to learn practical skills such as testing, agile, scrum meetings, Git projects, and behavior-driven development, while also bolstering our knowledge of modern technologies such as React, Java, and AWS among others, is an absolute gift. I'm very much looking forward to training and improving alongside my team during this term.

Thursday, December 12, 2024

CST 462S: Service Learning Reflections

    I have just completed CST 462S, which includes a service learning portion in which students provide service to others via technical skills, such as software development, tutoring, and more. I was specifically assigned to develop the website and database of a small nonprofit organization in Kenya named Nyamboyo Technical School, or NTS. The mission of NTS is to empower local youth with modernized skills, including vocational and computer literacy skills, in order to help them attain a higher quality of life. 

    Overall, the experience went surprisingly well, especially since I dove right in with little web development skills, having not yet taken internet programming. From the very first meeting with my supervisor, I was highly motivated to complete this project and help the school establish an official student and grant database along with an interface for staff to interact with. Although the project is ongoing, I have vowed to continue serving NTS until they no longer need me, which will help everybody - I can continue to train and improve on multiple professional facets while NTS gains better software for various essential operations. We are currently quite close to completing the student application, which will serve as a great skeleton for the grant database, so all in all we are almost complete with the core deliverables. 

    For future students, I would recommend that you choose a service learning site that you already have skills for, especially if you are short on time. Even though I was taking two courses this term, I was still able to put in extra time learning fundamental web dev skills, but I can certainly see how most students would not have the time for this. In addition, always prioritize the needs of the client and users; they will drive your development toward perfection more than anything else. Constant feedback from your supervisors is highly beneficial. Good luck!

Wednesday, August 7, 2024

CST 334 Week 8 Report

This week, we delved more deeply into the essentials of persistence in operating systems. We began by examining the fundamental interactions between the OS and hardware devices. Efficient communication with a device relies on two key components: the hardware interface and its internal organization. To minimize CPU load, three main techniques are used: interrupt-driven I/O, programmed I/O, and direct memory access (DMA). DMA is particularly advantageous for systems handling large volumes of data, especially with frequent transactions, as it reduces the need for constant CPU involvement during data transfers. We also studied the ways the OS interacts with devices, focusing on explicit I/O instructions and memory-mapped I/O. Additionally, we learned that device drivers play a crucial role in abstracting device operations from the OS, through software that defines the device's functionality.

Our exploration continued with the basics of hard disk drives. Modern hard disks feature a platter and spindle, with data stored in concentric circles called tracks on each surface. A disk head and arm are used to read this data. Various disk I/O scheduling algorithms were discussed, ranging from simple methods like first-come, first-served to more advanced ones such as budget fair queuing.

In the realm of file systems, we covered persistent storage devices like HDDs and SSDs. We focused on the core abstractions of storage: files and directories, which are fundamental to data persistence in the OS. We explored the file system interface, including file creation, access, and deletion. We concluded the week by implementing a basic file system using vsfs (Very Simple File System), a simplified model of a typical UNIX file system. Key takeaways included understanding the structure and access methods of file systems, learning about the Inode (index node) for file metadata, and exploring multi-level indexing, directory organization, and free space management.
Overall, it was a productive week of learning, and I look forward to building on these foundational concepts.
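To show why the disk scheduling algorithm matters, here's a small Java sketch comparing total head movement under first-come, first-served against a greedy shortest-seek-first ordering (a simple improvement we can compute by hand; the starting position and track numbers are just sample values, not from our coursework):

```java
import java.util.ArrayList;
import java.util.List;

public class DiskScheduling {
    // Total head movement if requests are served first-come, first-served.
    static int fcfsSeekDistance(int start, List<Integer> requests) {
        int distance = 0, head = start;
        for (int track : requests) {
            distance += Math.abs(track - head);
            head = track;
        }
        return distance;
    }

    // Total head movement if we always serve the closest pending track next
    // (shortest seek time first), which cuts down on long back-and-forth seeks.
    static int sstfSeekDistance(int start, List<Integer> requests) {
        List<Integer> pending = new ArrayList<>(requests);
        int distance = 0, head = start;
        while (!pending.isEmpty()) {
            int bestIdx = 0;
            for (int i = 1; i < pending.size(); i++) {
                if (Math.abs(pending.get(i) - head) < Math.abs(pending.get(bestIdx) - head)) {
                    bestIdx = i;
                }
            }
            int track = pending.remove(bestIdx);
            distance += Math.abs(track - head);
            head = track;
        }
        return distance;
    }

    public static void main(String[] args) {
        List<Integer> requests = List.of(98, 183, 37, 122, 14, 124, 65, 67);
        System.out.println("FCFS: " + fcfsSeekDistance(53, requests)); // 640
        System.out.println("SSTF: " + sstfSeekDistance(53, requests)); // 236
    }
}
```

On this sample workload the greedy ordering does far less seeking, though real schedulers also have to worry about starving far-away requests, which is where fancier policies come in.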

When it comes to persistence of personal character, I learned quite a bit over the course. I learned that even when I am confused during an assignment, to simply sit with the challenge and continuously examine it until it truly sinks into my understanding. I also learned that I can sometimes rely on others to clarify things for me instead of trying to brute-force solve them on my own. Thankfully, Dr. Ogden was helpful in Slack and helped to clear up any confusion. I learned that by developing more resilience and discipline, I can accomplish any task as long as I stay focused. Thanks. 

Thursday, August 1, 2024

CST 334 Week 7 Report

This week, we learned a ton about the fundamentals of persistence in operating systems. We started by looking at basic device interactions with the operating system. The device itself requires two parts to make interaction efficient: the hardware interface and its internal structure. There are three main mechanisms employed to reduce CPU overhead: interrupt-driven I/O, programmed I/O, and direct memory access (DMA). For systems with larger memory volume transactions, DMA is superior, especially if data transactions are frequent, because the CPU does not have to be constantly involved during transfers. When it comes to how the OS interacts with the device, there are two primary methods: one is to have explicit I/O instructions, and the second is known as memory-mapped I/O. Finally, the device driver is what specifically abstracts the device's function away from the OS by way of a piece of software that details how a device works.

Afterwards, we learned about the basics of hard disk drives. Modern disks have a platter and a spindle. Data is encoded on each surface in concentric circles of sectors that are called tracks. We read data from the surface with a disk head and arm. There are numerous disk I/O scheduling algorithms that can be employed, from more basic ones like first come, first served to modernized algorithms like budget fair queuing. When it comes to file systems, we learned about persistent storage and devices like HDDs and SSDs. The two basic abstractions developed regarding storage are files and directories, which comprise the bread and butter of persistence in the OS. We explored the file system interface including creating, accessing, and deleting files. We wrapped up our week by learning a simple file system implementation through vsfs (the very simple file system), which is a simplified version of a typical UNIX file system. 
When thinking of file systems, we should be thinking about two primary aspects: the data structures of the file system and the access methods required to actually do things with data. We learned about the Inode, or index node which is the structure that holds metadata for a given file. We were able to learn more about multi-level indexing, directory organization, and free space management within a file system. All in all, it was a good week of learning and I hope we can continue to build on these basic concepts.
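Multi-level indexing is easier to appreciate with a little arithmetic. This sketch computes the maximum file size an inode can address with a handful of direct pointers plus one single-indirect block; the 12-pointer, 4 KB block, 4-byte pointer figures are typical textbook values rather than anything specific to our assignment:

```java
public class InodeCapacity {
    // Maximum file size an inode can address with a given number of direct
    // pointers plus one single-indirect block. Assumes every block is
    // blockSize bytes and each block pointer takes pointerSize bytes.
    static long maxFileSize(int directPointers, int blockSize, int pointerSize) {
        long directBytes = (long) directPointers * blockSize;
        // The indirect block is one full block of pointers, each naming a data block.
        long pointersPerBlock = blockSize / pointerSize;
        long indirectBytes = pointersPerBlock * blockSize;
        return directBytes + indirectBytes;
    }

    public static void main(String[] args) {
        // 12 direct pointers, 4 KB blocks, 4-byte pointers:
        // 12 * 4 KB + (4096/4) * 4 KB = 48 KB + 4 MB = 4243456 bytes
        System.out.println(maxFileSize(12, 4096, 4));
    }
}
```

Adding a double-indirect block would multiply the indirect contribution by another factor of 1024, which is why a few extra pointer levels let small inodes describe very large files.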

Saturday, July 27, 2024

CST334 Week 6 Report

 Concurrency: Part II

This week, we learned a ton more about concurrency in operating systems. Notably, the main topic was semaphores, which are essentially an upgraded version of our previous basic locks and condition variables, which we can use to improve system performance, especially in multi-threaded applications. Our book specifically defines a semaphore as an object with an integer value that we can manipulate with two routines, which in the POSIX standard are sem_wait() and sem_post(). It is important to remember that the initial value of a semaphore defines its behavior, so it must first be initialized to some value. The first type of semaphore we studied is the binary semaphore, used as a lock. Next, we learned how to implement semaphores for ordering events in a concurrent program. These semaphores can be very useful when a thread is waiting for a list to become non-empty so it can delete an element from it. Specifically, the semaphore would signal when another thread has completed, in order for the waiting thread to awaken into action, much like how condition variables work. We also learned about the producer/consumer problem and the dining philosophers problem as means to understand semaphores on a deeper level. Learning to avoid concurrency bugs, including deadlock, was very helpful, especially since we may implement semaphores ourselves in the future.
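The ordering pattern described above maps directly onto Java's java.util.concurrent.Semaphore, where acquire() and release() play the roles of sem_wait() and sem_post(). A minimal sketch, initializing the semaphore to 0 so the waiting thread blocks until the other thread posts:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreOrdering {
    // The parent waits until the child signals, mirroring the book's pattern:
    // initialize the semaphore to 0 so the first wait blocks.
    static String runDemo() {
        Semaphore done = new Semaphore(0);   // the initial value defines behavior
        StringBuilder order = new StringBuilder();
        Thread child = new Thread(() -> {
            order.append("child;");
            done.release();                   // sem_post(): wake the waiter
        });
        child.start();
        try {
            done.acquire();                   // sem_wait(): block until the child posts
            child.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        order.append("parent");
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // child;parent
    }
}
```

No matter how the scheduler interleaves the two threads, "child;" always comes first, because the parent cannot get past acquire() until the child has called release().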

Sunday, July 21, 2024

CST334 Week 5 Report


This week, we learned a ton about the basics of operating system concurrency. One of the most fundamental things we learned is the thread. A thread is basically a computational unit within a process. Although a single thread is semi-independent, it shares a central logical address space with other threads, allowing them to access the same data. At the same time, one process can have many threads. We use threads in order to support parallelism and also to avoid blocking program progress due to slow I/O.

Another very important concept we learned is the lock, which is designed to help us execute a series of instructions atomically. By placing locks around critical sections in code, programmers can ensure that the section is executed as if it were a single atomic instruction. We implement locks by declaring lock variables, which hold the state of the lock (available or acquired). We evaluate the efficacy of a particular lock type by looking at several goals: mutual exclusion, fairness, and performance are the main objectives. The ticket lock has a key advantage over the spin lock in that it prevents a thread from starving, since it ensures that each waiting thread will acquire the lock at some point in the future.

Several data structures can be utilized in the implementation of concurrency, and we can achieve this by adding locks with performance in mind. There are concurrent counters, linked lists, queues, and hash tables, which all have pros and cons, particularly apparent when factoring in scalability. It is important to keep several things in mind, though: more concurrency does not necessarily increase performance, and performance problems should only be remedied once they exist. Threads can utilize a condition variable to avoid the inefficiency of a thread spinning until some condition becomes true. In a nutshell, the thread that is waiting in the condition variable queue is signaled by another thread to awaken and continue.
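Here's a rough sketch of the ticket lock idea in Java, using an AtomicInteger for the atomic fetch-and-add that hands out tickets. This is only an illustration of the fairness property, not production code (real Java code would use the JDK's own locks):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TicketLock {
    private final AtomicInteger nextTicket = new AtomicInteger(0);
    private final AtomicInteger nowServing = new AtomicInteger(0);

    void lock() {
        // Each thread grabs a unique ticket; getAndIncrement is the fetch-and-add.
        int myTicket = nextTicket.getAndIncrement();
        // Spin until our number is called. Tickets are handed out in order,
        // so every waiting thread is guaranteed to eventually acquire the lock.
        while (nowServing.get() != myTicket) {
            Thread.onSpinWait();
        }
    }

    void unlock() {
        nowServing.incrementAndGet(); // call the next ticket
    }

    // Demo: several threads incrementing a shared counter under the lock.
    static int countWith(int threads, int incrementsPerThread) {
        TicketLock lock = new TicketLock();
        int[] counter = {0};
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < incrementsPerThread; i++) {
                    lock.lock();
                    counter[0]++;   // critical section
                    lock.unlock();
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println(countWith(4, 10_000)); // 40000
    }
}
```

Without the lock the final count would come up short due to lost updates; with it, the increments are mutually exclusive and served strictly in ticket order.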

Saturday, July 13, 2024

CST334 Week 4 Report


This week we learned a ton about how the operating system manages virtual memory, especially paging. Paging is basically an alternative to the segmentation approach when it comes to managing memory. Instead of splitting up a process's address space into some number of variable-sized segments, we divide it into fixed-size units, each of which is called a page. There are numerous advantages to paging, from avoiding external fragmentation to being very flexible and enabling sparse use of virtual address spaces. 

To be blunt, we cannot simply implement paging into our system willy-nilly - we have to take numerous factors into account when doing so in order to ensure good memory management. One is ensuring that we have a translation-lookaside buffer, or TLB, in order to cache frequently used virtual-to-physical address mappings. The main purpose of the TLB is to keep our system running quickly - we won't have to perform a full page table lookup for an address mapping if it is in the TLB. Another important technique for ensuring good paging function is to implement a hybrid of small and large table sizes: instead of having a single page table for the entire address space of the process, we can have one per logical segment. We may thus have three page tables, one each for the code, heap, and stack parts of the address space. In addition, the OS actually packs away infrequently used portions of address spaces onto hard disk drives, in order to maintain a single large address space overall. When memory is nearly full, the OS will also page out one or more pages to make room for the new page about to be used. The process of picking a page to kick out or replace is known as the page replacement policy, and there are several. Notably, the optimal policy, developed by Belady, decrees that it is best to replace the page which will be accessed furthest in the future. The optimal policy is an ideal template which developers can approach through implementing their own policies, which include FIFO, random, and LRU among others.
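The address-splitting side of paging is just bit arithmetic, which a short sketch makes concrete. Assuming 4 KB pages (so a 12-bit offset), a virtual address splits into a page number and an offset, and the frame number found in the TLB or page table replaces the page number:

```java
public class PagingTranslation {
    static final int OFFSET_BITS = 12;            // log2(4096): 4 KB pages
    static final int PAGE_SIZE = 1 << OFFSET_BITS;

    // The high bits of the virtual address select the page...
    static int pageNumber(int virtualAddress) {
        return virtualAddress >>> OFFSET_BITS;
    }

    // ...and the low bits are the offset within that page.
    static int offset(int virtualAddress) {
        return virtualAddress & (PAGE_SIZE - 1);
    }

    // Once the TLB (or, on a miss, the page table) maps the page number to a
    // physical frame number, the physical address is the frame bits plus the
    // unchanged offset.
    static int physicalAddress(int frameNumber, int offset) {
        return (frameNumber << OFFSET_BITS) | offset;
    }

    public static void main(String[] args) {
        int va = 0x1234;
        // 0x1234 = page 1, offset 0x234; if page 1 lives in frame 7,
        // the physical address is 0x7234.
        System.out.printf("page=%d offset=0x%x physical=0x%x%n",
                pageNumber(va), offset(va), physicalAddress(7, offset(va)));
    }
}
```

The TLB's job is precisely to cache that page-number-to-frame-number mapping so the shift-and-or above can happen without walking the page table.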

Wednesday, July 3, 2024

CST334 Week 3 Report


This week, we learned a ton in Operating Systems. We learned mostly about how the OS virtualizes memory. In essence, we are looking at how the OS utilizes hardware-based address translation. The OS creates an easy-to-use abstraction of memory by way of the address space: this space contains all the memory state of the running program: the code, the stack, the heap, etc. Each process works not with real memory addresses, but rather with virtual memory addresses that must be translated into real, physical memory addresses whenever the process accesses memory. With dynamic relocation, we use a base register to transform virtual addresses into physical addresses; furthermore, a bounds register ensures that addresses are within the confines of the address space. These base and bounds registers are typically managed by a part of the CPU known as the memory management unit, or MMU. On an important note, both internal and external fragmentation are necessary evils in this address translation model, and our job as computer scientists is to try to minimize them while managing memory.
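The base-and-bounds translation above boils down to one comparison plus one addition, which might look like this as a sketch (the 32 KB base and 16 KB bounds are just sample values):

```java
public class BaseAndBounds {
    // Dynamic relocation: every virtual address is checked against the bounds
    // register, then offset by the base register to get a physical address.
    // (On real hardware the MMU performs this on every memory access.)
    static long translate(long virtualAddress, long base, long bounds) {
        if (virtualAddress < 0 || virtualAddress >= bounds) {
            // Out-of-bounds access: the hardware would raise an exception
            // and the OS would typically terminate the process.
            throw new IllegalArgumentException("out of bounds: " + virtualAddress);
        }
        return base + virtualAddress;
    }

    public static void main(String[] args) {
        // A process with a 16 KB address space loaded at physical 32 KB:
        // virtual address 1000 maps to physical 32768 + 1000 = 33768.
        System.out.println(translate(1000, 32768, 16384));
    }
}
```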


Saturday, June 29, 2024

CST334 Week 2 Report

Operating Systems Week 2 Summary

This week we learned a ton about processes and how the operating system manages them. The operating system virtualizes one or a few CPUs in order to run many processes concurrently. The operating system employs a combination of low-level machinery and higher-level scheduling policies to accomplish this. On the low-level side, we have techniques such as the context switch, which is used when changing the currently running process to a new one. We also see various system calls such as fork(), exec(), and wait(). In order to protect the system from any harm a process may cause, we utilize limited direct execution - the process must run under limitations imposed by the operating system. While the process typically runs in user mode, the operating system by default runs in kernel mode, which means it has unlimited access to machine resources. 

We must combine low-level machinery with higher-level policies, or disciplines, all while trying to optimize for performance metrics such as turnaround time and response time. There are several approaches that can be considered, from shortest job first to round robin, in determining how to schedule processes. Ultimately, a more optimal approach is using a multi-level feedback queue, or MLFQ. In this scheme, we have a number of distinct queues, each with a different priority level. The processes at the highest priority level run first in round-robin fashion, which means each process runs for a predetermined time slice, or scheduling quantum, in alternating fashion with the other processes at that level. When a job's allotment is spent, its priority level is automatically decremented - this is so that more interactive processes can stay at higher priority, while longer and more CPU-intensive processes settle into the lower priority levels.
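To see the demotion rule in action, here's a toy MLFQ simulation in Java. It skips arrival times and priority boosts, so it's only a sketch of the core idea: each level is round robin, and a job that burns a full time slice without finishing drops one level:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class MlfqSketch {
    // Toy MLFQ: jobs start in the top queue; each time a job uses a full
    // quantum without finishing, it is demoted one level. The bottom queue
    // simply keeps cycling round robin.
    static List<String> run(int levels, int quantum, int[] burst, String[] names) {
        List<ArrayDeque<Integer>> queues = new ArrayList<>();
        for (int i = 0; i < levels; i++) queues.add(new ArrayDeque<>());
        int[] remaining = burst.clone();
        for (int j = 0; j < burst.length; j++) queues.get(0).add(j);

        List<String> finishOrder = new ArrayList<>();
        int level = 0;
        while (level < levels) {
            if (queues.get(level).isEmpty()) {
                level++;            // nothing runnable here; look lower
                continue;
            }
            int job = queues.get(level).poll();
            remaining[job] -= Math.min(quantum, remaining[job]);
            if (remaining[job] == 0) {
                finishOrder.add(names[job]);
            } else {
                // allotment spent: demote one level (bottom level is the floor)
                queues.get(Math.min(level + 1, levels - 1)).add(job);
            }
            level = 0;              // always serve the highest non-empty queue
        }
        return finishOrder;
    }

    public static void main(String[] args) {
        // A short interactive job finishes quickly while the CPU-bound job sinks.
        System.out.println(run(3, 10, new int[]{10, 50},
                new String[]{"interactive", "cpu-bound"}));
    }
}
```

The short job completes within its first quantum and never leaves the top queue, while the long job is demoted level by level, which is exactly the behavior the priority-decrement rule is designed to produce.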

Wednesday, June 26, 2024

CTI: Personal Value Proposition (Beta Version)

 Dear Amazon and Google,

I heard that you are in need of a network engineer. I am here because I believe we can work together to produce a better level of stability and function across your internal workflows and the systems your clients navigate.

I am a network engineer with five years' experience and a strong reputation for improving company network reliability, and you can count on me to accomplish the following:

  • reduce network downtime by at least 50%
  • continually design new and improved implementations for your local network
  • fully integrate within the current team and enhance group productivity

As leading companies in the software industry, you will benefit from what I can bring to the table. I would like to speak with you about how we can begin improving your network strategies. 

Sincerely,

Luis