Showing posts from December, 2024

Week 32 - Persisting as a Student of CS

In my previous journal entry, I covered persistence in operating systems. Outside of computing, persistence means to repeat, continue, and carry on; in many cases, being persistent brings continuity, reliability, and success. Persistence in real life and persistence in operating systems both require some sort of structure, and both require maintenance of a system. To persist as a student of CS, it’s crucial to keep tackling concepts across different layers of abstraction so that we may understand everything as a whole. CST 334 has provided me with valuable tools for many other areas of this field. Whether it’s Software, Networking, or Data, everything relies on the OS and the kernel, which act as a medium between the hardware and the user.

Week 31 - Persistence

This week we covered persistence in file systems. Persistence refers to a storage system’s ability to retain information over time, including across crashes, power failures, and other hardware malfunctions. File systems achieve persistence through structured directories, inodes that hold per-file metadata, and superblocks that track file-system-wide information. RAID arrays can further enhance persistence by combining disks for redundancy. Caching helps performance by keeping frequently accessed disk blocks in memory, though cached writes must eventually be flushed to disk before they are truly durable. Persistence ensures the long-term preservation of data while balancing performance and reliability.
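To make the flushing point concrete, here is a minimal, hedged sketch (not from the course materials) of how an application on a POSIX system can ask the OS to persist data past a crash. It uses the standard write-then-rename pattern with `fsync`; the file names are hypothetical.

```python
import os
import tempfile

def durable_write(path, data):
    """Write data to path and force it to stable storage before returning."""
    # Write to a temporary file first, then rename over the target:
    # rename() is atomic on POSIX, so a crash leaves either the old
    # file or the new one, never a half-written mix.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()              # flush user-space buffers into the kernel
        os.fsync(f.fileno())   # force the kernel's cached blocks to disk
    os.rename(tmp, path)       # atomically replace the old file
    # Persist the directory entry itself so the rename survives a crash.
    dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    os.fsync(dir_fd)
    os.close(dir_fd)

# Hypothetical demo: write a journal entry durably and read it back.
demo_dir = tempfile.mkdtemp()
demo_path = os.path.join(demo_dir, "journal.txt")
durable_write(demo_path, b"week 31")
with open(demo_path, "rb") as f:
    recovered = f.read()
```

Without the `fsync` calls, the data might sit only in the OS buffer cache, which is exactly the cache-versus-durability trade-off described above.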

Week 30 - Concurrency Continued

This week we explored further concepts in concurrency, focusing on synchronization and common problems in multi-threading. I learned that problems can arise when a system processes multiple tasks simultaneously or in overlapping time periods. We were introduced to semaphores, an important synchronization primitive that can be used to manage access to shared resources in concurrent programs. The lab this week focused on synchronization using semaphores; the aim was to get familiar with solving classic synchronization issues like the producer-consumer problem. Using semaphores, we implemented thread-safe operations to ensure mutual exclusion and prevent race conditions. This lab reinforced the importance of synchronization techniques for avoiding the unpredictable and inconsistent behavior caused by unsynchronized access to shared resources.
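As an illustration of the ideas above (not the lab’s actual code, which used C semaphores), here is a small sketch of the producer-consumer problem using Python’s `threading.Semaphore`: one semaphore counts empty slots, one counts full slots, and a binary semaphore provides mutual exclusion on the shared buffer.

```python
import threading

BUF_SIZE = 4
buffer = []                                # shared bounded buffer
mutex = threading.Semaphore(1)             # mutual exclusion on the buffer
empty = threading.Semaphore(BUF_SIZE)      # counts free slots
full = threading.Semaphore(0)              # counts filled slots
results = []                               # what the consumer received

def producer(items):
    for item in items:
        empty.acquire()        # wait until there is a free slot
        with mutex:            # enter critical section
            buffer.append(item)
        full.release()         # signal one filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait until there is a filled slot
        with mutex:            # enter critical section
            item = buffer.pop(0)
        empty.release()        # signal one free slot
        results.append(item)

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start()
p.join(); c.join()
```

With a single producer and a single consumer, every item is handed off exactly once and in order; without the `mutex`, two threads could mutate `buffer` at the same time and race.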