Core Dumped

  • Stop Killing Processes! Make Ctrl+C Meow Instead… (with Signals)
  • Why Some Projects Use Multiple Programming Languages
  • Why Can’t Programs Access Each Other’s Memory?
  • The Weirdest Bug in Programming - Race Conditions
  • This Simple Algorithm Powers Real Interpreters: Pratt Parsing
  • What Happens When a Program Calls Sleep?
  • The Fancy Algorithms That Make Your Computer Feel Smoother
  • That One Time I Needed a Linked List
  • How Hardware Assists Software When Multitasking
  • Why Applications Are Operating-System Specific
  • The most common question on my channel
  • Threads On Multicore Systems
  • Why Are Threads Needed On Single Core Processors
  • IPC: To Share Memory Or To Send Messages
  • How the Clock Tells the CPU to “Move Forward”
  • How a Single Bit Inside Your Processor Shields Your Operating System’s Integrity
  • The Most Successful Idea in Computer Science
  • A PROGRAM is not a PROCESS.
  • How computer processors run conditions and loops
  • Capacitors are terrible at remembering data. But for this reason we continue doing it.
  • HOW COMPUTERS CAST STRINGS TO NUMBERS
  • CRAFTING A CPU TO RUN PROGRAMS
  • HOW TRANSISTORS REMEMBER DATA
  • HOW TRANSISTORS RUN CODE?
  • CONCURRENCY IS NOT WHAT YOU THINK
  • ARRAYLIST VS LINKEDLIST
  • WHY IS THE HEAP SO SLOW?
    • alloc allocates new memory on the heap, making a system call when no existing free region is large enough to satisfy the request
      • allocations require searching the heap’s available subregions for a suitable fit
    • free deallocates the memory returned by a call to alloc and is also responsible for merging the freed region with any adjacent free/unallocated region (“hole”) into one larger hole
    • heap-allocated structures need not be contiguous: a linked list, for example, stores in each node a fixed-size pointer holding the memory address of the next node of that structure
      • nodes end up scattered throughout memory, which decreases the probability of cache hits and results in slower retrieval from memory
    • Vectors (ArrayList in Java; std::vector in low-level languages like C++) keep their elements compact (contiguous) in memory
    • Runtime errors using the heap
      • memory leaks
      • null pointer dereferences
      • dangling pointers
  • WHY IS THE STACK SO FAST?
    • each function call defines a region of the stack called a stack frame
    • the return address is pushed into the frame, along with space for the return value sized by the function’s return type (which we explicitly annotate for compile-time checks); when the stack frame (all of its values) is popped, execution jumps back to the return address and the return value is written into that reserved slot
    • recursive functions can cause stack overflows when the base case is not reached before stack memory is exhausted; this can happen even when the base case is correctly defined, if the recursion is simply too deep (not only if the base case is missing/incorrect)
      • this is why iterative algorithms are easier to reason about than recursive ones in terms of memory (recall: NASA/JPL’s coding rules for safety-critical software forbid recursion; iteration is used instead)
    • stack memory is preallocated and tends to be on the order of megabytes (the default maximum stack size is about 8 MB on Linux and 1 MB on Windows)
    • modern CPUs dedicate a register to the stack pointer, so the CPU does not have to query memory or even cache to identify the top of the stack or the next free address (one increment/decrement of that register)
  • The size of your variables matters.
  • Rust in 2023. The definitive summary.