Adam Neves


Week 2 – You’re Not Stuck - You Just Skipped the Basics: Essential Memory

Hello, dev community! 👋

We’re back with our series “You’re Not Stuck - You Just Skipped the Basics”. After exploring the CPU (the brain behind your program) in Week 1, today we’re diving into another fundamental concept every developer needs to master to truly level up: memory.

Every piece of code we write, every variable we declare, every object we create: they all live in and interact with the computer’s memory. Ignoring how memory works is like trying to build a house without understanding the ground it stands on.

In this post, we’ll break down the essential memory concepts that directly impact the performance, robustness, and behavior of your code.

What Is Memory and Why Does Your Code Depend on It?

Think of RAM (Random Access Memory) as your computer’s temporary workspace while a program is running. It holds the data the processor needs quick access to in order to execute your instructions.

Your code depends on memory because it’s where everything happens during execution:

  • The actual instructions of your program are loaded into memory.
  • Variables store their values in memory.
  • Data structures (lists, arrays, objects, etc.) are built in memory.
  • Function calls and execution state are managed in memory.

Without memory, there’s no space for your program to exist or operate. Understanding this dependency is the first step toward writing more conscious and efficient code.
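
To make this concrete, here’s a tiny sketch in Java (any language would do); the comments mark where each of those four things shows up at runtime:

```java
public class MemoryDemo {
    public static void main(String[] args) {
        // 1. The compiled instructions of this method were loaded
        //    into memory before the CPU could execute them.

        int count = 3;                   // 2. a variable storing its value in memory
        int[] numbers = {10, 20, 30};    // 3. a data structure built in memory

        int total = sum(numbers, count); // 4. the call pushes a frame onto the call
                                         //    stack: arguments, return address, locals
        System.out.println(total);
    }

    static int sum(int[] values, int n) {
        int acc = 0;                     // a local variable in this call's frame
        for (int i = 0; i < n; i++) {
            acc += values[i];
        }
        return acc;                      // the frame is discarded on return
    }
}
```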

Stack vs. Heap: The Two Main Memory Areas

When your program runs, it primarily uses two distinct memory areas: the Stack and the Heap.

Stack:

  • Works in LIFO (Last-In, First-Out) order: the last thing pushed is the first thing popped.
  • Used to store fixed-size and known-at-compile-time data, such as local variables (primitives in many languages) and function call metadata (return addresses, arguments, etc.).
  • Extremely fast allocation/deallocation, since it only involves moving the stack pointer up or down.
  • Each thread typically has its own stack.
  • Has a limited size and can cause a stack overflow if too many nested function calls or large variables are allocated.

Heap:

  • A larger and more flexible memory area.
  • Used for data whose size is not known at compile time, or that needs a longer lifespan than the current function scope. Dynamically created objects (new, malloc, etc.) usually go to the Heap.
  • Slower allocation than the Stack, as the system must find a suitable memory block.
  • Deallocation may be manual (C/C++) or automatic (via Garbage Collector).
  • Shared among all threads (with proper concurrency controls).

Knowing where your data lives (Stack or Heap) helps you understand performance implications and memory management in your code.
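
Here’s a small Java sketch of the conventional model (real JVMs may optimize; escape analysis, for example, can keep some objects off the Heap entirely):

```java
public class StackVsHeap {
    public static void main(String[] args) {
        int age = 30;                 // primitive local: lives in main()'s Stack frame
        int[] scores = new int[1000]; // 'new' allocates the array on the Heap;
                                      // 'scores' is just a reference on the Stack

        process(scores);              // each call gets its own Stack frame

        // When main() returns, its frame is popped instantly (the stack
        // pointer just moves). The array on the Heap sticks around until
        // the Garbage Collector reclaims it.
    }

    static void process(int[] data) {
        int first = data[0]; // 'data' (a reference) and 'first' live in this
                             // frame and vanish the moment process() returns
        System.out.println(first);
    }
}
```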

Variables, Scope, and Automatic Management (GC)

Variables are symbolic names we give to memory locations that store values. A variable’s scope defines the code region where it’s accessible, and more importantly, its lifetime.

  • Variables declared inside a function (local scope) usually live on the Stack and are automatically deallocated when the function ends.
  • Dynamically created objects live on the Heap, while the variables referencing them (usually pointers or references) often live on the Stack themselves.

In many modern languages (JavaScript, Python, Java, C#, Go, etc.), memory management in the Heap is handled by a Garbage Collector (GC).

The GC automatically tracks which objects in the Heap are still referenced by your program. If an object is no longer referenced, it’s considered "garbage" and the GC will free the memory, making it available for reuse.

GC doesn’t eliminate the need to understand memory; it just changes the type of memory issues you’ll face.
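
To see scope and GC working together, here’s a minimal Java sketch: the object outlives the function that created it because a reference survives, and it only becomes collectible once that last reference is gone.

```java
import java.util.ArrayList;
import java.util.List;

public class GcLifetimes {
    public static void main(String[] args) {
        List<String> names = buildList(); // the List lives on the Heap, outliving
                                          // the function that created it
        System.out.println(names.size()); // still referenced, so still alive

        names = null; // last reference gone: the List is now "garbage", and the
                      // GC may reclaim it whenever it runs (we don't control when)
    }

    static List<String> buildList() {
        List<String> list = new ArrayList<>(); // allocated on the Heap
        list.add("Ada");
        list.add("Linus");
        return list; // the local variable 'list' dies with this Stack frame, but
                     // the object survives because the caller keeps a reference
    }
}
```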

References, Allocation, and Memory Leaks

When you create an object in the Heap, the variable accessing it usually holds a reference to it. Think of a reference as the “address” in memory where the object is stored.

Allocation is the act of reserving a space in memory (Stack or Heap) to store data. Whenever you declare a local variable or create a new object, allocation is happening.

A memory leak occurs when memory that was allocated is never released even though it’s no longer needed. In garbage-collected languages, this typically happens when a Heap object is still referenced even though your program will never use it again; the Garbage Collector can only release memory once no references to it remain.

Memory leaks cause your application to consume more and more memory over time, leading to:

  • Degraded performance: The OS might start swapping memory to disk to compensate for low free RAM.
  • Instability: Eventually, your app may run out of memory and crash with “Out of Memory” errors.

Understanding how references work and how allocation/deallocation happens (manually or via GC) is critical for identifying and preventing memory leaks.
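
Here’s a classic leak pattern sketched in Java (the class and names are made up for illustration): a long-lived static collection keeps references to objects the program will never touch again, so the GC is never allowed to reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // A static field lives as long as the program does, so everything it
    // references stays reachable, and the GC never frees reachable objects.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024 * 1024]; // 1 MB allocated on the Heap
        CACHE.add(buffer); // kept referenced "forever", even though the program
                           // never reads it again: this is the leak
    }

    public static void main(String[] args) {
        while (true) {
            handleRequest(); // memory grows on every call until the process
                             // dies with an OutOfMemoryError
        }
    }
}
```

The fix is to evict entries when they’re no longer needed, bound the cache’s size, or use weak references so the GC can step in.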

Conclusion

Memory isn’t just some “low-level” concept that only matters in C or embedded systems. It’s the foundation where all your code operates.

Mastering concepts like Stack vs. Heap, scope-driven lifetimes, and reference-based allocation (and how GC fits into this) gives you a clearer mental model of what your code is actually doing behind the scenes. That empowers you to write more robust, predictable code and debug problems more effectively.

Don’t skip the basics. Next week, we’ll dive into what really happens when you type a URL, and explore the fundamentals of Networking.

Bonus: Types of Memory and Why Access Speed Matters

Memory in a computer isn’t just RAM and that’s it. There are multiple levels of memory, each with different speeds, sizes, and purposes. Understanding this hierarchy helps you reason about performance more deeply.

Common Types of Memory

  • Registers

    Super fast, tiny memory blocks inside the CPU itself. Used to hold the values the CPU is actively working on.

    Examples: EAX, EBX, RAX, RDX (the exact names depend on the CPU architecture).

  • L1, L2, L3 Cache

    Small memory levels closer to the CPU than RAM. Used to reduce access time to frequently used data.

    • L1: fastest but smallest
    • L2: mid-size, slower
    • L3: largest, but slowest among the caches
  • RAM (Main Memory)

    Where your programs and runtime data live during execution. Much slower than CPU cache, but much larger.

  • Disk (Swap Space / Virtual Memory)

    If RAM is full, the system may use the disk as memory, which is extremely slow in comparison.


Approximate Memory Access Times

Memory Type    Access Time
-----------    ---------------
Registers      ~1 CPU cycle
L1 Cache       ~3–4 cycles
L2 Cache       ~10–12 cycles
L3 Cache       ~30–50 cycles
RAM            ~100–150 cycles
SSD (swap)     ~10,000+ cycles

A well-optimized program minimizes trips to RAM and keeps its hot data in registers and cache; that’s how you make code really fast.
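
You can feel this hierarchy even from high-level code. The rough Java benchmark below (a sketch, not a rigorous measurement) sums a 2D array twice: row by row, which walks memory sequentially and plays nicely with the cache, and column by column, which jumps around and keeps missing it. On most machines the row-order pass is several times faster, even though both do exactly the same arithmetic.

```java
public class CacheLocality {
    public static void main(String[] args) {
        int n = 4096;
        int[][] grid = new int[n][n]; // in Java, a 2D array is an array of row arrays

        long t0 = System.nanoTime();
        long sumRows = 0;
        for (int i = 0; i < n; i++)     // row order: consecutive elements, so the
            for (int j = 0; j < n; j++) // cache line fetched for grid[i][0] already
                sumRows += grid[i][j];  // holds the next several values
        long rowNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        long sumCols = 0;
        for (int j = 0; j < n; j++)     // column order: every access lands in a
            for (int i = 0; i < n; i++) // different row array, so the cache misses
                sumCols += grid[i][j];
        long colNanos = System.nanoTime() - t1;

        // print the sums so the JIT can't optimize the loops away
        System.out.println("checksum: " + (sumRows + sumCols));
        System.out.printf("row order: %d ms, column order: %d ms%n",
                rowNanos / 1_000_000, colNanos / 1_000_000);
    }
}
```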


TL;DR

  • Registers are where your CPU does its thinking — lightning fast but limited.
  • Memory hierarchy impacts your code’s performance more than you think.
  • Understanding how data flows from storage to CPU helps you write faster, more efficient code.
  • This isn’t just for systems programming — even frontend and backend code can suffer from poor memory usage patterns.

You’re building abstractions every day — but those abstractions run on physical memory. Know the terrain, and you'll navigate like a pro.


This post is part of the series “You’re Not Stuck - You Just Skipped the Basics”.
