Coding Problems

LRU Cache | LeetCode 146

Welcome to our breakdown of the LRU Cache (LeetCode 146). This is one of the most frequently asked data-structure design questions in software engineering interviews, because it requires you to combine two distinct fundamental data structures into a single O(1) design.

Problem Statement

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

  • LRUCache(int capacity) Initialize the LRU cache with positive size capacity.
  • int get(int key) Return the value of the key if the key exists, otherwise return -1.
  • void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.


The Core Concept: HashMap + Doubly Linked List

We have two conflicting constraints.

  1. Retrieval (get) must run in O(1) time. This calls for a HashMap.
  2. We must track the ordering from least to most recently used, and evict in O(1) time. This calls for a linked list.

An array needs O(N) shifting to remove an element from the middle. A singly linked list needs an O(N) traversal to find a node's prev before removing it. The solution is to pair the HashMap with a custom Doubly Linked List.

  • The map stores each key alongside a pointer to its node in the list.
  • The doubly linked list keeps the nodes ordered by recency of use.
  • Two dummy sentinel nodes (head and tail) bound the list, so insertions and removals never have to special-case null neighbors.

Every time a key is read via get() or written via put(), we splice its node out of its current position in O(1) and reattach it just before the tail sentinel, marking it Most Recently Used. When capacity is exceeded, we remove head.next, which by construction is the Least Recently Used node.
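The splice-out and append-to-tail operations described above each reduce to a handful of pointer assignments. A minimal standalone sketch (names here are illustrative, not the final solution):

```python
class Node:
    def __init__(self, key=None):
        self.key = key
        self.prev = self.next = None

head, tail = Node(), Node()        # dummy sentinels bounding the list
head.next, tail.prev = tail, head

def insert_before_tail(node):
    """Attach node just before the tail sentinel (MRU position)."""
    prev = tail.prev
    prev.next = node
    node.prev = prev
    node.next = tail
    tail.prev = node

def remove(node):
    """Splice node out in O(1) by linking its neighbors together."""
    node.prev.next = node.next
    node.next.prev = node.prev

for k in (1, 3, 5):                # insert keys oldest-first
    insert_before_tail(Node(k))

remove(head.next)                  # evict the oldest node (key 1)

order = []                         # walk the list: oldest to newest
n = head.next
while n is not tail:
    order.append(n.key)
    n = n.next
# order is now [3, 5]
```

Because of the sentinels, neither helper has to check whether the node sits at the front or back of the list.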

graph LR
    A["Head [Dummy Spacer]"] <--> B["[Key 1: Val A]<br>Oldest Node"]
    B <--> C["[Key 3: Val X]"]
    C <--> D["[Key 5: Val Y]<br>Most Recent Node"]
    D <--> E["Tail [Dummy Spacer]"]
    
    Hash[(HashMap Directory)] -.->|"Hash(Key 3) --> Memory Pointer"| C
class Node:
    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        # key -> node
        self.cache = {}

        # Dummy sentinel nodes bounding the list
        self.left = Node()   # left.next is always the LRU node
        self.right = Node()  # right.prev is always the MRU node
        
        self.left.next = self.right
        self.right.prev = self.left

    # Helper: splice a node out of the list in O(1)
    def remove(self, node):
        prev_nd = node.prev
        nxt_nd = node.next

        # Link the neighbors to each other, bypassing the removed node
        prev_nd.next = nxt_nd
        nxt_nd.prev = prev_nd

    # Helper: insert a node just before the right sentinel (MRU position)
    def insert(self, node):
        prev_tail = self.right.prev

        # Attach node after the previous last node
        prev_tail.next = node
        self.right.prev = node

        # Point node at its new neighbors
        node.next = self.right
        node.prev = prev_tail

    def get(self, key: int) -> int:
        if key in self.cache:
            # Refresh its recency: move it to the MRU position
            self.remove(self.cache[key])
            self.insert(self.cache[key])
            return self.cache[key].val
        return -1

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            # Remove the stale node; a fresh one is inserted below
            self.remove(self.cache[key])

        # Wrap the pair in a new node and mark it most recently used
        self.cache[key] = Node(key, value)
        self.insert(self.cache[key])

        # Evict if over capacity
        if len(self.cache) > self.cap:
            # The LRU node sits immediately after the left sentinel
            lru = self.left.next
            self.remove(lru)
            del self.cache[lru.key]

Complexity Analysis

| Metric | Complexity | Explanation |
| --- | --- | --- |
| Time | O(1) | Both get and put run in amortized O(1): the dictionary lookup is constant time, and moving a node is a fixed number of pointer reassignments with no loops. |
| Space | O(capacity) | The map and the list each hold at most capacity entries. |
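Worth noting: Python's standard library offers collections.OrderedDict, whose move_to_end and popitem methods run in O(1), so the same policy can be sketched far more compactly. Interviewers usually still expect the explicit linked list, so treat this as a cross-check rather than the intended answer:

```python
from collections import OrderedDict

class LRUCacheOD:
    """Sketch of LRU semantics on top of OrderedDict."""

    def __init__(self, capacity: int):
        self.cap = capacity
        self.cache = OrderedDict()  # insertion order = recency order

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)          # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)   # evict least recently used

# Trace of LeetCode 146, Example 1
lru = LRUCacheOD(2)
lru.put(1, 1)
lru.put(2, 2)
r1 = lru.get(1)   # 1
lru.put(3, 3)     # evicts key 2
r2 = lru.get(2)   # -1
```

Both implementations satisfy the O(1) requirement; the OrderedDict simply hides the doubly linked list inside the standard library.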

Try it yourself!

Try this problem on LeetCode (146)