Handle System Architecture

Design Philosophy

Indirection as Memory Density Strategy

The handle system embodies a counter-intuitive architectural principle: strategic indirection in service of memory density. Rather than handing out direct object references, the system interposes a 32-bit token layer that enables memory compaction techniques impossible with raw pointers.

Fundamental Trade-off: Accept significant indirection overhead (extra function calls, conditional branches, and mutex acquisitions) in exchange for a 50% memory reduction per reference, enabling processing of datasets roughly 2x larger than a direct-pointer approach allows.
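
The size argument in miniature (a sketch; the struct names are illustrative, not the library's types):

#include <cstdint>

// Hypothetical reference layouts, for comparison only.
struct DirectRef {
    void* object;             // 8 bytes on a 64-bit platform
};

struct HandleRef {
    std::uint32_t token;      // 2-bit manager type + 30-bit pool index
};

static_assert(sizeof(HandleRef) == 4, "handle reference: 4 bytes");
// DirectRef occupies 8 bytes on a 64-bit platform, so replacing each stored
// pointer with a handle halves the per-reference footprint.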

Layered Abstraction Hierarchy

The system employs a three-tier abstraction hierarchy:

  1. Public Interface Layer: Unified handle semantics across all JSON types
  2. Manager Facade Layer: Type-specific routing and policy enforcement
  3. Implementation Pool Layer: Actual memory management and object lifecycle

This separation enables interface stability while allowing internal optimization flexibility.

Immortal Resource Strategy

Critical Architectural Decision: Ubiquitous JSON values (null, true, false, invalid) are treated as immortal system resources rather than managed objects.

Rationale: In large-scale JSON processing, these values appear millions of times. Traditional per-instance allocation would cause catastrophic memory waste and OOM conditions.

Implementation Strategy: Create once during system initialization, cache tokens permanently, bypass normal lifecycle management.

System Architecture

Manager Ecosystem

The handle system operates as a managed ecosystem with five distinct memory pools coordinated through a central registry:

Handle System Ecosystem:
┌─────────────────────────────────────────────────────────────────┐
│                   Global Coordination Layer                     │
│  ┌─────────────────────────────────────────────────────────────┐│
│  │         JsonHandleManagerRegistryCompact                    ││
│  │                   (Singleton)                               ││
│  │  [Scalar] [String] [Array] [Object]                         ││
│  │     ↓        ↓       ↓       ↓                              ││
│  └─────┼────────┼───────┼───────┼──────────────────────────────┘│
└────────┼────────┼───────┼───────┼───────────────────────────────┘
         │        │       │       │
┌────────▼────────▼───────▼───────▼──────────────────────────────┐
│                Memory Pool Layer                               │
│                                                                │
│ ScalarManagerCompact     StringManagerCompact                  │
│ (Facade - No Pools)     (Dedicated Pool)                       │
│        │                       │                               │
│        ├─── IntManagerCompact  │                               │
│        │   (Hidden Pool)       │                               │
│        └─── FloatManagerCompact│                               │
│            (Hidden Pool)       │                               │
│                                │                               │
│ ArrayManagerCompact           ObjectManagerCompact             │
│ (Dedicated Pool)             (Dedicated Pool)                  │
└────────────────────────────────────────────────────────────────┘

Token Space Architecture

32-bit Handle Token Encoding Strategy:

The system partitions the 32-bit token space to encode both manager routing and object addressing:

Token Bit Allocation:
┌──────────┬─────────────────────────────────────────────────┐
│Type (2b) │              Index (30b)                        │
│31...30   │              29...0                             │
└──────────┴─────────────────────────────────────────────────┘

Manager Type Encoding:
00 → ScalarManagerCompact (routes to Int/Float sub-managers)
01 → StringManagerCompact  
10 → ArrayManagerCompact
11 → ObjectManagerCompact

Address Space per Manager: 2^30 = 1,073,741,824 objects

Architectural Constraint: The 30-bit address space is a deliberate limitation that trades unbounded scalability for memory density. The constraint is acceptable for practical JSON processing workloads and is what enables the 50% handle size reduction.
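
Sketched in code, the encoding and decoding reduce to a few shifts and masks (the helper names are illustrative; the library's actual functions may differ):

#include <cstdint>

// 2-bit manager type in bits 31..30, 30-bit index in bits 29..0.
enum class ManagerType : std::uint32_t {
    Scalar = 0b00,  // routes to Int/Float sub-managers
    String = 0b01,
    Array  = 0b10,
    Object = 0b11,
};

constexpr std::uint32_t kIndexBits = 30;
constexpr std::uint32_t kIndexMask = (1u << kIndexBits) - 1;  // 0x3FFFFFFF

constexpr std::uint32_t encode_token(ManagerType type, std::uint32_t index) {
    return (static_cast<std::uint32_t>(type) << kIndexBits) | (index & kIndexMask);
}

constexpr ManagerType token_type(std::uint32_t token) {
    return static_cast<ManagerType>(token >> kIndexBits);
}

constexpr std::uint32_t token_index(std::uint32_t token) {
    return token & kIndexMask;
}

static_assert(token_type(encode_token(ManagerType::Array, 42)) == ManagerType::Array,
              "type bits round-trip");
static_assert(token_index(encode_token(ManagerType::Array, 42)) == 42,
              "index bits round-trip");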

Scalar Manager Facade Pattern

ScalarManagerCompact implements a pure facade pattern with zero storage, delegating all operations to hidden implementation managers:

Scalar Manager Internal Architecture:
┌─────────────────────────────────────────────────────────────────┐
│                  ScalarManagerCompact                           │
│                  (Public Facade)                                │
│                                                                 │
│  ┌─────────────────────────────────────────────────────────────┐│
│  │ No Memory Pools:                                            ││
│  │ - No handles_ vector                                        ││
│  │ - No freelist_ heap                                         ││
│  │ - No hash_to_index_map_                                     ││
│  │                                                             ││
│  │ Pure Delegation:                                            ││
│  │ - Route INT tokens → JsonHandleManagerIntCompact            ││
│  │ - Route FLOAT tokens → JsonHandleManagerFloatCompact        ││
│  │ - Route NULL/BOOL/INVALID → Immortal singletons             ││
│  └─────────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────────┘
              │                            │
              ▼                            ▼
┌─────────────────────────┐    ┌─────────────────────────┐
│ JsonHandleManagerInt    │    │ JsonHandleManagerFloat  │
│ (Hidden Implementation) │    │ (Hidden Implementation) │
│                         │    │                         │
│ ├─ handles_             │    │ ├─ handles_             │
│ ├─ freelist_            │    │ ├─ freelist_            │
│ └─ hash_to_index_map_   │    │ └─ hash_to_index_map_   │
└─────────────────────────┘    └─────────────────────────┘

Design Rationale: The facade pattern enables:

  • Token space conservation: Avoids additional type encoding bits
  • NaN-boxing encapsulation: Hides IEEE 754 complexity from public interface
  • Unified scalar semantics: Single create() interface for all scalar types
  • Implementation flexibility: Internal managers can optimize independently
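
The delegation described above, condensed into a self-contained sketch (the pool stand-ins and method bodies are illustrative, not the library's code):

#include <cstdint>

// Minimal stand-ins for the hidden implementation managers.
struct IntPoolSketch   { std::uint32_t create(std::int64_t) { /* pool logic */ return 0; } };
struct FloatPoolSketch { std::uint32_t create(double)       { /* pool logic */ return 0; } };

// The facade owns no pools of its own; every call is routed.
class ScalarFacadeSketch {
public:
    std::uint32_t create(std::int64_t v) { return ints_.create(v); }    // -> hidden int pool
    std::uint32_t create(double v)       { return floats_.create(v); }  // -> hidden float pool
    std::uint32_t create(bool v) {                                      // -> immortal tokens
        return v ? true_token_ : false_token_;
    }
private:
    IntPoolSketch   ints_;
    FloatPoolSketch floats_;
    std::uint32_t true_token_  = 0;  // cached at initialization (see Immortal
    std::uint32_t false_token_ = 0;  // Singleton Lifecycle below)
};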

Memory Pool Patterns

Deduplication-Enabled Pools (Int, Float, String):

Memory Pool with Deduplication:
┌─────────────────────────────────────────────────────────────────┐
│ Primary Strategy: Avoid Redundant Allocation                    │
│                                                                 │
│ 1. Hash Lookup → Existing Object? → Increment Reference         │
│ 2. Cache Miss → Freelist Reuse → New Object                     │
│ 3. Growth Only When Necessary                                   │
│                                                                 │
│ Memory Behavior:                                                │
│ - Common values (42, "hello", 3.14) stored once                 │
│ - Millions of handles → Thousands of objects                    │
│ - Memory usage: O(unique_values), not O(total_handles)          │
└─────────────────────────────────────────────────────────────────┘

Independence-Focused Pools (Array, Object):

Memory Pool without Deduplication:
┌─────────────────────────────────────────────────────────────────┐
│ Primary Strategy: Mutable Container Independence                │
│                                                                 │
│ 1. No Hash Lookup → Always New Object                           │
│ 2. Freelist Reuse for Index Compaction                          │
│ 3. Each Container Instance Isolated                             │
│                                                                 │
│ Memory Behavior:                                                │
│ - Every create() → New container instance                       │
│ - Modifications isolated to specific container                  │
│ - Memory usage: O(total_containers)                             │
└─────────────────────────────────────────────────────────────────┘

Memory Lifecycle Patterns

Allocation Strategies

Deduplication-First Allocation (Immutable types):

Allocation Decision Tree:
┌─────────────────┐
│ Value Request   │
└─────────┬───────┘
          ▼
    ┌─────────────┐     Yes    ┌─────────────────┐
    │Hash Lookup  ├──────────► │Increment RefCnt │
    │in Cache?    │            │Return Existing  │
    └─────┬───────┘            └─────────────────┘
          │No
          ▼
    ┌─────────────┐     Yes    ┌─────────────────┐
    │Freelist     ├──────────► │Reuse Index      │
    │Available?   │            │Create Object    │
    └─────┬───────┘            └─────────────────┘
          │No
          ▼
    ┌─────────────┐            ┌─────────────────┐
    │Grow Vector  ├──────────► │Append Object    │
    │Append Index │            │Update Hash Map  │
    └─────────────┘            └─────────────────┘
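
Rendered as code, the same path looks like the following sketch (the member names handles_, freelist_, and hash_to_index_map_ follow the earlier diagrams; everything else is illustrative):

#include <cstdint>
#include <functional>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

class DedupPoolSketch {
public:
    // Returns a pool index for value, deduplicating identical values.
    std::uint32_t create(const std::string& value) {
        // 1. Hash lookup: existing object? Bump its refcount and reuse it.
        if (auto it = hash_to_index_map_.find(value); it != hash_to_index_map_.end()) {
            ++refcounts_[it->second];
            return it->second;
        }
        // 2. Cache miss: prefer recycling a freed slot (lowest index first).
        std::uint32_t index;
        if (!freelist_.empty()) {
            index = freelist_.top();
            freelist_.pop();
            handles_[index]   = value;
            refcounts_[index] = 1;
        } else {
            // 3. Grow the vector only when no slot can be reused.
            index = static_cast<std::uint32_t>(handles_.size());
            handles_.push_back(value);
            refcounts_.push_back(1);
        }
        hash_to_index_map_.emplace(value, index);
        return index;
    }
    // release() would erase the hash entry and push the index onto the
    // freelist once the refcount reaches zero (omitted for brevity).
private:
    std::vector<std::string>   handles_;
    std::vector<std::uint32_t> refcounts_;
    std::priority_queue<std::uint32_t, std::vector<std::uint32_t>,
                        std::greater<>> freelist_;   // min-heap: compact indices
    std::unordered_map<std::string, std::uint32_t> hash_to_index_map_;
};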

Independence-First Allocation (Mutable types):

Allocation Decision Tree:
┌─────────────────┐
│Container Request│
└─────────┬───────┘
          ▼
    ┌─────────────┐     Yes    ┌─────────────────┐
    │Freelist     ├──────────► │Reuse Index      │
    │Available?   │            │Create Container │
    └─────┬───────┘            └─────────────────┘
          │No
          ▼
    ┌─────────────┐            ┌─────────────────┐
    │Grow Vector  ├──────────► │Append Container │
    │Append Index │            │No Hash Tracking │
    └─────────────┘            └─────────────────┘
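
The mutable-container variant is the same sketch minus the hash map; every call yields a fresh, isolated instance (names again illustrative):

#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

template <typename Container>
class IndependencePoolSketch {
public:
    // Every call produces a new container: no deduplication, no hash tracking.
    std::uint32_t create() {
        if (!freelist_.empty()) {        // reuse freed slots for index compaction
            std::uint32_t index = freelist_.top();
            freelist_.pop();
            handles_[index] = Container{};
            return index;
        }
        handles_.emplace_back();         // grow only when necessary
        return static_cast<std::uint32_t>(handles_.size() - 1);
    }
private:
    std::vector<Container> handles_;
    std::priority_queue<std::uint32_t, std::vector<std::uint32_t>,
                        std::greater<>> freelist_;
};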

Fragmentation Management Philosophy

Fragmentation Tolerance Strategy: The system tolerates significant fragmentation (up to 50%) before triggering expensive compaction operations.

Design Rationale:

  • Compaction cost: O(n) vector reallocation + heap reconstruction
  • Fragmentation cost: Unused nullptr slots in vector
  • Trade-off: Accept memory waste to avoid performance spikes
  • Threshold: Only compact when freelist_size > handles_size / 2
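
The guard itself is a one-line predicate over the stated threshold (a sketch):

#include <cstddef>

// Tolerate up to 50% dead slots; beyond that, pay the O(n) compaction cost.
inline bool should_compact(std::size_t freelist_size, std::size_t handles_size) {
    return freelist_size > handles_size / 2;
}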

Immortal Singleton Lifecycle

Singleton Creation Pattern:

Immortal Singleton Initialization:
┌─────────────────────────────────────────────────────────────────┐
│                Static Initialization Phase                      │
│                                                                 │
│ 1. FloatManagerCompact creates NaN-boxed objects                │
│    └─ create(TRUE_NAN) → Object₁ with refcount=1                │
│    └─ create(FALSE_NAN) → Object₂ with refcount=1               │
│    └─ create(NULL_NAN) → Object₃ with refcount=1                │
│    └─ create(INVALID_NAN) → Object₄ with refcount=1             │
│                                                                 │
│ 2. Cache tokens permanently in static variables                 │
│    └─ TRUE_COMPACT_TOKEN = token₁                               │
│    └─ FALSE_COMPACT_TOKEN = token₂                              │
│    └─ NULL_COMPACT_TOKEN = token₃                               │
│    └─ INVALID_COMPACT_TOKEN = token₄                            │
│                                                                 │
│ 3. Subsequent requests return cached tokens                     │
│    └─ create(true) → return TRUE_COMPACT_TOKEN                  │
│    └─ create(false) → return FALSE_COMPACT_TOKEN                │
│                                                                 │
│ 4. Release operations bypass normal lifecycle                   │
│    └─ release(NULL_TOKEN) → (void)token; // no-op               │
└─────────────────────────────────────────────────────────────────┘

Architectural Consequence: Immortal singletons participate in normal memory pools but with frozen reference counts, ensuring they're never deallocated while maintaining uniform token semantics.
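
A minimal sketch of that pattern, assuming a hypothetical singleton cache (the real initialization sequence is the one diagrammed above):

#include <cstdint>

// Illustrative immortal-token cache: created once, never released.
class ImmortalTokensSketch {
public:
    static std::uint32_t null_token()  { return instance().null_; }
    static std::uint32_t true_token()  { return instance().true_; }
    static std::uint32_t false_token() { return instance().false_; }

    // Release bypasses the normal lifecycle for immortal tokens.
    static void release(std::uint32_t token) {
        if (token == null_token() || token == true_token() || token == false_token())
            return;                       // frozen refcount: deliberate no-op
        // ... normal refcount decrement for ordinary tokens ...
    }
private:
    ImmortalTokensSketch() {
        // Created exactly once at initialization; refcounts never reach zero.
        null_ = next_++; true_ = next_++; false_ = next_++;
    }
    static ImmortalTokensSketch& instance() {
        static ImmortalTokensSketch cache;  // thread-safe init since C++11
        return cache;
    }
    std::uint32_t next_ = 0;
    std::uint32_t null_ = 0, true_ = 0, false_ = 0;
};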

Concurrency Architecture

Manager-Level Synchronization Model

Lock Granularity Philosophy: The system employs coarse-grained manager-level locking rather than fine-grained per-object locking.

Synchronization Strategy:

  • Manager Operations: Protected by std::recursive_mutex
  • Object Access: Caller-responsible synchronization
  • Reference Counting: Lock-free atomic operations

Recursive Mutex Rationale:

  • Nested Operations: Arrays containing Objects containing Strings
  • Reentrancy Safety: Destructor cascades require reentrant access
  • Deadlock Prevention: Single lock hierarchy eliminates circular dependencies
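
A minimal sketch of the problem std::recursive_mutex solves here (names illustrative): under a plain std::mutex, a release that cascades back into the same manager on the same thread would self-deadlock.

#include <cstdint>
#include <mutex>
#include <vector>

class ManagerSketch {
public:
    void release(std::uint32_t token) {
        std::lock_guard<std::recursive_mutex> guard(mutex_);
        // Destroying a container may release child tokens owned by this same
        // manager; with std::mutex this re-entry would deadlock.
        for (std::uint32_t child : children_of(token))
            release(child);               // same thread re-acquires safely
        // ... mark the slot free, push the index onto the freelist ...
    }
private:
    std::vector<std::uint32_t> children_of(std::uint32_t) { return {}; }  // stub
    std::recursive_mutex mutex_;
};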

Thread Safety Boundaries

Manager Boundary: Thread-safe

  • Handle creation, lookup, reference counting
  • Memory pool management (freelist, hash maps)
  • Object lifecycle coordination

Object Boundary: Application-responsible

  • Mutable container modifications (array.push_back, object.set)
  • Concurrent read/write operations on same container
  • Iterator invalidation scenarios

Reference Counting Boundary: Always thread-safe

  • Atomic operations with relaxed memory ordering
  • Independent of manager synchronization
  • Safe across all object types
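
A sketch of this counting pattern. One nuance worth stating: relaxed ordering suffices for increments, but the final decrement that permits destruction conventionally uses acquire-release so prior writes are visible to the destroying thread; the sketch follows that common convention rather than claiming it is the library's exact code.

#include <atomic>
#include <cstdint>

struct RefCountedSketch {
    std::atomic<std::uint32_t> refcount{1};

    void retain() {
        // Relaxed suffices: taking an extra reference needs no ordering.
        refcount.fetch_add(1, std::memory_order_relaxed);
    }

    bool release() {
        // Acquire-release on the last decrement publishes prior writes to
        // whichever thread ends up destroying the object. Returns true when
        // the caller held the final reference and may destroy the object.
        return refcount.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }
};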

Reentrancy Safety Architecture

Cascading Destruction Problem: Object destruction can trigger recursive manager operations on the same thread:

Reentrancy Scenario Chain:
Array Destructor → Object Elements Released → String Keys Released
     │                       │                        │
     ▼                       ▼                        ▼
ArrayManager::release() → ObjectManager::release() → StringManager::release()
(Same Thread)            (Same Thread)              (Same Thread)

Reentrancy Protection Strategy:

  • State Snapshotting: Calculate compaction parameters before triggering destruction
  • Deferred Validation: Re-check conditions after potential recursive modifications
  • Atomic State Updates: Minimize critical section duration
  • Recursive Mutex: Enable same-thread reentrant access
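
Put together, a compaction routine following this strategy might look like the sketch below (the destruction and rebuild steps are left as hypothetical calls):

#include <cstddef>
#include <mutex>

// Illustrative reentrancy-safe compaction: snapshot first, re-validate after
// destruction may have re-entered the manager.
void compact_if_needed_sketch(std::recursive_mutex& mutex,
                              std::size_t& freelist_size,
                              std::size_t& handles_size) {
    std::lock_guard<std::recursive_mutex> guard(mutex);

    // 1. State snapshot: decide before triggering any destruction.
    std::size_t snapshot_free  = freelist_size;
    std::size_t snapshot_total = handles_size;
    if (snapshot_free <= snapshot_total / 2) return;

    // 2. Destroying dead objects may recurse into this same manager and
    //    change the counters read above.
    // destroy_dead_objects();   // hypothetical: may re-enter release()

    // 3. Deferred validation: re-check before committing to the O(n) rebuild.
    if (freelist_size > handles_size / 2) {
        // rebuild_vector_and_heap();   // hypothetical O(n) compaction
    }
}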

Operational Characteristics

Memory Pressure Response

Adaptive Memory Management: The system responds to memory pressure through graduated intervention strategies:

Level 1: Normal fragmentation tolerance (0-50% waste)

  • Continue normal operations
  • Accept memory overhead for performance

Level 2: Aggressive compaction (>50% waste)

  • Trigger O(n) vector compaction
  • Reclaim significant memory at performance cost

Level 3: System-wide compaction (external trigger)

  • Compact all managers simultaneously
  • Release maximum possible memory
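
A sketch of how an external Level 3 trigger might fan out across the registry (the interface and function names are illustrative, not the library's API):

#include <vector>

// Hypothetical per-manager compaction entry point.
struct CompactableSketch {
    virtual void compact() = 0;
    virtual ~CompactableSketch() = default;
};

// Level 3: an external trigger compacts every manager at once.
void compact_all_sketch(const std::vector<CompactableSketch*>& managers) {
    for (CompactableSketch* manager : managers)
        manager->compact();     // each manager pays its own O(n) cost
}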

Performance Scaling Characteristics

Handle Access Scaling: O(1) algorithmic complexity but significant constant factors

  • Token decoding: Multiple function calls and conditional branches
  • Manager lookup: Registry access + singleton pattern + virtual dispatch
  • Object access: Mutex acquisition + array indexing + tagged pointer extraction

Measured Handle Access Cost (from the source tree's benchmarks):

Benchmark Results (10M iterations):
┌─────────────────────────────────────────────────────────────────┐
│ Operation                    │ Time     │ Cycles  │ Overhead    │
├─────────────────────────────────────────────────────────────────┤
│ Direct pointer dereference   │  1.01 ns │  ~3.0   │ (baseline)  │
│ get_type() call              │  1.80 ns │  ~5.4   │ +2.4 cycles │
│ get_view() call              │ 10.67 ns │ ~32.0   │ +29.0 cycles│
│ Full chain (type + view)     │ 13.02 ns │ ~39.1   │ +36.0 cycles│
│ Handle copy (retain)         │ 19.99 ns │ ~60.0   │ +57.0 cycles│
└─────────────────────────────────────────────────────────────────┘

Performance Analysis:
- get_view(): ~11x slower than direct pointer access
- get_type(): ~2x slower than direct pointer access  
- Handle copy: ~20x slower due to atomic operations

Reality Check: Handle access has measurable overhead (~32 cycles vs ~3 cycles for direct pointers), but this cost is acceptable in memory-bounded scenarios where the 50% memory reduction prevents OOM conditions.

Memory Pool Scaling: Different patterns per pool type

  • Deduplication pools: Memory growth sub-linear with handle count
  • Independence pools: Memory growth linear with container count
  • Hash table performance: O(1) average, O(n) worst case

Fragmentation Impact: Memory overhead increases with object churn

  • Low churn: Minimal fragmentation, optimal memory density
  • High churn: Significant fragmentation, periodic compaction overhead
  • Steady state: Fragmentation stabilizes around threshold

Design Rationale

Why Facade Over Direct Managers?

Token Space Conservation: Exposing separate Int and Float managers would require additional type encoding bits, reducing available address space.

Interface Simplicity: Applications interact with logical "scalar" concept rather than implementation details (NaN-boxing, IEEE 754 specifics).

Optimization Encapsulation: Internal managers can use type-specific optimizations without affecting public interface.

Why Immortal Singletons?

Memory Explosion Prevention: In JSON documents with millions of boolean/null values, per-instance allocation would exhaust memory long before any computational limit is reached.

Cache Efficiency: Singleton tokens enable branch-free lookup through static arrays rather than conditional logic.

Lifecycle Simplification: Immortal resources eliminate complex destruction ordering and circular reference issues.

Why Manager-Level Locking?

Memory Overhead Minimization: Per-object locks would add 8+ bytes per object—unacceptable overhead for billions of small JSON values.

Contention Amortization: Manager-level locks amortize synchronization cost across multiple operations.

Reasoning Simplicity: Single lock hierarchy eliminates complex deadlock scenarios and ordering requirements.

Contribution Opportunity: If you can design a lock-free algorithm with demonstrably better performance characteristics, pull requests are welcome.

Summary

The handle system architecture demonstrates sophisticated memory-first engineering that challenges conventional performance wisdom:

  1. Strategic Indirection: Accept CPU overhead to achieve memory density
  2. Facade Encapsulation: Hide implementation complexity while preserving optimization opportunities
  3. Immortal Resource Management: Treat ubiquitous values as system resources, not managed objects
  4. Differentiated Pool Strategies: Optimize for immutable sharing vs mutable independence
  5. Adaptive Compaction: Balance memory density against performance stability
  6. Manager-Level Synchronization: Minimize lock overhead while ensuring thread safety

The architecture enables processing JSON datasets that would otherwise exceed available memory while maintaining the concurrency and performance characteristics required for production systems.