Overview
The Princeton Engineering Anomalies Research (PEAR) laboratory ran from 1979 to 2007 at Princeton University. Over 28 years, the lab collected millions of experimental trials testing whether human consciousness can influence the behavior of random physical systems. This project reproduces those experiments using modern hardware and statistical methods.
Background
PEAR's researchers reported that human operators could produce small but statistically detectable deviations in random event generator (REG) outputs through conscious intention. The lab published these findings across dozens of peer-reviewed papers. Independent replication has proven inconsistent, making modern reproduction with better tooling a priority.
The original experiments used electronic noise diodes and other hardware random event generators available in the 1980s and 1990s. Our reproductions use contemporary entropy sources: hardware QRNGs based on photonic quantum noise, plus dozens of unconventional entropy sources extracted from modern computing hardware. We pair these with automated collection, calibration, and statistical analysis.
Apparatus
Our experimental apparatus is built on OpenEntropy, a Rust-based entropy harvesting system that extracts raw, unconditioned randomness from 63 physical noise sources inside computers. Unlike standard random number APIs that post-process output through deterministic algorithms (DRBGs), OpenEntropy preserves the actual hardware signal in raw mode. This matters because detecting the micro-biases that PEAR's work hypothesized requires an unfiltered signal.
Entropy Sources
OpenEntropy's 63 sources span 13 categories of physical noise. Representative categories include:
- Timing. Clock jitter from PLL phase noise, DRAM row buffer timing, page fault latency, kernel clock domain crossings.
- Microarchitecture. Branch predictor state, TLB shootdowns, DVFS frequency races, atomic contention, cache timing.
- Thermal. Audio PLL crystal oscillator beats, display PLL phase noise, PCIe PHY jitter, dual clock domain interference patterns.
- GPU. Metal shader thread divergence, CPU-GPU memory domain crossings, Neural Engine inference timing.
- Sensor. Microphone ADC thermal noise, camera sensor dark current, Bluetooth RF environment, SMC thermistor readings.
- Quantum. QCicada USB QRNG (photonic shot noise via Crypta Labs hardware).
For PEAR reproductions, the QCicada photonic QRNG serves as the primary source, providing quantum randomness from photonic shot noise at a beam splitter. The other 62 sources provide cross-correlation baselines and independent control channels.
Raw Mode
Most QRNG APIs expose only post-DRBG output. The raw hardware signal is destroyed by SHA-256 conditioning before the user sees it. OpenEntropy's raw mode bypasses all conditioning, delivering XOR-combined bytes directly from the hardware. Here is why that matters: if conscious intention produces a micro-bias in the physical noise, conditioning would erase it. Raw mode lets us observe the actual signal.
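The XOR combination mentioned above can be sketched in a few lines. This is an illustrative stand-in, not OpenEntropy's actual Rust implementation: the buffers here simulate reads from separate hardware sources, and the function name is hypothetical.

```python
def xor_combine(buffers: list[bytes]) -> bytes:
    """XOR equal-position bytes from several source buffers.

    XOR preserves any bias present in the inputs far better than
    hashing: it is a linear operation, so a micro-bias in one
    source is folded into the output rather than destroyed.
    """
    n = min(len(b) for b in buffers)  # truncate to the shortest read
    out = bytearray(n)
    for buf in buffers:
        for i in range(n):
            out[i] ^= buf[i]
    return bytes(out)

# Three simulated 4-byte reads from independent sources:
combined = xor_combine([b"\x0f\x00\xff\x10", b"\xf0\x00\x0f\x01", b"\x00\xff\x00\x00"])
```

Identical buffers cancel to zero under XOR, which is exactly why cross-correlated sources must be identified before combining (see the cross-correlation tests below).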
Three output modes are available:
- Raw. Unconditioned bytes for research and signal analysis.
- Von Neumann. Debiasing only. Removes first-order bias while preserving higher-order structure.
- SHA-256. Full cryptographic conditioning for control trials where signal preservation is not needed.
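The Von Neumann mode implements the classic extractor, which is simple enough to show in full. A minimal sketch (function name is illustrative):

```python
def von_neumann(bits: list[int]) -> list[int]:
    """Von Neumann extractor: scan non-overlapping bit pairs,
    emit 0 for (0,1) and 1 for (1,0), discard (0,0) and (1,1).

    For independent bits with any fixed bias p, both emitted
    outcomes occur with probability p*(1-p), so first-order bias
    is removed; correlations between pairs are left untouched.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:          # discordant pair: keep the first bit
            out.append(a)
    return out

debiased = von_neumann([0, 1, 1, 1, 1, 0, 0, 0])  # pairs: 01, 11, 10, 00
```

Note the cost: at best half the pairs survive, so raw throughput drops by at least 4x, which is one reason raw mode remains the default for signal analysis.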
Calibration Gate
Before any recording session, OpenEntropy enforces calibration checks on the entropy source:
- Terminal z-score within ±2.0 (no pre-existing strong bias)
- Bit-level bias below 0.005 (balance between 0s and 1s)
- Shannon entropy above 7.9 bits per byte (minimum quality threshold)
- Z-score standard deviation between 0.85 and 1.15 (distribution stability)
Recording is blocked if any check fails. This prevents weak or degraded sources from confounding experimental results, a control that PEAR's original apparatus lacked.
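The four gate checks can be recomputed independently from a raw byte buffer. The sketch below mirrors the documented thresholds; the function name, signature, and return shape are illustrative, not OpenEntropy's API:

```python
import math
from collections import Counter

def calibration_gate(data: bytes, bits_per_trial: int = 200) -> dict:
    """Re-derive the four calibration checks on a raw buffer."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    n = len(bits)
    ones = sum(bits)

    # 1. Terminal z-score of the whole buffer within +/-2.0.
    terminal_z = (ones - n / 2) / math.sqrt(n / 4)

    # 2. Bit-level bias (|P(1) - 0.5|) below 0.005.
    bias = abs(ones / n - 0.5)

    # 3. Shannon entropy above 7.9 bits per byte.
    counts = Counter(data)
    entropy = -sum((c / len(data)) * math.log2(c / len(data))
                   for c in counts.values())

    # 4. Std dev of per-trial z-scores between 0.85 and 1.15.
    trials = [bits[i:i + bits_per_trial]
              for i in range(0, n - bits_per_trial + 1, bits_per_trial)]
    zs = [(sum(t) - len(t) / 2) / math.sqrt(len(t) / 4) for t in trials]
    mean_z = sum(zs) / len(zs)
    z_std = math.sqrt(sum((z - mean_z) ** 2 for z in zs) / len(zs))

    return {
        "terminal_z": abs(terminal_z) <= 2.0,
        "bias": bias < 0.005,
        "entropy": entropy > 7.9,
        "z_std": 0.85 <= z_std <= 1.15,
    }

# A perfectly alternating pattern is unbiased but has no entropy
# and no trial-to-trial variance, so only the first two checks pass.
gate = calibration_gate(b"\x55" * 1000)
```

The fourth check is the subtle one: a source can be perfectly balanced yet too regular (z-score spread collapses below 0.85) or too noisy (spread above 1.15), and either failure blocks recording.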
Trial Methodology
OpenEntropy implements the PEAR trial analysis methodology directly. Let's break it down:
- Trial slicing. Raw entropy is divided into fixed-length trials (default: 200 bits per trial, configurable). Each trial represents one discrete measurement unit.
- Per-trial statistics. For each trial, the system computes the count of 1-bits, the z-score deviation from the expected mean (N/2), and the running cumulative deviation across trials.
- Terminal z-score. After all trials, the cumulative deviation is normalized to produce a terminal z-score: Z = cumulative_deviation / sqrt(num_trials * N/4). This is the primary outcome metric, the same statistic PEAR reported across their 28 years of data.
- Effect size. Computed as terminal_z / sqrt(num_trials). This allows comparison across sessions with different trial counts and direct comparison to PEAR's published effect sizes.
- Multi-session composition. Stouffer's method combines z-scores across multiple recording sessions with sqrt(N) weighting: Z_combined = sum(w_i * Z_i) / sqrt(sum(w_i^2)). This supports meta-analysis across days, weeks, or months of recording.
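The whole pipeline is small enough to state as code. This is a reference sketch of the statistics described above, not OpenEntropy's actual implementation; the function names are illustrative:

```python
import math

def analyze_trials(bits: list[int], n: int = 200):
    """Slice bits into N-bit trials and compute PEAR-style statistics."""
    trials = [bits[i:i + n] for i in range(0, len(bits) - n + 1, n)]
    num_trials = len(trials)

    # Per-trial 1-bit counts and z-scores against the expected mean N/2.
    counts = [sum(t) for t in trials]
    per_trial_z = [(c - n / 2) / math.sqrt(n / 4) for c in counts]

    # Cumulative deviation, normalized into the terminal z-score.
    cumulative_deviation = sum(c - n / 2 for c in counts)
    terminal_z = cumulative_deviation / math.sqrt(num_trials * n / 4)

    # Per-trial effect size: comparable across sessions of any length.
    effect_size = terminal_z / math.sqrt(num_trials)
    return terminal_z, effect_size, per_trial_z

def stouffer(zs: list[float], trial_counts: list[int]) -> float:
    """Stouffer combination with sqrt(N) session weights."""
    ws = [math.sqrt(n) for n in trial_counts]
    return sum(w * z for w, z in zip(ws, zs)) / math.sqrt(sum(w * w for w in ws))

# Balanced input: every 200-bit trial has exactly 100 ones.
tz, es, _ = analyze_trials([1, 0] * 500)
```

Note how the effect size definition makes the arithmetic consistent: two sessions of equal length that each land at z = 1 combine under Stouffer's method to sqrt(2), not 2, reflecting the sqrt growth of a real signal against accumulating noise.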
Statistical Validation
Each recording session undergoes analysis with a battery of 31 statistical tests derived from NIST SP 800-22, covering:
- Frequency and runs tests for randomness
- Serial and spectral analysis for pattern detection
- Cross-correlation between entropy sources to identify dependent pairs
- Stationarity testing to detect drift over time
- Autocorrelation profiling across multiple lags
- Bit-by-bit bias analysis (deviation from 0.5 per position)
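As a concrete instance of the battery, the first NIST SP 800-22 test (frequency, or monobit) fits in a few lines. This follows the published test definition; only the function name is ours:

```python
import math

def monobit_test(bits: list[int]) -> float:
    """NIST SP 800-22 frequency (monobit) test.

    Map bits to +/-1, sum, normalize by sqrt(n), and return the
    two-sided p-value via the complementary error function.
    A p-value below the chosen alpha (NIST uses 0.01) rejects
    the hypothesis that the sequence is balanced.
    """
    s = sum(2 * b - 1 for b in bits)          # 0 -> -1, 1 -> +1
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

p_balanced = monobit_test([0, 1] * 500)  # perfectly balanced sequence
p_skewed = monobit_test([1] * 100)       # all ones
```

The remaining tests in the battery follow the same pattern: a test statistic with a known null distribution, reduced to a p-value per session.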
Optional system telemetry (CPU load, thermal state, power mode) is recorded alongside entropy data to control for hardware state changes during experimental sessions.
Extension to LLM Inference
Beyond classical REG experiments, we extend the PEAR paradigm into large language model inference. A custom vLLM plugin substitutes QRNG entropy into the token sampling pipeline, allowing us to test whether intention effects observed in simple random generators also show up in high-dimensional AI decision processes. The plugin uses z-score signal amplification over thousands of random samples per token to surface micro-biases that would be invisible in raw byte streams. This connects directly to PEAR's core finding: small but consistent statistical anomalies accumulate across large trial counts.
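The amplification argument rests on simple scaling arithmetic, sketched below under the assumption of independent bits with a fixed per-bit bias (the function names are ours, not the plugin's API):

```python
import math

def expected_z(eps: float, k: int) -> float:
    """Expected z-score after k independent bits with P(1) = 0.5 + eps.

    The mean shifts by k*eps while the standard deviation grows
    only as sqrt(k)/2, so the z-score grows as 2*eps*sqrt(k):
    a bias invisible in any single draw becomes detectable given
    enough samples per token.
    """
    return 2 * eps * math.sqrt(k)

def samples_needed(eps: float, z_target: float = 2.0) -> int:
    """Samples per token required to push a bias of eps to z_target."""
    return math.ceil((z_target / (2 * eps)) ** 2)
```

This is the same sqrt(k) accumulation that underlies the terminal z-score in the REG analysis, applied per token instead of per session.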
Experimental Protocol
Sessions follow a three-condition design:
- Intention (high). Participant directs conscious intention toward increasing the bit count above the expected mean (more 1s than 0s).
- Intention (low). Participant directs conscious intention toward decreasing the bit count below the expected mean (more 0s than 1s).
- Baseline. No intention directive. Participant is present but does not attempt to influence the output.
Each session begins with calibration, runs a fixed number of trials across all three conditions (counterbalanced order), and ends with a full statistical analysis pass. The system timestamps and logs all data for reproducibility.
Status
Hardware and software infrastructure is operational. The QCicada QRNG and OpenEntropy platform are collecting data. The LLM sampling plugin is built. We are developing classical REG reproduction protocols.
Sources
- Jahn, R.G. & Dunne, B.J. "Margins of Reality: The Role of Consciousness in the Physical World." Harcourt Brace Jovanovich, 1987.
- Jahn, R.G. et al. "Correlations of Random Binary Sequences with Pre-Stated Operator Intention." Journal of Scientific Exploration, Vol. 11, No. 3, 1997. https://www.scientificexploration.org/docs/11/jse_11_3_jahn.pdf
- PEAR Lab. "PEAR Proposition." Princeton University, 2007. https://pear-lab.com
- National Institute of Standards and Technology. "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications." NIST SP 800-22 Rev. 1a, 2010.