Open Access Article

Charlie Maslen,a Luke Nicholson,a and Juliane Simmchen*ab

aUniversity of Strathclyde, 295 Cathedral Street, Glasgow G1 1XL, UK. E-mail: juliane.simmchen@strath.ac.uk
bTechnische Universität Dresden, Zellescher Weg 19, 01069 Dresden, Germany

First published on 30th October 2025
We demonstrate a physical implementation of Monte Carlo sampling using the Brownian motion of microscopic rods, applied to the classical Buffon's needle experiment. In this way, a problem in geometric probability is mapped onto a Monte Carlo method, with a physical system performing key aspects of the computation. The experiment's parameters are embedded directly: the rods' length encodes the probability integral, while their thermal motion supplies the sampling. Although only a toy-model system, this approach illustrates how embedding probabilistic structure into soft matter can provide a low-energy pathway for stochastic computation that exploits freely available thermal noise.
Quantum computing in particular has attracted broad scientific interest,4,5 been recognised with a very recent Nobel Prize and received a high volume of investment capital.6,7 Alternative forms of computing based on biological8,9 or chemical properties10 intrinsic to the systems have drawn attention over time. Colloidal and active matter systems are increasingly attracting interest for computing applications: colloidal solutions have been used in a number of publications as a physical reservoir for reservoir computing.11–13 The first realisation of a physical reservoir computer using self-propelled active microparticles was based on a nonlinear dynamical system arising from time delays in retarded interactions.14 Immense potential is also ascribed to the use of inherent material properties to embody intelligence.15
Randomness plays a central role in modern computing. From secure cryptographic protocols and gambling systems to simulations and stochastic algorithms, the need for high-quality random number generators (RNGs) has only grown with the increasing complexity of computational tasks.16–19 However, generating randomness is not without cost. Whether it is achieved via deterministic, classical computer-based pseudo-RNGs or specialised, hardware-based true RNGs, producing randomness in digital systems incurs significant energy costs – through both computational cycles and memory operations.20,21
A notable application of RNGs is found in Monte Carlo methods – a class of stochastic sampling techniques widely used to solve problems in numerical integration, physics, finance, and machine learning.22–24 Monte Carlo methods rely, fundamentally, on the generation of random samples – the quality and quantity of which can significantly impact the accuracy and efficiency of the computation. As digital technology continues to progress, especially in the field of machine learning, the energy cost of computational tasks is coming under close scrutiny and alternative paradigms of computing are now under exploration.25,26 Within this scope, alternative, low-energy forms of random number generation demand investigation.
Thermal noise – the microscopic energy fluctuations caused by the thermal motion of particles – is a ubiquitous and naturally occurring form of randomness. It is a direct consequence of the statistical nature of many-body thermodynamic systems, in which the kinetic energies of bodies follow the Boltzmann distribution.27 In classical computing, where results should be deterministic and non-probabilistic, thermal fluctuations are suppressed by averaging them out. Conversely, in quantum computing they are suppressed by holding systems at low temperature in order to maintain quantum coherence.28 Given that thermal noise is freely available at ambient temperature, a question arises: instead of suppression, can thermal fluctuations be leveraged as a computational resource? This question has motivated research into exploiting thermal noise for stochastic computing.29–31
A readily observable physical manifestation of ambient thermal fluctuations is Brownian motion. First described by Robert Brown in 1827 and explained by Einstein in 1905,† Brownian motion is the erratic movement of microscopic particles suspended in a fluid, influenced by the rheological properties and density of the surrounding medium. The erratic motion of the micron-scale objects is driven by collisions with the surrounding molecules, which in turn are driven by ambient thermal noise.33,34 Importantly, while the underlying physics is deterministic, the macroscopic behaviour is effectively unpredictable and thus serves as a potent physical source of randomness.35
In this work, we present an experimental system of microscopic rods in an aqueous medium which exhibit Brownian motion and diffuse over a surface digitally patterned with evenly spaced lines, creating a dynamic realisation of the classical Buffon needle experiment. In the experiment, the probability that a randomly dropped needle intersects a set of parallel lines depends on the geometry of the system and the value of π (Fig. 1).36,37 Buffon demonstrated mathematically that by knowing both the rod length and the separation of the parallel lines, π could be estimated by simply counting the number of scattered needles and the number of needle–line crossings. Specifically, by setting the line separation d = 2l, the probability of crossing is 1/π. Thus, π can be estimated by counting the total number of needles and dividing by the number of crossings. The accuracy of the estimation can be increased by re-scattering the needles and increasing the sample count, N. First posed as a question in geometric probability, it embodies the same statistical principles as Monte Carlo methods: drawing repeated random samples from a well-defined probability space, recording the occurrence of a specific event, and using the observed event frequency to estimate an underlying constant or integral.38,39 In our system, re-scattering of the needles, i.e., resampling, is realised entirely by thermal fluctuations in a fluidic environment, driving the rods into new, randomised positions and angles. Through this conceptual toy model, we hope to demonstrate Brownian motion as a naturally occurring, zero-energy-input source of the stochasticity required for solving probability integrals.
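The underlying probability integral can be written out explicitly. For a needle of length l on lines spaced d ≥ l apart, with the centre-to-nearest-line distance uniform on [0, d/2] and the angle θ to the lines uniform on [0, π/2], the crossing probability is

```latex
P(\text{cross})
  = \int_0^{\pi/2} \frac{2}{\pi}\,\frac{(l/2)\sin\theta}{d/2}\,\mathrm{d}\theta
  = \frac{2l}{\pi d}
  \;\xrightarrow{\;d\,=\,2l\;}\; \frac{1}{\pi},
```

so that counting C crossings among N needles gives the estimator π ≈ N/C.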
Similar uses of thermodynamic noise to generate ‘high-quality’ random numbers have been demonstrated previously,40–42 and Brownian motion of colloids for RNG has been demonstrated by others.35,43 We do not aim to compete with state-of-the-art physical RNGs in the quality or rate of random numbers generated. Rather, we demonstrate a conceptual proof-of-principle by using inherent properties of the physical system to directly solve Monte Carlo integrals. Specifically, the rod-shaped geometry of the particles naturally embodies the variables of the Buffon needle experiment.
Using adapted Broersma relations46,47 we estimate decorrelation times τ from the rotational diffusion coefficients Drot for rods of length l and diameter w (subscripts denote l, w in μm):

τ1,0.2 ∼ 0.1 s, τ3,0.5 ∼ 2 s, τ15,2.5 ∼ 300 s
These estimates clearly indicate the importance of particle size for the resampling rate, with decorrelation times spanning over three orders of magnitude. They suggest that the larger SiO2 rods would be unsuitable for this experimental work, let alone for a practical random-number generator.
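The scaling of these estimates can be sketched numerically. The snippet below uses the Tirado–García de la Torre end-corrected form for rod rotational diffusion as a stand-in for the adapted Broersma relations of the text, with assumed ambient conditions (water at 298 K); the exact prefactors differ from the paper's, so only orders of magnitude should be compared.

```python
import math

def rot_diffusion(l, w, T=298.0, eta=1.0e-3):
    """Rotational diffusion coefficient (rad^2/s) of a rigid rod of length l
    and diameter w (m), via the Tirado-Garcia de la Torre end-corrected form
    (an approximation standing in for the adapted Broersma relations)."""
    kB = 1.380649e-23                        # Boltzmann constant, J/K
    p = l / w                                # aspect ratio
    delta = 0.662 + 0.917/p - 0.050/p**2     # end-effect correction
    return 3*kB*T*(math.log(p) - delta) / (math.pi*eta*l**3)

def decorrelation_time(l, w):
    """Orientation decorrelation time, ~1/(2*D_rot) for 3D rotational diffusion."""
    return 1.0 / (2.0*rot_diffusion(l, w))

for l, w in [(1e-6, 0.2e-6), (3e-6, 0.5e-6), (15e-6, 2.5e-6)]:
    print(f"l = {l*1e6:.0f} um: tau ~ {decorrelation_time(l, w):.2g} s")
```

The cubic dependence on rod length dominates: scaling l and w together by 5× slows reorientation by over three orders of magnitude, which is why the 15 μm rods are impractical as a resampling source.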
After dispersing the microrods in a dilute surfactant solution on a plasma-cleaned glass slide to minimise aggregation and sticking to the surface, the rods' behaviour was observed by optical microscopy. It was found that both the large and small SiO2 rods were unsuitable for analysis. The large rods, as predicted, showed limited diffusion (D ∼ 10−14 m2 s−1). Conversely, the small SiO2 rods were highly dynamic, translating more than a body length within tens of milliseconds. However, their buoyant mass was sufficiently low that they showed off-plane angular diffusion, drifting in and out of focus with a changing projected shape (SI). This meant they could not be reliably tracked and were unsuitable for the strictly two-dimensional Buffon needle experiment. Balancing the need for resolvable planar motion with a sufficient reorientation rate, we selected 3 μm × 0.5 μm ZnO rods as the optimal compromise for further experiments. The ZnO rods showed clear Brownian motion both translationally and rotationally, and with all rotation being planar they were suitable for performing Buffon's test.
Measurements of their angle over time display a decorrelation time (autocorrelation function (ACF) < 1/e) of 0.81 s with a deviation of ±0.34 s (Fig. 2C) – slightly lower than the Broersma-estimated value, which could be due to electrostatic effects or low-level photocatalytic activity of the rods, both of which are neglected in the Broersma estimation. Measurements of the mean-square displacements of the rods provide a measure of the translational diffusion coefficient: Dtrans = 16.9 μm2 s−1. This means that within 1 s each rod can reposition a body length (3 μm) away from its initial position. Thus, every 2 s the angle and position of the rods can be assumed to have been sufficiently randomised – re-scattered, in the analogy to Buffon's needle.
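The decorrelation time quoted above is the first lag at which the orientation ACF falls below 1/e. A minimal sketch of that measurement – not the paper's analysis code, and with an illustrative simulated trajectory whose parameters (D_rot, dt) are assumed:

```python
import numpy as np

def orientation_acf(theta, max_lag):
    """ACF of a planar orientation series: C(k) = <cos(theta[t+k] - theta[t])>."""
    return np.array([np.mean(np.cos(theta[k:] - theta[:-k])) if k else 1.0
                     for k in range(max_lag)])

def decorrelation_time(theta, dt, max_lag=200):
    """First lag (in seconds) at which the orientation ACF drops below 1/e."""
    acf = orientation_acf(theta, max_lag)
    below = np.flatnonzero(acf < 1/np.e)
    return below[0]*dt if below.size else np.inf

# Self-check on simulated planar rotational diffusion: theta performs a random
# walk with variance 2*D_rot*dt per step, for which C(t) = exp(-D_rot*t),
# i.e. the 1/e decorrelation time is 1/D_rot.
rng = np.random.default_rng(1)
D_rot, dt = 0.8, 0.05                      # rad^2/s and s, assumed values
theta = np.cumsum(rng.normal(0, np.sqrt(2*D_rot*dt), 200_000))
tau = decorrelation_time(theta, dt)
print(f"tau ~ {tau:.2f} s (theory: {1/D_rot:.2f} s)")
```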
ZnO rods were dispersed as described and images were recorded at 2 s intervals for just over 17 h (31 058 frames in total). The resulting video was binarised and processed using a custom OpenCV platform (SI). A set of parallel lines, separated by a distance twice the median rod length, was digitally imposed on the images, and in each frame every rod as well as every rod–line crossing was counted. For each frame, a π estimate was computed from the rods and crossings in that frame alone, as well as from the cumulative rod and crossing counts.
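In terms of the per-frame counts, this bookkeeping is simple. A minimal sketch of the estimator (not the actual OpenCV pipeline; the function name and the example counts are illustrative):

```python
import numpy as np

def buffon_pi(rods_per_frame, crossings_per_frame):
    """Frame-wise and cumulative Buffon estimates pi ~ N/C, valid when the
    line spacing is twice the rod length so that P(crossing) = 1/pi."""
    rods = np.asarray(rods_per_frame, dtype=float)
    cross = np.asarray(crossings_per_frame, dtype=float)
    # frame-wise estimate, NaN where a frame recorded no crossings
    framewise = np.divide(rods, cross,
                          out=np.full_like(rods, np.nan), where=cross > 0)
    cumulative = np.cumsum(rods) / np.cumsum(cross)
    return framewise, cumulative

# e.g. three frames with 100 rods each and ~1/pi of them crossing a line:
fw, cum = buffon_pi([100, 100, 100], [31, 33, 32])
print(fw, cum)
```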
The cumulative estimate converges steadily towards π as samples accumulate (Fig. 3C and D). This directly mirrors the statistical efficiency of digital Monte Carlo sampling, confirming that passive Brownian sampling can, in principle, achieve the same asymptotic accuracy without consuming energy in re-sampling cycles. To quantify uncertainty in the running estimate of π, we computed 95% Bayesian credible intervals from the cumulative rod and crossing counts, treating crossings as a Poisson-distributed variable. The intervals narrow steadily over time, reflecting the increased statistical confidence as more data accumulates.
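As a rough analytical cross-check of such intervals, one can propagate the Poisson spread of the crossing count directly; the sketch below uses a normal approximation to the Poisson rather than the exact Bayesian credible interval computed in the paper, and the final counts are those quoted later in the text.

```python
import math

def pi_interval(n_rods, n_cross, z=1.96):
    """Approximate 95% interval for pi = N/C, treating the crossing count C
    as Poisson so that sd(C) ~ sqrt(C). Normal approximation, not the exact
    Bayesian credible interval."""
    sd = math.sqrt(n_cross)
    return n_rods/(n_cross + z*sd), n_rods/(n_cross - z*sd)

lo, hi = pi_interval(2_820_336, 898_483)   # final counts from the experiment
print(f"pi in [{lo:.4f}, {hi:.4f}]")
```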
To explore whether the convergence behaviour depended on the temporal extent of the experiment, we re-analysed the same video starting from various frame indices (5000, 10 000, 15 000, 20 000 and 25 000) (SI). In all cases, the cumulative estimates converge to within π ± 0.02, suggesting the long-term convergence is robust. For example, over the original dataset the estimate follows an increasing trend, which could be interpreted as a physical effect – such as the rod length gradually decreasing over time due to dissolution. However, when the analysis is restricted to a subset of the data beginning at later frames, the trend is predominantly decreasing. This suggests that trends in the cumulative estimate may result from statistical fluctuations, rather than underlying changes in the physical system.
At the conclusion of the experiment, a total of 2 820 336 rods and 898 483 crossings had been detected, yielding an estimate π̂ = 3.1390 with an absolute error ε = |π̂ − π| = 2.6 × 10−3.
Addressing the statistical uncertainty in our system, we obtain π̂ = 3.1390 ± 0.0053 (stat.). The measured rod lengths, ⟨l⟩ = 3.11 ± 0.20 μm, introduce a systematic uncertainty δπ(syst.) = 0.21, giving an overall estimate π̂ = 3.14 ± 0.21.
This reflects the limitations of a physical system – specifically, the polydispersity of the colloidal rods. Nonetheless, the much lower statistical uncertainty, which decreases with increasing sample size, indicates that our system behaves as a Monte Carlo solver. A more experimentally precise set-up (e.g., using a higher-resolution camera) would exhibit the same statistical behaviour.
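The quoted systematic uncertainty is consistent with simple linear error propagation: since the estimator scales linearly with the rod length (π̂ ∝ l for a fixed line spacing), the relative length spread maps directly onto π̂. The paper does not state this propagation rule explicitly, so the following is an assumed cross-check using the quoted numbers.

```python
pi_hat = 3.1390            # statistical estimate from the text
l_mean, l_sd = 3.11, 0.20  # measured rod lengths, um

# pi_hat ~ 2*l*N/(d*C) with d fixed, so relative length uncertainty
# propagates directly into the estimate:
dpi_syst = pi_hat * l_sd / l_mean
print(f"dpi(syst.) ~ {dpi_syst:.2f}")
```

This reproduces the quoted δπ(syst.) ≈ 0.21 to within rounding.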
At this total rod count, the number of crossings required to produce the estimate closest to π would be 897 741, resulting in ε = 5.94 × 10−7. This means that at the conclusion of the measurement, there was a deviation of 742 crossings from the ideal behaviour. As would be expected, the relative deviation in crossings (in this case 742/898 483 ≈ 8 × 10−4) sets the scale of the loss of accuracy (in this case ε growing from ∼10−7 to ∼10−3).
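The ideal crossing count can be recovered by direct search over integer counts near the observed value – a small illustrative check, not part of the experimental pipeline:

```python
import math

N, C = 2_820_336, 898_483   # final rod and crossing counts from the text

# Integer crossing count whose estimate N/c lies closest to pi:
C_best = min(range(C - 2000, C + 2000), key=lambda c: abs(N/c - math.pi))
deviation = C - C_best
print(f"ideal crossings: {C_best}, deviation: {deviation}")
```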
The closest approach to the true value of π occurred at frame 28 388, at which point 2 580 938 rods and 821 538 crossings had been recorded. The resulting estimate, π̂ = 2 580 938/821 538 ≈ 3.141593, was the closest possible value at this rod count and corresponds to an absolute error of ε = 3.09 × 10−7.
Interestingly, within the dataset we identified 13 separate instances of four-frame sequences that recorded 355 rods and 113 crossings. These combinations yield
| ε = 2.67 × 10−7. |
If the experiment had consisted of only such brief, 8-second sequences, the estimates would have been extraordinarily accurate. This would, of course, amount to cheating the system: 355/113 is an approximation of π known since antiquity.48
As a comparison, we implemented an OpenCV video generator that created binary videos mimicking those of our Brownian rods after pre-processing. The difference here was that, instead of exhibiting Brownian motion, the rods were redistributed between every frame with a new angle and position using the NumPy PCG64 pseudo-RNG. After running the Buffon experiment on this new video, we compared the quality of the π estimation, finding that the NumPy-based approach yielded much stronger convergence towards π (SI). The cumulative estimate remains stable to 3 significant figures after only 5000 frames. Nonetheless, this method requires energy consumption on the order of 4000 J min−1, whereas the Brownian motion comes for free.49
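A stripped-down, video-free version of this digital control can be sketched as follows; the per-frame rod count and line geometry are assumed values, not those of the actual generator, but the resampling logic is the same: every frame is an independent PCG64 draw rather than a Brownian re-scattering.

```python
import numpy as np

rng = np.random.default_rng(0)        # NumPy's default generator is PCG64
l, d = 3.0, 6.0                       # rod length and line spacing, d = 2l
rods_per_frame, n_frames = 60, 5000   # assumed counts, for illustration

total_rods = total_cross = 0
for _ in range(n_frames):
    # re-scatter every rod from scratch: fresh centre and in-plane angle
    x = rng.uniform(0, d, rods_per_frame)
    theta = rng.uniform(0, np.pi, rods_per_frame)
    half_span = 0.5*l*np.abs(np.cos(theta))   # horizontal half-extent of rod
    dist = np.minimum(x, d - x)               # distance to nearest line
    total_cross += int(np.count_nonzero(dist <= half_span))
    total_rods += rods_per_frame

print(f"pi estimate after {n_frames} frames: {total_rods/total_cross:.4f}")
```

Because every frame is fully decorrelated, this digital analogue resamples at the frame rate rather than the ~2 s Brownian decorrelation time, which is why it converges faster per frame.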
Notably, bands form in the scatter plots of frame-specific π estimates, clustering around integers and common fractions (e.g., 3, 3.5, 4). This pattern is a direct consequence of measuring a discrete, physical system – especially when the sample size per frame, N, is relatively small and the mathematical operations are minimal. Because π̂ is computed as a ratio of two integers, it is constrained to rational values. In essence, the inverses of our estimates form a subset of the Farey sequences FN for N ∈ [58, 120].50 As with Farey sequences, values formed from integers with many common divisors (e.g., 2, 3, 4, 5) appear with higher frequency. For example, π̂ = 3 can be formed by many rod–crossing pairs ((60, 20), (63, 21), …, (117, 39) and (120, 40)). In contrast, a less common value like π̂ = 3.1 occurs for only two specific combinations: (62, 20) or (93, 30). A more detailed description and visualisation can be found in the SI.
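This multiplicity argument can be checked by direct enumeration, assuming (per the range quoted above) per-frame rod counts between 58 and 120 and at most one crossing per rod:

```python
from collections import Counter
from fractions import Fraction

# Every reachable per-frame estimate n/c, as an exact reduced fraction,
# for rod counts n in [58, 120] and crossing counts c in [1, n]:
counts = Counter(Fraction(n, c)
                 for n in range(58, 121)
                 for c in range(1, n + 1))

print("ways to land on 3.0:", counts[Fraction(3, 1)])
print("ways to land on 3.1:", counts[Fraction(31, 10)])
```

Values with many representations (here 3.0, reachable from every pair (3c, c) with c from 20 to 40) form dense bands, while 31/10 requires c to be a multiple of 10 and so appears far more rarely.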
This feature reflects a fundamental aspect of measuring physical systems: such systems rely on discrete observations, which inherently produce discretised outputs. As a result, certain values are simply inaccessible. While increasing the measurement resolution (e.g., by observing more events) or performing more mathematical operations (i.e., increasing the permutations of event counts) can reduce this effect, a fundamental limitation remains – tied to the discrete nature of counting and observation. Interestingly, this limitation parallels that of electronic computation, where floating-point numbers are constrained by finite bit-depth – n-bit registers can only represent 2n discrete states.
Currently, the set-up relies heavily on digital systems to do both the measurement and mathematical operations. The next fundamental step would be to encode processing power into the physical system, for example through electrochemical signals with capacitive sensing, or microfluidics with on-chip counting.
While the present system is a minimal demonstration, it establishes a conceptual foundation for more complex physically implemented Monte Carlo algorithms. By re-framing passive Brownian motion as a computational engine, we link the physics of soft matter to the mathematics of sampling, opening new directions in alternative computing paradigms. Naturally, we do not envision such systems as advanced π-estimators. But with careful selection of a colloidal system and a geometric probability problem that can be mapped onto stochastic computations, they may offer a zero-energy-input resource if an energy-friendly readout method is devised in the future.
Poly(vinylpyrrolidone) (MW 40 000) (PVP40), sodium citrate (NaCit), 1-pentanol, ethanol, NH3, and Tween-20 surfactant were all purchased from Sigma-Aldrich. All chemicals were analytical grade and used as purchased without further purification.
To perform the Buffon's needle experiment, rods were recorded at intervals of 2 s.
Supplementary information includes supporting videos, diffusion behaviour of differently sized rods, an example of image thresholding, π estimates with different starting frames, a comparison with simulated rods and information on band formation. See DOI: https://doi.org/10.1039/d5sm00844a.
† The mathematical formalism for Brownian motion predates its physical explanation. In 1900, Louis Bachelier described similar stochastic processes in his attempts to model the Paris stock exchange – underscoring the deep connection between mathematical stochasticity and physical thermodynamic systems.32
This journal is © The Royal Society of Chemistry 2025