Open Access Article. This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.

Training physical matter to matter

Heinrich M. Jaeger *, Arvind Murugan and Sidney R. Nagel
The James Franck Institute and Department of Physics, The University of Chicago, 929 E 57th St., Chicago, Illinois 60637, USA. E-mail: h-jaeger@uchicago.edu

Received 23rd May 2024, Accepted 2nd August 2024

First published on 6th August 2024


Abstract

Biological systems offer a great many examples of how sophisticated, highly adapted behavior can emerge from training. Here we discuss how training might be used to impart similarly adaptive properties to physical matter. As a special form of materials processing, training differs in important ways from standard approaches to obtaining sought-after material properties. In particular, rather than designing or programming the local configurations and interactions of constituents, training uses externally applied stimuli to evolve material properties. This makes it possible to obtain different functionalities from the same starting material (pluripotency). Furthermore, training evolves a material in situ or under conditions similar to those during the intended use; thus, material performance can improve rather than degrade over time. We discuss requirements for trainability, outline recently developed training strategies for creating soft materials with multiple, targeted and adaptable functionalities, and provide examples where the concept of training has been applied to materials on length scales from the molecular to the macroscopic.


Adapted behavior through training

Memorizing and forgetting, teaching and learning, improving endurance with exercise: all are processes that relate to our daily experience. These processes may seem at first to be essentially biological activities. However, by taking appropriate cues from biology one can see how similar manipulation can be applied not just to biological but also to physical matter. As an example, machine learning, the broad endeavor to teach a system to classify objects into separate meaningful categories, is in part an attempt to apply training to computer algorithms. This leads us to ask more generally: how can the broad concept of training be applied to produce desired functionality in materials?

In biological systems training provides a path toward improved performance. Moreover, biological systems can often be trained for one property and then, when no longer useful to the organism, be retrained for an entirely different purpose. Indeed, one can think of the long evolutionary history of living organisms as training to make biological systems exquisitely adaptable to new requirements from their environment. This occurs through an intricate interplay of evolution and adaptation.

For example, individuals can alter their body's musculature by targeted personal-training regimens. By lifting weights, they can become stronger in arms or legs or core; by running they can develop flexibility, endurance, and speed; by practicing a musical instrument they can train the smaller muscles to operate repetitively and smoothly to execute delicate maneuvers. Some of this training presumably occurs in the brain, but much of it also occurs in the muscles themselves – that is, by altering the size, shape and/or function of the biological material. A second, less obvious, example is Wolff's law1 for how bone becomes stronger and more resilient due to repeated exposure to stresses; when small breaks occur, the body rebuilds those particular spots to become stronger. Exercise – training – evolves the material of the bone to become more useful for the specific tasks it encounters and is required to perform in the course of living. The material gains this resilience through repeated training.

Training as a form of processing. As a community, we have begun to realize that physical matter – that is, materials not associated with living or biological tissue – can also be trained in non-trivial ways. We are all familiar with the processing of materials. While training can certainly be viewed as a form of material processing, it occupies a rather special niche since the philosophy underlying training differs from that of typical processing protocols. As an example consider steel blades, which can be made stronger and sharper by heat tempering of the alloys.2 In that case, the material acquires its enhanced function via exposure to temperatures or stresses far exceeding those it encounters during its intended application. Later during use it does not become stronger or sharper over time. It also cannot be retrained after deployment in the field to give it different material properties. An ideal trainable material, on the other hand, acquires its enhanced functionality in situ under conditions similar to those during actual use. This implies that it will perform better the more it is used. Moreover, to obtain new functionality it can be retrained in the field after deployment.

Training versus design. The idea of training to create or enhance function is also different from the traditional philosophy underlying materials design. Conventional approaches focus on generating desired material properties by designing specific structural configurations and associated local interactions among the constituent components of a material, often at molecular scales.3–6 Changing the targeted properties then requires careful re-programming of those local interactions. Furthermore, once identified, the associated design parameters are typically intended to remain fixed, so as to maintain a material's properties. In other words, the key goal is to find local interactions that correspond to deep local minima in the free energy.

In contrast to such a design strategy, trainability requires that adaptive changes in physical parameters be possible, and evolution of the original interactions becomes the central feature of the training process. That is, training and retraining take advantage of shallow ripples on the energy landscape – deep enough to retain a sufficiently long memory, but not so deep that an escape cannot be triggered.

To change properties, training does not rely on thermally activated escape from deep wells in a fixed energy landscape. Instead, its defining feature is that external stimuli tilt or otherwise reconfigure the whole energy landscape so that the targeted state becomes a local minimum toward which the system can be biased to flow over those shallow ripples.
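This picture is simple enough to capture in a minimal numerical sketch (our own illustration, with an arbitrary double-well potential and stimulus strength, not a model from the literature): gradient descent on E(x) = (x² − 1)² − sx, where the stimulus s tilts the landscape enough that the system flows out of its original minimum into the targeted one, which persists once s is removed.

```python
# Minimal sketch (illustrative only): a training stimulus s tilts a
# double-well landscape E(x) = (x**2 - 1)**2 - s*x so that the system
# flows into the targeted state, which persists once s is withdrawn.

def grad(x, s):
    # dE/dx for the tilted double well
    return 4 * x * (x**2 - 1) - s

x = -1.0                                 # start in the 'untrained' minimum
for s in [2.0] * 500 + [0.0] * 500:      # apply the stimulus, then remove it
    x -= 0.01 * grad(x, s)               # overdamped relaxation (gradient flow)

print(round(x, 2))                       # ~1.0: the trained state is retained
```

Removing the stimulus restores the original double well, but the system now sits in the other minimum – a simple memory of the training.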

Global versus local stimuli. Furthermore, during training, stimuli are applied to the material as a whole and these stimuli then find their way selectively to those local constituents whose adaptive response generates the desired overall performance enhancement. Effectively, while typical design involves deliberate manipulation of local interactions, training applies only global cues. Under an appropriate training protocol, the material reconfigures as it learns and memorizes the relevant interaction parameters on its own and without detailed control at the local level. Conceptually, this stimuli-induced but otherwise autonomous evolution of local interactions represents a radical departure from standard design approaches (Fig. 1).


Fig. 1 Training as a new paradigm for generating desired functionality in materials.

Pluripotency. It is exciting to envision training as a new paradigm for creating pluripotent function. Thus we could ask, taking another page from the biology textbooks, whether one can make a material platform that mimics the capabilities of stem cells so that exposure to the environment (training) can dictate the subsequent functionality of the material after it has been deployed. As is possible with a stem cell, the same material could subsequently be re-trained for many different purposes.

Training as a general approach. Training need not be confined to mechanical responses to applied stress or strain. It can involve any other type of response, be it electrical, magnetic, optical, or chemical. The ideas that undergird the notion of training are not tied to any particular system; rather, training is a general concept for how to create desired function. It should be applicable at different length scales, from the molecular to the architectural, and at different energy scales, encompassing both soft and hard matter.

Requirements for trainability

Clearly, not all materials can be trained to produce a useful outcome. Thus, it is important to understand what is required in a material for it to be ‘trainable’ and potentially ‘retrainable,’ and what are the possibilities and limits of such a process.

In considering different types of materials as platforms for trainable, potentially pluripotent behaviors, and given the points made above about training, we are looking for at least two key ingredients:

(1) a number of distinct responses (states), each of which can retain a memory of the training outcome, and

(2) the ability of the material to reconfigure and evolve from one state to another, meaning that these states must not only exist but be accessible during the training.

Furthermore, for a given trainable material, a key task will be the design of appropriate training environments and regimens. Thus, both proper choice of the material platform and proper design of the training protocol will be critical for reaching a desired functionality or level of performance.

Examples of trainable matter

At this stage it is useful to give some specific examples of how a physical material can be trained to create behavior not normally found. We also suggest some types of materials that appear to be good candidates for training.

Soft matter as a trainable material. In thinking about the requirements for training it is apparent that some materials can be trained more easily than others. Soft materials7 appear particularly well suited to training as they often have a multitude of easily accessible, energetically similar configurations that training can select so as to amplify a desired property (see Fig. 1). If the system were isolated in a very deep well in the energy landscape, it would be unable to explore phase space efficiently. For this reason, it is perhaps intuitive that soft-matter systems, with their malleability, present a particularly attractive platform on which to apply training. In soft materials, training protocols based on applying mechanical stress provide straightforward access to nonlinear regimes, which can help to imprint a memory8 by triggering long-lasting changes such as plastic structural deformation.9

Role of disorder. It makes sense that some degree of disorder, either in the local interactions among the components of a material or in their structural configuration, is advantageous in enabling reconfigurability.10–12 By contrast, perfect crystals, while exhibiting exceedingly long-term configurational memory, cannot adaptively evolve their structure and thus are not trainable.

Importantly, disorder also makes it possible for stimuli, which during training are applied on the outside of a material, to be directed to those parts inside the material that benefit the most from adaptation. Such internal focusing of uniformly applied external stimuli is well known from the physics of disordered, heterogeneous materials. For example, mechanical stress propagates through such materials along a network of paths that concentrate force on a limited number of internal spots.13,14 If these spots can be trained to adapt to the load, the material as a whole will become more resilient.

Disordered networks. Materials based on disordered networks appear to be prime candidates for trainability. Many materials and patterns can be profitably modeled as networks. Macroscopic mechanical metamaterials composed of nodes and struts,15–17 crosslinked polymers18–20 and biological fibers,21,22 the abovementioned bone,1,23 and even the creases in folded sheets24 all have a network structure where links between nodes can be clearly identified. We can then think of training as evolving the properties of the links and/or the configuration of the nodes. Thus, networks can be viewed as prototypical adaptive materials.

This adaptation can also involve hierarchies of networks.17 The initial experiments for directed aging relied on a macroscopic, disordered network of struts connecting nodes, all cut from polymeric material;25 the aging then changed the mechanical properties of the struts (while not altering the connectivity at the nodes) by reorganizing the more microscopic networks formed by the crosslinked polymer comprising the strut and node material. It has also been shown in computer simulations that interactions between distant pairs of the macroscopic network nodes can be induced by aging the system while driving the nodes to have the desired response.26 This again is a feature of networks that mimic the functions found in biological matter – in this case the allosteric properties of proteins.27–30 In addition to real-space networks, training and memory have been shown to be relevant in networks of chemical affinities between many species of molecules.31–33

Examples of training protocols

We have given some examples of materials that could be good platforms on which to apply training. We would now like to provide examples of training protocols and how training could occur.

Directed aging. One way to train a material is to exploit material aging.25 Aging occurs in systems that have been forced out of equilibrium, either by dropping the temperature or by exerting external forces. As the system continually searches the available phase space, it discovers lower free-energy configurations. The longer it explores, the lower the energy can get.

We are all too familiar with the fact that aging often leads to detrimental degradation of a material. However, a system can sometimes be coaxed to go downhill in energy towards a state that represents a preferred outcome. This state is selected by appropriately applied biases, e.g., stresses or strains, on the material as a whole. In other words, such aging is directed to evolve in a desired manner. The inhomogeneous local response allows the material to evolve because stressed regions relax at different rates than unstressed ones.

In this example, the aging behaves in a greedy fashion, analogous to how a greedy algorithm works in a simulation. By this we mean that the material does not need to look ahead in time and find an optimum strategy; it only needs to find a strategy that is “good enough.” All it needs to know is the direction in phase space along which the energy decreases fastest; it never sacrifices a locally best move in the hope that doing so would yield an even greater energy saving later on. That is, the system only does energy minimization and does not do the equivalent of finite-temperature annealing. Direct energy minimization is extremely rapid compared to gradual annealing at lower and lower temperatures.

The material chooses its own pathway just by being exposed to the applied stresses or strains. Moreover, this training protocol is not restricted to the linear response of a material; it can readily be extended to non-linear properties.9 One way of thinking about this is to consider each of the material's constituent building blocks that can age as a transient degree of freedom in the system.34 While the material ages, these degrees of freedom change their values until they have reached a particularly low-stress regime (or until the temperature is lowered so that relaxation is slowed).
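As a caricature of this process, consider a set of parallel springs sharing an imposed strain, with the stiffnesses serving as the transient degrees of freedom. The following is a minimal sketch, assuming a softening rate proportional to the local stress squared; the specific rule and all parameters are our own illustration, in the spirit of ref. 25, not a quantitative model.

```python
import numpy as np

# Caricature of directed aging: parallel springs share an imposed strain,
# and each stiffness k_i acts as a transient degree of freedom that softens
# at a rate set by the stress it carries, so the most highly stressed parts
# of the material adapt fastest (rule and parameters are illustrative).
rng = np.random.default_rng(0)
k = rng.uniform(0.5, 2.0, 10)          # disordered initial stiffnesses
strain, rate = 1.0, 0.05

print("before:", k.std() / k.mean())   # initial stiffness heterogeneity
for _ in range(100):
    stress = k * strain                # local stress carried by each spring
    k -= rate * stress**2 * k          # greedy, purely downhill relaxation
    k = np.clip(k, 0.05, None)         # stiffness cannot drop below a floor
print("after: ", k.std() / k.mean())   # the stressed springs have homogenized
```

Because the most highly stressed springs soften fastest, these purely greedy dynamics even out the stress distribution without any look-ahead.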

The directed aging protocol has been used to create materials with uncommon properties. An example is a material with a negative Poisson's ratio. Nearly all non-engineered materials have a positive Poisson's ratio, ν > 0.35 (when stretched along one axis, a material with ν > 0 contracts along the perpendicular axes). But elasticity theory allows negative as well as positive Poisson's ratios. Simply by placing a material under uniform compression and then aging it in that configuration for a substantial period of time, an otherwise normal elastic material can fundamentally alter its behavior by developing a negative Poisson's ratio, known as auxetic behavior.25,36 Different values of ν can be achieved by varying the training conditions. Ideas like directed aging have also been used in experiments to train origami structures with multiple folding patterns to fold along specific pathways.37

Extensions of directed aging to meta-properties. Directed aging can be extended to meta-properties such as making a material adaptable to specific changes cued from the environment.38 In this case, the objective is not to train for any single functionality, but rather to train the material such that it can most easily switch between a set of targeted functionalities that are incompatible. This, too, takes its inspiration from biological evolution, which has led to organisms that easily adapt to sudden drastic changes in the environment.39,40 Similar adaptability was achieved in a computer model where, during the training process, the material was alternately trained for two functions that were incompatible with each other. (The two functions could be in-phase and out-of-phase motion – i.e., motions that cannot be simultaneously achieved.) Training repeatedly first for one behavior and then, before the training was complete, for the other eventually allowed the material to switch between the two behaviors with a minimum of alternations. This training protocol led to a material that could rapidly adapt to external cues. Thus protocols can be designed to achieve not just specific functions but also meta-properties, such as being able to switch between incompatible functionalities with minimal changes in parameters.
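The structure of such an alternating protocol is easy to sketch. The following is schematic only; the quadratic 'tasks' and the parameter vector are hypothetical stand-ins, not the model of ref. 38.

```python
import numpy as np

# Schematic of an alternating protocol (not the actual model of ref. 38):
# train toward target A, switch to target B well before convergence, repeat.
# The 'material' here is simply a hypothetical parameter vector p.
target_A = np.array([1.0, 0.0, 1.0, 0.0])
target_B = np.array([0.0, 1.0, 0.0, 1.0])    # incompatible with target_A

p = np.zeros(4)
lr, bout = 0.1, 5                            # short bouts: switch early

for epoch in range(40):
    target = target_A if epoch % 2 == 0 else target_B
    for _ in range(bout):
        p -= lr * 2 * (p - target)           # greedy step toward current task

# In a rugged landscape this sculpts a state from which either behavior is
# reachable with small parameter changes; in this convex toy, p simply
# settles between the two targets.
print(p)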

Coupled learning. Another recent approach uses a material-training rule similar to directed aging that alternates between softening and stiffening of bonds (a ‘thumbs up/thumbs down’ rule41,42) to accomplish more complex tasks. Other, more involved training variants based on equilibrium propagation,43 such as coupled learning,44 have mathematically provable convergence. However, this benefit comes at the expense of requiring two or more identical copies of the material whose responses to changing boundary conditions can be compared. Unlike directed aging or thumbs up/thumbs down extensions, equilibrium-propagation-based rules may not be a natural path that many systems automatically follow on their own. However, it has been shown that specially designed electrical circuits can be trained this way to perform a simple classification task as well as linear regression.45 One can conceive of other extensions of this approach, which can also be implemented in mechanical metamaterials, such as exploiting non-equilibrium memory.46
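To make the two-copy idea concrete, here is a minimal coupled-learning sketch on a two-resistor voltage divider. The circuit, target and rates are a toy of our own construction rather than the experimental platform of ref. 45, but the update has the local form used in coupled learning:44 each conductance compares the voltage drop across itself in a 'free' state and in a slightly 'clamped' one.

```python
import numpy as np

# Minimal coupled-learning sketch (cf. ref. 44) on a two-resistor voltage
# divider -- a toy of our own construction, not the circuits of ref. 45.
# The conductances (learning degrees of freedom) update by comparing a
# 'free' state with a slightly 'clamped' one, using only local voltage drops.
V_in, V_target = 1.0, 0.3        # fixed input and desired output voltage
k = np.array([1.0, 1.0])         # conductances of the two resistors
eta, alpha = 0.05, 0.5           # nudge amplitude and learning rate

def free_output(k):
    # The physics solves itself: the divider output minimizes dissipation.
    return k[0] * V_in / (k[0] + k[1])

for _ in range(200):
    V_F = free_output(k)                    # free state
    V_C = V_F + eta * (V_target - V_F)      # clamped (nudged) state
    dV_F = np.array([V_in - V_F, V_F])      # voltage drops, free state
    dV_C = np.array([V_in - V_C, V_C])      # voltage drops, clamped state
    # local rule: each conductance compares its own two dissipations
    k += (alpha / eta) * 0.5 * (dV_F**2 - dV_C**2)
    k = np.clip(k, 1e-3, None)              # keep conductances physical

print(round(free_output(k), 3))             # approaches V_target = 0.3
```

No element needs global knowledge of the error; the comparison of the two states is what steers each local parameter.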

Extending training to systems with changing topology. One aspect of a network with fixed topology is that the connectivity of each node remains unaltered as training proceeds. In this case, the unchanging topology encodes a memory of the initial configuration. The question that then presents itself is whether materials that are not representable as networks, or whose networks have evolving topology, can be trained in the same manner as those with a fixed network topology. That is, can the memory of prior training – and its usefulness in creating novel function – be preserved if the material connectivity is allowed to change significantly over time?

Combining Darwinian and Lamarckian elements. In answering this question we first consider a process that combines a training step with a separate step that involves a potential topology change. Such a process evolves material function in ways that resemble modes of biological evolution through a combination of Darwinian and Lamarckian mechanisms. If we associate a given network topology with a genotype, the above discussion of in situ training can be viewed as within-a-lifetime adaptation, while changes in network topology would be akin to genetic changes over generations in Darwinian evolution.

In the analogy that we present here, the basic Darwinian mechanism might start with networks of slightly different topology and train them for a particular task or set of tasks. The most promising networks, in the sense that they respond best to training, become the initial ‘parent’ network topology from which copies are made, potentially with local variations in topology. The variations could be due to ‘mutations’ in which some bonds are incorrectly copied to the next generation. These ‘offspring’ networks are then trained again for different tasks, and the topologies that were most successful in being trained are again copied – with some variation – and trained again. Such a process resembles Darwinian evolution of networks, but with selection on the ability to undergo successful training rather than the usual optimization of materials for a specific task. The materials resulting from such selection do not promise to have any specific property but can instead be expected to undergo successful training for a range of tasks.

During the training of each generation the local network geometry adapts as the struts deform from the applied strains. Importantly, in the copying step to generate the next generation, these deformations are copied too, creating an untrained network with the exact geometry (not only the topology) of the productively trained parent. Since the offspring network uses fresh material, it can be trained again to the fullest extent (in terms of parameters such as the stiffnesses). One can repeat the in situ training process from this fresh sample, make a copy of successfully trained networks (with variation) and repeat this over generations of material.

This type of training is analogous to Darwinian evolution with Lamarckian elements, since the trained geometry of a network in one generation is inherited by the subsequent generation. Such a training scheme might expand the universe of trainable tasks; tasks that could not be trained for with one round of training might be achievable by refreshing the trained geometry. Another distinct Lamarckian possibility is directed mutagenesis,47 where the training creates a set of bonds that are distorted and are therefore hard to reproduce from one generation to the next. The training thus promotes mutations at the spots where training is most effective. Schematically, we suggest the following elements as illustrated in Fig. 2: (training/aging) ⇒ (alters shape of bonds at particularly important parts of the network) ⇒ (makes those bonds difficult to copy correctly) ⇒ (creates mutations – copying errors – preferentially at those spots) ⇒ (passes to the next generation those changes originally caused by the training). This is one example of a mechanism by which training can influence the Darwinian evolution of the material.
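A skeleton of such a scheme might look as follows (purely schematic: the 'material' is reduced to a vector of bond stiffnesses, and the task and mutation rule are hypothetical stand-ins for the networks of Fig. 2):

```python
import numpy as np

# Skeleton of Darwinian selection with a Lamarckian copying step
# (schematic: 'material' = vector of bond stiffnesses; task and mutation
# rule are hypothetical stand-ins, not the laser-cut networks of Fig. 2).
rng = np.random.default_rng(1)
POP, GENS, BONDS, BUDGET = 20, 25, 16, 10

def train(k, task, lr=0.1):
    # within-a-lifetime, in situ training: a few greedy aging steps
    for _ in range(BUDGET):
        k = k - lr * 2 * (k - task)
    return k

population = [rng.uniform(0.0, 2.0, BONDS) for _ in range(POP)]
for gen in range(GENS):
    task = rng.uniform(0.0, 2.0, BONDS)            # tasks vary by generation
    trained = [train(k, task) for k in population]
    # select on how well the material *responded to training*,
    # not on how well the untrained material performed
    scores = [-np.sum((k - task) ** 2) for k in trained]
    parent = trained[int(np.argmax(scores))]
    # Lamarckian step: offspring inherit the parent's trained geometry,
    # plus rare copying errors ('mutations') on a few bonds
    population = [parent + rng.normal(0.0, 0.1, BONDS) * (rng.random(BONDS) < 0.2)
                  for _ in range(POP)]
```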


Fig. 2 Darwinian evolution with Lamarckian elements. Each generation of the material is physically trained and the next generation is made with fresh material in the geometry of the trained material. Mutations (change in topology) during the copying are likely in regions with particularly large distortion due to training. Images are of a laser-cut network (top left) that was then trained by compression (top right). These images were obtained from authors of ref. 25.

Training with topology change. More generally, the training and the topology change need not be separated cleanly. Thus training with concurrent topology changes can work in network-based materials as long as training memory can be preserved, meaning that relaxation back to an untrained state is sufficiently slow. For example, the stress-adaptive behavior in bone is a consequence of training that triggers changes in local network topology, and such bone remodeling can grow new connections where needed.1,23 Similar remodeling occurs in many other bio-mechanical networks.48

Another example is given by dry granular materials49–51 or dense, non-Brownian suspensions of small particles in a liquid,52–55 where training by cyclic shear evolves the network of particle–particle contacts, thereby changing the local connectivity and instilling a memory of the pathway created by the applied shear strain. At the molecular scale, networks formed by dynamic covalent bonds among polymers can similarly evolve and adapt.20,56,57 Examples here are new materials based on liquid crystal elastomers that can exhibit trainable shape shifting. These materials have many stable states at normal operating conditions and can be reconfigured because the covalent bonds can change neighbors: subjecting these materials to a new configuration during training induces dynamic bond exchange, which relaxes internal stresses and keeps a trained-in configuration stable.
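A one-dimensional caricature of the contact-network memory instilled by cyclic shear is easy to simulate (a toy in the spirit of the random-organization models of refs. 52–54, with all parameters chosen arbitrarily): particles that 'collide' during a cycle of a given amplitude receive random kicks until an absorbing, quiescent configuration is found, and reading out at different amplitudes then reveals the trained value.

```python
import numpy as np

# 1-D toy of memory formation under cyclic shear, in the spirit of the
# random-organization models of refs. 52-54 (parameters are arbitrary).
# Particles closer than the strain amplitude 'collide' during a cycle and
# receive random kicks; repetition finds an absorbing, quiescent state
# whose gap structure encodes the trained amplitude.
rng = np.random.default_rng(0)
N, L = 40, 100.0
x = rng.uniform(0, L, N)

def cycle(x, gamma):
    # nearest-neighbor distances on a ring of circumference L
    d = np.abs((x[:, None] - x[None, :] + L / 2) % L - L / 2)
    np.fill_diagonal(d, np.inf)
    active = d.min(axis=1) < gamma             # who collides at this amplitude
    x = x.copy()
    x[active] = (x[active] + rng.uniform(-0.5, 0.5, active.sum())) % L
    return x, active.mean()

gamma_train = 0.8
for _ in range(5000):                          # train until (nearly) absorbing
    x, f = cycle(x, gamma_train)

for g in (0.2, 0.4, 0.6, 0.8, 1.0, 1.2):       # read out the memory
    _, f = cycle(x, g)
    print(g, round(f, 2))                      # activity turns on above ~0.8
```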

Training memory and re-training. Given that trainability requires the ability of a material to reconfigure internally, that same ability to change also implies that trained materials might be prone to ‘forget’ unless training stimuli continue to be applied.9 However, as long as the material stays exposed to the intended operating conditions, such ‘refresher training’ happens automatically through exposure to the environment. We experience the same with exercising our bodies. Still, depending on the strength of the training memory, there are at least two classes of trainable materials. The first, and certainly more common, class has a training memory that is finite and will decay after some time. The other class has a very long, potentially infinite memory due to a response that is strongly hysteretic between when a training stimulus is applied and when it is withdrawn. Examples are networks of coupled mechanical elements that can each reversibly snap into one of two states (so-called hysterons).58 There is, of course, the possibility that some memories are not “erased” but are rather simply over-written. This has the potential to leave behind vestigial memories of previous training, as occurs in some biological contexts.
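The hysteron picture admits a compact sketch (a generic Preisach-style toy with arbitrary thresholds, not the specific model of ref. 58): each element flips 'up' at its own threshold field and back 'down' only at a lower one, so the collective state retains a long-lived record of the driving history.

```python
import numpy as np

# Generic Preisach-style toy of bistable hysterons (cf. ref. 58; thresholds
# are arbitrary). Element i flips up when the drive exceeds h_up[i] and
# flips back down only when the drive falls below h_dn[i] < h_up[i].
rng = np.random.default_rng(3)
N = 8
h_up = rng.uniform(0.2, 1.0, N)
h_dn = h_up - rng.uniform(0.3, 0.8, N)
state = -np.ones(N)                        # all hysterons start 'down'

def drive(h):
    state[h >= h_up] = +1.0                # snap up
    state[h <= h_dn] = -1.0                # snap down
    return state.sum()                     # net state of the array

# Ramping the drive up and back to zero leaves some hysterons flipped:
# a strongly hysteretic, long-lived record of the largest field applied.
for h in (0.0, 0.5, 0.0, 0.9, 0.0):
    print(h, drive(h))
```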

For both classes, this brings up the question of how to devise protocols that efficiently erase previous training outcomes so that a material can subsequently be retrained for an entirely different response. This can be seen as another, related research area that studies not only the elastic response of a material but also the time dependence of that response – for learning and forgetting as well as for determining the pathway followed by the dynamics.59 It also suggests that, for adequate retraining, modifications made to a material during the initial training must be reversible. This brings in not only the role of raising or lowering temperature as a way of enhancing or halting relaxation, but also the role that different chemical reactions can play in altering the response of sections of a material that are under greater or lesser stress. This, again, takes inspiration from how biological material can alter its properties by sensing target bonds that are under large or small stress.60

Outlook

Trainability, together with learning and memory, forms a key element in the evolution of biological systems and, more recently, in machine learning with computers. Currently no systematic framework exists for designing trainable (soft or hard) materials and for devising optimal training protocols. One can hope to develop such a framework by taking advantage of an emerging synergy between recent ideas from biology, materials science, polymer chemistry and soft matter physics, all addressing different aspects of learning and memory in complex systems. These include theories of plasticity, evolution and learning in biology, combined with recent insights about the malleability of disorder in materials science.

The new types of trainable materials we envision can exist from the macroscopic to the molecular scale; in their most potent manifestations they will require careful consideration of all these scales. The ultimate aim is to develop novel strategies for creating soft materials with multiple, targeted and adaptable functions. Research along this direction has the potential to lay the scientific foundation for training as a new paradigm within the larger field of materials processing. In this vision, we can use training to make matter functional in novel ways. Training will make matter matter in ways we have not yet discovered or even conceived.

Data availability

This perspective article does not include data.

Conflicts of interest

The authors declare no competing interest.

Acknowledgements

We thank Nidhi Pashine for providing the network images used in Fig. 2. We thank our many collaborators who worked on different aspects of training in materials. This work was primarily supported by the National Science Foundation MRSEC under Award No. DMR-2011854. S. R. N. acknowledges support from DOE Basic Energy Sciences Grant No. DE-SC0020972.

References

1. J. Wolff, The Law of Bone Remodeling (translation of the German 1892 edition), Springer, Berlin Heidelberg New York, 1986.
2. K. K. Ma, H. M. Wen, T. Hu, T. D. Topping, D. Isheim, D. N. Seidman, E. J. Lavernia and J. M. Schoenung, Acta Mater., 2014, 62, 141–155.
3. S. C. Glotzer and M. J. Solomon, Nat. Mater., 2007, 6, 557–562.
4. A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder and K. A. Persson, APL Mater., 2013, 1, 011002.
5. U. G. K. Wegst, H. Bai, E. Saiz, A. P. Tomsia and R. O. Ritchie, Nat. Mater., 2015, 14, 23–36.
6. A. Zunger, Nat. Rev. Chem., 2018, 2, 0121.
7. S. R. Nagel, Rev. Mod. Phys., 2017, 89, 025002.
8. N. C. Keim, J. D. Paulsen, Z. Zeravcic, S. Sastry and S. R. Nagel, Rev. Mod. Phys., 2019, 91, 035002.
9. D. Hexner, N. Pashine, A. J. Liu and S. R. Nagel, Phys. Rev. Res., 2020, 2, 043231.
10. C. P. Goodrich, A. J. Liu and S. R. Nagel, Phys. Rev. Lett., 2015, 114, 225501.
11. D. Hexner, A. J. Liu and S. R. Nagel, Soft Matter, 2018, 14, 312–318.
12. D. Hexner, A. J. Liu and S. R. Nagel, Phys. Rev. E, 2018, 97, 063001.
13. C. H. Liu, S. R. Nagel, D. A. Schecter, S. N. Coppersmith, S. Majumdar, O. Narayan and T. A. Witten, Science, 1995, 269, 513–515.
14. T. S. Majmudar and R. P. Behringer, Nature, 2005, 435, 1079–1082.
15. J. Paulose, A. S. Meeussen and V. Vitelli, Proc. Natl. Acad. Sci. U. S. A., 2015, 112, 7639–7644.
16. K. Bertoldi, V. Vitelli, J. Christensen and M. van Hecke, Nat. Rev. Mater., 2017, 2, 1–11.
17. C. Coulais, A. Sabbadini, F. Vink and M. van Hecke, Nature, 2018, 561, 512.
18. E. A. Appel, J. del Barrio, X. J. Loh and O. A. Scherman, Chem. Soc. Rev., 2012, 41, 6195–6214.
19. J. Y. Sun, X. H. Zhao, W. R. K. Illeperuma, O. Chaudhuri, K. H. Oh, D. J. Mooney, J. J. Vlassak and Z. G. Suo, Nature, 2012, 489, 133–136.
20. B. T. Michal, C. A. Jaye, E. J. Spencer and S. J. Rowan, ACS Macro Lett., 2013, 2, 694–699.
21. M. L. Gardel, J. H. Shin, F. C. MacKintosh, L. Mahadevan, P. Matsudaira and D. A. Weitz, Science, 2004, 304, 1301–1305.
22. L. Blanchoin, R. Boujemaa-Paterski, C. Sykes and J. Plastino, Physiol. Rev., 2014, 94, 235–263.
23. J. H. Keyak, S. Sigurdsson, G. S. Karlsdottir, D. Oskarsdottir, A. Sigmarsdottir, J. Kornak, T. B. Harris, G. Sigurdsson, B. Y. Jonsson, K. Siggeirsdottir, G. Eiriksdottir, V. Gudnason and T. F. Lang, Bone, 2013, 57, 18–29.
24. J. Andrejevic, L. M. Lee, S. M. Rubinstein and C. H. Rycroft, Nat. Commun., 2021, 12, 1470.
25. N. Pashine, D. Hexner, A. J. Liu and S. R. Nagel, Sci. Adv., 2019, 5, eaax4215.
26. D. Hexner, A. J. Liu and S. R. Nagel, Proc. Natl. Acad. Sci. U. S. A., 2020, 117, 31690–31695.
27. J. W. Rocks, N. Pashine, I. Bischofberger, C. P. Goodrich, A. J. Liu and S. R. Nagel, Proc. Natl. Acad. Sci. U. S. A., 2017, 114, 2520–2525.
28. L. Yan, R. Ravasio, C. Brito and M. Wyart, Proc. Natl. Acad. Sci. U. S. A., 2017, 114, 2526–2531.
29. T. Tlusty, A. Libchaber and J. P. Eckmann, Phys. Rev. X, 2017, 7, 021037.
30. J. W. Rocks, H. Ronellenfitsch, A. J. Liu, S. R. Nagel and E. Katifori, Proc. Natl. Acad. Sci. U. S. A., 2019, 116, 2506–2511.
31. A. Murugan, Z. Zeravcic, M. P. Brenner and S. Leibler, Proc. Natl. Acad. Sci. U. S. A., 2015, 112, 54–59.
32. W. S. Zhong, D. J. Schwab and A. Murugan, J. Stat. Phys., 2017, 167, 806–826.
33. C. G. Evans, J. O’Brien, E. Winfree and A. Murugan, Nature, 2024, 625, 500–507.
34. V. F. Hagh, S. R. Nagel, A. J. Liu, M. L. Manning and E. I. Corwin, Proc. Natl. Acad. Sci. U. S. A., 2022, 119, e2117622119.
35. G. N. Greaves, A. L. Greer, R. S. Lakes and T. Rouxel, Nat. Mater., 2011, 10, 823–837.
36. R. Lakes, Science, 1987, 235, 1038–1040.
37. C. Arinze, M. Stern, S. R. Nagel and A. Murugan, Phys. Rev. E, 2023, 107, 025001.
38. M. J. Falk, J. Y. Wu, A. Matthews, V. Sachdeva, N. Pashine, M. L. Gardel, S. R. Nagel and A. Murugan, Proc. Natl. Acad. Sci. U. S. A., 2023, 120, e2219558120.
39. N. Kashtan and U. Alon, Proc. Natl. Acad. Sci. U. S. A., 2005, 102, 13773–13778.
40. A. Murugan and H. M. Jaeger, MRS Bull., 2019, 44, 96–105.
41. M. Stern, C. Arinze, L. Perez, S. E. Palmer and A. Murugan, Proc. Natl. Acad. Sci. U. S. A., 2020, 117, 14843–14850.
42. M. Stern and A. Murugan, Annu. Rev. Condens. Matter Phys., 2023, 14, 417–441.
43. B. Scellier and Y. Bengio, Front. Comput. Neurosci., 2017, 11, 24.
44. M. Stern, D. Hexner, J. W. Rocks and A. J. Liu, Phys. Rev. X, 2021, 11, 021045.
45. S. Dillavou, M. Stern, A. J. Liu and D. J. Durian, Phys. Rev. Appl., 2022, 18, 014040.
46. M. Falk, A. Strupp, B. Scellier and A. Murugan, arXiv, 2023, preprint, arXiv:2312.17723, https://arxiv.org/pdf/2312.17723.
47. E. V. Koonin and Y. I. Wolf, Biol. Direct, 2009, 4, 42.
48. M. Stern, M. B. Pinson and A. Murugan, Phys. Rev. X, 2020, 10, 031044.
49. J. R. Royer and P. M. Chaikin, Proc. Natl. Acad. Sci. U. S. A., 2015, 112, 49–53.
50. D. Fiocco, G. Foffi and S. Sastry, Phys. Rev. Lett., 2014, 112, 025702.
51. M. O. Lavrentovich, A. J. Liu and S. R. Nagel, Phys. Rev. E, 2017, 96, 020101.
52. D. J. Pine, J. P. Gollub, J. F. Brady and A. M. Leshansky, Nature, 2005, 438, 997–1000.
53. L. Corté, P. M. Chaikin, J. P. Gollub and D. J. Pine, Nat. Phys., 2008, 4, 420–424.
54. J. D. Paulsen, N. C. Keim and S. R. Nagel, Phys. Rev. Lett., 2014, 113, 068301.
55. H. Kim, A. P. Esser-Kahn, S. J. Rowan and H. M. Jaeger, Proc. Natl. Acad. Sci. U. S. A., 2023, 120, e2310088120.
56. N. R. Boynton, J. M. Dennis, N. D. Dolinski, C. A. Lindberg, A. P. Kotula, G. L. Grocke, S. L. Vivod, J. L. Lenhart, S. N. Patel and S. J. Rowan, Science, 2024, 383, 545–551.
57. C. A. Lindberg, E. Ghimire, C. Q. Chen, S. Lee, N. D. Dolinski, J. M. Dennis, S. H. Wang, J. J. de Pablo and S. J. Rowan, J. Polym. Sci., 2023, 62, 907–915.
58. J. N. Ding and M. van Hecke, J. Chem. Phys., 2022, 156, 204902.
59. C. W. Lindeman, V. F. Hagh, C. I. Ip and S. R. Nagel, Phys. Rev. Lett., 2023, 130, 197201.
60. K. Hayakawa, H. Tatsumi and M. Sokabe, J. Cell Biol., 2011, 195, 721–727.
