Complex Systems Studies
What's up in Complexity Science?!
Check out here:

@ComplexSys

#complexity #complex_systems #networks #network_science

📨 Contact us: @carimi
👌🏾 An Introduction to Complex Systems: Society, Ecology, and Nonlinear Dynamics
http://physicstoday.scitation.org/doi/full/10.1063/PT.3.3766

Physics Today 70, 11, 51 (2017); https://doi.org/10.1063/PT.3.3766

PDF 👇🏼👇🏻👇🏽
🎞 Civilization Far From Equilibrium - Energy, Complexity, and Human Survival

https://www.perimeterinstitute.ca/videos/civilization-far-equilibrium-energy-complexity-and-human-survival

Abstract
Human societies use complexity – within their institutions and technologies – to address their various problems, and they need high-quality energy to create and sustain this complexity. But now greater complexity is producing diminishing returns in wellbeing, while the energetic cost of key sources of energy is rising fast. Simultaneously, humankind’s problems are becoming vastly harder, which requires societies to deliver yet more complexity and thus consume yet more energy. Resolving this paradox is the central challenge of the 21st century.
Thomas Homer-Dixon holds the CIGI Chair of Global Systems at the Balsillie School of International Affairs in Waterloo, Canada, and is a Professor at the University of Waterloo.
💎 How do little things combine to do big things?
Bryan Daniels

http://www.public.asu.edu/~bdaniel6/collective-behavior-amplification.html

Collective behavior in living systems is fascinating and diverse: proteins collectively perform metabolism in your cells; neurons collectively process information in your brain; honeybees collectively forage and care for their young.

Collective behavior in inanimate systems is also beautiful and can be more pristine: the fact that the atoms of carbon in a diamond are equivalent means that the crystal structure they form can be utterly precise. It also makes it easier to describe real crystals and predict their properties: once you know the fundamental repeating building block, it's just a matter of scaling up to have trillions of repeating units.

Physics tells us a lot about how to deal with these less complicated collective systems. We can connect the properties of atoms to the properties of familiar objects, like the hardness of a diamond. Can this expertise be ported to biological examples, where individuals are not so simple?

One connection we've been thinking about involves how information is amplified in producing a system's aggregate behavior. As an example of a system with low amplification, imagine our diamond under usual conditions: if we somehow jostled just a few atoms, this wouldn't have a huge effect. The jostling would quickly dissipate without changing the overall structure.

In many living systems, though, changes to individuals can have a large effect on the whole. A single bird might change the direction of a flock; a change to the activity of a few neurons might affect your subsequent behavior.

We can quantify this "amplification" with a fundamental measure from information theory—the Fisher information—which also turns out to have interesting connections to the idea of a phase transition in statistical physics. It's also a natural measure from the point of view of biology: Amplifying important information and damping extraneous noise can be key to the survival of a group.
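To make this concrete, here is a minimal toy sketch (not Daniels' actual model): N binary individuals that tend to imitate each other, plus a tiny bias h felt by each one. For this simple mean-field setup the Fisher information of the group's state with respect to h can be computed exactly, and it reduces to the variance of the collective "vote":

# Amplification in a toy collective: Fisher information of the group's
# consensus with respect to a small bias on each individual.
# A hypothetical mean-field sketch, not the model from the post above.

import numpy as np
from scipy.special import gammaln

def magnetization_distribution(N, J, h, beta=1.0):
    """Exact P(M) for a fully connected binary model, M = (#up - #down)."""
    k = np.arange(N + 1)                      # number of "up" individuals
    M = 2 * k - N
    log_binom = gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)
    energy = -(J / (2 * N)) * (M**2 - N) - h * M
    logp = log_binom - beta * energy
    logp -= logp.max()
    p = np.exp(logp)
    return M, p / p.sum()

def fisher_information(N, J, h=0.0, beta=1.0):
    """For this exponential family, I(h) = beta^2 * Var(M):
    how sharply the collective state responds to a tiny individual-level bias."""
    M, p = magnetization_distribution(N, J, h, beta)
    mean = np.sum(p * M)
    var = np.sum(p * (M - mean)**2)
    return beta**2 * var

N = 200
for J in [0.5, 0.9, 1.0, 1.1, 1.5]:
    print(f"J = {J:4.2f}   Fisher information = {fisher_information(N, J):10.1f}")

# The numbers grow dramatically as the imitation strength J approaches and
# passes the mean-field transition near J = 1: in that regime the tiniest
# bias on individuals is hugely amplified into the direction of the whole group.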

In this way, amplification measures a fundamental property of collectives, one that could be compared across different systems. Making the connection to statistical physics, we might ask questions like, "How far is this fish school from a phase transition?" Making the connection to biology, we might ask, "How is the correct information from individuals amplified to influence the behavior of the whole?"

Through this lens of amplification and others that describe distinct aspects of information processing, we aim eventually to map out the space of strategies used by living systems as they combine to form functional aggregates.
🌀 Sloppy Models
Bryan Daniels

The biochemistry happening inside each one of your cells is amazingly complex. As an important example, take gene regulation. In the process of transcription and translation, proteins are constructed from the information in your DNA's genetic code. This process is regulated so that the cell can make more or less of a certain protein when it needs to (responding to, for example, the presence of a hormone in the bloodstream). The problem becomes more complicated when you realize that some proteins themselves regulate the creation of other proteins; we could find that protein A upregulates the creation of protein B, which downregulates the creation of protein C, and so on. In fact, huge networks of interacting genes and proteins are routinely studied in systems biology.
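As a concrete illustration (a made-up toy cascade, not any real regulatory network), here is how such a chain is typically written down as a small ODE model with Hill-type regulation terms; every rate and threshold below is invented for illustration:

# A minimal, hypothetical sketch of the cascade described above:
# protein A upregulates B, and B downregulates C. Hill-type regulation
# terms are a standard modeling choice; every number here is made up.

import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, y, k_A, k_B, k_C, K_AB, K_BC, n, gamma):
    A, B, C = y
    dA = k_A - gamma * A                               # A produced at a constant rate
    dB = k_B * A**n / (K_AB**n + A**n) - gamma * B     # A activates B
    dC = k_C * K_BC**n / (K_BC**n + B**n) - gamma * C  # B represses C
    return [dA, dB, dC]

params = dict(k_A=1.0, k_B=2.0, k_C=2.0, K_AB=0.5, K_BC=0.5, n=2, gamma=0.5)
sol = solve_ivp(cascade, t_span=(0, 30), y0=[0.0, 0.0, 0.0],
                args=tuple(params.values()))

A, B, C = sol.y[:, -1]
print(f"steady state:  A = {A:.2f},  B = {B:.2f},  C = {C:.2f}")

# Even this tiny network already has 7 free parameters; realistic networks
# have dozens or hundreds, which is where the fitting problem below comes from.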

Understanding these large biochemical networks is a big challenge. For one, it's hard for experimentalists to measure what's going on inside a tiny living cell. Still, they can (painstakingly) discover which proteins are connected to which others (If I don't let the cell make protein A, do I still see protein B?), and a network 'topology' is gradually built up.

But what if we actually want to predict how much of a certain protein will be made under certain conditions (say, the addition of a drug)? Then we have to know not only the network topology (protein A upregulates the production of protein B), but specific numbers for each connection (protein A increases the rate of creation of protein B by 2.5x), and specific numbers for the rates involved (one copy of protein A is created every 5 seconds). If we're trying to model the network, we need to set numbers for lots of these parameters.

But these parameters are even harder to measure than the topology: Asking the question of how much protein is present is much more difficult than asking whether the protein is present. So we have to deal with limited information. We may only know the concentrations of two of the proteins in our network, and have only vague ideas about the concentrations of ten others. Then our group is tasked with finding values for 50 parameters that produce a reasonable fit to the available data, so that we can make a prediction about what will happen in other, unmeasured conditions.

As you might imagine, this problem is generally ill-constrained: there are lots of different ways you can set your parameters and still find model output that agrees with the available data. Some parameters could be intrinsically unimportant to what you measured. Some sets of parameters could compensate for each other; for example, raising one rate and lowering another might leave the output unchanged. (We say that there are lots of 'sloppy' directions in parameter space in which you can move without changing the model output.) And at first glance, it seems audacious to think that anything useful could come out of all of this. If we don't know our parameters very well, how can we hope to make valid predictions?
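Here is a minimal sketch of what "sloppy" looks like in practice, using a classic toy problem (a sum of two decaying exponentials, not any particular biochemical network): compute the Gauss-Newton Hessian of the squared-error cost in log-parameter space and look at its eigenvalues.

# A toy illustration of sloppiness (made-up model and numbers): how sharply
# does the fit quality change along different directions in parameter space?

import numpy as np

t = np.linspace(0, 3, 20)
true = dict(A1=1.0, k1=1.0, A2=1.0, k2=1.3)            # made-up "true" parameters

def model(logp, t):
    A1, k1, A2, k2 = np.exp(logp)                      # work in log-parameters
    return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)

logp0 = np.log(list(true.values()))
y0 = model(logp0, t)

# Jacobian of the model output w.r.t. log-parameters, by finite differences
eps = 1e-6
J = np.zeros((len(t), len(logp0)))
for i in range(len(logp0)):
    dp = np.zeros_like(logp0)
    dp[i] = eps
    J[:, i] = (model(logp0 + dp, t) - y0) / eps

H = J.T @ J                                            # Gauss-Newton Hessian of the cost
eigvals = np.linalg.eigvalsh(H)[::-1]                  # largest first
print("Hessian eigenvalues:", eigvals)
print("ratio largest/smallest: %.1e" % (eigvals[0] / eigvals[-1]))

# The eigenvalues typically span several orders of magnitude: a few "stiff"
# directions are pinned down by the data, while the "sloppy" ones are nearly free.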

But it turns out that the situation is not so bleak. If we keep track of all of the parameter sets that work to fit the experimental data, we can plug them in and see what output each of them produces for an unmeasured condition. And we find that (well, Ryan Gutenkunst found that) oftentimes the outputs of all these possible parameter sets are alike enough that we can still make a prediction with some confidence. In fact, even if we imagined doing experiments to reasonably measure each of the individual parameters, we couldn't do much better. This is saying that the experimental data still constrain the predictions we care about, even if they don't constrain the parameter values.
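Continuing the same toy example (again with made-up numbers, not real data): collect an ensemble of parameter sets that all fit a few noisy measurements, then see how much their predictions at an unmeasured time point actually differ.

# Toy ensemble sketch: many parameter sets fit the same noisy data;
# how different are their predictions at a time we never measured?

import numpy as np
rng = np.random.default_rng(0)

t_data = np.array([0.2, 0.5, 1.0, 2.0])                # "measured" times
t_new = 1.5                                            # unmeasured condition
sigma = 0.05                                           # measurement noise

def model(logp, t):
    A1, k1, A2, k2 = np.exp(logp)
    return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)

logp_true = np.log([1.0, 1.0, 1.0, 1.3])
data = model(logp_true, t_data) + sigma * rng.normal(size=t_data.size)

def cost(logp):
    return 0.5 * np.sum((model(logp, t_data) - data)**2) / sigma**2

# Simple Metropolis walk over log-parameters: keep every set that fits the data
logp, c, samples = logp_true.copy(), cost(logp_true), []
for step in range(20000):
    trial = logp + 0.1 * rng.normal(size=4)
    c_trial = cost(trial)
    if c_trial < c or rng.random() < np.exp(c - c_trial):
        logp, c = trial, c_trial
    samples.append(logp.copy())
samples = np.array(samples[5000:])                     # drop burn-in

preds = np.array([model(s, t_new) for s in samples])
print("spread of log-parameters (std):", samples.std(axis=0).round(2))
print(f"prediction at t = {t_new}: {preds.mean():.3f} +/- {preds.std():.3f}")

# The individual parameters can wander a lot (the sloppy directions), yet the
# prediction at the new time point can still come out fairly tight -- the point above.
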
There are lots of other interesting questions you can imagine asking about these 'sloppy models.' Can these models be systematically simplified to contain fewer parameters? Can other types of measurements (say, of fluctuations) better constrain parameter values? If organisms evolve by changing parameters, can 'sloppiness' help us understand evolution? You can learn more at my advisor's website: http://www.lassp.cornell.edu/sethna/Sloppy/index.html
🍔 http://nautil.us/issue/54/the-unspoken/physics-has-demoted-mass

#Reductionism

Modern physics teaches us something rather different, and deeply counter-intuitive. As we worked our way ever inward—matter into atoms, atoms into sub-atomic particles, sub-atomic particles into quantum fields and forces—we lost sight of matter completely. Matter lost its tangibility. It lost its primacy as mass became a secondary quality, the result of interactions between intangible quantum fields. What we recognize as mass is a behavior of these quantum fields; it is not a property that belongs or is necessarily intrinsic to them.
💎 In physics, #symmetry_breaking is a phenomenon in which (infinitesimally) small fluctuations acting on a system crossing a critical point decide the system's fate, by determining which branch of a bifurcation is taken. To an outside observer unaware of the fluctuations (or "noise"), the choice will appear arbitrary. This process is called symmetry "breaking", because such transitions usually bring the system from a symmetric but disorderly state into one or more definite states. Symmetry breaking is thought to play a major role in #pattern_formation.

🌀 One of the first cases of broken symmetry discussed in the physics literature is related to the form taken by a uniformly rotating body of incompressible fluid in gravitational and hydrostatic equilibrium. Jacobi and, soon after, Liouville, in 1834, discussed the fact that a tri-axial ellipsoid was an equilibrium solution for this problem when the kinetic energy compared to the gravitational energy of the rotating body exceeded a certain critical value. The axial symmetry presented by the Maclaurin spheroids is broken at this bifurcation point. Furthermore, above this bifurcation point, and for constant angular momentum, the solutions that minimize the kinetic energy are the non-axially symmetric Jacobi ellipsoids instead of the Maclaurin spheroids.

https://en.wikipedia.org/wiki/Symmetry_breaking
A ball is initially located at the top of the central hill (C). This position is an unstable equilibrium: a very small perturbation will cause it to fall to one of the two stable wells, left (L) or right (R).
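A minimal numerical sketch of that caption (not from the Wikipedia article): an overdamped "ball" starts exactly on the hilltop of the double well V(x) = x^4/4 - x^2/2, and an infinitesimal amount of noise decides which well it ends up in.

# Symmetry breaking in a double well: the branch taken is decided by
# fluctuations far too small for an outside observer to notice.

import numpy as np
rng = np.random.default_rng(1)

def settle(noise=1e-6, dt=0.01, steps=5000):
    x = 0.0                                   # start exactly on the hilltop (C)
    for _ in range(steps):
        force = -(x**3 - x)                   # force = -dV/dx
        x += force * dt + np.sqrt(dt) * noise * rng.normal()
    return "R" if x > 0 else "L"

outcomes = [settle() for _ in range(200)]
print("fell left (L):", outcomes.count("L"))
print("fell right (R):", outcomes.count("R"))

# The two outcomes occur about equally often: the dynamics are symmetric,
# but every individual run ends up in a definite, asymmetric state.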