🌀 Sloppy Models
Bryan Daniels
The biochemistry happening inside each one of your cells is amazingly complex. As an important example, take gene regulation. In the process of transcription and translation, proteins are constructed from the information in your DNA's genetic code. This process is regulated so that the cell can make more or less of a certain protein when it needs to (responding to, for example, the presence of a hormone in the bloodstream). The problem becomes more complicated when you realize that some proteins themselves regulate the creation of other proteins; we could find that protein A upregulates the creation of protein B, which downregulates the creation of protein C, and so on. In fact, huge networks of interacting genes and proteins are routinely studied in systems biology.
Understanding these large biochemical networks is a big challenge. For one, it's hard for experimentalists to measure what's going on inside a tiny living cell. Still, they can (painstakingly) discover which proteins are connected to which others (If I don't let the cell make protein A, do I still see protein B?), and a network 'topology' is gradually built up.
But what if we actually want to predict how much of a certain protein will be made under certain conditions (say, the addition of a drug)? Then we have to know not only the network topology (protein A upregulates the production of protein B), but specific numbers for each connection (protein A increases the rate of creation of protein B by 2.5x), and specific numbers for the rates involved (one copy of protein A is created every 5 seconds). If we're trying to model the network, we need to set numbers for lots of these parameters.
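To make that concrete, here is a minimal sketch in Python of the kind of model this describes, with entirely made-up rate constants: protein A is produced at a constant rate and upregulates the production of protein B.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for a two-protein toy network (illustration only).
k_A   = 0.2   # copies of A created per second
k_B   = 0.05  # baseline production rate of B
fold  = 2.5   # how strongly A boosts production of B
gamma = 0.1   # degradation rate of both proteins

def rhs(t, y):
    A, B = y
    dA = k_A - gamma * A
    dB = k_B * (1 + fold * A / (1 + A)) - gamma * B  # saturating upregulation
    return [dA, dB]

sol = solve_ivp(rhs, (0, 100), [0.0, 0.0], t_eval=[0, 25, 50, 100])
print(sol.y[1])  # concentration of B rising toward its regulated steady state
```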
But these parameters are even harder to measure than the topology: Asking the question of how much protein is present is much more difficult than asking whether the protein is present. So we have to deal with limited information. We may only know the concentrations of two of the proteins in our network, and have only vague ideas about the concentrations of ten others. Then our group is tasked with finding values for 50 parameters that produce a reasonable fit to the available data, so that we can make a prediction about what will happen in other, unmeasured conditions.
As you might imagine, this problem is generally ill-constrained: there are lots of different ways you can set your parameters and still find model output that agrees with the available data. Some parameters could be intrinsically unimportant to what you measured. Some sets of parameters could compensate for each other; for example, raising one rate and lowering another might leave the output unchanged. (We say that there are lots of 'sloppy' directions in parameter space in which you can move without changing the model output.) And at first glance, it seems audacious to think that anything useful could come out of all of this. If we don't know our parameters very well, how can we hope to make valid predictions?
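A toy sketch of one such sloppy direction, using an invented model in which the output depends on two rates only through their product: raising one rate and lowering the other by the same factor leaves the fit untouched, which shows up as a near-zero eigenvalue of the cost function's Hessian.

```python
import numpy as np

# Invented model: y(t) = exp(-k1*k2*t), so only the product k1*k2 matters.
t = np.linspace(0, 5, 50)
y_obs = np.exp(-1.0 * t)          # 'data' generated with k1*k2 = 1

def cost(log_k):
    k1, k2 = np.exp(log_k)
    return 0.5 * np.sum((np.exp(-k1 * k2 * t) - y_obs) ** 2)

# Finite-difference Hessian of the cost at the best fit (k1 = k2 = 1).
h, p0 = 1e-4, np.zeros(2)
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        for si in (1, -1):
            for sj in (1, -1):
                p = p0.copy()
                p[i] += si * h
                p[j] += sj * h
                H[i, j] += si * sj * cost(p) / (4 * h * h)

# One large ('stiff') eigenvalue and one near-zero ('sloppy') eigenvalue.
print(np.linalg.eigvalsh(H))
```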
But it turns out that the situation is not so bleak. If we keep track of all of the parameter sets that work to fit the experimental data, we can plug them in and see what output each of them produces for an unmeasured condition. And we find that (well, Ryan Gutenkunst found that) oftentimes the outputs of all these possible parameter sets are alike enough that we can still make a prediction with some confidence. In fact, even if we imagined doing experiments to reasonably measure each of the individual parameters, we couldn't do much better. This is saying that the experimental data still constrain the predictions we care about, even if they don't constrain the parameter values.
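Here is a minimal sketch of that ensemble idea, reusing the invented product-of-rates model from above: scatter parameter sets from a wide prior, keep the ones that fit the 'data', and compare how tightly the data constrain the individual parameters versus the prediction.

```python
import numpy as np

# Keep every parameter set that fits the measured data, then look at the
# spread of the predictions they make for an unmeasured condition.
rng = np.random.default_rng(0)
t_data, t_new = np.array([1.0, 2.0]), 4.0     # measured times vs. a new time
y_data = np.exp(-1.0 * t_data)                # 'experimental' data (k1*k2 = 1)

k1, k2 = np.exp(rng.uniform(-3, 3, size=(2, 200_000)))  # wide prior over rates
fit_err = np.max(np.abs(np.exp(-np.outer(k1 * k2, t_data)) - y_data), axis=1)
ok = fit_err < 0.01                           # 'fits the data' criterion

pred = np.exp(-k1[ok] * k2[ok] * t_new)       # prediction at the new condition
print(f"{ok.sum()} parameter sets accepted")
print(f"k1 spans {k1[ok].min():.3f} .. {k1[ok].max():.2f}")       # barely constrained
print(f"prediction spans {pred.min():.4f} .. {pred.max():.4f}")   # well constrained
```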
There are lots of other interesting questions you can imagine asking about these 'sloppy models.' Can these models be systematically simplified to contain fewer parameters? Can other types of measurements (say, of fluctuations) better constrain parameter values? If organisms evolve by changing parameters, can 'sloppiness' help us understand evolution? You can learn more at my advisor's website: http://www.lassp.cornell.edu/sethna/Sloppy/index.html
🍔 http://nautil.us/issue/54/the-unspoken/physics-has-demoted-mass
#Reductionism
Modern physics teaches us something rather different, and deeply counter-intuitive. As we worked our way ever inward—matter into atoms, atoms into sub-atomic particles, sub-atomic particles into quantum fields and forces—we lost sight of matter completely. Matter lost its tangibility. It lost its primacy as mass became a secondary quality, the result of interactions between intangible quantum fields. What we recognize as mass is a behavior of these quantum fields; it is not a property that belongs or is necessarily intrinsic to them.
Nautilus
Physics Has Demoted Mass
You’re sitting here, reading this article. Maybe it’s a hard copy, or an e-book on a tablet computer or e-reader. It doesn’t…
💎 In physics, #symmetry_breaking is a phenomenon in which (infinitesimally) small fluctuations acting on a system crossing a critical point decide the system's fate, by determining which branch of a bifurcation is taken. To an outside observer unaware of the fluctuations (or "noise"), the choice will appear arbitrary. This process is called symmetry "breaking", because such transitions usually bring the system from a symmetric but disorderly state into one or more definite states. Symmetry breaking is thought to play a major role in #pattern_formation.
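A minimal numerical sketch of the idea, using the pitchfork normal form dx/dt = r·x − x³ rather than any particular physical system: past the critical point (r > 0), an arbitrarily small random kick decides whether the system settles at +√r or −√r.

```python
import numpy as np

# For r > 0 the symmetric state x = 0 is unstable; the system falls into
# one of two mirror-image states x = ±sqrt(r), and an infinitesimal random
# fluctuation decides which branch is taken.
rng = np.random.default_rng()
r, dt, steps = 1.0, 0.01, 5000

for trial in range(5):
    x = 1e-12 * rng.standard_normal()   # the 'noise': a tiny random start
    for _ in range(steps):
        x += (r * x - x**3) * dt        # Euler integration of dx/dt
    print(f"trial {trial}: settled at x = {x:+.3f}")  # ±1.0 = ±sqrt(r)
```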
🌀 One of the first cases of broken symmetry discussed in the physics literature is related to the form taken by a uniformly rotating body of incompressible fluid in gravitational and hydrostatic equilibrium. Jacobi and, soon after, Liouville, in 1834, discussed the fact that a tri-axial ellipsoid was an equilibrium solution for this problem when the kinetic energy compared to the gravitational energy of the rotating body exceeded a certain critical value. The axial symmetry presented by the Maclaurin spheroids is broken at this bifurcation point. Furthermore, above this bifurcation point, and for constant angular momentum, the solutions that minimize the kinetic energy are the non-axially-symmetric Jacobi ellipsoids instead of the Maclaurin spheroids.
https://en.wikipedia.org/wiki/Symmetry_breaking
Hey everyone, come to the aid of the Persian Wikipedia!
If you're an expert, pitch in on developing Wikipedia!
https://fa.wikipedia.org/wiki/%D9%88%DB%8C%DA%A9%DB%8C%E2%80%8C%D9%BE%D8%AF%DB%8C%D8%A7:%D9%88%DB%8C%DA%A9%DB%8C%E2%80%8C%D9%BE%D8%B1%D9%88%DA%98%D9%87_%D9%81%DB%8C%D8%B2%DB%8C%DA%A9
Wikipedia
Wikipedia: WikiProject Physics
Welcome to WikiProject Physics! This is where we try to organize our work on creating articles related to the science of physics, as well as completing and editing articles on this science, maintaining the physics portal, and building the relevant…
#weekly_seminars of the Complex Systems and Network Science Group, Shahid Beheshti University
🔹 Monday, 22 Aban, 4:00 - Classroom 1, Physics Department, Shahid Beheshti University.
@carimi
Forwarded from Beheshti Physics Café (sbuPhysics)
This week at Physics Café:
The 21st century has opened with an explosion. For ordinary people, the explosion is about the technological revolutions that began in the 19th century, whose growth has had a sharply felt impact on the laws of economics. For scientists, however, one face of this explosion is the 'complexity revolution', a subject now researched and studied in every scientific field, from biology to medicine. This raises a question: what is the role of physics, the oldest and simplest science? Must theoretical physics change too?
Twentieth-century theoretical physics was born out of the relativity and quantum mechanics revolutions and was entirely about simplicity and continuity. Its main tool was calculus (differential and integral), and its final expression was field theory.
Twenty-first-century theoretical physics comes out of the chaos revolution and is about complexity. Its main tool is the computer, and its final expression is not yet known. Thermodynamics, as a vital part of physics, will play a fundamental role in this shift.
At this week's Physics Café, while reviewing the history of physics and its transformations since Newton, we will take a detour through concepts such as chaos, complexity, and entropy, and spell out what we mean by complexity!
Abbas Karimi, from the Complex Systems Group
sitpor.org/abbas
Stay with Physics Café... ☕️ 😊
@sbu_physicscafe
Forwarded from Beheshti Physics Café (sbuPhysics)
#Physics_Café no. 8
🗓 Tuesday, 23 Aban 1396
🕰 11:45
📍 Physics Department, Shahid Beheshti University, ground floor.
A talk on "Chaos, Complexity, and Disorder"
Abbas Karimi
http://sitpor.org/abbas
Join us at Physics Café! ☕️😊
@farzin23i
@sbu_physicscafe
🎞 Physical Applications of Stochastic Processes
IIT Madras course, Prof. V. Balakrishnan
http://freevideolectures.com/Course/3702/Physical-Applications-of-Stochastic-Processes
Free Video Lectures
Physical Applications of Stochastic Processes video lectures, V. Balakrishnan of IIT Madras
Physical Applications of Stochastic Processes Video Lectures, IIT Madras Online Course, free tutorials for free download
🐘 Metabolism and power laws
https://www.johndcook.com/blog/2009/04/16/metabolism-and-power-laws/
Bigger animals have more cells than smaller animals. More cells means more cellular metabolism and so more heat produced. How does the amount of heat an animal produces vary with its size? We clearly expect it to go up with size, but does it increase in proportion to volume? Surface area? Something in between?
A first guess would be that metabolism (equivalently, heat produced) goes up in proportion to volume. If cells are all roughly the same size, then number of cells increases proportionately with volume. But heat is dissipated through the surface. Surface area increases in proportion to the square of length but volume increases in proportion to the cube of length. That means the ratio of surface area to volume decreases as overall size increases. The surface area to volume ratio for an elephant is much smaller than it is for a mouse.
If an elephant’s metabolism per unit volume were the same as that of a mouse, the elephant’s skin would burn up.
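In symbols, for a body of linear size $L$:

$$A \propto L^2, \qquad V \propto L^3 \quad\Longrightarrow\quad \frac{A}{V} \propto \frac{1}{L} \propto V^{-1/3},$$

so the heat-shedding surface per unit of heat-producing volume falls off as the animal gets larger.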
So metabolism cannot be proportional to volume. What about surface area? Here we get into variety and controversy. Many people assume metabolism is proportional to surface area based on the argument above; the idea was first proposed by Max Rubner in 1883, and some emphasize data that supports it.
In the 1930’s, Max Kleiber proposed that metabolism increases according to body mass raised to the power 3/4. (I’ve been a little sloppy here using body mass and volume interchangeably. Body mass is more accurate, though to first approximation animals have uniform density.) If metabolism were proportional to volume, the exponent would be 1. If it were proportional to surface area, the exponent would be 2/3. But Kleiber’s law says it’s somewhere in between, namely 3/4.
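A quick numerical comparison of the three candidate exponents, sketched with rough illustrative masses for a mouse and an elephant:

```python
# Predicted metabolic-rate ratio between a ~5000 kg elephant and a ~25 g
# mouse under three candidate scaling exponents (masses are illustrative).
m_mouse, m_elephant = 0.025, 5000.0  # kg

for name, b in [("volume (b = 1)", 1.0),
                ("surface area (b = 2/3)", 2 / 3),
                ("Kleiber (b = 3/4)", 0.75)]:
    ratio = (m_elephant / m_mouse) ** b
    print(f"{name:24s} -> elephant/mouse metabolism ~ {ratio:,.0f}x")
```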
So why the exponent 3/4? There is a theoretical explanation called the #metabolic_scaling_theory proposed by #Geoffrey_West, Brian Enquist, and James Brown.
Metabolic scaling theory says that circulatory systems and other networks are fractal-like because this is the most efficient way to serve an animal’s physiological needs.
To quote Enquist:
Although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional. … Fractal geometry has literally given life an added dimension.
The fractal theory would explain the power law exponent 3/4 simply: it's the ratio of the volume dimension to the fractal dimension. However, as I suggested earlier, this theory is controversial. Some biologists dispute Kleiber's law. Others accept Kleiber's law as an empirical observation but dispute the theoretical explanation of West, Enquist, and Brown.
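Spelled out, under this (contested) picture the exponent is just

$$b = \frac{\text{volume dimension}}{\text{effective (fractal) dimension}} = \frac{3}{4}.$$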
To read more about metabolism and power laws, see chapter 17 of Complexity: A Guided Tour.
Johndcook
Metabolism and power law distributions
Bigger animals have more cells than smaller animals. More cells means more cellular metabolism and so more heat produced. How does the amount of heat an animal
⛑ Dear friends, there are many ways to help the people hit by the earthquake. One of the easiest is to donate to the account of a group that knows this work well, for example the Imam Ali Society:
A thousand or two tomans isn't much money, but drop by drop a river is made...
16-digit card number:
6104337905324602
Bank Mellat, in the name of the Imam Ali Student Relief Society
Account number 5325043877
Online payment gateway:
https://donate.sosapoverty.org/emdadresani
🆔 @imamalisociety
👌🏻 https://www.quantamagazine.org/the-beautiful-intelligence-of-bacteria-and-other-microbes-20171113/
Quanta Magazine
Seeing the Beautiful Intelligence of Microbes
Bacterial biofilms and slime molds are more than crude patches of goo. Detailed time-lapse microscopy reveals how they sense and explore their surroundings, communicate with their neighbors and…
😱 http://www.bbc.com/news/av/technology-41935721/why-these-faces-do-not-belong-to-real-people
Jaakko Lehtinen's BBC interview is about generative adversarial networks (GANs), a technique that can generate photographs that look authentic to human observers. For more information, see:
http://research.nvidia.com/publication/2017-10_Progressive-Growing-of
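The core adversarial idea can be sketched in a few lines of PyTorch; this is a toy 1-D version for illustration, not the progressive-growing method of the NVIDIA paper above.

```python
import torch
import torch.nn as nn

# Generator maps noise to samples; discriminator scores real vs. fake.
# They are trained against each other until fakes are hard to tell apart
# from the real data -- here a toy 1-D Gaussian instead of face images.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # 'real' data: N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: push real -> 1, fake -> 0.
    loss_D = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"fake mean {samples.mean():.2f}, std {samples.std():.2f}")  # ~2.0, ~0.5
```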
Bbc
Why these faces do not belong to 'real' people
Nvidia has been developing algorithms to generate photorealistic faces.
🐘 Apparently elephants get cancer less often than expected:
https://www.wired.com/story/a-zombie-gene-protects-elephants-from-cancer/amp
WIRED
A Zombie Gene Protects Elephants From Cancer
Elephants did not evolve to become huge animals until after they turned a bit of genetic junk into a unique defense against inevitable tumors.
💊 Mathematical modeling of HIV drugs:
https://sinews.siam.org/Details-Page/mathematically-modeling-hiv-drug-pharmacodynamics
Forwarded from Radio Physics 📣
Simulation of 41 triple pendulums released with nearly identical initial conditions, yet evolving completely differently.
#chaos #sensitivity_to_initial_conditions
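For a feel of the same effect in code, here is a sketch using a double pendulum (simpler to write down than a triple pendulum, but just as chaotic): two runs starting 1e-8 radians apart disagree completely within seconds.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Double pendulum with equal unit masses and unit lengths.
g = 9.81

def rhs(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 3.0 - np.cos(2 * d)   # common denominator for m1 = m2, l1 = l2 = 1
    a1 = (-3 * g * np.sin(th1) - g * np.sin(th1 - 2 * th2)
          - 2 * np.sin(d) * (w2**2 + w1**2 * np.cos(d))) / den
    a2 = (2 * np.sin(d) * (2 * w1**2 + 2 * g * np.cos(th1)
          + w2**2 * np.cos(d))) / den
    return [w1, a1, w2, a2]

y0 = np.array([2.0, 0.0, 2.0, 0.0])       # released from a high angle
eps = np.array([1e-8, 0.0, 0.0, 0.0])     # tiny perturbation
t_eval = np.linspace(0, 20, 201)

sol_a = solve_ivp(rhs, (0, 20), y0, t_eval=t_eval, rtol=1e-10, atol=1e-10)
sol_b = solve_ivp(rhs, (0, 20), y0 + eps, t_eval=t_eval, rtol=1e-10, atol=1e-10)

# The angular separation grows from 1e-8 to order 1 (full disagreement).
sep = np.abs(sol_a.y[0] - sol_b.y[0])
for t in (0, 5, 10, 15, 20):
    print(f"t = {t:4.1f} s, |delta theta1| = {sep[t * 10]:.2e}")
```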
Forwarded from Radio Physics 📣
Chaos, Complexity, and Entropy.pdf
1.7 MB