Complex Systems Studies – Telegram
What's up in Complexity Science?!
Check out here:

@ComplexSys

#complexity #complex_systems #networks #network_science

📨 Contact us: @carimi
🎞 Missing Semester: computing ecosystem literacy
https://www.youtube.com/watch?v=Z56Jmr9Z34Q&list=PLyzOVJj3bHQuloKGG59rS43e29ro7I57J

As computer scientists, we know that computers are great at aiding in repetitive tasks. However, far too often, we forget…
This course is one of those things that:

1) nobody ever teaches you anywhere;
2) every professional knows;
3) everyone learned on their own, so nobody realized it isn't obvious;
4) you could probably get through life without it, but knowing these little tricks easily multiplies what you can see in, and do with, a computer!

https://missing.csail.mit.edu/2020/course-shell/

Videos:
https://www.aparat.com/playlist/406966
💰 Great #PhD opportunity in sunny #brisbane - @ProfMJSimpson with the School of #maths at @QUTSciEng is looking for 2 PhD students for his project in mathematical biology: Mathematical and statistical modelling of cell migration in 4D tumour spheroids. Info: https://t.co/sbvV6JFQCI
Waiting-Time Paradox in 1922

Naoki Masuda, University at Buffalo
Takayuki Hiraoka, Aalto University

Abstract
We present an English translation and discussion of an essay that a Japanese physicist, Torahiko Terada, wrote in 1922. In the essay, he described the waiting-time paradox, also called the bus paradox, which is a known mathematical phenomenon in queuing theory, stochastic processes, and modern temporal network analysis. He also observed and analyzed data on Tokyo City trams to verify the relevance of the waiting-time paradox to busy passengers in Tokyo at the time. This essay seems to be one of the earliest documentations of the waiting-time paradox in a sufficiently scientific manner.
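Terada's observation is easy to reproduce numerically. Below is a minimal sketch (the exponential-headway assumption and all names are mine, not the essay's): when tram headways are i.i.d. exponential with mean 10 minutes, a passenger arriving at a uniformly random time waits 10 minutes on average, twice the naive guess of half a headway, because random arrival times preferentially land in long gaps.

```python
import random

# Waiting-time (bus) paradox: headways are i.i.d. exponential with mean
# 10 minutes.  A passenger arriving at a uniformly random time lands in
# a gap with probability proportional to the gap's length, and then
# waits on average half of that gap.  The expected wait works out to
# E[X^2] / (2 E[X]) = 10 minutes, not the naive MEAN_HEADWAY / 2 = 5.

random.seed(0)
MEAN_HEADWAY = 10.0
headways = [random.expovariate(1.0 / MEAN_HEADWAY) for _ in range(200_000)]

total_time = sum(headways)
# Length-biased average: each gap h is hit with weight h / total_time,
# and the residual wait within it averages h / 2.
mean_wait = sum(h * (h / 2) for h in headways) / total_time

print(f"naive expected wait : {MEAN_HEADWAY / 2:.1f} min")
print(f"actual expected wait: {mean_wait:.1f} min")
```

With deterministic headways the two numbers coincide; any variability in the headways pushes the actual wait above the naive one, which is the paradox.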
💡 Interested in a rather peculiar type of phase transition in computational problems, an infinite-order one? Want to learn about a bunch of conjectures awaiting rigorous proof? Here is a paper just for you https://t.co/SLPkEWxiCw

Recovery thresholds in the sparse planted matching problem

Guilhem Semerjian, Gabriele Sicuro, Lenka Zdeborová

We consider the statistical inference problem of recovering an unknown perfect matching, hidden in a weighted random graph, by exploiting the information arising from the use of two different distributions for the weights on the edges inside and outside the planted matching. A recent work has demonstrated the existence of a phase transition, in the large size limit, between a full and a partial recovery phase for a specific form of the weights distribution on fully connected graphs. We generalize and extend this result in two directions: we obtain a criterion for the location of the phase transition for generic weights distributions and possibly sparse graphs, exploiting a technical connection with branching random walk processes, as well as a quantitatively more precise description of the critical regime around the phase transition.
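As a toy illustration of the inference setup (my own sketch; the distributions and sizes here are arbitrary, not the paper's model or analysis): plant a perfect matching in a small complete bipartite graph, give planted edges systematically smaller weights than the rest, and estimate the hidden matching as the minimum-weight perfect matching.

```python
import itertools
import random

# Plant a perfect matching in the complete bipartite graph K_{n,n}:
# planted edges get small exponential weights (mean 0.05), all other
# edges get weights uniform on [0, 1].  The estimator is the
# minimum-weight perfect matching, found here by brute force.

random.seed(1)
n = 7
planted = list(range(n))  # left vertex i is matched to right vertex i

W = [[random.random() for _ in range(n)] for _ in range(n)]
for i in range(n):
    W[i][planted[i]] = random.expovariate(20.0)  # mean 0.05

def matching_weight(perm):
    return sum(W[i][perm[i]] for i in range(n))

# Brute force is fine for n = 7 (7! = 5040 candidate matchings).
best = min(itertools.permutations(range(n)), key=matching_weight)

overlap = sum(best[i] == planted[i] for i in range(n)) / n
print(f"fraction of planted edges recovered: {overlap:.2f}")
```

Shrinking the gap between the two weight distributions degrades the overlap, a crude finite-size analogue of the full/partial recovery transition the paper locates.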
Artificial Intelligence
Instructor: Prof. Patrick Henry Winston
https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/index.htm

Course Description
This course introduces students to the basic knowledge representation, problem solving, and learning methods of artificial intelligence. Upon completion of 6.034, students should be able to develop intelligent systems by assembling solutions to concrete computational problems; understand the role of knowledge representation, problem solving, and learning in intelligent-system engineering; and appreciate the role of problem solving, vision, and language in understanding human intelligence from a computational perspective.
Network Science

Winter - Spring 2020.
Instructors: Prof. Leonid Zhukov, Ilya Makarov
http://www.leonidzhukov.net/hse/2020/networks/
💰 Postdoctoral position in mathematical modelling of #COVID_19 epidemics:

We are seeking a #postdoc to work in an international collaboration between Bern University Hospital and ICTP. Two-year position funded by an SNF grant:
https://t.co/7xK8AQszys
💡 Understanding deep learning is also a job for physicists

Lenka Zdeborová, Nature Physics (2020)

Automated learning from data by means of deep neural networks is finding use in an ever-increasing number of applications, yet key theoretical questions about how it works remain unanswered. A physics-based approach may help to bridge this gap.

https://www.nature.com/articles/s41567-020-0929-2
How to cut #SARSCoV2 spread?
Like this:
https://t.co/ijWQzq75eD

#MasksforAll
How to translate a verbal theory into a formal model.

https://t.co/Iqr1lMCAaD
👇👇
“What are you thinking?” “#Entropy.” “Entropy?” “Yeah, entropy. Boris explained it. It’s why you can’t get the toothpaste back in the tube.”

Whatever Works, by Woody Allen
My talk starts in one hour at the Physics and Social Systems conference.
If you would like to attend, go to the address below and use the username and password given here:
https://www.skyroom.online/ch/iut_farhangi/physics-ss

User: physics_ss
Password: pss1399
Forwarded from Derang (Keivan Aghababaei Samani)
🔷 Big Data, Again

▪️ In his short story "Funes the Memorious"(*), the Argentine writer Jorge Luis Borges¹ tells of a young man who is knocked unconscious in a fall from a horse and, on coming to, finds that he has acquired an extraordinary memory: everything he sees and hears stays with him in full detail. Yet at the same time he is unable to make sense of this information. He has no power of abstraction or generalization. He finds it hard to grasp, for example, that the term "dog" covers many instances of dogs of different sizes and shapes. It even bothers him that the dog of this moment should bear the same name "dog" as the dog of a minute ago. He cannot process or classify information. As he himself puts it, "My memory is like a garbage heap." For this very reason he is essentially incapable of thinking, because "to think is to forget differences, to generalize, to abstract." In the end Funes, probably crushed under the weight of all this accumulated information, suffocates and dies at the age of twenty-one.

▪️ In the fourth chapter of "Mechanics of the Mind"(**), titled "A Child of the Moment", Colin Blakemore² treats memory as the key to the higher functions of the mind, and along the way he too mentions the story of Funes the Memorious. Later in the chapter Blakemore turns to humanity's dangerous role in manipulating and plundering its environment, and then brings up the "collective mind" of human beings: the idea of regarding all humans as a single great brain that can record and process information. He does not, however, see the main danger in interference with the natural course of evolution; what is more dangerous, he argues, is that with the invention of the printing press, magnetic recording devices, and computer memories for storing information, this collective mind has lost its vital ability to forget. Blakemore's view at the close of the chapter is worth noting:

"The existing technology on which everyday life in the advanced countries rests has already become so complex that no single mind can grasp it all. Man may not wipe himself off the face of the earth in an explosion of his own making. Nor may he freeze his race to death by squandering the earth's energy resources. But he may well drown himself in such a flood of information that society can no longer comprehend its own cultural heritage."

1. Jorge Luis Borges (1899-1986)
2. Colin Blakemore (1944- )

* The story appears in Persian in "The Library of Babel and 23 Other Stories", translated by Kaveh Seyed-Hosseini, Niloofar Publications.
** The book has been published in Persian by Farhang Moaser, in an excellent and fluent translation by Mohammad-Reza Bateni.

@k1samani_channel
4⃣ Carlo Rovelli
Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, Reality Is Not What It Seems

☀️ Relative Information

Everybody knows what “information” is. It is the stuff that overabounds online; which you ask the airport kiosk when you don’t know how to get downtown; or which is stored in your USB sticks. It carries meaning. Meaning is interpreted in our head, of course. So, is there anything out there which is just physical, independent from our head, which is information?

Yes. It is called “relative information.” In nature, variables are not independent; for instance, in any magnet, the two ends have opposite polarities. Knowing one amounts to knowing the other. So we can say that each end “has information” about the other. There is nothing mental in this; it is just a way of saying that there is a necessary relation between the polarities of the two ends. We say that there is "relative information" between two systems anytime the state of one is constrained by the state of the other. In this precise sense, physical systems may be said to have information about one another, with no need for a mind to play any role.

Such "relative information" is ubiquitous in nature: The color of the light carries information about the object the light has bounced from; a virus has information about the cell it may attach to; and neurons have information about one another. Since the world is a knit tangle of interacting events, it teems with relative information.

When this information is exploited for survival, extensively elaborated by our brain, and maybe coded in a language understood by a community, it becomes mental, and it acquires the semantic weight that we commonly attribute to the notion of information.

But the basic ingredient is down there in the physical world: physical correlation between distinct variables. The physical world is not a set of self-absorbed entities that do their selfish things. It is a tightly knitted net of relative information, where everybody’s state reflects somebody else’s state. We understand physical, chemical, biological, social, political, astrophysical, and cosmological systems in terms of these nets of relations, not in terms of individual behavior. Physical relative information is a powerful basic concept for describing the world. Before “energy,” “matter,” or even “entity.”

This is why saying that the physical world is just a collection of elementary particles does not capture the full story. The constraints between them create the rich web of reciprocal information.

Twenty-four centuries ago Democritus suggested that everything could be just made of atoms. But he also suggested that the atoms are “like the letters of the alphabet”: There are only twenty or so letters but, as he puts it, “It is possible for them to combine in diverse modes, in order to produce comedies or tragedies, ridiculous stories or epic poems.” So is nature: Few atoms combine to generate the phantasmagoric variety of reality. But the analogy is deeper: The atoms are like an alphabet because the way in which they are arranged is always correlated with the way other atoms are arranged. Sets of atoms carry information.

The light that arrives at our eyes carries information about the objects which it has played across; the color of the sea has information on the color of the sky above it; a cell has information about the virus attacking it; a new living being has plenty of information because it is correlated with its parents, and with its species; and you, dear reader, reading these lines, receive information about what I am thinking while writing them, that is to say, about what is happening in my mind at the moment in which I write this text. What occurs in the atoms of your brain is not any more independent from what is happening in the atoms of mine: we communicate.

The world isn’t just a mass of colliding atoms; it is also a web of correlations between sets of atoms, a network of reciprocal physical information between physical systems.
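Rovelli's "relative information" (one system's state constraining another's) has a standard quantitative counterpart in mutual information. A minimal sketch (the function and the toy distributions are my illustration, not from the essay): the two anti-correlated ends of a magnet share one full bit, while two unrelated coins share none.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (in bits) of an empirical joint distribution."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two ends of a magnet: polarities perfectly anti-correlated, so
# knowing one end tells you everything about the other.
magnet = [("+", "-")] * 50 + [("-", "+")] * 50
# Two unrelated coins: all four combinations equally likely.
coins = [("+", "+"), ("+", "-"), ("-", "+"), ("-", "-")] * 25

print(mutual_information(magnet))  # one full bit of relative information
print(mutual_information(coins))   # zero: the states do not constrain each other
```

Anything strictly between these two extremes, partial correlation, gives a value strictly between 0 and 1 bit.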
Ever sit and think to yourself: hello self. i know what we should be doing right now but what if *instead of that* we obsessively tweak the params of an edge bundling function for networkx/matplotlib??

wellp https://t.co/vCHrE5BLya

in progress, uses https://t.co/umEJeJlilG lots