June15_1200_CEST_Ekaterina_Svikhnushina_Expectation_vs_Reality_in.pdf (3 MB)
Willingness to Delegate to Digital Assistants -- an interesting concept for "smart" products
https://dl.acm.org/doi/10.1145/3544549.3585763
Forwarded from Цифровой геноцид
Цифровой геноцид review: HCI perspectives
Next Steps in Human-Computer Integration
"It is important to fully understand and jointly shape technologies in which the user and the technology together form a tightly coupled system within a broader physical, digital, and social context" -- these are the opening lines of the Next Steps in Human-Computer Integration manifesto, written by a huge collective of authors that includes quite a few names I have come across in papers and reviews.
We are entering the era of "many computers for many users," the modern age of ubiquitous computing, which shifts HCI's focus from "How do we interact with computers?" to "How are humans and computers integrated?"
The field of human-computer integration covers four thematic areas: (1) human-compatible technologies; (2) the impact of integration on identity and behavior; (3) human integration and society; and (4) designing integrated interaction.
Massive computation across a massive number of devices creates fundamentally new social relations and new sets of behavioral norms tied to those devices. The second challenge is deeper integration of devices into the body and the psyche. Recent years have brought advances such as epidermal electronics and interactive textiles, which use flexible, stretchable electronics to enable a much tighter fusion with the human body.
For UX this is a challenge in its own right: the authors stress that much of the design work here will be about implicit interaction.
The founder of Wolfram announced the creation of a language for describing concepts/ideas/hypotheses:
In a sense, what’s happening is that Wolfram Language shifts from concentrating on mechanics to concentrating on conceptualization.
https://www.ted.com/talks/stephen_wolfram_how_to_think_computationally_about_ai_the_universe_and_everything
https://writings.stephenwolfram.com/2023/10/how-to-think-computationally-about-ai-the-universe-and-everything/
The basic concept underlying this "language of ideas" is the ruliad.
https://writings.stephenwolfram.com/2021/11/the-concept-of-the-ruliad/
TED
How to think computationally about AI, the universe and everything
Drawing on his decades-long mission to formulate the world in computational terms, Stephen Wolfram delivers a profound vision of computation and its role in the future of AI. Amid a debut of mesmerizing visuals depicting the underlying structure of the universe…
Forwarded from Data Secrets
And here is our first article on Habr!
We dedicated it to the great RecTools library from our colleagues at MTS. Inside:
▶️ why we love this library so much;
▶️ a primer on the main recsys models (ItemKNN, ALS, SVD, LightFM, DSSM);
▶️ how to prepare data and run models with the library (see the sketch after this post);
▶️ how to compute metrics;
▶️ plenty of useful extra materials.
We put a lot of work into it, so we are looking forward to your reactions!
😻 #NN #train
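A rough sketch of the workflow the article walks through, assuming the RecTools interface as I remember it (Columns, Dataset.construct, wrapper models, metric classes); exact names and signatures may differ between releases, so treat the Habr article and the RecTools docs as the authoritative reference.
```python
# Sketch of a typical RecTools workflow: prepare interactions, fit a model, get
# recommendations, compute a metric. The API names below (Columns, Dataset.construct,
# ImplicitALSWrapperModel, MAP) are from memory and may differ between releases.
import pandas as pd
from implicit.als import AlternatingLeastSquares
from rectools import Columns
from rectools.dataset import Dataset
from rectools.metrics import MAP
from rectools.models import ImplicitALSWrapperModel

# Interactions: one row per (user, item) event with a weight and a timestamp.
interactions = pd.DataFrame(
    {
        Columns.User: [1, 1, 2, 2, 3],
        Columns.Item: [10, 20, 10, 30, 20],
        Columns.Weight: [1.0, 1.0, 1.0, 1.0, 1.0],
        Columns.Datetime: pd.to_datetime(
            ["2023-01-01", "2023-01-02", "2023-01-02", "2023-01-03", "2023-01-03"]
        ),
    }
)
dataset = Dataset.construct(interactions)

# Wrap an implicit ALS model so it speaks the common RecTools interface.
model = ImplicitALSWrapperModel(AlternatingLeastSquares(factors=8, random_state=42))
model.fit(dataset)

# Top-5 recommendations per user, excluding items each user has already seen.
reco = model.recommend(
    users=interactions[Columns.User].unique(),
    dataset=dataset,
    k=5,
    filter_viewed=True,
)

# Offline quality: MAP@5. Here the same table stands in for a hold-out purely for
# illustration; in practice use a proper time-based train/test split.
print(MAP(k=5).calc(reco=reco, interactions=interactions))
```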
hmmm
Fundamental research on the "nature of data": The topology of data
It will become openly available to the general public on January 1, 2024
https://authors.library.caltech.edu/records/qa61x-ah042
Compared to machine learning, causal inference allows us to build a robust framework that controls for confounders in order to estimate the true incremental impact to members
https://netflixtechblog.com/a-survey-of-causal-inference-applications-at-netflix-b62d25175e6f
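To make the "controls for confounders" point concrete, here is a minimal self-contained sketch on synthetic data (not Netflix's code): a confounder drives both treatment and outcome, so the naive difference in means overstates the effect, while a regression that includes the confounder recovers something close to the true incremental impact.
```python
# Minimal illustration of confounder control on synthetic data (not Netflix's code).
# The true incremental impact of treatment is 1.0; a confounder (think baseline
# engagement) makes the naive comparison biased upward.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)                                # baseline engagement
treat = (rng.normal(size=n) + confounder > 0).astype(float)    # engaged members get treated more often
outcome = 1.0 * treat + 2.0 * confounder + rng.normal(size=n)  # true effect = 1.0

# Naive difference in means: biased, because treated members were already more engaged.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Regression adjustment: include the confounder and read off the treatment coefficient.
X = np.column_stack([np.ones(n), treat, confounder])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive estimate:    {naive:.2f}")    # well above 1.0
print(f"adjusted estimate: {beta[1]:.2f}")  # close to the true effect of 1.0
```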
Illustrating power using the example of flipping a coin 100 times and calculating the fraction of heads. The black and red dashed lines show, respectively, the distribution of outcomes assuming the probability of heads is 50% (null hypothesis) and 64% (specific value of the alternative hypothesis). Here, the power against this alternative is 80% (red shading).
https://netflixtechblog.com/interpreting-a-b-test-results-false-negatives-and-power-6943995cf3a8
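The numbers in this caption are easy to reproduce; a small sketch using the normal approximation: with n = 100 flips, a two-sided 5% test of p = 0.5, and a true heads probability of 0.64, the power comes out at roughly 80%.
```python
# Reproduce the coin-flip power example: n = 100 flips, H0: p = 0.5, alternative
# p = 0.64, two-sided test at alpha = 0.05, using the normal approximation.
import numpy as np
from scipy.stats import norm

n = 100
p0, p1 = 0.50, 0.64
alpha = 0.05

# Rejection region for the observed fraction of heads under the null.
se0 = np.sqrt(p0 * (1 - p0) / n)
z = norm.ppf(1 - alpha / 2)
lower, upper = p0 - z * se0, p0 + z * se0   # roughly 0.402 and 0.598

# Power: probability the observed fraction falls in the rejection region when p = 0.64.
se1 = np.sqrt(p1 * (1 - p1) / n)
power = norm.cdf((lower - p1) / se1) + (1 - norm.cdf((upper - p1) / se1))
print(f"power ≈ {power:.2f}")  # ~0.81, i.e. about the 80% quoted in the caption
```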
MAP-WEB-May2021.jpg (12.5 MB)
A map of complexity science: this is where I have been drawing my latest inspiration for work.
https://www.art-sciencefactory.com/complexity-map_feb09.html
THE EVOLUTION OF TRUST: an interactive way to learn the basics of game theory, behavioral strategies, and growth points for society.
https://ncase.me/trust/
ncase.me
The Evolution of Trust
an interactive guide to the game theory of why & how we trust each other
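The game is built around the repeated prisoner's dilemma. A minimal sketch of that core mechanic, assuming the payoff scheme the interactive uses (+2/+2 for mutual cooperation, +3/-1 when one side cheats, 0/0 for mutual cheating; if the site's constants differ, only the payoff table changes):
```python
# Tiny iterated prisoner's dilemma tournament in the spirit of "The Evolution of Trust".
# Payoffs assumed to match the interactive: both cooperate +2/+2, cheating against a
# cooperator gives the cheater +3 and the cooperator -1, mutual cheating gives 0/0.
import itertools

PAYOFFS = {  # (my move, their move) -> my payoff; True = cooperate, False = cheat
    (True, True): 2, (True, False): -1, (False, True): 3, (False, False): 0,
}

def always_cooperate(opponent_history):
    return True

def always_cheat(opponent_history):
    return False

def copycat(opponent_history):  # tit-for-tat: cooperate first, then mirror the opponent
    return opponent_history[-1] if opponent_history else True

STRATEGIES = {"cooperator": always_cooperate, "cheater": always_cheat, "copycat": copycat}

def match(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round-robin: every pair plays a 10-round match; totals show how each strategy fares
# in this particular (very small) population.
totals = {name: 0 for name in STRATEGIES}
for (name_a, a), (name_b, b) in itertools.combinations(STRATEGIES.items(), 2):
    score_a, score_b = match(a, b)
    totals[name_a] += score_a
    totals[name_b] += score_b
print(totals)
```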
Forwarded from Цифровой геноцид
Auto-generation of interfaces
Google's new neural network, Gemini, can generate interfaces inside the chat depending on the user's task. This looks very promising, much as described in the papers on LLM-based interface auto-generation.
Still, I wonder how many pre-built canvases it actually has?
https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model
Learned about it from @cryptoEssay
YouTube
Personalized AI for you | Gemini
Google’s newest and most capable AI model – Gemini.
Join Google Research Engineering Director Palash Nandy as he showcases Gemini’s advanced reasoning and coding abilities, all while exploring ideas for a birthday party.
The model understands his intent…
Does big data serve policy? Not without context. An experiment with in silico social science
Authors: Graziul, Chris; Belikov, Alexander; Ishanu Chattopadyay; Ziwen Chen; Hongbo Fang; Anuraag Girdhar; Xiaoshuang Jua; P. M. Krafft; Max Kleiman-Weiner; Candice Lewis; Chen Liang; John Muchovej; Alejandro Vietos; Meg Young and James Evans
Source: Computational and Mathematical Organization Theory; Vol.: 29; Issue: 1; Pp.: 188-219;
DOI: 10.1007/s10588-022-09362-3; March 2023
SFI Taxonomy: Models, Tools, and Scientific Visualization (Human Social Dynamics)
Abstract:
The DARPA Ground Truth project sought to evaluate social science by constructing four varied simulated social worlds with hidden causality and unleashed teams of scientists to collect data, discover their causal structure, predict their future, and prescribe policies to create desired outcomes. This large-scale, long-term experiment of in silico social science, about which the ground truth of simulated worlds was known, but not by us, reveals the limits of contemporary quantitative social science methodology. First, problem solving without a shared ontology (in which many world characteristics remain existentially uncertain) poses strong limits to quantitative analysis even when scientists share a common task, and suggests how they could become insurmountable without it. Second, data labels biased the associations our analysts made and assumptions they employed, often away from the simulated causal processes those labels signified, suggesting limits on the degree to which analytic concepts developed in one domain may port to others. Third, the current standard for computational social science publication is a demonstration of novel causes, but this limits the relevance of models to solve problems and propose policies that benefit from the simpler and less surprising answers associated with most important causes, or the combination of all causes. Fourth, most singular quantitative methods applied on their own did not help to solve most analytical challenges, and we explored a range of established and emerging methods, including probabilistic programming, deep neural networks, systems of predictive probabilistic finite state machines, and more to achieve plausible solutions. However, despite these limitations common to the current practice of computational social science, we find on the positive side that even imperfect knowledge can be sufficient to identify robust prediction if a more pluralistic approach is applied. Applying competing approaches by distinct subteams, including at one point the vast TopCoder.com global community of problem solvers, enabled discovery of many aspects of the relevant structure underlying worlds that singular methods could not. Together, these lessons suggest how different a policy-oriented computational social science would be than the computational social science we have inherited. Computational social science that serves policy would need to endure more failure, sustain more diversity, maintain more uncertainty, and allow for more complexity than current institutions support.
SpringerLink
Does big data serve policy? Not without context. An experiment with in silico social science
Computational and Mathematical Organization Theory - The DARPA Ground Truth project sought to evaluate social science by constructing four varied simulated social worlds with hidden causality and...
Economics in nouns and verbs
Author: Arthur, W. Brian
Source: Journal of Economic Behavior & Organization; Vol.: 205; Pp.: 638-647;
DOI: 10.1016/j.jebo.2022.10.036; January 2023
SFI Taxonomy: Models, Tools, and Scientific Visualization (Economics)
Abstract:
Standard economic theory uses mathematics as its main means of understanding, and this brings clarity of reasoning and logical power. But there is a drawback: algebraic mathematics restricts economic modeling to what can be expressed only in quantitative nouns, and this forces theory to leave out matters to do with process, formation, adjustment, and creation: matters to do with nonequilibrium. For these we need a different means of understanding, one that allows verbs as well as nouns. Algorithmic expression is such a means. It allows verbs (processes) as well as nouns (objects and quantities). It allows fuller description in economics, and can include heterogeneity of agents, actions as well as objects, and realistic models of behavior in ill-defined situations. The world that algorithms reveal is action-based as well as object-based, organic, possibly ever-changing, and not fully knowable. But it is strangely and wonderfully alive.
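Arthur's own El Farol bar problem is a classic illustration of what "economics in verbs" looks like: there is no closed-form equilibrium to write down, only a procedure the agents follow. Below is my own toy version (the predictor rules and parameters are invented for illustration, not taken from the paper).
```python
# Toy El Farol bar problem: 100 agents decide each week whether to go to a bar that is
# enjoyable only if at most 60 people show up. Each agent holds a few simple attendance
# predictors and acts on whichever has been most accurate so far. A deliberately
# simplified sketch of "economics in verbs": behavior is a procedure, not an equation.
import random

random.seed(0)
N_AGENTS, CAPACITY, WEEKS = 100, 60, 52

def make_predictor():
    """Build one simple attendance predictor at random (source of agent heterogeneity)."""
    kind = random.choice(["last", "average", "mirror", "constant"])
    if kind == "last":
        return lambda hist: hist[-1]
    if kind == "average":
        window = random.randint(2, 5)
        return lambda hist: sum(hist[-window:]) / window
    if kind == "mirror":
        return lambda hist: 100 - hist[-1]
    level = random.randint(20, 80)
    return lambda hist: level

# Each agent gets its own small bag of predictors plus a running error for each.
agents = [[make_predictor() for _ in range(3)] for _ in range(N_AGENTS)]
errors = [[0.0] * 3 for _ in range(N_AGENTS)]
history = [60, 60, 60, 60, 60]  # seed attendance history

for week in range(WEEKS):
    attendance = 0
    for a in range(N_AGENTS):
        best = min(range(3), key=lambda i: errors[a][i])    # most accurate predictor so far
        attendance += agents[a][best](history) <= CAPACITY  # go if you expect a free seat
    for a in range(N_AGENTS):                               # score predictors after the fact
        for i in range(3):
            errors[a][i] += abs(agents[a][i](history) - attendance)
    history.append(attendance)

# Weekly attendance: it swings and self-corrects around the capacity without any
# closed-form equation describing the path.
print(history[5:])
```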