Collective Intelligence – Telegram
Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making.
User modeling is the subdivision of human–computer interaction which describes the process of building up and modifying a conceptual understanding of the user. The main goal of user modeling is customization and adaptation of systems to the user's specific needs. The system needs to "say the 'right' thing at the 'right' time in the 'right' way". To do so it needs an internal representation of the user. Another common purpose is modeling specific kinds of users, including modeling of their skills and declarative knowledge, for use in automatic software-tests. User-models can thus serve as a cheaper alternative to user testing.
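As a toy illustration of that internal representation, here is a minimal sketch of a user model that adapts what the system says; all names here are hypothetical, not from any real user-modeling framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an internal representation of the user that the
# system consults to say the "right" thing in the "right" way.
@dataclass
class UserModel:
    expertise: str = "novice"                      # modeled declarative knowledge
    preferences: dict = field(default_factory=dict)

    def phrase(self, topic: str) -> str:
        # Adapt the presentation to the modeled skill level.
        if self.expertise == "novice":
            return f"A step-by-step introduction to {topic}"
        return f"Advanced reference notes on {topic}"

print(UserModel(expertise="expert").phrase("user modeling"))
```

A real system would update `expertise` and `preferences` from observed interactions rather than set them by hand.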

Conference: https://www.um.org/
When many smaller things join together to form a giant that functions as more than the sum of its parts, we call that emergence. We can visualize it as a tower.

...

A human isn’t simply a perfect survival creature—it’s also just the right element of a perfect survival tribe. Examining the traits of a perfect survival tribe can help us see the specs for human nature, not only illuminating who we are, but why we’re that way.

https://waitbutwhy.com/2019/08/giants.html
Forwarded from TechSparks
Microsoft has stubbornly bad luck with bots. They had the infamous Tay, which genuinely learned from its Twitter interlocutors, and was promptly taught bad things and worse.
And now their researchers have written a commenting bot (and even published the code on GitHub). The bot reads news articles and writes the typical comment of an average social-media denizen who has an opinion on everything. The stated goal is benign: by posting generated comments, nudge real people into joining the discussion (and to do that, some of them will read the news item itself). In other words, the bots are supposed to raise interest in the news and reader engagement.
Unsurprisingly, though, the fake-news fighters saw less benign scenarios for using the algorithm ;) And grew alarmed. Also unsurprisingly, both projects were originally built for China: a de-anonymized and controlled internet lives by its own rules ;)
https://www.vice.com/en_ca/article/d3a4mk/microsoft-used-machine-learning-to-make-a-bot-that-comments-on-news-articles-for-some-reason
About UMUAI - The Journal of Personalization Research
User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of novel original research results about interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

http://www.umuai.org/
Last week, I saw a lot of social media discussion about a paper using deep learning to generate artificial comments on news articles. I’m not sure why anyone thinks this is a good idea. At best, it adds noise to the media environment. At worst, it’s a tool for con artists and propagandists.

A few years ago, an acquaintance pulled me aside at a conference to tell me he was building a similar fake comment generator. His project worried me, and I privately discussed it with a few AI colleagues, but none of us knew what to do about it. It was only this year, with the staged release of OpenAI’s GPT-2 language model, that the question went mainstream.

Do we avoid publicizing AI threats to try to slow their spread, as I did after hearing about my acquaintance’s project? Keeping secret the details of biological and nuclear weapon designs has been a major force slowing their proliferation. Alternatively, should we publicize them to encourage defenses, as I’m doing in this letter?

Efforts like the OECD’s Principles on AI, which state that “AI should benefit people and the planet,” give useful high-level guidance. But we need to develop guidelines for ethical behavior in practical situations, along with concrete mechanisms to encourage and empower such behavior.

We should look to other disciplines for inspiration, though these ideas will have to be adapted to AI. For example, in computer security, researchers are expected to report vulnerabilities to software vendors confidentially and give them time to issue a patch. But AI actors are global, so it’s less clear how to report specific AI threats.

Or consider healthcare. Doctors have a duty to care for their patients, and also enjoy legal protections so long as they are working to discharge this duty. In AI, what is the duty of an engineer, and how can we make sure engineers are empowered to act in society’s best interest?

To this day, I don’t know if I did the right thing years ago, when I did not publicize the threat of AI fake commentary. If ethical use of AI is important to you, I hope you will discuss worrisome uses of AI with trusted colleagues so we can help each other find the best path forward. Together, we can think through concrete mechanisms to increase the odds that this powerful technology will reach its highest potential.


https://info.deeplearning.ai/the-batch-tesla-acquires-deepscale-france-backs-face-recognition-robots-learn-in-virtual-reality-acquirers-snag-ai-startups
User Modeling in Human-Computer Interaction


A fundamental objective of human-computer interaction research is to make systems more usable, more useful, and to provide users with experiences fitting their specific background knowledge and objectives. The challenge in an information-rich world is not only to make information available to people at any time, at any place, and in any form, but specifically to say the “right” thing at the “right” time in the “right” way. Designers of collaborative human-computer systems face the formidable task of writing software for millions of users (at design time) while making it work as if it were designed for each individual user (only known at use time).
User modeling research has attempted to address these issues. In this article, I will first review the objectives, progress, and unfulfilled hopes that have occurred over the last ten years, and illustrate them with some interesting computational environments and their underlying conceptual frameworks. A special emphasis is given to high-functionality applications and the impact of user modeling to make them more usable, useful, and learnable. Finally, an assessment of the current state of the art followed by some future challenges is given.


http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.6025&rep=rep1&type=pdf
In short, the goal of this course is to introduce students to ways of thinking about how Artificial Intelligence has impacted and will impact humans, and how we can design interactive intelligent systems that are usable and beneficial to humans, and respect human values. As students in this course, you will build a number of different interactive technologies powered by AI, gain practical experience with what impacts their usability for humans, understand the various places that humans exist in the data pipeline that drives machine learning, and learn to think both optimistically and critically about what AI systems can do and how they can and should be integrated into society.

TODO: download slides http://www.humanaiclass.org/schedule/
*Personalized Re-ranking for Recommendation*

Re-ranking in a recommender system depends on the user's data, preferences, and intent.
For a price-sensitive user, for example, interactions involving price features should carry more weight in the re-ranking model.
Typically, ranking in a recommender system considers only the features of an individual user-item pair. Pairwise and listwise learning to rank try to address this by taking a pair of items or a list of items as input, but they focus only on optimizing the loss function to make better use of labels such as click data. They do not explicitly model the mutual influences between items in the feature space.
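To make the contrast concrete, here is a small sketch of a listwise objective in the spirit of ListNet (softmax cross-entropy over the list). This illustrates the general idea only, not the loss used in the paper:

```python
import numpy as np

# Illustrative listwise loss: compare the model's softmax distribution
# over list positions with a target distribution built from click labels.
def listwise_loss(scores: np.ndarray, clicks: np.ndarray) -> float:
    p = np.exp(scores - scores.max())
    p /= p.sum()                       # model's distribution over items
    q = clicks / clicks.sum()          # target distribution from clicks
    return float(-(q * np.log(p + 1e-12)).sum())

good = listwise_loss(np.array([2.0, 1.0, 0.1]), np.array([1.0, 0.0, 0.0]))
bad = listwise_loss(np.array([0.1, 1.0, 2.0]), np.array([1.0, 0.0, 0.0]))
# Scoring the clicked item highest yields the lower loss.
```

Note that this objective still treats item interactions only implicitly, through the shared normalization, which is exactly the limitation the paper targets.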

Researchers from Alibaba and Kwai proposed using the Transformer, well known from machine translation. Their reasoning: the Transformer is built on self-attention, in which any two items can interact with each other directly, without degradation over encoding distance.
At the same time, the Transformer is more efficient than an RNN because it can be parallelized, and it models the interaction between any two items in O(1) steps.
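A minimal, weight-free sketch of self-attention shows why any two items interact directly in one step; real Transformer layers add learned query/key/value projections and multiple heads:

```python
import numpy as np

# Single-step self-attention: every pair of items in the candidate list
# interacts directly via the score matrix, so the path length between
# any two items is O(1) regardless of their distance in the list.
def self_attention(items: np.ndarray) -> np.ndarray:
    d = items.shape[-1]
    scores = items @ items.T / np.sqrt(d)          # all pairwise interactions
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # attention weights per item
    return w @ items                               # re-encoded item features

X = np.random.default_rng(0).normal(size=(5, 8))   # 5 candidate items, dim 8
out = self_attention(X)                            # same shape, contextualized
```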

The researchers introduced a personalized matrix PV to learn a user-specific encoding function that can model personalized mutual influences between item pairs; the loss function is then formulated on top of this personalized encoding.

The authors use a pre-trained neural network to produce personalized user embeddings, which are then fed into the PRM model as additional features. The pre-trained network is learned from click-through logs. The user side information includes gender, age, and purchase history.

Evaluation Metrics
1. Precision@k
2. MAP@k
For the online A/B test, they used PV (page views), IPV (item product clicks), CTR (click-through rate), and GMV (gross merchandise volume) as metrics.
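For reference, the offline metrics can be sketched as follows. This computes per-list precision and average precision at k; MAP@k is the mean of the latter over users or requests, and the identifiers are illustrative:

```python
# Standard definitions of the offline ranking metrics.
def precision_at_k(ranked, relevant, k):
    # Fraction of the top-k recommended items that are relevant (clicked).
    return sum(1 for item in ranked[:k] if item in relevant) / k

def average_precision_at_k(ranked, relevant, k):
    # Average of the precision values at each rank where a hit occurs.
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    return score / min(len(relevant), k) if relevant else 0.0

p = precision_at_k(["a", "b", "c", "d"], {"a", "c"}, k=3)         # 2/3
ap = average_precision_at_k(["a", "b", "c", "d"], {"a", "c"}, k=3)  # (1 + 2/3) / 2
```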


https://arxiv.org/abs/1904.06813