BlackBox (Security) Archiv
4.15K subscribers
183 photos
393 videos
167 files
2.67K links
👉🏼 Latest viruses and malware threats
👉🏼 Latest patches, tips and tricks
👉🏼 Threats to security/privacy/democracy on the Internet

👉🏼 Find us on Matrix: https://matrix.to/#/!wNywwUkYshTVAFCAzw:matrix.org
CMOinfographic.pdf
25.8 MB
A Look Back At 25 Years Of Digital Advertising

Advertising has always found a way to adapt to the medium. But the introduction of the “World Wide Web” in 1991 truly changed everything—providing advertisers with an unprecedented opportunity to flex their creative chops. Within a few years, new and entirely different types of ads began to, quite literally, pop up.

PDF:
https://www.cmo.com/content/dam/CMO_Other/articles/CMOinfographic.pdf

Article:
https://www.cmo.com/features/articles/2019/3/19/25-years-of-digital.html#gs.cig5lu

German:
https://news.1rj.ru/str/cRyPtHoN_INFOSEC_DE/3032

#advertising #ads #history #pdf
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
🎧 The CyberWire Daily Podcast - May 20, 2019

Huawei is on the US Entity List, and US exporters have been quick to notice and cut the Shenzhen company off.
Security concerns are now expected to shift to the undersea cable market.
Hacktivism seems to have gone into eclipse.
The EU enacts a sanctions regime to deter election hacking.
Facebook shutters inauthentic accounts targeting African politics.
Salesforce is restoring service after an unhappy upgrade.
OGuser forum hacked. And don’t worry about a hacker draft.
Jonathan Katz from UMD on encryption for better security at border crossings.
Tamika Smith reports on the Baltimore City government ransomware situation.

📻 The #CyberWire Daily #podcast
https://www.thecyberwire.com/podcasts/cw-podcasts-daily-2019-05-20.html

📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
AdAway: Advertising and tracking blocker - Take back control! (Part 6)

1. Data collection frenzy

In the last part of the article series I introduced you to the F-Droid Store, where you can get free and open source apps that neither track you nor display advertisements. A general recommendation of the "Take back control!" article series is therefore:

💡Get apps only from the F-Droid Store.

However, this advice cannot always be followed to the letter. Many users still depend on apps from the Play Store or cannot find a viable alternative in the F-Droid Store. Unfortunately, apps from the Google Play Store are not exactly known for data minimisation - quite the opposite. Most of them contain third-party software components that display advertisements or track the user's every step. As a normal user, however, you have no insight into an app and cannot "see" from the outside whether it poses a risk to security and privacy.

Since the apps from the Play Store are often accompanied by a "loss of control", I will introduce you to the AdAway app from the F-Droid Store in this article. With this app, the loss of control can be minimized by putting a stop to the delivery of (harmful) advertising and the outflow of personal data to dubious third-party providers.

2. AdAway

AdAway is an open source advertising and tracking blocker for Android, which was originally developed by Dominik Schürmann - currently AdAway is developed by Bruce Bujon. Based on filter lists, connections to advertising and tracking networks are redirected to the local device IP address. This redirection prevents the reloading of advertisements or the transmission of (sensitive) data to third parties.

By the way, AdAway cannot be found in the Play Store because Google no longer allows ad blockers - they simply violate Google's business model. Or to put it another way: Google will not tolerate an app that effectively protects your privacy and security by preventing the reloading of (harmful) advertisements and the outflow of personal data.

💡There are several advantages to using AdAway:

Reduction in data consumption:
Opening, maintaining and closing (app) connections to servers on the Internet inevitably means that data is sent and received. While this is unlikely to be a problem for most people on their home Wi-Fi thanks to a flat rate, mobile data usage often paints a different picture. AdAway blocks the reloading of advertisements, tracking code and other resources. This saves valuable bandwidth and does not unnecessarily burden your mobile data plan.

Faster device:
Displaying advertisements, executing reloaded tracking code and basically every (unnecessary) connection setup costs CPU power. If these resources are not reloaded because AdAway blocks them, not only will your battery last longer, but your device will also respond faster to your input.

Protection of privacy:
A major disadvantage of the predominantly proprietary apps in the Google Play Store is the lack of transparency about how they process data. With these proprietary apps we simply do not know, and often cannot check, what they actually do behind our backs. If AdAway blocks the majority of (app) connections to trackers and advertising networks, this has a positive impact on our privacy.

AdAway not only blocks advertisements and trackers in your browser, but also in all apps you have installed on your device.

2.1 Concept | Technical background

Using the example of in-app advertising, I would like to briefly explain how AdAway works technically. Suppose an app developer has integrated an advertising module into his app. Each time the app is started, or while it is running, the app or the integrated module contacts the following address:
werbung.server1.de

However, this domain name must first be translated into an IP address so that the advertisement can then be reloaded from there. This service is provided by the Domain Name System (DNS) - one of the most important services on the Internet, which translates domain names into IP addresses. Everyone knows the principle: you enter a domain name in the browser and a DNS server translates it into the corresponding IP address, because names are easier to remember than IP addresses. Your router therefore usually uses the DNS servers of your provider, or you have entered your own manually; these then translate the address "werbung.server1.de" into an IP address.
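
For illustration, this normal resolution step can be reproduced with a few lines of Python (a minimal sketch using only the standard library; example.org is just an arbitrary, existing domain):

import socket

# Ask the system resolver (and thus the configured DNS servers)
# to translate a domain name into its IPv4 address.
ip = socket.gethostbyname("example.org")
print(ip)  # prints the resolved IPv4 address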

AdAway now makes use of this DNS principle. In its memory, AdAway maintains a list of domain names that can either deliver advertisements, track users, or otherwise have a negative impact on security and privacy. Once you have installed AdAway, the DNS query is first compared with the internally stored list. If the address is...

werbung.server1.de

...in the list or if there is a hit, the IP address is not resolved as usual, but your device or app receives the answer: "Not reachable" - the translation into the correct IP address is suppressed by AdAway. The result: The advertisement cannot be reloaded from the actual source or IP address. Instead of the advertisement, the user sees a placeholder or simply nothing. A simple principle that blocks the advertisement before it is delivered - even before it is translated into the IP address.
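
To make the principle more tangible, here is a rough Python sketch of such a lookup (this is not AdAway's actual code; the blocklist entries are made up, and werbung.server1.de is only the placeholder domain from above):

import socket

# Simplified model of a hosts-based blocker: names on the list are answered
# locally and never resolved; everything else goes through normal DNS.
BLOCKLIST = {"werbung.server1.de", "tracker.example.net"}  # illustrative entries
BLOCK_ANSWER = "0.0.0.0"  # conventional "unreachable" answer used in hosts files

def resolve(hostname):
    if hostname in BLOCKLIST:
        # Hit in the list: the real IP address is never looked up,
        # so the advertisement cannot be fetched.
        return BLOCK_ANSWER
    # No hit: fall back to normal DNS resolution.
    return socket.gethostbyname(hostname)

print(resolve("werbung.server1.de"))  # -> 0.0.0.0, the ad is blocked
print(resolve("example.org"))         # -> the real IP address via DNS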

2.2 Installation

The installation of AdAway is done conveniently via the F-Droid Store - where the app does not violate questionable business models, as is the case with Google. With a tap on Install the installation of AdAway is done within seconds.

2.3 Adjustment via Magisk

Due to the read-only system partition of the Aquaris X Pro, the Hosts file cannot simply be modified by AdAway. However, this is necessary so that all the domains that should later be unreachable can be stored there. Magisk offers a solution for this: open the Magisk Manager, go to the settings and enable the systemless hosts option there.

3. Configuration

The configuration of AdAway is done within a few minutes. Many advertising and tracking domains are already blocked out of the box. By adding more filter lists we can improve the result even further.

3.1 Initial Start

Immediately after the first start, AdAway asks whether you want to send telemetry data (via Sentry) to the developer. This covers the following information:

Crash reports and application failures,
Application usage.
Neither kind of report contains any personal data.

Then you can start AdAway directly with a tap on the button that activates ad blocking. AdAway then downloads the current (block) lists and updates the Hosts file.

3.2 Settings

Via the menu item Settings you can configure various options of AdAway. Among other things, you can specify that the (block) lists should be updated daily. The download and installation can be done automatically in the background.

By default, AdAway redirects all blocked hostnames to the IP address 127.0.0.1. For speed reasons, you should change this, as redirecting to 127.0.0.1 (localhost) actually causes network traffic. Tap on the Redirection IP entry and configure the address there:

0.0.0.0
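
For illustration, the resulting Hosts entries for the placeholder domain from Section 2.1 would differ only in the redirect address (hypothetical entries, shown as Python strings with the article's reasoning in the comments):

default_entry = "127.0.0.1 werbung.server1.de"  # default: the device still opens a connection to localhost
faster_entry  = "0.0.0.0 werbung.server1.de"    # recommended: avoids even that local traffic

print(default_entry)
print(faster_entry)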

3.3 Blacklists | Add filter lists

Via the menu item Hosts-sources you can add further filter lists. Three (block) lists are active in AdAway by default. You can use the plus sign to add additional lists that are not included in AdAway. My suggestion would be to add the following to the existing lists:
https://github.com/StevenBlack/hosts

💡Advice
In the AdAway Wiki you will find further suggestions and filter lists.
https://github.com/AdAway/AdAway/wiki/HostsSources

Of course you can also activate other filter lists or (block) lists. Possible overlaps are automatically removed by AdAway - duplicate entries would make processing the filter lists unnecessarily inefficient. After adding the filter lists, AdAway first downloads them from their sources and merges them into one big list - so you will have to wait a moment.
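
Conceptually, this download-and-merge step works roughly as follows (a simplified Python sketch, not AdAway's implementation; the raw-file URL is my assumption of where the StevenBlack list linked above is published):

import urllib.request

# Download the hosts sources, extract the hostnames, drop duplicates
# and write one merged hosts file that redirects everything to 0.0.0.0.
SOURCES = [
    "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts",
]
REDIRECT_IP = "0.0.0.0"

hostnames = set()
for url in SOURCES:
    with urllib.request.urlopen(url) as response:
        for raw in response.read().decode("utf-8", errors="replace").splitlines():
            line = raw.split("#", 1)[0].strip()  # strip comments and whitespace
            parts = line.split()
            if len(parts) >= 2:                  # entries look like "<ip> <hostname> ..."
                hostnames.update(parts[1:])
hostnames.discard("localhost")                   # keep localhost resolvable

with open("hosts_merged", "w") as out:
    for name in sorted(hostnames):               # the set has already removed duplicates
        out.write(f"{REDIRECT_IP} {name}\n")

print(len(hostnames), "unique hostnames written")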

Activating the filter lists can lead to so-called "overblocking". This means that domains that are necessary for the functionality of an app are filtered incorrectly. You will then have to decide on a case-by-case basis whether you want to release the domain in AdAway or put it on the whitelist. Further information on this topic can be found in Section 4.2.

4. AdAway in action

The configuration of AdAway is now finished, or you can customize it further to your needs. Unfortunately AdAway does not offer a way to display the number of blocked domains - it should be more than 100,000.

4.1 Blocked Domains

As already mentioned, the phenomenon of overblocking can occur, which under certain circumstances means that an app or a certain function no longer works correctly. Personally, I have not observed this so far - but I am not the right yardstick here either, as I deliberately do without the services of Google, Facebook and Co.

So if an app does not work as usual, you should first activate DNS logging via the menu item Record DNS requests and then open the app that is misbehaving. Afterwards, open the Record DNS requests menu item again and tap on the Display results button. All logged DNS queries are then listed. As an example, I allow the domain "media.kuketz.de" by tapping on the tick in the middle. AdAway remembers this selection and puts the domain on the whitelist.

4.2 Whitelisting a domain | app

Via the menu item Your Lists you can view the domains you have added yourself. AdAway distinguishes between three different variants:

Negative list:
You can add your own domains here that AdAway should block. In a way, this is a supplement to the existing (block) lists that you can influence yourself.

Positive list:
As already mentioned, overblocking may occur under certain circumstances. If this happens, you can make a domain reachable again via the positive list. The positive list always takes precedence over the filter lists - so the domain is reachable again even if it is listed in one of them (see the sketch after this list).

Redirections:
If necessary, you can set up IP redirects for certain domains. The domain "facebook.com" could, for example, be pointed to the IP address 193.99.144.80 (heise.de). If you then call up "facebook.com" in your browser, you end up at heise.de.
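
A small Python sketch of how these user lists could interact with the downloaded filter lists during a lookup (the order of the checks and the negative-list entry are my own illustrative assumptions; the other entries mirror the examples above):

import socket

# Assumed order of checks: positive list first, then redirections,
# then the user's negative list and the downloaded filter lists.
POSITIVE_LIST = {"media.kuketz.de"}                 # whitelisted despite the filter lists
NEGATIVE_LIST = {"ads.example.net"}                 # user-added blocks (made up)
REDIRECTIONS  = {"facebook.com": "193.99.144.80"}   # example redirect from the text
FILTER_LISTS  = {"media.kuketz.de", "werbung.server1.de"}

def answer(hostname):
    if hostname in POSITIVE_LIST:
        return socket.gethostbyname(hostname)       # resolve normally, list hits are ignored
    if hostname in REDIRECTIONS:
        return REDIRECTIONS[hostname]               # answer with the configured redirect IP
    if hostname in NEGATIVE_LIST or hostname in FILTER_LISTS:
        return "0.0.0.0"                            # blocked
    return socket.gethostbyname(hostname)           # not listed: normal DNS resolution

for name in ("media.kuketz.de", "facebook.com", "werbung.server1.de"):
    print(name, "->", answer(name))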

5. Final note

The integration of advertising or the transmission of data to tracking companies is not necessary for an app's core functionality. These third-party software components do not end up in an app by magic; they are deliberately and actively integrated by the developers. Unfortunately, the developers themselves often do not know which data these building blocks or modules (known as SDKs in technical jargon) actually collect. Providers and developers thus frivolously sacrifice their users on the altar of a boundless data collection frenzy, regardless of the associated risks to their users' security and privacy.

With AdAway, you can minimize this unwanted data transfer. In practice, the principle of DNS blocking works extremely well - the vast majority of unwanted tracking and advertising domains are filtered, which of course has a positive effect on both security and privacy.
Nevertheless, you should not lull yourself into a false sense of security and believe that this solves all tracker and privacy problems. A tracking or advertising domain may still be so new or obscure that it has not yet found its way onto one of the (block) lists. In that case, there is a high probability that unwanted data will flow to questionable third parties. The best long-term protection against unwanted data leakage is to do without most of the apps offered in the Google Play Store. Fortunately, the F-Droid Store is an alternative app store aimed at critical users who value free and open source applications. In the recommendation corner you will find privacy-friendly apps for a wide variety of purposes.

6. Conclusion

The Google Play Store offers a whole arsenal of "pseudo-security apps" such as virus scanners, which lull the user into a false sense of security. AdAway, on the other hand, can effectively protect security and privacy. The paradox: AdAway is excluded from the Google Play Store precisely because blocking trackers and advertising runs counter to Google's business model. An app that blocks the delivery of Google advertising and tracking is understandably a thorn in Google's side.

In the next article of the "Take back control!" series I will show you how to lock "Big Brother apps" from the Google Play Store into a kind of closed environment or prison - this is possible with Shelter. That way you can prevent these apps from accessing sensitive data (contacts etc.).

Source (🇩🇪) and more info:
https://www.kuketz-blog.de/adaway-werbe-und-trackingblocker-take-back-control-teil6/

#android #NoGoogle #guide #part1 #part2 #part4 #part5 #part6 #AdAway #kuketz
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
All just fake ethics

After numerous scandals, Facebook, Google and Co. have recently been playing the role of moral model students. Why we shouldn't fall for this scam.

Lean back, breathe calmly - in, out. There is no reason to get upset, you are in good hands. Even if the recent past was difficult and you feel betrayed: we have listened, we promise to do better.

Everything will be different, no: Everything will be fine.

The promise

This is the sound of the hypnotic singsong currently blowing out of Silicon Valley.

For example, from Google headquarters, where ethicists are to discuss algorithms in the future, or from the mouth of Facebook boss Mark Zuckerberg. He suddenly wants the privacy of his users to take precedence over everything else and has recently expressed the wish for a "more active role for governments" in tech regulation. This follows a series of scandals that have severely damaged his company's reputation. The big IT companies no longer want to be the bad boys. Instead, they want to look more mature and virtuous. https://netzpolitik.org/2018/die-ultimative-liste-so-viele-datenskandale-gab-es-2018-bei-facebook/

Throughout the Valley, companies are purging themselves after the crisis tactics of recent years, mantra-like professing their own responsibility - code name: Corporate Digital Responsibility. Frightened by the risks and side effects of their own smart developments, the corporations seem to be reflecting on the good and proclaiming one ethics initiative after another, especially in the field of artificial intelligence (AI).

Mark Zuckerberg recently even announced ideas for regulating the Internet in a charm offensive - after having lobbied for years against anything that looked like regulation (e.g. the GDPR). The Facebook CEO not only signalled advance obedience to the authorities, he also cast himself as a moral advocate who wants to "preserve the good" on the Internet - all in order to present his own solutions from the very top, with proposals for a "more active role for governments". https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?noredirect=on&utm_term=.e2c285fa7e1e

Critics see Zuckerberg's proclamation as a shrewd power calculation meant to cement his own monopoly position. They sense that someone here wants to shed his dirty coat in order to present himself as decent and freshly scrubbed again. The unease is well founded. And it does not only concern Zuckerberg's newfound desire for clear rules.

The measures with which Google, Facebook and Co. want to get their problems with credibility, data protection or artificial intelligence under control seem half-baked. They are fragmentary - and in most cases merely a public-relations facade behind which a void yawns.

The problems

Google: Distorted Algorithms
Take Google. In 2018, internal protests erupted there against Project Maven, a contract from the US Department of Defense for AI-supported image analysis to improve drone strikes. CEO Sundar Pichai quickly announced new ethical guidelines: Google wanted to ensure that its AI systems operate in a socially responsible manner, meet scientific rigour, protect privacy, do not discriminate unfairly, and are generally safe and accountable. https://www.blog.google/technology/ai/ai-principles/

But whether this catalogue of principles, formulated as seven commandments, really promises a new responsibility in AI is highly questionable. As long as Google itself determines what counts as "appropriate transparency" and a "relevant explanation", the effect of the new guidelines and the interpretation of the terms remain a company secret - a pretty facade that at best simulates clear rules.

Google's commandments were not only a response to militarily explosive projects, but also a reaction to the case of Jacky Alciné, which became known in 2015: Alciné and his girlfriend were labelled "gorillas" by Google Photos. This racist bias pointed on the one hand to a patchy data set and on the other to a diversity problem among Google's programmers. Both are fundamental problems for many digital companies, as a study by the MIT Media Lab found. https://twitter.com/jackyalcine/status/615329515909156865?lang=en and http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

The AI-supported face recognition software from IBM, Microsoft and Face++ also recognizes one group of people particularly well: white men. Black men, on the other hand, were wrongly classified in six percent of cases, black women in almost one third.

IBM: Questionable application areas
IBM, too, has therefore drawn up ethical guidelines and even developed ethnically diverse data sets to correct distortions in its software. IBM CEO Virginia Rometty told the press that the company wanted to remain attractive especially in the areas of trust and transparency: "Every organization that develops or uses AI or stores or processes the data must act responsibly and transparently."

However, the fact that IBM's face recognition software was used in Rodrigo Duterte's "War on Drugs" in the Philippines suggests that ethically responsible action is by no means guaranteed even with a bias-free AI. The difficulties are not limited to the smooth functioning of the system; they show above all in its questionable application. Can ever more precise surveillance of the population - especially of marginalized groups - be desirable at all? Perhaps, as the authorities in San Francisco recently decided, it would be better to do without such technologies altogether.

The fact that Google, contrary to its announcements, has also resumed work on a search engine for the Chinese market is another reason to be suspicious of the company's own catalogues of principles. For they are not framed as categorical imperatives but as morally blurry declarations of intent whose commercial interpretation allows maximum flexibility. One must therefore almost inevitably agree with Rometty's words: "Society will decide which companies it trusts".

Microsoft: Ethics Council without bite
Microsoft, too, has committed itself for a year now to the values of "transparency", "non-discrimination", "reliability", "accessibility", "responsibility" and "data protection". So that such guidelines do not remain merely pretty but ultimately meaningless brochures, an ethics committee was established, the AI and Ethics in Engineering and Research (Aether) Committee, which advises developers on moral issues such as facial recognition and autonomous weapon systems. https://theintercept.com/2019/03/20/rodrigo-duterte-ibm-surveillance/

However, the committee does not provide information to the public. Hardly anything is known about its working methods - what is known is limited to the statements of those responsible, and these seldom shed much light on the matter. Eric Horvitz, director of the Microsoft Research Lab, recently stated with some pride - albeit without giving concrete figures - that reservations raised by the Aether Committee had already led to several deals not being closed. The committee, he said, had shown its teeth. https://www.geekwire.com/2018/microsoft-cutting-off-sales-ai-ethics-top-researcher-eric-horvitz-says/

Whether the committee really has an effect is doubtful, however. As AI expert Rumman Chowdhury recently explained, the committee cannot enforce any changes, but only make recommendations. And so it is not surprising that Microsoft has raised awareness on its own blog of the ethical problems of AI in military projects, yet despite employee protests still wants to cooperate with the US Department of Defense: "We can't address these new developments if the people in the tech sector who know the most about the technology withdraw from the debate." https://www.theverge.com/2019/4/3/18293410/ai-artificial-intelligence-ethics-boards-charters-problem-big-tech

Ethical ideals are thus documented at Microsoft in principle, but often appear only as rough outlines. As long as expert councils act in secret and without the authority to issue directives, the "applied ethics" of the technology companies remain nothing but loose lip service.

Google: The wrong partners
In addition to the planned lack of transparency, the structure of the ethics councils in particular often points to questionable breaking points. Although their composition usually follows the pretty principle of "interdisciplinarity", they rarely impress with ethical qualifications.

Google recently discovered just how much of a problem this is. Starting in April, an eight-member Advanced Technology External Advisory Council was supposed to check whether the self-imposed values for AI development are actually put into practice. Even before its first meeting, the council was dissolved again because parts of the staff protested against its composition and wanted both Dyan Gibbens, CEO of the drone manufacturer Trumbull, and Kay Coles James, president of the neoconservative think tank Heritage Foundation, removed. https://blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/

Google now seems at a loss - at any rate, without explaining anything in detail, it says it wants to "break new ground" in obtaining external opinions.

Facebook: Purchased research
Facebook, meanwhile, demonstrates how to sidestep the problem of missing expertise and still come across as untrustworthy. The social network also wants the ethical challenges of AI to be evaluated externally and, at the beginning of the year, founded the Institute for Ethics in Artificial Intelligence together with the Technical University of Munich. Facebook is investing 6.5 million euros over five years to develop "ethical guidelines for the responsible use of this technology in business and society". https://www.tum.de/die-tum/aktuelles/pressemitteilungen/detail/article/35188/

Since even a laudable attempt at correction looks like ethical make-up when it comes from a company whose CEO once called its users "dumb fucks", it was hardly surprising that criticism arose quickly. It was mostly aimed at the risk of bought research and the anticipated conflicts of interest, as well as the moral damage the university would suffer by "getting into bed" with such a company. https://www.theguardian.com/technology/2018/apr/17/facebook-people-first-ever-mark-zuckerberg-harvard

Christoph Lütge, the institute's future director, countered that the research was independent of Facebook and would be published transparently, and pointed to the "win-win situation" for society as a whole resulting from Facebook's funding.

But the limits of ethics research at the TU Munich also become apparent. In an interview, Lütge stated that society's concerns about artificial intelligence would be addressed - but also that ethics "can do this better than legal regulation". https://netzpolitik.org/2019/warum-facebook-ein-institut-fuer-ethik-in-muenchen-finanziert/

Perhaps this is precisely where the really important questions come up: whether, how and at what speed we pursue the business of digitization at all, in which areas we even want to use AI systems such as face recognition, and what regulation beyond platitudes of responsibility could look like. Where do our red lines run?

A critical public will therefore be more important than ever. In this sense, breathe in and out calmly. But leaning back doesn't count - otherwise we'll be the all too trusting "dumb fucks" mentioned above.

https://www.republik.ch/2019/05/22/alles-nur-fake-ethik

#thinkabout #why
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
This is exactly where the matter becomes delicate. For as long as the companies themselves issue guidelines beyond generally applicable laws, "regulate themselves" through self-chosen councils or finance "independent" research themselves, doubts grow as to whether the ethical principles are really sufficient, whether they are upheld or enforced at all - or whether they are merely an empty shell and thus cheap PR.

EU: Trustworthy AI
So the self-proclaimed do-gooders from Silicon Valley can hardly be expected to deliver anything substantial when it comes to ethics. From the stylized wording of Potemkin ethics councils to the ever-same, meaningless buzzword shells, a lot of verbal noise is produced. But there are usually no consequences that would seriously call their own actions into question.

Their edifying efforts thus do not work as a "principle of responsibility" (Hans Jonas), but as an act of precautionary ethics-washing. If something goes wrong again, at least there will be an excuse: after all, we tried.

The EU has now recognised the problem and set up its own 52-member High-Level Expert Group on Artificial Intelligence, an expert committee tasked with developing guidelines for AI. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

The result was presented in April - and it was sobering. Thomas Metzinger, Professor of Theoretical Philosophy and one of only four ethicists in the group, described it as "lukewarm, short-sighted and deliberately vague". Resolute rejections - for example of lethal autonomous weapon systems - had been dropped at the insistence of industry representatives, and the proclaimed "trustworthy AI" was nothing more than a stale "marketing narrative". https://background.tagesspiegel.de/ethik-waschmaschinen-made-in-europe

Metzinger's conclusion:
If industry is too strongly involved in the discussion, at best "fake ethics" will emerge - but no real ethical progress. His appeal: civil society must take the ethics debate back from industry in order to develop the guidelines further itself. But how?

The tasks

Loose concepts could never make things and people better on their own. And preaching morality, as Friedrich Nietzsche already knew, is "just as easy as justifying morality is difficult". So instead of formulating a few melodious but shallow principles ex post, it is necessary to start earlier.

This means raising ethical and socio-political questions already during developers' training - TU Kaiserslautern, for example, offers a degree programme in Socioinformatics - and strengthening institutions that negotiate ethics and digitality at a higher level, beyond the usual lobbying. Institutions that push the discourse on effective rules forward without blinkers or false deference.

Humanities scholars are also needed here. Ethics - this would be the goal - must not remain a mere accessory that modestly accompanies or gently drapes the laissez-faire of the digital space. As a practice of consistent, critical assessment, its task should be to develop clear criteria for the corridors of action and thus also to define the framework on which binding regulations are based. If it does not, it squanders its potential and risks becoming meaningless.

To avoid this, we must not rely on the voluntary self-regulation of the tech elite but assert greater independence, combining reflection on morality with reflection on how the world is organised. For if digital corporations penetrate ever more areas of life and decisively shape social coexistence with their smart systems, this circumstance should be taken seriously. And we should think hard about whether techies, entrepreneurs and engineers alone should decide on the ethical dimensions of their developments - or whether this should be a democratic, participatory and thus many-voiced process.
📺 SensorID
Sensor Calibration Fingerprinting for Smartphones

When you visit a website, your web browser provides a range of information to the website, including the name and version of your browser, screen size, fonts installed, and so on. Ostensibly, this information allows the website to provide a great user experience. Unfortunately this same information can also be used to track you. In particular, this information can be used to generate a distinctive signature, or device fingerprint, to identify you.
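
As a toy illustration of the fingerprinting idea, the following Python sketch hashes a handful of invented browser attributes into a stable identifier (real fingerprinting scripts run in the browser and use many more signals, including the sensor calibration data this research targets):

import hashlib
import json

# Invented example of the kind of attributes a website can read from a browser.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/66.0",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
}

# Hashing the combined attributes yields a compact identifier that stays
# stable across visits - a device fingerprint that can be used for tracking.
fingerprint = hashlib.sha256(
    json.dumps(attributes, sort_keys=True).encode("utf-8")
).hexdigest()

print("fingerprint:", fingerprint)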

📺 https://sensorid.cl.cam.ac.uk/

#tracking #android #ios #fingerprinting
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 Top 5 "Conspiracy Theories" That Turned Out To Be True

We all know the old trope of the tinfoil hat wearing conspiracy theorist who believes crazy things like "the government is spying on us" and "the military is spraying things in the sky" and "the CIA ships in the drugs." Except those things aren't so crazy after all. Here are five examples of things that were once derided as zany conspiracy paranoia and are now accepted as mundane historical fact.

📺 https://www.youtube.com/watch?v=wO5oJM8GjWA

🖨 https://www.corbettreport.com/5conspiracies/

📡 @NoGoolag #corbettreport
https://news.1rj.ru/str/NoGoolag/1233

#corbettreport #conspiracy #facts #history #gov #why #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 The secret tactics Monsanto used to protect Roundup, its star product

Four Corners investigates the secret tactics used by global chemical giant #Monsanto to protect its billion-dollar business and its star product — the weed killer, #Roundup

📺 https://www.youtube.com/watch?v=JszHrMZ7dx4

🖨 https://www.abc.net.au/news/2018-10-08/cancer-council-calls-for-review-amid-roundup-cancer-concerns/10337806

#DeleteMonsanto #DeleteBayer #DeleteRoundup #FourCorners #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
🎧 Around the Globe with Financial Survival

James joins Melody Cedarstrom for this wide-ranging edition of Financial Survival. Topics covered include Vietnam and tyranny, big tech regulation and back door globalization, the US-China trade war and false flags in the Persian Gulf.

🖨 https://www.corbettreport.com/around-the-globe-with-financial-survival/

#corbettreport #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
☣️ Chaos Communication Camp 2019 ☣️

The Chaos Communication Camp in Mildenberg is an open-air hacker camp and party that takes place every four years, organized by the Chaos Computer Club (CCC). Thousands of hackers, technology freaks, artists and utopians get together for five days in the Brandenburg summer – to communicate, learn, hack and party together.

We focus on topics such as information technology, digital security, hacking, crafting, making and breaking, and we engage in creative, sceptical discourse on the interaction between technology and society.

We’d love to see your submission for these tracks:

💡 Arts & Culture,
💡 Ethics, Society & Politics,
💡 Hardware & Making,
💡 Security & Hacking,
💡 Science.

Apart from the official conference program on the main stages, the Chaos Communication Camp also offers space for community villages, developer and project meetings, art installations, lightning talks and numerous workshops (called “self-organized sessions”).

Dates & deadlines:

💡 May 22nd, 2019: Call for Participation
💡 June 11th, 2019 (23:59 CEST): Deadline for submissions
💡 July 10th: Notification of acceptance
💡 August 21st – 25th, 2019: Chaos Communication Camp at Ziegeleipark Mildenberg

Submission guidelines for talks:

All lectures need to be submitted to our conference planning system under the following URL: https://frab.cccv.de/cfp/camp2019.

Please follow the instructions there. If you have any questions regarding the submission, you are welcome to contact us via mail at camp2019-content@cccv.de.

Please send us a description of your suggested talk that is as complete as possible. The description is the central criterion for acceptance or rejection, so please ensure that it is as clear and complete as possible. Quality comes before quantity. Due to the non-commercial nature of the event, presentations which aim to market or promote commercial products or entities will be rejected without consideration.

More info:
https://events.ccc.de/2019/05/22/call-for-participation-chaos-communication-camp-2019/

#ccc #camp
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 Interview with Ren Zhengfei, Founder And CEO Of Chinese Telecom Giant Huawei

Ren Zhengfei, founder and CEO of Chinese telecom giant Huawei, spoke to Time on U.S. actions against his company, the security of Huawei's product, his daughter and Huawei CFO's arrest, President Donald Trump and 5G technology.

📺 https://www.youtube.com/watch?v=Nl2jCWDwE8w

#china #huawei #founder #interview #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
US authorities want to intercept telecommunications in Europe

The FBI could soon be legally entitled to demand sensitive communication data from European Internet service providers, possibly in real time. In return, the European Union hopes to make the Trump administration more willing to let EU authorities query "electronic evidence" directly from Facebook & Co.

The EU Commission wants to negotiate an agreement with the US government that would oblige Internet service providers based in the European Union to cooperate more closely with US authorities. The companies would have to grant US police and intelligence services access to their users' communications. In return, European prosecutors would be able to issue disclosure orders directly to Facebook, Apple and other Internet giants. The legal process via the judicial authorities that has been customary up to now would be dropped. https://ec.europa.eu/info/policies/justice-and-fundamental-rights/criminal-justice/e-evidence-cross-border-access-electronic-evidence_de

The plans are part of the "E-Evidence" regulation, with which the EU wants to make it easier to obtain "electronic evidence". According to a recently published draft, this includes user data (name, date of birth, postal address, telephone number), access data (date and time of use, IP address), transaction data (transmission and reception data, location of the device, protocol used) and content data.

Agreement on implementation with the US Government
The planned EU regulation is limited to companies domiciled in the European Union. But because most of the coveted data is stored in the USA, the EU Commission is planning an implementation agreement with the US government. This would be possible within the framework of the "CLOUD Act", which the US government enacted last year. It obliges companies established in the USA to disclose inventory, traffic and content data if this appears necessary for criminal prosecution or averting danger.

The CLOUD Act also allows third countries to issue orders to US companies. The agreement required for this must be based on reciprocity and thus allow the US government access to companies in the partner countries. The Trump administration, however, demands as a concession the ability to intercept content data in real time. Companies based in the EU would then have to hand this data over to US authorities directly.

More info:
https://netzpolitik.org/2019/us-behoerden-wollen-telekommunikation-in-europa-abhoeren/

#USA #FBI #EU #government #surveillance
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Meet Doggo: Stanford's student built, four-legged robot

Putting their own twist on robots that amble through complicated landscapes, the Stanford Student Robotics club’s Extreme Mobility team at Stanford University has developed a four-legged robot that is not only capable of performing acrobatic tricks and traversing challenging terrain but is also designed with reproducibility in mind. Anyone who wants their own version of the robot, dubbed Stanford Doggo, can consult comprehensive plans, code and a supply list that the students have made freely available online:

https://github.com/Nate711/StanfordDoggoProject

https://docs.google.com/spreadsheets/d/1MQRoZCfsMdJhHQ-ht6YvhzNvye6xDXO8vhWQql2HtlI/edit#gid=726381752

http://roboticsclub.stanford.edu/

📺 https://www.youtube.com/watch?v=2E82o2pP9Jo

#doggo #robotic #opensource #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN