vx-underground
A young person encountered a vx-underground imposter on Discord who was trying to convince them to meet up in person. They contacted us to confirm whether or not this was true. This is a reminder vx-underground will never try to meet you in person. We don't…
tl;dr some dude pretending to be a staff member was trying to pick up a chick. we don't go outside, we don't meet people, we're scared of grass and sunlight
🤣117❤20🤓17😁5🔥4👍2😢1
vx-underground
A young person encountered a vx-underground imposter on Discord who was trying to convince them to meet-up in person. They contacted us to confirm whether or not this was true. This is a reminder vx-underground will never try to meet you in person. We don't…
What the hell is someone going to say to pick up chicks while impersonating someone from our group?
"hai bb, you wanna hookup with a chronically online, morally ambiguous, mid-30s man with a benzodiazepine dependency who is also (probably) on multiple watchlists?"
😁108🤣42❤24🎉9💯9😍6❤🔥3😢1
It's the ultra rare, limited edition, double mega oopsie doopsie.
Last night at 4am PST California officials accidentally sent out an evacuation warning to the entire Los Angeles area ... AGAIN. They've made the same mistake twice in a 12-hour stretch!
🤣103🔥12👍2😁2❤1🤩1🤓1
vx-underground
It's the ultra rare, limited edition, double mega oopsie doopsie. Last night at 4am PST California officials accidentally sent out an evacuation warning to the entire Los Angeles area ... AGAIN. They've made the same mistake two times in a 12 hour time stretch!
We've never seen such a colossal oopsie 2 times in a row in a 12 hour stretch. They're probably scaring these people to death — getting notified at 4am they need to pack their stuff and go 😭
🤣60😱10🤓5❤2😁2🫡2🔥1🎉1
vx-underground
It's the ultra rare, limited edition, double mega oopsie doopsie. Last night at 4am PST California officials accidentally sent out an evacuation warning to the entire Los Angeles area ... AGAIN. They've made the same mistake two times in a 12 hour time stretch!
THEY DID IT A THIRD TIME.
We didn't think it was possible to do an oopsie doopsie 3 times! This is absolute madness. Someone get California on the horn and tell them to wake up
Holy smokes
😁94🤣45👏6😱6🤓2🔥1🎉1
vx-underground
THEY DID IT A THIRD TIME. We didn't think it was possible to do an oopsie doopsie 3 times! This is absolute madness. Someone get California on the horn and tell them to wake up Holy smokes
What's interesting though is this time it wasn't sent to the entire LA County. It was sent to the wrong areas in Los Angeles, with the wrong message
tldr people in Long Beach received notifications meant for people near the Eaton Fire which said it was for the entirety of LA
???
🔥36🤣14😁6🎉3🤓2❤1
40 minutes ago Los Angeles county officials stated on television they're working with partners to stop the false and/or incorrect evacuation warnings people are receiving WHICH ARE NOT happening from human interaction (???)
They're currently investigating how this is happening
🤣101🤓11😁6🤯2😢1🤩1
vx-underground
40 minutes ago Los Angeles county officials stated on television they're working with partners to stop the false and/or incorrect evacuation warnings people are receiving WHICH ARE NOT happening from human interaction (???) They're currently investigating…
We don't want to get all crazy-whacko-conspiracy-theory, but this sure would be a great time for an adversary of the United States to cause chaos and/or spread misinformation.
🤓49🤣15🔥5❤4👍3🤔2😁1🤯1😢1🤩1
Hello,
This is a reminder that if you're a politician representing your country in the UN — you should avoid information stealer malware.
You should also avoid soliciting sex with male prostitutes on social media in private DMs.
🤣150💯16🤔10🤯5👍3❤1😢1🤓1🫡1
Hello,
Our backend is currently down because we're migrating hosts. Our frontend is still up, hence why you can see the "BBIAB" message.
tl;dr used too much data, moving to dedi
non-tl;dr (long read)
We initially used Wasabi as our backend because it's cheaper than a lot of hosting providers. Wasabi is good if you have data stored but don't intend for your egress to exceed what is currently stored. Egress exceeding stored data is a violation of Wasabi's terms of service, specifically their data usage section.
Under normal conditions, thanks to our Cloudflare Enterprise plan (gifted to us by Cloudflare), our egress would not exceed our data storage and everything would be fine and dandy. However, as we've begun aggregating malware for our virus exchange domain, we've started consuming egress and data at a high rate. Our current flow works something like this:
1. Get file (maybe malware, maybe not malware)
2. Submit to virus exchange database via API
3. Data goes inside virus exchange database
4. Data sent to VirusTotal for scanning
5. Wait 60 seconds (async, other files sent too)
6. Query VirusTotal results
7. If file is malware, store in database as SHA256
8. If not malware, dispose of file
9. Copy confirmed malware from virus exchange bucket to vx-underground malware ingestion bucket
10. File placed in daily ingestion queue data directory
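The steps above can be sketched roughly like this. This is a minimal illustration, not our actual code: the VirusTotal submit/wait/query round-trip (steps 4-6) is stubbed out, and all function and variable names are made up.

```python
import hashlib

def scan_stub(data: bytes) -> bool:
    """Stand-in for the async VirusTotal submit/wait/query round-trip (steps 4-6)."""
    return data.startswith(b"MZ")  # toy heuristic, for the sketch only

def ingest(samples):
    """Steps 7-10: keep confirmed malware keyed by SHA256, drop everything else."""
    confirmed = {}
    for data in samples:
        if scan_stub(data):                       # step 7: flagged as malware
            digest = hashlib.sha256(data).hexdigest()
            confirmed[digest] = data              # step 9: copy to ingestion bucket
        # step 8: clean files are simply not retained
    return confirmed                              # step 10: daily ingestion queue

queue = ingest([b"MZ\x90\x00fake-pe", b"harmless text file"])
print(len(queue))  # 1
```

In practice the scan step is asynchronous and batched, but the keep-by-hash / discard decision is the core of it.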
Each day, every malicious file received is placed in a directory labeled with the current date, usually named something like "Malware.{Year}.{Month}.{Date}". We eventually pull these directories down from our bucket using the AWS CLI and 7z ultra compress them, then move the archives to local backup instances. Once backup is complete we push it back to the vx-underground backend prod environment.
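Assuming the naming convention above, the daily pull-and-compress step amounts to something like the following sketch. The bucket name is a placeholder, the zero-padding in the directory name is an assumption, and "-mx=9" is 7-Zip's ultra compression preset.

```python
import datetime

# Build the daily directory name per the "Malware.{Year}.{Month}.{Date}"
# convention (zero-padding is an assumption)
day = datetime.date(2025, 1, 10)  # example date
dirname = f"Malware.{day.year}.{day.month:02d}.{day.day:02d}"

# The pull and compress steps as command lines; "vx-bucket" is a placeholder
pull = ["aws", "s3", "cp", f"s3://vx-bucket/{dirname}", dirname, "--recursive"]
compress = ["7z", "a", "-mx=9", f"{dirname}.7z", dirname]  # -mx=9 = ultra

print(dirname)  # Malware.2025.01.10
```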
We began receiving warnings from Wasabi when we were ingesting 50,000 - 100,000 malware samples a day. We scaled it back to 15,000 - 30,000 malware samples a day. This still irritated them, so we now have to move to a new host who won't charge us a fortune for processing and moving so much data internally and externally.
We ultimately decided to move to TorGuard because they're a sponsor of ours, we have a good relationship with them and their team, and they're going to help us out with some malware-related stuff. We had planned on moving to their infrastructure for a while, but we kept delaying it: moving so much data, modifying so many of our internal procedures, and plain laziness made us dread the move.
👍64🫡18🤓7🙏6❤4🤣4😢1