DataHoarder – Telegram
Testing early stages of media server
https://redd.it/1pxi7sq
@r_DataHoarder
How should I back up my media?

I currently have a PC with 3 drives totalling 5.5TB.

But I have so many videos and pictures from years ago that I don't even look at, yet I can't delete them and only rarely need to look something up in them. I need a single central place (instead of spread across 3 drives and countless directories), but I don't need a 24/7 NAS or server. Maybe I should buy a large drive and put them on there? Plug it in only when needed? But then I'm also going to run out of space on my desktop, and I'd rather clean it up and reorganize, so I'll probably not keep the originals on my computer. In that case I'd be trusting this one other drive, which isn't great. Should I buy two drives and put one in a safe or give it to a family member? My computer is 6 years old and I know HDDs and SSDs don't last forever. I don't like spending money, but I figure the stress and even devastation if something happened makes whatever the cost worth it.
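
If you do go the two-external-drive route, one low-effort way to keep the second drive in sync with the first is robocopy, which ships with Windows. A minimal sketch, assuming the drive letters and folder name below (they're placeholders); note that /MIR mirrors deletions too, so only run it when the primary copy looks healthy:

    # Mirror the primary external drive to the second one before it goes back in the safe
    robocopy E:\MediaArchive F:\MediaArchive /MIR /R:1 /W:1 /LOG:F:\backup.log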

https://redd.it/1pxlvbt
@r_DataHoarder
Upgrading from 2x 6TB to 2x 12TB storage

Current setup: 2x 6TB (ZFS mirror), 80% full.

Bought 2x 12TB and I'm deciding what to do with them. Here's what I'm thinking; please let me know if I'm not considering something, and what would you do?

1. Copy everything to a new 12TB mirror, but keep using the 6TB mirror as my main and delete all the less-used items to free space (like any large backups that don't need to be accessed frequently). Downsides: managing two pools, the fact that I currently run them as external drives (lol) which would mean 4 external drives, and possibly outgrowing the 6TB main again. I don't want to end up placing new files in both places.
2. Copy everything to a new 12TB mirror, use that as the main, and nuke the 6TBs. Maybe make a (6+6) stripe and use it as an offline backup/export of the 12TB mirror? Or I could go with a (6+6)+12TB mirror and keep the other 12TB as the offline backup/export, but I would still need to rebuild the (6+6) stripe.
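
For option 2, a rough sketch of the copy step with zfs send/receive, run from a shell on the machine hosting the pools (pool names and device paths below are placeholders, not your actual setup):

    # Create the new 12TB mirror (example device names)
    zpool create bigtank mirror /dev/sdc /dev/sdd

    # Snapshot the old pool recursively and replicate everything to the new pool
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F bigtank

    # After verifying the copy, the 6TB pool can be destroyed and rebuilt as a stripe
    # zpool destroy tank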

https://redd.it/1pxm0fz
@r_DataHoarder
Just had a bit rot (I think) experience!

I downloaded a 4K UHD disc and, before offloading it from my main storage, I archived it with WinRAR. I tested it and it worked fine. I copied it to two different 20TB drives (one Seagate Exos, one WD Ultrastar). This was about a month ago. The archive was split into multiple 1GB parts.

Today I needed the files for seeding, so I tried to extract the archive. It stopped at part11.rar, saying the archive is corrupt. It was fine when I tested it before copying to the drives. Luckily, I had created two recovery volumes, so I deleted the corrupted part and the recovery volumes reconstructed it.

Then I tried to extract it from the other 20TB drive (WD), and it extracted fine. No corrupt files.

So I think the copy on the Seagate Exos suffered a silent bit error?

The drive health shows 100%; I'm running a full surface read test now.
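
One way to catch this kind of silent corruption before you actually need the files is to store checksums next to the archive and re-verify periodically. A minimal PowerShell sketch, assuming placeholder paths (Get-FileHash is built in):

    # Build a SHA-256 manifest for every .rar part in the archive folder
    Get-ChildItem 'D:\archive\movie' -Filter *.rar -Recurse |
        Get-FileHash -Algorithm SHA256 |
        Export-Csv 'D:\archive\movie\manifest.csv' -NoTypeInformation

    # Later: re-hash each part and compare against the manifest
    foreach ($entry in Import-Csv 'D:\archive\movie\manifest.csv') {
        $current = (Get-FileHash $entry.Path -Algorithm SHA256).Hash
        if ($current -ne $entry.Hash) { Write-Warning "Checksum mismatch: $($entry.Path)" }
    }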

https://redd.it/1pxmrii
@r_DataHoarder
Venting: I dislike how upvoted comments crap on a person's efforts but don't offer helpful suggestions.

C'mon, you're all better than this.

I thought this community was nice.

https://redd.it/1pxqw2s
@r_DataHoarder
Syncing without corruption?

I run a homelab and have a NAS which stores both archival data (e.g. photo galleries, movies) and files I work with on a regular basis (e.g. documents) in a ZFS pool consisting of mirrored vdevs. I let my NAS sync files to my PCs so that they can access and work on them locally without delay or compatibility issues.

However, it occurred to me that having several synced copies of the dataset raises the chances that one of the copies gets corrupted (mainly due to bad sectors on a hard drive) and then synced to all the other copies.

My first idea was that I could keep checksums of my data and watch for spontaneous changes, but I don't really see an easy way for a program to distinguish that from the case where a user has edited the data. The other idea would be to run regular scans of all drives to check for bad blocks.
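
One heuristic that makes that distinction workable in practice: record the content hash together with the last-write time. A file whose hash changed while its modification time did not is a corruption candidate rather than a user edit (a scrub on the ZFS side already covers the NAS copy; this targets the PC copies). A rough PowerShell sketch, with placeholder paths:

    # Pass 1: write a manifest of path, SHA-256 hash and last-write time
    Get-ChildItem 'D:\synced' -Recurse -File | ForEach-Object {
        [pscustomobject]@{
            Path  = $_.FullName
            Hash  = (Get-FileHash $_.FullName -Algorithm SHA256).Hash
            MTime = $_.LastWriteTimeUtc.ToString('o')
        }
    } | Export-Csv 'D:\manifest.csv' -NoTypeInformation

    # Pass 2 (later): flag files whose content changed but whose timestamp did not
    foreach ($entry in Import-Csv 'D:\manifest.csv') {
        $file = Get-Item $entry.Path -ErrorAction SilentlyContinue
        if (-not $file) { continue }   # deleted or renamed by the user
        $hash = (Get-FileHash $file.FullName -Algorithm SHA256).Hash
        if ($hash -ne $entry.Hash -and $file.LastWriteTimeUtc.ToString('o') -eq $entry.MTime) {
            Write-Warning "Possible corruption (content changed, mtime unchanged): $($entry.Path)"
        }
    }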

As far as I can see, the safest and simplest way to protect the data would be to have my PCs work off a network share, but this makes me dependent on my internet connection for the offsite hosts (e.g. PCs at family members' places that share the data) and could cause compatibility issues with certain software.

So I'd like to make sure I'm not overlooking a solution for syncing data without multiplying the risk of data corruption.

https://redd.it/1pxr9a8
@r_DataHoarder
Looking to scan some old Nindori magazines, preferably without damaging the spines?

Hello! I have just bought a few Nintendo Dream mags & I intend to scan them in & archive them online, likely on archive.org. My local library does not support color scans. I am checking around for printing services/Staples/maybe some of the local colleges for other digitizing options, but I'll eventually need a scanner anyhow. I am not super familiar with digitizing beyond scanning in my personal pencil & pen art, & digitizing in general (but especially magazines) is a new area for me. A very select number of scans of the Nindoris I have bought do exist online (issue 204 from 2011 has like 30 pages scanned, for instance), but they're lacking in quality & don't carry more than half the pages. As I intend to have at least one of the pages partially translated, the lacking quality of the available scans is a bit of a bottleneck: the text is not clear at all & it makes the characters hard to read even for my translator. So! I need my own scans.

I have seen a few recommendations for scanning mags & most either involve cutting the pages or otherwise debinding. I am not opposed to this, but I have also heard mention of scanners that have the ability to scan magazines/books in a nondestructive manner? I'd like to not damage the magazines if possible so I was hoping somebody'd have the 411 on how I'd go about this.

I know the Bookeye is a verrry expensive option (I am not that rich) but I figured I'd ask: is there a scanner that can scan these Nindoris at a reasonable quality without me unbinding them? I'd throw out my price range at around $500 or so, & I am a little worried about warping of the images due to the natural curve of the spine if I do not debind; do these scanners account for that warping? I also need to specify that the scanner would either need a wired connection or a way to insert a USB stick or SD card, as my PC does not support Bluetooth. Otherwise the general scanning recs apply: 600 or 1200 DPI, save to PDF or TIFF, preferably software that doesn't alter the images by default, etc.

If not, I will likely use a heat gun to preserve the og spines & reglue when done. My only reason for hesitating is that I have to scan many pages & many volumes; I have 11 out of 12 of the issues for 2021, for example. Debinding will not take super duper long, but regluing will, as most Nindoris seem to run past 100 pages. I am also a bit worried the heat gun could end up warping or discoloring the pages? If anyone has any recs for flatbed or feeder-style scanners in the instance of me deciding to debind anyhow, those would be pog.

https://redd.it/1pxt70d
@r_DataHoarder
best place to buy high capacity hard drives for the low/cheap?

In the past, like a year or two ago, I used serverpartdeals and goharddrive and got crazy deals on 14TB and 12TB drives that were manufacturer refurbished or recertified. Now that I'm back in the market, I checked out their websites for the first time in a year and it seems their prices have gone way up. A year ago I was able to get a 12TB IronWolf with a 3-year warranty from goharddrive for like $110.

Are there any alternatives?

https://redd.it/1pxxggj
@r_DataHoarder
What Software Tools are we using?

I saw a comment on a post talking about how automation tools are getting better, and so I ask you: what do you use? I use MusicBrainz Picard and renamemytvseries (I think?). But I find myself vibe coding tools to do specific things, and I wonder if there isn't a better, safer way... or at least a more mainstream tool that already exists. And if not, what software tools do you use that you or someone you know has made? I'm thinking of things like renamers, audio level normalizers, artwork fetchers, API checkers, folder structure normalizers, dedupers, etc.
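
For one of the categories above, dedupers, a surprisingly large share of the job can be done with a few lines of PowerShell before reaching for a dedicated tool. A minimal sketch, with a placeholder path; it groups by size first so only size collisions get hashed:

    # Find duplicate files: group by size, hash the collisions, group by hash
    Get-ChildItem 'D:\library' -Recurse -File |
        Group-Object Length | Where-Object Count -gt 1 |
        ForEach-Object { $_.Group } |
        Get-FileHash -Algorithm SHA256 |
        Group-Object Hash | Where-Object Count -gt 1 |
        ForEach-Object {
            "Duplicates:"
            $_.Group.Path | ForEach-Object { "  $_" }
        }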

https://redd.it/1pxyfzy
@r_DataHoarder
Fractal Design Define 7 drive capacity

Spec'ing a new system, and Copilot is telling me that this case can easily hold 5 3.5" SATA drives. I don't see it. I'm confused. Is it hallucinating, or am I missing something? I just don't want to trust it blindly, but despite looking over the product information it looks to me like it will hold at most two drives (without getting creative). Can anyone confirm?

https://redd.it/1pxzgjg
@r_DataHoarder
Two basic 1TB external HDDs for family media long-term?

Anyone have any suggestions here please?

I'll be storing the media copied across both drives, if that's what's recommended.

https://redd.it/1py1zyf
@r_DataHoarder
Audio converting guide (ffmpeg, PowerShell 7, Windows, parallel and recursive)

Hi,

Just wanna share my simple workflow for audio conversion; maybe someone will find it useful.

Also it's parallel: it runs multiple conversions at once (ForEach-Object -Parallel defaults to 5 at a time; adjust with -ThrottleLimit), so it's much faster.

ForEach-Object -Parallel works only in PowerShell version 7 and up, so you need to get that before running the script.

cd to the directory where your files are; the script recursively converts every matching file in every folder below it.

Copy-paste from Notepad (to clear formatting) to run it.


## .wav to .flac:

    # Convert every .wav under the current directory to FLAC (lossless, max compression)
    Get-ChildItem -Recurse -Filter *.wav |
        ForEach-Object -Parallel {
            $outfile = Join-Path $_.DirectoryName "$($_.BaseName).flac"
            ffmpeg -y -i $_.FullName -c:a flac -compression_level 12 $outfile
        }



## .flac to .opus (160K is enough for "transparency" XD)

    # Convert every .flac under the current directory to Opus at 160 kbps
    Get-ChildItem -Recurse -Filter *.flac |
        ForEach-Object -Parallel {
            $outfile = Join-Path $_.DirectoryName "$($_.BaseName).opus"
            ffmpeg -y -i $_.FullName -c:a libopus -b:a 160k $outfile
        }



## .wav to .opus (160K is enough for "transparency" XD)

    # Convert every .wav under the current directory to Opus at 160 kbps
    Get-ChildItem -Recurse -Filter *.wav |
        ForEach-Object -Parallel {
            $outfile = Join-Path $_.DirectoryName "$($_.BaseName).opus"
            ffmpeg -y -i $_.FullName -c:a libopus -b:a 160k $outfile
        }


After that you can use Everything (Void Tools) to clean up source files.
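
Before cleaning up the sources, it may be worth a quick decode check on the converted files; with -v error, ffmpeg stays quiet unless something in the file fails to decode. A small sketch in the same style as the blocks above:

    # Decode-check every converted file; anything that produces output is suspect
    Get-ChildItem -Recurse -Include *.flac, *.opus |
        ForEach-Object -Parallel {
            $err = ffmpeg -v error -i $_.FullName -f null - 2>&1
            if ($err) { Write-Warning "Problem decoding $($_.FullName): $err" }
        }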

I'm sure there is a way to make it neater, but I need some flexibility and this works for me :D

https://redd.it/1py2hoz
@r_DataHoarder
Need advice for preserving subreddit posts/subreddit data

Hello,

I'm the founder of r/AcademicQuran, an academically oriented subreddit which explores the Quran, early Islamic history and Islamic Studies in general from a historical-critical perspective. Our sub is nearly 5 years old and many high-quality posts discussing a variety of academic topics have been made over the years, and I am interested in finding a way of preserving the content of these posts, if not the data for the entire subreddit.

What steps would need to be taken in order to preserve some of the better posts on the sub in a way that would be legal and not in violation of any of Reddit's Terms of Service? What would have to be done in order to preserve the data of the entire subreddit (though if I had my choice, only the higher-quality posts' data would be preserved)?
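
On the purely technical side (leaving the ToS question to others), Reddit serves a public JSON view of any listing, and saving that raw JSON preserves post text, scores, authors and permalinks. A minimal PowerShell sketch, with a placeholder output folder; comments for each thread live at <permalink>.json and can be fetched the same way:

    # Fetch the top posts of r/AcademicQuran as JSON and save one file per post
    $headers = @{ 'User-Agent' = 'subreddit-archive/0.1' }   # Reddit's API guidelines ask for a descriptive User-Agent
    $url = 'https://www.reddit.com/r/AcademicQuran/top.json?t=all&limit=100'
    $page = Invoke-RestMethod -Uri $url -Headers $headers

    foreach ($child in $page.data.children) {
        $post = $child.data
        $post | ConvertTo-Json -Depth 10 |
            Set-Content (Join-Path 'D:\subreddit-archive' "$($post.id).json")   # output folder must already exist
    }
    # $page.data.after holds the cursor for the next page of results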

https://redd.it/1py2ze9
@r_DataHoarder
Largest amount of storage I can fit in my DeskMini 310w?

Currently I'm running 2x 2TB SSDs as a mirrored NAS in my ASRock DeskMini 310w, and it's serving me well. Only half used so far. I don't go out of my way to get as much media as possible, only ones I'd like to either rewatch or do watch and maybe eventually delete.

But then I looked into 4K, which is... "a lot" would be an understatement... Not that I'm looking to change anything anytime soon, but I'm curious to know, in MY use case where I do not want to invest in an external NAS (for now anyways), what do you guys think is the max I'm able to go up to if (and it's a big if) I wanted to upgrade storage within the PC?

Please humor me and suppose I combined the 2 drives instead of mirroring them.

https://redd.it/1py6gy1
@r_DataHoarder
A Holiday Miracle - My CD-RW Works Again!

I have this old CD-RW that I used to back up my files when I was a kid. It had stories I wrote, homework, photographs of family and friends, and music. Life got busier as I got older and I forgot all about this backup.

It wasn't until a few years ago that I remembered it and tried to view the files, but it took my computer a long time to read it, and sometimes not all the files would appear. When I took the disc out and tried again, File Explorer couldn't read it at all. If I right-clicked and viewed the properties, it showed the disc contents as 0 bytes. Multiple subsequent attempts all failed.

I think I might have actually posted a thread here, or maybe on a tech support forum, about this problem. I learned that different brands of CD-RW have different lifespans, that humidity, temperature and the dyes all play a role, and that eventually the disc would degrade. As it was unreadable, I was certain it was dead. Despite this, I couldn't throw away something that had once held so many memories, so I put it in a box in my closet.

Fast-forward some more years to this Christmas; I was going through my belongings in preparation for an upcoming move and came across my CD-RW. Maybe it was some lingering hope, or maybe dealing with grief motivated me to make another attempt at recovering something from my past. For whatever reason, my CD-RW is working normally again! I haven't done anything or installed any special software to read it; it just works somehow. I've copied all the files to an HDD just in case the CD-RW fails again.

Anyway, I just wanted to share this story here. I'm so happy to have those old files back.

https://redd.it/1pyaj1o
@r_DataHoarder