Reddit Sysadmin – Telegram
Printix garbled output

I'm currently in the process of switching everything to Printix at our company. I have a printer model with a specific driver that only prints cryptic characters when the print job originates from a Mac. The driver is the correct one, the same driver that we used without Printix before. Has anyone else experienced this? It seems as if the printer and the operating system aren't speaking the same language.

https://redd.it/1qzbutk
@r_systemadmin
Do you have a 12th gen Proliant? Willing to show me the factory iLO certificate?

I'd like to see an example of the certificate (certificate chain?) that ships with a 12th generation Proliant's iLO interface.

If you've got one that's still sporting its OEM certificate (or self-generated? I'm not sure whether these are factory-applied vs. generated at first boot), you can pull it from a shell prompt with:

openssl s_client -connect google.com:443 -showcerts </dev/null \
  | awk '
      /BEGIN CERTIFICATE/ {cert = ""}
      {cert = cert $0 ORS}
      /END CERTIFICATE/ {
          print cert | "openssl x509 -noout -text"
          close("openssl x509 -noout -text")
          print ""
      }'

...Just change "google.com" to the name or IP of your iLO interface.

Feel free to obfuscate any MAC address, serial number or key modulus as you see fit, but please don't break the format: I'd like to know whether MAC addresses are encoded as abcd.abcd.abcd vs. AB:CD:AB:CD:AB:CD and so forth.

Thanks!

https://redd.it/1qze8pu
@r_systemadmin
Need help setting up a reverse proxy for my nodejs backend on IIS

Hi everyone, as the title states, I need assistance with setting up a reverse proxy for my Node.js backend on IIS. For context, I've developed a React web app that relies on a Node.js backend.

https://redd.it/1qzf9z5
@r_systemadmin
Changed email address for resource calendar, can't see free/busy now

I changed the email address for a resource/room calendar and now I can't see free/busy if I add the shared calendar to my calendar list in Outlook. It will still accept/deny meeting invites.

I waited 24 hours and nothing changed. I've changed the email address back and it still doesn't work. The next step is to delete and re-add it, but that might upset a lot of users.

Any ideas?

https://redd.it/1qzfqzh
@r_systemadmin
SSH Port forwarding

My question to all sysadmins: do you allow TCP port forwarding on your SSH servers? For example, when someone has access only to the SSH server, but that server sits inside the whole internal network? I just realized that on most server distros, TCP port forwarding is enabled by default.
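For reference, OpenSSH does default to AllowTcpForwarding yes; it can be disabled globally and re-enabled per group. A minimal sshd_config fragment (the "ssh-admins" group name is just an example):

```
# /etc/ssh/sshd_config -- disable forwarding and tunnelling by default
AllowTcpForwarding no
GatewayPorts no
PermitTunnel no

# Re-enable selectively for a trusted group (example group name)
Match Group ssh-admins
    AllowTcpForwarding yes
```

After reloading sshd, the effective value can be checked with `sshd -T | grep -i allowtcpforwarding`.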

https://redd.it/1qzj9dt
@r_systemadmin
Action1/Powershell Scripts for Secure Boot kickoff and check

Just in case anyone needs these, I posted a couple of scripts to "kick off" the Secure Boot certificate updates (with the BIOS already updated to include the 2023 cert) and another one to check the flag showing that the update has completed.

I posted them in the Action1 sub, but sysadmin doesn't allow cross-posting, so they are over here. Use at your own risk, with testing.


Kickoff - https://www.reddit.com/r/Action1/comments/1qz6rsd/secure_boot_2023_cert_kickoff_script/

Verification Check - https://www.reddit.com/r/Action1/comments/1qz74re/secure_boot_2023_cert_updated_verification_script/

https://redd.it/1qzljal
@r_systemadmin
Experiences with Unix‑like systems on older hardware (32‑bit limits)?

Many mainstream OSes are dropping 32‑bit support. Has anyone kept a 32‑bit Unix‑like system alive? What worked best?
What challenges did you face and how did you solve them?

https://redd.it/1qzlswc
@r_systemadmin
Disk mounted as write-protected, protected by Bitlocker, and I've tried everything I'm aware of to mount it writeable.

I'm able to unlock the volume without issue. Status is protected and unlocked. The disk and volume attributes are both NOT read-only, but I've cleared those attributes just in case.

NTFS permissions look fine, but even if I try to adjust them, I get a "disk is mounted read-only" error.

I'm aware of the GPO that can make non-protected volumes write-protected, and I've even gone so far as to set that policy to "disabled". I've also checked the SAN policy and ensured it's OnlineAll. Still, I can't get this disk mounted writeable.

Any BitLocker gurus out there understand what is happening? What am I missing? I'm entering a password after the VM boots and it's mounted read-only; I've also unlocked with the AD-stored recovery password, and that results in the volume being mounted read-only as well.

Eternally grateful for any insights. Thanks, All.

https://redd.it/1qzobcn
@r_systemadmin
Carnival Cruise Line Outage?

Any comrades have info on the ongoing Carnival Cruise line outage? Boarded (after terribly long delays) on the Panorama in Long Beach, but unable to sail out due to "IT Issues." Sounds like it's fleet wide.

https://redd.it/1qztp4z
@r_systemadmin
Does anyone have a backup/alternate location for the Dell devices Secure Boot update firmware versions list?

We're working on getting the Secure Boot certificate updates done, and I've been referencing this list from Dell for the past week: dell.com/support/kbdoc/nl-nl/000347876
It seems to have disappeared since Friday, even though it's still referenced by Dell and Microsoft in other documentation. Thanks in advance!

https://redd.it/1qzxofq
@r_systemadmin
From Today: Microsoft 365 Admin Center Demands MFA

Starting today, access to the Microsoft 365 admin center will be blocked for any account that does not have Multi-factor Authentication enabled.

Stay ahead: If you haven’t enabled MFA yet, set it up right away to avoid any sign-in issues once mandatory MFA enforcement is rolled out in your organization.

https://redd.it/1r016ba
@r_systemadmin
Datacenter costs through the roof

Hi all,

We're a Belgian-based company using the data centre of one of the biggest ISPs in Belgium.

We were recently pressured into changing our model from reserved to pay-as-you-use.

We were on the reserved model with 30 VMs; when we checked the pay-as-you-use model and saw that we were going to eliminate half of our VMs, it looked like a no-brainer, as the ISP stated that costs would be reduced by almost half.

Half a year later, our bill is exactly as high, but with half the resources.

Is this also fallout from the Broadcom acquisition, or have we been bamboozled?

(If this violates the guidelines, please tell me, as this keeps getting removed without a reason.)

https://redd.it/1r06bx9
@r_systemadmin
Exchange Online has broken almost every single month

One of the things that keeps surprising me is the general impression that moving email to Microsoft's cloud isn't a massive business risk. I hear all the time that people have "never experienced an outage".

If you look at Bleeping Computer's posts tagged with Exchange Online, it's pretty much monthly that Microsoft fails to correctly let people send blurbs of text to other people across the Internet: https://www.bleepingcomputer.com/tag/exchange-online/

https://redd.it/1r0cinv
@r_systemadmin
Our dev team is the weak point in our cyber security and they don't want to change

Tl;dr: the dev team is pushing back hard against giving up their privileges, which create a weak spot in our cyber security. Wondering how others handle this.

Our company does both manufacturing and software. About 150 desks, of which 45 are developers. We grew very quickly in the past few years, roughly 10x in size. This meant IT only became a thing when the dev team had already got their own Linux devices with superuser access, a single shared password for the file shares, etc.

Last year I was given the responsibility to streamline IT. I don't have a degree in IT but just became the 'sysadmin' because I was the only one taking on responsibility and answering questions about IT.

I worked diligently with an MSP to get everything in order: backups, redundancy, password policy, password manager, asset management, Intune, CA, standardized on- and offboarding, etc.

This year we came to the point where we wanted a clear view of the road ahead, so I made a Cyber Roadmap. We identified one major cyber security risk: our Linux endpoints are (basically) unmanaged. No endpoint protection, no encryption, full permissions, shared passwords, no patches or updates. And almost no options for managing them, except maybe by using 5+ tools.

Looking at alternatives, a Unix OS seems to be a must for some AI/ML tools. And we have on-prem software that only runs on Windows, which some of the developers need in their workflow. So that left me with:

- Mac + Azure Virtual Desktop

- Windows + WSL

I've been leaving hints about the change that needs to happen, and that seems to have rubbed some the wrong way. Some of the team members appear to have exaggerated this, claiming we want to force them onto Windows only.

I got approval for a one-desk pilot, but even setting that up got me some snarky comments. I feel like I'm walking a thin line. Management understands the need for security but also doesn't want to scare away our valuable dev team (and neither do I). I still have the green light, but it feels like it's turning orange.

What would you guys do?

https://redd.it/1r0eadi
@r_systemadmin
Weekly 'I made a useful thing' Thread - February 13, 2026

There is a great deal of user-generated content out there, from scripts and software to tutorials and videos, but we've generally tried to keep that off of the front page due to the volume and as a result of community feedback. There's also a great deal of content out there that violates our advertising/promotion rule, from scripts and software to tutorials and videos.

We have received a number of requests for exemptions to the rule, and rather than allowing the front page to get consumed, we thought we'd try a weekly thread that allows for that kind of content. We don't have a catchy name for it yet, so please let us know if you have any ideas!

In this thread, feel free to show us your pet project, YouTube videos, blog posts, or whatever else you may have and share it with the community. Commercial advertisements, affiliate links, or links that appear to be monetization-grabs will still be removed.

https://redd.it/1r3laua
@r_systemadmin
Patch Tuesday Megathread (2026-02-10)

Apologies, y'all - We didn't get the 2026 Patch Tuesday threads scheduled. Here's this month's thread temporarily while we get squared away for the year.

Hello r/sysadmin, I'm ~~u/automoderator~~ err. u/kumorigoe , and welcome to this month's Patch Megathread!

This is the (mostly) safe location to talk about the latest patches, updates, and releases. We put this thread into place to help gather all the information about this month's updates: What is fixed, what broke, what got released and should have been caught in QA, etc. We do this both to keep clutter out of the subreddit, and provide you, the dear reader, a singular resource to read.

For those of you who wish to review prior Megathreads, you can do so here.

While this thread is timed to coincide with Microsoft's Patch Tuesday, feel free to discuss any patches, updates, and releases, regardless of the company or product. NOTE: This thread is usually posted before the release of Microsoft's updates, which are scheduled to come out at 5:00PM UTC. Except today, because... 2026.

Remember the rules of safe patching:

Deploy to a test/dev environment before prod.
Deploy to a pilot/test group before the whole org.
Have a plan to roll back if something doesn't work.
Test, test, and test!

https://redd.it/1r1hz0s
@r_systemadmin
PSA: Visual Studio (MSDN) subscriptions don't get license keys or Azure credits anymore

Microsoft has quietly changed their benefits.

No more ISOs and license keys for Windows Server, client, Office, or any of their other on-premises products.

Download ISOs and keys while you can.

And Azure credits? They'll still be there - kinda. Now pooled centrally; not sure yet how they're awarded.

Are you rocking a homelab? Did you want to test some Configuration Manager (SCCM) edge cases? Do you have an Entra and Intune tenant with M365 licenses? Did you want to showcase some awesome solution you created?

Well, Microsoft says fuck you, pay us for more licenses.

> Azure credits are now delivered through the partner program benefit packages at the organization level, rather than being bundled with individual IDE licenses. This pooled model enables partners to plan, share, and apply Azure credits across teams and projects more effectively, reducing unused credits and improving overall utilization.

> Legacy on-premises software downloads and transferable product keys (such as Windows, Office, and server products) are no longer included with Partner Program developer benefits. These products remain available through appropriate Microsoft licensing channels.

> Legacy developer tools that are no longer aligned with modern, cloud-first development workflows have been retired in favor of current tools, services, and learning resources.

https://learn.microsoft.com/en-us/partner-center/benefits/mpn-benefits-visual-studio#whats-changed

https://redd.it/1r4t9fu
@r_systemadmin
Does the Highest Ranking IT Person in Your Company Report to the CEO?

Do you think this matters in how IT is viewed and treated at your company?

https://redd.it/1r4jn1s
@r_systemadmin
How to approach SSL certificate automation in this environment?

We've been tasked with figuring out a way to automate our SSL certificate handling. Yes, I know we're at least 10 years late. However due to reasons I'll detail below, I don't believe any sane solution really exists which fits our requirements.

Our environment

- ~700 servers, ~50/50 mix of Windows / Linux
- A number of different appliances (firewalls, load balancers etc)
- ~150 different domains
- Servers don't have outbound internet connectivity
- nginx, apache, IIS, docker containers, custom in-house software, 3rd party software
- We also use Azure and GCP and have certificates in different managed services there
- We require Extended Validation due to some customer agreements, meaning Let's Encrypt is out of the question and we need to turn to commercial providers with ACME support

So far we have managed certificate renewals manually. Yes, it's dumb and takes time. Given the shrinking certificate lifetimes, we're now looking to switch to ACME-based automation. I've been driving myself insane thinking about this for the last few weeks.

The main issue we face is that we can't just set up certbot / any other ACME client on the servers using the certificates themselves, for multiple reasons:

- A large number of our services run behind load balancers, and the load balancers perform HTTP -> HTTPS redirects with no way to configure exceptions. This means our servers can't use the HTTP-01 ACME challenge.
- Our servers have no outbound internet access, meaning we can't access our DNS provider's API for DNS-01 challenge for example.
- Even if we could, we have ~150 domains and our DNS provider doesn't offer per-zone permission management, meaning all of our servers would have DNS edit access to all of our domains, which is a recipe for disaster if any of them gets breached. So client-side ACME + DNS-01 is out of the question as well.

Given that our servers can't utilize HTTP-01 or DNS-01 ACME challenges, the only viable option seems to be to set up a centralized certificate management server which loops through all of our certificates and re-enrolls them with ACME + DNS-01 challenge. This way we can solve certificate acquisition.
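That central re-enrollment loop can start out very small. A dry-run sketch in plain sh, where the domain-list format is made up and the commented-out certbot flags are purely illustrative:

```shell
#!/bin/sh
# Dry-run sketch of a central ACME renewal loop (hypothetical setup).
# renew_all <domain-list-file>: one domain per line; blank lines and
# '#'-prefixed lines are ignored.
renew_all() {
    while IFS= read -r domain; do
        case $domain in ''|'#'*) continue ;; esac
        # A real run would invoke the ACME client here, e.g. (flags are
        # placeholders for whichever commercial CA / DNS plugin is used):
        #   certbot certonly --dns-<provider> -d "$domain"
        echo "would renew: $domain"
    done < "$1"
}
```

Driving everything from one flat domain list also gives a natural audit point: the list is the inventory of what the management server is allowed to renew.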

If we go the route of a centralized certificate management server, we then need to figure out a way to distribute the certificates to the clients. One possibility would be a push-based approach with Ansible, for example. However, we don't really have the infrastructure for that. Our servers don't have centralized user management in place, and creating local users for SSH / WinRM connections is quite the task, given that the user accounts' permissions would have to be tightened. We also run into the issue that, especially on Linux, we use such a mix of distributions from different eras that there isn't a single Ansible release that works with the different Python versions across our server fleet. Plus, a push-based approach would make the certificate management server a very critical piece of infrastructure: if an attacker got hold of it, they could easily gain local access to all of our servers through it. So a push-based approach isn't preferable.

If we look at pull-based distribution mechanisms, then we require server-specific authentication, since we want to limit the scope of a possible breach to as few certificates as possible: every server should only have access to the certificates it really needs. For this permission model, probably the best-suited choice would be SFTP. It's supported natively by both Linux and Windows and allows keypair authentication. This creates some annoying workflows of "create a user account per client server on the certificate management server, with accompanying chroot jail + permission shenanigans", but that's doable with Ansible, for example. In this case I imagine we'd symlink the necessary certificate files into the chrooted server-specific SFTP directories, and clients would poll the certificate management server for new certificates via cron jobs / scheduled tasks. OK, this seems doable, albeit annoying.

Then we come to handling the client-side automation. OK, let's imagine we have the cron jobs / scheduled tasks polling for new certificates from the certificate management server. We'd also need accompanying scripts to handle service restarts for the services using these certificates. Maybe the poller script should invoke the service restart scripts when it detects that a new version of any of the certificate files is present on the cert mgmt server and downloads them.
Then we come to the issue that some servers may have multiple certificates and/or multiple services using these certificates. One approach would be a configuration file with a mapping table: "certificate x is used by services y and z, certificates n and m are used by service i", etc. However, that sounds awful; maintaining such mapping tables does not spark joy. The alternative would be to just say "fuck it, when ANY certificate has changed, run ALL of the service reload scripts". That way we wouldn't need any cert -> service mapping tables, but in some cases it'd lead to unnecessary downtime for specific services where reloading causes application downtime. Maybe that's an acceptable outcome, not sure yet.
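For what it's worth, the mapping-table variant doesn't have to be elaborate. A sketch of the two client-side helpers in plain sh, where the map-file format and every name are invented for illustration:

```shell
#!/bin/sh
# Sketch of client-side poller helpers; paths and file layout are hypothetical.

# changed <file> <statedir>: succeed (exit 0) if <file>'s checksum differs
# from the last recorded one, and record the new checksum either way.
changed() {
    state="$2/$(basename "$1").sha256"
    new=$(sha256sum "$1" | awk '{print $1}')
    old=$(cat "$state" 2>/dev/null || true)
    printf '%s\n' "$new" > "$state"
    [ "$new" != "$old" ]
}

# services_for <certname> <mapfile>: print the services using a certificate.
# Map format, one entry per line: certname service1 service2 ...
services_for() {
    awk -v c="$1" '$1 == c { for (i = 2; i <= NF; i++) print $i }' "$2"
}
```

The poller would call `changed` on each downloaded certificate and, only on a change, run the reload scripts named by `services_for`, which avoids the "reload everything" downtime without the table growing beyond one line per certificate.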

But the biggest problem I see with this approach is actually managing the client-side automation scripts. As described earlier, we can't really rely on Ansible to deploy these scripts to target hosts due to the Python version mismatches across our fleet. But I'd still want some sort of centralized way to deploy new versions of the client scripts across our fleet, since it's not particularly unimaginable that edge cases will pop up every now and then, requiring us to deploy a new version of, say, an IIS reload script across the fleet. It'd also be nice to have a single source of truth telling us exactly where the different service reload scripts have been deployed (just relying on documentation for this will result in bad times).

So to combat that problem... more SFTP polling? This is where the whole thing starts to feel way too hacky. The best answer I've come up with is to also host the client-side scripts on the certificate server and deploy them to clients via the same symlink + client-side poller setup. That way we can see on the certificate server which servers use which service reload scripts, and updating them en masse is easy. But this also feels like something we really should not do..

Initially I thought we should just save the certificates to a predefined location like /etc/cert-deploy/ and configure all services to read their certificates from there, rather than deploying the certificates to custom locations on every server. However, I now realize that brings permission / ownership problems. How does the poller script know which user the certificates should be chowned to? It doesn't. So either we'd need local "ssl-access" groups, to which we'd attempt to add all sorts of generic www-data, apache, nginx etc. accounts, and chgrp the cert files to that group; or the service reload scripts would have to re-copy the certs to another location and chown them to the user account they know the certs will be used by. Or another mapping table for the poller script. Yay, more brittle complexity regardless of choice.
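The group-based option keeps the staging step itself dumb. A sketch, where the destination layout and the "ssl-access" group are assumptions (the chgrp is left as a comment since it needs root and a pre-created group):

```shell
#!/bin/sh
# deploy_cert <src> <dest>: stage a polled certificate with tight permissions.
deploy_cert() {
    # Write a temp copy with the final mode, then rename into place, so a
    # service never reads a half-written certificate file.
    install -m 640 "$1" "$2.tmp" && mv "$2.tmp" "$2"
    # In production, hand group ownership to the readers, e.g.:
    #   chgrp ssl-access "$2"
}
```

With mode 640 plus a shared read group, the poller never needs to know which service account ultimately reads the file, which removes one of the mapping tables entirely.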

At this point, if we go with an approach like this, I'd also want some observability into the whole thing. Some nice UI showing when each client last polled its certificates: "oh, this server hasn't polled its certificates for 10 days, what's up with that?", etc. Parsing that information from SFTP logs and displaying it on some web server is of course doable, but once again one starts to ask oneself, "are we out of our minds?".

I even went as far as drafting a Python web server to replace the whole SFTP-based approach. Instead, clients would send requests to the application, providing a unique per-client authentication token which must match the token stored in a database. The application would then let the client download the certificates and service reload scripts, and it'd make showing client connection statistics easier, etc. However, my coworker thankfully managed to convince me that this is a really bad idea from both a maintainability and an auditing POV.

So, to sum it all up.. How should this problem actually be tackled? I'm at a loss. All solutions I can come up with seem hacky at best and straight up horrible at worst. I can't imagine we're the only organization battling with these woes, so how have others in a similar boat overcome these problems?

https://redd.it/1r4ttqo
@r_systemadmin