Not boring, and a bit of a condescending prick
Semi-digested observations about our world, right after they are phrased well enough in my head to be shared more broadly.
Yes: Scan this QR code to pay this bill with Apple Pay.

But: “Type in the table number or check number to continue.”

Folks, you do know QR stands for Quick Response, right? It’s kinda a crime against humanity to not encode this very check number in the QR code printed on the very check, if you ask me.
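
For the record, it takes one line. A sketch with the qrencode CLI and a made-up payment URL — bake the check number into the payload, and the "type it in" step disappears:

qrencode -o check-1337.png 'https://pay.example.com/bill?check=1337'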
😁71
Unpopular opinion: I'm starting to respect Yaml.

First, it's JSON-compatible, as in every JSON is a valid Yaml. Which means anything inside a Yaml doc can just be a JSON, literally copy-pasted inside. And which means everything that accepts a Yaml will by extension accept a JSON.

Second, it supports comments and stuff.

Third, I love jq and I instinctively typed in yq once — and it did exactly what I expected it to do. Moreover, yq -o json, or just yq -oj, will make it output JSON, nicely formatted, and colored just differently enough to see it's not jq.

Furthermore, yq -P pretty-prints any Yaml, which by extension includes any JSON. It's just more human-readable, with no extra lines for closing } and ], and yet it's 100% machine-readable. Even package.json reads better after | yq -P.

In Python, yaml.safe_load would load the Yaml doc just like json.loads loads the JSON. All the more reason to keep BaseModel-validated configs as Yaml-s, not JSON-s. They are, after all, backwards-compatible.

Finally, there are Yaml Document Streams, which are just better than my now-second-favorite one-JSON-per-line, JSONL, format. I'd definitely prefer it when human-readability is part of the requirements, or at least a nice-to-have.
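
A minimal sketch of all of the above, assuming mikefarah's yq v4+ (the file name is made up):

echo '{"name": "demo", "deps": ["jq", "yq"]}' > config.json

yq -P config.json    # valid JSON is valid Yaml, so this pretty-prints it: no closing } or ]
yq -oj config.json   # and back to nicely formatted JSON

printf 'n: 1\n---\nn: 2\n' | yq '.n'    # document streams: every doc gets processed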
👍1
I got curious recently. With developed countries — the UK among them — tightening laws around VPN usage, how does this actually work for employees of overseas corporations who are required to use a corporate VPN to access company resources?

Surprisingly, this is hard to research. Most online answers try to solve a different problem entirely: whether employers can track where employees log in from. That is not my question.

I am not trying to trick employers. Quite the opposite — I want employers to give employees the freedom to use the Internet as it was intended.

Consider a simple scenario. Someone travels to the UK frequently, but works for a company registered in, say, the Cayman Islands. Per their contract, during business hours they are expected to spend several hours connected to a corporate VPN terminating in Cayman.

Now add a policy amendment. The company:

∙ does not keep VPN logs, and

∙ explicitly encourages employees to use the corporate VPN whenever not doing so could put company business at risk.

During orientation — which, naturally, happens in Cayman! — this is explained plainly. There may be content that is legal in Cayman but problematic when accessed while traveling in the UK. The company wants its employees safe, comfortable, and able to do their jobs without unnecessary exposure.

So the guidance is simple: if you are unsure, use the corporate VPN. The cost is negligible. The risk reduction is not. Better that traffic stays private than visible to hotel staff, local ISPs, or anyone else who does not need to see it.

Employees comply. They use corporate hardware. They use the corporate VPN — as required. From the UK ISP’s perspective, they are simply connected to a Cayman endpoint. Work traffic, personal email, private messages during natural breaks in the workday — all indistinguishable.

So where is the catch?

To be clear, I am not endorsing using VPNs to break laws. This is a thought experiment. If someone connects to a VPN specifically to access content they are forbidden to access locally, that is not defensible. But that is not what this scenario is about.

What, then, is the status quo?

Will the UK refuse to allow people to connect to corporate VPNs unless those VPNs provide government backdoors? Will it make it illegal for foreign companies to operate in the UK without traffic inspection capabilities?

I am trying to understand where the line is supposed to be between:

∙ protecting traffic for legitimate reasons — corporate security, privacy, risk management, and

∙ protecting traffic for questionable reasons — accessing things one should not.

These two are technically indistinguishable.

No country is trying to stop visitors from China from reading Wikipedia. China may disagree, and China may want to enforce its own rules later — that is a separate issue. But my hypothetical runs in the opposite direction. The Cayman Islands is a reputable jurisdiction that happens to trust its people to know what not to look for online.

So what is the right moral compass here? And more importantly — where do we expect this to go over the next few years?

Because the Internet does not recognize borders. But laws increasingly pretend that it does.

PS: I do not know whether the Cayman Islands allow online adult content. But my hypothetical argument should hold regardless.
Looks like my most valuable software development & architecture skill of the past ~ten years is indeed only getting more valuable.

I love producing small, clean, self-contained examples. To understand various concepts better, to explain them better, and to ultimately pick which ones to use and which ones to ditch.

And this skill is very, very well aligned with AI-assisted coding!

Because the AI can hack up most simple examples well, and it can tweak them almost perfectly and almost instantly. What it lacks is the sense of beauty.

Both in clarity — is the code aesthetically pleasing to read? And in durability — if we introduce this code to a team of fellow humans, will it proliferate through the codebase in a good way, or will it grow like a bad tumor?

Perhaps in 5+ years my full-time job will be trying out various patterns with and without AI, and labeling them — manually, with experts, with the general public, and with, well, other AIs.

And then maybe people like me will be designing programming languages for the 21st century — because we're long overdue.
🔥42👍2
I’m sincerely wondering: are there high-profile tech companies that explicitly focus on doing more work async?

It’s kind of trivial. “Let’s have a call about this” should be declared a sign of unprofessionalism — if not outright a banned phrase — for lack of, well, empathy and integrity.

There would be a culture of emails, and a culture of not expecting immediate answers. A culture of doing one’s own research, and a culture of asking for help politely, at the right time, with the right granularity when providing context.

There would be mid- to long-form documents, and a culture of keeping them up to date. Documents with collapsible sections that contain non-trivial yet essential details, for those who need to dig deeper.

There would be scheduled meetings, within the team and cross-team. Single-digit hours per week. Agreed upon by everybody.

And introducing a new meeting — unless it’s consensual right away — would require some formal “board approval.” Any and all direct or indirect pressure to make new meetings happen, or to somehow guilt others into joining unnecessary meetings, would be hunted down and promptly eliminated.

What’s not to love about this?
2😁1
Just had my first experience with Github Copilot code reviewing my code.

TL;DR: It sucks.

7 comments. One legit — it caught the typo of ${1?: where ${1:? was meant, which I fixed. The other six are just "this will not work because paths blah blah blah", while in reality I've triple-checked the code myself, and one thing I definitely am sure of is that it works under five different setups.
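
For context, a minimal sketch of what the corrected expansion does:

#!/bin/sh
# ${1:?msg} aborts with msg when $1 is unset or empty;
# the mistyped ${1?: would only trip on "unset", with a stray colon glued to the message.
dir="${1:?usage: $0 <dir>}"
echo "working in ${dir}"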

I stand by my position: it's not impossible that humans will be useful as those meat bags with brains who actually care to understand what is going on behind the scenes. While the value of "making changes" to the code will continue hitting rock bottom.
👍4
Is there a quick way to have Linux support MacOS keyboard shortcuts?

I'm a Mac user now, but I still love Linux. My keyboard is wireless. With one keystroke it goes from one laptop to another and back.

It'd be great to use the Cmd+C / Cmd+V, as well as Cmd+Enter, Cmd+L, etc. on Linux. Ideally, without even having to flip the physical Win/Mac switch on the keyboard.

Realistically, I can use any Linux these days. Everything is in Docker anyway. So if Ubuntu/Debian is not great for this purpose, I'm willing to give something else a shot. It's the New Year's week after all, might as well cheer the nerd in me up.

So just some zero-configuration, reliably working way to have Mac shortcuts work on Linux would be great. I've tried manual mapping, but it's more painful and more fragile than I anticipated. Although if there is a tool or a script or a Github repo for Ubuntu that does the trick, I'd give it a try first.
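
The closest candidate on my list so far is keyd, a low-level remapping daemon. Heavy hedging applies: I have not verified this yet, and the config below is reconstructed from its docs — a hypothesis, not a recipe. Mapping the Mac's Cmd key (leftmeta) to act as Ctrl would cover Cmd+C / Cmd+V and friends:

sudo apt install keyd    # assuming your distro packages it; otherwise it builds from source

printf '[ids]\n*\n\n[main]\nleftmeta = layer(control)\n' | sudo tee /etc/keyd/default.conf

sudo systemctl enable --now keyd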

What do folks like me do these days?
🔥3
I stopped myself from writing a long post on Docker, but here's the most interesting part.

First, docker leaks containers.

Consider this inc.sh:

#!/bin/sh
# Reads the counter from /tmp/n.txt (a missing file counts as zero),
# increments it, prints it, and writes it back.
echo $(( $(cat /tmp/n.txt 2>/dev/null) + 1 )) | tee /tmp/m.txt && mv /tmp/m.txt /tmp/n.txt


If you run it locally multiple times it'd print one, two, three, etc.

Now consider this Dockerfile:

FROM alpine
COPY ./inc.sh /inc.sh
ENTRYPOINT ["/inc.sh"]

If you run this command multiple times, it will always print 1:
docker run $(docker build -q .)

It will also always print 1 if you do docker build -t dima . once, followed by docker run dima repeatedly.

Each of these runs yields a new container! It will not show in docker ps or docker container ls, but it will in docker container ls -a.

Alas, the universes of images and containers are easy to confuse.

Behold: docker run --name dima dima

This runs a new container and names it dima. Now there's dima the image and dima the container.

You can't do docker run --name dima dima again because the container called dima already exists, even though it has terminated a long time ago.

You can re-run it though, just docker start dima.
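
(An aside: for one-off runs, docker run --rm dima is the standard way to avoid the leak — the container is removed on exit. It would still print 1 every time, but nothing would pile up in docker container ls -a.)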

Second, docker leaks volumes.

Now add VOLUME /tmp to the end of the Dockerfile, and re-do the container:

docker container rm dima; docker run --name dima dima

Now run docker start dima several times. And say docker logs dima. Or just run docker start -i dima.

The number will keep increasing.

Because for the very container called dima there now exists a volume!

And if instead of docker start dima you run docker run dima, it will always print 1. And now we know why: because for each of these runs, a new volume is created. And leaked.

The takeaway from this point is that the universe of running-and-stopped containers exists separately from the universe of built-and-possibly-tagged images.

And then it's "trivial" to wrap one's head around. Because docker run takes an image, and docker start takes a container.

Third, docker compose silently re-uses containers.

Consider this docker-compose.yml:

services:
  dima:
    build: .

The third line might as well read image: dima.

Now run docker compose up several times. The number will keep going up!

Because while docker run creates a new container every time, docker compose will create containers once.

The "universe of docker compose container names" also exists. It is the same as the universe of docker containers, but with "tricky" naming. The default is the parent directory of docker-compose.yml, followed by a minus sign, followed by the name of the service, followed by a minus sign, followed by the index, starting from 1.

Running docker compose down and then up again will reset the counter — the recreated container gets a brand-new anonymous volume (and down -v would remove the old one for good). But who does docker compose down for one-off pipelines, right?

You could also do docker compose run dima. But you would not if your compose topology consists of several containers. Because up is the way to go.

Fourth, and this is bizarre, volumes are not pruned.

Try this:

docker compose up && docker volume prune -f && docker compose up

The command to prune volumes does not prune them! The reason: prune only removes volumes not attached to any container, and the stopped compose container still references its volume.

And there exists no simple way to prune all containers tied to a volume. Here's the "shortest" way:

for i in $(docker ps -a -q --filter volume=$VOLUME); do docker container stop $i; docker container rm -f $i; done; docker volume rm $VOLUME

This "one-liner" is literally at the beginning of my noscripts that are meant to be fast, self-contained, and reproducible.

PS: docker compose up does not rebuild images by default. So, unless you truly want to run the older version, docker compose up --build is a safe default.

PS2: Yes, this is why the use of VOLUME is discouraged in Dockerfile-s. But quite a few containers do have VOLUME-s, for instance, the postgres container. So it keeps data between runs; what's worse, it keeps table schemas too. What a wonderful footgun: your app's DB init code is broken but you're blissfully unaware!
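
Which is why, for one-off pipelines, starting from a clean slate is the safer habit:

docker compose down -v && docker compose up --build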

If you've learned something today, my half an hour of typing this was not wasted. You're welcome.
👍3🔥1
On a completely unrelated note, Veritasium's IQ Test video features Derek looking through a Russian textbook.

The top section of which appears to be the handwriting of a nine-year-old, likely some exercise homework. Although the hand does not look like Derek's, hinting at a simpler explanation.

What's not to love about this? Because surely, with this follower count, we are 100% destined to notice it.

PS: And, on another video of Veritasium today, YouTube was showing me a mental health hotline. Perhaps my browsing history does paint me as a prick when I'm feeling unwell for a day, and choose to watch something mildly educational and fun instead of good old Starcraft.
2
While I dislike Python (and prefer Rust, hehe), one thing it teaches you is that the old "enterprise-grade" Java-world "skillz" are long obsolete.

Because stuff should really a) be short, simple, and descriptive, and b) "just work" (c).

Seriously, I sincerely believe good software engineering taste says that short and clean code with fewer dependencies is generally what we need.

Cargo kicks ass in most aspects here. Python, especially with uv — which is a Rust tool by itself! — is surprisingly okay.

Did not expect myself to say this, but having to deal with TeamCity configuration via .teamcity/* gives me shivers. It's been hours, and I can't make "Hello, world!" work. On my own TeamCity instance.

I remember the times when I argued how bad Github's Yaml-based Actions config is. Well, sure, strong typing would be great there — have you considered Cargo and Rust?

But boy, using the JVM and gradle to run "workflows", only to get dozens of unreadable "Kotlin compilation errors" while all I need is to run echo 'Hello, world!'? Call me crazy, but my take is that the truly crazy side is the one that accepts how convoluted this whole thing is.

Challenge accepted. I'll do it. But it's painful af so far.
🔥2
This post carries a trivial message, but I learned the hard way that its implications are not at all obvious.

The trivial message is: Fixing LLM hallucinations is fundamentally no different from fixing similar failure modes in the human brain.

Corollary: The human brain has basic, low-level failure modes that trace back to a few misfiring neurons.

Here's my mental model. I do not claim it is correct, only that it maps reality reasonably well.

Humans share a tiny set of deeply hard-coded concepts: “good,” “fair,” “just,” “divine,” “love,” “duty,” “pleasure,” “dignity,” “loyalty,” “sanctity,” “disgust,” and a few more. They fit on two hands.

But modern civilization is far too complex and contradictory. Worse, countless actors today are aggressively “prompt-engineering” every human being for their own agendas. The cost of experimentation is near zero and the payoff enormous, so state and non-state actors have no reason not to try to “[re-]program” us. This mass-scale “civilizational programming” has reached heights unthinkable a decade or two ago. And it works.

Many things follow from this model; I will outline one minor and one major.

Minor: Remember that every person’s political and moral views reflect the nonstop nonsense they ingest. Reasonable people can debate the degree of personal responsibility to resist propaganda. But one thing is clear: most people simply repeat talking points without applying any critical scrutiny.

This is not new; what is new is the scale. Our echo chambers and propaganda engines now produce large populations who appear completely deranged — advocating agendas detached from their own lived reality and even harming themselves and their families. Activism can be noble; self-sacrifice for something worthless is emotional deficiency, not virtue.

Major: This applies to you as well — perhaps less than to most if you are reading this, but the logic stands.

No one is immune to stimuli aimed at the inner neurons of “happiness,” “safety,” “self-actualization,” etc. The only viable strategy, if sanity is a priority, is to consciously pick your echo chambers and aggressively filter emotionally charged content.

You also need resistance mechanisms — real ones, not coping mechanisms.

For example, I often find myself caring too much about the emotional state of the average human. It arguably damages my personal life. My rational brain knows exactly what restores balance: recognizing how unsalvageable many people are. Walking past a row of slot machines in Vegas and seeing hundreds of empty eyes pouring millions of dollars into pure uselessness forces me to internalize a basic truth: I cannot meaningfully extend compassion to everyone.

(Yes, gambling addiction is a real disease, and regulations exist for a reason. But most people at those machines are not addicts — they are just “regular humans,” as a friend succinctly puts it. Acknowledging that fact helps me care less emotionally, which is one of many mental tricks I utilize to stay sane.)

The takeaway is: There is nothing wrong or shameful in maintaining an arsenal of mental tricks. To live one’s own life in our increasingly hostile informational environment, we will need stronger internal tools. Begin building them early on, if only to Live Long and Prosper!
🔥3
The more I think about where the world is going, the more I'm convinced its trajectory is almost exclusively determined by the answer to one question.

Is unconstrained communication a property of the Universe, or is it a social construct?

If it’s a property of the Universe, that would simply mean that any and all at-scale censorship and speech-control mechanisms will fail. We can assume they are all ephemeral and temporary, like the Prohibition. Humankind may well eventually give up alcohol altogether, but we appear to have collectively agreed that trying to outright ban it does more harm than good.

If it’s a social construct, we have to declare that the days of free Internet are gone for good as of some ten years ago. Orwell then just happened to predict the future by generalizing a few observations well.

I know I personally would prefer to live in the world of free communication. Just imagine mesh networks that work at any reasonable distance, below any reasonable signal-to-noise ratio, completely undetectable — except to the very entity to which / to whom this particular piece of communication is directed.

Yes, I get it, such a world presents major challenges — from tearing apart the social fabric, all the way to literal military risks never heard of before. But if we manage to sustain our civilization, we’d be off to a great start, to conquer the Solar System and beyond.

And yes, I also get it that if the goal is purely to create a “safe and flourishing” world, collectively agreeing that free and unconstrained communication was just a fluke may well be the best first step.

Thankfully, we don’t have to decide any time soon. Various experiments, from European regulations to swarms of self-flying drones, are underway as we speak. We may well have time to course-correct at multiple bifurcation points if and as needed.

But I have to confess declaring free communication dead is something I would feel quite bitter about. And in quite a few corners of the world it can and should be pronounced dead today.
👍42
It’s remarkable how many solid language-design choices emerge once you commit to treating types as a zero-overhead runtime abstraction.
🥰3👍1🤔1
The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.

Edsger Dijkstra, The Humble Programmer (EWD340), Communications of the ACM, 1972.
🔥91
Writing Yaml config files used to be my least favorite activity — no schema, often no quick way to check their validity, no tab completion, no typing (of course).

On the bright side, doing this with LLMs is surprisingly pleasant.

I would not be surprised if, among the first industries to completely switch to English as the definition language, we see the one of automatic container build / publish / put-together / test / retention policy / etc.

Sure, people who self-identify as Engineers, like me, would still want English to not be the Source-of-Truth of those "scripts".

But I'm already quite comfortable with a development practice where English instructions are pushed right to the repo alongside the very LLM-generated "code".

So that a good code reviewer — me — would, of course, first and foremost check the correctness of what will be executed; but then pay at least as much, if not more, attention to how it was described, a.k.a. prompted.

And then the very history of prompts and "code reviews" and conversations on and about them, in a simple git repo / Github review tool, will become the very source-of-truth for the future LLMs to keep improving those scripts.

Definitely not bulletproof on DevOps scale. Nowhere near. But very, very good for smaller projects run by smaller teams.
👍3
Very good talk by Simon Peyton Jones.

He pretty much first describes how the idea of “consuming” instances, which culminated in Rust’s lifetimes and borrow checking, is very much a first-principles idea.

Then he says how, if framed just right, this “consuming” concept can be made “immutable”, naturally comparable to how IO abstracts away “mutating the World”, and thus almost monadic, but not quite.

Which allows integrating this concept into Haskell somewhat natively. Leveraging what we should be referring to as polymorphism of types in the first place.

And then “the rest”, which follows from the above in literally one step when looking at it from the right angle.

I also find it slightly amusing, and can totally relate to the mild bitterness, that it turned out to be Rust, not Haskell, that took this very typesystem-centric idea mainstream.

https://youtu.be/t0mhvd3-60Y
👍3
Of many thoughts I have about this train of Agentic AI that I am currently on, a very positive one keeps standing out.

Capitalism f*cking works!

For many years I used to believe the world of products for tech people was so bad that it was highly unlikely I'd be paying much for anything.

Effectively, the status quo was that mobile apps were under $5 a month, and other online services, except hosting, were under $10 a month.

Sure, I've been saying for years that products such as Facebook should offer an ad-free, API-first experience for some $20 per month, so that we could use our own custom clients. This was a utopia with Facebook, but Twitter, now X, is actually exploring a similar path.

Nonetheless, I remained under the assumption that my "work & life setup" would not get substantially better over the years. I'll still be coding in vim, creating and reviewing pull requests from the browser, and occasionally opening up an IDE to debug some nontrivial behavior where debug-prints are not enough.

And, if you'd asked me ~five years ago, I most definitely was under the assumption that I would not be paying anything substantial for any "dev tools". Perhaps some $5 monthly donation for a "better vim" or a "better code review tool", something Superhuman-like. But nothing game-changing for sure.

Fast forward to today, and AI-assisted coding is here. And it is booming. And I am using the AI every day. And I am paying more for my model usage than for all other online services. Combined.

(Except perhaps my personal hosting, domains and the cloud, but that's beside the point. And it's changing quickly as we speak.)

Very soon I may well be paying so much more for models that my very computer becomes a commodity! Personally, I'm quite attached to the idea of having my own device, but the thought of it becoming unnecessary is more and more real as we speak.

And the best part is: nobody is forcing me to use the AI. This is the perfect Invisible Hand in action.

A decade or so ago I said "screw you, Market, you can't offer me anything".

The Market seemingly said "meh, well, I don't care about you".

We were content with each other.

Until we were not! Until the market found a way to my soul. By offering, gently and with no pushing whatsoever, something that I truly want to be using — the AI.

And offering it, I should add, at a very lucrative price point. That is, so far. But, given local models are getting better and better, I believe early-2026-grade AI coding assistants have a strict upper bound on their effective monthly price, and this upper bound is both already low enough and is going down rapidly.

What say I? I say: all hail the Market! The Market — delivers.
4🔥2
TIL that in the DB schema design world, this "standard" solution exists:

“Identifying relationship via composite primary key”.

For cases such as one-to-one-to-many in databases, where the obvious solution is the third normal form, with a dedicated table to ID one-to-one pairs on ...

... instead of actually imposing 3NF under the hood, ORM systems (including SQLAlchemy!) will actually keep those non-3NF UNIQUE constraints in junction tables.

This may be a trivial piece of knowledge for you, if you're working in this space. Or, more likely, it may be totally irrelevant to you, like it was for me.

But oh my God. There exists a perfectly legal solution, and the very job of the ORM is to create a thin wrapper layer for the user, while keeping the data model clean. And instead the ORMs are quietly creating a big mess under the hood, which creates all sorts of problems down the road. Problems that would not exist at all if the ORMs were actually designed well from day one.

Oh well. Hopefully not the last big revelation of my professional life.
I'm quite happy my career includes both ML/AI and Web3.

In this day and age of more and more AI exploits coming up, the concept of human accountability is becoming increasingly important.

And what's better for individual accountability than having a proof that one's private key was used to sign some transaction?

I can't wait for the world where these two branches converge.

The engineer deploying code, or accessing production data, must use their Yubikey to sign off on their change. It's all tracked and journaled — not necessarily on-chain, but definitely in ways that enable proving, later on, who did what.
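
A minimal sketch of what this can look like today, with git and a hardware-backed GPG key (the key ID is a placeholder):

git config --global user.signingkey 0xDEADBEEF    # the private key never leaves the Yubikey
git config --global commit.gpgsign true           # sign every commit by default
git commit -S -m 'prod: rotate credentials'       # -S is now redundant, but explicit
git log --show-signature -1                       # and later: prove who did what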

And then, a few years from now, we can tell who was nice and who was naughty when it comes to diligence vs. negligence.

~ ~ ~

Back in late 90s and early 2000s, accountability in software engineering was no big thing.

I recall it vividly: the C# runtime had a bug where quick-sort would go O(N^2) on a particular corner case — which was found by the judges of some competition, and it cost a strong programmer his first place, and a decent prize; a laptop computer IIRC.

We were late teens back then, back in Russia, and we were all wondering — Microsoft sure knows who wrote that buggy code, when, and why, right?

Many years later I know it for a fact — yes, Microsoft sure does know. But it most likely did not care, because it could absolutely afford to not care. No offense taken — it's totally understandable Microsoft should not have cared back then.

(Although if I were the CEO, I'd definitely make sure to gift that young programmer a brand new laptop "as a courtesy for helping us isolate the bug". And perhaps give the problemsetters and the judges good prizes too — since they indeed were the ones who uncovered the bug.)

But with AI exploits here, there, and everywhere, I'm optimistic to live long enough and witness the world where not caring will no longer be an option.

Accountability — matters.

And the Web3 folks know best when it comes to how to institutionalize it.
👍1