TechLead Bits
About software development with common sense.
Thoughts, tips and useful resources on technical leadership, architecture and engineering practices.

Author: @nelia_loginova
Kafka 4.0 Official Release

If you’re a fan of Kafka like I am, you might know that Kafka 4.0 was officially released last week. Besides being the first release that operates entirely without Apache ZooKeeper, it also contains some other interesting changes:

✏️ The Next Generation of the Consumer Rebalance Protocol (KIP-848). The team promises significant performance improvements and no more “stop-the-world” rebalances (a config sketch follows this list).
✏️ Early access to the Queues feature (I already described it here)
✏️ An improved transactional protocol (KIP-890) that should solve the problem of hanging transactions
✏️ The ability to whitelist OIDC providers via the org.apache.kafka.sasl.oauthbearer.allowed.urls property
✏️ Custom processor wrapping for Kafka Streams (KIP-1112), which should simplify sharing common code across different stream topologies
✏️ Changed values for some default parameters. This is actually a public contract change with potential issues during upgrade, so be careful with it - KIP-1030
✏️ A lot of housekeeping was done, so this version removes many deprecations:
- v0 and v1 message formats were dropped (KIP-724)
- Kafka client versions <= 2.1 are no longer supported (KIP-1124)
- APIs and configs deprecated prior to version 3.7 were removed
- The old MirrorMaker (MM1) was removed
- Support for old Java versions was dropped: clients now require Java 11+, brokers Java 17+
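
To make the KIP-848 item concrete, here is a minimal consumer sketch. Treat it as a hedged illustration: the confluent-kafka client and the exact config values are my assumptions, not something prescribed by the release notes; the only release-specific part is the group.protocol opt-in described in the KIP.

```python
# A minimal sketch (my assumption, not from the release notes): opting a
# consumer into the KIP-848 rebalance protocol, assuming a client build
# that already supports it (e.g. a recent confluent-kafka).
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo-group",
    # KIP-848 opt-in: the classic protocol remains the default,
    # so this single setting is the whole change on the client side.
    "group.protocol": "consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["demo-topic"])

msg = consumer.poll(5.0)  # rebalances now happen incrementally,
if msg is not None and not msg.error():  # without "stop-the-world" pauses
    print(msg.value())
consumer.close()
```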

The full list of changes can be found in the release notes and the official upgrade recommendations.

The new release looks like a significant milestone for the community 💪. As always, before any upgrade I recommend waiting for the first patch versions (4.0.x), which will probably contain fixes for the most noticeable bugs and issues.

#engineering #news #kafka
Netflixed - The Epic Battle for America's Eyeballs

Recently I visited a bookshop to pick up a pocket book to read during a long flight. I noticed something with the word Netflix on the cover and decided to buy it. It was Netflixed: The Epic Battle for America's Eyeballs by Gina Keating.

Initially I thought it was a book about technology or leadership, but it turned out to be the story of Netflix's road to success. The book was published in 2013, but it's still relevant as Netflix remains a leader in online streaming today.

The author tells Netflix’s history, starting from an online DVD rental service and moving to online movie streaming. The main part of the book focuses on Netflix’s competition with Blockbuster (America’s biggest DVD and media retailer at that time). It’s really interesting to see how their market and optimization strategies went through different stages of technology evolution.

I won’t retell the whole book, but there’s one moment that really impressed me. Blockbuster was one step away from beating Netflix and becoming the market leader in online movie services. But at that critical time, disagreements among Blockbuster’s top management led to the company’s collapse.

Most board members failed to see that the DVD era was ending and Internet technologies were the future. They fired the executive who drove the online program and brought in a new CEO with no experience in the domain. The new CEO decided to focus on expanding physical DVD stores and didn't want to hear about new technologies at all. That led to Blockbuster's complete bankruptcy.

What can we learn from this? Some managers cannot accept the fact that they are wrong, and a bad manager can ruin a whole business. Good leaders must listen to their teams, understand industry trends, and be flexible enough to adapt to change. For me, the book felt like a drama, even though I already knew how it ends.

#booknook #leadership #business
ReBAC: Can It Make Authorization Simpler?

Security in general - and authorization in particular - is one of the most complex parts of big tech software development. At first look, it seems simple: invent some roles, add verification at the API level, and to make it configurable, put the mapping somewhere outside the service. Profit!

The real complexity starts at scale, when you need to map hundreds of services with thousands of APIs to hundreds of e2e flows and user roles. Things get even more complicated when you add dynamic access conditions like time of day, geographical region, or contextual rules. And you should present that security matrix to the business, validate it, and test it. In my practice, that's always a nightmare 🤯.

So from time to time I check what the industry has to offer to simplify authorization management. This time I watched the talk Fine-Grained Authorization for Modern Applications from NDC London 2025.

Interesting points:
✏️ Introducing ReBAC - relationship-based access control. This model calculates and inherits access rules based on relationships between users and objects (a toy sketch follows this list)
✏️ To use the approach, a special authorization model should be defined. It's a kind of YAML-like configuration that describes the types of entities and their relationships.
✏️ Once you have a model, you can map real entities onto it and set allow/deny rules.
✏️ The open-source tool OpenFGA already implements ReBAC. It even has a playground where you can test and experiment with authorization rules.
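
To make the relationship idea tangible, here is a toy Python sketch of a ReBAC check. This is my own illustration, not OpenFGA's actual API or model syntax: the tuples, the inheritance rule, and all the names are hypothetical.

```python
# Toy ReBAC sketch (hypothetical, not the OpenFGA API): access is derived
# from relationship tuples instead of static role-to-API mappings.

# (object, relation, subject) tuples - the "relationship graph".
tuples = {
    ("folder:reports", "owner", "user:anna"),
    ("doc:q1-report", "parent", "folder:reports"),
}

def check(obj: str, relation: str, user: str) -> bool:
    """Return True if `user` has `relation` on `obj`."""
    # Direct relationship.
    if (obj, relation, user) in tuples:
        return True
    # Inherited rule: whoever owns a document's parent folder can view it.
    if relation == "viewer":
        for (o, r, parent) in tuples:
            if o == obj and r == "parent":
                return check(parent, "owner", user)
    return False

print(check("doc:q1-report", "viewer", "user:anna"))  # True - inherited via folder owner
print(check("doc:q1-report", "viewer", "user:bob"))   # False - no relationship path
```

The point is that anna's access is derived from a relationship path, not assigned explicitly, while bob gets nothing simply because no path leads to him.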

The overall idea may sound interesting, but the new concept still doesn't solve the fundamental problem - how to manage security at scale. It's just yet another way to produce thousands of authorization policies.

The author mentioned that the OpenFGA implementation is inspired by Zanzibar, Google's authorization system. There is a separate whitepaper that describes the main principles of how it works, so I added it to my reading list and will probably publish some details in the future 😉.

#architecture #security
Technology Radar

At the beginning of April, Thoughtworks published a new version of the Technology Radar with the latest industry trends.

Interesting points:

✏️ AI. There is significant growth of the agentic AI approach in technologies and tools, but all of them still work in a supervised fashion, helping developers automate routine work. No surprises there.

✏️ Architecture Advice Process. Architecture decision-making is moving to a decentralized approach where anyone can make any architectural decision after getting advice from people with the relevant expertise. The approach is based on Architecture Decision Records (ADRs) and advisory forum practices. I made a short ADR overview here.

✏️ OpenTelemetry Adoption. The most popular tools in the observability stack (e.g. Loki, Alloy, Tempo) added native OpenTelemetry support.

✏️ Observability & ML Integration. Major monitoring platforms embedded machine learning for anomaly detection, alert correlation and root-cause analysis.

✏️ Data Product Thinking. With extended AI adoption, many teams have started treating data as a product with clear ownership, quality standards, and a focus on customer needs. Data catalogs like DataHub, Collibra, Atlan, or Informatica are becoming more popular.

✏️ GitLab CI/CD was moved to the Adopt ring.

Of course, there are many more items in the report, so if you're interested, I recommend checking it and finding the trends relevant to your tech stack.

Since this post is about trends, I'll share one more helpful tool - StackShare. It shows the tech stacks used by specific companies and how widely a particular technology is adopted across different companies.

#news #engineering
Measuring Software Development Productivity

The more senior your position, the more you need to think about how to communicate and evaluate the impact of your team’s development efforts. The business doesn't think in features and test coverage; it thinks in terms of business benefits, revenue, cost savings, and customer satisfaction.

There was an interesting post on this topic in the AWS Enterprise Strategy Blog called A CTO’s Guide to Measuring Software Development Productivity. The author suggests measuring development productivity in 4 dimensions:

✏️ Business Benefits. Establish a connection between a particular feature and the business value it brings. Targets must be clear and measurable - for example, “Increase checkout completion from 60% to 75% within three months” instead of “improve sales”. When measuring cost savings from automation, track process times and error rates before and after the change to show the difference.

✏️ Speed To Market. This is the time from requirement to feature delivery in production. One of the tools that can be used here is value stream mapping: you draw your process as a set of steps and then analyze where ideas spend time - in active work or waiting for decisions, handoffs, and approvals (a toy calculation follows the list). This insight helps you plan and measure future process improvements.

✏️ Delivery Reliability. This dimension is about quality: it covers reliability, performance, and security. You need to translate technical metrics (e.g., uptime, RPS, response time, number of security vulnerabilities) into business metrics like application availability, customer experience, security compliance, etc.

✏️ Team Health. A burned-out team cannot deliver successful software. The leader should pay attention to teams juggling too many complex tasks, constantly switching between projects, and working late hours. These problems predict future failures. Focused teams are a business priority.
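
As an illustration of the value-stream idea, here is a toy calculation with hypothetical numbers. It is not from the AWS post; it just shows the kind of insight the mapping gives: waiting time usually dwarfs active work.

```python
# A toy value-stream sketch (hypothetical numbers): for each step we track
# active work vs. waiting, then compute lead time and flow efficiency.
steps = [
    ("analysis",    {"active_days": 2, "wait_days": 5}),
    ("development", {"active_days": 8, "wait_days": 3}),
    ("code review", {"active_days": 1, "wait_days": 4}),
    ("deployment",  {"active_days": 1, "wait_days": 6}),
]

active = sum(s["active_days"] for _, s in steps)
waiting = sum(s["wait_days"] for _, s in steps)
lead_time = active + waiting

print(f"lead time: {lead_time} days, active: {active}, waiting: {waiting}")
print(f"flow efficiency: {active / lead_time:.0%}")  # 12/30 = 40%
```

With numbers like these, the improvement target is obviously the waiting in handoffs and approvals, not how fast people type.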

The author's overall recommendation is to start with small steps, dimension by dimension, carefully tracking your results and sharing them with stakeholders at least monthly. Strong numbers shift the conversation from controlling costs to investing in growth.

From my perspective, this is a good framework for communicating with the business in their own language.

#leadership #management #engineering
Are Microservices Still Good Enough?

There was a lot of hype around microservices for many years. Sometimes they are used for good reasons, sometimes not. But it looks like the time of fast growth has come to an end, and companies have started to focus more on cost reduction. That promotes a more pragmatic approach to architecture selection.

One of the recent articles about this topic is Lessons from a Decade of Complexity: Microservices to Simplicity.

The author starts with the downsides of microservice architecture:
✏️ Too many tiny services. Some microservices become too small.
✏️ Reliability didn't improve. One small failure can trigger a cascading failure across the system.
✏️ Network complexity. More network calls produce higher latency.
✏️ Operational and maintenance overhead. Special deployment pipelines, central monitoring, logging, alerting, resource management, upgrades coordination. This is just a small part of what's needed to serve the architecture.
✏️ Poor resource utilization. Microservices can be so small that even 10 millicores are not utilized, which makes cluster-wide resource management ineffective.

Recommendations for selecting an architecture:
✏️ Be pragmatic. Don’t get caught up in trendy architecture patterns, select what's really needed for your task and team now.
✏️ Start simple. Keeping things simple saves time and pain in the long run.
✏️ Split only when needed. Split services when there’s a clear technical reason, like performance, resource needs, or special hardware.
✏️ Microservices are just a tool. Use them only when they help your team move faster, stay flexible, and solve real problems.
✏️ Analyze tradeoffs. Every decision has upsides and downsides. Make the best choice for your team.

Additionally, the author shared a story where he and his team consolidated hundreds of microservices into larger ones, reducing the total number from hundreds to fewer than ten. This helped cut down alerts, simplify deployments, and improve infrastructure usage. Overall, solution support became easier and less expensive.

I hope that the cost effectiveness of technical decisions has finally become a new trend in software development 😉.

#engineering #architecture
The Pop-up Pitch

Do you ever have situations when you need to sell your ideas to management? Or explain your solution to the team? Or convince someone to go with a chosen approach?
The Pop-up Pitch: The Two-Hour Creative Sprint to the Most Persuasive Presentation of Your Life is a really helpful book on how to do that, from the master of visualization Dan Roam (an overview of his visualization book is here).

As you can guess from the book's name, it is focused on creating persuasive presentations. As a base, the author uses storytelling principles, sketching, simplicity, and emotional involvement to capture the audience's attention.

Main Ideas:

✏️ To make a meeting successful, you need to define its purpose. The pop-up pitch is focused on meetings to present new ideas and meetings for sales (to request an action).

✏️ Every meeting is about persuasion. The most effective approach is positive persuasion, when you don't put pressure on people but attract and emotionally involve them. Positive persuasion consists of 3 elements:
1. Benefits. The presenter truly believes the idea is beneficial for the audience.
2. Truth. The idea is something the audience actually wants.
3. Achievability. It can be done with a clear step-by-step plan.

✏️ Visual Decoder. You should always start preparation by describing your idea along the following dimensions:
- Title – What’s the story about?
- Who? What? – Main characters and key elements
- Where? – Where things happen and how people/parts interact
- How many? – Key numbers and quantities, measurements
- When? – Timeline and sequence of events
- Lessons Learned – What the audience should remember at the end

✏️ Pitch. It's a 10-minute presentation based on storytelling techniques. Your story should have elements of drama, ups and downs. The whole storyline consists of 10 steps:
1. Clarity. A clear and simple story script.
2. Trust. Establish a connection with the audience, show that you understand their problems.
3. Fear. Problem explanation.
4. Hope. Show what successful results could look like.
5. Sobering Reality. We cannot keep doing the same thing; we need to change the approach to achieve different results.
6. Gusto. Offer a solution.
7. Courage. Show the result is achievable with key steps and a clear plan.
8. Commitment. Explain what actions are needed.
9. Reward. Show what the audience can get in the near future.
10. True Aspiration. The big long-term win.

The book is well-structured, with step-by-step guidelines for applying the recommendations in practice. It made me rethink my approach to some meetings and remember that the most important thing in a presentation is not what you want to say, but what the audience is ready to hear.

#booknook #softskills #presentationskills #leadership
Template for storyline of the pitch

Story Sample:
1. Your upcoming presentation deserves to be amazing.
2. Take a deep breath. A big presentation is coming up.
3. But how do you grab people’s attention?
4. Imagine how great it can be.
5. The same old “as usual” presentation doesn't work anymore.
6. Maybe it’s time to try something new?
7. There’s a simple way.
8. You only need three things…
9. It’s the pop-up pitch!
10. What do you have to win?

Source: https://www.danroam.com/

#booknook #softskills #presentationskills #leadership
The Subtle Art of Support

At IT conferences the main focus is usually on how to build a spaceship (or at least a rocket! 😃) with the latest technologies and tools. Everyone enjoys writing new features, but almost nobody is excited about fixing bugs. That's why I was really surprised to see a talk about support work - The subtle art of supporting mature products.

The author shared her experience organizing an L4 support team. Most of the recommendations are actually quite trivial - organize training sessions, improve documentation, talk with your clients, etc. But the idea of having fully separate support and development teams really confused me.

From my point of view, such a model makes sense for one-time project delivery only: you develop something, deliver it to the customer, do a support hand-over, and move on. But it's totally wrong for actively developed products.

In this case, separating support from development breaks the feedback loop (we're talking about L4 product support, of course). You simply cannot improve a product the right way if you're not in touch with your customers and their pain. Support is a critical activity for the business: nobody cares about new features if existing features don't work.

I prefer a model where a team owns a product or component, meaning the team is responsible for both development and support. The better quality you have, the more capacity you can spend on feature development. Such a model creates really good motivation to work on internal stability, process optimization, and overall delivery quality.

One of the simplest ways to implement this approach is to rotate people between support and development work for a sprint, a few sprints, or a full release. In my practice, a schema with 2-3 sprints works quite well.

Of course, I often hear the argument that support requires a lot of routine communication because users just "don’t use features correctly", and therefore it should be done by other people. But for me, that’s a sign that something is wrong: the product is hard to use, the documentation is poor, test cases are missing, etc. That's exactly the place to do some analysis and make improvements. And in the era of GenAI, teams can automate a lot of the support routine and focus on making their products really better.

#engineering #leadership
NATS: The Opensource Story

Open-source projects play a key role in modern software development. They are widely used in building commercial solutions: we all know and actively adopt Kubernetes, PostgreSQL, Kafka, Cassandra, and many other really great products. But open source comes with a risk - the risk that one day a vendor will change the license to a commercial one (remember the story around Elasticsearch 😡?).

If a project becomes commercial, here's what can be done:
✏️ Start paying for the product
✏️ Migrate to an open-source or home-grown alternative
✏️ Freeze the version and provide critical fixes and security patches on your own
✏️ Fork the project and start contributing to it

The actual cost of the decision will vary depending on the product's importance and complexity. But in any case, it means extra costs and effort.

That’s why, when choosing open-source software, I recommend paying attention to the following:
✏️ Community: check the activity in the GitHub repo, response time to issues, release frequency, and the number of real contributors
✏️ Foundation: if a project belongs to the Linux Foundation or the CNCF, the risk of a license change is very low

That's why I found the story around NATS (https://nats.io/) really interesting. Seven years ago, the NATS project was donated by Synadia to the CNCF. Since then, the community has grown to ~700 contributors. Of course, Synadia continues to play an important role in NATS development and its roadmap.

But in April, Synadia officially requested to take the project back, with plans to change the license to the Business Source License. From the CNCF blog:
Synadia’s legal counsel demanded in writing that CNCF hand over “full control of the nats.io domain name and the nats-io GitHub repository within two weeks.”

This is the first attempt I've seen to take a project back entirely and exit a foundation. If it succeeds, it will create a dangerous precedent in the industry and kill trust in open-source foundations. After all, they exist exactly to prevent such cases and to protect against vendor lock-in.

The good news is that on May 1st the CNCF defended its rights to NATS and reached an agreement with Synadia to keep the project within the CNCF under Apache 2.0. In this story the CNCF demonstrated its ability to protect its projects, so belonging to the CNCF remains a good indicator when choosing open-source projects for your needs.

#news #technologies
Backup Strategy: Identify Your Needs

Data loss is one of the biggest business risks in running modern software systems. And to be honest, it's rarely caused by infrastructure failures. In most real-life cases, it's the result of a buggy upgrade or a human mistake that accidentally corrupts the data.

That's why properly organized backups are the foundation of any disaster recovery plan (a DR strategies overview can be found here).

If you have a monolith deployed on a single VM, the strategy is simple: just perform a full VM backup. But for microservice solutions with hundreds of services, different types of databases, and other persistent storage, the task becomes non-trivial.

To build your own backup strategy, you need to answer the following questions:

✏️ What types of data sources do you have? They can be databases, queues, file and blob storage, VMs, and other infrastructure components.
✏️ What data is business critical? Classify data by criticality; different types of data can have different RPO requirements and may even tolerate some data loss.
✏️ Is the data primary or derived? Some data can be reproduced from other sources (search indexes, streaming data, deployment configuration, etc.), and it's cheaper to restore it from the original source than to back it up and fix consistency issues.
✏️ What are the RPO and RTO requirements? These determine your backup frequency: for example, if the RPO is 15 minutes, you'll need to schedule backups at least every 15 minutes (see the sketch after this list).
✏️ Are there any compliance rules? Some regulations require keeping data for a specific period of time (e.g., billing and revenue data, personal data). That mostly impacts backup retention policies and the required hardware.
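
Here is a toy sketch of how those answers might drive a schedule. The data classes and numbers are hypothetical; the point is that RPO and reproducibility, not habit, should dictate backup frequency.

```python
# A toy sketch (my own illustration, not from the post): deriving backup
# frequency from RPO per data class. Class names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class DataClass:
    name: str
    rpo_minutes: int    # max tolerable data loss
    reproducible: bool  # can it be rebuilt from a primary source?

catalog = [
    DataClass("orders-db", rpo_minutes=15, reproducible=False),
    DataClass("search-index", rpo_minutes=24 * 60, reproducible=True),
]

for d in catalog:
    if d.reproducible:
        # Cheaper to rebuild than to back up and fix consistency issues.
        print(f"{d.name}: no backup, restore from the primary source")
    else:
        # Backups must run at least as often as the RPO allows.
        print(f"{d.name}: schedule backups every <= {d.rpo_minutes} min")
```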

Based on these answers, you can choose suitable backup types, schedules, and recovery and testing strategies. More about that in future posts 😉

#engineering #systemdesign #backups
Backup Strategy: Choosing the Right Backup Type

When backup requirements are collected and data is classified, it's time to choose the backup frequency and type (a restore-chain sketch follows the list):

✏️ Full Backup. A complete copy of the data is created and sent to another location (a different data center or cloud region). The approach is simple but time- and resource-consuming. Full backups are usually done daily or weekly.

✏️ Incremental Backup. It saves only the changes made since the last backup. As the data volume is relatively small, this approach is fast and consumes less storage. But the recovery procedure takes more time, as more backup files need to be applied: the full backup plus each increment. That's why it's often combined with daily or weekly full backups. Incremental backups run every 15-60 minutes, depending on how much data you can afford to lose (RPO).

✏️ Differential Backup. It keeps all changes since the last full backup. This type stores more data than incremental backups, but recovery is faster, as only the full backup and the latest differential file need to be applied. It's also used in combination with full backups.

✏️ Forever Incremental Backup. A full backup is performed only once; after that, only increments are saved. To restore the data, all the incremental backups must be applied in sequence.

✏️ Synthetic Full Backup. It's an optimized version of forever incremental backups: it combines the last full backup with recent incremental backups into a new "synthetic" full backup, which speeds up recovery.
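
A toy sketch of the trade-off between the types (my illustration, not from any particular backup tool): the backup type determines the restore chain, i.e. how many files must be applied during recovery.

```python
# Toy sketch: which backup files must be applied on recovery per backup type.
from typing import List

def restore_chain(kind: str, backups: List[str]) -> List[str]:
    """`backups` is ordered oldest-to-newest and starts with a full backup."""
    full = backups[0]
    if kind == "full":
        return [full]                 # one file, slowest to produce
    if kind == "incremental":
        return backups                # full backup plus every increment, in order
    if kind == "differential":
        return [full, backups[-1]]    # full backup plus only the latest diff
    raise ValueError(f"unknown backup type: {kind}")

print(restore_chain("incremental", ["full", "inc1", "inc2", "inc3"]))
# ['full', 'inc1', 'inc2', 'inc3'] -> smaller backups, slower restore
print(restore_chain("differential", ["full", "diff1", "diff2"]))
# ['full', 'diff2']                -> bigger backups, faster restore
```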

Most cloud storage services support at least full and incremental backups. Other types often depend on the backup software you’re using. When the backup types and schedule are defined, you can also estimate backup storage size and costs.

#engineering #systemdesign #backups
Backup types visualization

#engineering #systemdesign #backups
Note Taking with Obsidian

Today I want to share my experience using Obsidian for personal productivity.

I'm a really conservative person when it comes to tools, so for many years I used handwritten notes and Notepad++. But when I got more responsibilities and teams, I realized it's difficult to keep all the contexts, plans, and agreements in my head or in a pile of files.

So I made one more attempt to find a better approach. My criteria are simple: if I can start using an app within 10 minutes and it doesn't annoy me, it's a winner 😃. Obsidian became my love from the first screen.

At first glance, it's really simple: just a tree of files and folders, markdown and tags. That's enough to start. Of course, the real power of Obsidian is in its plugins. Obsidian has a great community with hundreds of plugins for different use cases.

Finally I came up with the following configuration:

✏️ 2 Obsidian vaults:
1) Personal: my personal knowledge base. I use it with paid sync across all my devices.
2) Work: everything related to work. This vault stores data on my work notebook only (you know, NDA, security 😉).

✏️ Folders. I organize data by domains to group projects or huge topics.

✏️ Tags. It's very convenient to tag all information and tasks so they're easy to find later.

✏️ Tasks. A separate plugin that lets you create tasks and specify due dates and priorities. But the best part is the ability to query and group tasks from different files into a single view. For example, I have a Today file with the following query:
```tasks
not done
due before tomorrow
sort by due
short mode
group by tags
```

✏️ Excalidraw. A plugin to draw simple Excalidraw diagrams.

✏️ Diagrams. This plugin integrates draw.io diagrams directly into Obsidian (important: there is a bug - you need to disable Sketch style in the settings to make it work). The edit view is not as convenient as the draw.io app, so I still prepare diagrams in the native app and use the plugin for preview.

If an IDE is the working place for your code, Obsidian is the working place for your thoughts. My knowledge base keeps growing, and now I don't understand how I survived without it before 🙂.

#softskills #productivity
Secret Management Platform at Uber

Secret management is one of the biggest pain points for modern cloud applications, especially when you need to implement credential rotation across different types of secrets: OAuth2 clients, database credentials, integration secrets, etc.

Last week Uber published an article about their approach to solving this task (a toy sketch of the rotate-and-rollback loop follows the list):
✏️ Automatic scans for hardcoded passwords at the PR level
✏️ A centralized secret management platform with APIs, UI, and CLI to unify CRUD operations
✏️ A secrets inventory with information about owners, secret provider, rotation policy, deployment platform, and security impact level
✏️ Integration with HashiCorp Vault (installed per region) on-premise, and with cloud secret managers (AWS, GCP) for apps in public clouds
✏️ Secret rollout via integration with deployment systems (Uber has 3 of them)
✏️ Monitoring of new secret rollouts and failure detection
✏️ Automatic rollback to the previous secret value in case of failure
✏️ Monitoring and cleanup of orphaned secrets
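
Here is a toy sketch of the rotate-and-rollback loop from the list above. All names and interfaces are hypothetical; Uber's article describes the workflow, not this code.

```python
# A toy sketch of the rotate-verify-rollback loop (all names hypothetical,
# not Uber's actual system): roll out a new secret, watch for failures,
# and fall back to the previous value automatically.
import secrets

class InMemoryStore:
    """Stand-in for a central secrets inventory."""
    def __init__(self):
        self.values = {"db/password": "old-secret"}
    def get(self, key): return self.values[key]
    def put(self, key, value): self.values[key] = value

def errors_spiked(secret_id: str) -> bool:
    """Stub for rollout monitoring; a real system would check error rates."""
    return False

def rotate(store: InMemoryStore, secret_id: str) -> bool:
    old_value = store.get(secret_id)
    store.put(secret_id, secrets.token_urlsafe(32))  # generate the replacement
    # ... trigger rollout via the deployment systems here ...
    if errors_spiked(secret_id):
        store.put(secret_id, old_value)  # automatic rollback
        return False
    return True

store = InMemoryStore()
print(rotate(store, "db/password"))  # True - rotation kept
```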

The authors say this system allows Uber to automatically rotate around 20,000 secrets per month with no human intervention. Moreover, they mentioned that they are actively working on secretless authentication to reduce the dependency on traditional secrets. That direction sounds promising: the fewer secrets you have, the simpler they are to manage.

#engineering #usecase #security
Note Taking: Knowledge Base Approach

Recently I published a post about my overall Obsidian usage. Today I'd like to share some tips on how I organize my knowledge base (task management deserves a separate post 😉).

For the last few years I've worked as a technical leader at different levels (from 1-2 teams to a division with 5-7 teams). So it's really important for me to remember the technical context of each component under my supervision, the customers I work with, and the roadmaps, agreements, deadlines, and statuses of ongoing activities.

So I came up with the following structure:

✏️ Projects. This domain contains information about time-limited activities: key technical architecture details, requirements, limitations, milestones, stakeholders, etc.

✏️ Product. I work in a product company, so I keep notes on the product areas I’m responsible for. I organize them by components or architecture concerns like Security, Backup/Restore, Data Streaming, etc. Additionally, I have sections for common topics like release plans, research, quality tracking, etc.

✏️ References. This domain is for materials related to my professional interests. I split them into soft skills and technical skills. I usually write short summaries in my own words for quick reference and add links to the original sources.

✏️ People. As a leader I work with people on their growth, so I track communication history with all agreements, roadmaps, and feedback.

✏️ Templates. Obsidian supports note templates. I have templates for meeting minutes, ADRs, and some other cases. They help me quickly create notes with a predefined structure.

✏️ Archive. Anything that is no longer relevant. I don't delete outdated notes; I move them to the archive for history.

In addition to the folder structure, I actively use tags and cross-references between notes.

The described approach works for me, but I can't guarantee it will work for you. That's why I recommend checking 2-3 widely used techniques like Second Brain and picking one to begin with. Start with the simplest option and adapt it step by step to your needs and personal convenience.

#softskills #productivity
What Does Technical Leadership Mean?

There is a lot of speculation in the industry about the term technical leadership, especially about who a Tech Lead is and who is not. In some companies, Technical Lead is an official job title. In others, it can be a Team Lead or an Architect.

But for me, technical leadership is something broader.
Is an architect a technical leader? A staff engineer? A CTO?
My answer is yes.

From my point of view, technical leadership is the ability to set the technical vision for a team, resolve implementation conflicts, and guide teams through architectural decisions.

There is one interesting video on this topic - Level Up: Choosing The Technical Leadership Path. The author explains his vision of technical leadership:
Technical Leadership is the act of aligning a group of people in a technical context.


Common examples are resolving implementation conflicts in code review, aligning coding practices, defining technical contracts between teams and components, etc.

From that perspective, the author defines the following career paths:
1. Individual Contributor: Junior Engineer, Middle Engineer, Senior Engineer
2. Manager: Engineering Manager
3. Technical Leader: Staff Engineer, Tech Lead, Dev Lead

So the individual contributor path is quite limited, because from a certain level of seniority you need to collaborate with other people, present your ideas, and explain the reasoning behind your decisions. This requires a different set of skills: communication, leadership, empathy, ownership, delegation, coaching, etc. If you decide to grow in the technical leadership direction, you need to develop these skills just as you would any technical skill.

#leadership #career
Hashicorp Plugin Ecosystem

Back when Go didn't have a plugin package, HashiCorp implemented their own plugin architecture. The main difference from other plugin systems is that it works over RPC. At first, that might sound a bit unusual, but the approach shows really good results and is actively used in many popular products like HashiCorp Vault, Terraform, Nomad, and Velero.

Key concepts:
✏️ A plugin is a binary that runs an RPC (or gRPC) server.
✏️ The main application loads plugins from a specified directory and runs them as OS child processes.
✏️ A single connection is made between each plugin and the host process.
✏️ The connection is bidirectional, so a plugin can also call application APIs.
✏️ The plugin and the application must be on the same host and use the local network only; no remote calls are allowed.
✏️ Each plugin provides a protocol version that can be used as its API version.
✏️ A special handshake is used to establish the connection: the plugin writes its protocol version, network type, address, and protocol to stdout, and the main app uses this information to connect (see the sketch after this list).
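
Here is a toy parser for the handshake line to show the idea. The five-field shape follows go-plugin's documented format, but the code itself is my illustration in Python, not part of the library.

```python
# A toy sketch of the handshake idea (go-plugin's real format is defined by
# the library). The plugin process prints one line to stdout:
#   CORE-VERSION|APP-VERSION|NETWORK|ADDRESS|PROTOCOL
# and the host parses it to know where and how to connect.

def parse_handshake(line: str) -> dict:
    core, app, network, address, protocol = line.strip().split("|")
    return {
        "core_version": int(core),  # version of the handshake itself
        "app_version": int(app),    # plugin's API/protocol version
        "network": network,         # e.g. "tcp" or "unix" - local only
        "address": address,         # where the plugin's RPC server listens
        "protocol": protocol,       # e.g. "grpc"
    }

# Example: a plugin announced a gRPC server on a local TCP port.
print(parse_handshake("1|7|tcp|127.0.0.1:51234|grpc"))
```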

Benefits of the approach:
✏️ Plugins can't crash the main process
✏️ Plugins can be written in different languages
✏️ Easy installation - just put a binary into the folder
✏️ Stdout/stderr syncing. As plugins are subprocesses, they can continue to use stdout/stderr as usual, and their output is mirrored to the host app.
✏️ Host upgrades while a plugin is running. Plugins can be "reattached", so the host process can be upgraded while the plugin keeps running.
✏️ Plugins are secure: they have access only to the interfaces and arguments given to them, not to the entire memory space of the host process.

In a cloud ecosystem, plugins can be delivered as init containers: during startup, the plugin binary from the init container is copied into the main app container.

If you're designing a pluggable architecture, HashiCorp RPC plugins are definitely an approach to look at.

#systemdesign #engineering
How to Overcome Procrastination

I'm usually quite skeptical about books with flashy titles that promise to make you more productive, more successful, and all the other "mores". But the book Do It Today: Overcome Procrastination, Improve Productivity, and Achieve More Meaningful Things by Darius Foroux caught my attention because it recommends some practices I actively use.

Let me explain with an example.
I usually read 3-5 books at the same time (meaning I start a new book without finishing the previous one). The book selection depends on my mood, energy level, and how much time I have. Sometimes I'm ready to dig into complex engineering topics. Other times I prefer something lighter about soft skills. And if I feel exhausted and want to recharge, I pick up fiction.

It always sounds strange to people when I explain this way of reading 😃

So when I saw that reading multiple books is one of the productivity tips, I figured I have something in common with the author, and the other recommendations would probably suit me too.

The author shares 33 pieces of advice for improving personal productivity.

I'll highlight the ones most important from my perspective:
✏️ Track where you spend your time. Before starting a task, ask yourself: Do I really need to do this? What happens if I don’t?
✏️ Plan your day the night before.
✏️ Write a short summary about completed tasks at the end of the day.
✏️ Pick your outfit for the next day in advance.
✏️ Read every day (books, not Internet scrolling). Surround yourself with paper books; reading multiple books at the same time is fine.
✏️ Spend more time with your loved ones.
✏️ Perform physical activity every day.

The book won't teach you anything really new. But the fact is that even when we know productivity recommendations, we usually don't follow them. From this perspective, the book gives you a push to take the first steps in the right direction. And the engaging writing style, with a good dose of humor, makes it easy to read and follow.

#booknook #softskills #productivity