AI-Literacy
The growth of AI keeps bringing new terms. Just look at the hype around Vibe Coding 😎. But today I want to talk about another one: AI literacy.
The industry understands AI literacy as a set of competencies for working with AI.
It consists of the following elements:
🔸 Know & Understand AI: a common understanding of how it works, the ability to critically evaluate its outputs.
🔸 Use & Apply AI: using AI tools and agents to solve different tasks, prompt engineering.
🔸 Manage AI: setting AI usage guidelines and policies, managing prompt libraries, education.
🔸 Collaborate with AI: working with AI to create innovative solutions and solve real-world problems.
Why is it interesting for us?
The competency exists, but in most companies it's not yet reflected in any policies or skill matrices. Moreover, there are often no AI usage guidelines at all. Yet employees definitely use AI (not always as effectively as they could), sometimes sending confidential data to public models 😱.
AI literacy is a good framework for starting to manage AI knowledge within your team: education, guidelines, restrictions, collecting and sharing useful prompts, and incorporating AI tools into your daily routine.
#leadership #ai #management
Uber Code Review AI Assistant
Uber continues to share its experience integrating AI into different parts of the development process. This time it's a GenAI code review assistant (previously they published about the GenAI On-Call Copilot and GenAI Optimizations for Go).
If you've tried reviewing code with a GenAI tool, you may have noticed it's not perfect yet: hallucinations, overengineering, noisy suggestions. It leaves the feeling that it produces more issues and consumes more time than a human review process.
That's why Uber engineers created their own review platform.
So let's check what they implemented:
🔸 Define relevant files for analysis: filter out configuration files, generated code, and experimental directories.
🔸 Include the PR changes, surrounding functions, and class definitions in the LLM context.
🔸 Execute the analysis by calling a number of specialized AI assistants:
- Standard: detects bugs, exception-handling issues, and logic flaws.
- Best Practices: enforces Uber-specific coding conventions and style guides.
- Security: checks for application-level security vulnerabilities.
🔸 Execute another prompt to check the quality of the previous step's output, assign confidence scores, and merge overlapping suggestions.
🔸 Run a classifier for each generated comment and suppress categories with low developer value.
🔸 Publish the resulting comments on the PR.
The authors report that the whole process takes around 4 minutes and is already integrated with all of Uber's monorepos: Go, Java, Android, iOS, TypeScript, and Python.
One more interesting point: two different models are used for code analysis and comment grading, Claude 4 Sonnet and OpenAI o4-mini-high.
As you can see, more and more AI systems work in multiple stages, where one AI checks the results of another. This pattern is becoming popular, and it shows really good results in removing noise and reducing hallucinations.
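The generate-then-grade pattern is easy to prototype yourself. Below is a minimal sketch of the idea, not Uber's actual implementation: call_llm is a hypothetical helper wrapping whatever model API you use, and the 0.7 threshold is arbitrary.
```python
# Sketch of a two-stage review: one prompt generates comments,
# a second prompt grades them and filters out low-confidence noise.
import json

def review_diff(diff: str, call_llm) -> list[dict]:
    generate_prompt = (
        "You are a code reviewer. Analyze this diff and return a JSON list "
        "of comments, each with 'file', 'line' and 'text' fields.\n" + diff
    )
    comments = json.loads(call_llm(generate_prompt))

    grade_prompt = (
        "You are a review-quality grader. For each comment below, return a "
        "JSON list of {'index': i, 'confidence': 0..1} scores. Penalize "
        "vague, duplicated, or speculative comments.\n" + json.dumps(comments)
    )
    scores = json.loads(call_llm(grade_prompt))

    # Keep only the comments the grader is reasonably confident about.
    keep = {s["index"] for s in scores if s["confidence"] >= 0.7}
    return [c for i, c in enumerate(comments) if i in keep]
```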
#engineering #ai #usecase
Write It Down
Have you ever been in a meeting where people start yelling at each other? Or don't listen to each other? I've been there, and all I can say is: such situations are very difficult to manage and fix.
There is one tip I learned at a soft skills training:
"If someone is yelling at you, start writing down what they say. It’s almost impossible to yell at someone who's taking notes on each word you said."
And you know what? It works perfectly well 👍.
Now when things start heating up, I open Notepad++, write down all the points, ask clarifying questions, and confirm I got it right. In online meetings, I share my screen so everyone can see my notes.
So next time a discussion becomes too emotional, keep calm and just write everything down.
#softskills #tips #leadership
Simple Prompt Techniques
GenAI continues to revolutionize the way we work, and it really does simplify parts of the daily routine. But to use it efficiently, you need well-crafted prompts. The rule is simple: the more precisely you specify the request, the better the results.
So I’d like to share a few simple prompting methods that I’ve found really helpful.
RTF
It's perfect for simple tasks. With RTF you structure your prompt as follows:
🔸 Role: AI role and area of expertise.
🔸 Task: Task or question description.
🔸 Format: Output format or structure: a code snippet, text, a specific document, a JSON structure, etc.
Example:
Role: You are an experienced Go developer.
Task: Analyze this Go function and suggest improvements to error handling and HTTP client reuse.
Format: Return a code snippet with inline comments explaining improvements.
RISEN
This framework suits more complex tasks:
🔸 Role: AI role and area of expertise.
🔸 Instructions: Task or question description. The more detail you provide, the better the output.
🔸 Steps: Steps to perform to complete the task.
🔸 Expectations: Goal of the output, what you aim to achieve. It can include examples, output format and other guidelines.
🔸 Narrowing: Limitations, restrictions, or what to focus on.
Example:
Role: You are an SRE engineer.
Instructions: Prepare outage report data [based on the provided details].
Steps: 1) Summarize timeline, 2) Identify root cause, 3) Suggest prevention.
Expectations: Output an incident report in Markdown with Summary, Impact, Root Cause, and Action Items sections.
Narrowing: Keep it management-friendly but with enough technical detail for engineers.
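If you use these frameworks a lot, it may be worth templating them so you don't retype the scaffolding every time. A tiny sketch (the field names come straight from RTF and RISEN):
```python
# Tiny helpers that assemble RTF and RISEN prompts from their parts.
def rtf(role: str, task: str, fmt: str) -> str:
    return f"Role: {role}\nTask: {task}\nFormat: {fmt}"

def risen(role: str, instructions: str, steps: list[str],
          expectations: str, narrowing: str) -> str:
    numbered = " ".join(f"{i}) {s}" for i, s in enumerate(steps, 1))
    return (f"Role: {role}\nInstructions: {instructions}\n"
            f"Steps: {numbered}\nExpectations: {expectations}\n"
            f"Narrowing: {narrowing}")

print(rtf("You are an experienced Go developer.",
          "Analyze this Go function and suggest improvements.",
          "A code snippet with inline comments."))
```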
I hope these prompt techniques will be useful for you as well.
#ai #tips
Measuring System Complexity
I think we can all agree that the less complex our systems are, the easier they are to modify, operate and troubleshoot. But how can we properly measure complexity?
The most popular answers will be something related to cyclomatic complexity or lines of code. But have you ever tried to use them in practice? I find them impractical and not actionable for huge codebases. They will always produce numbers telling you the system is big and complex. Nothing new, actually 🙃
I found more practical alternatives in the Google SRE book:
🔸 Training Time: Time to onboard a new team member.
🔸 Explanation Time: Time to explain the high-level architecture of the service.
🔸 Administrative Diversity: Number of ways to configure similar settings in different parts of the system.
🔸 Diversity of Deployed Configurations: Number of configurations deployed in production. It can include installed services, their versions, feature flags, and environment-specific parameters.
🔸 Age of the System: Older systems tend to be more complex and fragile.
Of course, these metrics are not mathematically precise, but they provide high-level indicators of the overall complexity of the existing architecture, not just of individual blocks of code. And most importantly, they show which direction we should take to improve the situation.
#engineering #systemdesign
Pipeline Patterns
Today we can't imagine our CI/CD processes without pipelines. They're everywhere: building, linting, testing, verifying compliance, deploying, and even handling maintenance tasks.
Have you ever seen the internals of those pipelines? I have, and it's often a complete mess.
So it's no big surprise that people have started thinking about how to write pipelines in a resource-efficient and easy-to-support way. That's exactly one of the topics from the recent NDC Oslo conference: Pipeline Patterns and Antipatterns by Daniel Raniz Raneland.
It may not be rocket science, but it's a good set of useful recipes:
🔸 Right pipeline for the job: Select only the steps required for the task. For example, in a build pipeline we can execute unit tests on PRs and on main, but we should not execute them in the nightly CI run with integration tests.
🔸 Conditional steps: Define logic to skip unneeded steps. For example, if you change only docs, you don't need to run the build and tests.
🔸 Step result reuse: Use artifacts from one step as input to other steps.
🔸 Fail fast: Put the steps that fail most frequently at the beginning of the pipeline.
🔸 Parallel run: Execute steps in parallel where possible.
🔸 Isolation: The result of one pipeline run should not affect the results of another.
🔸 Artifacts housekeeping: Define cleanup policies for the artifacts.
🔸 Reasonable HWE: Carefully define the hardware environment required to execute pipeline steps.
The key idea of the talk is that we should treat pipelines like any other software and apply the same architecture principles and best practices as to any other application.
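Two of these recipes, fail fast and parallel run, are easy to illustrate outside of any particular CI system. Here is a generic orchestration sketch (plain Python with hypothetical step functions, not tied to a real pipeline engine):
```python
# Generic sketch: fail fast on cheap checks, then run independent steps in parallel.
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(fast_checks, parallel_steps):
    # Fail fast: cheap, frequently failing steps go first.
    for check in fast_checks:
        if not check():
            raise RuntimeError(f"pipeline aborted by {check.__name__}")
    # Independent steps run concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda step: step(), parallel_steps))

# Hypothetical steps for illustration only.
def lint():       print("lint");       return True
def unit_tests(): print("unit tests"); return True
def build():      print("build");      return True
def docs():       print("docs");       return True

run_pipeline([lint, unit_tests], [build, docs])
```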
#engineering
The Art of Systems Thinking
We live in a world of systems. They are everywhere: businesses, families, teams, software, and even ourselves. All of these are examples of complex systems. That's why systems thinking is a key skill: it allows you to see common system patterns, apply changes, predict side effects, and adapt to the results of those changes.
I'd like to share a book on this topic: The Art of Systems Thinking: Essential Skills for Creativity and Problem Solving by Joseph O'Connor and Ian McDermott.
Some Takeaways:
🔸 A system is more than just the sum of its parts. If you analyze system parts separately, you can’t predict the behavior of the system.
🔸 Stable systems are more resistant to change.
🔸 It's not possible to make an isolated change within a system. It will always create side effects.
🔸 The leverage principle: systems resist change. But if you understand the system well, you can find its weak points. A small shift there can trigger big changes.
🔸 Connections between system parts create feedback loops. They come in two types (see the toy simulation after this list):
- Reinforcing: changes keep pushing in the same direction, like a snowball rolling downhill.
- Balancing: changes push the system back toward equilibrium, like a thermostat holding a set temperature.
🔸 Changes don't happen immediately. If you don't account for delays, you get overreaction and oscillations.
🔸 To change a system, you need to destroy the old state and build a new stable one.
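The difference between the two feedback loop types is easy to see in a toy simulation (my own illustration, not from the book): the reinforcing loop compounds, while the balancing loop converges to its target.
```python
# Toy illustration of the two feedback loop types.
def reinforcing(x: float, rate: float = 0.2, steps: int = 5) -> None:
    for _ in range(steps):
        x += rate * x              # change feeds further change (snowball)
        print(f"reinforcing: {x:.2f}")

def balancing(x: float, target: float = 20.0, gain: float = 0.5,
              steps: int = 5) -> None:
    for _ in range(steps):
        x += gain * (target - x)   # change closes the gap (thermostat)
        print(f"balancing: {x:.2f}")

reinforcing(10.0)   # 12.00, 14.40, 17.28, ... keeps growing
balancing(10.0)     # 15.00, 17.50, 18.75, ... settles near 20
```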
Of course, these are just the basics. The book goes deeper into our mental models and cognitive traps, learning principles, how a shared mindset shapes people's behavior (e.g., the tragedy of the commons), how escalations work, and what drives the main social and financial patterns.
The book is easy to read: it's written in simple language with a lot of real-life examples.
So if the topic sounds interesting, I recommend checking out the whole book.
#booknook #softskills #thinking
Kafka 4.1 Release
At the beginning of September, Kafka 4.1 was released. It doesn't contain any big surprises, but it follows the overall industry direction of improving security and operability.
Notable changes:
🔸 Preview state for Kafka Queues (detailed overview here). It's still not recommended for production, but it's a good time to check how it works and which scenarios it really covers.
🔸 Early access to the Streams Rebalance protocol, which moves rebalance logic to the broker side. The approach was initially implemented for consumers and is now extended to streams (KIP-1071).
🔸 Ability for plugins and connectors to register their own metrics via the Monitorable interface (KIP-877).
🔸 Metrics naming unification between consumers and producers (KIP-1109). Previously the Kafka consumer replaced periods (.) in topic names with underscores (_) in metric names, while the producer kept topic names unchanged. Now both producers and consumers preserve the original topic name format. The old metrics will be removed in Kafka 5.0.
🔸 OAuth jwt-bearer grant type support in addition to client_credentials (KIP-1139).
🔸 Ability to enforce explicit naming for internal topics (like changelog and repartition topics). A new configuration flag prevents Kafka Streams from starting if any of its internal topics have auto-generated names (KIP-1111).
The full list of changes can be found in the release notes and the official upgrade recommendations.
#news #technologies
GenAI as a Thought Partner
AI is mostly used to get answers, provide summaries, generate text, or automate routine tasks. But it can be much more than that if you use it in Thought Partner mode.
What does it mean?
It means that you can ask AI to generate ideas, challenge your solutions, offer alternative options, or even play devil’s advocate.
This mode is really helpful for leaders. For example, I use it to challenge my proposals and to find alternative options and arguments. It helps me come better prepared to meetings with management and customers.
Basic template:
🔸 Role: "Act as my Strategic Thought Partner"
🔸 Context: situation or problem description, objectives
🔸 Task: what to do
Example:
Act as my Strategic Thought Partner by engaging me in a structured problem-solving process. Here’s the situation: [provide necessary context].
My goal is to [state objective].
Challenge my current assumptions, ask clarifying questions, and help me think through alternative solutions. I’d like you to surface blind spots and uncover insights I may have overlooked.
More ideas to use:
🔸 Give me 10 unexpected angles to consider for...
🔸 Act as a devil's advocate and challenge my current assumptions about...
🔸 Evaluate the pros and cons for...
🔸 Help me uncover blind spots and overlooked insights related to...
Thought partner mode is a great tool, but don't take everything it says as absolute truth. If you omit any important details, it can give you totally wrong results. And of course, it can still lie, make mistakes, and hallucinate 😵💫. Use it with a critical eye.
#ai #tips
A Few Words About Configurations
The ability to change system configuration is a very important aspect of service operability. But too many configuration options can turn system support into a nightmare.
From my experience, dev teams tend to overcomplicate the configs they provide. They try to allow as many options as possible. The common explanation is "we don't know what will really be needed". Then all the options are carefully documented in a several-thousand-line guide and delivered to the ops team. Of course, the ops team never ever reads it 😁
There is a really good metaphor from the Google SRE Book that illustrates this situation:
A user can ask for “hot green tea” and get roughly what they want. On the opposite end, a user can specify the whole process: water volume, boiling temperature, tea brand and flavor, steeping time, tea cup type, and tea volume in the cup.
Configuration is intended to be used by humans, and it should be designed for humans.
The main principle here is simplicity and reasonable defaults. The less configuration is required, the simpler the system is to operate and maintain.
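Here is what that principle can look like in code, a minimal sketch with made-up parameter names: a couple of knobs with sane defaults, and derived values kept internal instead of becoming yet another option.
```python
# Minimal-configuration style: a handful of knobs, all with sane defaults.
from dataclasses import dataclass

@dataclass
class HttpClientConfig:
    base_url: str                   # the only required setting
    timeout_seconds: float = 10.0   # sensible default, rarely changed
    max_retries: int = 3            # sensible default, rarely changed

    @property
    def connect_timeout(self) -> float:
        # Derived internally instead of being yet another knob.
        return min(self.timeout_seconds / 2, 5.0)

# Most users write one line and never open the config guide:
cfg = HttpClientConfig(base_url="https://api.example.com")
```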
One more important aspect of configurability is the testing surface. It is quite expensive to check all possible parameters and their combinations, so too much variety increases the risk of errors and human mistakes.
So next time you think about adding a new configuration parameter, keep in mind that the best configuration is no configuration.
#systemdesign #engineering
Is Open Source Free?
Did you know that the term open source was invented in 1998 to replace the term free software? To highlight "free as in freedom, not free as in beer" 😀?
Dylan Beattie presented the history of open source and its current trends in the talk Open Source, Open Mind: The Cost of Free Software.
The history itself is very interesting: from pirating computer games and creating the first Linux distro to license evolution, CLAs, and the current set of limitations on using open software. I recommend watching that part once you have some free time; it's really entertaining.
But here I want to highlight the following:
🔸 Open source projects provide us code. Nothing more. If you want continuity, support, availability, or convenience, expect to pay, whether through licenses, managed services, sponsorship, or your own investment.
🔸 "People who share the source code do not owe you anything". They don't even promise that the software works properly, or that it works at all.
🔸 Open source projects can change their license to a commercial one at any time. You should just be ready for that (remember Redis, Graylog, Vault, Elasticsearch, etc.).
An example:
You can take Postgres for free; it's fully open.
But can you run it in production? Probably not 😯.
First you need to package it, prepare installation and upgrade procedures, implement HA, configure metrics and monitoring dashboards, provide a backup approach, tune security, train the operations team, etc.
You can do it on your own or pay another company to do it for you.
So anyone who says open source is free and costs nothing has clearly never run it in production. Open source software is really "open", but not free.
#technologies
Failure Is Always An Option
One more great video from Dylan Beattie: Failure Is Always An Option. This time it's a talk about software reliability and the risks of system misbehavior.
Key ideas:
🔸 Use Systems Thinking. Reliability is not just about software; it's about a holistic view of the system that includes software, hardware, finance, and people.
🔸 Design For Failure. Be ready for failure at all system levels and in all components.
🔸 Measure Risk by Impact, not Frequency. You might never have had a car accident, but that doesn't mean you don't need airbags and seat belts.
🔸 Focus on Results. Define things as done by outcomes, not by executed steps or procedures.
🔸 Expect Surprises. Users are really creative; they can use features in unpredictable ways. Don't be arrogant and say "this is wrong". Learn from them to build awesome stuff together.
The talk is full of interesting examples of building complex reliable systems. The most impressive part for me was the story around the Apollo 13 mission 🚀.
Just imagine: a shuttle, astronauts, space, and some software... The success of the whole mission and the astronauts' lives depend on software quality and reliability. Sounds like a horror story, right? 😃
HA & DR for the Shuttle software was implemented using six computers:
🔸 Four identical computers to compare results and provide availability.
🔸 A fifth computer performing the same logic but with software written by a different vendor.
🔸 A sixth computer with no software at all; the idea was to use it to install the software from scratch in case of issues with the software on all the other computers. Later the sixth computer was removed from the shuttles, as it was "never really used".
The video has many more great examples from software engineering history; I watched it in one sitting. And I love Dylan's presentation style: energetic, with a good dose of humor, engaging, and inspirational. Recommended 👍.
#systemdesign #engineering #reliability
Open Infrastructure is Not Free
There was a piece of news last week that might not have been very noticeable, but it's really important for the whole open source community. On Sep 23, the organizations behind open source infrastructure, such as Sonatype (Maven Central), the Open Source Security Foundation (OpenSSF), the Python Software Foundation (PyPI), and others, published a joint letter: Open Infrastructure is Not Free: A Joint Statement on Sustainable Stewardship.
The problems they highlight:
🔸 Open source infrastructure is the foundation of any modern digital infrastructure.
🔸 Users expect this infrastructure to be secure, fast, reliable, and global.
🔸 Public registries are often used to distribute proprietary software (it may have an open source license but only work as part of a paid product).
🔸 Commercial organizations heavily use open source infrastructure as free CDNs and distribution systems.
🔸 Open source infrastructure is supported by non-profit foundations and enthusiasts. They don't have enough resources to meet growing expectations.
🔸 Load on the infrastructure grows exponentially; donations grow linearly.
🔸 This situation produces an imbalance: billion-dollar ecosystems run on services built on goodwill, unpaid weekends, and sponsorships.
The problem is obvious: too many companies make money on open source infrastructure without giving a cent back. They profit, while the real costs are carried by volunteers and foundation sponsors. The claim is fair enough.
Proposed ideas:
🔸 Commercial Partnership: Fund infrastructure in proportion to usage.
🔸 Tiered Access: Free access for individual contributors, paid options for scale and performance for high-volume consumers.
🔸 Additional Capabilities: Provide extra capabilities that might interest commercial entities (e.g., statistics or analytics).
The authors say this letter is only the beginning: they will start actively working with foundations, governments, and industry partners to improve the situation. It looks like in 2-3 years we'll have a totally different infrastructure, and most probably it won't be free.
#news #technologies
Software Quality: What does it mean?
We all want to build high-quality products. But what do we mean by high quality? Is it high test coverage? A low defect rate? Reliability? Compliance?
Actually, developers, the business, and users all mean different things by quality.
There is a really good publication from the Google team on this topic: Developer Productivity for Humans, Part 7: Software Quality.
The authors break down software quality into 4 types:
🔸 Process Quality. It usually includes code reviews, organizational consistency, effective planning, testing strategy, test flakiness, and distribution of work. Typically, higher process quality leads to higher code quality.
🔸 Code Quality. This is code testability, complexity, readability, and maintainability. High code quality improves system quality by reducing defects and increasing reliability.
🔸 System Quality. This means high reliability, high performance, security, privacy, and low defect rates.
🔸 Product Quality. This is the type of quality experienced by customers. It includes utility, usability, and reliability. This level also covers other business parameters: brand reputation, costs and overhead, and revenue.
These four types of quality impact each other: the process quality affects code quality, which affects system quality, which affects product quality. The end goal is always to improve product quality.
This model also explains why ideas like "we'll improve test coverage to X% and we'll get good quality" rarely work in practice. They might help a little, but the connection to product quality is distant.
So if a team is concerned about quality, they need to decide which type of quality they want to work on and select appropriate metrics.
#engineering #quality
Schrodinger Backup
Let's imagine that you carefully design your backup strategy (see Backup Strategy, Backup Types), deliver it to production, configure a schedule to trigger it regularly, and store backups in another region for DR purposes.
Can you feel safe after that?
No 😱.
The problem is that the backup is there, but not really...
Until you have a process for regularly restoring production data, you have no guarantee that it works. It's not possible to test restoration in the real production environment, so the procedure should restore data into another environment and execute at least basic sanity checks.
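As an illustration, such a sanity check could look like the sketch below, assuming PostgreSQL with the psycopg2 driver; the table name and checks are hypothetical and should reflect your own critical data.
```python
# Sketch: sanity-check a database restored from a production backup.
import psycopg2  # assuming PostgreSQL and the psycopg2 driver

def check_restored_copy(dsn: str) -> None:
    conn = psycopg2.connect(dsn)  # DSN of the restored, non-production copy
    try:
        with conn.cursor() as cur:
            # 1. The schema survived the restore.
            cur.execute("SELECT count(*) FROM information_schema.tables "
                        "WHERE table_schema = 'public'")
            assert cur.fetchone()[0] > 0, "no tables restored"
            # 2. A critical table is non-empty (hypothetical table name).
            cur.execute("SELECT count(*) FROM orders")
            assert cur.fetchone()[0] > 0, "orders table is empty"
            # 3. The data is recent enough to meet the RPO (hypothetical column).
            cur.execute("SELECT now() - max(created_at) FROM orders")
            print("data lag of the restored copy:", cur.fetchone()[0])
    finally:
        conn.close()
```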
With this idea in mind, I decided to check what the industry has to offer: I asked about backup testing in the X DevOps community, checked what the public clouds provide, and looked for suggestions around the Internet.
Key findings:
🔸 Most teams have never tested the restoration of production backups. They verify only the procedure itself on test environments.
🔸 GCP and Azure recommend testing production restoration, but you have to prepare the e2e procedure on your own (or at least I was not able to find one quickly).
🔸 AWS offers an automated restore-testing capability for its managed storage services, with the ability to create custom validation workflows.
🔸 Uber has a great article where they share their continuous backup/restore approach.
Surprisingly, there is not much practical information about how to implement regular restoration testing.
Most probably there are three reasons for that:
- It's expensive.
- The process is very environment- and company-specific.
- It may be more relevant for big tech companies, where data loss is a critical business risk.
So don't assume that the absence of errors and the existence of backup files mean you have a backup. You don't really know until a real incident happens.
#engineering #backups
Adizes Leadership Styles
Have you ever worked with a leader who could quickly launch any initiative but was terrible at organizing any process around it? Or with someone who could bring order to any chaos but couldn't drive change? According to Dr. I. Adizes, these are different styles of leadership.
Adizes defines the following styles:
🔸 Producer. Focuses on WHAT should be done (product, services, some KPIs).
🔸 Administrator. Focuses on HOW it should be done (processes, methodologies).
🔸 Entrepreneur. Focuses on WHY and WHEN it should be done (changes, initiatives, new opportunities).
🔸 Integrator. Focuses on WITH WHOM it should be done (people).
The model is named PAEI after the first letters of these types.
The main idea is that most of us embody one or two of these styles and may pick up some elements of a third, but nobody can embody all of them. The styles are shaped by personality type, experience, and the situation.
It's good to know your own type and the type of your manager. Different types of managers speak different languages and are interested in different things. This is especially important in communication with your direct manager.
I first met this model more than 10 years ago and identified myself as a strong Integrator. Recently I took the tests one more time and came out with Entrepreneur and Administrator as my main styles. So what does that mean? The situation (position), tasks, and experience have changed.
If you want to go into more detail, I recommend reading Management/Mismanagement Styles by I. Adizes. The author has many more books, but they are more or less about the same ideas.
#booknook #softskills #leadership
Schema of the PAEI model by I. Adizes.
Source: https://adizes.lv/adizes-management-style-indicator/
#softskills #leadership
AWS Aurora Stateful Blue-Green
Most existing blue-green implementations I know of relate to stateless services. Stateful services rarely become the subject of blue-green deployments. The main reason is the high cost of copying production data between versions.
But in some cases we need strong guarantees that the next upgrade will not break production. AWS tried to solve this problem by introducing Aurora blue/green deployments.
The marketing part sounds good:
You can make changes to the Aurora DB cluster in the green environment without affecting production workloads... When ready, you can switch over the environments to transition the green environment to be the new production environment. The switchover typically takes under a minute with no data loss and no need for application changes.
But let's check how it works:
🔸 A new Aurora cluster with the same configuration and topology is created. It gets a new service name with a green- prefix.
🔸 The new cluster can have a higher version and a different set of database parameters than the production cluster.
🔸 Logical replication is established between the production cluster and the new green cluster.
🔸 The green cluster is read-only by default; enabling write operations can cause replication conflicts.
🔸 Once the green database is tested, you can perform a switchover: the names and endpoints of the current production environment are assigned to the newly created cluster.
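For reference, the flow can be scripted via the AWS SDK. A rough boto3 sketch with placeholder identifiers (check the current RDS API docs before relying on it):
```python
# Rough boto3 sketch of an Aurora blue/green deployment (placeholder identifiers).
import boto3

rds = boto3.client("rds")

# Create the green environment as a copy of the production (blue) cluster.
bg = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="aurora-upgrade-test",
    Source="arn:aws:rds:eu-west-1:123456789012:cluster:prod-cluster",
    TargetEngineVersion="16.4",  # the version you want to test
)

# ... test the green cluster, then promote it to production.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=bg["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"],
    SwitchoverTimeout=300,  # seconds
)
```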
In simple words, it's just a new cluster that copies data from production using the logical replication feature. As a result, it inherits all the restrictions of that feature, such as DDL operations not being replicated, no replication of large objects, lack of extension support, and others. So you need to be very careful when deciding to use this approach.
For me it looks like this solution is suitable only for very basic scenarios with simple data types. Anything more complex won't work.
#engineering #systemdesign #bluegreen
Stateful Service Upgrade Strategy
Last time I wrote about the AWS Aurora stateful blue-green approach. Despite its limitations, it provides a good pattern for making upgrade procedures safer, even for on-prem installations.
For simplicity, let's focus on a database example, but the idea is applicable to any stateful service.
The simplified approach is the following:
🔸 Hide the real cluster name behind a dedicated DNS name (in AWS it's Route 53; in Kubernetes it can be just a Service name).
🔸 Perform a backup of the production database.
🔸 Restore the production backup as a new database instance.
🔸 Execute the upgrade of the production cluster (or any other potentially dangerous operation).
🔸 If the upgrade fails, switch DNS to the backup database created earlier.
🔸 If the upgrade succeeds, just remove the backup database.
The main trick is that you create the backup instance before any change to production, so in case of failure you can quickly switch the system back to a working state.
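The DNS switch itself is a single call. For example, with Route 53 via boto3 it could look like this (the zone ID and names are placeholders):
```python
# Point the stable DNS name back at the pre-upgrade backup instance (placeholders).
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "rollback: switch to the backup database",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "db.internal.example.com",
                "Type": "CNAME",
                "TTL": 60,  # a low TTL keeps the switch fast
                "ResourceRecords": [
                    {"Value": "backup-db.cluster-abc.eu-west-1.rds.amazonaws.com"}
                ],
            },
        }],
    },
)
```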
Of course, there is a delta between a database created from a backup and the production database. But in case of a real disaster, that can be acceptable (of course, you need to check your RPO and RTO requirements, the allowed maintenance window, etc.).
The backup-based approach is much simpler than logical replication, can be used in different environments, and gives you additional guarantees, especially for major upgrades or huge data migrations.
#engineering #bluegreen #backups
Platform Speed vs Efficiency
Platform teams are becoming more and more popular. As a reminder, the idea behind them is very simple: move all common functionality into the platform so products can focus on business logic. This allows reusing the same features across products, avoiding implementing things twice, and saving development effort.
Sounds good, but this approach can lead to another issue: product teams generate more requirements than the platform team can implement, so the platform becomes a bottleneck for everyone.
This problem is described in the article Platforms should focus on speed, not efficiency by Jan Bosch:
Although the functionality in the platform might be common, very often product teams find out that they need a different flavor or that some part is missing. The product teams need to request this functionality to be developed by the platform team, which often gets overwhelmed by all the requests. The consequence is that everyone waits for the slow-moving platform team.
This description reflects my own observations: products want more and more features for free, the platform team is buried in requests, and everything gets stuck.
To solve this problem, the author suggests focusing not on platform efficiency but on the speed of extending the platform with the functionality products require.
He suggests three strategies to achieve that:
1. Make the platform optional. In that case the platform team is motivated to earn trust and solve real problems instead of optimizing its own efficiency.
2. Allow product teams to contribute to the platform code.
3. Merge product and platform. Instead of separating “platform” and “products,” create a shared codebase that contains all functionality.
From my experience, option 3 is not always possible, especially for large codebases. It requires significant investment in build and CI infrastructure, which can be too expensive. But the other points look relevant, and they are often mentioned in other resources on platform engineering.
This article is part of a series called "Software Platforms 10 lessons", so I'm planning to read the other lessons soon.
#platformengineering #engineering