Obsidian: Task Management. Part 2.
Once your Obsidian plugins are configured, it's time to organize a process for working with tasks.
My approach is simple:
1. Add tasks to related notes. During work, meetings, and investigations I put tasks directly into the related notes with a short description, priority, tags, and a due date.
For example, I had a meeting where we discussed CI improvements. As a result, I created a note with the meeting minutes and added the tasks that were on my side: talk with the IT team about CI cluster stability, add retries to the test collections, etc.
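For illustration, a task captured in such a note might look like this (Tasks plugin emoji syntax; the priority marker, date, and tag here are made-up examples):

```markdown
- [ ] Talk with the IT team about CI cluster stability 🔼 #management 📅 2025-03-01
```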
2. Create a TODO. When I don't want to think about where a task should go, I just write it down in the TODO file. TODO is an unsorted list of tasks that I collect during the day.
For example, a colleague asked for help or for specific information, the PM requested a sprint status, etc.
3. Create a Today view. As tasks are spread across different notes, I need to collect them in a single place, so I created a special page called Today. I don't write any tasks here; I use it as a daily dashboard with the following sections:
🔸 Focus. Key global topics to focus on; this section is static.
🔸 Doing: tasks that are already in progress:
```tasks
status.type is IN_PROGRESS
sort by due
short mode
group by tags
```
🔸 Do Now: my backlog, grouped by context: management, tech, education. I also have an “others” group for everything else (otherwise I sometimes lose tasks without tags 🫣):
```tasks
not done
sort by due
sort by priority
short mode
tags include #management
group by tags
```
➡️ Tip: short mode gives a link to the source note containing the task, which is helpful for navigating to the full context.
4. Task Board. I use it as a timeline for tasks: today, tomorrow, overdue, etc.
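For example, the overdue section of such a board can be built with a query like this (a sketch in the same Tasks query syntax used above; adjust to your own setup):

```tasks
not done
due before today
sort by due
short mode
```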
I don't pretend my system is ideal; it just works for me, and I periodically tune it when I feel something isn't working.
Hope it gives you a good starting point to build your own task management system. Start simple, experiment, and make the system work for you.
#softskills #productivity
Secure by Design at Google
"Secure by design" is well-known software architectural principle. In recent years, as number of security incidents increased across the industry, it gain more and more attention.
But what does it actually mean?
According to Google’s Well-Architected Framework:
Sometimes it is used as a synonym to secure by default, but actually terms are different:
Google shared a paper about how they implemented Secure by Design approach . What I really liked is the idea that guidelines and education don't work: they cannot prevent human errors in large code base. The only way to make secure software is to build safe development ecosystem.
Instead of relying on developers to “do the right thing,” Google embeds security directly into the tools, frameworks, and libraries they use, for example:
🔸 application frameworks with built-in authentication and authorization
🔸 libraries with built-in XSS and other types of injections protection
🔸 memory-safe languages usage
Safe coding practices provide high confidence that if program compiles and runs then it's free of relevant vulnerabilities because if code isn't secure enough it won't even compile.
"Secure By Design" is applicable not only for development but for SRE activities as well.
Good example is Zero Touch Prod.
This principle means nobody can make any changes directly to the production systems. All changes must be done by trusted automation (GitOps), approved software with a list of relevant validations or by audited break-glass mechanism. This significantly reduces the risk of accidental or unauthorized changes.
Security by design is not just an architectural principle, it’s something that should be built into the core of your software and development ecosystem.
#engineering #security
"Secure by design" is well-known software architectural principle. In recent years, as number of security incidents increased across the industry, it gain more and more attention.
But what does it actually mean?
According to Google’s Well-Architected Framework:
Secure by design: emphasizes proactively incorporating security considerations throughout a system's development lifecycle. This approach involves using secure coding practices, conducting security reviews, and embedding security throughout the design process.
Sometimes it is used as a synonym for secure by default, but the terms are actually different:
Secure by default: focuses on ensuring that a system's default settings are set to a secure mode, minimizing the need for users or administrators to take actions to secure the system.
Google shared a paper about how they implemented the Secure by Design approach. What I really liked is the idea that guidelines and education don't work: they cannot prevent human errors in a large codebase. The only way to make secure software is to build a safe development ecosystem.
Instead of relying on developers to “do the right thing,” Google embeds security directly into the tools, frameworks, and libraries they use, for example:
🔸 application frameworks with built-in authentication and authorization
🔸 libraries with built-in protection against XSS and other types of injection
🔸 usage of memory-safe languages
Safe coding practices provide high confidence that if a program compiles and runs, it's free of the relevant vulnerabilities, because code that isn't secure enough won't even compile.
"Secure By Design" is applicable not only for development but for SRE activities as well.
Good example is Zero Touch Prod.
This principle means nobody can make any changes directly to the production systems. All changes must be done by trusted automation (GitOps), approved software with a list of relevant validations or by audited break-glass mechanism. This significantly reduces the risk of accidental or unauthorized changes.
Security by design is not just an architectural principle, it’s something that should be built into the core of your software and development ecosystem.
#engineering #security
The History of Microservices
Do you know how microservices were "invented"?
Back in 2011, a group of engineers was at a workshop in Castle Brando. They discussed software engineering problems and felt tired of large monoliths, slow releases, and heavyweight tooling.
Then they came up with the idea:
Maybe the problem is the size. Maybe if we built lots of smaller things instead that might help.
This part of the story is shared in The Magic of Small Things talk by James Lewis.
In the video, James shares his memories of how the idea was born, how it spread through the community and became an industry trend, and how this new concept introduced new challenges.
The talk gives you a clear look at the real reasons behind this architectural pattern, its drawbacks, and its important characteristics, without all the hype around it.
For me it's a good reminder that every trend starts with an attempt to solve a real problem. Sometimes the initial problem gets lost over time, while a set of new problems is produced.
So make decisions based on your actual needs.
Good video to check during the weekend 😎
#architecture
YouTube
The Magic of Small Things - 10 Years of Microservices • James Lewis • GOTO 2024
This presentation was recorded at GOTO Copenhagen 2024.
James Lewis - Software Architect & Director at Thoughtworks
Don't Shoot the Dog. Part 1: Overview.
Today I want to share a book that can help you not just at work, but in your everyday life to build better relationships with your family and friends. It's called Don’t Shoot the Dog: The Art of Teaching and Training by Karen Pryor.
Karen Pryor is a scientist specializing in marine mammal biology and behavioral psychology. She spent many years training dolphins and studying their behavior in oceanariums. Interestingly, her findings apply not only to animals but to humans as well, with the same rate of success.
In her book, Karen describes the principles of training desired behavior through reinforcement:
Usually we are using them [principles] inappropriately. We threaten, we argue, we coerce, we deprive. We pounce on others when things go wrong and pass up the chance to praise them when things go right. We are harsh and impatient with our children, with each other, with ourselves even.
I see this quote as capturing the main problem the author tries to address.
A bit of theory from the book:
A reinforcer is anything that, occurring in conjunction with an act, tends to increase the probability that the act will occur again.
There are 2 types of reinforcers: positive and negative. A positive reinforcer is something the subject wants to get; a negative one is something the subject wants to avoid.
The key idea of the book is that ❗️negative reinforcement doesn't work. Punishments don't work.
I think that’s really important, because our first instinct is usually to use negative reinforcements.
What do we do when a child doesn’t do their homework? Or ignores us when we ask for something? Or when a puppy chews the furniture? Even at school, teachers highlight our mistakes to show we did something wrong.
The book provides a theory of how behavior is formed, how different types of reinforcers affect it, and how behavior can be trained or untrained to achieve the desired results.
The practices for changing undesired behavior are actually one of the most interesting parts of the book. I'll talk about them in the next post.
#booknook #softskills #communications
Don't Shoot the Dog. Part 2: Change the Behavior
Let's continue with `Don’t Shoot the Dog` by Karen Pryor.
The author says there are only 8 methods to change an undesired behavior.
To make the explanation simple, let's use a real-life situation:
Your roommate has an annoying habit of throwing socks around the room.
#1. Shoot the animal🔫
Get rid of the source of the problem, so they physically can’t do it anymore.
Example: Change the roommate.
#2. Punishment🤬
It's the most popular and the most inefficient method. When punishment doesn't work, people try more and more serious punishments. But it leads nowhere.
Example: Yell and scold. Threaten to throw the socks away.
#3. Negative Reinforcement 😞
Remove something unpleasant after the desired behavior happens. The idea is that the person will behave a certain way to avoid discomfort.
Example: Ignore the roommate until socks are picked up.
#4. Extinction 😐
Remove any reinforcement, and the unwanted behavior will die out on its own. The method is best applied to verbal behavior: whining, teasing, and bullying.
Example: Just wait and hope your roommate realizes it’s a bad habit.
#5. Train an Incompatible Behavior🍬
Train a new behavior that can’t happen at the same time as the bad one.
Example: Pick up and wash socks together to make it a fun activity; get a reward.
#6. Put the Behavior on Cue 🔕
Train the person to do the behavior only when given a specific signal. Without the cue, the behavior disappears.
Example: Have a laundry fight. See how big a mess you can both make in the room.
#7. Shape the Absence🍺
Reward any behavior except the problem one.
Example: Buy your roommate a beer when the room is clean.
#8. Change the Motivation☺️
Make an accurate estimate of what the motivation is, and reward it.
Example: Find a motivational reward for picking up socks — or just hire a housekeeper.
As you can see, there are 4 negative methods (1-4) and 4 positive ones (5-8). Negative methods don't teach anyone anything and produce unpredictable results.
So if you really want to shape someone's behavior, then your choice is positive reinforcement.
I read this book a while ago and started using these ideas in my real life. What can I say: the most difficult part is changing my own instincts and avoiding negative methods. Each time I need to stop, think, and act in a different way. This process takes effort and energy, and sometimes I fail, but I see that the results of positive reinforcement are really better.
I strongly recommend adding the book to your reading list 📚.
#booknook #softskills #communications
I haven’t drawn anything for a while, but this week I had some inspiration and prepared a sketchnote for you on Karen Pryor's 8 methods for changing behavior!
#booknook #sketchnote #softskills #communications
Designing Distributed Systems
I think you will agree that the title Designing Distributed Systems: Patterns and Paradigms for Scalable, Reliable Services sounds very promising. The book was published in 2018 and aspires to be a catalog of modern system design patterns, like the GoF patterns were for software design 20 years ago.
Spoiler: it doesn't live up to that.
Actually, the book describes very basic stuff: sidecar, load balancing, sharding, leader election, and a few others. The patterns are presented without deep detail, with a focus on creating the corresponding Kubernetes objects.
For example: this is sharding, it helps distribute data across replicas, consistent hashing can be used to pick the appropriate shard, and here are the k8s Service and StatefulSet to do that.
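The sharding idea from that chapter can be sketched in a few lines of Go. This is an illustrative consistent-hash ring, not the book's code; the shard names and the choice of FNV as the hash are mine:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring: each shard is placed on the
// ring at several points; a key belongs to the first shard point at or
// after the key's own hash, wrapping around the ring.
type Ring struct {
	points []uint32          // sorted hashes of shard points
	owner  map[uint32]string // point hash -> shard name
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func NewRing(shards []string, pointsPerShard int) *Ring {
	r := &Ring{owner: map[uint32]string{}}
	for _, shard := range shards {
		for i := 0; i < pointsPerShard; i++ {
			p := hashOf(fmt.Sprintf("%s-%d", shard, i))
			r.points = append(r.points, p)
			r.owner[p] = shard
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Shard returns the shard responsible for the given key.
func (r *Ring) Shard(key string) string {
	h := hashOf(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.owner[r.points[i]]
}

func main() {
	ring := NewRing([]string{"shard-0", "shard-1", "shard-2"}, 32)
	for _, key := range []string{"user:42", "user:43", "order:7"} {
		fmt.Println(key, "->", ring.Shard(key))
	}
}
```

The multiple points per shard keep the key distribution even, and adding or removing a shard remaps only the keys adjacent to its points, which is the property that makes the scheme attractive for sharding.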
One more thing I don't like is that the book recommends sidecar containers far too often. Maybe 7 years ago that looked like a new trend in distributed systems development (remember the first Istio implementation built on sidecars), but it did not turn out that way.
You should clearly understand when and why sidecars are applicable. Additional containers add extra resource consumption, complexity, and maintenance overhead. In most cases, it's cheaper to implement the required features inside the main application.
To summarize, the book suits junior and mid-level developers well as a basic introduction to cloud architecture patterns. But for senior developers, tech leads, and architects it will definitely be boring 🥱.
#booknook #systemdesign #patterns
About Career Choices
A few days ago I wrote an essay about my career path for one educational program. It made me reflect a bit on career choices I've made.
I became a teamlead very quickly. The term techlead was not popular at that time, but I usually combined both roles. And I spent more than 10 years in this position. Do you think I got stuck?
Actually, I don't think so. It was my decision to stay at this level.
The reason is that I really enjoyed being a teamlead/techlead: researching new technologies, developing products, applying engineering practices to solve operational problems and, of course, building something significant and valuable with the team, something that is impossible to build on your own.
During these years I grew mostly in breadth, extending my technical expertise and team management skills. And from that perspective, the last 5 years were the most amazing and interesting of my career.
What I want to say is: you don’t always have to chase a new role or position. If you’re not ready to take on more responsibility right now, that's fine. Sometimes it’s enough just to enjoy your work and have fun with it 😎.
This year I finally moved to another level of technical leadership: head of division. I'm now responsible for management, architecture, and the roadmap across six teams with 50+ people. And I really feel I'm ready for it now. But that is another story 😉.
#softskills #career
Lessons Learnt from Big Failures
Apple, Facebook, Google, Netflix, OpenAI: we all know these success stories. The problem is that each success story is a unique combination of many factors that is very difficult to reproduce.
It's much more instructive to study failures, as they have more or less the same causes and show what definitely will not lead you to success.
Here's a collection of IT project failure case studies that cost companies tens of millions of dollars. The cases span roughly the last 15 years, and if you quickly go through them you will realize that most of the problems look very common:
📍 Corporate Culture. It's not so obvious, but it's actually the root cause of many other problems like unpredicted complexity, underestimation, lack of transparency, etc. Why? When you develop a system, the technical team usually knows about all these problems; moreover, they know whether the system is ready for production or not. The question is whether they are able to explain that to the management, and whether the management is open enough to listen.
📍 Leadership Failures. There is a wide range of problems: unclear responsibilities, poor ownership, ping-pong between teams, lack of trust, communication failures, and other issues.
📍 Risk Management. For any big project you should always have a plan B. That's why transparency and trust are so important: they're the only way to understand what's really going on and to have a chance to adjust the plan in time and avoid a complete disaster.
Software is a socio-technical system, and most failures aren't about technologies, they are about people. The good news is that we as technical leaders can improve that and make our projects more successful.
#leadership #management
GenAI for Go Optimizations
Today, code generation with an AI assistant doesn't impress anyone, but GenAI can be helpful for more than that. Uber recently published an interesting article about using LLMs to optimize Go services.
So what they did:
🔸 Collect CPU and memory profiles from production services.
🔸 Identify the top 30 most expensive functions based on CPU usage. If runtime.mallocgc consumes more than 15% of CPU time, additionally collect a memory profile.
🔸 Apply a static filter to exclude open-source dependencies and internal runtime functions. This reduces noise and focuses the analysis on business code only.
🔸 Prepare a catalog of performance antipatterns, most of which were already collected during past optimization work.
🔸 Pass the source code and the antipattern list to an LLM for analysis.
🔸 Validate the results using a separate pipeline: check whether an antipattern is really present and whether the suggested optimization is correct.
The article also contains interesting tips on how they tuned the prompting, reduced hallucinations, and built developers' trust in the tool.
What I like about Uber’s technical articles is that they always calculate the efficiency of the results:
Over four months, the number of antipatterns reduced from 265 to 176. Projecting this annually, that’s a reduction of 267 antipatterns. Addressing this volume manually, as the Go expert team would have consumed approximately 3,800 hours.
we reduced the engineering time required to detect and fix an issue from 14.5 hours to almost 1 hour of tool runtime—a 93.10% time savings.
#engineering #usecase #ai
Platform Engineering: Shift It Down
A great video from Google experts about platform engineering.
One of the most popular DevOps concepts of the last decade was "shift left". And it showed really good results: improving overall product quality, reducing delivery time, and decreasing the cost of errors. At the same time, it significantly increased the cognitive load on developers, as it placed the full burden of implementation complexity on engineers.
The speakers suggest a new concept to solve this problem:
Don't just shift left, shift it down.
The idea is to move the implementation of all quality attributes (like reliability, security, performance, testability, etc.) to the platform teams. Anything that is architecture rather than a product feature should go to the platform teams.
The technical toolbox for doing that consists of two items:
1. Abstractions: well-defined parts and components. They provide understandability, accountability, risk management levels, and cost control for your system.
2. Coupling: what makes your system greater than the sum of its parts. It provides modifiability of the system, golden paths, and efficiency.
To apply this toolbox in practice you need governance, policies, and education. They call it "culture and shared responsibility".
One more interesting concept from the video that I really like is using different levels of flexibility in following the rules, depending on the consequences of an error:
YOLO -> Adhoc -> Guided -> Engineered -> Assured
For example:
An unauthenticated API can be a critical business risk, so developers must use the proper security framework. Its usage can be checked at build time to ensure it's not missed. Build-time control provides the "assured" type of flexibility, as developers cannot bypass it.
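A build-time control of this kind can be as simple as a check that fails the build when the rule is violated. Here's a toy sketch; the register_route and secauth names are hypothetical, not from the talk:

```python
# Toy "assured" control: a build-time check that every module registering
# an HTTP handler also imports the (hypothetical) security framework.
import re

REGISTERS_HANDLER = re.compile(r"\bregister_route\(")
IMPORTS_AUTH = re.compile(r"^\s*import\s+secauth\b", re.MULTILINE)

def check_source(src: str) -> list[str]:
    """Return a list of violations for one source file."""
    violations = []
    if REGISTERS_HANDLER.search(src) and not IMPORTS_AUTH.search(src):
        violations.append("route registered without importing secauth")
    return violations

ok = "import secauth\nregister_route('/pay', handler)"
bad = "register_route('/pay', handler)"
print(check_source(ok))   # []
print(check_source(bad))  # ['route registered without importing secauth']
```

In a real setup, such a check would run in CI and fail the build on any violation, which is exactly what makes the control "assured" rather than "guided".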
I think these levels provide really good principles for platform teams to decide where to invest for the biggest impact. So I definitely recommend checking out the full video if you're interested in platform engineering.
#engineering
YouTube
Shift down: A practical guide to platform engineering - Leah Rivers & James Brookbank
Drawing on years of experience building internal platforms at Google, this session provides actionable insights for creating effective development ecosystems. Attendees will learn how to prioritize safety, efficiency, and reliability through the collaboration…
Important illustrations from the video.
Source: Shift down: A practical guide to platform engineering - Leah Rivers & James Brookbank
#engineering
AI-Literacy
The growth of AI continues to bring new terms. Just look at the hype around Vibe Coding 😎. But today I want to talk about another term: AI-literacy.
By AI-literacy, the industry means a set of competencies for working with AI.
It consists of the following elements:
🔸 Know & Understand AI: a common understanding of how it works; critically evaluating its outputs.
🔸 Use & Apply AI: using AI tools and agents to solve different tasks; prompt engineering.
🔸 Manage AI: setting AI usage guidelines and policies; managing prompt libraries; education.
🔸 Collaborate with AI: working with AI to create innovative solutions and solve real-world problems.
Why is it interesting for us?
The competency exists, but in most companies it's not yet reflected in any policies or skill matrices. Moreover, there are often no AI usage guidelines at all. But employees definitely use AI (not always as effectively as they could), sometimes sending confidential data to public models 😱.
AI-literacy is a good concept you can use to start managing AI knowledge within your team: education, guidelines, restrictions, sharing and collecting useful prompts, and incorporating AI tools into your daily routine.
#leadership #ai #management
Uber Code Review AI Assistant
Uber continues to share their experience integrating AI into different parts of the development process. This time it's a GenAI code review assistant (previously they published about their GenAI On-Call Copilot and GenAI Optimizations for Go).
If you've tried to do a code review with a GenAI tool, you may have noticed it's not perfect yet: hallucinations, overengineering, noisy suggestions. It leaves the feeling that it produces more issues and consumes more time than a human review process.
That's why Uber engineers created their own review platform.
So let's check what they implemented:
🔸 Define relevant files for analysis: filter out configuration files, generated code, and experimental directories.
🔸 Include PR changes, surrounding functions, and class definitions in the LLM context.
🔸 Execute analysis by calling a number of different AI assistants:
- Standard: detects bugs, exception handling and logic flaws.
- Best Practices: enforces Uber-specific coding conventions and style guides.
- Security: checks application-level security vulnerabilities.
🔸 Execute another prompt to check the quality of the previous step, assign a confidence score, and merge overlapping suggestions.
🔸 Run a classifier on each generated comment and suppress categories with low developer value.
🔸 Publish the resulting comments on the PR.
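The grading and filtering stages can be sketched like this; the confidence threshold and category names are my assumptions, and the real second-pass LLM call is stubbed out:

```python
# Sketch of the post-processing stages: grade each generated comment,
# drop low-confidence ones, and suppress low-value categories.

SUPPRESSED_CATEGORIES = {"style-nit"}   # hypothetical low-developer-value class
MIN_CONFIDENCE = 0.7                    # illustrative threshold

def grade(comment):
    """Stand-in for the second LLM pass that assigns a confidence score."""
    return comment["confidence"]        # in reality: another model call

def filter_comments(comments):
    kept = []
    for c in comments:
        if grade(c) < MIN_CONFIDENCE:
            continue                    # likely hallucination, drop it
        if c["category"] in SUPPRESSED_CATEGORIES:
            continue                    # noisy category, suppress
        kept.append(c)
    return kept

raw = [
    {"text": "possible nil deref", "category": "bug", "confidence": 0.9},
    {"text": "rename variable",    "category": "style-nit", "confidence": 0.95},
    {"text": "maybe race?",        "category": "bug", "confidence": 0.4},
]
print([c["text"] for c in filter_comments(raw)])  # ['possible nil deref']
```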
The authors report that the whole process takes around 4 minutes and is already integrated with all of Uber's monorepos: Go, Java, Android, iOS, TypeScript, and Python.
One more interesting point: for code analysis and comment grading, two different models were used: Claude-4-Sonnet and OpenAI o4-mini-high.
As you can see, more and more AI systems work in multiple stages, where one AI checks the results of another. This pattern is becoming popular, and it shows really good results in removing noise and decreasing the number of hallucinations.
#engineering #ai #usecase
Write It Down
Have you ever been in meetings where people start yelling at each other? Or don't listen to each other? I've been there, and I can say: such situations are very difficult to manage and fix.
There is one tip I learned at one of my soft skills trainings:
"If someone is yelling at you, start writing down what they say. It’s almost impossible to yell at someone who's taking notes on each word you said."
And you know what? It works perfectly well 👍.
Now when things start heating up, I open Notepad++, write down all the points, ask clarifying questions, and confirm I got it right. In online meetings, I share my screen so everyone can see my notes.
So next time you're in a meeting where the discussion becomes too emotional, keep calm and just write everything down.
#softskills #tips #leadership
Simple Prompt Techniques
GenAI continues to revolutionize the way we perform our tasks, and it really simplifies parts of the daily routine. But to do that efficiently, you need to use the right prompts. The rule is simple: the better you specify the request, the better the results you get.
So I’d like to share a few simple prompting methods that I’ve found really helpful.
RTF
It's perfect for simple tasks. With RTF, you write your prompts in the following way:
🔸 Role: AI role and area of expertise.
🔸 Task: Task or question description.
🔸 Format: Output format or structure: code snippet, text, a specific document, JSON structure, etc.
Example:
Role: You are an experienced Go developer.
Task: Analyze this Go function and suggest improvements to error handling and HTTP client reuse.
Format: Return a code snippet with inline comments explaining improvements.
RISEN
This framework suits more complex tasks:
🔸 Role: AI role and area of expertise.
🔸 Instructions: Task or question description. The more details you specify, the better the output.
🔸 Steps: Steps to perform to complete the task.
🔸 Expectations: Goal of the output, what you aim to achieve. It can include examples, output format and other guidelines.
🔸 Narrowing: Limitations, restrictions, or what to focus on.
Example:
Role: You are an SRE engineer.
Instructions: Prepare outage report data [based on the provided details].
Steps: 1) Summarize timeline, 2) Identify root cause, 3) Suggest prevention.
Expectation: Output an incident report in Markdown with Summary, Impact, Root Cause, Action Items.
Narrowing: Keep it management-friendly but with enough technical detail for engineers.
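Both frameworks are essentially structured templates, so they're easy to automate. A tiny helper like this (my own sketch, not part of either framework) can compose the sections into a prompt:

```python
# A tiny helper that turns RTF / RISEN fields into a prompt string.
# The frameworks come from the post; the helper itself is just a sketch.

def build_prompt(**sections) -> str:
    """Join labeled sections in the order given, skipping empty ones."""
    return "\n".join(
        f"{name.capitalize()}: {text}" for name, text in sections.items() if text
    )

rtf = build_prompt(
    role="You are an experienced Go developer.",
    task="Analyze this Go function and suggest improvements.",
    format="Return a code snippet with inline comments.",
)
print(rtf)
```

The same helper works for RISEN; just pass role, instructions, steps, expectations, and narrowing as keyword arguments.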
I hope these prompt techniques will be useful for you as well.
#ai #tips
Measuring System Complexity
I think we can all agree that the less complex our systems are, the easier they are to modify, operate and troubleshoot. But how can we properly measure complexity?
The most popular answers involve cyclomatic complexity or lines of code. But have you ever tried to use them in practice? I found them impractical and not actionable for huge codebases. They will always show you some numbers confirming that the system is big and complex. Nothing new, actually 🙃
I found more practical alternatives in the Google SRE book:
🔸 Training Time: Time to onboard a new team member to the team.
🔸 Explanation Time: Time to explain high-level architecture of the service.
🔸 Administrative Diversity: Number of ways to configure similar settings in different parts of the system.
🔸 Diversity of Deployed Configurations: Number of configurations that are deployed in production. It can include installed services, their versions, feature flags, environment-specific parameters.
🔸 Age of the System: Older systems tend to be more complex and fragile.
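For example, "Diversity of Deployed Configurations" can be approximated by counting distinct configuration fingerprints across production deployments. A minimal sketch (the deployment records and fields below are illustrative):

```python
# Sketch: approximate "Diversity of Deployed Configurations" by counting
# distinct configuration fingerprints across deployments.
import hashlib
import json

def config_fingerprint(deployment: dict) -> str:
    """Stable hash of the settings that make a deployment unique."""
    canonical = json.dumps(deployment, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

def configuration_diversity(deployments: list[dict]) -> int:
    return len({config_fingerprint(d) for d in deployments})

deployments = [
    {"service": "api", "version": "1.4", "flags": {"cache": True}},
    {"service": "api", "version": "1.4", "flags": {"cache": True}},   # identical
    {"service": "api", "version": "1.4", "flags": {"cache": False}},  # differs
]
print(configuration_diversity(deployments))  # 2 distinct configurations
```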
Of course, these metrics are not mathematically precise, but they provide high-level indicators of the overall complexity of the existing architecture, not just individual blocks of code. And most importantly, they show which direction we should take to improve the situation.
#engineering #systemdesign
Pipeline Patterns
Today we cannot imagine our CI/CD processes without pipelines. They’re everywhere: building, linting, testing, verifying compliance, deploying, and even handling maintenance tasks.
Have you ever looked at the internals of those pipelines? I have: it's often a complete mess.
So it's no big surprise that someone started thinking about how to write pipelines in a resource-efficient and easy-to-support way. That's exactly one of the topics from the recent NDC Oslo conference: Pipeline Patterns and Antipatterns by Daniel Raniz Raneland.
It may not be rocket science, but it's a good set of useful recipes:
🔸 Right pipeline for the job: Select only the steps required for the task. For example, in a build pipeline we can execute unit tests on PRs and on main, but we should not execute them in the nightly CI with integration tests.
🔸 Conditional steps: Define logic to skip unneeded steps. For example, if you only change docs, you don't need to run the build and tests.
🔸 Step result reuse: Use artifacts from one step as input to other steps.
🔸 Fail fast: Put the steps that fail most frequently at the beginning of the pipeline.
🔸 Parallel run: Execute steps in parallel where possible.
🔸 Isolation: The result of one pipeline should not affect the results of another.
🔸 Artifact housekeeping: Define cleanup policies for artifacts.
🔸 Reasonable HWE: Carefully define the HWE required to execute pipeline steps.
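The "fail fast" recipe, for example, can be sketched as reordering steps by their historical failure rate; the step names and rates below are illustrative:

```python
# Sketch of the "fail fast" recipe: order pipeline steps so that the ones
# that historically fail most often run first (cheap-and-flaky before
# slow-and-stable). The failure-rate numbers are made up for illustration.

def fail_fast_order(steps):
    """steps: list of (name, historical_failure_rate) pairs."""
    return [name for name, rate in sorted(steps, key=lambda s: s[1], reverse=True)]

steps = [
    ("deploy-staging", 0.01),
    ("lint", 0.20),          # fails often and is cheap: run it first
    ("unit-tests", 0.10),
    ("integration-tests", 0.05),
]
print(fail_fast_order(steps))
# ['lint', 'unit-tests', 'integration-tests', 'deploy-staging']
```

A real implementation would also weigh step duration, not just failure rate, but the idea stays the same: surface likely failures as early and as cheaply as possible.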
The key idea from the talk is that we should treat pipelines like any other software and apply the same architecture principles and best practices as for any other application.
#engineering
YouTube
Pipeline Patterns and Antipatterns - Things your Pipeline Should (Not) Do - Daniel Raniz Raneland
This talk was recorded at NDC Oslo in Oslo, Norway. #ndcoslo #ndcconferences #developer #softwaredeveloper
Attend the next NDC conference near you:
https://ndcconferences.com
https://ndcoslo.com/
Subscribe to our YouTube channel and learn every day:…
The Art of Systems Thinking
We live in a world of systems. They are everywhere: businesses, families, teams, software, and even ourselves. All of these are examples of complex systems. That's why systems thinking is a key skill: it allows you to see common system patterns, apply changes, predict side effects, and adapt to the results of the implemented changes.
I'd like to share one of the books on this topic - The Art of Systems Thinking: Essential Skills for Creativity and Problem Solving by Joseph O'Connor and Ian McDermott.
Some Takeaways:
🔸 A system is more than just the sum of its parts. If you analyze system parts separately, you can’t predict the behavior of the system.
🔸 Stable systems are more resistant to change.
🔸 It's not possible to make an isolated change within a system. It will always create side effects.
🔸 The leverage principle: systems resist any change. But if you understand the system well, you can find its weak points. A small shift there can trigger big changes.
🔸 Connections between system parts create feedback loops. They come in two types:
- Reinforcing: changes keep going in the same direction, like a snowball rolling downhill.
- Balancing: changes push the system to restore balance, like a thermostat keeping a set temperature.
🔸 Changes don't happen immediately. If you don't account for this delay, it can lead to overreaction and oscillations.
🔸 To change a system, you need to destroy the old state and build a new stable one.
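The two loop types are easy to see in a few lines of simulation; the growth rate and the thermostat target below are arbitrary illustrative numbers:

```python
# Tiny illustration of the two feedback-loop types.

def reinforcing(x0=1.0, rate=0.5, steps=5):
    """Change feeds on itself: each step grows proportionally (snowball)."""
    x = x0
    for _ in range(steps):
        x += rate * x
    return x

def balancing(x0=15.0, target=21.0, gain=0.5, steps=5):
    """Change pushes back toward a set point (thermostat)."""
    x = x0
    for _ in range(steps):
        x += gain * (target - x)
    return x

print(round(reinforcing(), 2))  # 7.59: keeps growing away from the start
print(round(balancing(), 2))    # 20.81: converges toward the target of 21
```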
Of course, these are just the basics. The book goes deeper into our mental models and cognitive traps, learning principles, how a shared mindset shapes people's behavior (e.g., the tragedy of the commons), how escalations work, and what lies behind major social and financial patterns.
The book is easy to read: it's written in simple language with a lot of real-life examples.
So if the topic sounds interesting, I recommend checking out the whole book.
#booknook #softskills #thinking
Kafka 4.1 Release
At the beginning of September, Kafka 4.1 was released. It doesn't contain any big surprises, but it follows the overall industry direction of improving security and operability.
Noticeable changes:
🔸 Preview state for Kafka Queues (detailed overview here). It's still not recommended for production, but it's a good time to check how it works and what scenarios it really covers.
🔸 Early access to the Streams Rebalance protocol. It moves rebalance logic to the broker side. Initially the approach was implemented for consumers, and now it's extended to streams (KIP-1071).
🔸 Ability for plugins and connectors to register their own metrics via the Monitorable interface (KIP-877).
🔸 Metrics naming unification between consumers and producers (KIP-1109). Previously, the Kafka consumer replaced periods (.) in topic names in metrics with underscores (_), while the producer kept the topic name unchanged. Now both producers and consumers preserve the original topic name format. The old metrics will be removed in Kafka 5.0.
🔸 OAuth jwt-bearer grant type support in addition to client_credentials (KIP-1139).
🔸 Ability to enforce explicit naming for internal topics (like changelog and repartition topics). A new configuration flag prevents Kafka Streams from starting if any of its internal topics have auto-generated names (KIP-1111).
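To illustrate the KIP-1109 naming change, here is a simplified sketch of the old versus new behavior (not actual Kafka code; the exact metric layout is simplified):

```python
# Illustration of the KIP-1109 change: before Kafka 4.1 the consumer
# replaced periods in topic names with underscores in its metrics,
# while the producer kept them. From 4.1 both preserve the name.

def consumer_metric_topic_pre_41(topic: str) -> str:
    return topic.replace(".", "_")      # old, inconsistent consumer behavior

def metric_topic_41(topic: str) -> str:
    return topic                        # 4.1+: both clients preserve the name

topic = "orders.eu.created"
print(consumer_metric_topic_pre_41(topic))  # orders_eu_created
print(metric_topic_41(topic))               # orders.eu.created
```

If your dashboards match consumer metrics by the underscored topic names, they will need updating before the old metrics are removed in 5.0.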
The full list of changes can be found in the release notes and the official upgrade recommendations.
#news #technologies