Leveraging SAM for Single-Source Domain Generalization in Medical Image Segmentation
📄 https://arxiv.org/pdf/2401.02076.pdf
💻 https://github.com/SARIHUST/SAMMed
@computer_science_and_programming
Improving API Performance with Database Connection Pooling
The diagram below shows 5 common API optimization techniques. Today, I’ll focus on number 5, connection pooling. It is not as trivial to implement as it sounds for some languages.
When fulfilling API requests, we often need to query the database. Opening a new connection for every API call adds overhead. Connection pooling helps avoid this penalty by reusing connections.
How Connection Pooling Works
1. For each API server, establish a pool of database connections at startup.
2. Workers share these connections, requesting one when needed and returning it after.
Challenges for Some Languages
However, setting up connection pooling can be more complex for languages like PHP, Python and Node.js. These languages handle scale by having multiple processes, each serving a subset of requests.
- In these languages, database connections get tied to each process.
- Connections can't be efficiently shared across processes. Each process needs its own pool, wasting resources.
In contrast, languages like Java and Go use threads within a single process to handle requests. Connections are bound at the application level, allowing easy sharing of a centralized pool.
Connection Pooling Solution
Tools like PgBouncer work around these challenges by proxying connections at the application level.
PgBouncer creates a centralized pool that all processes can access. No matter which process makes the request, PgBouncer efficiently handles the pooling.
At high scale, all languages can benefit from running PgBouncer on a dedicated server. Now the connection pool is shared over the network for all API servers. This conserves finite database connections.
Connection pooling improves efficiency, but its implementation complexity varies across languages.
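The pool-at-startup, acquire-and-return flow described above can be sketched in a few lines of Python. This is a toy single-process pool for illustration only — the `ConnectionPool` class and the SQLite in-memory stand-in are hypothetical, not any driver's real API; production code would reach for a driver's built-in pool or PgBouncer:

```python
import queue
import sqlite3

class ConnectionPool:
    """Toy fixed-size pool: connections are opened once at startup and reused."""

    def __init__(self, size, connect):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(connect())  # pay the connection cost up front

    def acquire(self, timeout=5.0):
        # Blocks until a connection is returned, rather than opening a new one.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        self._idle.put(conn)

# SQLite in-memory connections stand in for a real database here.
pool = ConnectionPool(size=2, connect=lambda: sqlite3.connect(":memory:"))
conn = pool.acquire()
value = conn.execute("SELECT 1 + 1").fetchone()[0]
pool.release(conn)  # hand the connection back for the next worker
```

Note this pool lives inside one process, which is exactly why multi-process runtimes need an external proxy like PgBouncer to share a pool across workers.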
If you design complex systems, you'll love sequence diagrams
Complex system architectures can quickly become tangled and hard to follow. Enter sequence diagrams! They keep your design neat and easily understandable.
For example, check out the diagram below. It depicts a client/server interaction, clearly differentiating between a cache hit and a cache miss. This is a prime example of how visual aids simplify complex interactions.
Sequence diagrams are a must when you aim to:
- 🚀 Map out end-to-end system workflows.
- 🔍 Clarify interactions between components.
- 📚 Produce clear and concise documentation.
- 🔧 Identify design flaws.
I have two favorites for creating sequence diagrams: WebSequenceDiagrams and Mermaid. Both let you build sequence diagrams from plain text.
Do you have a go-to tool for crafting good-looking sequence diagrams? Drop your suggestions below! 👇
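The cache hit/miss interaction described above can be expressed as plain text in Mermaid — a hypothetical sketch of the kind of diagram the post refers to:

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant Cache
    participant DB
    Client->>Server: GET /item/42
    Server->>Cache: lookup(42)
    alt cache hit
        Cache-->>Server: cached item
    else cache miss
        Cache-->>Server: not found
        Server->>DB: SELECT item 42
        DB-->>Server: item row
        Server->>Cache: store(42, item)
    end
    Server-->>Client: 200 OK + item
```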
Ever wondered how Docker 🐳 works?
1. Docker Build 🏗
2. Docker Push ☁️
3. Docker Run 🏃
4. Docker Pull 🚚
5. Docker Images 🖼
The 18 Most Commonly Used Java List Methods
1. add(E element) - Adds the specified element to the end of the list.
2. addAll(Collection<? extends E> c) - Adds all elements of the specified collection to the end of the list.
3. remove(Object o) - Removes the first occurrence of the specified element from the list.
4. remove(int index) - Removes the element at the specified position in the list.
5. get(int index) - Returns the element at the specified position in the list.
6. set(int index, E element) - Replaces the element at the specified position in the list with the specified element.
7. indexOf(Object o) - Returns the index of the first occurrence of the specified element in the list.
8. contains(Object o) - Returns true if the list contains the specified element.
9. size() - Returns the number of elements in the list.
10. isEmpty() - Returns true if the list contains no elements.
11. clear() - Removes all elements from the list.
12. toArray() - Returns an array containing all the elements in the list.
13. subList(int fromIndex, int toIndex) - Returns a view of the portion of the list between the specified fromIndex, inclusive, and toIndex, exclusive.
14. addAll(int index, Collection<? extends E> c) - Inserts all elements of the specified collection into the list, starting at the specified position.
15. iterator() - Returns an iterator over the elements in the list.
16. sort(Comparator<? super E> c) - Sorts the elements of the list according to the specified comparator.
17. replaceAll(UnaryOperator<E> operator) - Replaces each element of the list with the result of applying the given operator.
18. forEach(Consumer<? super E> action) - Performs the given action for each element of the list until all elements have been processed or the action throws an exception.
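A quick runnable tour of several of these methods in one place (the values are illustrative only):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        // add / addAll build the list; note List.of(...) itself is immutable,
        // so we copy it into an ArrayList first.
        List<String> names = new ArrayList<>(List.of("carol", "alice", "bob"));
        names.add("dave");                         // [carol, alice, bob, dave]
        names.sort(Comparator.naturalOrder());     // [alice, bob, carol, dave]
        names.replaceAll(String::toUpperCase);     // [ALICE, BOB, CAROL, DAVE]
        names.remove("BOB");                       // remove(Object) removes by value
        System.out.println(names);                 // [ALICE, CAROL, DAVE]
        System.out.println(names.indexOf("DAVE")); // 2
        System.out.println(names.subList(0, 2));   // view of [ALICE, CAROL]
    }
}
```

Watch the two `remove` overloads: `remove("BOB")` removes by value, while `remove(1)` on a `List<Integer>` removes by index, a classic source of bugs.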
DevOps is a set of practices that combines software development and IT operations. It aims to shorten the software development life cycle and provide continuous delivery with high software quality. 🚀
DevOps has several phases
Plan: This phase involves defining the goals, scope, and requirements of the software project. It also includes identifying the stakeholders, risks, and resources needed. 📝
Build: This phase involves writing, compiling, and packaging the code into executable units. It also includes using version control, code review, and configuration management tools. 🔧
Test: This phase involves verifying that the software meets the quality standards and functional specifications. It also includes using automated testing, performance testing, and security testing tools. 🧪
Deploy: This phase involves releasing the software to the production environment or to the end-users. It also includes using deployment automation, orchestration, and monitoring tools. 🚚
Operate: This phase involves running and maintaining the software in the production environment. It also includes using incident management, problem management, and change management tools. 🛠
Observe: This phase involves collecting and analyzing data from the software and the production environment. It also includes using logging, tracing, and metrics tools. 🔎
Continuous Feedback and Discovery: This phase involves gathering feedback from the stakeholders, users, and customers. It also includes using feedback loops, surveys, and analytics tools. It also involves discovering new opportunities, challenges, and trends. 📊
DevOps is a culture that promotes collaboration, communication, and continuous improvement. It helps to deliver software faster, better, and safer. 😊
Computer Memory Explained
Computer memory is like a workspace for your computer. It stores data and instructions that the computer needs to access quickly.
Internal Memory:
1. ROM (Read-Only Memory):
- PROM (Programmable ROM): Programmable once by the user post-manufacturing. 🖊
- EPROM (Erasable Programmable ROM): Can be erased with ultraviolet light and reprogrammed. ☀️🔁
- EEPROM (Electrically Erasable Programmable ROM): Can be erased and reprogrammed electrically, multiple times. ⚡️
2. RAM (Random Access Memory):
- SRAM (Static RAM): Retains data as long as power is supplied, no need to refresh, faster than DRAM. ⚡️💨
- DRAM (Dynamic RAM): Stores data in capacitors that must be refreshed periodically, widely used. 🔄
- SDRAM (Synchronous DRAM): Syncs with CPU clock speed for improved performance. ⏱
- RDRAM (Rambus DRAM): High bandwidth memory with Rambus technology. 🚀
- DDR SDRAM (Double Data Rate SDRAM): Transfers data on both rising and falling clock edges.
- DDR1: First generation, higher speed and bandwidth than SDRAM. 🆕
- DDR2: Improved version of DDR1 with lower power consumption and higher speeds. 🔋💨
- DDR3: Higher speeds and reduced power consumption over DDR2. 🔋➕💨
- DDR4: Higher module density and increased performance with reduced voltage. 🔋🆙🎛
External Memory:
1. HDD (Hard Disk Drive): Uses spinning disks to read/write data, traditional storage device. 🔄💾
2. SSD (Solid State Drive): Non-volatile flash memory for faster speed than HDDs. 🚀💾
3. CD (Compact Disc): Optical disc for storing digital data, commonly used for music and software.
How Git Works - From Working Directory to Remote Repository
[1]. Working Directory:
Your project starts here. The working directory is where you actively make changes to your files.
[2]. Staging Area (Index):
After modifying files, use git add to stage changes. This prepares them for the next commit, acting as a checkpoint.
[3]. Local Repository:
Upon staging, execute git commit to record changes in the local repository. Commits create snapshots of your project at specific points.
[4]. Stash (Optional):
If needed, use git stash to temporarily save changes without committing. Useful when switching branches or performing other tasks.
[5]. Remote Repository:
The remote repository, hosted on platforms like GitHub, is a version of your project accessible to others. Use git push to send local commits and git pull to fetch remote changes.
[6]. Remote Branch Tracking:
Local branches can be set to track corresponding branches on the remote. This eases synchronization with git pull or git push.
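Steps 1–4 can be walked through end to end in a scratch repository; the remote steps are shown as comments because they need a real remote (the URL below is a placeholder):

```shell
mkdir demo && cd demo
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

echo "hello" > app.txt           # 1. edit files in the working directory
git add app.txt                  # 2. stage the change into the index
git commit -q -m "Add app.txt"   # 3. record a snapshot in the local repository

echo "wip" >> app.txt
git stash                        # 4. shelve uncommitted changes...
git stash pop                    # ...and bring them back later

# 5-6. publish and set up branch tracking (requires an actual remote):
# git remote add origin git@github.com:user/demo.git
# git push -u origin main
```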
DevOps Explained!
Plan:
- Defines project goals, scope, and requirements, identifying stakeholders and resources. 📝
Build:
- Involves coding, compiling, and packaging, emphasizing version control and code management. 🔧
Test:
- Ensures software aligns with quality and functional standards, utilizing automated and security testing. 🧪
Deploy:
- Releases software precisely using deployment automation and monitoring tools. 🚚
Operate:
- Ensures operational stability, promptly addressing issues with management tools. 🛠
Observe:
- Analyzes data from software and production using logging, tracing, and metrics tools. 🔎
Continuous Feedback:
- Gathers ongoing feedback, utilizing loops, surveys, and analytics for improvement. 📊
DevOps:
- Cultivates a culture of collaboration, communication, and continuous improvement for faster, better, and safer software delivery.
UNTANGLE Spring Security Architecture 🔒
Authentication and Authorization:
- Validates user identity and orchestrates controlled resource access.
- Empowers comprehensive user authentication and nuanced authorization.
Security Filters:
- Intercepts incoming requests, meticulously enforcing security measures.
- Offers a flexible, layered security filter chain for diverse protection strategies.
Custom Authentication Providers:
- AuthenticationProvider: Extends authentication capabilities beyond the default configuration, enabling tailored authentication strategies and seamless integration.
- DaoAuthenticationProvider: Takes a database-backed approach to user authentication, checking submitted credentials against stored user records.
Authentication Manager:
- Orchestrates the authentication process, coordinating various authentication providers.
- Serves as a pivotal component in managing user identity verification.
Token-based Security (JWT):
- Implements advanced token-based authentication for stateless communication.
- Facilitates secure interaction without the need for server-side storage.
Session Management:
- Efficiently manages user sessions, mitigating session-related risks.
- Provides adaptability for session creation, tracking, and invalidation.
Authentication Tokens:
- UsernamePasswordAuthenticationToken: Represents user credentials for authentication purposes.
- Leverages usernames and passwords for robust user verification.
Add/Remove Authentication Token:
- Dynamically enables the addition and removal of authentication tokens.
- Ensures real-time control over user authentication, promoting flexibility.
𝗟𝗲𝗮𝗿𝗻 𝗳𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹𝘀, 𝗻𝗼𝘁 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀
Have you ever wondered why some technologies are still with us, and some disappeared? Here is 𝘁𝗵𝗲 𝗟𝗶𝗻𝗱𝘆 𝗘𝗳𝗳𝗲𝗰𝘁 to explain it. This effect tells me that 𝗯𝘆 𝘁𝗵𝗲 𝘁𝗶𝗺𝗲 𝗜 𝗿𝗲𝘁𝗶𝗿𝗲, 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿𝘀 𝘄𝗶𝗹𝗹 𝘀𝘁𝗶𝗹𝗹 𝗯𝗲 𝘂𝘀𝗶𝗻𝗴 𝗖# 𝗮𝗻𝗱 𝗦𝗤𝗟. It is a concept in technology and innovation that suggests that the future life expectancy of a non-perishable item is proportional to its current age. In other words, the longer an item has been in use, the longer it is likely to continue to be used.
The concept was named after Lindy's delicatessen in New York City, and Nassim Nicholas Taleb popularized it in his book "Antifragile." According to Taleb, the Lindy effect applies to many things, including technologies, ideas, and cultures, and helps gauge their potential longevity.
In software development, we see that 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀 𝗰𝗼𝗺𝗲 𝗮𝗻𝗱 𝗴𝗼, 𝗯𝘂𝘁 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲𝘀 𝘀𝘂𝗰𝗵 𝗮𝘀 𝗦𝗤𝗟 𝗼𝗿 𝗖# 𝗮𝗻𝗱 𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝘀 𝘀𝘂𝗰𝗵 𝗮𝘀 𝗢𝗯𝗷𝗲𝗰𝘁-𝗼𝗿𝗶𝗲𝗻𝘁𝗲𝗱 𝗽𝗿𝗼𝗴𝗿𝗮𝗺𝗺𝗶𝗻𝗴 𝗼𝗿 𝗦𝗢𝗟𝗜𝗗 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 𝘀𝘁𝗮𝘆. All the energy I put into learning those technologies 10-15 years ago continues to support my work today. Some things changed, but the fundamentals stayed and even got better.
So, try to 𝗹𝗲𝗮𝗿𝗻 𝘁𝗵𝗶𝗻𝗴𝘀 𝘁𝗵𝗮𝘁 𝗱𝗼𝗻'𝘁 𝗰𝗵𝗮𝗻𝗴𝗲 (quote from Jeff Bezos). Focus on foundations, not frameworks. I've been doing this for two decades now.
𝗚𝗶𝘁 𝗠𝗲𝗿𝗴𝗲 𝘃𝘀 𝗥𝗲𝗯𝗮𝘀𝗲
One of the most powerful Git features is branching. Yet, while working with branches, we must integrate changes from one branch into another, and there is more than one way to do that.
We have two ways to do it:
𝟭. 𝗠𝗲𝗿𝗴𝗲
When you merge Branch A into Branch B (with 𝚐𝚒𝚝 𝚖𝚎𝚛𝚐𝚎), Git creates a new merge commit. This commit has two parents, one from each branch, symbolizing the confluence of histories. It's a non-destructive operation, preserving the exact history of your project, warts and all. Merges are particularly useful in collaborative environments where maintaining the integrity and chronological order of changes is essential. Yet, merge commits can clutter the history, making it harder to follow specific lines of development.
𝟮. 𝗥𝗲𝗯𝗮𝘀𝗲
When you rebase Branch A onto Branch B (with 𝚐𝚒𝚝 𝚛𝚎𝚋𝚊𝚜𝚎), you're essentially saying, "Let's pretend these changes from Branch A were made on top of the latest changes in Branch B." Rebase rewrites the project history by creating new commits for each commit in the original branch. This results in a much cleaner, straight-line history. Yet, it could be problematic if multiple people work on the same branch, as rebasing rewrites history, which can be challenging if others have pulled or pushed the original branch.
So, when to use them:
🔹 𝗨𝘀𝗲 𝗺𝗲𝗿𝗴𝗶𝗻𝗴 𝘁𝗼 𝗽𝗿𝗲𝘀𝗲𝗿𝘃𝗲 𝘁𝗵𝗲 𝗰𝗼𝗺𝗽𝗹𝗲𝘁𝗲 𝗵𝗶𝘀𝘁𝗼𝗿𝘆, especially on shared branches or for collaborative work. It's ideal for feature branches to merge into a main or develop branch.
🔹 𝗨𝘀𝗲 𝗿𝗲𝗯𝗮𝘀𝗶𝗻𝗴 𝗳𝗼𝗿 𝗽𝗲𝗿𝘀𝗼𝗻𝗮𝗹 𝗯𝗿𝗮𝗻𝗰𝗵𝗲𝘀 or when you want a clean, linear history for easier tracking of changes. Remember to rebase locally and avoid pushing rebased branches to shared repositories. Also, be aware 𝗻𝗼𝘁 𝘁𝗼 𝗿𝗲𝗯𝗮𝘀𝗲 𝗽𝘂𝗯𝗹𝗶𝗰 𝗵𝗶𝘀𝘁𝗼𝗿𝘆. If your branch is shared with others, rebasing can rewrite history in a way that is disruptive and confusing to your collaborators.
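The two strategies can be compared in a throwaway repository. This sketch shows the rebase-then-fast-forward flow that yields a linear history (branch and file names are illustrative):

```shell
mkdir rebase-demo && cd rebase-demo && git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo base > file.txt && git add file.txt && git commit -q -m "base"
main=$(git symbolic-ref --short HEAD)   # default branch name (main or master)

git checkout -q -b feature
echo feature >> file.txt && git commit -q -am "feature work"

git checkout -q "$main"
echo extra > other.txt && git add other.txt && git commit -q -m "main moves on"

# git merge feature here would create a two-parent merge commit.
# Rebasing instead replays the feature commit on top of the updated main...
git checkout -q feature
git rebase -q "$main"
# ...so merging back is a fast-forward: linear history, no merge commit.
git checkout -q "$main"
git merge -q --ff-only feature
```

Because the rebase rewrote the feature commit, this flow is only safe while the feature branch is private to you.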
𝗧𝗵𝗲 𝗕𝗲𝘀𝘁 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗕𝗼𝗼𝗸𝘀 𝗜𝗻 𝗘𝘃𝗲𝗿𝘆 𝗖𝗮𝘁𝗲𝗴𝗼𝗿𝘆
Check out this list of books tagged with software architecture. They are ranked by Goodreads score, with a few simple rules applied: each book must be relevant to software architecture, not obsolete, tech-agnostic, and rated above 3.5 on average. The ranking factors in the number of written reviews, the average rating, the number of ratings, and the publishing date.
💻 https://github.com/mhadidg/software-architecture-books
Implementing RSA in Python from Scratch
🔗 https://coderoasis.com/implementing-rsa-from-scratch-in-python/
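As a taste of what such an implementation involves, here is textbook RSA with toy primes — insecure by design, and a generic sketch rather than the article's actual code:

```python
from math import gcd

# Toy key generation (real keys use random primes of ~1024+ bits each).
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # Euler's totient: 3120
e = 17                    # public exponent, must be coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

m = 42                    # message encoded as an integer < n
c = pow(m, e, n)          # encrypt: c = m^e mod n
assert pow(c, d, n) == m  # decrypt: m = c^d mod n round-trips
```

Real RSA adds padding (e.g. OAEP) on top of this math; raw "textbook" RSA as shown is malleable and unsafe.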
𝗛𝗼𝘄 𝘁𝗼 𝘂𝘀𝗲 𝘂𝗻𝗱𝗼𝗰𝘂𝗺𝗲𝗻𝘁𝗲𝗱 𝗪𝗲𝗯 𝗔𝗣𝗜𝘀?
There are several methods to tackle this issue, primarily involving intercepting traffic originating from a web API. If the goal is to intercept HTTP/HTTPS traffic from various sources, one approach involves manually constructing a custom sniffer. However, this method can be burdensome as it requires tailoring the solution for each API individually.
Now, Postman offers a way to sniff traffic from any API that uses HTTP/HTTPS. What is nice about this feature is that the captured traffic lets you generate a Postman collection, which you can then use to test, evaluate, and document the captured APIs.
Check more at the following link:
🔗 https://blog.postman.com/introducing-postman-new-improved-system-proxy/
There are several methods to tackle this issue, primarily involving intercepting traffic originating from a web API. If the goal is to intercept HTTP/HTTPS traffic from various sources, one approach involves manually constructing a custom sniffer. However, this method can be burdensome as it requires tailoring the solution for each API individually.
Now, Postman offers a solution to sniff traffic from any API with the HTTP/HTTP protocol. What is good about this feature is that traffic capture enables you to generate a Postman collection, which you can then use to test, evaluate, and document captured APIs.
Check more at the following link:
Please open Telegram to view this post
VIEW IN TELEGRAM
𝗛𝗼𝘄 𝘁𝗼 𝗱𝗼 𝗰𝗼𝗱𝗲 𝗿𝗲𝘃𝗶𝗲𝘄𝘀 𝗽𝗿𝗼𝗽𝗲𝗿𝗹𝘆
Code review is an essential step in the software development lifecycle. It enables developers to significantly improve code quality. It resembles the authoring of a book: the author writes the story, and an editor checks it to ensure there are no mistakes like mixing up "you're" with "your". Code review in this context means examining and assessing other people's code.
There are different 𝗯𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝗼𝗳 𝗮 𝗰𝗼𝗱𝗲 𝗿𝗲𝘃𝗶𝗲𝘄: it ensures consistency in design and implementation, optimizes code for better performance, offers an opportunity for learning, knowledge sharing, and mentoring, and promotes team cohesion.
What should you look for in a code review? Try to look for things such as:
🔹 𝗗𝗲𝘀𝗶𝗴𝗻 (does this integrate well with the rest of the system, and do the interactions between components make sense?)
🔹 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹𝗶𝘁𝘆 (does the change do what the developer intended?)
🔹 𝗖𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 (is this code more complex than it should be?)
🔹 𝗡𝗮𝗺𝗶𝗻𝗴 (is the naming clear and consistent?)
🔹 𝗘𝗻𝗴. 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 (SOLID, KISS, DRY)
🔹 𝗧𝗲𝘀𝘁𝘀 (are the different kinds of tests used appropriately, and is code coverage adequate?)
🔹 𝗦𝘁𝘆𝗹𝗲 (does it follow the style guidelines?)
🔹 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻, etc.
Encryption and Decryption using Linear Algebra with C++
This project implements a text encryption and decryption system using a matrix-based encryption technique. It serves as an educational and practical exploration of such techniques, demonstrating the fundamental concepts of encryption and decryption in a user-friendly manner.
💻 https://github.com/farukalpay/TextEncryptionWithLinearAlgebra
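As a rough illustration of the idea (the linked repo is C++, and its exact scheme may differ), here is a classic Hill-cipher-style sketch in Python: the plaintext is split into vectors, which are multiplied by an invertible key matrix modulo 26, and decryption applies the matrix inverse:

```python
# Hill-cipher-style matrix encryption over the alphabet A-Z (educational only).
MOD = 26
KEY = [[3, 3],
       [2, 5]]  # 2x2 key matrix; its determinant must be invertible mod 26

def mat_vec(m, v):
    # Multiply a 2x2 matrix by a length-2 vector, reducing mod 26.
    return [(m[0][0] * v[0] + m[0][1] * v[1]) % MOD,
            (m[1][0] * v[0] + m[1][1] * v[1]) % MOD]

def key_inverse(k):
    # Inverse of a 2x2 matrix mod 26 via the adjugate formula.
    det = (k[0][0] * k[1][1] - k[0][1] * k[1][0]) % MOD
    det_inv = pow(det, -1, MOD)  # modular inverse (Python 3.8+)
    return [[( det_inv * k[1][1]) % MOD, (-det_inv * k[0][1]) % MOD],
            [(-det_inv * k[1][0]) % MOD, ( det_inv * k[0][0]) % MOD]]

def encrypt(text):
    nums = [ord(c) - 65 for c in text.upper()]
    if len(nums) % 2:
        nums.append(23)  # pad odd-length input with 'X'
    out = []
    for i in range(0, len(nums), 2):
        out += mat_vec(KEY, nums[i:i + 2])
    return ''.join(chr(n + 65) for n in out)

def decrypt(cipher):
    inv = key_inverse(KEY)
    nums = [ord(c) - 65 for c in cipher]
    out = []
    for i in range(0, len(nums), 2):
        out += mat_vec(inv, nums[i:i + 2])
    return ''.join(chr(n + 65) for n in out)
```

Note this is a demonstration of the linear-algebra mechanics, not a secure cipher: Hill-style schemes fall to known-plaintext attacks.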
𝗛𝗼𝘄 𝗧𝗼 𝗘𝗻𝗮𝗯𝗹𝗲 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 𝘄𝗶𝘁𝗵 𝗣𝘂𝗹𝗹 𝗥𝗲𝗾𝘂𝗲𝘀𝘁𝘀?
With Pull Requests, we lost the ability to have a proper Continuous Integration (CI) process: integration is delayed while changes wait for code review. This is where the “Ship/Show/Ask” branching strategy comes in. The point is that not all pull requests need a code review.
So, whenever we make a change, we have three options:
🔹 𝗦𝗵𝗶𝗽 - Small changes that don’t need review can be pushed directly to the main branch. Build pipelines running on the main branch execute tests and other checks, acting as a safety net for such changes. Examples: fixing a typo, bumping a minor dependency version, updating documentation.
🔹 𝗦𝗵𝗼𝘄 - Here, we want to show what has been done. You open a Pull Request from your branch and merge it without a review. You still want people to be notified of the change (to review it later) but don’t expect significant discussion. Examples: a local refactoring, fixing a bug, adding a test case.
🔹 𝗔𝘀𝗸 - Here, we make our changes, open a Pull Request, and wait for feedback. We do this because we want a proper review or need a sanity check on our approach. This is the classical way of making Pull Requests. Examples: adding a new feature, a major refactoring, a proof of concept.
𝗦𝘁𝗮𝗰𝗸 𝗢𝘃𝗲𝗿𝗳𝗹𝗼𝘄 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗜𝘀 𝗡𝗼𝘁 𝗪𝗵𝗮𝘁 𝗬𝗼𝘂 𝗠𝗲𝗮𝗻 𝗜𝘁 𝗜𝘀
In the recent interview with Scott Hanselman, 𝗥𝗼𝗯𝗲𝗿𝘁𝗮 𝗔𝗿𝗰𝗼𝘃𝗲𝗿𝗱𝗲, 𝗛𝗲𝗮𝗱 𝗢𝗳 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗮𝘁 𝗦𝘁𝗮𝗰𝗸 𝗢𝘃𝗲𝗿𝗳𝗹𝗼𝘄, revealed the story about the architecture of Stack Overflow. They handle more than 6000 requests per second, 2 billion page views per month, and they manage to render a page in about 12 milliseconds. If we think about it a bit, we could imagine they use some kind of 𝗺𝗶𝗰𝗿𝗼𝘀𝗲𝗿𝘃𝗶𝗰𝗲 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝘁𝗵𝗮𝘁 𝗿𝘂𝗻𝘀 𝗶𝗻 𝘁𝗵𝗲 𝗖𝗹𝗼𝘂𝗱 𝘄𝗶𝘁𝗵 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀.
But the story is a bit different. Their solution is 15 years old, and it is a 𝗯𝗶𝗴 𝗺𝗼𝗻𝗼𝗹𝗶𝘁𝗵𝗶𝗰 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗿𝘂𝗻𝗻𝗶𝗻𝗴 𝗼𝗻-𝗽𝗿𝗲𝗺𝗶𝘀𝗲𝘀. It is actually 𝗮 𝘀𝗶𝗻𝗴𝗹𝗲 𝗮𝗽𝗽 on IIS, which runs 200 sites. This single app is running on nine web servers and a single SQL Server (with the addition of one hot standby).
They also use 𝘁𝘄𝗼 𝗹𝗲𝘃𝗲𝗹𝘀 𝗼𝗳 𝗰𝗮𝗰𝗵𝗲: one on SQL Server with a large amount of RAM (1.5 TB), serving about 30% of DB accesses from memory, and two Redis servers (master and replica). Besides this, they have 3 tag engine servers and 3 Elasticsearch servers, which handle 34 million daily searches.
All this is handled by a 𝘁𝗲𝗮𝗺 𝗼𝗳 𝟱𝟬 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀, who manage to 𝗱𝗲𝗽𝗹𝗼𝘆 𝘁𝗼 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝗶𝗻 𝟰 𝗺𝗶𝗻𝘀 several times daily.
Their 𝗳𝘂𝗹𝗹 𝘁𝗲𝗰𝗵 𝘀𝘁𝗮𝗰𝗸 is:
🔹 C# + ASP.NET MVC
🔹 Dapper ORM
🔹 StackExchange.Redis
🔹 MiniProfiler
🔹 Jil JSON (de)serializer
🔹 Exceptional, an error logger that stores to SQL
🔹 Sigil, a .NET CIL generation helper (for when C# isn’t fast enough)
🔹 NetGain, a high-performance web socket server
🔹 Opserver, a monitoring dashboard polling most systems and feeding from Orion, Bosun, or WMI
🔹 Bosun, a backend monitoring system written in Go