TechLead Bits
About software development with common sense.
Thoughts, tips and useful resources on technical leadership, architecture and engineering practices.

Author: @nelia_loginova
In one of the previous blog posts, we broke down the Saga pattern, and I recommended avoiding it because of its high complexity. Still, it's really interesting to explore successful implementations of the pattern. Let's take a look at how HALO scaled to 11.6 million users using the Saga design pattern.

HALO is a very popular shooting game, initially introduced in 1999. At that time, the game stored all its data in a single SQL database. Growth was explosive, and the single database soon became insufficient.

So they set up a NoSQL database and partitioned it, keeping each player's data in a dedicated partition. This resolved the scaling limitations but brought new issues:
- Data writes are no longer atomic
- Partitions may hold inconsistent information
This means players can end up with different game data, which significantly impacts the game experience.

So the HALO team decided to set up a Saga:
✏️ Each partition is changed within a local transaction only
✏️ An Orchestrator manages updates across all database partitions
✏️ The state of each local transaction is stored in a durable distributed log, which allows:
- Tracking whether a sub-transaction failed
- Finding the compensating transactions that must be executed
- Tracking the state of compensating transactions
- Recovering from failures
✏️ The log is stored outside the Orchestrator, which makes the Orchestrator stateless
✏️ The Orchestrator interacts with the log to identify the local transactions or compensating actions to execute
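As an illustration, the orchestration loop described above can be sketched in a few lines of Python. This is a toy, in-memory sketch: the step names, the SagaLog class, and the rollback logic are stand-ins for HALO's real partitioned databases and durable distributed log, not their actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SagaStep:
    name: str
    execute: Callable[[], None]     # local transaction on one partition
    compensate: Callable[[], None]  # compensating action for that partition

@dataclass
class SagaLog:
    # Stand-in for a durable distributed log: records every state transition
    # so a stateless orchestrator can recover after a crash.
    entries: list = field(default_factory=list)

    def record(self, step: str, state: str) -> None:
        self.entries.append((step, state))

def run_saga(steps: list[SagaStep], log: SagaLog) -> bool:
    done = []
    for step in steps:
        log.record(step.name, "started")
        try:
            step.execute()
            log.record(step.name, "committed")
            done.append(step)
        except Exception:
            log.record(step.name, "failed")
            # Roll back already-committed steps in reverse order.
            for prev in reversed(done):
                prev.compensate()
                log.record(prev.name, "compensated")
            return False
    return True
```

If the second partition's local transaction fails, the orchestrator replays the log and compensates the first one, bringing all partitions back to a consistent state.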

This technical solution enabled HALO's further growth, and it remains a popular Xbox game series with millions of unique users.

#architecture #systemdesign #usecase
Elastic: Back to Open Source?

This week, I came across the surprising news that Elastic has decided to return Elasticsearch and Kibana to open source.

Let me remind you that 3 years ago Elastic changed their license from Apache 2.0 to the semi-proprietary Server Side Public License. Teams who actively used the ELK stack remember that. In response, AWS forked the latest open Elasticsearch and Kibana versions, creating the OpenSearch project.

Within a year, OpenSearch reached 100 million downloads and gathered 8,760 pull requests from 496 contributors around the globe. It even launched its own OpenSearch Conference in 2023. The fork became extremely popular and successful.

Now, Elastic has announced the AGPLv3 license for the Elasticsearch and Kibana products. Maybe it relates to the declining interest in Elasticsearch as a product. There is also a good article on TheNewStack that attempts to explain the reasons behind this unexpected decision, which I recommend reading if you're interested in the topic.

The main question is whether teams already using OpenSearch will switch back to Elasticsearch. I don't think so. It's easy to change a license, but much harder to win back community trust.

#news #technologies
Cassandra 5.0 is Officially Released

On September 5, the Cassandra 5 GA release was announced. Why is it important? First, Cassandra doesn't get updated very often; the last major release was in 2021. Second, the end-of-support for the 3.x series was announced at the same time. So, if you're still using 3.x, it's time to start planning an upgrade at least to 4.x.

Key changes:
- Storage Attached Indexes (SAI) (CEP-7). This is a new index implementation that replaces Cassandra secondary indexes, fixing their limitations. It allows creating indexes for multiple columns on the same table, improves query performance, reduces index storage overhead, and supports complex queries (like numeric range and boolean queries).
- Trie Memtables and Trie SSTables (CEP-19, CEP-25). This is a change of the underlying data structures for the in-memory memtables and on-disk SSTables. These storage formats utilize tries and byte-comparable representations of database keys to improve Cassandra’s performance for reads and modification operations.
- Migration to JDK 17
- Unified Compaction Strategy (UCS) (CEP-26). It combines the tiered and levelled compaction strategies into a single algorithm. UCS has been designed to maximize the speed of compactions, using a unique sharding mechanism that compacts partitioned data in parallel.
- New Aggregation and Math Functions. Cassandra 5 adds new native CQL functions like count, max, min, sum, avg, exp, log, round and others. Users can also create their own custom functions.
- Approximate Nearest Neighbor Vector Search (CEP-30). The feature uses SAI and a new Vector CQL type. Vector is an array of floating-point numbers that show how similar specific objects or entities are to one another. It is a powerful technique for finding relevant content within large document collections and it can be used as a data-layer technology for AI/ML projects.
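To give a feel for what "nearest neighbor" means here, below is a tiny Python sketch that ranks vectors by cosine similarity. This is just the underlying idea; Cassandra's SAI-backed ANN implementation is far more sophisticated, and the function names are mine, not CQL.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Similarity of two vectors: 1.0 means "pointing the same way".
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query: list[float], vectors: list[list[float]], k: int = 2):
    # Rank stored vectors by similarity to the query and keep the top k.
    return sorted(vectors, key=lambda v: cosine_similarity(query, v),
                  reverse=True)[:k]
```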

The new Cassandra release brings significant optimizations to existing functionality and some really promising new features. For full details, you can check out the release notes here.

#news #technologies
Manage Your Day

Career growth always means taking on more responsibilities within the team or company. The more responsibilities you have, the more tasks you need to manage. At some point, you may feel like a squirrel on a wheel—constantly responding to incoming requests and issues, one after another, with no time for actual work.

Do not let external requests manage your work. Manage them yourself. This sounds simple, but everyone who has been in this situation knows it's not so easy to do in practice.

Simple tips that might help:
✏️ You don't have to respond immediately to every question you receive. In most cases, nothing bad will happen if you check your messenger or email once every 2-3 hours.
✏️ You don't have to go to every meeting you're invited to. Review invitations carefully and decide which ones are really important. It’s ok to decline or ask to reschedule.
✏️ You don't need to execute every task immediately upon receiving it. Ask about priorities and deadlines, estimate the impact on other tasks, then discuss and plan accordingly.
✏️ If a task takes less than 2 minutes, just do it (That's the only principle from Getting Things Done that really works for me).
✏️ Book time on the calendar to work on important tasks. Try to reserve at least a few hours a day for focused work.
✏️ Set task priorities. I like the Covey model, which groups all tasks by importance and urgency. Choose an approach that works for you.
✏️ Don't try to keep everything in your head: write down all important ideas, tasks, agreements, requests, whatever you need to perform your job.

Additionally, I recommend reading Time Management Techniques That Actually Work. The article contains a bunch of useful recommendations on the same topic. Try different tools and methods, and see what works for you.

Also, please, feel free to share other recommendations that work for you in the comments.

#softskills #productivity
Canonical Logs

Logging is the oldest tool for troubleshooting issues with software. But relevant information is spread across many individual log lines, making it difficult or even impossible to quickly find the right details or perform any aggregation or analysis. That's where the canonical logs concept can help.

A canonical log is one long structured log line emitted at the end of a request (or any other unit of work) that includes fields with the request's key characteristics. Having that data collocated in a single information-dense line makes queries and aggregations over it faster to write and faster to run.

A canonical log can include the following information:
- HTTP verb, path, response code and status
- Authentication related information
- Request ID, Trace ID
- Error ID and error message
- Service info: name, version, revision
- Timing information: operation duration, percentiles, time spent in database queries and others
- Remaining and total rate limits
- Any other useful information for your service

I want to highlight that the log must be structured (key-value, JSON) to make it machine-readable. Structured logs can be easily indexed by many existing tools, providing the ability to search and aggregate the collected data.

A simple canonical log sample:
[2019-03-18 22:48:32.999] canonical-log-line alloc_count=9123 auth_type=api_key database_queries=34 duration=0.009 http_method=POST http_path=/v1/charges http_status=200 key_id=mk_123 permissions_used=account_write rate_allowed=true rate_quota=100 rate_remaining=99 request_id=req_123 team=acquiring user_id=usr_123
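A minimal sketch of how such a line can be produced, in Python. Field names follow the sample above; the CanonicalLog class is illustrative, not a specific library's API.

```python
import json
import time

class CanonicalLog:
    """Accumulates fields during a request; emits one line at the end."""

    def __init__(self):
        self.fields = {}
        self._start = time.monotonic()

    def set(self, **kwargs) -> None:
        # Add key characteristics as the request is processed.
        self.fields.update(kwargs)

    def emit(self) -> str:
        # One information-dense structured line for the whole request.
        self.fields["duration"] = round(time.monotonic() - self._start, 3)
        return json.dumps(self.fields, sort_keys=True)

# Usage: middleware creates the log, handlers add fields, one line is emitted.
log = CanonicalLog()
log.set(http_method="POST", http_path="/v1/charges", http_status=200,
        request_id="req_123", database_queries=34)
print(log.emit())
```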


A good practice is to formalize the log contract across services and applications. For example, a protobuf structure can be used for that purpose.

Canonical logs seem to be a lightweight, flexible, and technology-agnostic technique for improving overall system observability. They are easy to implement and extend any existing logging capabilities.

References:
- Using Canonical Log Lines for Online Visibility
- Fast and flexible observability with canonical log lines
- Logs Unchained: Exploring the benefits of Canonical Logs

#engineering #observability
True Inspiration from Pixar. Part 1: Core Values

One of the recent books that really inspired me is Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration by Ed Catmull, co-founder of Pixar Animation. It's presented as a book about the creative leadership and management principles that helped Pixar build a unique culture and become the best animation studio in the world. But it's much more than that.

What makes this book special for me? It's not just practical advice, but the story of a long journey toward a dream come true.

The book starts with Pixar's history. Did you know that Pixar originally began by creating graphic design programs and selling specialized computers to run them? From a young age, Ed Catmull dreamed of making computer-animated feature films (Disney films were all hand-drawn). But no one believed in his idea, so Pixar focused on improving how computers processed and displayed graphic data.

Things changed in 1977 with the release of Star Wars by George Lucas. The era of computer effects started. For a few years, Pixar was a part of Lucasfilm, but the dream of making a fully animated movie wasn’t achieved.

In 1986, Steve Jobs bought Pixar, and the team finally had the chance to work on their first film. Toy Story in 1995 became Pixar's triumph. It was the first fully computer-animated film in the world, it was extremely successful, and it really changed the animation industry. But it was only the beginning. It took a lot of effort to create the next movies with the same quality, scale the company culture, support creativity, and keep the company's core values through its growth. Today, Pixar is part of Disney but operates as a separate division with its own vision, ideas, and projects.

Lessons learned from that growth:
✏️ Building the right team is the foundation for success.
✏️ Focus on teamwork, not on individual talents.
✏️ Focus on the people, their habits and values; help them reveal their talents.
✏️ People are more important than ideas because people create ideas.
✏️ Quality must be set as a main condition before a project starts.

To be continued....

#booknook #softskills #leadership
Book covers for Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration

#booknook
GenAI for Legacy Systems Modernization

While most people actively write about using GenAI tools to generate new code, a new Thoughtworks publication focuses on the opposite: using AI to understand and refactor legacy systems.

What makes legacy system modernization expensive?
- Lack of knowledge about design and implementation details
- Lack of up-to-date documentation
- Lack of automated tests
- Absence of human experts
- Difficulty measuring the impact of a change

To address these challenges, the Thoughtworks team developed a tool called CodeConcise. But the authors highlight that you don't need this exact tool; the approach and ideas can be used as a reference for implementing your own solution.

Key concepts:
✏️ Treat code as data
✏️ Build Abstract Syntax Trees (ASTs) to identify entities and relationships in the code
✏️ Store these ASTs in a graph database (neo4j)
✏️ Use a comprehension pipeline that traverses the graph using multiple algorithms, such as Depth-first Search with backtracking in post-order traversal, to enrich the graph with LLM-generated explanations at various depths (e.g. methods, classes, packages)
✏️ Integrate the enriched graph with a frontend application that implements Retrieval-Augmented Generation (RAG) approach
✏️ The RAG retrieval component pulls nodes relevant to the user’s prompt, while the LLM further traverses the graph to gather more information from their neighboring nodes to provide the LLM-generated explanations at various levels of abstraction
✏️ The same enrichment pipeline can be used to generate documentation for the existing system
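The "treat code as data" idea can be sketched with Python's built-in ast module. This is a toy version: CodeConcise targets legacy languages and stores the graph in neo4j, while the sketch below only collects an in-memory list of entities and call edges.

```python
import ast

def code_graph(source: str):
    """Parse source code and extract entities plus call relationships."""
    tree = ast.parse(source)
    nodes, edges = [], []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            nodes.append(node.name)  # each class/function is a graph node
            for child in ast.walk(node):
                # Record "entity X calls function Y" as an edge.
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    edges.append((node.name, child.func.id))
    return nodes, edges
```

An LLM-driven comprehension pipeline would then traverse such a graph and attach generated explanations to each node.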

So far, the tool has been tested with several clients to generate explanations for low-level legacy code. The next goal is to improve the model to provide answers at a higher level of abstraction, keeping in mind that this might not be possible by examining the code alone.

The work looks promising and could significantly reduce the time and cost of modernizing old systems (especially those written in exotic languages like COBOL). It simplifies reverse engineering and helps generate knowledge about the current system. The authors also promised to share results on improving the current model and provide more real-life examples of the tool's usage.

#news #engineering #ai
True Inspiration from Pixar. Part 2: Protect New Ideas

This is the second part of the Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration book overview (the first part is here). It is about practices that can help protect new ideas from bureaucracy, fear of feedback, and the belief that "experienced people know best."

To support creativity and innovation, Pixar's leaders built a culture based on the following principles:

✏️ Candor. Ask people for candor, not honesty; build processes that demonstrate the value of candor at all levels. Pixar uses the Braintrust practice to train this:
- Regular offline meetings every 2-3 months.
- Teams present film fragments to identify issues.
- There is no ranking by official position or title; everyone's opinion is important, and there is no stupid or destructive feedback.
- Feedback should be focused on the problem, not the person.
- Criticism is part of improving the work, not a competition.
- Create an atmosphere of trust where all members are interested in great results.

✏️ Value of Failures. Don't fear failure. Failure is a chance to learn. Start working, get feedback, learn, and try again. The author says that Pixar's culture is unique in that it doesn't just allow people to make mistakes, it expects them to make those mistakes.

✏️ Protect Ugly Babies. New ideas are usually not beautiful, so the author calls them ugly babies. It takes time and patience for them to grow and shine. All Pixar films started as simple, sometimes awkward ideas, going through many iterations. It's important to protect new ideas from conservatism and the habit of doing things only in known ways, because that leads to predictable but mediocre results, killing true inspiration.

✏️ Change and Randomness. People don't like change because it feels unsafe or overwhelming. Asking the "What if?" question helps teams imagine possibilities and break through fear barriers. One more important note here is about rules. Rules appear for a reason, but reasons change over time and may no longer apply. Outdated rules increase bureaucracy and kill the creative atmosphere.

✏️ The Hidden. True leaders accept that employees often have a deeper understanding of problems. Managers don't need to know everything, but they must encourage open communication to get a bigger picture of what's happening in their teams. Healthy cultures encourage employees to share their opinions, report problems, and make suggestions. Otherwise, leaders can end up in dangerous information isolation.

To be continued....

#booknook #softskills #leadership
True Inspiration from Pixar. Part 3: Broadening the View

That's the third part of Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration book overview (Other parts: 1, 2).

Throughout the book, the author shares his management ideas and principles. There is one, though, that I find very interesting from a practical point of view. He says that people who live or work together tend to become closer and share similar mental models and behavior patterns (even if those models are wrong). This fact can be used to build the right team culture and create an atmosphere of creativity and innovation.

Practical tools to improve collaboration within the team or company:
✏️ Daily Meetings. Daily information exchange and teamwork analysis improve overall team productivity.
✏️ Research Trips. All our mental models are wrong; we need to regularly clean up our beliefs and get rid of clichés. Pixar sends employees to locations relevant to the movies they are working on (e.g., a real student campus for the Monsters University movie, to get a better understanding of student life and environment).
✏️ Power of Restrictions. Limited resources help to focus on important things, improve decision making and optimize internal processes.
✏️ Technology and Art Integration. New tech tools should be actively used to automate the routine, freeing up time for more important and creative tasks.
✏️ Short Experiments. Perform short experiments to prove new ideas.
✏️ Train the Vision. Pixar offers drawing courses to all employees, because drawing improves observation skills and stimulates the right-brain activity that is responsible for human creativity.
✏️ Dissection. It's very similar to an Agile retrospective. After a film release, teams collect their lessons learned, aggregate the good experience, and discuss the mistakes. The author suggests the following practice: ask everyone to list 5 things they'd do again and 5 they wouldn't.
✏️ Continuous Education. Pixar encourages constant learning across fields. Pixar University has classes in drawing, dancing, acting, and more. Employees from different departments attend classes together, establishing connections outside of work roles. One more important idea behind the practice: doing something you don't usually do at work keeps your brain healthy.

The main message from the author to the leaders: people are the most valuable part of any company🫶. Invest time and effort in your teams, help them reveal their potential, trust your colleagues, and avoid trying to control everything, delegate. Encourage openness and trust, create a safe environment within your organization. It’s hard, ongoing work, but it's the only way to build and maintain a creative culture.

I grew up on Pixar movies, and I still go to the cinema for every new release. It was really interesting to read how it's organized inside. Surprisingly, film making is very similar to software development: like any project, it has budget and resource restrictions, it starts from an MVP, then there is a set of iterations and experiments to build the product, teams have daily syncups, standard project processes, multiple "release candidates", deadlines and, of course, retrospectives. So the suggested practices and ideas are also applicable to our daily routine to improve creativity and collaboration.

#booknook #softskills #leadership
Cannot stop writing about Pixar. I accidentally opened a non-existing page. That's the best 404 page I've seen 😍.
Observability 2.0

There is a very charismatic talk by Charity Majors, co-author of "Observability Engineering", called "Is it time for Observability 2.0?". Sounds intriguing, so let's check what's inside.

Key ideas:
📍 Observability 1.0. These are the traditional observability techniques based on 3 pillars: logs, metrics, and traces. It's complex, expensive, and requires a skilled engineer to analyze correlations between different sources of data.
📍 Observability 1.0 has significant limitations:
- static dashboards
- lack of important details
- designed to answer pre-defined questions
- multiple sources of truth
- multiple systems to support
📍 The Observability 2.0 paradigm is based on the idea of using wide structured logs that contain all the necessary information. This makes it easy to aggregate data and zoom in and out for details when needed.
📍 Observability 2.0 is based on a single source of truth, logs; everything else is just visualization, aggregation, and dynamic queries. There is no data duplication, and there is no need to install and maintain a whole set of telemetry tools. In those terms, it's cheaper.

Implementation tips:
📍 Instrument the code using the principles of Canonical Logs (we already reviewed the concept here)
📍 Add Trace IDs and Span IDs to trace the request chain execution
📍 Feed the data into a columnar store to move away from predefined schemas and indexes
📍 Use a storage engine that supports high cardinality
📍 Adopt tools with explorable interfaces and dynamic dashboards
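A sketch of what such a "wide event" might look like in Python: one structured record per request, carrying trace/span IDs and arbitrarily many high-cardinality fields. The names are illustrative, not any vendor's schema.

```python
import json
import uuid

def wide_event(trace_id=None, **fields) -> dict:
    """Build one wide structured event for a single request."""
    event = {
        # IDs that let you stitch the request chain back together.
        "trace_id": trace_id or uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex[:16],
    }
    event.update(fields)  # arbitrarily many high-cardinality fields
    return event

# One line per request; a columnar store can then slice on any field.
line = json.dumps(wide_event(http_status=200, user_id="usr_123", duration_ms=9))
```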

Our systems have become too large and complicated, so it's critical to have effective observability tools and practices. The approach from the talk looks promising, especially as it doesn't require any new tools to be developed. Let's see if it becomes a new trend in observability implementation.

#engineering #observability
Shipping Threads in 5 Months

As developers, we often prefer writing new code rather than reusing the old. Of course, the new code will be better, faster, more readable and maintainable; it will use newer tools and frameworks and will definitely be great. Spoiler: no 😉

Old battle-tested code can be significantly better: it's already tested, covered by automation, has the common features in place and little learnings encoded in it, no longer contains stupid mistakes; it's mature.

That's why I find the experience shared by the Meta team really interesting: how they reused Instagram code to build the Threads app from scratch.

Initially, the team had an ambitious goal to build a service to compete with X (Twitter) in a couple of months. To achieve that, they decided to use existing Instagram code with core features like profiles, posts, recommendations, and followers as the base for the new service. Additionally, Threads was built on the existing Instagram infra with the support of existing product teams. This approach allowed them to deliver a new fully-featured service in 5 months.

Key findings during the process:
✏️ You need deep knowledge of legacy code to successfully reuse it
✏️ Code readability really matters
✏️ Repurposing and customizing existing code for new requirements brought additional technical debt that will have to be paid off in the future
✏️ Shared infrastructure and product teams can significantly reduce development costs
✏️ Old code is already tested in real conditions and contains fewer issues than new code

So don't rush to rewrite existing code. A thoughtful evolutionary approach can bring more business benefits, reducing time to market and overall development costs.

#engineering #usecase
Put Your Own Mask On First

"Put your own mask on first before assisting your child 😷"—they always say it on the plane before the flight. It sounds clear and familiar to us. But the same rule can apply to other parts of our lives. Metaphorically, of course.

What do we usually do to meet deadlines, deal with a pile of issues at work, or handle business pressures? The most common scenario is to work more and more, eventually leading to burnout. But a burned-out leader can't solve problems effectively, help their team survive the storm of difficulties, or achieve business goals.

It may sound counterintuitive, but the more pressure and problems you face at work, the more time and care you need to give yourself: proper nutrition, walks, regular physical activity, full sleep, and less overwork.

When you take care of yourself, you can better take care of your team. So put your own mask on first before assisting others.

#softskills #leadership
API Governance

In modern distributed systems, where individual teams manage different services, it's pretty common for each team to create their own APIs in their own way. Each team tries to make their APIs unique and impressive. As a result, a company may end up with a lot of APIs that follow different rules and principles and reflect the organizational structure instead of business domains (you remember Conway's Law, right?). This can become a complete mess.

To avoid this, APIs must be properly managed to stay consistent.

API Governance is the set of practices, tools and policies to enforce API quality, security, consistency and compliance. It involves creating standards and processes for every stage in the API lifecycle, from design, development, and testing to deployment, management, and retirement.

API Governance consists of the following elements:
📍 Centralization. A single point where policies are created and enforced.
📍 API Contract. Standard specifications to define APIs like OpenAPI, gRPC, GraphQL, AsyncAPI and others.
📍 Implementation Guidelines. Establish and enforce style guidelines for all APIs. Good examples are the Google Cloud API Guidelines and the Azure API Guidelines.
📍 Security Policies. Define API security standards and policies that protect sensitive data from cyber threats and ensure API compliance with regulations.
📍 Automation. Developers and other roles need to quickly make sure that APIs are compliant with the enterprise standards at various stages of the lifecycle.
📍 Versioning.
📍 Deprecation Policy.
📍 API Discovery. Provide a way to easily search for and discover existing APIs.
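As a small illustration of the automation element, here is a hypothetical Python linter that checks API paths against two made-up house rules (a versioned prefix and kebab-case segments). Real tools such as Google's API Linter are far more complete; this only shows the principle of machine-checkable guidelines.

```python
import re

KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def lint_paths(spec: dict) -> list[str]:
    """Check each path in an OpenAPI-style spec dict against house rules."""
    errors = []
    for path in spec.get("paths", {}):
        segments = [s for s in path.strip("/").split("/") if s]
        # Rule 1: paths start with a version prefix like /v1.
        if not segments or not re.fullmatch(r"v\d+", segments[0]):
            errors.append(f"{path}: missing version prefix like /v1")
        # Rule 2: remaining segments are kebab-case (path params exempt).
        for seg in segments[1:]:
            if seg.startswith("{") and seg.endswith("}"):
                continue
            if not KEBAB.match(seg):
                errors.append(f"{path}: segment '{seg}' is not kebab-case")
    return errors
```

Such a check can run in CI on every change, so drift from the guidelines is caught automatically instead of in review.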

API Governance provides the guardrails to develop high-quality, consistent APIs within the company. But to make it work, a good level of automation is required.

#engineering #api
API Governance: Versioning

Let's continue today with API management and talk about versioning. To define your versioning policy, you need to answer the following questions:
😲 What versioning method will be used?
🤔 When do you need to release a new version?
😬 What naming convention to use?
😵‍💫 How to keep compatibility with the clients?

The most popular versioning strategies:
✏️ No Versioning. Yes, that's also a choice 😀
✏️ Semantic Versioning. It's a well-known strategy for versioning anything in the software development world.
✏️ Stability Levels: alpha, beta, stable. The major version is changed on breaking changes. Examples: v1alpha, v2beta, v1alpha3, v2. More details in the Google API Design Guide.
✏️ Release Numbers: Simple sequential versions like v1, v2, v3, updated mainly for breaking changes.
✏️ Product Release Version: Use your product's version for APIs. Example: if the product version is 2024.3, then the API version is 2024.3. In that case, the version changes even if there are no major changes, but it really simplifies tracking compatibility between releases and APIs.

To reduce the impact of API changes the following compatibility strategies can be used:
✏️ Synchronized Updates: Both the API and its clients are updated and delivered together. Simple, but fragile. It can be useful if you control and manage all API consumers.
✏️ Client Supports Multiple Versions: One client can work with multiple API versions, but outdated clients may stop working and require updates to match newer APIs.
✏️ API Serves Multiple Versions: New API version is added in parallel to the existing one on the same server. In that case you may serve as many versions as you need to support all your clients. To reduce API management overhead Hub-Spoke pattern can be used: the hub represents the main version, while spoke versions are converted from the hub. This approach is actively used in Kubernetes, so you can read more details in Kubebuilder Conversion Concepts.
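The Hub-Spoke idea can be sketched in Python. The field names and converters below are made up for illustration; in Kubernetes this is implemented via conversion webhooks, as described in the Kubebuilder docs mentioned above. The point is that n versions need only 2n converters (to and from the hub) instead of n*(n-1) pairwise ones.

```python
# Hub version stores a full name; v1 splits it, v2 uses a single field.

def v1_to_hub(obj: dict) -> dict:          # spoke -> hub
    return {"full_name": f"{obj['first']} {obj['last']}"}

def hub_to_v1(hub: dict) -> dict:          # hub -> spoke
    first, _, last = hub["full_name"].partition(" ")
    return {"first": first, "last": last}

def v2_to_hub(obj: dict) -> dict:
    return {"full_name": obj["name"]}

def hub_to_v2(hub: dict) -> dict:
    return {"name": hub["full_name"]}

def convert(obj: dict, to_hub, from_hub) -> dict:
    # Any version pair converts via the hub, never directly.
    return from_hub(to_hub(obj))
```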

Analyze your requirements and architecture, set clear rules, and define the versioning and compatibility approach. It's really important to document those decisions and share them with your clients.

#engineering #api
SMURF Testing

Google introduced a new mnemonic for test quality attributes - SMURF:
📌 Speed: Unit tests are faster than other test types so they can be run more often.
📌 Maintainability: Cost of debugging and maintaining tests.
📌 Utilization: A good test suite optimizes resource utilization; fewer resources cost less to run.
📌 Reliability: Sorting out flaky tests wastes developer time and costs resources in rerunning the tests.
📌 Fidelity: High-fidelity tests come closer to approximating real operating conditions. So integration tests have more fidelity than unit tests.

In many cases, improving one quality attribute can affect the others, so be careful and measure your costs and trade-offs.

#engineering #testing
API Governance at Scale

The most difficult part of API Governance is ensuring that developers follow the provided guidelines and policies. Without proper controls, the real code will eventually drift from the guidelines—it's only a matter of time. This doesn't happen because developers are bad or unwilling to follow the rules, but because we're human, and humans make mistakes. Mistakes accumulate over time, and as a result you can end up with APIs that are far from the initial recommendations.

In small teams with a small codebase, developer education can work, and trained reviewers can ensure the code follows the rules. But as your team or organization grows, this approach isn't enough. I strongly believe that only automation can maintain policy compliance across a large codebase or multiple teams.

Google recently published API Governance at Scale, sharing their experience and tools for enforcing API guidelines.

They introduced 3 key components:
✏️ API Improvement Proposals (AIPs). This is a design document providing high-level documentation for API development. Each rule is introduced as a separate AIP that consists of a problem description and a guideline to follow (for example, AIP-126).
✏️ API Linter. This tool provides real-time checks for compliance with existing AIPs.
✏️ API Readability Program. This is an educational program to prepare and certify API design experts, who then perform code reviews for API changes.

While Google developed the AIPs concept, they encourage other companies to adopt the approach. Many of the rules are generic and easily reusable. They even provide a special guide on how to adopt AIPs. The adoption strategy isn't finished yet, but its status can be tracked via the appropriate GitHub issue.

#engineering #api
Take a Vacation

Last week I was on vacation, so there was a little break in publications 😌. Therefore, I would like to talk a little about vacations and how important they are. A high-quality vacation is not just an opportunity to relax; it is also a prevention mechanism for many serious diseases.

But it's not enough just to take vacations regularly; the way you spend them determines whether you recharge your internal battery or not.

My tips for a good vacation:

✏️ Take Enough Time: Ideally, a vacation should be at least 14 days long (as a single period). If you feel heavily exhausted, it's better to take 21 days. That time is usually enough to recharge.
✏️ Change the Scenery: Traveling to a new place (even a short trip) gives you new impressions and experiences, and fills you with new ideas, inspiration, and energy. Spending time outside your standard surroundings significantly decreases the overall strain level. The effect has also been shown by German researchers.
✏️ Digital Detox: Don't touch your laptop, don't open work chats, don't read the news, and minimize social network usage. Give your brain a rest from the constant information noise.
✏️ Be Spontaneous: Don't try to plan everything: constantly following a schedule makes a vacation feel more work-like and doesn't let you enjoy the moment. Spontaneous activities can provide more fun and satisfaction.
✏️ Do Nothing: Allow yourself to take time for idleness. That's really difficult, as it feels like wasting time that could be spent more effectively 😀. But that's the trick: a state of nothingness rewires the brain and improves creativity and problem-solving capabilities.

So take care of yourself and plan a proper rest during the year.

#softskills #productivity
Google ARM Processor

Last week, Google announced their own custom ARM-based processor for general-purpose workloads. They promised up to 65% better price-performance and up to 60% better energy-efficiency.

Why is it interesting? Until now, only AWS offered a custom cost-optimized ARM processor, AWS Graviton. Now Google has joined the competition. This shows that interest in ARM processors is still growing and will continue to grow.

From an engineering perspective, it's not possible to simply switch a workload from one architecture to another, as images need to be pre-built for a specific architecture. One way to test ARM nodes and migrate smoothly to the new architecture is by using multi-architecture images (I wrote about that here).

#engineering #news