𝗡𝗼 𝗗𝗲𝗴𝗿𝗲𝗲? 𝗡𝗼 𝗣𝗿𝗼𝗯𝗹𝗲𝗺. 𝗧𝗵𝗲𝘀𝗲 𝟰 𝗖𝗲𝗿𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗖𝗮𝗻 𝗟𝗮𝗻𝗱 𝗬𝗼𝘂 𝗮 𝗗𝗮𝘁𝗮 𝗔𝗻𝗮𝗹𝘆𝘀𝘁 𝗝𝗼𝗯😍
Dreaming of a career in data but don’t have a degree? You don’t need one. What you do need are the right skills🔗
These 4 free/affordable certifications can get you there. 💻✨
𝐋𝐢𝐧𝐤👇:-
https://pdlink.in/4ioaJ2p
Let’s get you certified and hired!✅️
Roadmap to crack product-based companies for Big Data Engineer role:
1. Master Python, Scala/Java
2. Ace Apache Spark, Hadoop ecosystem
3. Learn data storage (SQL, NoSQL), warehousing
4. Expertise in data streaming (Kafka, Flink/Storm)
5. Master workflow management (Airflow)
6. Cloud skills (AWS, Azure or GCP)
7. Data modeling, ETL/ELT processes
8. Data viz tools (Tableau, Power BI)
9. Problem-solving, communication, attention to detail
10. Projects, certifications (AWS, Azure, GCP)
11. Practice coding, system design interviews
Here, you can find Data Engineering Resources 👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C
All the best 👍👍
𝟱 𝗙𝗿𝗲𝗲 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲𝘀 𝗧𝗵𝗮𝘁’𝗹𝗹 𝗠𝗮𝗸𝗲 𝗦𝗤𝗟 𝗙𝗶𝗻𝗮𝗹𝗹𝘆 𝗖𝗹𝗶𝗰𝗸.😍
SQL seems tough, right? 😩
These 5 FREE SQL resources will take you from beginner to advanced without boring theory dumps or confusion.📊
𝐋𝐢𝐧𝐤👇:-
https://pdlink.in/3GtntaC
Master it with ease. 💡
7 Baby steps to learn Python:
1. Learn the basics: Start with the fundamentals of the Python programming language, such as data types, variables, operators, control structures, and functions.
2. Write simple programs: Practice what you have learned by writing small programs that solve basic problems, such as calculating the factorial of a number, checking whether a number is prime, or finding the sum of a sequence of numbers (see the short sketch after this list).
3. Work on small projects: Start working on small projects that interest you. These can be simple projects, such as creating a calculator, building a basic game, or automating a task. By working on small projects, you can develop your programming skills and gain confidence.
4. Learn from other people's code: Look at other people's code and try to understand how it works. You can find many open-source projects on platforms like GitHub. Analyze the code, see how it's structured, and try to figure out how the program works.
5. Read Python documentation: Python has extensive documentation, which is very helpful for beginners. Read the documentation to learn more about Python libraries, modules, and functions.
6. Participate in online communities: Participate in online communities like StackOverflow, Reddit, or Python forums. These communities have experienced programmers who can help you with your doubts and questions.
7. Keep practicing: Practice is the key to becoming a good programmer. Keep working on projects, practicing coding problems, and experimenting with different techniques. The more you practice, the better you'll get.
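For example, the practice problems from step 2 fit in a few lines of Python. Here's a quick illustrative sketch you can run and tweak:

```python
# Small practice programs from step 2: factorial, prime check, and sum of a sequence.

def factorial(n: int) -> int:
    """Return n! for a non-negative integer n."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

def sum_of_sequence(numbers) -> int:
    """Return the sum of a sequence of numbers."""
    total = 0
    for x in numbers:
        total += x
    return total

if __name__ == "__main__":
    print(factorial(5))                    # 120
    print(is_prime(29))                    # True
    print(sum_of_sequence(range(1, 11)))   # 55
```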
Best Resource to learn Python
Freecodecamp Python ML Course with FREE Certificate
Python for Data Analysis
Python course for beginners by Microsoft
Scientific Computing with Python
Python course by Google
Python Free Resources
Please give us credits while sharing: -> https://news.1rj.ru/str/free4unow_backup
ENJOY LEARNING 👍👍
Data engineering interviews will be 10x easier if you learn these tools in sequence👇
➤ 𝗣𝗿𝗲-𝗿𝗲𝗾𝘂𝗶𝘀𝗶𝘁𝗲𝘀
- SQL is very important
- Learn Python fundamentals
- Pandas and NumPy libraries in Python
➤ 𝗢𝗻-𝗣𝗿𝗲𝗺 𝘁𝗼𝗼𝗹𝘀
- Learn PySpark in depth (processing tool; a starter sketch follows this list)
- Hadoop (distributed storage)
- Hive (data warehouse)
- HBase (NoSQL database)
- Airflow (orchestration)
- Kafka (streaming platform)
- CI/CD for production readiness
➤ 𝗖𝗹𝗼𝘂𝗱 (𝗔𝗻𝘆 𝗼𝗻𝗲)
- AWS
- Azure
- GCP
➤ Do a couple of projects to get a good feel of it.
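To get a first feel for PySpark before going deep, here's a minimal batch-job sketch. The input path and column names (city, amount) are made up for the example:

```python
# Minimal PySpark batch job: read a CSV, filter, aggregate, and write Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("starter-job").getOrCreate()

orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3a://my-bucket/raw/orders.csv")   # hypothetical location
)

# Transformation: filter and aggregate (lazy until an action or write is triggered)
revenue_per_city = (
    orders.filter(F.col("amount") > 0)
          .groupBy("city")
          .agg(F.sum("amount").alias("total_revenue"))
)

# Write the result as Parquet (hypothetical output path)
revenue_per_city.write.mode("overwrite").parquet("s3a://my-bucket/curated/revenue_per_city/")

spark.stop()
```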
Here, you can find Data Engineering Resources 👇
https://topmate.io/analyst/910180
All the best 👍👍
𝗪𝗮𝗻𝘁 𝘁𝗼 𝗟𝗲𝗮𝗿𝗻 𝗜𝗻-𝗗𝗲𝗺𝗮𝗻𝗱 𝗧𝗲𝗰𝗵 𝗦𝗸𝗶𝗹𝗹𝘀 — 𝗳𝗼𝗿 𝗙𝗥𝗘𝗘 — 𝗗𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗳𝗿𝗼𝗺 𝗚𝗼𝗼𝗴𝗹𝗲?😍
Whether you’re a student, job seeker, or just hungry to upskill — these 5 beginner-friendly courses are your golden ticket. 🎟️
Just career-boosting knowledge and certificates that make your resume pop📄
𝐋𝐢𝐧𝐤👇:-
https://pdlink.in/42vL6br
All The Best 🎊
Top Data Engineering Interview Questions
𝗔𝗽𝗮𝗰𝗵𝗲 𝗦𝗽𝗮𝗿𝗸
- What is the difference between transformations and actions in Spark, and can you provide an example?
- How can data partitioning be optimized for performance in Spark?
- What is the difference between cache() and persist() in Spark, and when would you use each?
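For the transformations-vs-actions and cache() vs persist() questions above, here's a quick illustrative PySpark sketch:

```python
# Transformations (filter, withColumn) are lazy; actions (count, collect) trigger execution.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark import StorageLevel

spark = SparkSession.builder.appName("spark-basics").getOrCreate()

df = spark.range(1_000_000).withColumn("even", (F.col("id") % 2) == 0)

evens = df.filter(F.col("even"))   # transformation: lazy, nothing executes yet
evens.cache()                      # for DataFrames, cache() == persist(MEMORY_AND_DISK)
print(evens.count())               # action: runs the job and materializes the cache
print(evens.count())               # second action is served from the cache

evens.unpersist()
evens.persist(StorageLevel.DISK_ONLY)   # persist() lets you pick the storage level explicitly
print(evens.count())
spark.stop()
```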
𝗔𝗽𝗮𝗰𝗵𝗲 𝗞𝗮𝗳𝗸𝗮
- How does Kafka partitioning enable scalability and load balancing?
- How does Kafka’s replication mechanism provide durability and fault tolerance?
- How would you manage Kafka consumer rebalancing to minimize data loss?
𝗔𝗽𝗮𝗰𝗵𝗲 𝗔𝗶𝗿𝗳𝗹𝗼𝘄
- What are dynamic DAGs in Airflow, and what benefits do they offer?
- What are Airflow pools, and how do they help control task concurrency?
- How do you implement time-based and event-based triggers for DAGs in Airflow?
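For the time-based trigger question, a minimal Airflow 2.x DAG sketch; the DAG id, task names, and schedule are just examples, and event-based runs typically use sensors or an external trigger:

```python
# Minimal Airflow 2.x DAG with a daily time-based schedule.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source")

def load():
    print("write data to the warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # time-based trigger; sensors or API calls handle event-based runs
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```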
𝗗𝗮𝘁𝗮 𝗪𝗮𝗿𝗲𝗵𝗼𝘂𝘀𝗶𝗻𝗴
- How would you design a data warehouse schema for an e-commerce platform?
- What is the difference between OLAP and OLTP, and how do they complement each other?
- What are materialized views, and how do they improve query performance?
𝗖𝗜/𝗖𝗗
- How do you integrate automated testing into a CI/CD pipeline for ETL jobs?
- How do you manage environment-specific configurations in a CI/CD pipeline?
- How is version control managed for database schemas and ETL scripts in a CI/CD pipeline?
𝗦𝗤𝗟
- How do you write a query to fetch the top 5 highest salaries in each department?
- What’s the difference between the HAVING and WHERE clauses in SQL?
- How do you handle NULL values in SQL, and how do they affect aggregate functions?
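For the top-5-salaries question, one common approach is a window function. Here it is as a Spark SQL query run from Python; the employees table and its columns are assumed for the example:

```python
# Top 5 highest salaries per department using DENSE_RANK().
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-practice").getOrCreate()

top5 = spark.sql("""
    SELECT department, employee_name, salary
    FROM (
        SELECT department,
               employee_name,
               salary,
               DENSE_RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS rnk
        FROM employees
    ) ranked
    WHERE rnk <= 5
""")
top5.show()
```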
𝗣𝘆𝘁𝗵𝗼𝗻
- How do you handle large datasets in Python, and which libraries would you use for performance?
- What are context managers in Python, and how do they help with resource management?
- How do you manage and log errors in Python-based ETL pipelines?
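For the context-manager question, a tiny sketch of a custom timer context manager built with contextlib:

```python
# A context manager guarantees setup/cleanup runs even if the block raises.
import time
from contextlib import contextmanager

@contextmanager
def timed(step_name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        # runs even on error, like closing a file or a DB connection would
        print(f"{step_name} took {time.perf_counter() - start:.2f}s")

with timed("example step"):
    total = sum(i * i for i in range(1_000_000))
```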
𝗔𝘇𝘂𝗿𝗲 𝗗𝗮𝘁𝗮𝗯𝗿𝗶𝗰𝗸𝘀
- How would you optimize a Databricks job using Spark SQL on large datasets?
- What is Delta Lake in Databricks, and how does it ensure data consistency?
- How do you manage and secure access to Databricks clusters for multiple users?
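For the Delta Lake question, a minimal write/read sketch. It assumes a Spark session already configured for Delta (for example a Databricks cluster, or delta-spark installed locally), and the table path is just an example:

```python
# Delta Lake sketch: transactional (ACID) writes and consistent reads on files.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

events = spark.createDataFrame(
    [(1, "click"), (2, "view")], ["user_id", "event_type"]
)

# Writes are transactional: readers never see a half-written version.
events.write.format("delta").mode("append").save("/tmp/delta/events")

# Reads always see a consistent snapshot of the table.
spark.read.format("delta").load("/tmp/delta/events").show()
```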
𝗔𝘇𝘂𝗿𝗲 𝗗𝗮𝘁𝗮 𝗙𝗮𝗰𝘁𝗼𝗿𝘆
- What are linked services in Azure Data Factory, and how do they facilitate data integration?
- How do you use mapping data flows in Azure Data Factory to transform and filter data?
- How do you monitor and troubleshoot failures in Azure Data Factory pipelines?
Forwarded from Artificial Intelligence
𝗧𝗖𝗦 𝗙𝗥𝗘𝗘 𝗗𝗮𝘁𝗮 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀 𝗖𝗲𝗿𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗖𝗼𝘂𝗿𝘀𝗲𝘀😍
Want to kickstart your career in Data Analytics but don’t know where to begin?👨💻
TCS has your back with a completely FREE course designed just for beginners✅
𝐋𝐢𝐧𝐤👇:-
https://pdlink.in/4jNMoEg
Just pure, job-ready learning📍
⌨️ MongoDB Cheat Sheet
This Post includes a MongoDB cheat sheet to make it easy for our followers to work with MongoDB.
Working with databases
Working with collections
Working with Documents
Querying data from documents
Modifying data in documents
Searching
MongoDB is a flexible, document-oriented, NoSQL database program that can scale to any enterprise volume without compromising search performance.
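To go with the cheat sheet, here's a tiny pymongo session covering the same operations; the connection string, database, and collection names are just examples:

```python
# Basic MongoDB operations with pymongo: insert, query, update, and a simple search.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["shop"]                                # working with databases
users = db["users"]                                # working with collections

# Working with documents
users.insert_one({"name": "Asha", "city": "Pune", "age": 29})

# Querying data from documents
for doc in users.find({"age": {"$gte": 18}}, {"_id": 0, "name": 1, "city": 1}):
    print(doc)

# Modifying data in documents
users.update_one({"name": "Asha"}, {"$set": {"city": "Mumbai"}})

# Searching with a regex filter (a text index would enable full-text search)
print(users.count_documents({"city": {"$regex": "^Mum"}}))
```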
Forwarded from Artificial Intelligence
𝟲 𝗕𝗲𝘀𝘁 𝗬𝗼𝘂𝗧𝘂𝗯𝗲 𝗖𝗵𝗮𝗻𝗻𝗲𝗹𝘀 𝘁𝗼 𝗠𝗮𝘀𝘁𝗲𝗿 𝗣𝗼𝘄𝗲𝗿 𝗕𝗜😍
Power BI Isn’t Just a Tool—It’s a Career Game-Changer🚀
Whether you’re a student, a working professional, or switching careers, learning Power BI can set you apart in the competitive world of data analytics📊
𝐋𝐢𝐧𝐤👇:-
https://pdlink.in/3ELirpu
Your Analytics Journey Starts Now✅️
𝗘𝗻𝗱-𝘁𝗼-𝗘𝗻𝗱 𝗗𝗮𝘁𝗮 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗣𝗿𝗼𝗷𝗲𝗰𝘁 𝗙𝗹𝗼𝘄
From real-time streaming to batch processing, data lakes to warehouses, and ETL to BI, this flow covers it all!
Simple Example:
◾ The project starts with data ingestion using APIs and batch processes to collect raw data.
◾ Apache Kafka enables real-time streaming, while ETL pipelines process and transform the data efficiently.
◾ Apache Airflow orchestrates workflows, ensuring seamless scheduling and automation.
◾ The processed data is stored in a Delta Lake with ACID transactions, maintaining reliability and governance.
◾ For analytics, the data is structured in a Data Warehouse (Snowflake, Redshift, or BigQuery) using optimized star schema modeling.
◾ SQL indexing and Parquet compression enhance performance.
◾ Apache Spark enables high-speed parallel computing for advanced transformations.
◾ BI tools provide insights, while DataOps with CI/CD automates deployments.
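To make the ingestion and streaming steps above concrete, here's a minimal Structured Streaming sketch that reads a Kafka topic and lands it as Parquet. The broker, topic, and paths are made up, and the Spark-Kafka connector package must be available on the cluster:

```python
# Streaming ingestion sketch: Kafka -> Spark Structured Streaming -> Parquet files.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; cast to string before downstream parsing.
events = raw.select(
    F.col("value").cast("string").alias("json_payload"),
    F.col("timestamp"),
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://my-bucket/bronze/orders/")
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```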
𝗟𝗲𝘁'𝘀 𝗹𝗲𝗮𝗿𝗻 𝗺𝗼𝗿𝗲 𝗮𝗯𝗼𝘂𝘁 𝗗𝗮𝘁𝗮 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴:
- ETL + Data Pipelines = Data Flow Automation
- SQL + Indexing = Query Optimization
- Apache Airflow + DAGs = Workflow Orchestration
- Apache Kafka + Streaming = Real-Time Data
- Snowflake + Data Sharing = Cross-Platform Analytics
- Delta Lake + ACID Transactions = Reliable Data Storage
- Data Lake + Data Governance = Managed Data Assets
- Data Warehouse + BI Tools = Business Insights
- Apache Spark + Parallel Processing = High-Speed Computing
- Parquet + Compression = Optimized Storage
- Redshift + Spectrum = Querying External Data
- BigQuery + Serverless SQL = Scalable Analytics
- Data Engineering + Python = Automation & Scripting
- Batch Processing + Scheduling = Scalable Data Workflows
- DataOps + CI/CD = Automated Deployments
- Data Modeling + Star Schema = Optimized Analytics
- Metadata Management + Data Catalogs = Data Discovery
- Data Ingestion + API Calls = Seamless Data Flow
- Graph Databases + Neo4j = Relationship Analytics
- Data Masking + Privacy Compliance = Secure Data
Join our WhatsApp channel for more data engineering resources
👇👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C
Forwarded from Data Analysis Books | Python | SQL | Excel | Artificial Intelligence | Power BI | Tableau | AI Resources
𝟱 𝗙𝗥𝗘𝗘 𝗜𝗕𝗠 𝗖𝗲𝗿𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗖𝗼𝘂𝗿𝘀𝗲𝘀 𝘁𝗼 𝗦𝗸𝘆𝗿𝗼𝗰𝗸𝗲𝘁 𝗬𝗼𝘂𝗿 𝗥𝗲𝘀𝘂𝗺𝗲😍
From mastering Cloud Computing to diving into Deep Learning, Docker, Big Data, IoT, and Blockchain.
IBM, one of the biggest tech companies, is offering 5 FREE courses that can seriously upgrade your resume and skills — without costing you anything.
𝗟𝗶𝗻𝗸:-👇
https://pdlink.in/44GsWoC
Enroll For FREE & Get Certified ✅
Let's say you have 5 TB of data stored in your Amazon S3 bucket, consisting of 500 million records and 100 columns.
Now suppose there are 100 cities, you want the data for one particular city, and you need only 10 of the columns.
Assuming each city has an equal number of records, we want about 1% of the data in terms of rows
and 10% in terms of columns.
That's roughly 0.1% of the actual data, which comes to about 5 GB.
Now let's look at the pricing if you are using a serverless query engine like AWS Athena.
- In the worst case, the data sits in CSV format (row-based) with no compression. You end up scanning the entire 5 TB and pay $25 for this one query (Athena charges $5 per TB of data scanned).
Now let's try to improve it:
- Use a columnar file format like Parquet with Snappy compression, which takes less space, so your 5 TB might shrink to roughly 2 TB (in practice it will be even less).
- Partition the data by city, so there is one folder per city.
This way you have 2 TB of data sitting across 100 folders, but you have to scan just one folder, which is about 20 GB.
On top of that, you need only 10 columns out of 100, so you scan roughly 10% of that 20 GB (since we are using a columnar file format).
That comes out to just 2 GB.
So how much do we pay?
Just about $0.01, which is 2,500 times less than what you paid earlier.
This is how you save on cost.
What did we do?
- Used columnar file formats for column pruning
- Used partitioning for row pruning
- Used efficient compression techniques
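Here's a minimal PySpark sketch of that optimized layout; the bucket paths and column names are just examples:

```python
# Convert row-based CSV to Snappy-compressed Parquet, partitioned by city,
# so engines like Athena scan only one city folder and only the needed columns.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("optimize-layout").getOrCreate()

raw = spark.read.option("header", True).csv("s3a://my-bucket/raw/events/")

(
    raw.write
    .mode("overwrite")
    .option("compression", "snappy")   # Snappy is also the Parquet default
    .partitionBy("city")               # one folder per city => row pruning
    .parquet("s3a://my-bucket/curated/events/")
)

# Reading back one partition and a few columns scans a tiny fraction of the data:
one_city = spark.read.parquet("s3a://my-bucket/curated/events/").filter("city = 'Mumbai'")
one_city.select("order_id", "amount").show()   # columnar format => column pruning
```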
Join our WhatsApp channel for more data engineering resources
👇👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C