1. How many report formats are available in Excel?
There are three PivotTable report layouts available in Excel; they are:
1. Compact Form
2. Outline Form
3. Tabular Form
2. What are sets in Tableau?
Sets are custom fields that define a subset of data based on some conditions. A set can be based on a computed condition; for example, a set may contain customers with sales over a certain threshold. Computed sets update as your data changes. Alternatively, a set can be based on specific data points in your view.
3. What is the difference between DROP and TRUNCATE commands?
The DROP command removes the entire table (both its data and its structure) from the database and cannot be rolled back, whereas the TRUNCATE command removes all rows from the table but leaves the table definition intact.
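A quick way to see the difference is to run both against a throwaway table. Below is a minimal sketch using Python's built-in sqlite3 module; note that SQLite has no TRUNCATE statement, so DELETE FROM stands in for it here (in MySQL or SQL Server you would write TRUNCATE TABLE demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE demo (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO demo VALUES (?, ?)", [(1, "a"), (2, "b")])

cur.execute("DELETE FROM demo")  # TRUNCATE-like: all rows gone, table remains
print(cur.execute("SELECT COUNT(*) FROM demo").fetchone())  # (0,) - table still queryable

cur.execute("DROP TABLE demo")   # now the table definition is gone as well
# Any further SELECT on demo raises sqlite3.OperationalError: no such table
conn.close()
```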
4. What is slicing in Python?
Ans: Slicing is used to access parts of sequences like lists, tuples, and strings. The syntax is [start:end:step]; the step can be omitted. Writing [start:end] returns all the elements of the sequence from start (inclusive) up to end-1. A negative index -i refers to the i-th element from the end of the sequence.
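For example, the rules above play out like this on a small list and string:

```python
nums = [10, 20, 30, 40, 50]

print(nums[1:4])      # [20, 30, 40] -> start inclusive, end exclusive
print(nums[:3])       # [10, 20, 30] -> start defaults to 0
print(nums[::2])      # [10, 30, 50] -> every second element
print(nums[-2:])      # [40, 50]     -> negative index counts from the end
print("python"[0:3])  # 'pyt'        -> strings slice the same way
```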
5. What is the map() and filter() function in Python?
The map() function is a higher-order function: it accepts another function and one or more iterables as parameters and applies that function to each element, producing the results as output. The filter() function takes a function and an iterable and keeps only the elements for which the function returns True. In Python 3, both return lazy iterators rather than lists.
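A minimal illustration of both (list() is used to materialize the lazy iterators):

```python
nums = [1, 2, 3, 4, 5]

squares = map(lambda x: x * x, nums)        # apply the function to every element
evens = filter(lambda x: x % 2 == 0, nums)  # keep elements where the function is True

print(list(squares))  # [1, 4, 9, 16, 25]
print(list(evens))    # [2, 4]
```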
1. What is Data Integrity?
Data integrity is the assurance of accuracy and consistency of data over its entire life-cycle, and it is a critical aspect of the design, implementation, and usage of any system that stores, processes, or retrieves data. Integrity constraints enforce business rules on the data when it is entered into an application or a database.
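As a small sketch of integrity constraints in practice, the hypothetical tables below use NOT NULL, UNIQUE, CHECK, and FOREIGN KEY rules so the database itself rejects rows that break the business rules:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled
conn.execute("""
    CREATE TABLE departments (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE
    )
""")
conn.execute("""
    CREATE TABLE employees (
        emp_id  INTEGER PRIMARY KEY,
        email   TEXT NOT NULL UNIQUE,                    -- no missing or duplicate emails
        salary  REAL CHECK (salary >= 0),                -- business rule: no negative pay
        dept_id INTEGER REFERENCES departments(dept_id)  -- must point to a real department
    )
""")

try:
    conn.execute("INSERT INTO employees VALUES (1, 'a@x.com', -100, NULL)")
except sqlite3.IntegrityError as e:
    print("Rejected:", e)  # CHECK constraint failed
```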
2. What is the Difference Between Joining and Blending in Tableau?
Data blending combines data from two or more different data sources, such as Oracle, Excel, and SQL Server; in blending, each data source keeps its own set of dimensions and measures. Data joining combines two or more tables or sheets within the same data source, so all the combined tables or sheets share a common set of dimensions and measures.
3. What is slicing in Python?
As the name suggests, ‘slicing’ means taking a part of a sequence.
Syntax for slicing is [start : stop : step]
start is the starting index from which to slice the sequence
stop is the index where the slice ends (exclusive)
step is the number of positions to jump between elements
Defaults: start is 0, stop is the length of the sequence, and step is 1.
Slicing can be done on strings, arrays, lists, and tuples.
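One detail worth adding to the list above: a negative step walks the sequence backwards, which gives the common reversal idiom:

```python
nums = list(range(10))       # [0, 1, ..., 9]

print(nums[::-1])            # [9, 8, ..., 0] -> step -1 reverses the list
print(nums[8:2:-2])          # [8, 6, 4]      -> walks backwards from index 8
print("data analyst"[::-1])  # 'tsylana atad' -> the classic string-reversal idiom
```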
4. What is the difference between NOW() and CURRENT_DATE() in SQL?
NOW() returns a constant time that indicates the time at which the statement began to execute. (Within a stored function or trigger, NOW() returns the time at which the function or triggering statement began to execute.)
The simple difference between NOW() and CURRENT_DATE() is that NOW() fetches the current date and time in ‘YYYY-MM-DD HH:MM:SS’ format, while CURRENT_DATE() fetches only the current date in ‘YYYY-MM-DD’ format.
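NOW() and CURRENT_DATE() as described here are MySQL functions. As a runnable sketch using only the Python standard library, SQLite's closest equivalents behave the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ts, d = conn.execute("SELECT CURRENT_TIMESTAMP, CURRENT_DATE").fetchone()
print(ts)  # e.g. '2024-05-01 09:30:15' -> date and time (UTC in SQLite)
print(d)   # e.g. '2024-05-01'          -> date only
```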
1. What are Query and Query language?
A query is nothing but a request sent to a database to retrieve data or information. The required data can be retrieved from a table or many tables in the database.
Query languages use various types of queries to retrieve data from databases. SQL, Datalog, and AQL are a few examples of query languages; however, SQL is the most widely used query language.
2. What are Superkey and candidate key?
A super key may be a single key or a combination of keys that helps to identify a record in a table. A super key can have one or more attributes, even though not all of those attributes are necessary to identify the records.
A candidate key is a minimal subset of a super key, and it can have one or more attributes to identify records in a table. Unlike a super key, every attribute of a candidate key must be necessary to identify the records.
3. What do you mean by buffer pool and mention its benefits?
A buffer pool in SQL Server is also known as the buffer cache. It is the region of memory where data pages read from disk are cached, so repeated reads can be served from memory rather than from disk. The size of the buffer pool can be defined during the configuration of an instance of SQL Server.
The following are the benefits of a buffer pool:
Increase in I/O performance
Reduction in I/O latency
Increase in transaction throughput
Increase in read performance
4. What is the difference between Zero and NULL values in SQL?
When a field in a column doesn’t have any value, it is said to have a NULL value. Simply put, NULL is a blank field in a table. It can be considered an unassigned, unknown, or unavailable value. On the contrary, zero is a number, and it is an available, assigned, and known value.
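A quick sketch of why the distinction matters in queries: NULL is not equal to anything (not even another NULL), so it needs IS NULL rather than =, and aggregates skip it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(0,), (None,), (5,)])

# val = 0 matches only the real zero; the NULL row is invisible to it.
print(conn.execute("SELECT COUNT(*) FROM t WHERE val = 0").fetchone())     # (1,)
# NULL requires IS NULL; even NULL = NULL evaluates to unknown, not true.
print(conn.execute("SELECT COUNT(*) FROM t WHERE val IS NULL").fetchone()) # (1,)
# Aggregates such as AVG simply skip NULLs: (0 + 5) / 2 = 2.5.
print(conn.execute("SELECT AVG(val) FROM t").fetchone())                   # (2.5,)
```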
Q1: How do you ensure data consistency and integrity in a data warehousing environment?
Ans: I implement data validation checks, use constraints like primary and foreign keys, and ensure that ETL processes have error-handling mechanisms. Regular audits and data reconciliation processes are also set up to ensure data accuracy and consistency.
Q2: Describe a situation where you had to design a star schema for a data warehousing project.
Ans: For a retail sales data warehousing project, I designed a star schema with a central fact table containing sales transactions. Surrounding this were dimension tables like Products, Stores, Time, and Customers. This structure allowed for efficient querying and reporting of sales metrics across various dimensions.
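A minimal sketch of what such a star schema could look like in SQL (the table and column names here are illustrative, not from the actual project):

```python
import sqlite3

# Hypothetical star schema: one central fact table ringed by dimension tables.
schema = """
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_store    (store_id    INTEGER PRIMARY KEY, city TEXT, region TEXT);
CREATE TABLE dim_date     (date_id     INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT, segment TEXT);

CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    product_id  INTEGER REFERENCES dim_product(product_id),
    store_id    INTEGER REFERENCES dim_store(store_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    quantity    INTEGER,
    amount      REAL    -- the additive measures live in the fact table
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)  # queries can now join fact_sales to any dimension
```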
Q3: How would you use data analytics to assess credit risk for loan applicants?
Ans: I'd analyze the applicant's financial history, including credit score, income, employment stability, and existing debts. Using predictive modeling, I'd assess the probability of default based on historical data of similar applicants. This would help in making informed lending decisions.
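As a rough sketch of the modeling step described above (entirely synthetic data and made-up feature names; assumes scikit-learn is installed):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
# Hypothetical applicant features: credit score, annual income, debt-to-income ratio.
X = np.column_stack([
    rng.normal(650, 80, n),
    rng.normal(55000, 15000, n),
    rng.uniform(0, 0.6, n),
])
# Synthetic label: higher debt ratio and lower score raise the default probability.
p = 1 / (1 + np.exp(-(3 * X[:, 2] - 0.01 * (X[:, 0] - 650) - 1)))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)

# predict_proba yields a probability of default that can feed a lending cutoff.
print(model.predict_proba(X_te[:3])[:, 1])
```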
Q4: Describe a situation where you had to ensure data security for sensitive financial data.
Ans: While working on a project involving customer transaction data, I ensured that all data was encrypted both at rest and in transit. I also implemented role-based access controls, ensuring that only authorized personnel could access specific data sets. Regular audits and penetration tests were conducted to identify and rectify potential vulnerabilities.
Roadmap to become a data analyst
1. Foundation Skills:
•Strengthen Mathematics: Focus on statistics relevant to data analysis.
•Excel Basics: Master fundamental Excel functions and formulas.
2. SQL Proficiency:
•Learn SQL Basics: Understand SELECT statements, JOINs, and filtering.
•Practice Database Queries: Work with databases to retrieve and manipulate data.
3. Excel Advanced Techniques:
•Data Cleaning in Excel: Learn to handle missing data and outliers.
•PivotTables and PivotCharts: Master these powerful tools for data summarization.
4. Data Visualization with Excel:
•Create Visualizations: Learn to build charts and graphs in Excel.
•Dashboard Creation: Understand how to design effective dashboards.
5. Power BI Introduction:
•Install and Explore Power BI: Familiarize yourself with the interface.
•Import Data: Learn to import and transform data using Power BI.
6. Power BI Data Modeling:
•Relationships: Understand and establish relationships between tables.
•DAX (Data Analysis Expressions): Learn the basics of DAX for calculations.
7. Advanced Power BI Features:
•Advanced Visualizations: Explore complex visualizations in Power BI.
•Custom Measures and Columns: Utilize DAX for customized data calculations.
8. Integration of Excel, SQL, and Power BI:
•Importing Data from SQL to Power BI: Practice connecting and importing data.
•Excel and Power BI Integration: Learn how to use Excel data in Power BI.
9. Business Intelligence Best Practices:
•Data Storytelling: Develop skills in presenting insights effectively.
•Performance Optimization: Optimize reports and dashboards for efficiency.
10. Build a Portfolio:
•Showcase Excel Projects: Highlight your data analysis skills using Excel.
•Power BI Projects: Feature Power BI dashboards and reports in your portfolio.
11. Continuous Learning and Certification:
•Stay Updated: Keep track of new features in Excel, SQL, and Power BI.
•Consider Certifications: Obtain relevant certifications to validate your skills.
Data Analyst Interview Questions
[Python, SQL, PowerBI]
1. Is indentation required in Python?
Ans: Indentation is necessary in Python. It specifies a block of code: all code within loops, classes, functions, etc. is specified within an indented block, usually four spaces deep. If your code is not indented correctly, it will not execute and will raise an IndentationError.
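For instance, the body of this loop is defined purely by its indentation:

```python
for n in [1, 2, 3]:
    square = n * n    # indented: inside the loop body
    print(n, square)  # still inside the loop
print("done")         # back at the outer level: runs once, after the loop
```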
2. What are Entities and Relationships?
Ans:
Entity: An entity can be a real-world object that can be easily identifiable. For example, in a college database, students, professors, workers, departments, and projects can be referred to as entities.
Relationships: Relations or links between entities that have something to do with each other. For example – The employee’s table in a company’s database can be associated with the salary table in the same database.
3. What are Aggregate and Scalar functions?
Ans: An aggregate function performs operations on a collection of values to return a single scalar value. Aggregate functions are often used with the GROUP BY and HAVING clauses of the SELECT statement. A scalar function returns a single value based on the input value.
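A small sketch of both kinds of function, run through Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 250.0), ("west", 80.0)])

# Aggregate functions collapse many rows into one value per group.
for row in conn.execute(
        "SELECT region, SUM(amount), AVG(amount) FROM sales "
        "GROUP BY region HAVING SUM(amount) > 50"):
    print(row)  # ('east', 350.0, 175.0) then ('west', 80.0, 80.0)

# Scalar functions return one value per input row.
print(conn.execute("SELECT UPPER(region), ROUND(amount) FROM sales").fetchall())
```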
4. What are Custom Visuals in Power BI?
Ans: Custom Visuals are like any other visualizations generated in Power BI; the difference is that they are developed with the Power BI custom visuals SDK, typically using web technologies such as JavaScript/TypeScript.
ENJOY LEARNING 👍👍
DATA ANALYST Interview Questions (0-3 yr) (SQL, Power BI)
👉 Power BI:
Q1: Explain step-by-step how you will create a sales dashboard from scratch.
Q2: Explain how you can optimize a slow Power BI report.
Q3: Explain Any 5 Chart Types and Their Uses in Representing Different Aspects of Data.
👉SQL:
Q1: Explain the difference between RANK(), DENSE_RANK(), and ROW_NUMBER() functions using an example (a sketch answer follows this list).
Q2 – Q4 use Table: employee (EmpID, ManagerID, JoinDate, Dept, Salary)
Q2: Find the nth highest salary from the Employee table.
Q3: You have an employee table with employee ID and manager ID. Find all employees under a specific manager, including their subordinates at any level.
Q4: Write a query to find the cumulative salary of employees department-wise, who have joined the company in the last 30 days.
Q5: Find the top 2 customers with the highest order amount for each product category, handling ties appropriately. Table: Customer (CustomerID, ProductCategory, OrderAmount)
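For Q1, here is a sketch of the three ranking functions side by side (simplified columns from the employee table above; Python's sqlite3 works here provided the underlying SQLite is 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (EmpID INTEGER, Dept TEXT, Salary REAL)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "IT", 900), (2, "IT", 900), (3, "IT", 700), (4, "IT", 500)])

query = """
SELECT EmpID, Salary,
       RANK()       OVER (ORDER BY Salary DESC) AS rnk,        -- 1, 1, 3, 4 (ties share a rank, gaps follow)
       DENSE_RANK() OVER (ORDER BY Salary DESC) AS dense_rnk,  -- 1, 1, 2, 3 (ties share a rank, no gaps)
       ROW_NUMBER() OVER (ORDER BY Salary DESC) AS row_num     -- 1, 2, 3, 4 (unique, tie order arbitrary)
FROM employee
"""
for row in conn.execute(query):
    print(row)
```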
👉Behavioral:
Q1: Why do you want to become a data analyst and why did you apply to this company?
Q2: Describe a time when you had to manage a difficult task with tight deadlines. How did you handle it?
I have curated the best 80+ top-notch Data Analytics Resources 👇👇
https://whatsapp.com/channel/0029VaGgzAk72WTmQFERKh02
Hope this helps you 😊
🚀 Key Skills for Aspiring Tech Specialists
📊 Data Analyst:
- Proficiency in SQL for database querying
- Advanced Excel for data manipulation
- Programming with Python or R for data analysis
- Statistical analysis to understand data trends
- Data visualization tools like Tableau or PowerBI
- Data preprocessing to clean and structure data
- Exploratory data analysis techniques
🧠 Data Scientist:
- Strong knowledge of Python and R for statistical analysis
- Machine learning for predictive modeling
- Deep understanding of mathematics and statistics
- Data wrangling to prepare data for analysis
- Big data platforms like Hadoop or Spark
- Data visualization and communication skills
- Experience with A/B testing frameworks
🏗 Data Engineer:
- Expertise in SQL and NoSQL databases
- Experience with data warehousing solutions
- ETL (Extract, Transform, Load) process knowledge
- Familiarity with big data tools (e.g., Apache Spark)
- Proficient in Python, Java, or Scala
- Knowledge of cloud services like AWS, GCP, or Azure
- Understanding of data pipeline and workflow management tools
🤖 Machine Learning Engineer:
- Proficiency in Python and libraries like scikit-learn, TensorFlow
- Solid understanding of machine learning algorithms
- Experience with neural networks and deep learning frameworks
- Ability to implement models and fine-tune their parameters
- Knowledge of software engineering best practices
- Data modeling and evaluation strategies
- Strong mathematical skills, particularly in linear algebra and calculus
🧠 Deep Learning Engineer:
- Expertise in deep learning frameworks like TensorFlow or PyTorch
- Understanding of Convolutional and Recurrent Neural Networks
- Experience with GPU computing and parallel processing
- Familiarity with computer vision and natural language processing
- Ability to handle large datasets and train complex models
- Research mindset to keep up with the latest developments in deep learning
🤯 AI Engineer:
- Solid foundation in algorithms, logic, and mathematics
- Proficiency in programming languages like Python or C++
- Experience with AI technologies including ML, neural networks, and cognitive computing
- Understanding of AI model deployment and scaling
- Knowledge of AI ethics and responsible AI practices
- Strong problem-solving and analytical skills
🔊 NLP Engineer:
- Background in linguistics and language models
- Proficiency with NLP libraries (e.g., NLTK, spaCy)
- Experience with text preprocessing and tokenization
- Understanding of sentiment analysis, text classification, and named entity recognition
- Familiarity with transformer models like BERT and GPT
- Ability to work with large text datasets and sequential data
🌟 Embrace the world of data and AI, and become the architect of tomorrow's technology!