Data Analytics Interview Questions
Q1: Describe a situation where you had to clean a messy dataset. What steps did you take?
Ans: I encountered a dataset with missing values, duplicates, and inconsistent formats. I used Python's Pandas library to identify and handle missing values, standardized data formats using regular expressions, and removed duplicates. I also validated the cleaned data against known benchmarks to ensure accuracy.
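A minimal Pandas sketch of that workflow; the column names and data here are hypothetical, invented for illustration:

import pandas as pd

# Hypothetical messy bookings data
df = pd.DataFrame({
    "booking_id": [1, 2, 2, None, 4],
    "price":      [120.0, None, 95.0, 80.0, 110.0],
    "phone":      ["(555) 123-4567", "555.987.6543", "555.987.6543", "5551112222", "555 444 3333"],
})

# 1. Missing values: drop rows missing the key, impute the rest
df = df.dropna(subset=["booking_id"])
df["price"] = df["price"].fillna(df["price"].median())

# 2. Standardize formats with a regular expression (digits only)
df["phone"] = df["phone"].str.replace(r"\D", "", regex=True)

# 3. Remove duplicates
df = df.drop_duplicates(subset=["booking_id"], keep="first")

# 4. Validate against a known benchmark, e.g. an expected row-count range
assert 1 <= len(df) <= 5, "row count outside expected benchmark"
print(df)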
Q2: How do you handle outliers in a dataset?
Ans: I start by visualizing the data using box plots or scatter plots to identify potential outliers. Then, depending on the nature of the data and the problem context, I might cap the outliers, transform the data, or even remove them if they're due to errors.
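A hedged sketch of the capping (winsorizing) option using the 1.5*IQR rule on toy data:

import pandas as pd

df = pd.DataFrame({"price": [80, 95, 100, 110, 120, 950]})  # toy data; 950 is a likely outlier

q1, q3 = df["price"].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Cap (winsorize) instead of dropping, so the row count is preserved
df["price_capped"] = df["price"].clip(lower=lower, upper=upper)
print(df)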
Q3: How would you use data to suggest optimal pricing strategies to Airbnb hosts?
Ans: I'd analyze factors like location, property type, amenities, local events, and historical booking rates. Using regression analysis, I'd model the relationship between these factors and pricing to suggest an optimal price range. Additionally, analyzing competitor pricing in the area can provide insights into market rates.
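A toy sketch of such a regression with scikit-learn; the listing features and values are invented for illustration:

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical listing data
listings = pd.DataFrame({
    "bedrooms":  [1, 2, 3, 2, 1, 4],
    "dist_km":   [5.0, 2.0, 1.0, 3.5, 8.0, 0.5],   # distance to city centre
    "amenities": [3, 6, 8, 5, 2, 10],
    "price":     [60, 110, 180, 120, 45, 260],
})

model = LinearRegression().fit(listings[["bedrooms", "dist_km", "amenities"]], listings["price"])

# Suggest a price for a new listing
new_listing = pd.DataFrame({"bedrooms": [2], "dist_km": [1.5], "amenities": [7]})
print(f"suggested nightly price: ~{model.predict(new_listing)[0]:.0f}")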
Q4: Describe a situation where you used data to improve the user experience on the Airbnb platform.
Ans: While analyzing user feedback and platform interaction data, I noticed that users often had difficulty navigating the booking process. Based on this, I suggested streamlining the booking steps and providing clearer instructions. A/B testing confirmed that these changes led to a higher conversion rate and improved user feedback.
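One way to check an A/B result like this is a two-proportion z-test, for example with statsmodels; the counts below are made up:

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions out of visitors per variant
conversions = [620, 540]      # [new flow, old flow]
visitors    = [10000, 10000]

stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the conversion-rate difference is unlikely to be chance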
Q1: How would you handle real-time data streaming for analyzing user listening patterns?
Ans: I'd use platforms like Apache Kafka for real-time data ingestion. Using Python, I'd process this stream to identify real-time patterns and store aggregated data for further analysis.
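A minimal consumer sketch with the kafka-python client; the topic name, broker address, and event shape are all assumptions:

import json
from collections import Counter
from kafka import KafkaConsumer  # kafka-python client

consumer = KafkaConsumer(
    "listening-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],    # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

plays_per_genre = Counter()
for message in consumer:                     # blocks, reading the stream
    event = message.value                    # e.g. {"user": 1, "genre": "jazz"}
    plays_per_genre[event["genre"]] += 1     # running real-time aggregate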
Q2: Describe a situation where you had to use time series analysis to forecast a trend.
Ans: I analyzed monthly active users to forecast future growth. Using Python's statsmodels, I applied ARIMA modeling to the time series data and provided a forecast for the next six months.
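A small statsmodels sketch on synthetic monthly data; the (1, 1, 1) order is for illustration only and would normally be chosen from diagnostics:

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly active users (24 months)
mau = pd.Series(
    [100 + 3 * i + (i % 12) for i in range(24)],
    index=pd.date_range("2022-01-01", periods=24, freq="MS"),
)

model = ARIMA(mau, order=(1, 1, 1)).fit()   # order chosen for illustration
forecast = model.forecast(steps=6)          # next six months
print(forecast)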
Q3: How would you segment and analyze user behavior based on their music preferences?
Ans: I'd cluster users based on their listening history using unsupervised machine learning techniques like K-means clustering. This would help in creating personalized playlists or recommendations.
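A toy K-means sketch with scikit-learn; the listening-hour features are invented:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features: hours listened per genre [rock, jazz, pop]
listening = np.array([
    [10, 0, 2],
    [9, 1, 3],
    [0, 8, 1],
    [1, 9, 0],
    [2, 1, 12],
    [3, 0, 11],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(listening)
print(kmeans.labels_)   # segment assignment per user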
Q4: How do you handle missing or incomplete data in user listening logs?
Ans: I'd use imputation methods based on the nature of the missing data. For instance, if a user's listening time is missing, I might impute it based on their average listening time or use collaborative filtering methods to estimate it based on similar users.
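A small Pandas sketch of the per-user mean imputation on toy logs:

import pandas as pd

logs = pd.DataFrame({
    "user":    ["a", "a", "a", "b", "b"],
    "minutes": [30, None, 45, 20, None],   # missing listening times
})

# Impute each gap with that user's own average listening time
logs["minutes"] = logs["minutes"].fillna(
    logs.groupby("user")["minutes"].transform("mean")
)
print(logs)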
1. What is a Self-Join?
A self-join is a join in which a table is joined with itself, making it a unary relationship. Each row of the table is paired with other rows of the same table according to the join condition. As a result, a self-join is mostly used to combine and compare rows from the same database table, as in the sketch below.
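A runnable sketch using Python's built-in sqlite3 module; the employees table and names are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES (1, 'Asha', NULL), (2, 'Ben', 1), (3, 'Cara', 1);
""")

# Self-join: the same table aliased twice to pair each employee with their manager
rows = conn.execute("""
    SELECT e.name AS employee, m.name AS manager
    FROM employees e
    JOIN employees m ON e.manager_id = m.id
""").fetchall()
print(rows)   # [('Ben', 'Asha'), ('Cara', 'Asha')]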
2. What is OLTP?
OLTP, or online transaction processing, enables large numbers of users to execute high volumes of database transactions in real time, usually over the internet. A database transaction occurs when data in a database is inserted, updated, deleted, or queried.
3. What is the difference between joining and blending in Tableau?
The term joining is used when you are combining data from the same source, for example worksheets in an Excel file or tables in an Oracle database, while blending requires two separately defined data sources in your report.
4. How to prevent someone from copying the cell from your worksheet in excel?
If you want to protect your worksheet from being copied, go to the Review tab > Protect Sheet and set a password.
By entering a password you can prevent your worksheet from being copied.
5. What are the different integrity rules present in the DBMS?
The different integrity rules present in DBMS are as follows:
Entity Integrity: This rule states that the value of the primary key can never be NULL, so every tuple must have a value in the column(s) identified as the primary key.
Referential Integrity: This rule states that the value of a foreign key is either NULL or matches a primary key value in the relation it references.
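A small SQLite demonstration of both rules; the table names and data are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT);   -- entity integrity
    CREATE TABLE employees (
        emp_id  INTEGER PRIMARY KEY,
        dept_id INTEGER REFERENCES departments(dept_id)                  -- referential integrity
    );
    INSERT INTO departments VALUES (1, 'Analytics');
""")

conn.execute("INSERT INTO employees VALUES (1, 1)")      # OK: FK matches a department
conn.execute("INSERT INTO employees VALUES (2, NULL)")   # OK: the FK may be NULL
try:
    conn.execute("INSERT INTO employees VALUES (3, 99)") # fails: no department 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)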
Q1: How would you analyze data to understand user connection patterns on a professional network?
Ans: I'd use graph databases like Neo4j for social network analysis. By analyzing connection patterns, I can identify influencers or isolated communities.
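As a lightweight stand-in for a graph database, here is a toy sketch of the same two analyses using NetworkX instead of Neo4j; the connection graph is invented:

import networkx as nx

# Toy connection graph (NetworkX stands in here for a graph database like Neo4j)
G = nx.Graph()
G.add_edges_from([("ana", "ben"), ("ana", "cara"), ("ana", "dev"),
                  ("ben", "cara"), ("eli", "fay")])

# Influencers: members with the highest degree centrality
centrality = nx.degree_centrality(G)
print(max(centrality, key=centrality.get))          # 'ana'

# Isolated communities: disconnected components of the graph
print([sorted(c) for c in nx.connected_components(G)])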
Q2: Describe a challenging data visualization you created to represent user engagement metrics.
Ans: I visualized multi-dimensional data showing user engagement across features, regions, and time using tools like D3.js, creating an interactive dashboard with drill-down capabilities.
Q3: How would you identify and target passive job seekers on LinkedIn?
Ans: I'd analyze user behavior patterns, like increased profile updates, frequent visits to job postings, or engagement with career-related content, to identify potential passive job seekers.
Q4: How do you measure the effectiveness of a new feature launched on LinkedIn?
Ans: I'd set up A/B tests, comparing user engagement metrics between those who have access to the new feature and a control group. I'd then analyze metrics like time spent, feature usage frequency, and overall platform engagement to measure effectiveness.
33 companies that are CURRENTLY HIRING for 100% REMOTE JOBS
👇👇
https://www.linkedin.com/posts/sql-analysts_jobboard-remotehiring-remoteworking-activity-7141483435960832000-2k4s?utm_source=share&utm_medium=member_android
Like this LinkedIn post and bookmark it for your future reference
Hey everyone!
May I request you all to FOLLOW our Data Analytics page
Here's the exclusive link
🔗 https://www.linkedin.com/company/sql-analysts/
This is the official LinkedIn page for free courses & updates!
Including our giveaways, sessions & much more!
Most Awaited Giveaway for Free Python Courses with Projects
👇👇
https://www.linkedin.com/posts/sql-analysts_python-viral-pythonprogramming-activity-7141826568485605376-tf7V?utm_source=share&utm_medium=member_android
Complete Syllabus for Data Analytics interview:
SQL:
1. Basic
- SELECT statements with WHERE, ORDER BY, GROUP BY, HAVING
- Basic JOINS (INNER, LEFT, RIGHT, FULL)
- Creating and using simple databases and tables
2. Intermediate
- Aggregate functions (COUNT, SUM, AVG, MAX, MIN)
- Subqueries and nested queries
- Common Table Expressions (WITH clause)
- CASE statements for conditional logic in queries
3. Advanced
- Advanced JOIN techniques (self-join, non-equi join)
- Window functions (OVER, PARTITION BY, ROW_NUMBER, RANK, DENSE_RANK, LEAD, LAG)
- Query optimization with indexing
- Data manipulation (INSERT, UPDATE, DELETE)
Python:
1. Basic
- Syntax, variables, data types (integers, floats, strings, booleans)
- Control structures (if-else, for and while loops)
- Basic data structures (lists, dictionaries, sets, tuples)
- Functions, lambda functions, error handling (try-except)
- Modules and packages
2. Pandas & Numpy
- Creating and manipulating DataFrames and Series
- Indexing, selecting, and filtering data
- Handling missing data (fillna, dropna)
- Data aggregation with groupby, summarizing data
- Merging, joining, and concatenating datasets
3. Basic Visualization
- Basic plotting with Matplotlib (line plots, bar plots, histograms)
- Visualization with Seaborn (scatter plots, box plots, pair plots)
- Customizing plots (sizes, labels, legends, color palettes)
- Introduction to interactive visualizations (e.g., Plotly)
Excel:
1. Basic
- Cell operations, basic formulas (SUMIFS, COUNTIFS, AVERAGEIFS, IF, AND, OR, NOT & Nested Functions etc.)
- Introduction to charts and basic data visualization
- Data sorting and filtering
- Conditional formatting
2. Intermediate
- Advanced formulas (V/XLOOKUP, INDEX-MATCH, nested IF)
- PivotTables and PivotCharts for summarizing data
- Data validation tools
- What-if analysis tools (Data Tables, Goal Seek)
3. Advanced
- Array formulas and advanced functions
- Data Model & Power Pivot
- Advanced Filter
- Slicers and Timelines in Pivot Tables
- Dynamic charts and interactive dashboards
Power BI:
1. Data Modeling
- Importing data from various sources
- Creating and managing relationships between different datasets
- Data modeling basics (star schema, snowflake schema)
2. Data Transformation
- Using Power Query for data cleaning and transformation
- Advanced data shaping techniques
- Calculated columns and measures using DAX
3. Data Visualization and Reporting
- Creating interactive reports and dashboards
- Visualizations (bar, line, pie charts, maps)
- Publishing and sharing reports, scheduling data refreshes
Statistics Fundamentals:
Mean, Median, Mode, Standard Deviation, Variance, Probability Distributions, Hypothesis Testing, P-values, Confidence Intervals, Correlation, Simple Linear Regression, Normal Distribution, Binomial Distribution, Poisson Distribution.
1. What do Tableau's sets and groups mean?
Sets and groups both organize data according to predefined criteria. The primary distinction between the two is that a set has only two options, in or out, while a group can divide the dataset into several groups. The user should decide whether to apply groups or sets based on the conditions.
2. What do you mean by a Bag of Words (BOW)?
It is a text representation that describes the frequency with which words appear in a document, typically used to turn word occurrences into features for training a classifier.
It involves two steps:
- A vocabulary of known terms.
- A measure of the presence of those known terms in each document.
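A minimal sketch with scikit-learn's CountVectorizer, which implements both steps; the documents are toy examples:

from sklearn.feature_extraction.text import CountVectorizer

docs = ["data drives decisions", "data cleaning precedes analysis"]

vectorizer = CountVectorizer()          # step 1: build the known-word vocabulary
bow = vectorizer.fit_transform(docs)    # step 2: count each word per document

print(vectorizer.get_feature_names_out())
print(bow.toarray())                    # one row per document, one column per word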
3. What are Nested Triggers?
Triggers can run DML statements (INSERT, UPDATE, and DELETE). When the DML inside one trigger causes another trigger to fire, the triggers are called nested triggers.
4. What is a True positive rate and a false positive rate?
True positive rate, or recall: the percentage of actual positives that the model correctly captures.
TPR = TP / (TP + FN)
False positive rate: the percentage of actual negatives that the model incorrectly predicts as positive.
FPR = FP / (FP + TN)
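A toy computation of both rates from the confusion-matrix counts with NumPy; the labels and predictions are invented:

import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # toy actual labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])   # toy model predictions

tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
tn = np.sum((y_pred == 0) & (y_true == 0))

print("TPR =", tp / (tp + fn))   # 3/4 = 0.75
print("FPR =", fp / (fp + tn))   # 1/4 = 0.25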
JP Morgan is hiring a Data Analyst
https://www.linkedin.com/posts/sql-analysts_dataanalytics-job-dataanalyst-activity-7144012106911186944-R01l?utm_source=share&utm_medium=member_android
Data Analytics Pattern Identification
Trend Analysis: Examining data over time to identify upward or downward trends.
Seasonal Patterns: Identifying recurring patterns or trends based on seasons or specific time periods.
Correlation: Understanding relationships between variables and how changes in one may affect another.
Outlier Detection: Identifying data points that deviate significantly from the overall pattern.
Clustering: Grouping similar data points together to find natural patterns within the data.
Classification: Categorizing data into predefined classes or groups based on certain features.
Regression Analysis: Predicting a dependent variable based on the values of independent variables.
Frequency Distribution: Analyzing the distribution of values within a dataset.
Pattern Recognition: Identifying recurring structures or shapes within the data.
Text Analysis: Extracting insights from unstructured text data through techniques like sentiment analysis or topic modeling.
These patterns help organizations make informed decisions, optimize processes, and gain a deeper understanding of their data.
If you have ever given an SQL interview, some of the questions were definitely from the list below:
1- How to find duplicates in a table
2- How to delete duplicates from a table
3- Difference between union and union all
4- Difference between rank, row_number and dense_rank
5- Find records in a table which are not present in another table
6- Find second highest salary employees in each department
7- Find employees with salary more than their manager's salary
8- Difference between inner and left join
9- Update a table and swap gender values.
If not exact, at least the flavor of these questions is always asked in interviews irrespective of your experience level; a runnable sketch of the first two appears below.
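A minimal, hedged version using Python's built-in sqlite3; the emails table and data are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emails (id INTEGER PRIMARY KEY, email TEXT);
    INSERT INTO emails (email) VALUES
        ('a@x.com'), ('b@x.com'), ('a@x.com'), ('a@x.com');
""")

# Q1: find duplicates
dupes = conn.execute("""
    SELECT email, COUNT(*) FROM emails
    GROUP BY email HAVING COUNT(*) > 1
""").fetchall()
print(dupes)   # [('a@x.com', 3)]

# Q2: delete duplicates, keeping the lowest id per email
conn.execute("""
    DELETE FROM emails
    WHERE id NOT IN (SELECT MIN(id) FROM emails GROUP BY email)
""")
print(conn.execute("SELECT * FROM emails").fetchall())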
Tech Community & Referrals Network
-> https://news.1rj.ru/str/addlist/SGkp16pI1XQ2YmEx
All the best 👍👍
Top 8 Excel interview questions for data analysts 👇👇
1. Advanced Formulas:
- Can you explain the difference between VLOOKUP and INDEX-MATCH functions? When would you prefer one over the other?
- How would you use the SUMIFS function to analyze data with multiple criteria?
2. Data Cleaning and Manipulation:
- Describe a scenario where you had to clean and transform messy data in Excel. What techniques did you use?
- How do you remove duplicates from a dataset, and what considerations should be taken into account?
3. Pivot Tables:
- Explain the purpose of a pivot table. Provide an example of when you used a pivot table to derive meaningful insights.
- What are slicers in a pivot table, and how can they be beneficial in data analysis?
4. Data Visualization:
- Share your approach to creating effective charts and graphs in Excel to communicate data trends.
- How would you use conditional formatting to highlight key information in a dataset?
5. Statistical Analysis:
- Discuss a situation where you applied statistical analysis in Excel to draw conclusions from a dataset.
- Explain the steps you would take to perform regression analysis in Excel.
6. Macros and Automation:
- Have you ever used Excel macros to automate a repetitive task? If so, provide an example.
- What are the potential risks and benefits of using macros in a data analysis workflow?
7. Data Validation:
- How do you implement data validation in Excel, and why is it important in data analysis?
- Can you give an example of when you used Excel's data validation to improve data accuracy?
8. Data Linking and External Data Sources:
- Describe a situation where you had to link data from multiple Excel workbooks. How did you approach this task?
- How would you import data from an external database into Excel for analysis?
ENJOY LEARNING 👍👍
1. What are the ways to detect outliers?
Outliers are detected using two methods:
Box Plot Method: According to this method, a value is considered an outlier if it lies more than 1.5*IQR (interquartile range) below the bottom quartile (Q1) or more than 1.5*IQR above the top quartile (Q3).
Standard Deviation Method: According to this method, an outlier is defined as a value that falls outside the range of the mean ± (3*standard deviation).
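A numeric sketch of both rules with NumPy on toy data; note that a single extreme value inflates the standard deviation, so the 3-sigma rule may miss it:

import numpy as np

values = np.array([12, 14, 15, 15, 16, 18, 19, 120])   # toy data; 120 looks suspect

# Box plot method: outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
box_outliers = values[(values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)]

# Standard deviation method: outside mean +/- 3 standard deviations
mu, sigma = values.mean(), values.std()
sd_outliers = values[np.abs(values - mu) > 3 * sigma]

print(box_outliers, sd_outliers)   # [120] [] — the extreme value inflates sigma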
2. What is a Recursive Stored Procedure?
A stored procedure that calls itself until a boundary condition is reached is called a recursive stored procedure. This recursion lets programmers reuse the same set of code as many times as required.
3. What is the shortcut to add a filter to a table in Excel?
The filter mechanism is used when you want to display only specific data from the entire dataset. By doing so, there is no change being made to the data. The shortcut to add a filter to a table is Ctrl+Shift+L.
4. What is DAX in Power BI?
DAX stands for Data Analysis Expressions. It's a collection of functions, operators, and constants used in formulas to calculate and return values. In other words, it helps you create new info from data you already have.