Data analytics is the science of discovering and interpreting meaningful patterns in data. Why is this important? For one, it helps identify significant patterns and cause-and-effect relationships that would otherwise have gone unnoticed. The resulting business intelligence also helps company leaders make informed decisions based not on emotions, instincts, or other irrational grounds but on actual patterns obtained through data processing. These solutions help companies increase sales, reduce costs, and implement further improvements.
The amount of data in the world is growing exponentially, and the need to process and analyze it is growing just as fast. As a result, companies are depending more and more on analytics to increase revenue and drive innovation. For example, additional data is being created as business interactions become increasingly digital. Consequently, there are new opportunities to learn how to make interfaces more personalized, improve customer satisfaction and service, develop new and better products, and increase sales. Beyond the business world, data processing and analysis helps solve the most complex tasks on a global scale.
What Does a Data Analyst Do?
A Data Analyst looks at data differently from other specialists: they are trained to extract more meaningful information and clues from it than anyone else. Typically, a Data Analyst is looking for answers to the following questions:
- How much have we earned in the last month/quarter/year?
- Why have users become less active (or left)?
- How did a product update affect users?
- When can we expect a return on investment when entering this market?
- What can be done to reduce the production cost of a product?
These are examples of general questions; each Data Analyst answers questions within the responsibility of their team.
There are two ways one can make rational decisions in business:
- Using the expert assessment method: decisions are made by relying on the professional experience of specialists and their qualified judgment.
- Using a data-driven approach: decisions are made based on data analytics. This approach allows you to confirm or refute expert judgment and avoid low-quality decisions caused by cognitive biases.
An analyst in an IT company works with data to find insights, cause-and-effect relationships, business growth points, and weak points. This information is then used by product managers, marketers, CEOs, and other company employees to make decisions.
To answer these questions, Data Analysts generate hypotheses about product improvements and test them with A/B tests, which they then evaluate to make a final decision about the proposed changes. Sometimes making the decision is simple, for example, when revenues have increased and users are happy. Yet testing results are often ambiguous or require additional factors to be considered, such as user indicators or the load on the technical infrastructure, which turns decision-making into something of an art.
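As an illustration, here is a minimal sketch of how the result of such an A/B test might be evaluated in Python with a two-proportion z-test; the conversion counts and group sizes below are hypothetical, and real evaluations usually involve more metrics and checks.

```python
# Hypothetical A/B test: did the variant change the conversion rate?
from statsmodels.stats.proportion import proportions_ztest

conversions = [340, 395]   # converted users in control (A) and variant (B)
visitors = [5000, 5100]    # users exposed to each version

stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# With a conventional 0.05 threshold, a small p-value suggests the difference
# in conversion rates is unlikely to be due to chance alone.
if p_value < 0.05:
    print("The difference is statistically significant")
else:
    print("No significant difference detected")
```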
A data processing and analysis specialist collects, analyzes, and interprets big data to identify patterns, obtain analytics, make forecasts, and develop action plans. Big data can be defined as data sets where the variety, volume, and acquisition rate exceed the processing capabilities available within the framework of previous information management methods. Data processing and analysis specialists use many types of big data. Two examples are given below.
- Structured data is usually arranged in rows and columns and contains words and numbers (names, dates, credit card data, etc.). For example, a specialist in data processing and analysis working in the utility industry might analyze tables with data on the production and use of electricity to find an opportunity to reduce costs and identify patterns that can cause equipment failure.
- Unstructured data is not ordered in any way and may contain document files with texts, information from social networks and mobile devices, website content, and video data. For example, a Data Analyst working in retail can analyze unstructured call-center notes, email data, surveys, and social media posts to answer questions about improving customer service.
The features of a data set can be described as quantitative (structured numerical data) or as qualitative/categorical (data that is not represented by numerical values but can be grouped into categories). Specialists need to know what type of data they are working with, as the kinds of analysis used and the types of graphs suitable for data visualization depend on it.
Different Types of Data Analysis
Before we move on to descriptive statistics, let's talk about an essential preparatory step: data quality assurance. Before proceeding with any analysis, it is necessary to ensure the data contains no errors or omissions and is complete, free of duplicates, correctly organized, and suitable for further analysis.
We often receive data in rows and columns in the form of a table, but this data is not always correctly organized for further manipulation. Errors in the data lead to unreliable results, and an incorrect data structure increases the time a task takes. Therefore, the first stage of any analysis is to check the initial data for correctness, correct errors if necessary, and structure the data.
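For example, here is a minimal sketch of such a check using pandas; the file name and column names are hypothetical and stand in for whatever raw data a team actually receives.

```python
# Hypothetical quality check on a raw table before any analysis.
import pandas as pd

df = pd.read_csv("sales.csv")                          # hypothetical source file

print(df.isna().sum())                                 # missing values per column
print("Duplicate rows:", df.duplicated().sum())        # fully duplicated rows

df = df.drop_duplicates()                              # remove duplicates
df["order_date"] = pd.to_datetime(df["order_date"])    # fix column types
df = df.dropna(subset=["order_id"])                    # drop rows missing a key field
```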
There are different options for different types of data analysis, but there are four types of data analysis that clients are most often looking for:
- Descriptive analysis
- Diagnostic analysis
- Predictive analysis
- Prescriptive analysis
Each begins with data preparation for further processing and ends with a review of the results. The four types differ in the complexity of working with information and the degree of human participation.
Let’s first consider descriptive data analysis.
Descriptive Analytics
The simplest data analysis type is descriptive, which is also known as descriptive statistics.
Descriptive statistics involves a concise and informative description of data in the form of graphs, tables, and numerical expressions. It is important to note that the type of variables determines which statistical methods are appropriate.
For quantitative data, a normality test is performed, and descriptive statistics such as the mean with its standard deviation, the median and quartiles, and the minimum and maximum values in the sample are calculated.
The frequencies of occurrence are also calculated for qualitative indicators.
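Here is a minimal pandas sketch of these calculations; the file and column names ("age" as a quantitative feature, "status" as a qualitative one) are hypothetical.

```python
# Hypothetical descriptive statistics for one quantitative and one qualitative column.
import pandas as pd

df = pd.read_csv("patients.csv")

# Quantitative column: count, mean, standard deviation, quartiles, min/max
print(df["age"].describe())

# Qualitative column: frequencies of occurrence
print(df["status"].value_counts(normalize=True))
```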
Descriptive analysis answers the question “what happened?” and could look something like any of these examples:
- Features of patients: The sample comprises 34% healthy people and 66% sick people.
- Portrait of clients: The study participants include 13% women and 87% men, with an average age of 35.
- Customer summary: Of the 92 clients served in one year, 25 (27%) applied again, and 67 (73%) did not return.
Descriptive statistics also includes tests for normal distribution. When processing data, it is necessary to check for normality first; this allows you to correctly choose further data processing methods and obtain reliable results. Parametric methods are used for normally distributed data; nonparametric methods are used for non-normal distributions.
There are many tests that can be used to determine if the distribution is normal. Frequently used tests include:
- Shapiro-Wilk test
- Chi-square test
- Kolmogorov-Smirnov test
If the probability that the observed difference is random is small (i.e., the p-value is less than 0.05), the difference is recognized as significant (not due to chance), and the distribution of the feature is considered non-normal.
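As a minimal sketch, the Shapiro-Wilk test can be run in Python with SciPy; the sample below is simulated purely for illustration.

```python
# Hypothetical normality check with the Shapiro-Wilk test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
values = rng.normal(loc=50, scale=10, size=200)   # simulated quantitative feature

stat, p_value = stats.shapiro(values)
print(f"W = {stat:.3f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Distribution deviates from normal: prefer nonparametric methods")
else:
    print("No deviation from normality detected: parametric methods are appropriate")
```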
Other techniques used in descriptive statistics include:
- Analysis of measures of central tendency: determines a data set's average or most typical value
- Evaluation of data scatter in the population: the degree to which individual values deviate from the central tendency, i.e., data variability (standard deviation, interquartile range)
- Frequency analysis: assessment of the frequency of occurrence of a feature
- Data visualization: distribution histograms and frequency diagrams
Thus, descriptive statistics allows data to be presented more meaningfully, making it easier to interpret.
We talk about how to identify differences in features between groups, check for a relationship between indicators, identify homogeneous groups, and build a statistical model in the following sections.
Predictive Analytics
The term predictive analytics is commonly understood today as a set of operations that predict future outcomes based on experience with similar cases. Understood in this broad sense, it draws on elements of classical statistics, game theory, and mathematical analysis. As mentioned earlier, the list of areas in which such calculations are used is extremely wide: bank employees, businesspeople, advertising specialists, and even programmers need to know the relevant terminology.
Stages of Predictive Analytics Models
As mentioned earlier, a predictive analysis model is a set of operations consisting of various processes and procedures. The central units under study, upon which predictions are subsequently built, are arrays of all kinds of data. These may include:
- Data collected via the internet
- Information from CRM packages
- Meter readings, including telemetric ones
- Various business parameters
Moreover, many types of modern software already include modules that allow one or more types of predictive analysis to be carried out.
Predictive Analytics Tools
The central toolkit of a specialist working in predictive analytics includes, first of all, specialized programming languages and analytics platforms. Some of the most popular solutions include:
- Python
- RapidMiner
- KNIME
- IBM SPSS Modeler
- Watson Analytics
- SAS Enterprise Miner
- Oracle Big Data Preparation
All of these software environments are, in one way or another, suitable for making functional and reliable forecasts.
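As a rough illustration of what such a forecast looks like at the code level, here is a minimal sketch of a predictive model in Python (one of the tools listed above) using scikit-learn; the features and target are simulated, so this is a sketch of the workflow rather than a real forecasting pipeline.

```python
# Hypothetical predictive model: train on historical data, predict new outcomes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                                        # simulated features, e.g. price, traffic, season
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=500)   # simulated target, e.g. sales

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Mean absolute error:", round(mean_absolute_error(y_test, predictions), 2))
```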
Prescriptive Analytics
Instead of just predicting, prescriptive analytics tunes certain variables to achieve the best possible results and suggests a specific course of action.
Companies may use one or all types of analytics; among the tools listed above, SAS Enterprise Miner and Oracle Big Data Preparation have recently been favored. The predictive and prescriptive analytics market was valued at $6.64 billion last year and is expected to reach $22.5 billion by 2024.
Prescriptive Analytics Capabilities
Like predictive analytics, prescriptive analytics allows us to turn raw data into valuable business insights. The difference is that the latter offers the best possible course of action that a company can take.
Prescriptive analytics is based on the idea of optimization. When building an analysis model for each business, you must consider all factors—even negligible ones—including the supply chain, employee salaries, workplace planning, energy costs, and possible delivery problems.
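To make the optimization idea concrete, here is a minimal sketch using SciPy's linear programming solver to choose a production plan that minimizes cost under resource constraints; every coefficient in it is hypothetical.

```python
# Hypothetical prescriptive question: how much of each product should we make
# to minimize cost while respecting capacity limits and minimum demand?
from scipy.optimize import linprog

cost = [4.0, 6.0]              # cost per unit of product 1 and product 2

# Resource constraints: 2*x1 + 1*x2 <= 100 machine hours, 1*x1 + 3*x2 <= 90 labor hours
A_ub = [[2, 1], [1, 3]]
b_ub = [100, 90]

# Minimum demand: at least 20 units of product 1 and 10 units of product 2
bounds = [(20, None), (10, None)]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Recommended plan:", result.x)
print("Total cost:", result.fun)
```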
Companies use different tools for prescriptive analytics. Artificial intelligence and machine learning can be called the driving forces, but the toolkit is rarely limited to just these. For example, the Gartner list includes:
- Graph analysis
- Simulation
- Complex event processing, which includes data from multiple sources to find patterns and model complex circumstances
- Neural networks
- Recommendation engines (algorithms for predicting positive and negative preferences based on past user experience)
- Heuristics, or alternative methods for solving a problem when an exact solution cannot be found
- Machine learning
One way or another, all the technologies that prescriptive analytics uses make results more accurate and reduce the time to get them. Such analytical models are already being used in various areas, including e-commerce.
Diagnostic Analytics
Diagnostic analytics is used for troubleshooting. It answers the question “why did this happen?” by digging into data sets to uncover root causes, relationships, and patterns.
There are many different types of diagnostic analytics. The most popular techniques include regression analysis, process mining, and text mining.
This type of analytics can answer questions such as “why did our website traffic suddenly drop last week?” or “why is customer traffic increasing?”
Let's say, for example, that you're in the e-commerce industry and notice that sales have declined over the past few months. To determine why this might happen, you can use diagnostic analytics to analyze data like website traffic, conversion rates, average order value, and more. This would give you a better understanding of what the problem is so you can take steps to fix it. This type of analysis helps businesses resolve issues quickly to avoid costly mistakes.
Using this type of analytics, you could look for new data sources that could provide more information about the drop in sales. You might go further and find that despite the high number of website visitors and the high number of "add to cart" actions, only a tiny percentage of visitors make a purchase. Further investigation may reveal that most customers dropped out of the checkout process while entering a delivery address.
This gives you a clue as to what the problem is. Maybe the address form doesn't load correctly on mobile devices, or it needs to be shorter and more convenient. As you dig deeper, you'll get closer to finding the answer to your data anomaly.
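Here is a minimal sketch of the kind of funnel check described above, assuming an events table with one row per user action; the file, column, and step names are hypothetical.

```python
# Hypothetical funnel diagnosis: how many unique users survive each step?
import pandas as pd

events = pd.read_csv("events.csv")   # assumed columns: user_id, step

funnel_steps = ["visit", "add_to_cart", "enter_address", "purchase"]

counts = (
    events[events["step"].isin(funnel_steps)]
    .groupby("step")["user_id"]
    .nunique()
    .reindex(funnel_steps)
)

print(counts)
# Share of users retained relative to the previous step; a sharp drop at
# "enter_address" would point to the delivery-address form as the bottleneck.
print((counts / counts.shift(1)).round(2))
```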
But diagnostic analytics isn't just for diagnosing problems; it can also be used to determine what causes favorable results.
Data Analytics Techniques
Now, let's look at the primary data analysis techniques. Factor analysis, regression analysis, cohort analysis, and time series methods are among the most widely used techniques today.
Factor Analysis
In modern statistics, factor analysis is used to identify hidden factors affecting the observed variables. The main idea of this technique is that a large number of observed variables, along with the main trends and relationships between them, can be explained by a smaller number of underlying factors.
Factor analysis can be applied in many areas, from economics and business to medical and psychological statistics. This method helps researchers determine which factors affect the observed variables and which variables are most sensitive to the effects of these factors.
One of the most critical tasks using factor analysis is identifying the main factors affecting the change in indicators within a particular sample. Besides this, factor analysis can be used to determine the relationships between different variables and to predict the values of these variables based on the available data.
Overall, factor analysis is a powerful tool allowing researchers to explore the relationships between different variables in greater depth and precision, uncover hidden factors, and build more-accurate models to predict the future values of those variables.
As a rule, this approach searches for hidden factors affecting the final result. All in all, it's a valuable tool for data exploration.
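As a minimal sketch, here is how factor analysis might be run in Python with scikit-learn; the six observed indicators are simulated from two hidden factors, so the loadings the model recovers are illustrative only.

```python
# Hypothetical factor analysis: recover a small number of hidden factors
# behind a larger set of observed indicators.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
hidden = rng.normal(size=(300, 2))                              # two latent factors
loadings = rng.normal(size=(2, 6))                              # how the factors drive 6 indicators
observed = hidden @ loadings + rng.normal(scale=0.3, size=(300, 6))

X = StandardScaler().fit_transform(observed)                    # standardize the indicators
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)                                    # factor scores per observation

print("Estimated factor loadings:")
print(fa.components_.round(2))
```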
Regression Analysis
Regression analysis involves utilizing a set of statistical methods for evaluating relationships between variables. It can be used to assess the degree of association between variables and to model future dependencies. Regression methods show how changes in the independent variables explain changes in the dependent variable.
In business, the dependent variable is the characteristic whose change is being observed or predicted; it can be the level of sales, risks, pricing, performance, and so on. The independent variables (predictors) explain the behavior of the dependent variable (season, purchasing power of the population, place of sale, and much more). Regression analysis includes several models. The most common are linear, multi-linear (or multiple linear), and nonlinear.
As we can see from the names, the models differ in the type of dependence of variables: linear is described by a linear function; multi-linear also represents a linear function, but it includes more parameters (independent variables); the nonlinear model is one in which the experimental data are characterized by a process that is nonlinear (exponential, logarithmic, trigonometric, and so on).
Most often, Data Analysts use simple linear and multi-linear models. Regression analysis offers many applications in various disciplines, including finance; a minimal code sketch follows the list below. Here are some examples of how linear regression is applied:
- Forecasting indicators
- Evaluation of marketing effectiveness
- Risk assessment
- Detection of essential factors
- Asset pricing
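Here is a minimal sketch of a simple linear regression, for example, sales explained by advertising spend; the data is simulated, and the fitted coefficients are illustrative only.

```python
# Hypothetical simple linear regression: sales as a function of ad spend.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
ad_spend = rng.uniform(10, 100, size=(200, 1))                        # independent variable
sales = 50 + 3.2 * ad_spend[:, 0] + rng.normal(scale=15, size=200)    # dependent variable

model = LinearRegression().fit(ad_spend, sales)
print("Intercept:", round(model.intercept_, 1))
print("Slope (effect of one extra unit of spend):", round(model.coef_[0], 2))
print("Predicted sales at spend = 80:", round(model.predict([[80]])[0], 1))
```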
Cohort Analysis
Cohort analysis helps to get a complete picture of the results of campaigns and business promotions. It can be used to identify patterns to improve the customer experience and understand what gaps need to be fixed.
The method is critical for understanding the effectiveness of marketing activities, as it allows for determining which sources generate the most conversions and sales. This helps businesses to adjust their promotion strategy in real time.
Segmentation works similarly and involves dividing the audience into segments according to similar characteristics (age, gender, place of residence, interests, etc.). Analysts track how a particular segment behaves and, based on its behavior, improve the commercial offer, marketing strategy, and customer journey.
However, there is a significant difference between cohort analysis and segmentation:
- Cohort analysis examines how users with different characteristics but the same experience perform the same action simultaneously.
- Segment analysis examines the behavior of a group of users with similar characteristics who may perform different actions at different times.
With cohort analysis, we study how a specific metric changes depending on user behavior. As a result, it is possible to more clearly identify errors, shortcomings, and weaknesses when considering a specific action. By observing people's behavior in cohorts, you can evaluate the effectiveness of marketing activities.
Cohort analysis includes the following steps (a minimal pandas sketch follows the list):
- Metric definition
- Formation of cohorts
- Comparison of cohorts and analysis of metrics
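As a sketch of these steps, assuming an orders table with a user ID and order date (both column names hypothetical), monthly retention by first-purchase cohort might be computed like this:

```python
# Hypothetical cohort retention: share of each monthly cohort still active N months later.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# 1. Metric: monthly retention. 2. Cohorts: the month of each user's first order.
orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = orders.groupby("user_id")["order_month"].transform("min")
orders["period"] = (orders["order_month"] - orders["cohort"]).apply(lambda d: d.n)

# 3. Compare cohorts: divide active users in each period by the cohort's size.
cohort_size = orders.groupby("cohort")["user_id"].nunique()
active = orders.groupby(["cohort", "period"])["user_id"].nunique().unstack()
retention = active.divide(cohort_size, axis=0)
print(retention.round(2))
```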
Time Series Analysis
Time series analysis is essential in many fields, including business, science, technology, and economics. Time series analysis techniques help to understand the behavior of data over time and reveal hidden trends and patterns. Three key concepts in time series analysis are stationarity, autocorrelation, and extrapolation.
Stationarity is a time series property, meaning the mean and standard deviation do not change over time. If the time series is stationary, it can be easily analyzed and predicted. A nonstationary time series can have a trend (constant rise or fall), cyclicity (repetition of cycles), or seasonality (repetition of certain events at different times of the year).
Autocorrelation is a measure of the correlation between the values of a series separated by a time lag. If the time series has high autocorrelation, the values of the series in different periods are strongly related to each other. Autocorrelation can be calculated using the Pearson correlation function, which calculates the correlation between two variables; for a time series, this means calculating the correlation between the series values in different periods. If the autocorrelation is significant, this may indicate a trend, cyclicity, or seasonality in the series.
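For instance, here is a minimal sketch of an autocorrelation check on a simulated monthly series with a yearly seasonal pattern; a high correlation at lag 12 hints at seasonality.

```python
# Hypothetical autocorrelation check on a simulated monthly series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
months = np.arange(48)
series = pd.Series(100 + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(scale=2, size=48))

# Correlation of the series with itself shifted by 1 and 12 months
print("Lag 1 autocorrelation:", round(series.autocorr(lag=1), 2))
print("Lag 12 autocorrelation:", round(series.autocorr(lag=12), 2))
```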
Extrapolation methods use the values of previous periods to predict the future values of a series. Such methods include the moving average, exponential smoothing, and Holt-Winters methods. The moving average method calculates the average value of a series over a certain period and then uses the results to predict future values.
Exponential smoothing uses an exponential function to predict future values, while the Holt-Winters method adds trend and seasonality to the exponential smoothing model.
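Here is a minimal sketch of two of these extrapolation methods on a simulated demand series: a three-month moving average, and simple exponential smoothing from statsmodels (the Holt-Winters variant adds trend and seasonality terms on top of this).

```python
# Hypothetical extrapolation: moving average and simple exponential smoothing.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(4)
demand = pd.Series(200 + rng.normal(scale=10, size=36))    # simulated monthly demand

moving_avg = demand.rolling(window=3).mean()               # 3-month moving average
print("Last moving-average value:", round(moving_avg.iloc[-1], 1))

model = SimpleExpSmoothing(demand).fit(smoothing_level=0.3, optimized=False)
print("Next-period forecast:", round(model.forecast(1).iloc[0], 1))
```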
Here are some examples of how time series analysis can be used to predict various phenomena in real life:
- Road traffic forecasting. Time series analysis can be used to predict traffic on the roads, which helps organize the work of traffic management services and plan road repairs. To do this, sensors installed along the roads collect traffic data, past traffic flows are analyzed, and forecasts are made for the future. This improves traffic efficiency and saves travelers time.
- Forecasting weather changes. Time series analysis can also be used to predict weather changes. One of the main applications here is forecasting temperature, precipitation, and other factors that can affect people's lives and the economy. To do this, analysts use past weather data and data on climate and other factors that may affect future weather changes. This allows them to warn people about possible natural disasters and take appropriate measures to mitigate them.
- Forecasting demand for goods. Time series analysis can be used to predict the demand for goods, which helps firms plan production, purchases, and sales in the market. To do this, Data Analysts use data on past sales of goods and various factors that may affect demand in the future (for example, competitors' prices and macroeconomic factors).
Data analytics is the foundation for leaders at all levels to build their strategies. A strategy based on evidence and established patterns is much more likely to succeed than one based on premonitions and subjective experience. Effective collection, ordering, and analysis of data, as well as subsequent implementation of solutions gleaned from the results, should take place in any company, regardless of the nature of the business. Only by using analytics is it possible to make and implement realistic plans for the future.
Related Articles
A DAY IN THE LIFE OF A DATA ANALYST
DATA ANALYST: AN INTRODUCTION TO THE PROFESSION