The regression coefficients are 0.8 and 0.2.
The coefficient of correlation r is the geometric mean of the two regression coefficients:
√(0.8 × 0.2) = √0.16 = 0.4
Therefore, the value of the coefficient of correlation is 0.4.
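The computation above can be checked with a short Python snippet; the regression coefficients 0.8 and 0.2 are taken directly from the problem statement:

```python
import math

# Regression coefficients from the problem statement
b_yx = 0.8  # regression coefficient of y on x
b_xy = 0.2  # regression coefficient of x on y

# r is the geometric mean of the two regression coefficients;
# it takes the common sign of b_yx and b_xy (both positive here).
r = math.sqrt(b_yx * b_xy)
print(r)  # ≈ 0.4
```

Note that r always carries the same sign as the regression coefficients, which must themselves agree in sign.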
The document discusses the least squares regression method for determining the line of best fit for a dataset. It explains that the least squares method finds the line that minimizes the sum of the squares of the distances between the observed responses in the dataset and the responses predicted by the linear approximation. The document provides steps to calculate the line of best fit, including calculating the slope and y-intercept. It also includes an example of applying the least squares method to find the line of best fit for a dataset relating t-shirt prices and number of t-shirts sold.
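The slope-and-intercept calculation described above can be sketched in plain Python. The small price/quantity dataset below is illustrative only, not the t-shirt data from the document:

```python
# Least squares fit of y = a + b*x.
# Illustrative data: t-shirt price (x) vs. number sold (y).
xs = [10, 12, 14, 16, 18]
ys = [200, 180, 170, 150, 140]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Slope: b = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
    / sum((x - x_bar) ** 2 for x in xs)
# Intercept: a = y_bar - b * x_bar
a = y_bar - b * x_bar

print(a, b)  # for this data: a = 273.0, b = -7.5
```

The negative slope matches the intuition that higher prices go with fewer shirts sold.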
- Regression analysis is used to predict the value of a dependent variable based on one or more independent variables and explain the relationship between them.
- There are different types of regression depending on whether the dependent variable is continuous or binary. Ordinary least squares regression is used for continuous dependent variables while logistic regression is used for binary dependent variables.
- The simple linear regression model describes the relationship between one independent and one dependent variable as a linear equation. This can be extended to multiple linear regression with more than one independent variable.
Curve fitting is the process of finding the best fit mathematical function for a series of data points. It involves constructing curves or equations that model the relationship between dependent and independent variables. The least squares method is commonly used, which finds the curve that minimizes the sum of the squares of the distances between the data points and the curve. This provides a single curve that best represents the overall trend of the data. Examples of linear and nonlinear curve fitting are provided, along with the process of linearizing nonlinear relationships to apply linear regression techniques.
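One common linearization of the kind mentioned above: a model y = a·e^(b·x) becomes linear after taking logs, since ln y = ln a + b·x, so an ordinary straight-line fit on (x, ln y) recovers b and ln a. A minimal sketch, where the data are generated from a known a and b so the fit should recover them:

```python
import math

# Generate noiseless data from y = a * exp(b * x) with known a, b
a_true, b_true = 2.0, 0.5
xs = [0, 1, 2, 3, 4]
ys = [a_true * math.exp(b_true * x) for x in xs]

# Linearize: fit ln(y) = ln(a) + b*x by least squares
ls = [math.log(y) for y in ys]
n = len(xs)
x_bar = sum(xs) / n
l_bar = sum(ls) / n
b = sum((x - x_bar) * (l - l_bar) for x, l in zip(xs, ls)) \
    / sum((x - x_bar) ** 2 for x in xs)
a = math.exp(l_bar - b * x_bar)

print(a, b)  # recovers roughly (2.0, 0.5)
```

With noisy data the log transform also reweights the errors, which is why a linearized fit can differ from a direct nonlinear least squares fit.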
School Project-Mathematics-properties of rational numbers, Square and square ...
The properties of rational numbers, square and square roots, cube and cube roots.
A rational number is a number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number.
This document presents Chebyshev's inequality and its applications. It begins with the statement of Chebyshev's inequality relating the probability that a random variable X deviates from its mean by a certain number of standard deviations. It then provides a proof of the inequality for both continuous and discrete random variables. Several examples are worked through applying Chebyshev's inequality to find bounds on probabilities for specific distributions. Problems include finding bounds on probabilities for uniform and exponential distributions and determining values for which probabilities exceed thresholds.
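Chebyshev's bound P(|X − μ| ≥ kσ) ≤ 1/k² can be checked exactly for a concrete case. The uniform distribution on (0, 1) below is an illustrative choice, not necessarily one of the document's examples:

```python
import math

# X ~ Uniform(0, 1): mu = 1/2, sigma = 1/sqrt(12)
mu = 0.5
sigma = 1 / math.sqrt(12)
k = 1.5

# Exact tail probability for this uniform distribution:
# P(|X - mu| >= k*sigma) = 1 - 2*k*sigma   (valid while k*sigma < 1/2)
exact = 1 - 2 * k * sigma
bound = 1 / k ** 2  # Chebyshev's upper bound

print(exact, bound)  # exact ≈ 0.134, well under the bound ≈ 0.444
```

The gap between the exact value and the bound illustrates that Chebyshev's inequality is universal but often loose for any particular distribution.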
The document discusses curve fitting and the principle of least squares. It explains that curve fitting involves finding the curve of best fit that passes through most data points. The principle of least squares states that the curve of best fit is the curve for which the sum of the squares of the errors between the data points and fitted curve is minimized. It provides examples of using the method of least squares to fit linear, quadratic, and exponential curves to data.
This document provides an overview of regression analysis, including:
- Regression analysis measures the average relationship between variables to predict dependent variables from independent variables and show relationships.
- It is widely used in business to predict things like production, prices, and profits. It is also used in sociological and economic studies.
- There are three main methods for studying regression: least squares method, deviations from means method, and deviations from assumed means method. Examples are provided of calculating regression equations for bivariate data using each method.
- Regression analysis is a statistical technique for modeling relationships between variables, where one variable is dependent on the others. It allows predicting the average value of the dependent variable based on the independent variables.
- The key assumptions of regression models are that the error terms are normally distributed with zero mean and constant variance, and are independent of each other.
- Linear regression specifies that the dependent variable is a linear combination of the parameters, though the independent variables need not be linearly related. In simple linear regression with one independent variable, the least squares estimates of the intercept and slope are calculated to minimize the sum of squared errors.
The document discusses simple linear regression analysis. It provides definitions and formulas for simple linear regression, including that the regression equation is y = a + bx. An example is shown of using the stepwise method to determine if there is a significant relationship between number of absences (x) and grades (y) for students. The analysis finds a significant negative relationship, meaning more absences correlated with lower grades. Formulas are provided for calculating the slope, intercept, and testing significance of the regression model.
The document discusses the method of least squares for fitting curves to data points. It begins by introducing trend analysis and hypothesis testing as two applications of curve fitting. It then describes least squares regression and interpolation as two approaches for curve fitting. The main part of the document provides details on the least squares method, including forming normal equations from the data points, solving the normal equations to determine the coefficients of the curve, and examples of fitting straight lines and parabolas to data. It concludes by noting the key application of data fitting to minimize residuals and limitations when there is uncertainty in the independent variable.
Regression analysis is a statistical technique for predicting a dependent variable based on one or more independent variables. Simple linear regression fits a straight line to the data to predict a continuous dependent variable (y) from a single independent variable (x). The output is an equation of the form y = b0 + b1x + ε, where b0 is the y-intercept, b1 is the slope, and ε is the error. Multiple linear regression extends this to include more than one independent variable. Regression analysis calculates the "best fit" line that minimizes the residuals, or differences between predicted and observed y values.
The document discusses the geometric distribution, a discrete probability distribution that models the number of Bernoulli trials needed to get one success. It defines the geometric distribution and gives its probability mass function. Some key properties and applications are discussed, including: the mean is 1/p and the variance is q/p^2, where q = 1 - p. It is used in situations such as modeling the probability of events occurring after repeated independent trials with a constant probability of success on each trial. Examples given include analyzing success rates in sports and deciding when to stop research trials.
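The stated mean 1/p and variance q/p^2 can be checked numerically by summing the pmf P(X = k) = q^(k−1)·p over a long truncated range; p = 0.25 below is an arbitrary illustrative value:

```python
p = 0.25
q = 1 - p

# E[X] = sum of k * q^(k-1) * p; the tail beyond k = 2000 is negligible
mean = sum(k * q ** (k - 1) * p for k in range(1, 2000))
# Var[X] = sum of (k - 1/p)^2 * q^(k-1) * p
var = sum((k - 1 / p) ** 2 * q ** (k - 1) * p for k in range(1, 2000))

print(mean, var)  # approaches 1/p = 4 and q/p^2 = 12
```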
The document discusses the Poisson distribution, which describes the probability of rare events. It has one parameter, the mean (m), and is used when the number of trials is large but the probability of an individual success is small. Examples of Poisson distributions given include defects per box of screws and printing mistakes per page. The key characteristics are outlined, such as being discrete and positively skewed. The document provides an example problem calculating probabilities based on the average number of accidents per month at an intersection. It also demonstrates how to fit data to a Poisson distribution by calculating expected frequencies based on the observed mean number of events.
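Fitting a Poisson distribution as described reduces to evaluating P(X = k) = e^(−m)·m^k / k! at the observed mean and scaling by the total frequency. A sketch with an assumed mean and total, not the document's actual figures:

```python
import math

m = 1.2      # assumed mean number of events per interval
total = 200  # assumed total observed frequency

def poisson_pmf(k, m):
    """P(X = k) for a Poisson distribution with mean m."""
    return math.exp(-m) * m ** k / math.factorial(k)

# Expected frequencies for k = 0..5 events
expected = [total * poisson_pmf(k, m) for k in range(6)]
print([round(e, 1) for e in expected])
```

Comparing these expected frequencies with the observed ones (for example with a chi-squared statistic) indicates how well the Poisson model fits.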
The document discusses regression analysis, including definitions, uses, calculating regression equations from data, graphing regression lines, the standard error of estimate, and limitations. Regression analysis is a statistical technique used to understand the relationship between variables and allow for predictions. The document provides examples of calculating regression equations from various data sets and determining the standard error of estimate.
Regression analysis is a statistical technique used to estimate the relationships between variables. It allows one to predict the value of a dependent variable based on the value of one or more independent variables. The document discusses simple linear regression, where there is one independent variable, as well as multiple linear regression which involves two or more independent variables. Examples of linear relationships that can be modeled using regression analysis include price vs. quantity, sales vs. advertising, and crop yield vs. fertilizer usage. The key methods for performing regression analysis covered in the document are least squares regression and regressions based on deviations from the mean.
i. The document discusses polynomials, including definitions, types of polynomials based on degree, and key concepts like factorization, the relationship between zeros and coefficients of quadratic and cubic polynomials, and graphs of polynomials.
ii. Key results covered are: x^3 + y^3 = (x + y)(x^2 - xy + y^2), the remainder theorem, and the factor theorem.
iii. Examples of polynomial factorization and the relationship between zeros and coefficients are provided.
This document discusses Karl Pearson's coefficient of correlation and how it is used to measure the relationship between two variables. It defines positive, negative, and zero correlation, and explains that Pearson's correlation coefficient (represented by r) varies from -1 to 1, where -1 is total negative correlation, 0 is no correlation, and 1 is total positive correlation. The document also provides an example of calculating r using product-moment method for a set of test score data, and interprets the resulting correlation value.
This document discusses the Lagrange multiplier method for finding the constrained maximum or minimum of a function subject to an equality constraint. It provides examples of using Lagrange multipliers to find the dimensions of a rectangle with maximum area given a perimeter, and to find the points on a circle closest to and farthest from a given point. The key steps are to set up the Lagrange multiplier equation relating the gradients of the objective function and constraint, solve for the critical points, and evaluate the objective function at these points to find the maximum or minimum.
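The rectangle example above works out as follows (perimeter P fixed, area A = xy maximized):

```latex
\begin{aligned}
\nabla A = \lambda \nabla g,\quad g(x, y) &= 2x + 2y - P = 0\\
\Rightarrow\quad y = 2\lambda,\;\; x = 2\lambda \quad &\Rightarrow\quad x = y\\
\text{with } 2x + 2y = P \quad &\Rightarrow\quad x = y = \tfrac{P}{4},
\end{aligned}
```

so among all rectangles of a given perimeter, the square has the maximum area.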
This chapter summary covers simple linear regression models. Key topics include determining the simple linear regression equation, measures of variation such as total, explained, and unexplained sums of squares, assumptions of the regression model including normality, homoscedasticity and independence of errors. Residual analysis is discussed to examine linearity and assumptions. The coefficient of determination, standard error of estimate, and Durbin-Watson statistic are also introduced.
Multiple regression analysis is a powerful technique used for predicting the unknown value of a variable from the known value of two or more variables.
- Simple linear regression is used to predict values of one variable (dependent variable) given known values of another variable (independent variable).
- A regression line is fitted through the data points to minimize the deviations between the observed and predicted dependent variable values. The equation of this line allows predicting dependent variable values for given independent variable values.
- The coefficient of determination (R²) indicates how much of the total variation in the dependent variable is explained by the regression line. The standard error of estimate provides a measure of how far the observed data points deviate from the regression line on average.
- Prediction intervals can be constructed around predicted dependent variable values to indicate the uncertainty in predictions for a given confidence level, based on the
The document provides an introduction to regression analysis and performing regression using SPSS. It discusses key concepts like dependent and independent variables, assumptions of regression like linearity and homoscedasticity. It explains how to calculate regression coefficients using the method of least squares and how to perform regression analysis in SPSS, including selecting variables and interpreting the output.
A binomial random variable is the number of successes x in n repeated trials of a binomial experiment. The probability distribution of a binomial random variable is called a binomial distribution. Suppose we flip a coin two times and count the number of heads (successes).
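The coin-flip example above has n = 2 trials and p = 0.5; the binomial pmf P(X = x) = C(n, x)·p^x·(1 − p)^(n − x) gives the full distribution of the number of heads:

```python
from math import comb

n, p = 2, 0.5  # two coin flips, probability of heads 0.5

# P(X = x) for x = 0, 1, 2 heads
pmf = [comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)]
print(pmf)  # [0.25, 0.5, 0.25]
```

One head is twice as likely as zero or two because it can occur in two orders (HT or TH).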
We define the definite integral as a limit of Riemann sums, compute some approximations, then investigate the basic additive and comparative properties
This document presents hypotheses concerning proportions. It introduces proportions and the binomial distribution. It discusses hypotheses for one proportion using a z-test and provides an example. It also discusses hypotheses for two proportions using a z-test and provides an example. Finally, it discusses analyzing contingency tables using a chi-squared test and provides an example. References for further information are also included.
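The one-proportion z statistic mentioned above is z = (p̂ − p₀) / √(p₀(1 − p₀)/n). A sketch with assumed sample numbers, not an example from the document:

```python
import math

# Assumed illustrative sample: 60 successes in 100 trials,
# testing H0: p = 0.5
x, n, p0 = 60, 100, 0.5

p_hat = x / n
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(z)  # ≈ 2.0, i.e. about two standard errors above H0
```

The z value is then compared against the standard normal critical value for the chosen significance level (for example 1.96 for a two-sided test at 5%).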
Here are the key steps and results:
1. Load the data and run a multiple linear regression with x1 as the target and x2, x3 as predictors.
R-squared is 0.89
2. Add x4, x5 as additional predictors.
R-squared increases to 0.94
3. Add x6, x7 as additional predictors.
R-squared further increases to 0.98
So as more predictors are added, the R-squared value increases, indicating more of the variation in x1 is explained by the model. However, adding too many predictors can lead to overfitting.
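The monotone behaviour described above — training R² never decreases when predictors are added to an ordinary least squares model — can be demonstrated in pure Python. The synthetic data and nested models below (y ~ x, then y ~ x + x²) are illustrative, not the x1…x7 dataset from the document:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_r2(X, y):
    """Fit y = X beta by the normal equations; return training R^2."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    y_hat = [sum(X[i][a] * beta[a] for a in range(k)) for i in range(n)]
    y_bar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 4.3, 9.2, 15.8, 26.1, 37.9]  # roughly quadratic

r2_small = ols_r2([[1, x] for x in xs], ys)        # y ~ 1 + x
r2_big = ols_r2([[1, x, x * x] for x in xs], ys)   # y ~ 1 + x + x^2
print(r2_small, r2_big)  # the larger model's R^2 is never lower
```

This is exactly why a raw R² comparison favours larger models: to guard against overfitting one usually looks at adjusted R² or out-of-sample error instead.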
This document provides an overview of regression analysis, including:
- Regression analysis is used to study the relationship between variables and predict one variable from another. It can be linear or non-linear.
- Simple regression involves one independent and one dependent variable, while multiple regression involves two or more independent variables.
- The method of least squares is used to determine the regression equation that best fits the data by minimizing the sum of the squared residuals.
Chapter 16: Correlation
Correlation is a statistical method used to measure the relationship between two variables. A relationship exists when changes in one variable are accompanied by consistent changes in the other. A correlation evaluates the direction, form, and degree of the relationship. The Pearson correlation specifically measures the direction and strength of a linear relationship between two numerical variables. Other correlational methods like Spearman and point-biserial correlations can be used for ordinal or dichotomous variable relationships.
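The Pearson coefficient described above is r = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² · Σ(y − ȳ)²). A short pure-Python version, with made-up score pairs rather than the document's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - x_bar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - y_bar) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data (not from the document)
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # ≈ 1.0, perfect positive
print(pearson_r([1, 2, 3, 4, 5], [10, 8, 6, 4, 2]))  # ≈ -1.0, perfect negative
```

Values near 0 indicate no linear relationship, though a strong nonlinear relationship may still exist.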
Correlation and regression analysis are statistical methods used to determine if a relationship exists between variables and describe the nature of that relationship. A scatter plot graphs the independent and dependent variables and allows visualization of any trends in the data. The correlation coefficient measures the strength and direction of the linear relationship between variables, ranging from -1 to 1. Regression finds the linear "best fit" line that minimizes the residuals and can be used to predict dependent variable values.
Descriptive & statistical study on Stock Market
This study examines the relationship between oil prices and stock returns in Pakistan using secondary data and regression analysis. It analyzes four companies - Attock Petroleum, Pakistan State Oil, Lucky Cement, and Attock Cement - from two sectors, oil and gas and construction materials. The hypothesis is that oil prices have a positive relationship with stock returns. Regression results show a negative relationship, with higher oil prices linked to lower stock returns. In conclusion, increases in oil prices raise costs for businesses. The study recommends that firms monitor supply and demand forces influencing oil prices and properly use hedging techniques.
Correlation is a statistical technique that measures the relationship between two variables. A positive correlation means the variables increase and decrease together, while a negative correlation means one variable increases as the other decreases. The strength of a correlation is measured numerically from 0 to 1 for positive correlations and 0 to -1 for negative correlations. These numbers are called correlation coefficients.
This document discusses correlation and regression analysis. It defines correlation as dealing with the association between two or more variables, and identifies different types including positive/negative, simple/multiple, and linear/non-linear. Regression analysis predicts the value of a dependent variable based on an independent variable. Key aspects covered include Karl Pearson's coefficient of correlation, Spearman's rank correlation coefficient, regression lines, coefficients, and estimating values from the regression equation.
Correlation and regression analysis are statistical methods used to determine relationships between variables. Correlation determines if a linear relationship exists between variables but does not imply causation. While correlation between age and height in children suggests a causal relationship, correlation between mood and health is less clear on causality. Regression analysis helps understand how changes in independent variables impact a dependent variable when other independent variables are held fixed. Linear regression models the dependent variable as a linear combination of parameters, while non-linear regression uses iterative procedures when the model is non-linear in parameters.
Correlation and regression analysis are statistical tools used to analyze relationships between variables. Correlation measures the strength and direction of association between two variables on a scale from -1 to 1. Regression analysis uses one variable to predict the value of another variable and draws a best-fit line to represent their relationship. There are always two lines of regression - one showing the regression of x on y and the other showing the regression of y on x. Regression coefficients from these lines indicate the slope and intercept of the lines and can help estimate unknown variable values based on known values.
The document discusses correlation and linear regression. It defines Pearson and Spearman correlation as statistical techniques to measure the relationship between two variables. Pearson correlation measures the linear association between interval variables, while Spearman correlation measures statistical dependence between two variables using their rank order. Linear regression finds the best fit linear relationship between a dependent and independent variable to predict changes in one based on the other. The key assumptions and interpretations of correlation coefficients and regression lines are also covered.
The document discusses the chi-square test, which offers an alternative method for testing the significance of differences between two proportions. It was developed by Karl Pearson and follows a specific chi-square distribution. To calculate chi-square, contingency tables are made noting observed and expected frequencies, and the chi-square value is calculated using the formula. Degrees of freedom are also calculated. Chi-square test is commonly used to test proportions, associations between events, and goodness of fit to a theory. However, it has limitations when expected values are less than 5 and does not measure strength of association or indicate causation.
The chi-square test is used to determine if an observed frequency distribution differs from an expected theoretical distribution. It can test goodness of fit, independence of attributes, and homogeneity. The test involves calculating chi-square by taking the sum of the squares of the differences between observed and expected frequencies divided by expected frequencies. For the test to be valid, certain conditions must be met regarding sample size, expected frequencies, independence, and randomness. The test has some limitations such as not measuring strength of association and being unreliable with small expected frequencies.
A hypothesis is usually considered the principal instrument in research and quality control. Its main function is to suggest new experiments and observations, and many experiments are carried out with the deliberate object of testing a hypothesis. Decision makers often face situations in which they are interested in testing hypotheses on the basis of available information and then taking decisions on the basis of such testing. In Six Sigma methodology, hypothesis testing is a tool of substance used in the Analyze phase of a Six Sigma project so that improvement can be made in the right direction.
The document discusses correlation and regression, explaining that correlation describes the strength of a linear relationship between two variables, while regression tells us how to draw the straight line described by the correlation. It provides examples of using correlation coefficients to determine the strength and direction of relationships between independent and dependent variables, and discusses calculating correlation coefficients and using regression analysis to predict variable relationships and outcomes.
This document discusses correlation and regression. Correlation describes the strength and direction of a linear relationship between two variables, while regression allows predicting a dependent variable from an independent variable. It provides examples of calculating the correlation coefficient r to determine the strength and direction of relationships between variables like education and self-esteem or family income and number of children. The regression equation describes the linear regression line and can be used to predict values of the dependent variable from known values of the independent variable.
This document discusses correlation, regression, and the general linear model. It defines correlation as assessing the relationship between two variables, while regression describes how well one variable can predict another. Pearson's r standardizes the covariance between variables. Linear regression finds the best-fitting line that minimizes the residuals through the least squares method. The coefficient of determination, r-squared, indicates how much variance in the dependent variable is explained by the independent variable. Multiple regression extends this to include multiple independent variables. The general linear model encompasses linear regression and can analyze effects across multiple dependent variables.
This document discusses correlation and regression analysis. It defines correlation as assessing the relationship between two variables, while regression determines how well one variable can predict another. Correlation does not imply causation. Pearson's r standardizes the covariance between variables and ranges from -1 to 1, indicating the strength and direction of their linear relationship. Regression finds the best-fitting linear relationship through the least squares method to minimize residuals and predict one variable from another. It provides the slope and intercept of the regression line. The coefficient of determination, r-squared, indicates how well the regression model fits the data.
Finding the relationship between two quantitative variables without being able to infer causal relationships
Correlation is a statistical technique used to determine the degree to which two variables are related
This presentation covered the following topics:
1. Definition of Correlation and Regression
2. Meaning of Correlation and Regression
3. Types of Correlation and Regression
4. Karl Pearson's methods of correlation
5. Bivariate Grouped data method
6. Spearman's Rank correlation Method
7. Scatter diagram method
8. Interpretation of correlation coefficient
9. Lines of Regression
10. Regression equations
11. Difference between correlation and regression
12. Related examples
This document discusses correlation and regression analysis. It begins by outlining the chapter's objectives and providing an introduction to investigating relationships between variables using statistical analysis. The document then presents examples of collecting data to study potential relationships between variables like stone dimensions, human heights and weights, and sprint and long jump performances. It introduces various statistical measures for quantifying relationships in data, including covariance, Pearson's product moment correlation coefficient, and Spearman's rank correlation coefficient. Examples are provided to demonstrate calculating and interpreting these statistics. Limitations of correlation analysis are also noted.
This document discusses relationships between variables in experiments. It defines two types of relationships: functional and statistical. A functional relationship is a perfect mathematical relationship where each value of the independent variable corresponds to a single, unique value of the dependent variable. A statistical relationship is imperfect, with a range of possible dependent variable values for each independent variable value. The document also discusses simple linear regression analysis, how to estimate regression coefficients, and how to interpret them to understand the relationship between variables.
This document provides an overview of regression models and analysis techniques. It introduces simple and multiple linear regression, as well as logistic regression. It discusses assessing regression models, cross-validation, model selection, and using regression models for prediction. Additionally, it covers the similarities and differences between linear and logistic regression, and assessing correlation without inferring causation. Scatter plots, correlation coefficients, and computing regression equations are also summarized.
Correlation by Neeraj Bhandari ( Surkhet.Nepal )
1. CORRELATION
Correlation is a statistical measure of the relationship between two variables, such that a change in one variable is accompanied by a change in the other; such variables are called correlated. Correlation analysis is thus a mathematical tool used to measure the degree to which one variable is linearly related to another.
2. DIRECT OR POSITIVE CORRELATION
If an increase (or decrease) in one variable results in a corresponding increase (or decrease) in the other, the correlation is said to be direct or positive.
INVERSE OR NEGATIVE CORRELATION
If an increase (or decrease) in one variable results in a corresponding decrease (or increase) in the other, the correlation is said to be inverse or negative.
3. For example, the correlation between income and expenditure is positive, while the correlation between the volume and pressure of a perfect gas is negative.
4. LINEAR CORRELATION
A relation in which the values of the two variables maintain a constant ratio is called linear correlation (or perfect correlation).
NON-LINEAR CORRELATION
A relation in which the values of the two variables do not maintain a constant ratio is called non-linear correlation.
5. KARL PEARSON'S COEFFICIENT OF CORRELATION
The correlation coefficient between two variables x and y is denoted by r(x, y) and is a numerical measure of the linear relationship between them:
r = Σ(x − x̄)(y − ȳ) / (n σx σy)
where r = correlation coefficient between x and y, σx = standard deviation of x, σy = standard deviation of y, and n = number of observations.
6. PROPERTIES OF THE COEFFICIENT OF CORRELATION
(i) It is a measure of the degree of correlation.
(ii) The value of r(x, y) lies between -1 and 1.
(iii) If r = 1, the correlation is perfect positive.
(iv) If r = -1, the correlation is perfect negative.
(v) If r = 0, the variables are uncorrelated, i.e. there is no linear correlation (note that r = 0 alone does not guarantee the variables are independent).
7. (vi) The correlation coefficient is independent of change of origin and scale: if X and Y are random variables and a, b, c, d are any numbers with a > 0 and c > 0, then
r(aX + b, cY + d) = r(X, Y)
(If a and c have opposite signs, the sign of r is reversed.)
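Property (vi) is easy to check numerically. The following is a minimal sketch, not from the slides, using made-up data: it computes r directly from the definition, then again after a positive linear transformation of each variable.

```python
# Sketch: verify that r is unchanged by U = aX + b, V = cY + d with a, c > 0.

def pearson_r(x, y):
    """Pearson correlation coefficient from the definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

x = [1, 2, 3, 4, 5]        # illustrative data, not from the slides
y = [2, 4, 5, 4, 5]

r1 = pearson_r(x, y)
# Change of origin and scale: U = 3X + 7, V = 2Y - 1 (a = 3 > 0, c = 2 > 0)
r2 = pearson_r([3 * v + 7 for v in x], [2 * v - 1 for v in y])
assert abs(r1 - r2) < 1e-9  # the coefficient is unchanged
```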
8. Example: Calculate the correlation coefficient of the following heights (in inches) of fathers (X) and their sons (Y):
X : 65 66 67 67 68 69 70 72
Y : 67 68 65 68 72 72 69 71
12. Taking U = X − 68 and V = Y − 69 (change of origin about the means):
ΣU = 0
ΣV = 0
ΣUV = 24, ΣU² = 36, ΣV² = 44
r(U, V) = ΣUV / √(ΣU² × ΣV²)
On putting all the values we get r(U, V) = 24 / √(36 × 44) = 0.603
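The worked example can be checked with a short script. This is a sketch, not part of the original slides; it assumes the change of origin U = X − 68, V = Y − 69 (the means of X and Y), which reproduces the sums above and the final value r = 0.603.

```python
# Sketch: Pearson's r for the father/son heights via change of origin.

X = [65, 66, 67, 67, 68, 69, 70, 72]
Y = [67, 68, 65, 68, 72, 72, 69, 71]

U = [x - 68 for x in X]   # 68 is the mean of X, so sum(U) = 0
V = [y - 69 for y in Y]   # 69 is the mean of Y, so sum(V) = 0

suv = sum(u * v for u, v in zip(U, V))   # ΣUV = 24
suu = sum(u * u for u in U)              # ΣU² = 36
svv = sum(v * v for v in V)              # ΣV² = 44
r = suv / (suu * svv) ** 0.5
print(round(r, 3))  # 0.603
```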
13. RANK CORRELATION-
Let (xi ,yi) i = 1,2,3……n be the ranks of n
individuals in the group for the characteristic A
and B respectively.
Co-efficient of correlation between the ranks is
called the rank correlation co-efficient between the
characteristic A and B for that group of
individuals.
r = 1 - 6Σdᵢ² / (n(n² - 1))
Where dᵢ denotes the difference in ranks of the ith
individual.
14. EXAMPLE-
Compute the rank correlation co-efficient for the following
data-
Person : A B C D E F G H I J
Rank in Maths : 9 10 6 5 7 2 4 8 1 3
Rank in Physics:1 2 3 4 5 6 7 8 9 10
15. Person  R1  R2  d = R1 - R2  d²
A       9   1        8       64
B      10   2        8       64
C       6   3        3        9
D       5   4        1        1
E       7   5        2        4
F       2   6       -4       16
G       4   7       -3        9
H       8   8        0        0
I       1   9       -8       64
J       3  10       -7       49
TOTAL                        280
16. Here n = 10 and Σd² = 280, so
r = 1 - 6(280) / (10(10² - 1)) = 1 - 1680/990 ≈ -0.697
i.e. the two sets of ranks are strongly negatively correlated.
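A short Python check of this table (Python is not part of the slides):

```python
maths   = [9, 10, 6, 5, 7, 2, 4, 8, 1, 3]    # ranks in Maths
physics = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]    # ranks in Physics

n = len(maths)
sum_d2 = sum((a - b) ** 2 for a, b in zip(maths, physics))   # 280
r = 1 - 6 * sum_d2 / (n * (n ** 2 - 1))
print(round(r, 3))   # -0.697, a strong negative rank correlation
```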
17. Repeated Ranks
If some individuals are tied, each is given the average of the
ranks they jointly occupy, and for every group of m repeated
ranks the correction m(m² - 1)/12 is added to Σd²:
r = 1 - 6[Σd² + m₁(m₁² - 1)/12 + m₂(m₂² - 1)/12 + …] / (n(n² - 1))
Example : Obtain the rank correlation co-efficient for the following data ;
X 68 64 75 50 64 80 75 40 55 64
Y 62 58 68 45 81 60 68 48 50 70
18. X          : 68  64  75   50  64  80  75   40  55  64
Y          : 62  58  68   45  81  60  68   48  50  70
Ranks in X : 4   6   2.5  9   6   1   2.5  10  8   6
Ranks in Y : 5   7   3.5  10  1   6   3.5  9   8   2
d = Rx - Ry: -1  -1  -1   -1  5   -5  -1   1   0   4    (Σd = 0)
d²         : 1   1   1    1   25  25  1    1   0   16   (Σd² = 72)
Here n = 10. In X the value 64 occurs 3 times (m = 3) and 75 twice
(m = 2); in Y the value 68 occurs twice (m = 2). The correction
factor is therefore
3(3² - 1)/12 + 2(2² - 1)/12 + 2(2² - 1)/12 = 2 + 0.5 + 0.5 = 3
r = 1 - 6(72 + 3) / (10(10² - 1)) = 1 - 450/990 = 1 - 5/11 = 6/11 ≈ 0.545
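The tied-rank computation can be automated. The sketch below (Python, not in the slides; `average_ranks` and `tie_correction` are helper names chosen here) assigns average descending ranks and applies the m(m² - 1)/12 correction:

```python
def average_ranks(values):
    # descending ranks; tied values share the average of their rank positions
    order = sorted(values, reverse=True)
    ranks = []
    for v in values:
        first = order.index(v)        # 0-based position of first occurrence
        m = order.count(v)            # size of the tied group
        ranks.append(first + (m + 1) / 2)
    return ranks

def tie_correction(values):
    # m(m^2 - 1)/12 for every group of m tied values
    return sum(m * (m * m - 1) / 12
               for m in (values.count(v) for v in set(values)))

X = [68, 64, 75, 50, 64, 80, 75, 40, 55, 64]
Y = [62, 58, 68, 45, 81, 60, 68, 48, 50, 70]

rx, ry = average_ranks(X), average_ranks(Y)
n = len(X)
sum_d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))   # 72
cf = tie_correction(X) + tie_correction(Y)           # 2.5 + 0.5 = 3
r = 1 - 6 * (sum_d2 + cf) / (n * (n ** 2 - 1))
print(round(r, 3))   # 0.545
```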
19. Regression Analysis
The term regression means some
sort of functional relationship
between two or more variables.
Regression measures the nature
and extent of correlation.
Regression is the estimation or
prediction of unknown values of one
variable from known values of
another variable.
20. CURVE OF REGRESSION AND
REGRESSION EQUATION
If two variates x and y are correlated, then
the scatter diagram will be more or less
concentrated round a curve. This curve is
called the curve of regression.
The mathematical equation of the
regression curve is called regression
equation.
21. LINEAR REGRESSION
When the points of the scatter
diagram concentrate round a
straight line, the regression is called
linear and this straight line is known
as the line of regression.
22. LINES OF REGRESSION
In case of n pairs (x,y), we can assume x or y
as independent or dependent variable.
Either of the two may be estimated for the
given values of the other. Thus if we want to
estimate y for given values of x, we shall
have the regression equation of the form y =
a + bx, called the regression line of y on x.
And if we wish to estimate x from the given
values of y, we shall have the regression line
of the form x = A + By, called the regression
line of x on y.
Thus in general, we always have two lines of
regression.
26. Where b_yx and b_xy are the regression coefficients:
b_yx = r σy/σx = (nΣxy - Σx·Σy) / (nΣx² - (Σx)²)
b_xy = r σx/σy = (nΣxy - Σx·Σy) / (nΣy² - (Σy)²)
27. Theorem :- The correlation co-efficient is the geometric mean of the
regression co-efficients.
The co-efficients of regression are b_yx = r σy/σx and b_xy = r σx/σy.
Then geometric mean = √(b_yx · b_xy) = √(r σy/σx × r σx/σy) = √(r²) = r
= co-efficient of correlation
28. EXAMPLE-
Find the line of regression of y on x for the data given below:
X: 1.53 1.78 2.60 2.95 3.42
Y: 33.50 36.30 40 45.80 53.50
29. Solution:
x       y       x²        xy
1.53    33.50   2.3409    51.255
1.78    36.30   3.1684    64.614
2.60    40.00   6.76      104
2.95    45.80   8.7025    135.11
3.42    53.50   11.6964   182.97
Σx = 12.28, Σy = 209.1, Σx² = 32.67, Σxy = 537.95
30. Here n = 5,
b_yx = (nΣxy - Σx·Σy) / (nΣx² - (Σx)²)
     = (5 × 537.95 - 12.28 × 209.1) / (5 × 32.67 - (12.28)²) = 9.726
With x̄ = 12.28/5 = 2.456 and ȳ = 209.1/5 = 41.82, the line of
regression of y on x is y - ȳ = b_yx(x - x̄), i.e.
y = 17.932 + 9.726x
which is the required line of regression of y on x.
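As a numerical check (Python, not part of the slides; the last x value is taken as 3.42, consistent with the tabulated squares and products):

```python
x = [1.53, 1.78, 2.60, 2.95, 3.42]
y = [33.50, 36.30, 40.00, 45.80, 53.50]
n = len(x)

sx, sy = sum(x), sum(y)                  # 12.28, 209.1
sxy = sum(p * q for p, q in zip(x, y))   # 537.95
sx2 = sum(p * p for p in x)              # 32.67

slope = (n * sxy - sx * sy) / (n * sx2 - sx ** 2)   # b_yx, about 9.726
intercept = sy / n - slope * sx / n                 # about 17.932
print(round(slope, 3), round(intercept, 3))
```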
31. Question:
For 10 observations on price (x) and
supply (y), the following data were
obtained:
Σx = 130, Σy = 220, Σx² = 2288, Σy² = 5506, Σxy = 3467
Obtain the two lines of regression
and estimate the supply when the price
is 16 units.
32. Solution:
Here n = 10, so x̄ = Σx/n = 13 and ȳ = Σy/n = 22.
Regression coefficient of y on x:
b_yx = (nΣxy - Σx·Σy) / (nΣx² - (Σx)²)
     = (10 × 3467 - 130 × 220) / (10 × 2288 - (130)²) = 6070/5980 = 1.015
Regression line of y on x is y - ȳ = b_yx(x - x̄), i.e.
y = 1.015x + 8.805
Similarly, b_xy = (nΣxy - Σx·Σy) / (nΣy² - (Σy)²) = 6070/6660 = 0.911,
so the regression line of x on y is
x = 0.911y - 7.05
33. Since we are to estimate the supply (y) for a given
price (x), we use the regression line of y
on x here.
When x = 16 units,
y = 1.015(16) + 8.805
= 25.045
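The whole of slides 31-33 fits in a few lines (Python, not part of the slides):

```python
n = 10
sx, sy = 130, 220
sx2, sy2, sxy = 2288, 5506, 3467

byx = (n * sxy - sx * sy) / (n * sx2 - sx ** 2)   # 6070/5980, about 1.015
xbar, ybar = sx / n, sy / n                       # 13, 22
supply_at_16 = ybar + byx * (16 - xbar)           # about 25.045
print(round(byx, 3), round(supply_at_16, 3))
```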
34. Ques:- From the following data, find the most likely
value of y when x=24:
Mean (x)=18.1, mean (y)=985.8
S.D (x)=2, S.D (y)=36.4,
r=0.58
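The slide leaves this as an exercise. One solution sketch (Python, not in the slides) uses b_yx = r·σy/σx and the regression line of y on x:

```python
xbar, ybar = 18.1, 985.8
sd_x, sd_y = 2.0, 36.4
r = 0.58

byx = r * sd_y / sd_x               # 10.556
y_at_24 = ybar + byx * (24 - xbar)  # predicted y when x = 24
print(round(y_at_24, 2))            # about 1048.08
```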
35. Ex. In a partially destroyed laboratory
record of an analysis of correlation data, the
following results only are legible:
Variance of x = 9
Regression equations: 8x - 10y + 66 = 0 and 40x - 18y = 214.
What were (a) the mean values of x and y,
(b) the standard deviations of x and y, and the
coefficient of correlation between x and y?
36. (i) Since both the lines of regression pass through the point (x̄, ȳ),
8x̄ - 10ȳ + 66 = 0
40x̄ - 18ȳ - 214 = 0
Solving these equations gives x̄ = 13 and ȳ = 17.
(ii) Variance of x: σx² = 9, so σx = 3.
The equations of the lines of regression can be written as
y = 0.8x + 6.6 and x = 0.45y + 5.35
so that b_yx = 0.8 and b_xy = 0.45.
Hence r² = b_yx · b_xy = 0.8 × 0.45 = 0.36, and r = 0.6
(taking the positive root, since both regression coefficients are positive).
Finally, from b_yx = r σy/σx,
σy = b_yx σx / r = (0.8 × 3)/0.6 = 4.
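The same steps in code (Python, not part of the slides):

```python
# the two lines pass through (xbar, ybar): solve
#   8x - 10y = -66  and  40x - 18y = 214
# eliminate x: 5*(first) gives 40x - 50y = -330, so 32y = 544
ybar = 544 / 32                  # 17
xbar = (10 * ybar - 66) / 8      # 13

byx = 8 / 10      # from y = 0.8x + 6.6    (regression of y on x)
bxy = 18 / 40     # from x = 0.45y + 5.35  (regression of x on y)
r = (byx * bxy) ** 0.5           # +sqrt, since both coefficients are positive
sigma_x = 9 ** 0.5               # variance of x = 9
sigma_y = byx * sigma_x / r      # from byx = r * sigma_y / sigma_x
print(xbar, ybar, round(r, 2), round(sigma_y, 2))   # 13.0 17.0 0.6 4.0
```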
37. Ques. : If the regression co-efficients are 0.8 and
0.2, what would be the value of the co-efficient of
correlation?
Solution: r = √(b_yx · b_xy) = √(0.8 × 0.2) = √0.16 = 0.4.
38. Ques.: The equations of the two lines of regression
obtained in a correlation analysis of 60 observations
are 5x = 6y + 24 and 1000y = 768x - 3608.
What is the co-efficient of correlation?
What are the mean values of x and y?
What is the ratio of the variances of x and y?
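The slide poses this as an exercise. A solution sketch (Python, not part of the slides) must first decide which equation is the line of y on x; taking 1000y = 768x - 3608 as y on x is the only choice that keeps r² ≤ 1:

```python
byx = 768 / 1000   # from y = 0.768x - 3.608  (y on x)
bxy = 6 / 5        # from x = 1.2y + 4.8      (x on y)
# the opposite assignment would give r^2 = (5/6)*(1000/768) > 1: impossible

r = (byx * bxy) ** 0.5          # sqrt(0.9216) = 0.96

# means: intersection of the two lines
# y = byx*(bxy*y + 4.8) - 3.608  =>  y*(1 - byx*bxy) = byx*4.8 - 3.608
ybar = (byx * 4.8 - 3.608) / (1 - byx * bxy)   # 1
xbar = bxy * ybar + 4.8                        # 6

# byx / bxy = sigma_y^2 / sigma_x^2
var_ratio = byx / bxy   # 0.64, i.e. sigma_x^2 : sigma_y^2 = 25 : 16
print(round(r, 2), round(xbar, 2), round(ybar, 2), round(var_ratio, 2))
```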