This presentation includes an introduction to statistics, introduction to sampling methods, collection of data, classification and tabulation, frequency distribution, graphs and measures of central tendency.
This document provides an introduction to statistics, including definitions, types, data measurement, and important terms. It defines statistics as the collection, analysis, interpretation, and presentation of numerical data. Statistics can be descriptive, dealing with conclusions about a particular group, or inferential, using a sample to make inferences about a larger population. There are four levels of data measurement - nominal, ordinal, interval, and ratio. Important statistical terms defined include population, sample, parameter, and statistic.
Introduction to Statistics - Basic concepts
- How to be a good doctor - A step in Health promotion
- By Ibrahim A. Abdelhaleem - Zagazig Medical Research Society (ZMRS)
This document discusses statistics and their uses in various fields such as business, health, learning, research, social sciences, and natural resources. It provides examples of how statistics are used in starting businesses, manufacturing, marketing, and engineering. Statistics help decision-makers reduce ambiguity and assess risks. They are used to interpret data and make informed decisions. However, statistics also have limitations as they only show averages and may not apply to individuals.
Introduction to Statistics (ppt) - Rahul Dhaker
This document provides an introduction to statistics and biostatistics. It discusses key concepts including:
- The definitions and origins of statistics and biostatistics. Biostatistics applies statistical methods to biological and medical data.
- The four main scales of measurement: nominal, ordinal, interval, and ratio scales. Nominal scales classify data into categories while ratio scales allow for comparisons of magnitudes and ratios.
- Descriptive statistics which organize and summarize data through methods like frequency distributions, measures of central tendency, and graphs. Frequency distributions condense data into tables and charts. Measures of central tendency include the mean, median, and mode.
Measures of dispersion
Absolute measures and relative measures
Range and coefficient of range
Mean deviation and coefficient of mean deviation
Quartile deviation, interquartile range (IQR), and coefficient of quartile deviation
Standard deviation and coefficient of variation
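The last two items in the list above can be sketched in a few lines of Python. This is a minimal illustration using a hypothetical data set and the standard-library `statistics` module, not a reproduction of the presentation's own examples:

```python
import statistics

# Hypothetical sample data, for illustration only.
data = [12, 15, 11, 18, 14, 16, 13]

mean = statistics.mean(data)   # arithmetic mean
sd = statistics.stdev(data)    # sample standard deviation
cv = sd / mean * 100           # coefficient of variation, as a percent of the mean

print(f"mean = {mean:.2f}, sd = {sd:.2f}, CV = {cv:.1f}%")
```

The coefficient of variation is unit-free, which is what makes it a *relative* measure: it lets you compare spread between data sets measured in different units.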
The document discusses various measures of central tendency used in statistics. The three most common measures are the mean, median, and mode. The mean is the sum of all values divided by the number of values and is affected by outliers. The median is the middle value when data is arranged from lowest to highest. The mode is the most frequently occurring value in a data set. Each measure has advantages and disadvantages depending on the type of data distribution. The mean is the most reliable while the mode can be undefined. In symmetrical distributions, the mean, median and mode are equal, but the mean is higher than the median for positively skewed data and lower for negatively skewed data.
This document discusses various measures of central tendency including the mean, median, and mode. It provides definitions and formulas for calculating each measure. The mean is the average and is calculated by summing all values and dividing by the total number of data points. The median is the middle value when data is arranged in order. The mode is the value that occurs most frequently in the data set. Examples are given to demonstrate calculating each measure. The document also discusses advantages and limitations of each central tendency measure.
This document discusses measures of central tendency, including the mean, median, and mode. It provides examples of calculating each measure using sample data sets. The mean is the average value calculated by summing all values and dividing by the number of data points. The median is the middle value when data is ordered from lowest to highest. The mode is the most frequently occurring value. Examples are given to demonstrate calculating the mean, median, and mode from sets of numeric data.
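The calculations described in the summaries above reduce to a few lines of code. A minimal sketch with a hypothetical data set, using Python's standard `statistics` module:

```python
import statistics

# Hypothetical data set, for illustration.
scores = [4, 7, 7, 9, 10, 12, 7, 9, 4]

mean = sum(scores) / len(scores)     # sum of all values / number of values
median = statistics.median(scores)   # middle value of the sorted data
mode = statistics.mode(scores)       # most frequently occurring value

print(mean, median, mode)
```

Sorting the data (4, 4, 7, 7, 7, 9, 9, 10, 12) makes the median and mode easy to verify by eye, which mirrors how the hand calculations in these presentations proceed.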
This presentation covers statistics, its importance, its applications, branches of statistics, basic concepts used in statistics, data sampling, types of sampling, types of data, and collection of data.
Measures of dispersion fall into two broad types, absolute measures and relative measures, with several specific measures under each.
In this slide the discussed points are:
1. Dispersion & its types
2. Definition
3. Use
4. Merits
5. Demerits
6. Formula & math
7. Graph and pictures
8. Real-life applications.
Descriptive statistics are methods of describing the characteristics of a data set. They include calculating measures such as the average of the data, its spread, and the shape of its distribution.
This document provides an overview of measures of dispersion, including range, quartile deviation, mean deviation, standard deviation, and variance. It defines dispersion as a measure of how scattered data values are around a central value like the mean. Different measures of dispersion are described and formulas are provided. The standard deviation is identified as the most useful measure as it considers all data values and is not overly influenced by outliers. Examples are included to demonstrate calculating measures of dispersion.
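The measures named in this overview (range, quartile deviation, mean deviation) can be illustrated with a short, self-contained Python sketch. The data are hypothetical, and note that quartile conventions vary; this uses the "inclusive" method of the standard `statistics` module:

```python
import statistics

data = [3, 7, 8, 5, 12, 14, 21, 13, 18]  # hypothetical values

data_range = max(data) - min(data)

# Quartiles via the "inclusive" method; other conventions give slightly different values.
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1
quartile_deviation = iqr / 2  # semi-interquartile range

mean = statistics.mean(data)
mean_deviation = sum(abs(x - mean) for x in data) / len(data)

print(data_range, iqr, quartile_deviation, round(mean_deviation, 2))
```

Unlike the range, the quartile deviation ignores the extreme values entirely, which is why it is less influenced by outliers.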
This document discusses several definitions of economics provided by prominent economists over time. It begins by summarizing Adam Smith's definition from 1776 that viewed economics as the science of wealth. It then discusses Alfred Marshall's 1890 definition that considered economics the study of mankind in business. Next, it outlines Lionel Robbins' 1932 definition that defined economics as studying human behavior related to scarce means and alternative uses. Finally, it provides Paul Samuelson's modern definition from 1948 that viewed economics as concerning how society employs its resources. The document then briefly discusses the main divisions of economics as consumption, production, exchange, distribution, and public finance.
This document discusses the scope and uses of statistics across various fields such as planning, economics, business, industry, mathematics, science, psychology, education, war, banking, government, sociology, and more. It outlines functions of statistics like presenting facts, testing hypotheses, forecasting, policymaking, enlarging knowledge, measuring uncertainty, simplifying data, deriving valid inferences, and drawing rational conclusions. It also covers characteristics, advantages, and limitations of statistics.
This presentation gives a brief idea of:
- definition of frequency distribution
- types of frequency distribution
- types of charts used in the distribution
- a problem on creating types of distribution
- advantages and limitations of the distribution
This document discusses different types of statistics used in research. Descriptive statistics are used to organize and summarize data using tables, graphs, and measures. Inferential statistics allow inferences about populations based on samples through techniques like surveys and polls. The key difference is that descriptive statistics describe samples while inferential statistics allow conclusions about populations beyond the current data.
This document provides an introduction to statistics. It discusses why statistics is important and required for many programs. Reasons include the prevalence of numerical data in daily life, the use of statistical techniques to make decisions that affect people, and the need to understand how data is used to make informed decisions. The document also defines key statistical concepts such as population, parameter, sample, statistic, descriptive statistics, inferential statistics, variables, and different types of variables.
This document discusses sampling and sampling distributions. It begins by explaining why sampling is preferable to a census in terms of time, cost and practicality. It then defines the sampling frame as the listing of items that make up the population. Different types of samples are described, including probability and non-probability samples. Probability samples include simple random, systematic, stratified, and cluster samples. Key aspects of each type are defined. The document also discusses sampling distributions and how the distribution of sample statistics such as means and proportions can be approximated as normal even if the population is not normal, due to the central limit theorem. It provides examples of how to calculate probabilities and intervals for sampling distributions.
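The central limit theorem mentioned above can be demonstrated with a short simulation. This is an illustrative sketch (hypothetical population, standard-library only), not taken from the document itself: the population is deliberately skewed, yet the means of repeated samples still cluster tightly around the population mean:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

# Population: a heavily skewed (exponential) distribution -- clearly not normal.
population = [random.expovariate(1.0) for _ in range(100_000)]

# Draw many samples of size 30 and record each sample mean.
sample_means = [
    statistics.mean(random.sample(population, 30))
    for _ in range(2_000)
]

# By the central limit theorem, the sample means are approximately normally
# distributed around the population mean, despite the skewed population.
print(round(statistics.mean(population), 3), round(statistics.mean(sample_means), 3))
```

Plotting a histogram of `sample_means` would show the familiar bell shape; the spread of the sample means shrinks as the per-sample size grows, in proportion to one over the square root of the sample size.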
This document provides information about various statistical measures of central tendency including the median, mode, and quartiles. It defines each measure and provides examples of how to calculate them from both grouped and ungrouped data sets. Formulas are given for calculating the median, quartiles, deciles, and percentiles for grouped data. The mode is defined as the value that occurs most frequently in a data set, and a formula is provided for calculating it from grouped frequency distributions.
This document provides information about various measures of central tendency including arithmetic mean, median, mode, and quartiles. It defines each measure and provides formulas and examples for calculating them for different types of data series, including individual, discrete, frequency distribution, and cumulative frequency series. Formulas are given for calculating the arithmetic mean, median, quartiles, and mode of a data set, along with examples worked out step-by-step. Advantages and disadvantages of each measure are also discussed.
This document introduces the concept of data classification and levels of measurement in statistics. It explains that data can be either qualitative or quantitative. Qualitative data consists of attributes and labels while quantitative data involves numerical measurements. The document also outlines the four levels of measurement - nominal, ordinal, interval, and ratio - from lowest to highest. Each level allows for different types of statistical calculations, with the ratio level permitting the most complex calculations like ratios of two values.
The geometric mean is a type of average that indicates the central tendency of a set of numbers using their product, as opposed to the arithmetic mean which uses their sum, and it is calculated by taking the nth root of the product of the numbers. The geometric mean is more appropriate than the arithmetic mean for describing proportional growth and ratios, and it has various applications in fields like optics, signal processing, geometry, and finance.
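The definition above translates directly into code. A minimal sketch with hypothetical growth factors, showing both the nth-root-of-product form and the numerically safer log form:

```python
import math

# Hypothetical annual growth factors (e.g. +10%, +50%, -20%).
factors = [1.10, 1.50, 0.80]

# Geometric mean: the nth root of the product of the n values.
geo_mean = math.prod(factors) ** (1 / len(factors))

# Equivalent form (safer for long lists, avoids overflow): exp of the mean log.
geo_mean_log = math.exp(sum(math.log(x) for x in factors) / len(factors))

print(round(geo_mean, 4))
```

This is why the geometric mean suits proportional growth: compounding at `geo_mean` for three years gives exactly the same final value as the three factors applied in sequence.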
This presentation is about basic statistics related to types of data (qualitative and quantitative) and their examples in everyday life. By Dr. Farhana Shaheen.
This document summarizes key concepts from an introduction to statistics textbook. It covers types of data (quantitative, qualitative, levels of measurement), sampling (population, sample, randomization), experimental design (observational studies, experiments, controlling variables), and potential misuses of statistics (bad samples, misleading graphs, distorted percentages). The goal is to illustrate how common sense is needed to properly interpret data and statistics.
This document provides information about statistics including its definition, origins, uses in different fields, and key statistical concepts. It defines statistics as the mathematical science pertaining to the collection, analysis, interpretation, and presentation of data. Some key points:
- Statistics originated from needs to base policy on demographic and economic data and has broadened to include collecting and analyzing data in general.
- It is widely used today in government, business, and natural and social sciences to make accurate inferences from data and decisions in uncertainty.
- The document also defines and provides examples of important statistical concepts including the mean, mode, and median.
Descriptive statistics are used to describe and summarize the basic features of data through measures of central tendency like the mean, median, and mode, and measures of variability like range, variance and standard deviation. The mean is the average value and is best for continuous, non-skewed data. The median is less affected by outliers and is best for skewed or ordinal data. The mode is the most frequent value and is used for categorical data. Measures of variability describe how spread out the data is, with higher values indicating more dispersion.
1. The document discusses various measures of dispersion used to quantify how spread out or variable a data set is. It describes measures such as range, mean deviation, variance, and standard deviation.
2. It also discusses relative measures of dispersion like the coefficient of variation, which allows comparison of variability between data sets with different units or averages. The coefficient of variation expresses variability as a percentage of the mean.
3. Additional concepts covered include skewness, which refers to the asymmetry of a distribution, and kurtosis, which measures the peakedness of a distribution compared to a normal distribution. Positive and negative skewness and leptokurtic, mesokurtic, and platykurtic kurtosis are described.
This document provides an overview of probability concepts including:
- Probability is a numerical measure of the likelihood of an event occurring, ranging from 0 (impossible) to 1 (certain).
- An experiment generates outcomes that make up the sample space. Events are collections of outcomes.
- Simple events have a defined probability based on being equally likely. The probability of an event is the sum of probabilities of the simple events it contains.
- Rules like the multiplication rule for independent events and additive rule for unions allow calculating probabilities of composite events.
- Complement and conditional probabilities relate the probabilities of events. Independent events do not influence each other's probabilities.
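The rules listed above can be checked mechanically on a small sample space. This sketch (a hypothetical two-dice experiment, exact arithmetic via `fractions`) verifies the multiplication, additive, and complement rules by counting equally likely outcomes:

```python
from fractions import Fraction

# Sample space: two fair six-sided dice -- 36 equally likely outcomes.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def p(event):
    """Probability of an event = (# favorable simple events) / (# outcomes)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_first_six = p(lambda o: o[0] == 6)    # 1/6
p_second_six = p(lambda o: o[1] == 6)   # 1/6
p_both_six = p(lambda o: o == (6, 6))   # 1/36

# Multiplication rule for independent events: P(A and B) = P(A) * P(B).
assert p_both_six == p_first_six * p_second_six

# Additive rule for unions: P(A or B) = P(A) + P(B) - P(A and B).
p_either_six = p(lambda o: o[0] == 6 or o[1] == 6)
assert p_either_six == p_first_six + p_second_six - p_both_six

# Complement rule: P(no six) = 1 - P(at least one six).
assert p(lambda o: 6 not in o) == 1 - p_either_six

print(p_either_six)
```

Because the dice are independent, the multiplication rule holds exactly here; the same counting check would fail for dependent events, which is precisely what conditional probability is introduced to handle.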
Statistics can be defined in both a singular and plural sense. In the singular sense, it refers to statistical methods for collecting, analyzing, and interpreting numerical data. In the plural sense, it refers to the actual numerical facts or data collected. Statistics involves systematically collecting, organizing, presenting, analyzing, and interpreting numerical data to describe features and characteristics. It allows for comparing facts, establishing relationships, and facilitating policymaking and decision making. However, statistics only studies aggregates and averages, not individual cases, and results are true only on average. It also requires properly contextualizing and referencing results.
The document provides an introduction to statistics, discussing the meaning, history, and applications of statistics. It defines key statistical concepts such as population and sample, descriptive and inferential statistics. It also discusses the different types of variables and levels of measurement. The document traces the history of statistics from ancient times to the present day, highlighting important contributors to the field. It provides examples of how statistics is used in different domains like education, business, research, and government.
This document discusses statistical analysis techniques including measures of central tendency, variance, standard deviation, t-tests, and levels of significance. It provides an example of using these techniques to analyze plant height data from a fertilizer experiment and determine if differences in heights between treated and untreated plants are statistically significant. The document introduces the concepts and calculations involved in describing and analyzing quantitative data using common statistical methods.
Chapter 1: Introduction to Statistics for Engineers - abfisho
This document provides an introduction to statistics. It defines statistics as the science of collecting, analyzing, and presenting data systematically. Statistics has two main branches - descriptive statistics, which describes data through measures like averages without generalizing beyond the sample, and inferential statistics, which makes generalizations from samples to populations. The document lists important terms in statistics like data, variables, population, sample, and sample size. It also outlines the main steps in a statistical investigation, including collecting and organizing data. Statistics has many applications in fields like business, engineering, health, and economics.
Statistics involves collecting, organizing, analyzing, and interpreting data. Descriptive statistics describe characteristics of a data set through measures like central tendency and variability. Inferential statistics draw conclusions about a population based on a sample. Key terms include population, sample, parameter, statistic, data types, levels of measurement, and sampling techniques like simple random sampling. Common data gathering methods are interviews, questionnaires, and registration records. Data can be presented textually, in tables, or graphically through charts, graphs, and maps.
The document discusses the approval of the drug AZT to treat AIDS in 1987. It describes how early clinical trials showed AZT significantly reduced deaths among AIDS patients compared to a control group. However, statistical analysis was needed to determine if the results were due to the drug or chance. Statistical tests found the probability the results were due to chance was less than 1 in 1000. Armed with this evidence, the FDA approved AZT after only 21 months of testing.
Statistics is the collection and analysis of data. There are two main branches: descriptive statistics, which organizes and summarizes data, and inferential statistics, which uses descriptive statistics to make predictions. Statistics starts with a question and uses data to provide information to help make decisions. It is widely used in business, health, education, research, social sciences, and natural resources.
This document provides a teaching guide for a Statistics and Probability course for senior high school students. It begins with an introduction that discusses the importance of statistics and data analysis. It then outlines the structure and goals of the teaching guide, which includes sections on introduction, instruction, practice, enrichment, and evaluation. The guide is meant to help teachers facilitate student understanding, mastery of concepts, and a sense of ownership over their learning. It also discusses aligning the guide with DepEd and CHED standards to prepare students for college. The preface provides additional context on statistics as a discipline and its growing importance.
Statistics is the science of dealing with numbers.
It is used for collection, summarization, presentation and analysis of data.
Statistics provides a way of organizing data to get information on a wider and more formal (objective) basis than relying on personal experience (subjective).
This document provides an overview of key terminology and concepts in statistics. It discusses topics like populations and samples, variables and their measurement, levels of measurement, research methods like correlational analysis and experiments, and mathematical notation used in statistics. The goal is to introduce readers to what statistics is about at a high level and prepare them for further study of important statistical concepts.
This document provides an overview of key concepts in statistics including:
- Descriptive statistics such as frequency distributions which organize and summarize data
- Inferential statistics which make estimates or predictions about populations based on samples
- Types of variables including quantitative, qualitative, discrete and continuous
- Levels of measurement including nominal, ordinal, interval and ratio
- Common measures of central tendency (mean, median, mode) and dispersion (range, standard deviation)
This document provides an introduction to key concepts in statistics, including variables, populations, samples, types of variables, measurement scales, correlational studies, experiments, other study types, data, descriptive statistics, and inferential statistics. It defines important terms and outlines the goals and characteristics of different statistical methods and study designs.
The document provides an introduction to statistics, covering its origin and development, definitions, types of statistics (descriptive and inferential), data collection methods, organization and presentation of data, and variables. It discusses how statistics has evolved from its early use by governments to keep records to its current role across various fields such as business, research, and the natural and social sciences. Key aspects of statistics like data collection, organization, analysis, and interpretation are also introduced.
This document summarizes key concepts from Chapter 1 of an introductory statistics textbook. It defines statistics, distinguishes between populations and samples, parameters and statistics, and descriptive and inferential statistics. It also classifies data types and levels of measurement, and discusses experimental design concepts like data collection methods and sampling techniques.
This document provides an introduction to descriptive statistics and measures of central tendency, including the mean, median, and mode. It discusses how the mean can be impacted by outliers, while the median is not. The standard deviation and variance are introduced as measures of dispersion that quantify how much values vary from the mean or from each other. Finally, the document discusses different ways of organizing and graphing data, including histograms, pie charts, line graphs, and scatter plots.
This document provides an introduction to key concepts in statistics. It discusses various statistical measures such as measures of central tendency (mean, median, mode), measures of dispersion (range, standard deviation), correlation, and different types of correlation (simple, partial, multiple). It also outlines common statistical methods like scatter diagrams, Karl Pearson's method, and rank correlation method. The role of computer technology in statistics is mentioned.
This document provides an introduction to statistics and statistical concepts. It covers topics such as course objectives, purposes of statistics, population and sampling, types of data and variables, levels of measurement, and nominal level of measurement. The key points are that statistics can describe, summarize, predict and identify relationships in data, and that there are different levels of variables from nominal to ratio scales.
This document provides an introduction to statistics, including definitions, scope, and measures of central tendency. It defines statistics as the science of collecting, organizing, analyzing, interpreting, and presenting data. Statistics has applications in various fields including social sciences, planning, mathematics, economics, and business management. Common measures of central tendency discussed are the arithmetic mean, geometric mean, harmonic mean, median, and mode. Formulas for calculating the arithmetic mean using individual data, frequency distributions, and class intervals are provided.
The document discusses various concepts related to research methodology and sampling. It defines key terms like hypothesis, test marketing, sample and population. It explains different sampling methods like probability sampling which includes simple random sampling, systematic random sampling and stratified sampling. It also discusses cluster sampling under probability sampling. The document also covers non-probability sampling methods like convenience sampling, purposive sampling and quota sampling. It highlights the differences between probability and non-probability sampling. Finally, it outlines the steps involved in sampling design.
This document discusses key concepts in statistics including:
- Descriptive statistics such as measures of central tendency (mean, median, mode), measures of dispersion (range, interquartile range, standard deviation, variance), and measures of shape.
- The difference between parameters and statistics, and how statistics are used to estimate population parameters.
- Types of data including primary data, secondary data, and how probability and non-probability samples are collected.
- Key aspects of statistical studies such as populations, samples, and how statistics can be used to make inferences about populations.
concept of sample and sampling, sampling process and problems, types of samples: probability and non probability sampling, determination and sample size, sampling and non sampling errors
1. Sampling is the process of selecting a subset of items from a population to gather information about the entire population. It involves selecting a sample using probability or non-probability methods.
2. Probability sampling methods like simple random sampling, systematic sampling, and stratified sampling ensure each item has a known, non-zero chance of being selected. Non-probability methods like convenience sampling and purposive sampling rely on researcher judgment.
3. The central limit theorem states that as sample size increases, the sample mean will approach a normal distribution, allowing inferences about the population mean from a sample. Sampling error is reduced with larger sample sizes.
1. Sampling is the process of selecting a subset of items from a population to make inferences about the entire population. It is often used instead of a complete census or enumeration due to the time, cost, and resources required for a census.
2. There are two main types of sampling: probability sampling, where every item has a known, non-zero chance of being selected, and non-probability sampling, where items are selected in a non-random way based on the researcher's judgment.
3. Common probability sampling methods include simple random sampling, systematic sampling, stratified sampling, and cluster sampling. Common non-probability methods include convenience sampling and purposive sampling. The appropriate sampling method depends on the
This document discusses different sampling methods used in research. It begins by defining key terms like population, sample, sampling frame, and probability versus non-probability sampling. It then describes various probability sampling techniques in detail, including simple random sampling, systematic random sampling, stratified random sampling, and cluster random sampling. The document explains the steps for implementing each method and provides examples. It also notes advantages and disadvantages of sampling methods.
The document defines key concepts related to sampling, including population, sample, sampling methods, and errors. It discusses different types of sampling methods like probability sampling (simple random sampling, stratified sampling, cluster sampling) and non-probability sampling (convenience sampling, judgement sampling). It also explains sampling frame, sampling frame error, random sampling error, and non-response error that can occur in sampling. The document provides steps involved in conducting a sample survey from defining the target population to selecting the sampling technique and sample size.
This document discusses different sampling methods used in research. It defines key terms like population, sample, sampling unit and frame. It explains the difference between probability and non-probability sampling. Probability methods discussed include simple random sampling, systematic sampling and cluster sampling. Advantages of probability sampling are an absence of bias and minimal sampling errors. Non-probability methods are useful when the population is homogeneous or operational considerations are important. The document provides details on how to implement simple random and systematic random sampling techniques.
This document provides an introduction to biostatistics. It defines biostatistics as the application of statistical tools and concepts to data from biological sciences and medicine. The two main branches of statistics are described as descriptive statistics, which involves organizing and summarizing sample data, and inferential statistics, which involves generalizing from samples to populations. Several key statistical concepts are also defined, including populations, samples, variables, data types, levels of measurement, and common sampling methods. The objectives are to demonstrate knowledge of these fundamental statistical terms and concepts.
1. The document provides an overview of obtaining data through various sampling techniques. It discusses descriptive and inferential statistics, and defines key terms like population, sample, variables and levels of measurement.
2. It then covers different methods for collecting and obtaining data, whether through primary research like surveys or experiments, or secondary research of existing data.
3. The document outlines different sampling techniques, including non-probability methods like convenience sampling and purposive sampling, as well as probability methods like simple random sampling, stratified sampling and cluster sampling.
This document provides an overview of sampling theory and methods. It defines key terms like population, sample, parameter, statistic, and discusses reasons for sampling such as cost, time, and other limitations that prevent examining an entire population. It describes the basic concepts of probability and non-probability sampling. Specific probability sampling methods covered include simple random sampling and systematic sampling. The advantages and disadvantages of these methods are also discussed.
1) Sampling involves collecting data from a subset of individuals (the sample) rather than from the entire population.
2) There are two main types of sampling: probability sampling, where each individual has a known chance of being selected, and non-probability sampling, where the probability of selection is unknown.
3) Common probability sampling methods include simple random sampling, stratified sampling, systematic sampling, and cluster sampling. Non-probability methods include quota sampling and snowball sampling.
This document discusses different sampling procedures used in research, including their advantages and steps. It covers:
1) Simple random sampling, where every subset has an equal chance of selection. Steps include assigning numbers and selecting samples randomly.
2) Systematic random sampling, which selects every kth element with a random start. This can help ensure representation across a population.
3) Stratified random sampling, which divides a population into homogeneous strata and draws proportional samples from each, ensuring all groups are represented.
4) Cluster sampling, where naturally occurring clusters are randomly selected and all or some units within are sampled, reducing costs but decreasing precision.
The document provides information on survey design and quantitative data analysis methods. It discusses different sampling methods including probability sampling techniques like simple random sampling, systematic random sampling, and stratified random sampling. It also covers non-probability sampling and factors to consider when determining sample size. The document then outlines steps for designing a survey including the components of a survey method plan and instrumentation. It concludes with an overview of quantitative data analysis methods for surveys, specifically descriptive statistics like frequencies, measures of central tendency, and measures of dispersion.
This document provides an overview of sampling methods and sample size estimation. It begins with definitions of key concepts such as population, sample, parameter, and statistic. It then discusses the history of sampling and why it is used. The document outlines different sampling methods like simple random sampling, stratified sampling, and cluster sampling. It also covers sampling errors, non-sampling errors, and ways to improve response rates. Finally, it discusses how to estimate appropriate sample sizes based on desired confidence levels and margins of error.
The document discusses sampling methods and statistical inference. It defines key terms like population, sample, sampling frame. It describes different sampling techniques including random sampling methods like simple random sampling and systematic sampling. It also covers non-random sampling techniques like quota sampling and convenience sampling. The minimum sample size is calculated using a standard formula. Statistical inference is defined as using a sample to make conclusions about the larger population. The key difference between a sample and population is also highlighted.
2. Course Outline
Definitions, Scope and Limitations
Introduction to sampling methods
Collection of data, Classification and Tabulation
Frequency Distribution
Diagrammatic and Graphical Representation
Measures of Central Tendency
4. Introduction
In the modern world of computers and information technology, the importance of statistics is well recognized by all disciplines.
Statistics originated as a science of statehood and gradually found applications in
◦ Agriculture,
◦ Economics,
◦ Commerce,
◦ Biology,
◦ Medicine,
◦ Industry,
◦ Planning, education and so on.
Today there is hardly any walk of life where statistics cannot be applied.
6. Origin and Growth of Statistics
The words ‘Statistics’ and ‘Statistical’ are derived from the Latin word Status, meaning a political state.
Statistics is concerned with scientific methods for
◦ collecting,
◦ organising,
◦ summarising,
◦ presenting and analysing data,
◦ as well as deriving valid conclusions and making reasonable decisions on the basis of this analysis.
7. Meaning of Statistics
The word ‘statistic’ is used to refer to
◦ Numerical facts, such as the number of people living in a particular area.
◦ The study of ways of collecting, analysing and interpreting those facts.
8. Definition by A.L. Bowley
“Statistics are numerical statements of facts in any department of enquiry placed in relation to each other.”
“Statistics may be rightly called the science of averages.”
9. Definition by Croxton and Cowden
“Statistics may be defined as the science of collection, presentation, analysis and interpretation of numerical data.”
This definition points out four stages:
◦ 1. Collection of data
◦ 2. Presentation of data
◦ 3. Analysis of data
◦ 4. Interpretation of data
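The four stages above can be sketched with a small, hypothetical data set (a sketch only; the marks are invented for illustration):

```python
# A minimal sketch of the four stages, using hypothetical exam marks.
marks = [62, 71, 55, 80, 67, 73, 58, 90]   # 1. Collection of data

marks.sort()                               # 2. Presentation (here, an ordered list)
print("Ordered data:", marks)

mean = sum(marks) / len(marks)             # 3. Analysis (a simple average)
print("Mean mark:", mean)                  # 69.5 for these marks

# 4. Interpretation: a statement made on the basis of the analysis.
print("The average mark is", "above" if mean > 60 else "below", "60.")
```

In practice each stage is far richer (questionnaires, tables and charts, formal tests), but the flow from raw data to a conclusion is the same.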
11. Scope of Statistics
◦ 1. Statistics and Industry
◦ 2. Statistics and Commerce
◦ 3. Statistics and Agriculture
◦ 4. Statistics and Economics
◦ 5. Statistics and Education
◦ 6. Statistics and Planning
◦ 7. Statistics and Medicine
◦ 8. Statistics and Modern applications
11
12. Limitations of Statistics
◦ Statistics is not suited to the study of qualitative
phenomena
◦ Statistics does not study individuals
◦ Statistical laws are not exact
◦ Statistical tables may be misused
◦ Statistics is only one of the methods of studying a
problem
12
28. Introduction
Sampling is very often used in our daily life.
For example:
While purchasing food grains from a shop we usually examine a
handful from the bag to assess the quality of the commodity.
A doctor examines a few drops of blood as sample and draws
conclusion about the blood constitution of the whole body.
28
29. Population
In a statistical enquiry, all the items, which fall
within the purview of enquiry, are known as
Population or Universe.
Examples: Total number of students studying in a
school or college, total number of books in a
library, etc.
29
31. Population
Sometimes it is possible and practical to examine
every person or item in the population we wish to
describe. We call this a Complete enumeration,
or census.
We use sampling when it is not possible to
measure every item in the population.
31
32. Sampling
Statisticians use the word sample to describe a
portion chosen from the population.
A finite subset of statistical individuals defined in a
population is called a sample.
The number of units in a sample is called the sample
size.
32
33. Reasons for selecting a sample
Complete enumerations are practically impossible when
the population is infinite.
When the results are required in a short time.
When the area of survey is wide.
When resources for survey are limited particularly in
respect of money and trained persons.
When the item or unit is destroyed under investigation.
33
34. Sampling
Representing Size
Mathematically, a capital N is used to represent the size of a population.
A lowercase n is used to represent the size of a sample.
Let’s say you want to find the average GPA of a student at your university.
Your university has 20,000 students, and you select 100 students and ask
them their GPAs.
What are N and n in this example?
N, the size of your population, is 20,000
n, the size of your sample, is 100
34
35. Principles of Sampling
Principle of statistical regularity
Principle of Inertia of large numbers
Principle of Validity
Principle of Optimization
35
36. Methods of selection of samples
Simple Random Sampling
Every member of the population (N) has an equal
chance of being selected for your sample (n).
This is arguably the best sampling method, as your
sample is almost guaranteed to be representative of
your population. However, it is rarely used in
practice because it is often impractical.
36
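As a minimal sketch (using a hypothetical frame of 20,000 student IDs, matching the earlier GPA example), drawing a simple random sample is what Python's `random.sample` does:

```python
import random

# hypothetical sampling frame: 20,000 student IDs (the population, N = 20000)
population = list(range(1, 20001))

# draw n = 100 without replacement; every member has an equal chance
sample = random.sample(population, 100)
```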
38. Methods of selection of samples
Stratified Random Sampling
With this method, the population (N) is split into
non-overlapping groups ("strata"), then simple random
sampling is done on each group to form a sample (n).
One example would be splitting a population
of students into men and women, then sampling from
each of the two groups. This may allow us to collect
the same amount of information as simple random
sampling while using fewer people.
38
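The men/women example above can be sketched in Python; the stratum sizes and per-stratum sample sizes here are made-up illustrations:

```python
import random

# hypothetical strata: non-overlapping groups of student IDs
strata = {
    "men": list(range(0, 12000)),
    "women": list(range(12000, 20000)),
}

# simple random sampling within each stratum, then combine
sample = []
for group in strata.values():
    sample.extend(random.sample(group, 50))
```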
40. Methods of selection of samples
Systematic Random Sampling
In this method, every kth individual from the
population (N) is placed in the sample (n).
For example, if you add every 7th individual to
walk out of a supermarket to your sample, you are
performing systematic sampling.
40
42. Lottery Method
This is the most popular and simplest method.
In this method all the items of the population are
numbered on separate slips of paper of the same size,
shape and colour.
They are folded and mixed up in a container.
The required number of slips is selected at random for
the desired sample size.
If the universe is infinite, this method is inapplicable.
42
44. Table of Random Numbers
A random number table is so constructed that all
digits 0 to 9 appear independently of each other with
equal frequency.
If we have to select a sample from a population of size
N = 100, then the digits can be combined three by
three to give numbers from 001 to 100.
When the size of the population is less than a thousand,
three-digit numbers 000, 001, 002, …, 999 are assigned.
If any random number drawn is greater than the
population size N, then N can be subtracted from it.
44
47. Random Number selection using
calculators and computers
Random numbers can be
generated using a scientific
calculator or a computer.
Each press of the key gives a
new random number.
The way of selecting the sample
is similar to that of using a
random number table.
47
48. Merits of using random numbers
Personal bias is eliminated as the selection depends solely on
chance.
A random sample is in general a representative sample for a
homogeneous population.
There is no need for thorough knowledge of the units of
the population.
The accuracy of a sample can be tested by examining another
sample from the same universe when the universe is
unknown.
This method is also used in other methods of sampling.
48
49. Limitations of using random numbers
Preparing lots or using random number tables is tedious
when the population is large.
When there is a large difference between the units of the
population, a simple random sample may not be
representative.
The size of the sample required under this method is larger
than that required by stratified random sampling.
It is generally seen that the units of a simple random sample
lie apart geographically. The cost and time of collection of
data are more.
49
50. Stratified Random Sampling
There are two types of stratified sampling:
proportional and non-proportional.
In proportional sampling, equal and
proportionate representation is given to
subgroups.
The population size is denoted by N and the
sample size by n.
The sample fraction is constant for each
stratum; that is, n/N = c.
50
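Proportional allocation reduces to one line per stratum: each stratum of size N_h contributes n_h = n × N_h / N units, so the fraction n/N = c is the same everywhere. A sketch with hypothetical stratum sizes:

```python
# hypothetical stratum sizes
N_h = {"men": 12000, "women": 8000}
N = sum(N_h.values())   # population size, 20000
n = 100                 # desired total sample size

c = n / N               # constant sampling fraction, 0.005
# each stratum contributes size * c units
allocation = {name: round(size * c) for name, size in N_h.items()}
```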
52. Stratified Random Sampling
Merits
It is more representative.
It ensures greater accuracy.
It is easy to administer as the universe is sub-divided.
Greater geographical concentration reduces time and
expenses.
When the original population is badly skewed, this method
is appropriate.
For a non-homogeneous population, it may yield good
results.
52
53. Stratified Random Sampling
Limitations
Dividing the population into homogeneous strata
requires more money, time and statistical
experience, which makes it difficult.
Improper stratification leads to bias; if the
different strata overlap, the sample will not be
representative.
53
54. Systematic Sampling
This method is widely employed because of its
ease and convenience.
A frequently used method of sampling when a
complete list of the population is available is
systematic sampling.
It is also called Quasi-random sampling.
54
55. Systematic Sampling
Selection Procedure
The whole sample selection is based on just a random start.
The first unit is selected with the help of random numbers,
and the rest are selected automatically according to some
pre-designed pattern; this is known as systematic sampling.
With systematic random sampling, every kth element in the
frame is selected for the sample, with the starting point
among the first k elements determined at random.
55
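The selection procedure above can be sketched in a few lines; the frame and sample size here are made-up values for illustration:

```python
import random

def systematic_sample(frame, n):
    """Every kth element of the frame, after a random start among the first k."""
    k = len(frame) // n              # sampling interval
    start = random.randrange(k)      # random start among the first k elements
    return frame[start::k][:n]

frame = list(range(1000))            # hypothetical ordered sampling frame
sample = systematic_sample(frame, 50)  # k = 20
```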
57. Systematic Sampling
Merits
This method is simple and convenient.
It greatly reduces time and work.
If proper care is taken, the results will be accurate.
It can be used for infinite populations.
57
63. Collection of Data, Classification and Tabulation
Introduction
Everybody collects, interprets and uses information, much
of it in numerical or statistical form, in day-to-day life.
In everyday life, in business and in industry, certain
statistical information is necessary, and it is important
to know where to find it and how to collect it.
As employees of any firm, people want to compare their
salaries, working conditions, promotion opportunities
and so on.
63
64. Collection of Data, Classification and Tabulation
Nature of Data
Time Series Data
Spatial Data (place)
Spatio-temporal data (time & place)
64
65. Nature of Data
Time Series Data
It is a collection of a set of numerical values, collected
over a period of time.
65
66. Nature of Data
Spatial Data
If the data collected are connected with a place,
then they are termed spatial data.
66
67. Nature of Data
Spatio-Temporal Data
If the data collected are connected to the time as well as
the place, then they are known as spatio-temporal data.
67
70. Classification of Data
Objectives of Classification
It condenses the mass of data in an easily assimilable
form.
It eliminates unnecessary details.
It facilitates comparison and highlights the significant
aspect of data.
It enables one to get a mental picture of the information
and helps in drawing inferences.
It helps in the statistical treatment of the information
collected.
70
71. Types of Classification
Chronological Classification
In chronological classification the collected data are arranged
according to the order of time, expressed in years, months, weeks, etc.
71
73. Types of Classification
Geographical Classification
In this type of classification the data are classified according to
geographical region or place.
73
74. Types of Classification
Qualitative Classification
In this type of classification, data are classified on the basis
of some attribute or quality like sex, literacy, religion,
employment, etc. Such attributes cannot be measured
on a scale.
Examples:
◦ He is brown and black
◦ He has long hair
◦ He has lots of energy
◦ He is clever
74
75. Types of Classification
Quantitative Classification
Quantitative classification refers to the classification of
data according to some characteristics that can be
measured such as height, weight, etc.
Examples:
◦ He has 4 legs
◦ He has 2 brothers
◦ He weighs 25.5 kg
◦ He is 170 cm tall
75
77. Tabulation
Tabulation is the process of summarizing classified or
grouped data in the form of a table so that it is easily
understood and an investigator is quickly able to locate
the desired information.
77
78. Preparing a Table
An ideal table should consist of the following main parts:
◦ 1. Table number
◦ 2. Title of the table
◦ 3. Captions or column headings
◦ 4. Stubs or row designation
◦ 5. Body of the table
◦ 6. Footnotes
◦ 7. Sources of data
78
85. Data Collection Project
Prepare a questionnaire
Collect the data
Separate the data into qualitative and quantitative
types
Display data in graphs
Project Examples –
◦ 24 hour activities of our students
◦ Money spending habits of our students
85
89. Frequency Distribution
A frequency distribution is constructed for three main reasons:
1. To facilitate the analysis of data.
2. To estimate frequencies of the unknown population
distribution from the distribution of sample data and
3. To facilitate the computation of various statistical
measures
89
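Constructing a frequency distribution from raw data is a one-step grouping operation; a minimal sketch with made-up marks, grouped into class intervals of width 10 by the exclusive method:

```python
from collections import Counter

# hypothetical raw marks of 20 students
marks = [32, 45, 37, 50, 41, 32, 48, 37, 41, 45,
         50, 32, 37, 41, 45, 48, 32, 41, 37, 45]

# map each mark to the lower limit of its class (30-40, 40-50, ...)
width = 10
freq = Counter((m // width) * width for m in marks)

for lower in sorted(freq):
    print(f"{lower}-{lower + width}: {freq[lower]}")
```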
95. Nature of Class
Class Limit
◦ The class limits are the lowest and the highest values that can
be included in the class. For example, take the class 30-40.
The lowest value of the class is 30 and highest class is 40.
◦ In statistical calculations, lower class limit is denoted by L
and upper class limit by U.
95
97. Nature of Class
Class Interval
◦ The class interval may be defined as the size of each grouping
of data. For example, 50-75, 75-100, 100-125… are class
intervals.
97
98. Nature of Class
Width or size of the class interval
◦ The difference between the lower and upper class limits is called
the width or size of the class interval and is denoted by ‘C’.
98
99. Nature of Class
Range
◦ The difference between the largest and smallest values of the
observations is called the range.
◦ It is denoted by ‘R’:
R = Largest value – Smallest value
R = L – S
99
100. Nature of Class
Mid-value or Middle Point
◦ The central point of a class interval is called the mid
value or mid-point.
◦ It is found out by adding the upper and lower limits of a
class and dividing the sum by 2.
◦ Mid-Value = (L + U) / 2
◦ For example, if the class interval is 20-30 then the mid-
value is (20 + 30) / 2 = 25
100
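The class-interval quantities defined above reduce to one-line calculations; a small sketch using the 20-30 interval from the example and a made-up set of observations for the range:

```python
# class limits for the example interval 20-30 (L = lower, U = upper)
L, U = 20, 30
width = U - L              # width or size of the class interval, C = 10
mid_value = (L + U) / 2    # mid-value = (L + U) / 2 = 25.0

# range of a hypothetical set of observations
data = [12, 47, 31, 8, 26]
R = max(data) - min(data)  # R = largest value - smallest value = 47 - 8 = 39
```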
101. Nature of Class
Frequency
◦ The number of observations falling within a particular class
interval is called the frequency of that class.
101
102. Nature of Class
Number of class intervals
◦ The number of class intervals in a frequency distribution
is a matter of importance.
◦ The number of class intervals should not be too many.
◦ For an ideal frequency distribution, the number of class
intervals can vary from 5 to 15.
102
105. Types of Class Intervals
1.Exclusive Method
When the class intervals are so fixed that the upper limit of
one class is the lower limit of the next class; it is known as
the exclusive method of classification.
105
107. Types of Class Intervals
2.Inclusive Method
In this method, the overlapping of the class
intervals is avoided.
Both the lower and upper limits are included in
the class interval.
107
109. Types of Class Intervals
3. Open end classes
A class limit is missing either at the lower end of
the first class interval or at the upper end of the
last class interval or both are not specified.
109
119. Cumulative frequency table
A cumulative frequency distribution shows a running total of the
frequencies. It is constructed by adding the frequency of the first
class interval to that of the second, the result to that of the third,
and so on.
119
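The running total described above is exactly what `itertools.accumulate` computes; the class frequencies here are made-up values:

```python
from itertools import accumulate

# hypothetical class frequencies for intervals 0-10, 10-20, 20-30, 30-40
freq = [5, 8, 12, 7]

# running total: each entry is the sum of all frequencies up to that class
cumulative = list(accumulate(freq))
```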
123. Chapter 5: Graphs
What is a graph?
It is a diagram that exhibits a relationship, often functional,
between two sets of numbers as a set of points having
coordinates determined by the relationship.
123
125. Types of Graphs
Line Graph
A line graph is useful in displaying data or
information that changes continuously over time.
The points on a line graph are connected by a line.
Another name for a line graph is a line chart.
125
126. Types of Graphs
Bar Graph
A bar graph is a chart that uses either horizontal
or vertical bars to show comparisons among
categories.
126
127. Types of Graphs
Pie Chart
A pie chart (or a circle chart) is a circular
statistical graphic, which is divided into slices to
illustrate numerical proportion.
127
129. Creating Graph in Excel
Enter the following information in Excel
Worksheet to create a Line Graph
129
130. Creating Graph in Excel
Enter the following information in Excel
Worksheet to create a Bar Graph
130
131. Creating Graph in Excel
Enter the following information in Excel
Worksheet to create a Pie Chart
131
132. Chapter 6
132
Measures of Central Tendency
•In the study of a population with respect to a
characteristic in which we are interested, we may get a
large number of observations.
133. Measures of Central Tendency
133
•It is not possible to grasp any idea about the characteristic
when we look at all the observations.
•So it is better to get one number for one group.
134. Measures of Central Tendency
134
• That number must be a good representative one for all the
observations to give a clear picture of that characteristic.
• Such representative number can be a central value for all
these observations.
• This central value is called a measure of central tendency.
144. Median
The median is that value of the variate which divides the
group into two equal parts, one part comprising all values
greater than the median and the other all values less than it.
Arrange the given values in increasing or decreasing
order.
If the number of values is odd, the median is the middle
value.
If the number of values is even, the median is the mean of
the middle two values.
144
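The odd/even rule above translates directly into code (Python's standard `statistics.median` does the same; this sketch just makes the two cases explicit):

```python
def median(values):
    """Middle value of the sorted data, or the mean of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:          # odd count: the middle value
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2  # even count: mean of the middle two

print(median([7, 1, 5]))      # odd count  -> middle value, 5
print(median([7, 1, 5, 3]))   # even count -> mean of 3 and 5, 4.0
```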
150. Mode
The mode refers to the value in a distribution which
occurs most frequently. It is an actual value, which has the
highest concentration of items in and around it.
150
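A minimal sketch of the mode using `collections.Counter`:

```python
from collections import Counter

def mode(values):
    """The value that occurs most frequently.

    If several values tie for the highest count, one of them is returned.
    """
    return Counter(values).most_common(1)[0][0]

print(mode([2, 4, 4, 5, 4, 7, 2]))  # 4 occurs three times
```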