This document summarizes key concepts from Chapter 1 of an introductory statistics textbook. It defines statistics, distinguishes between populations and samples, parameters and statistics, and descriptive and inferential statistics. It also classifies data types and levels of measurement, and discusses experimental design concepts like data collection methods and sampling techniques.
These introductory statistics slides give you a basic understanding of statistics, the types of statistics, variables and their types, the levels of measurement, data collection techniques, and types of sampling.
Here are some common sources of primary and secondary data:
Primary data sources:
- Surveys (questionnaires, interviews)
- Experiments
- Observations
- Focus groups
Secondary data sources:
- Government data (census data, vital statistics)
- Published research studies
- Organizational records and documents
- Media reports
- Commercial data providers
This document summarizes key concepts from an introduction to statistics textbook. It covers types of data (quantitative, qualitative, levels of measurement), sampling (population, sample, randomization), experimental design (observational studies, experiments, controlling variables), and potential misuses of statistics (bad samples, misleading graphs, distorted percentages). The goal is to illustrate how common sense is needed to properly interpret data and statistics.
Introduction to Statistics - Basic concepts
- How to be a good doctor - A step in Health promotion
- By Ibrahim A. Abdelhaleem - Zagazig Medical Research Society (ZMRS)
This chapter introduces the basic concepts and terminology of statistics. It discusses two main branches of statistics - descriptive statistics which involves collecting, organizing and summarizing data, and inferential statistics which allows drawing conclusions about populations from samples. The chapter also covers variables, populations, samples, parameters, statistics and how to organize and visualize data through tables, charts and graphs. It emphasizes that statistics helps turn data into useful information for decision making in business.
1. The document discusses different sampling methods including simple random sampling, systematic random sampling, stratified sampling, and cluster sampling.
2. It provides examples of how each sampling method works and how samples are selected from the overall population.
3. Exercises are provided to determine which sampling method should be used for different scenarios involving selecting samples from identified populations.
This document provides an overview of basic statistics concepts. It defines statistics as the science of collecting, presenting, analyzing, and reasonably interpreting data. Descriptive statistics are used to summarize and organize data through methods like tables, graphs, and descriptive values, while inferential statistics allow researchers to make general conclusions about populations based on sample data. Variables can be either categorical or quantitative, and their distributions and presentations are discussed.
Descriptive statistics comprises methods for describing the characteristics of a data set, including measures such as the average of the data, its spread, and the shape of its distribution.
This chapter discusses descriptive statistics including organizing and graphing qualitative and quantitative data, measures of central tendency, and measures of dispersion. It covers frequency distributions, histograms, polygons, measures of central tendency (mean, median, mode), measures of dispersion (range, variance, standard deviation), skewness, and cumulative frequency distributions. The objectives are to describe and interpret graphical displays of data, compute various statistical measures, and identify shapes of distributions.
This presentation includes an introduction to statistics, introduction to sampling methods, collection of data, classification and tabulation, frequency distribution, graphs and measures of central tendency.
Here are the modes for the three examples:
1. The mode is 3. This value occurs most frequently among the number of errors committed by the typists.
2. The mode is 82. This value occurs most frequently among the number of fruits yielded by the mango trees.
3. The modes are 12 and 15. These values occur most frequently among the students' quiz scores.
This document summarizes the key topics and concepts covered in Chapter 2 of the 9th edition of the business statistics textbook "Presenting Data in Tables and Charts". The chapter discusses guidelines for analyzing data and organizing both numerical and categorical data. It then covers various methods for tabulating and graphing univariate and bivariate data, including tables, histograms, frequency distributions, scatter plots, bar charts, pie charts, and contingency tables.
This document discusses measures of central tendency, including the mean, median, and mode. It provides examples of calculating each measure using sample data sets. The mean is the average value calculated by summing all values and dividing by the number of data points. The median is the middle value when data is ordered from lowest to highest. The mode is the most frequently occurring value. Examples are given to demonstrate calculating the mean, median, and mode from sets of numeric data.
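A minimal sketch of computing these three measures, assuming Python and a made-up data set:

```python
import statistics

data = [4, 7, 7, 2, 9, 7, 3]  # hypothetical sample data

mean = sum(data) / len(data)      # sum of all values divided by the count
median = statistics.median(data)  # middle value of the sorted data
mode = statistics.mode(data)      # most frequently occurring value

print(mean, median, mode)
```

Sorted, the data are 2, 3, 4, 7, 7, 7, 9, so the median and mode are both 7 here.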
SAMPLING; SAMPLING TECHNIQUES – RANDOM SAMPLING (SIMPLE RANDOM SAMPLING), by Navya Jayakumar
This document discusses simple random sampling, which is a type of probability sampling technique where each member of the population has an equal chance of being selected. It provides examples to illustrate simple random sampling, such as selecting sugar from a bag or using a lottery system or random number table to randomly pick sample members. The key aspects of simple random sampling are that selection is random and does not depend on the characteristics of the population members, giving each member an equal chance of selection.
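Simple random sampling, as described, can be sketched with Python's standard library; the population of 100 numbered members is hypothetical:

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 members

# random.sample draws without replacement, giving every member
# an equal chance of selection, independent of its characteristics.
sample = random.sample(population, k=10)

print(sorted(sample))
```

Each run produces a different sample of ten distinct members, mirroring the lottery-system idea in the summary.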
This presentation is about Basic Statistics-related to types of Data-Qualitative and Quantitative, and its Examples in everyday life- By: Dr. Farhana Shaheen
The document defines a sampling distribution of sample means as a distribution of means from random samples of a population. The mean of sample means equals the population mean, and the standard deviation of sample means is smaller than the population standard deviation, equaling it divided by the square root of the sample size. As sample size increases, the distribution of sample means approaches a normal distribution according to the Central Limit Theorem.
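The claims about the sampling distribution can be checked by simulation; a minimal sketch, assuming Python and an arbitrary normal population with mean 50 and standard deviation 10:

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable
population_mean, population_sd, n = 50, 10, 25

# Draw many samples of size n and record each sample's mean.
sample_means = [
    statistics.mean(random.gauss(population_mean, population_sd) for _ in range(n))
    for _ in range(2000)
]

print(statistics.mean(sample_means))   # typically close to 50
print(statistics.stdev(sample_means))  # typically close to 10 / sqrt(25) = 2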
Elements of Research Design | Purpose of Study | Importance of Research Design, by FaHaD .H. NooR
This document discusses key elements of research design including the purpose of a study, type of investigation, study setting, population, time horizon, and importance of considering research design early. It describes exploratory, descriptive and hypothesis testing purposes. Correlational and causal studies are covered as well as field, lab and contrived settings. Individuals, groups, organizations can be units of analysis. Cross-sectional and longitudinal time horizons are presented. Reliability including stability over time and internal consistency are also summarized.
Quartiles divide a sorted data set into quarters. The first quartile (Q1) is the median of the values below the overall median. The second quartile (Q2) is the overall median. The third quartile (Q3) is the median of the values above the overall median. In an example data set of 11 numbers, the quartiles were Q1=5, Q2=7, and Q3=9.
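The median-of-halves rule can be sketched in code; the 11-number data set below is hypothetical, constructed so that it reproduces Q1=5, Q2=7, Q3=9:

```python
import statistics

def quartiles(data):
    """Q1/Q2/Q3 via the median-of-halves rule: the halves
    exclude the overall median when n is odd."""
    s = sorted(data)
    n = len(s)
    q2 = statistics.median(s)
    lower = s[: n // 2]         # values below the overall median
    upper = s[(n + 1) // 2 :]   # values above the overall median
    return statistics.median(lower), q2, statistics.median(upper)

# Hypothetical 11-number data set consistent with the example quartiles
data = [3, 4, 5, 6, 6, 7, 8, 8, 9, 10, 11]
print(quartiles(data))  # (5, 7, 9)
```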
Practical Research 2, Chapter 3: Common Statistical Tools, by DaianMoreno1
This document provides an overview of common statistical tools including:
- The arithmetic mean, which is the sum of a list of numbers divided by the total number of items. It provides an overall trend of data.
- Frequency distributions, which show how many observations fall into various categories, presented as tables, histograms, or pie charts.
- Bar graphs, which present categorical and numeric variables, often grouped into class intervals, as bars that reveal patterns.
- Standard deviation, which measures how spread out numbers are from the mean. A low standard deviation means the values cluster close to the mean.
- T-tests and Pearson's correlation coefficient, which measure relationships between variables, and the chi-square test, which compares expected to observed categorical variable frequencies.
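The standard-deviation point in the list above can be illustrated with two hypothetical data sets that share the same mean (a Python sketch):

```python
import statistics

a = [48, 49, 50, 51, 52]  # values clustered near the mean
b = [30, 40, 50, 60, 70]  # values widely spread around the same mean

# Same centre, very different spread.
print(statistics.mean(a), statistics.stdev(a))  # mean 50, small spread
print(statistics.mean(b), statistics.stdev(b))  # mean 50, large spread
```

Both sets average 50, but the second has a much larger standard deviation because its values lie far from the mean.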
This document summarizes quantitative data analysis techniques for summarizing data from samples and generalizing to populations. It discusses variables, simple and effect statistics, statistical models, and precision of estimates. Key points covered include describing data distribution through plots and statistics, common effect statistics for different variable types and models, ensuring model fit, and interpreting precision, significance, and probability to generalize from samples.
Understanding data types is an important concept in statistics. When you design an experiment, you need to know what type of data you are dealing with, because that determines which statistical analyses, visualizations, and prediction algorithms can be used.
This document discusses point and interval estimation. It defines an estimator as a function used to infer an unknown population parameter based on sample data. Point estimation provides a single value, while interval estimation provides a range of values with a certain confidence level, such as 95%. Common point estimators include the sample mean and proportion. Interval estimators account for variability in samples and provide more information than point estimators. The document provides examples of how to construct confidence intervals using point estimates, confidence levels, and standard errors or deviations.
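A minimal sketch of a 95% confidence interval for a mean, assuming Python, a normal approximation (z = 1.96), and a made-up sample of eight measurements:

```python
import math
import statistics

sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]  # hypothetical data
n = len(sample)

mean = statistics.mean(sample)                # the point estimate
se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

z = 1.96  # critical value for a 95% confidence level (normal approximation)
lower, upper = mean - z * se, mean + z * se

print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```

The interval estimate brackets the point estimate, conveying the sampling variability that a single point estimate hides.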
This document discusses statistics and their uses in various fields such as business, health, learning, research, social sciences, and natural resources. It provides examples of how statistics are used in starting businesses, manufacturing, marketing, and engineering. Statistics help decision-makers reduce ambiguity and assess risks. They are used to interpret data and make informed decisions. However, statistics also have limitations as they only show averages and may not apply to individuals.
Percentiles are positional measures used to indicate an individual's position within a group. They divide a data set into 100 equal parts, with percentiles (denoted Px) indicating what percent of values are less than a specified value. Common percentiles include the median (P50), quartiles (P25, P50, P75), and deciles. Percentiles are calculated using a formula that determines the position number based on the total number of data points and percentile value. This position is then used to find the corresponding value within ordered data.
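The position-based calculation can be sketched as follows (assuming Python; the (n + 1) position rule used here is one common textbook convention among several, and the data are hypothetical):

```python
import math

def percentile(data, p):
    """Value below which roughly p percent of the ordered data fall.
    Uses the 'position = p/100 * (n + 1)' rule (one common convention;
    others exist), interpolating between neighbouring values."""
    s = sorted(data)
    n = len(s)
    pos = p / 100 * (n + 1)                    # 1-based position
    lo = min(max(int(math.floor(pos)), 1), n)  # clamp to a valid index
    hi = min(lo + 1, n)
    frac = pos - math.floor(pos)
    return s[lo - 1] + frac * (s[hi - 1] - s[lo - 1])

data = [2, 4, 4, 5, 6, 7, 8]
print(percentile(data, 50))  # P50 is the median
```

For these seven values, P50 lands exactly on the 4th ordered value, 5, matching the median.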
This document discusses frequency distributions and how to construct them from raw data. It provides examples of creating stem-and-leaf displays, frequency tables, relative frequency tables, and cumulative frequency tables from various data sets. Key concepts covered include class width, class boundaries, tallying data, and calculating relative frequencies and percentages. Overall, the document serves as a tutorial on how to organize and summarize data using various types of frequency distributions.
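Building a frequency table with a chosen class width can be sketched as follows (assuming Python; the raw data and class width of 10 are hypothetical):

```python
from collections import Counter

data = [7, 12, 15, 18, 22, 22, 25, 27, 31, 34, 35, 39]  # hypothetical raw data
class_width = 10
start = 0

# Tally each value into its class: 0-9, 10-19, 20-29, 30-39
freq = Counter((x - start) // class_width for x in data)
n = len(data)

for k in sorted(freq):
    lo = start + k * class_width
    hi = start + (k + 1) * class_width - 1
    rel = freq[k] / n  # relative frequency of the class
    print(f"{lo:2d}-{hi:2d}  freq={freq[k]}  rel={rel:.2f}")
```

The relative frequencies sum to 1, and multiplying them by 100 gives the percentage column of the table.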
This document discusses time series analysis. It defines a time series as a collection of observations made sequentially over time. Examples include financial, scientific, demographic, and meteorological time series data. The document contrasts time series data with cross-sectional data. It also describes the components of a time series, including trends, seasonal variations, cyclical variations, and irregular/random variations. The purposes and uses of time series analysis are discussed, along with methods for decomposing and measuring trends in time series data.
This document provides information on different types of charts and graphs used in statistics. It defines bar graphs, pie charts, histograms, frequency polygons, ogives, pictograms and discusses their uses, advantages and disadvantages. Examples are given for each type of graph to demonstrate how they are constructed and how data is represented visually. Key information on choosing appropriate scales and plotting points for different graphs is also presented.
This document discusses several definitions of economics provided by prominent economists over time. It begins by summarizing Adam Smith's definition from 1776 that viewed economics as the science of wealth. It then discusses Alfred Marshall's 1890 definition that considered economics the study of mankind in business. Next, it outlines Lionel Robbins' 1932 definition that defined economics as studying human behavior related to scarce means and alternative uses. Finally, it provides Paul Samuelson's modern definition from 1948 that viewed economics as concerning how society employs its resources. The document then briefly discusses the main divisions of economics as consumption, production, exchange, distribution, and public finance.
Statistics is the science of dealing with numbers.
It is used for collection, summarization, presentation and analysis of data.
Statistics provides a way of organizing data to get information on a wider and more formal (objective) basis than relying on personal experience (subjective).
Introduction to Statistics (ppt), by Rahul Dhaker
This document provides an introduction to statistics and biostatistics. It discusses key concepts including:
- The definitions and origins of statistics and biostatistics. Biostatistics applies statistical methods to biological and medical data.
- The four main scales of measurement: nominal, ordinal, interval, and ratio scales. Nominal scales classify data into categories while ratio scales allow for comparisons of magnitudes and ratios.
- Descriptive statistics which organize and summarize data through methods like frequency distributions, measures of central tendency, and graphs. Frequency distributions condense data into tables and charts. Measures of central tendency include the mean, median, and mode.
This document provides an overview of key terminology and concepts in statistics. It discusses topics like populations and samples, variables and their measurement, levels of measurement, research methods like correlational analysis and experiments, and mathematical notation used in statistics. The goal is to introduce readers to what statistics is about at a high level and prepare them for further study of important statistical concepts.
This document discusses statistical analysis techniques including measures of central tendency, variance, standard deviation, t-tests, and levels of significance. It provides an example of using these techniques to analyze plant height data from a fertilizer experiment and determine if differences in heights between treated and untreated plants are statistically significant. The document introduces the concepts and calculations involved in describing and analyzing quantitative data using common statistical methods.
Statistics involves collecting, organizing, analyzing, and interpreting data. Descriptive statistics describe characteristics of a data set through measures like central tendency and variability. Inferential statistics draw conclusions about a population based on a sample. Key terms include population, sample, parameter, statistic, data types, levels of measurement, and sampling techniques like simple random sampling. Common data gathering methods are interviews, questionnaires, and registration records. Data can be presented textually, in tables, or graphically through charts, graphs, and maps.
Statistics can be defined in both a singular and plural sense. In the singular sense, it refers to statistical methods for collecting, analyzing, and interpreting numerical data. In the plural sense, it refers to the actual numerical facts or data collected. Statistics involves systematically collecting, organizing, presenting, analyzing, and interpreting numerical data to describe features and characteristics. It allows for comparing facts, establishing relationships, and facilitating policymaking and decision making. However, statistics only studies aggregates and averages, not individual cases, and results are true only on average. It also requires properly contextualizing and referencing results.
The document provides an introduction to statistics, discussing the meaning, history, and applications of statistics. It defines key statistical concepts such as population and sample, descriptive and inferential statistics. It also discusses the different types of variables and levels of measurement. The document traces the history of statistics from ancient times to the present day, highlighting important contributors to the field. It provides examples of how statistics is used in different domains like education, business, research, and government.
This document provides an introduction to key concepts in statistics, including variables, populations, samples, types of variables, measurement scales, correlational studies, experiments, other study types, data, descriptive statistics, and inferential statistics. It defines important terms and outlines the goals and characteristics of different statistical methods and study designs.
The document provides an introduction to statistics, covering its origin and development, definitions, types of statistics (descriptive and inferential), data collection methods, organization and presentation of data, and variables. It discusses how statistics has evolved from its early use by governments to keep records to its current role across various fields such as business, research, and the natural and social sciences. Key aspects of statistics like data collection, organization, analysis, and interpretation are also introduced.
This document provides an introduction to descriptive statistics and measures of central tendency, including the mean, median, and mode. It discusses how the mean can be impacted by outliers, while the median is not. The standard deviation and variance are introduced as measures of dispersion that quantify how much values vary from the mean or from each other. Finally, the document discusses different ways of organizing and graphing data, including histograms, pie charts, line graphs, and scatter plots.
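The outlier effect described above can be demonstrated with a short sketch (assuming Python; the salary figures, in thousands, are hypothetical):

```python
import statistics

salaries = [40, 42, 45, 47, 50]      # hypothetical salaries, in thousands
with_outlier = salaries + [400]      # one extreme value added

# The mean shifts dramatically; the median barely moves.
print(statistics.mean(salaries), statistics.median(salaries))
print(statistics.mean(with_outlier), statistics.median(with_outlier))
```

A single extreme value drags the mean from 44.8 up to 104, while the median only moves from 45 to 46, which is why the median is preferred for skewed data.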
Here are the steps to find the measures of central tendency for this data set:
1. Find the mean (average) by adding all the values and dividing by the total number of values:
34 + 35 + 40 + 40 + 48 + 21 + 20 + 19 + 34 + 45 + 19 + 17 + 18 + 15 + 16 = 400
Total number of values is 15
So, mean = 400/15 = 26.67
2. Find the mode by determining the most frequent value:
The most frequent values are 40 and 19, both appearing twice. Therefore, the modes are 40 and 19.
3. Find the median by ordering the values from lowest to highest and picking the middle value:
This document provides an introduction to key concepts in statistics. It discusses various statistical measures such as measures of central tendency (mean, median, mode), measures of dispersion (range, standard deviation), correlation, and different types of correlation (simple, partial, multiple). It also outlines common statistical methods like scatter diagrams, Karl Pearson's method, and rank correlation method. The role of computer technology in statistics is mentioned.
This document provides an introduction to statistics and statistical concepts. It covers topics such as course objectives, purposes of statistics, population and sampling, types of data and variables, levels of measurement, and nominal level of measurement. The key points are that statistics can describe, summarize, predict and identify relationships in data, and that there are different levels of variables from nominal to ratio scales.
Statistics is the collection and analysis of data. There are two main branches: descriptive statistics, which organizes and summarizes data, and inferential statistics, which uses descriptive statistics to make predictions. Statistics starts with a question and uses data to provide information to help make decisions. It is widely used in business, health, education, research, social sciences, and natural resources.
This document provides an introduction to basic statistical concepts. It defines statistics as the study of numerical data and notes that while it uses mathematics, statistics arises from practical situations. The founder of modern statistics is identified as Ronald Fisher. Primary data is defined as data collected directly from sources, while secondary data is collected from existing sources. Key concepts explained include range, frequency, frequency tables, bar graphs, histograms, frequency polygons, and measures of central tendency like mean, median and mode. An example is provided to illustrate calculating these measures.
This document provides an introduction to statistics. It defines statistics as the science of data that involves collecting, classifying, summarizing, organizing, and interpreting numerical information. It outlines key terms such as data, population, sample, parameter, and statistic. It describes different types of variables like independent and dependent variables. It discusses descriptive statistics, inferential statistics, and predictive modeling. Finally, it explains important concepts like measures of central tendency, measures of variation, and statistical distributions like the normal distribution.
1) Simulation involves defining a scenario with known probabilistic outcomes, running the scenario many times to model likely outcomes, and comparing the results to alternative models.
2) There are 5 steps to simulation: state the problem, make assumptions, create a mathematical model, run many repetitions, and state conclusions.
3) Probability models describe random phenomena using a sample space (all possible outcomes) and assigning probabilities to each outcome or event.
This document provides an introduction to key concepts in statistics including data, populations, samples, parameters, statistics, descriptive statistics, and inferential statistics. It defines data as information from observations or measurements. Statistics is defined as collecting, organizing, analyzing and interpreting data to make decisions. Descriptive statistics involves summarizing and displaying data, while inferential statistics uses sample data to draw conclusions about populations. Examples are provided to illustrate identifying populations and samples, and distinguishing between parameters and statistics.
The document discusses different types of data and levels of measurement. It describes qualitative data as consisting of attributes, labels or non-numerical entries, while quantitative data contains numerical measurements or counts. Four levels of measurement are introduced: nominal involving categories, ordinal allowing ordering, interval where differences are meaningful, and ratio where values can be expressed as multiples. Examples are provided to demonstrate classifying different data sets by type and level of measurement.
This document provides an introduction to statistics, including definitions of key terms. It discusses how statistics involves collecting, organizing, analyzing and interpreting data. A population is the entire set of data, while a sample is a subset of a population. Parameters describe populations and statistics describe samples. There are two main branches of statistics - descriptive statistics which organizes and summarizes data, and inferential statistics which uses samples to draw conclusions about populations. Data can be qualitative like names or quantitative with numerical values, and have different levels of measurement from nominal to ratio. Experimental design involves identifying variables of interest and collecting representative data using methods like surveys, experiments or sampling techniques.
This document provides an introduction to key concepts in statistics including:
1) Data consists of observations and measurements while statistics involves collecting, organizing, analyzing and interpreting data to make decisions.
2) A population is the total collection of interest while a sample is a subset, parameters describe populations and statistics describe samples.
3) Descriptive statistics involves summarizing and displaying data while inferential statistics uses samples to draw conclusions about populations.
This document outlines the content of a biostatistics course. It introduces statistics, defining it as the collection, organization, analysis and interpretation of data to draw conclusions. It discusses descriptive and inferential statistics. It also covers topics like data classification, levels of measurement, sampling techniques and methods of data collection that will be taught in the course's first four chapters. These chapters will address central tendency, variation, frequency distributions, and range.
The document discusses different types and levels of data. There are two types of data: qualitative and quantitative. Qualitative data consists of attributes like names while quantitative data consists of numerical values. There are four levels of measurement for data: nominal, ordinal, interval, and ratio. Each level allows for different statistical calculations and comparisons of the data.
This document discusses experimental design and methods of data collection. It describes the basic steps of experimental design as identifying variables of interest, developing a data collection plan, collecting and analyzing data, and identifying errors. Common data collection methods are outlined as taking a census, using sampling, simulations, experiments, and surveys. Simple random sampling and other sampling techniques like stratified sampling, cluster sampling, and systematic sampling are also defined.
This document provides an overview of key concepts in statistics including:
1. Statistics involves collecting, organizing, analyzing, and interpreting data to make decisions. Data comes from observations, counts, or measurements.
2. A population is the entire group being studied, while a sample is a subset of the population. Parameters describe populations, while statistics describe samples.
3. Descriptive statistics involve summarizing and displaying data, while inferential statistics use samples to draw conclusions about populations.
4. Data can be qualitative (attributes) or quantitative (numbers). It can also be measured at the nominal, ordinal, interval, or ratio level.
Presentation is made by the student of M.phil Jameel Ahmed Qureshi Faculty of Education Elsa Kazi campus Hyderabad UoS Jamshoron, This presentation is an assignment assign by the Dr. Mumtaz Khwaja
This document provides an overview of statistics. It defines statistics as the science of collecting, organizing, analyzing, and interpreting data to make decisions. There are two main types of data: populations, which are all possible outcomes, and samples, which are subsets of populations. The document distinguishes between parameters, which describe population characteristics, and statistics, which describe sample characteristics. Finally, it outlines the two branches of statistics: descriptive statistics, which involves summarizing and displaying data, and inferential statistics, which uses samples to draw conclusions about populations.
This document provides an introduction to statistical theory. It discusses why statistics are studied and defines key statistical concepts such as populations, samples, parameters, statistics, descriptive statistics, inferential statistics, and the different types of data and variables. It also covers experimental design, methods for collecting data such as surveys and sampling, and different sampling methods like random, stratified, cluster, and systematic sampling.
This document provides an introduction to statistics, including defining key terms and concepts. It discusses what statistics is, the difference between populations and samples, parameters and statistics. It also outlines the two main branches of statistics - descriptive statistics, which involves organizing and summarizing data, and inferential statistics, which uses samples to draw conclusions about populations. The document then discusses different types of data, such as qualitative vs. quantitative, and the four levels of measurement for quantitative data. Finally, it discusses methods for designing statistical studies and collecting data, such as interviews, questionnaires, observation, and using registration data or mechanical devices.
4. Section 1.1 Objectives
• Define statistics
• Distinguish between a population and a sample
• Distinguish between a parameter and a statistic
• Distinguish between descriptive statistics and inferential statistics
Larson/Farber 4th ed. 4
5. What is Data?
Data
Consist of information coming from observations, counts, measurements,
or responses.
• “People who eat three daily servings of whole grains have been shown to reduce their risk of…stroke by 37%.” (Source: Whole Grains Council)
• “Seventy percent of the 1500 IT students play DOTA 2 and CSGO.”
6. What is Statistics?
Statistics
Applied mathematics that deals
with collection, organization,
presentation, analysis, and
interpretation of numerical data
in order to make decisions.
7. Data Sets
Population
The collection of all outcomes,
responses, measurements, or
counts that are of interest.
Sample
A subset of the population.
8. Example: Identifying Data Sets
In June 2007, a survey asked Smart Girls what they thought about Interactive Video Games. There were 147 respondents, 97% of whom are girls, and most are between 10 and 14 years old. Almost 40% are the oldest in their family and 30% are the youngest. Middle child and only child came out about even, with 20 saying they are the middle child and 19 saying they are an only child.
Most agreed or strongly agreed (60%) that they like interactive video games, and only 18% disagreed or strongly disagreed. The rest weren't sure (22%).
9. Example: Identifying Data Sets
In a recent survey, 4501 adults in the Philippines were asked if they
think global warming is a problem that requires immediate
government action. Nine hundred thirty-nine of the adults said yes.
Identify the population and the sample. Describe the data set.
(Adapted from: Pew Research Center)
10. Solution: Identifying Data Sets
• The population consists of the responses of all adults in the PHL.
• The sample consists of the responses of the 4501 adults in the PHL in the survey.
• The sample is a subset of the responses of all adults in the PHL.
• The data set consists of 939 yes’s and 3562 no’s.
Responses of adults in the PHL (population)
Responses of adults in survey (sample)
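As a quick illustration, the sample in this example can be represented in code; a minimal Python sketch (the counts follow the figures in the problem statement):

```python
# The population (responses of all PHL adults) is unobserved;
# the sample of 4501 responses is the data set we actually have.
sample = ["yes"] * 939 + ["no"] * (4501 - 939)

yes_share = sample.count("yes") / len(sample)
print(f"Sample size: {len(sample)}")            # 4501
print(f"Proportion answering yes: {yes_share:.3f}")  # a sample statistic
```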
11. Parameter and Statistic
Parameter
A number that describes a population
characteristic.
Average age of all people in the United States
Statistic
A number that describes a sample
characteristic.
Average age of people from a sample
of three states
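The parameter/statistic distinction can be sketched in code. A toy example with made-up ages (not from the slides): a number computed from the whole population is a parameter; the same quantity computed from a random sample is a statistic.

```python
import random

# Hypothetical population of 100,000 ages (illustration only)
random.seed(1)
population = [random.randint(18, 90) for _ in range(100_000)]

parameter = sum(population) / len(population)   # describes the population
sample = random.sample(population, 500)         # a subset of the population
statistic = sum(sample) / len(sample)           # describes the sample

print(f"Population mean (parameter): {parameter:.1f}")
print(f"Sample mean (statistic):     {statistic:.1f}")
```

With a reasonably large random sample, the statistic tends to land close to the parameter, which is the basis of inferential statistics.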
12. Example: Distinguish Parameter and Statistic
Decide whether the numerical value describes a
population parameter or a sample statistic.
1. A recent survey of a sample of NBA players
reported that the average salary for an
NBA player is more than Php82,000. (Source:
The Wall Street Journal)
Solution:
Sample statistic (the average of Php82,000 is
based on a subset of the population)
13. Example: Distinguish Parameter and Statistic
Decide whether the numerical value describes a
population parameter or a sample statistic.
2. Starting salaries for the 667 IT
graduates from Holy Angel University
increased 8.5% from the previous year.
Solution:
Population parameter (the percent increase of
8.5% is based on all 667 graduates’ starting
salaries)
14. Branches of Statistics
Descriptive Statistics
Involves organizing,
summarizing, and
displaying data.
e.g. Tables, charts,
averages
Inferential Statistics
Involves using sample
data to draw
conclusions about a
population.
15. Example:
• A teacher arranges the scores obtained by his students in a graph → Descriptive
• A researcher may wish to find out whether exposure to pollution may reduce life span → Inferential
16. Example: Descriptive and Inferential Statistics
Decide which part of the study represents the descriptive branch of
statistics. What conclusions might be drawn from the study using
inferential statistics?
A large sample of men, aged 48,
was studied for 18 years. For
unmarried men, approximately
70% were alive at age 65. For
married men, 90% were alive at
age 65. (Source: The Journal of
Family Issues)
17. Solution: Descriptive and Inferential Statistics
Descriptive statistics involves statements such as “For unmarried men,
approximately 70% were alive at age 65” and “For married men, 90%
were alive at 65.”
A possible inference drawn from the study is that being married is
associated with a longer life for men.
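A sketch of how the two branches differ in practice, using an assumed sample size of 1000 (the slide does not report one): descriptive statistics summarize the sample itself, while inferential statistics use the sample to estimate a population value.

```python
import math

# Hypothetical sample: 90% of 1000 married men were alive at age 65
# (the 1000 is an assumption; the slide gives only the percentage).
n, alive = 1000, 900

# Descriptive: summarize the sample itself
p_hat = alive / n
print(f"Sample proportion alive at 65: {p_hat:.2f}")

# Inferential: estimate the population proportion from the sample
# (approximate 95% confidence interval for a proportion)
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"Approx. 95% CI for the population proportion: ({low:.3f}, {high:.3f})")
```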
18. Section 1.1 Summary
• Defined statistics
• Distinguished between a population and a sample
• Distinguished between a parameter and a statistic
• Distinguished between descriptive statistics and inferential statistics
20. Types of Data According to Sources
1. Primary data. Information gathered directly from respondents or based on direct, firsthand experience.
Example: a diary
2. Secondary data. Information taken from published or unpublished data gathered by other individuals or agencies.
Examples: magazines, books
21. Types of Variables
Qualitative Variable
Consists of attributes, labels, or nonnumerical entries.
Major Place of birth Eye color
22. Types of Variables
Quantitative variables
Numerical measurements or counts.
Age Weight of a letter Temperature
23. Section 1.2 Objectives
• Distinguish between qualitative data and quantitative data
• Classify data with respect to the four levels of measurement
24. Example: Classifying Data by Type
The base prices of several vehicles are shown in the table. Which data are
qualitative data and which are quantitative data? (Source Ford Motor
Company)
25. Solution: Classifying Data by Type
Quantitative Data
(Base prices of
vehicles models are
numerical entries)
Qualitative Data
(Names of vehicle
models are
nonnumerical entries)
26. Classification of quantitative variables
1. Continuous data - numerical responses that arise from a measurement process.
Ex. 1.234 in, 2.8 cm
2. Discrete data - numerical responses that arise from a counting process.
Ex. Number of children in a community
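The counting-versus-measuring distinction can be made concrete with a tiny sketch (all values are hypothetical):

```python
# Discrete data arise from counting: only whole numbers make sense.
children_per_household = [0, 2, 1, 3, 2]    # hypothetical counts

# Continuous data arise from measuring: any value in a range is possible.
lengths_cm = [1.234, 2.8, 3.1415, 0.07]     # hypothetical measurements

# Counts are whole numbers; measurements generally are not.
assert all(isinstance(x, int) for x in children_per_household)
assert any(x != int(x) for x in lengths_cm)
```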
27. Levels of Measurement
1. Nominal level of measurement
• Qualitative data only
• Categorized using names, labels, or qualities
• No mathematical computations can be made
2. Ordinal level of measurement
• Qualitative or quantitative data
• Data can be arranged in order
• Differences between data entries are not meaningful
28. Example: Classifying Data by Level
Two data sets are shown. Which data set consists of data at the nominal
level? Which data set consists of data at the ordinal level? (Source: Nielsen
Media Research)
29. Solution: Classifying Data by Level
Ordinal level (lists the rank of five
TV programs. Data can be ordered.
Difference between ranks is not
meaningful.)
Nominal level (lists the
call letters of each network
affiliate. Call letters are
names of network
affiliates.)
30. Levels of Measurement
3. Interval level of measurement
•Quantitative data
•Data can be ordered
•Differences between data entries are meaningful
•Zero represents a position on a scale (not an inherent
zero – zero does not imply “none”)
31. Example: Classifying Data by Level
Two data sets are shown. Which data set consists of data at the interval
level? Which data set consists of data at the ratio level? (Source: Major
League Baseball)
32. Levels of Measurement
4. Ratio level of measurement
•Similar to interval level
•Zero entry is an inherent zero (implies “none”)
•A ratio of two data values can be formed
•One data value can be expressed as a multiple of
another
33. Solution: Classifying Data by Level
Interval level (Quantitative data. Can find a difference between two dates,
but a ratio does not make sense.)
Ratio level (Can find differences and write ratios.)
34. Summary of Four Levels of Measurement
Level of Measurement | Put data in categories | Arrange data in order | Subtract data values | Determine if one data value is a multiple of another
Nominal              | Yes                    | No                    | No                   | No
Ordinal              | Yes                    | Yes                   | No                   | No
Interval             | Yes                    | Yes                   | Yes                  | No
Ratio                | Yes                    | Yes                   | Yes                  | Yes
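The summary table above can be encoded as a small lookup, which makes the hierarchy explicit: each level permits every operation of the level before it, plus one more. This is an illustrative sketch; the names and function are hypothetical, not from the textbook.

```python
# Which operations are meaningful at each level of measurement,
# mirroring the summary table: categorize, order, subtract, form ratios.
LEVELS = {
    "nominal":  (True,  False, False, False),
    "ordinal":  (True,  True,  False, False),
    "interval": (True,  True,  True,  False),
    "ratio":    (True,  True,  True,  True),
}

def allowed_operations(level):
    """Return the names of the operations that are meaningful at a level."""
    names = ("categorize", "order", "subtract", "form ratios")
    return [name for name, ok in zip(names, LEVELS[level.lower()]) if ok]

print(allowed_operations("interval"))  # ['categorize', 'order', 'subtract']
```

Note how each row adds exactly one capability to the previous one, which is why the levels are described as a hierarchy.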
35. Section 1.2 Summary
• Distinguished between qualitative data and quantitative data
• Classified data with respect to the four levels of measurement
36. Section 1.3 Objectives
• Discuss how to design a statistical study
• Discuss data collection techniques
• Discuss sampling techniques
37. Methods of Collecting Data
• 1. Interview Method
• A. Direct method - the researcher personally interviews the respondent.
• B. Indirect method- the researcher uses a telephone to
interview the respondent.
2. Questionnaire Method
A questionnaire is a list of well-planned questions written on paper, which
can be either personally administered or mailed by the researcher to the
respondents.
3. Observation Method
the researcher observes the subject of the study which may
be an individual, a group, or any unit of interest.
38. Methods of Collecting Data
• 4. Registration Method
• Examples of data gathered using this method are those
obtained from National Statistics Office(NSO), Land
Transportation, Department of Education, and other
government agencies.
5. Mechanical Devices
The devices that can be used when gathering data for social
and educational researches are the camera, projector, tape
recorder, etc. In chemical, biological and medical
researches, the common devices are x-ray machine, CT
scan, microscope, etc. In astronomy and atmospheric
researches, the telescope, barometer, radar machine,
computer, etc.
39. Example: Methods of Data Collection
A study of how fourth grade students solve a puzzle.
Solution:
Observational study (observe and
measure certain characteristics of
part of a population)
40. Example: Methods of Data Collection
A study of U.S. residents’ approval rating of the U.S. president.
Solution:
Interview (ask “Do you approve of the way the president is handling his
job?”)
41. Sampling Techniques
Probability Sampling
it is a sampling technique in which every individual in a
population has an equal chance of being selected to be a member of the
sample.
1. Random Sampling
selects a sample by the lottery method, so every member of the population
has an equal chance of being chosen.
[Diagram: a population of scattered individuals (x’s) from which a random sample is drawn.]
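The lottery method can be sketched in a few lines. This is an illustrative example, not from the slides; the household IDs and sample size are made up.

```python
import random

# Simple random sampling: every member of the population has an equal
# chance of being drawn, and members are drawn without replacement.
population = [f"household_{i}" for i in range(1, 101)]  # 100 hypothetical households

random.seed(42)  # seeded only to make the example reproducible
sample = random.sample(population, k=10)  # draw 10 at random
print(sample)
```

`random.sample` draws without replacement, so no household can appear twice in the sample.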
42. Other Sampling Techniques
2. Systematic Sample
• Choose a starting value at random. Then choose every kth member
of the population.
• In the West Ridge County example you could assign
a different number to each household, randomly
choose a starting number, then select every 100th
household.
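The West Ridge County procedure above can be sketched directly: number the households, pick a random start, then take every kth one. The household numbering and sizes are hypothetical.

```python
import random

def systematic_sample(population, k):
    """Pick a random start within the first k members, then every kth member."""
    start = random.randrange(k)   # random starting position, 0 .. k-1
    return population[start::k]   # every kth member from there on

households = list(range(1, 1001))               # 1,000 hypothetical households
sample = systematic_sample(households, k=100)   # every 100th household
print(len(sample))  # 10 households, each 100 apart
```

Note that a systematic sample can be biased if the population has a periodic pattern that lines up with k.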
43. 3. Stratified Sample
drawn when the population is divided into groups, or strata, and members
are randomly selected from each stratum.
• To collect a stratified sample of the number of people
who live in Angeles City households, you could
divide the households into socioeconomic levels and
then randomly select households from each level.
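The Angeles City example can be sketched as follows. The socioeconomic levels, household names, and per-stratum sample size are all made up for illustration; the key point is that every stratum contributes to the sample.

```python
import random

# Stratified sampling: the population is divided into strata
# (here, hypothetical socioeconomic levels), and a random sample
# is drawn from EVERY stratum.
strata = {
    "low":    [f"low_{i}" for i in range(50)],
    "middle": [f"mid_{i}" for i in range(30)],
    "high":   [f"high_{i}" for i in range(20)],
}

def stratified_sample(strata, per_stratum):
    sample = []
    for households in strata.values():
        sample.extend(random.sample(households, per_stratum))
    return sample

sample = stratified_sample(strata, per_stratum=5)
print(len(sample))  # 5 households from each of the 3 strata = 15
```

Because every stratum is sampled, groups that are small in the population are still guaranteed representation.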
44. Example: Identifying Sampling Techniques
You are doing a study to determine the opinion of students at your school
regarding computer games research. Identify the sampling technique
used.
1. You divide the student population with respect to
majors and randomly select and question some
students in each major.
Solution:
Stratified sampling (the students are divided into
strata (majors) and a sample is selected from each
major)
45. Example: Identifying Sampling Techniques
2. You assign each student a number and generate
random numbers. You then question each student
whose number is randomly selected.
Solution:
Simple random sample (each sample of the same size
has an equal chance of being selected and each
student has an equal chance of being selected.)
46. Sampling Techniques
2. Non-Probability Sampling
1. Purposive Sampling
selects the sample respondents based on certain criteria laid down by
the researcher.
2. Quota Sampling
samples are selected using a quota system.
3. Convenience Sampling
the researcher picks the sample respondents from the population that he
finds convenient to interview due to their availability or accessibility.
47. Other Sampling Techniques
4. Cluster Sample
• Divide the population into groups (clusters) and
select all of the members in one or more, but not
all, of the clusters.
• In the Pampanga example you could divide the
households into clusters according to zip codes, then
select all the households in one or more, but not all,
zip codes.
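The Pampanga zip-code example can be sketched like this. The zip codes and household names are invented; the contrast with stratified sampling is the point: a few clusters are chosen at random, and then every member of each chosen cluster is surveyed.

```python
import random

# Cluster sampling: group households by (hypothetical) zip code, randomly
# choose some clusters, and take ALL households in the chosen clusters --
# unlike stratified sampling, which samples from every group.
clusters = {
    "2000": ["hh_a", "hh_b", "hh_c"],
    "2001": ["hh_d", "hh_e"],
    "2009": ["hh_f", "hh_g", "hh_h", "hh_i"],
}

def cluster_sample(clusters, n_clusters):
    chosen = random.sample(list(clusters), n_clusters)  # pick zip codes at random
    return [hh for zip_code in chosen for hh in clusters[zip_code]]

sample = cluster_sample(clusters, n_clusters=2)
print(sample)
```

Cluster sampling is convenient when clusters are easy to reach (e.g., whole neighborhoods), but it assumes each cluster is roughly representative of the population.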