Evaluation is a methodological area that is closely related to, but distinguishable from, more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders, and other skills that social research in general does not rely on as much.
2. Evaluation Research
– as an analytical tool, involves investigating a policy program to obtain all information pertinent to the assessment of its performance, both process and result
– evaluation, as a phase of the policy cycle, more generally refers to the reporting of such information back into the policy-making process
– “the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy”
3. Why is there research evaluation?
– provide an evidence base for strategy development,
– document funding practices and thereby establish transparency about taxpayers’ money,
– decide on the allocation of resources,
– support internal processes for learning about the research system and funding activities, which may result in the adaptation of funding programmes or research fields,
4. Why is there research evaluation?
– demonstrate that research performing and research funding organizations are accountable and are concerned with quality assurance,
– sharpen concepts: for example, what is understood by internationalization, interdisciplinarity, or the impact of science,
– establish a direct channel of communication with stakeholders, to communicate the impact and results of research funding to government or to allow grantees (scientists) to articulate their opinions about the funding system, application procedures, and research conditions (for example during site visits, interviews, surveys).
5. Characteristics and Principles of Evaluation
– Childers (1989, p. 250), in his article emphasizing the evaluation of programs, notes that evaluation research
1. is usually employed for decision making;
2. deals with research questions about a program;
3. takes place in the real world of the program; and
4. usually represents a compromise between pure and applied research.
6. – Wallace and Van Fleet (2001) comment that evaluation should be carefully planned, not occur by accident; have a purpose that is usually goal oriented; focus on determining the quality of a product or service; go beyond measurement; not be any larger than necessary; and reflect the situation in which it will occur.
7. Griffiths and King (1991) identify some
Principles for Good Evaluation
1. Evaluation must have a purpose; it must not be an end in itself
2. Without the potential for some action, there is no need to evaluate
3. Evaluation must be more than descriptive; it must take into account relationships
among operational performance, users, and organizations
4. Evaluation should be a communication tool involving staff and users
5. Evaluation should not be sporadic but ongoing
6. Ongoing evaluation should provide a means for continual monitoring, diagnosis, and
change
7. Ongoing evaluation should be dynamic in nature, reflecting new knowledge and
changes in the environment
8. 5 Basic Evaluation Questions
1) What will be assessed?
2) What measures/indicators will be used?
3) Who will be evaluated?
4) What data will be collected?
5) How will data be analyzed?
10. Summative evaluation
Summative evaluation seeks to understand the outcomes or effects of
something, for example where a test of children in school is used to
assess the effectiveness of teaching or the deployment of a curriculum.
The children in this case are not direct beneficiaries - they are simply
objects that contain information that needs to be extracted.
11. Summative Evaluation Research
– Summative evaluations can assess areas such as:
A. Finance: Effect in terms of cost, savings, profit and so on.
B. Impact: Broad effect, both positive and negative, including depth,
spread and time effects.
C. Outcomes: Whether desired or unwanted effects are achieved.
D. Secondary analysis: Analysis of existing data to derive additional
information.
E. Meta-analysis: Integrating results of multiple studies.
12. Formative evaluation
– Formative evaluation is used to help strengthen or improve the
person or thing being tested. For example where a test of
children in school is used to shape teaching methods that will
result in optimal learning.
13. Formative evaluation
Formative evaluations can assess areas such as:
A. Implementation: Monitoring success of a process or project.
B. Needs: Looking at such as type and level of need.
C. Potential: The possibility of using information for formative
purposes.
14. Topics Appropriate to Evaluation Research
– Evaluation research is appropriate whenever some social intervention occurs or
is planned.
– Social intervention is an action taken within a social context for the purpose of
producing some intended result.
– In its simplest sense, evaluation research is the process of determining whether
a social intervention has produced the intended result.
– The topics appropriate for evaluation research are limitless.
– The questions appropriate for evaluation research are of great practical
significance: jobs, programs, and investments as well as values and beliefs.
15. What Will Be Evaluated?
Formative (aka Process) Evaluation:
Done to help improve the project itself.
Gather information on how the project worked.
Data is collected about activities: What was done.
Summative (aka Outcome) Evaluation:
Done to determine what results were achieved.
Data is collected about outcomes (objectives; goals): What
happened.
16. What Measures Will Be Used?
Formative Evaluation:
Completion of planned activities
Adherence to proposed time lines
Meeting budget
Summative Evaluation:
Reaching a criterion
Change in knowledge, attitude, skill, behavior
17. Who will be evaluated?
Formative Evaluation:
Those responsible for doing activities/delivering
services and those participating in activities.
Faculty
Agency personnel
Students
Summative Evaluation:
Those who were expected to be impacted by activities.
Students
Clients
18. What data will be collected?
Formative Evaluation:
Program records
Observations
Activity logs
Satisfaction surveys
Summative Evaluation:
Observations
Interviews
Tests
Surveys/questionnaires
19. How will data be analyzed?
1) Qualitative analysis (more for formative)
1) Self-reports
2) Documentation
3) Description
4) Case Study
2) Quantitative analysis (more for summative)
1) Group comparison (see the sketch after this list)
2) Group change
3) Individual change
4) Comparison to population/reference
5) Analysis of relationships
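A minimal sketch of the first quantitative approach listed above (group comparison), assuming Python with SciPy available; the groups and scores are hypothetical, not data from any study in these slides:

# Hypothetical group comparison: independent-samples t-test on outcome scores
from scipy import stats

program_group = [14, 16, 15, 18, 17, 15]     # hypothetical scores, program participants
comparison_group = [12, 13, 11, 14, 12, 13]  # hypothetical scores, non-participants

res = stats.ttest_ind(program_group, comparison_group)  # compare group means
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")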
20. An Example
The Cosmic Ray Observatory Project (CROP)
Goal: Establish a statewide collaborative network of expert teachers
fully capable of continuing the project locally.
Objectives: Teachers will acquire knowledge about cosmic ray physics
and skill in high energy research methods.
Teachers will exhibit increased self-efficacy for conducting
CROP research and integrating CROP into their teaching.
Activity: High school physics teachers and students will attend
a 3-4 week hands-on summer research experience on cosmic ray
physics at UNL
21. Formative Evaluation
What activities were evaluated?
The specific components of the Summer Research Experience
What measures were used?
Completion of activities
Participant satisfaction
Participant evaluation of goal attainment
Participant evaluation of activity effectiveness
Who was evaluated?
Participants
What data was collected?
Interviews
Rating scales
How was data analyzed?
Content analysis of interview responses
Frequency and descriptive statistical analysis of rating scales.
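As a rough illustration of that last step, here is a short Python sketch (standard library only) of frequency and descriptive analysis of rating-scale responses; the five ratings are chosen to match the counts reported on the rating-scale results slide below, not raw data from the study:

# Frequency and descriptive statistics for 5-point effectiveness ratings
from collections import Counter
from statistics import mean, stdev

ratings = [5, 4, 4, 4, 3]  # 1 Very Effective, 3 Effective, 1 Somewhat Effective

print("Frequencies:", dict(sorted(Counter(ratings).items())))
print(f"M = {mean(ratings):.2f}, SD = {stdev(ratings):.2f}")  # M = 4.00, SD = 0.71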
22. Examples of Formative Measures
Interview Questions
What was the most effective part of the workshop? (coded response categories, with frequencies)
Hands-on work with detectors: 6
Information from classroom sessions: 4
Teacher Comments (by teacher with coded category(s) indicated):
– For me personal was the activities. The actual connecting and wiring and those
things. I don’t sit and take lectures very well. That’s just me. [Hands on work with
the detectors]
– Um, I think it was the classroom work. There was a good review for those of us that
have had physics and it was a good introduction for those that didn’t. [Information
from classroom sessions]
23. Examples of Formative Measures
Rating Scales
1. How effective do you think the workshop was in meeting its goals?
1 = Not Effective, 2 = Neither Effective nor Ineffective, 3 = Somewhat Effective,
4 = Effective, 5 = Very Effective
4. Indicate how USEFUL you think each of the following workshop components was, using the following scale.
1 = Very Unuseful, 2 = Unuseful, 3 = Somewhat Unuseful, 4 = Somewhat Useful,
5 = Useful, 6 = Very Useful
a. Classroom/lecture sessions on particle detectors and
experimental techniques.
b. Lab work sessions refurbishing and preparing detectors.
24. Examples of Formative Measures
Rating Scales
How effective do you think the workshop was in meeting its goals?
Responses: Very Effective (5) = 1, Effective (4) = 3, Somewhat Effective (3) = 1,
Neither Effective nor Ineffective (2) = 0, Not Effective (1) = 0; M = 4.00, SD = .71
Usefulness ratings (scale: 1 = Very Unuseful to 6 = Very Useful):
Classroom/lecture sessions on particle detectors and experimental techniques:
2 rated Useful (5), 3 rated Very Useful (6); M = 5.60, SD = .55
Lab work sessions refurbishing and preparing detectors:
1 rated Somewhat Useful (4), 3 rated Useful (5), 1 rated Very Useful (6); M = 5.00, SD = .71
25. Summative Evaluation
What Outcomes were evaluated?
Teachers’ increase in knowledge about cosmic ray physics and skill in high energy research methods
Teachers’ change in self-efficacy for conducting CROP research and integrating CROP into their teaching
What measures were used?
Knowledge gain
Achieving criteria level of knowledge/skill
Increase in self-efficacy
Who was evaluated?
Teachers
What data was collected?
Pre- and post-workshop tests of cosmic ray physics and research
Pre- and post-workshop self-efficacy ratings
How was data analyzed?
Dependent t-tests of pre-post scores
Comparing skill scores to criteria
26. Summative Evaluation
Knowledge Test Questions
1. The energy distribution of primary cosmic rays bombarding the earth has
been measured by a number of experiments. In the space below, sketch a
graph of the number of observed primary cosmic rays vs. cosmic ray energy,
and describe the distribution in a sentence or two.
2. Explain how a scintillation counter works, i.e. write down the sequence of
events from the passage of a charged particle through a scintillator to the
generation of an electric signal in a photomultiplier tube.
3. Describe some characteristic differences between electromagnetic showers
and hadronic showers created when particles impinge on a block of matter
or a cosmic ray enters the atmosphere. Hint: think in terms of the type of
particle which initiates the shower, the type of secondary particles in the
shower, the shape of the shower, depth penetration of the shower particles,
etc.
27. Summative Evaluation
Data Analysis
Table 9
Participants’ Pre- and Post-Test Mean Scores on Knowledge Tests
Teachers: df = 4; Pre-Test M = 5.00, SD = 2.69; Post-Test M = 19.60, SD = 1.71; t = 8.67*, ES = 6.64
Note. ES = effect size computed by Cohen's d in averaged pre- and post-test SD units. Teachers, n = 5.
*p < .01.
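The analysis behind Table 9 can be sketched as follows, assuming Python with SciPy; the individual teacher scores are hypothetical placeholders (only the summary statistics above come from the source), and the effect size follows the definition in the table note, the pre-post mean difference divided by the averaged pre- and post-test SDs:

# Dependent (paired) t-test of pre- vs. post-workshop knowledge scores
from statistics import mean, stdev
from scipy import stats

pre = [3, 4, 5, 6, 7]        # hypothetical pre-test scores for 5 teachers
post = [18, 19, 20, 20, 21]  # hypothetical post-test scores for the same teachers

res = stats.ttest_rel(post, pre)                                  # df = n - 1 = 4
es = (mean(post) - mean(pre)) / ((stdev(pre) + stdev(post)) / 2)  # effect size in averaged SD units
print(f"t({len(pre) - 1}) = {res.statistic:.2f}, p = {res.pvalue:.4f}, ES = {es:.2f}")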
28. Summative Evaluation
Self-Efficacy Questions
Please rate how confident you are about each of the following from 0 (completely
unconfident) to 100 (completely confident).
1. Your ability to set-up and maintain the CROP research
equipment at your school.
2. Your ability to conduct CROP research at your school.
3. Your ability to teach students at your school who
haven't attended the Summer Workshop how to
conduct CROP research at your school.
4. Your ability to design your own research projects for
your students utilizing the CROP research equipment.
5. Your ability to incorporate lessons and activities in
high-energy physics into your classes.
6. Your ability to create "hands-on" projects and activities for
students in your classes using the CROP research equipment.
29. Summative Evaluation
Data Analysis
Table 11
Participants’ Pre- and Post-Test Mean Self-Efficacy Scores
Conducting CROP Activities: df = 4; Pre-Test M = 41.00, SD = 31.58; Post-Test M = 77.80, SD = 16.10; t = 3.06*, ES = 1.54
Integrating CROP Into Classes: df = 4; Pre-Test M = 45.00, SD = 31.37; Post-Test M = 79.25, SD = 17.31; t = 3.32*, ES = 1.41
Utilizing Distance Education: df = 4; Pre-Test M = 56.67, SD = 17.48; Post-Test M = 70.33, SD = 13.35; t = 4.08*, ES = .89
Note. ES = effect size computed by Cohen's d in averaged pre- and post-test SD units. Teachers, n = 5.
*p < .01.
30. Formative Evaluation Example
To inform development of the campus-specific Web-based brief
intervention versions, student feedback will be obtained. Beta versions
will be evaluated by recruiting a panel of students from each
participating campus. These students will complete the intervention and
provide verbal and written feedback on their reactions to the program
and their suggestions for improvement. Adjustments to the program will
be made based on student feedback.
31. Summative Evaluation Example
Students will complete the web-based brief alcohol intervention
(pre-test). Approximately 6 weeks later, they will again complete
the web-based brief alcohol intervention (post-test). Change will
be determined by comparing post-test scores to pre-test scores
using a Repeated Measures Analysis of Variance (ANOVA). Success
will be determined by a statistically significant decrease in drinking
and driving (Objective 1) and riding with a driver who has been
drinking (Objective 2), with an effect size of at least a 10% pre- to
post-test decrease for drunk driving and a 6% decrease for riding
with a drinking driver.
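A minimal sketch of the analysis described above, assuming Python with pandas and statsmodels; the student scores and column names are hypothetical, and with only two time points the repeated-measures ANOVA is equivalent to a dependent t-test:

# Repeated Measures ANOVA comparing pre- and post-test scores
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per student per time point (hypothetical values)
data = pd.DataFrame({
    "student": [1, 2, 3, 4, 5] * 2,
    "time":    ["pre"] * 5 + ["post"] * 5,
    "score":   [6, 5, 7, 4, 6,    # pre-test drinking-and-driving scores
                4, 3, 5, 3, 4],   # post-test scores
})

result = AnovaRM(data, depvar="score", subject="student", within=["time"]).fit()
print(result)  # F test for the pre/post (time) effect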
33. Formulating the Problem: Issues of
Measurement
– Problem: What is the purpose of the intervention to be evaluated?
– This question often produces vague results.
– A common problem is measuring the “unmeasurable.”
– Evaluation research is a matter of finding out whether something is there
or not there, whether something happened or did not happen.
– To conduct evaluation research, we must be able to operationalize,
observe, and measure.
34. What is the outcome, or the response variable?
– If a social program is intended to accomplish something, we must be
able to measure that something.
– It is essential to achieve agreements on definitions in advance.
– In some cases you may find that the definitions of a problem and a
sufficient solution are defined by law or by agency regulations; if so
you must be aware of such specifications and accommodate them.
35. – Whatever the agreed-upon definitions, you must also
achieve agreement on how the measurements will be
made.
– There may be several outcome measures, for instance
surveys of attitudes and behaviors, existing statistics, use
of other resources.
36. Measuring Experimental Contexts
– Measuring the dependent variable directly involved in the
experimental program is only a beginning.
– It is often appropriate and important to measure those aspects
of the context of an experiment researchers think might affect
the experiment.
– For example, what is happening in the larger society, beyond the
experiment itself, may affect the experimental group.
37. Specifying Interventions
– Besides making measurements relevant to the outcomes of a
program, researchers must measure the program intervention—
the experimental stimulus.
– The experimental stimulus is the program intervention.
– If the research design includes an experimental and a control
group, handling the experimental stimulus is straightforward: it is
captured by group assignment.
38. – Assigning a person to the experimental group is the same as scoring that person
“yes” on the stimulus, and assigning a person to the control group is the same as scoring that person “no.”
– Considerations: who participates fully; who misses participation in the program
periodically; who misses participation in the program a lot?
– Measures may need to be included to measure level of participation.
– In practice, specifying the intervention may be more difficult than this simple yes/no scoring suggests.
– The relevant factors should be identified and addressed thoroughly.
39. Specifying the Population
– It is important to define the population of possible subjects for whom the
program is appropriate.
– Ideally, all or a sample of appropriate subjects will then be assigned to
experimental and control groups as warranted by the study design.
– Beyond defining the relevant population, the researcher should make fairly
precise measurements of the variables considered in the definition.
40. New versus Existing Measures
– If the study addresses something that’s never been measured before, the
choice is easy—new measures.
– If the study addresses something that others have tried to measure, the
researcher will need to evaluate the relative worth of various existing
measurement devices in terms of her or his specific research situation
and purpose.
41. – Of greater scientific significance, measures that have
been used frequently by other researchers carry a body
of possible comparisons that might be important to the
current evaluation.
– Finally, measures with a long history of use usually have
known degrees of validity and reliability, but newly
created measures will require pretesting or will be used
with considerable uncertainty.
42. – Advantages of creating measures:
– They can offer greater relevance and validity than using
existing measures.
– Advantages of using existing measures:
– Creating good measures takes time and energy, both of which
could be saved by adopting an existing technique.
43. Operationalizing Success/Failure
– Potentially one of the most taxing aspects of evaluation research
is determining whether the program under review succeeded or
failed. Definitions of “success” and “failure” can be rather
difficult.
44. Cost-benefit analysis
– How much does the program cost in relation to what it returns in
benefits?
– If the benefits outweigh the cost, keep the program going.
– If the reverse, ‘junk it’.
– Unfortunately, this analysis is not appropriate if we think only in
terms of money, since many program benefits cannot be expressed in monetary terms.
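A minimal sketch of this basic logic, with entirely made-up figures, is shown below; as noted above, a purely monetary comparison is usually too narrow:

# Simple benefit-cost comparison (hypothetical monetary figures only)
program_cost = 250_000.0        # hypothetical annual program cost
monetized_benefits = 310_000.0  # hypothetical annual benefits expressed in money

ratio = monetized_benefits / program_cost
print(f"Benefit-cost ratio = {ratio:.2f}")
print("Benefits outweigh costs" if ratio > 1 else "Costs outweigh benefits")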
45. – Researchers must take measurement quite seriously in
evaluation research, carefully determining all the variables to
be measured and getting appropriate measures for each.
– Such decisions are often not purely scientific ones.
– Evaluation researchers often must work out their measurement
strategy with the people responsible for the program being
evaluated.
– There is also a political aspect.
47. Types of Evaluation Research Designs
– Evaluation research is not itself a method, but rather one
application of social research methods. As such, it can involve
any of several research designs. To be discussed:
– 1. Experimental designs
– 2. Quasi-experimental designs
– 3. Qualitative evaluations
48. – Experimental Designs
– Many of the experimental designs introduced in Chapter 8
can be used in evaluation research.
49. – Quasi-Experimental Designs: distinguished from “true”
experiments primarily by the lack of random assignment of
subjects to an experimental and control group. In evaluation
research, it’s often impossible to achieve such an assignment
of subjects.
50. – Rather than forgo evaluation altogether, there are some
other possibilities.
– Time-Series Designs
– Nonequivalent Control Groups
– Multiple Time-Series Designs
51. – Nonequivalent Control Groups:
– Using an existing “control” group that appears similar to the
experimental group, used when researchers cannot create
experimental and control groups by random assignment from a
common pool.
– A nonequivalent control group can provide a point of
comparison even though it is not formally a part of the study.
52. – Multiple Time-Series Designs:
– Using more than one time-series analysis.
– These are an improved version of the nonequivalent control group
design.
– This method is not as good as the one in which control groups are
randomly assigned, but it is an improvement over assessing the
experimental group’s performance without any comparison.
53. – Qualitative Evaluations
– Evaluations can be less structured and more qualitative.
– In-depth interviews sometimes yield important and often
unexpected information.
56. Steps in Creating a Logic Model
1) Clarify what the goals of the project/ program are.
2) Clarify what objectives the project should achieve.
3) Specify what program activities will occur.
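One way to see the three steps together is as a simple data structure; the following Python sketch uses class and field names of my own choosing (not a standard notation) and is filled in with the CROP example that follows in these slides:

# Goal / objectives / activities recorded as a simple logic-model structure
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    goal: str                                            # step 1: overall goal
    objectives: list[str] = field(default_factory=list)  # step 2: objectives to achieve
    activities: list[str] = field(default_factory=list)  # step 3: program activities

crop = LogicModel(
    goal=("Establish a statewide collaborative network of expert teachers "
          "fully capable of continuing the project locally."),
    objectives=[
        "Teachers will acquire knowledge about cosmic ray physics and skill "
        "in high energy research methods.",
        "Teachers will exhibit increased self-efficacy for conducting CROP "
        "research and integrating CROP into their teaching.",
    ],
    activities=[
        "High school physics teachers and students will attend a 3-4 week "
        "hands-on summer research experience on cosmic ray physics at UNL.",
    ],
)
print(crop.goal)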
57. Goal Clarification
High school physics teachers and students will attend a 3-4 week
hands-on summer research experience on cosmic ray physics at
UNL.
Is this a goal?
58. Goal Clarification
Establish a statewide collaborative network of expert teachers fully
capable of continuing the project locally.
59. Developing Objectives
Goal: Establish a statewide collaborative network of expert
teachers fully capable of continuing the project locally.
Objectives
1. Teachers will acquire knowledge about cosmic ray
physics and skill in high energy research methods.
2. Teachers will exhibit increased self-efficacy for
conducting CROP research and integrating CROP into
their teaching.
60. CROP Logic Model
Goal: Establish a statewide collaborative network of expert teachers fully capable of
continuing the project locally.
Objectives:
1. Teachers will acquire knowledge about cosmic ray physics and skill in high energy
research methods.
2. Teachers will exhibit increased self-efficacy for conducting CROP research and
integrating CROP into their teaching.
Activity: High school physics teachers and students will attend a 3-4 week hands-on
summer research experience on cosmic ray physics at UNL
61. Evaluating the Logic Model
– Goal – Objective Correspondence
Are objectives related to the overall goal?
– Goal – Activity Correspondence
Do anticipated activities adequately implement the goals?
– Activity – Objective Correspondence
Will program activities result in achieving objectives?
62. CROP Logic Model
Goal: Establish a statewide collaborative network of expert
teachers fully capable of continuing the project locally.
Objectives: Teachers will acquire knowledge about cosmic ray physics and skill
in high energy research methods.
Teachers will exhibit increased self-efficacy for conducting CROP
research and integrating CROP into their teaching.
Activity: High school physics teachers and students will attend a 3-4
week hands-on summer research experience on cosmic ray physics at
UNL
63. An Example
GOAL 1: Increase the availability of attractive student
centered social activities located both on and off the NU
campus.
Objective 1.1: Increase by 15% from baseline the
number of students aware of campus and
community entertainment options available to NU
students.
Activity: Develop and maintain an interactive web site
describing social and entertainment options for
students.
64. Another Example
GOAL 7: Reduce high-risk alcohol marketing and
promotion practices.
Objective 7.3: Reduce by 25% from baseline, the volume of alcohol
advertisements in the Daily Nebraskan, The Reader and Ground Zero
that mention high-risk marketing and promotion practices.
Activity: Work with the media to encourage at least 3 newspaper articles or
television news stories in the Lincoln market each school year
concerning high-risk marketing and promotion practices.
65. Logic Model Example
Goal 1: Create active, operational campus task forces at the 11 remaining state-funded
institutions of higher education serving undergraduate populations.

Objective 1.1: Recruit support from upper administration at each institution to commit
personnel to task force coordination and participation.
Methodology: Administrative luncheon presentations to institution chancellors and
presidents on statewide initiatives hosted by University of Nebraska President James
Milliken; follow-up identifying key contacts.
Completion Date: By November 2005.

Objective 1.2: Provide technical assistance and training to assist campuses in campus
task force recruitment, organization and development.
Methodology: Drive-in workshop on coalition development; follow-up teleconferences
with organizers, internet resources.
Completion Date: By December 2005.

Objective 1.3: Provide regular teleconferencing facilitation to allow interaction between
task force coordinators at participating campuses.
Methodology: Monthly telephone conference of campus task force organizers; agenda
that allows sharing of issues, problems, needs, and accomplishments.
Completion Date: Ongoing through January 2007.
66. Logic Model Worksheet
Columns: Goals | Activities | Objectives (Outcome) | Indicators/Measures | Who Evaluated | Data Sources | Data Analysis
68. – The Social Context
– Evaluation research has a special propensity for
running into problems.
– Logistical problems
– Ethical problems
69. Logistical Problems
– Problems associated with getting subjects to do what they are
supposed to do, getting research instruments distributed and
returned, and other seemingly unchallenging tasks that can prove
to be very challenging.
– The special, logistical problems of evaluation research grow out of
the fact that it occurs within the context of real life.
70. – Although evaluation research is modeled after the experiment—
which suggests that the researchers have control over what
happens—it takes place within frequently uncontrollable daily
life.
– Lack of control can create real dilemmas for the researchers.
71. Administrative control:
– The logistical details of an evaluation project often fall to program
administrators.
– What happens when the experimental stimulus changes in the middle of the
experiment due to unforeseen problems (e.g. escaping convicts;
inconsistency of attendance, or replacing original subjects with substitutes)?
– Some of the data will reflect the original stimulus; other data will reflect the
modification.
72. Ethical Issues
– Ethics and evaluation are intertwined in many ways.
– Sometimes the social interventions being evaluated raise ethical
issues. They may involve political, ideological and ethical issues
about the topic itself
– Maybe the experimental program is of great value to those
participating in it.
– But what about the control group, which is not receiving help?
73. Use of Research Results
– Because the purpose of evaluation research is to determine the
success or failure of social interventions, you might think it
reasonable that a program would automatically be continued or
terminated based on the results of the research.
– It’s not that simple.
– Other factors intrude on the assessment of evaluation research
results, sometimes blatantly and sometimes subtly.
74. – Three important reasons why the implications of evaluation
research results are not always put into practice:
– The implications may not always be presented in a way that
nonresearchers can understand.
– Evaluation results sometimes contradict deeply held beliefs.
– People may have vested interests in the programs under way.
75. Social Indicators Research
– Combining evaluation research with the analysis of existing data.
– A rapidly growing field in social research involves the development and
monitoring of social indicators, aggregated statistics that reflect the social
condition of a society or social subgroup.
– Researchers use indicators to monitor social life.
– It’s possible to use social indicators data for comparison across groups
either at one time or across some period of time.
– Often doing both sheds the most light on the subject.
76. – The use of social indicators is proceeding on two fronts:
– Researchers are developing ever more-refined indicators; finding
which indicators of a general variable are the most useful in
monitoring social life
– Research is being devoted to discovering the relationships among
variables within whole societies
77. – Evaluation research provides a means for us to learn right away
whether a particular “tinkering” really makes things better.
– Social indicators allow us to make that determination on a broad
scale; coupling them with computer simulation opens up the
possibility of knowing how much we would like a particular
intervention without having to experience its risks.
78. General References
– Frechtling, J. A. (2007). Logic Modeling Methods in Program Evaluation. San
Francisco, CA: Jossey-Bass/Wiley.
– Patton, M. Q. (2002). Qualitative Research and Evaluation Methods.
Thousand Oaks, CA: Sage.