1. Software project estimation involves decomposing a project into smaller problems like major functions and activities. Estimates can be based on similar past projects, decomposition techniques, or empirical models.
2. Accurate estimates depend on properly estimating the size of the software product using techniques like lines of code, function points, or standard components. Baseline metrics from past projects are then applied to the size estimates.
3. Decomposition techniques involve estimating the effort needed for each task or function and combining them. Process-based estimation decomposes the software process into tasks while problem-based estimation decomposes the problem.
Source Code Metrics and Other Maintenance Tools and Techniques (Siva Priya)
The document discusses two source code metrics: Halstead's effort equation and McCabe's cyclomatic complexity measure. Halstead's metrics are based on counts of operators, operands, unique operators, and unique operands in source code. McCabe's measure defines the complexity of a program's control flow graph based on the number of edges, nodes, and connected components. The document also mentions that software maintenance involves a range of activities from code modification to tracking complexity metrics over time.
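The two metrics summarized above follow directly from their definitions; a minimal sketch, assuming the operator/operand and graph counts come from a separate source-code scanner:

```python
import math

def halstead_effort(n1, n2, N1, N2):
    """Halstead effort from unique operators (n1), unique operands (n2),
    total operator occurrences (N1), and total operand occurrences (N2)."""
    vocabulary = n1 + n2                     # n = n1 + n2
    length = N1 + N2                         # N = N1 + N2
    volume = length * math.log2(vocabulary)  # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)        # D = (n1/2) * (N2/n2)
    return difficulty * volume               # E = D * V

def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's measure V(G) = E - N + 2P for a control-flow graph
    with E edges, N nodes, and P connected components."""
    return edges - nodes + 2 * components
```

For example, a routine whose flow graph has 9 edges and 8 nodes has V(G) = 3, i.e. three linearly independent paths through the code.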
The document provides an overview of software cost estimation, outlining various methods used including algorithmic models like COCOMO, expert judgement, top-down and bottom-up approaches, and estimation by analogy. It discusses COCOMO in detail, including the original COCOMO 81 model and updated COCOMO II model, and emphasizes the importance of calibration for accurate estimates.
Static modeling represents the static elements of software such as classes, objects, and interfaces and their relationships. It includes class diagrams and object diagrams. Class diagrams show classes, attributes, and relationships between classes. Object diagrams show instances of classes and their properties. Dynamic modeling represents the behavior and interactions of static elements through interaction diagrams like sequence diagrams and communication diagrams, as well as activity diagrams.
This document discusses several software cost estimation techniques:
1. Top-down and bottom-up approaches - Top-down estimates system-level costs while bottom-up estimates costs of each module and combines them.
2. Expert judgment - Widely used technique where experts estimate costs based on past similar projects. It utilizes experience but can be biased.
3. Delphi estimation - Estimators anonymously provide estimates in rounds to reach consensus without group dynamics influencing individuals.
4. Work breakdown structure - Hierarchical breakdown of either the product components or work activities to aid bottom-up estimation.
The data design action translates data objects into data structures at the software component level.
Data design is the first and most important design activity. The main issue here is selecting appropriate data structures; that is, data design focuses on the definition of data structures.
Data design is a process of gradual refinement, from the coarse "What data does your application require?" to the precise data structures and processes that provide it. With a good data design, your application's data access is fast, easily maintained, and can gracefully accept future data enhancements.
Introduction to Software Project Management (Reetesh Gupta)
This document provides an introduction to software project management. It defines what a project and software project management are, and discusses the key characteristics and phases of projects. Software project management aims to deliver software on time, within budget and meeting requirements. It also discusses challenges that can occur in software projects related to people, processes, products and technology. Effective project management focuses on planning, organizing, monitoring and controlling the project work.
The document discusses configuration management for software engineering projects. It covers topics such as configuration management planning, change management, version and release management, and the use of CASE tools to support configuration management. Configuration management aims to manage changes to software products and control system evolution through activities like change control, version control, and configuration auditing.
Software project planning involves defining roles and responsibilities, ensuring work aligns with business objectives, and checking schedules and requirements feasibility. It requires risk analysis, tracking the project plan, and meeting quality standards. Issues can include unclear requirements, time/budget mismanagement, personnel problems, and lack of management support. Key activities are identifying requirements, estimating costs/risks, preparing a project charter and plan, and commencing the project. The master schedule summarizes deliverables and milestones based on a master project plan and detailed work schedules.
The document discusses software estimation and project planning. It covers estimating project cost and effort through decomposition techniques and empirical estimation models. Specifically, it discusses:
1) Decomposition techniques involve breaking down a project into functions and tasks to estimate individually, such as estimating lines of code or function points for each piece.
2) Empirical estimation models use historical data from past projects to generate estimates.
3) Key factors that affect estimation accuracy include properly estimating product size, translating size to effort/time/cost, and accounting for team abilities and requirements stability.
This document discusses different software estimation techniques. It describes what software estimation is, why it is needed, and some common difficulties in estimation. It then outlines factors to consider like product objectives, corporate assets, and project constraints. It discusses methods for estimating lines of code or function points. Function point analysis and the unadjusted and value adjustment components are explained. Models for calculating effort and cost using lines of code and function points are provided, including the COCOMO model and its organic, semi-detached, and embedded project types.
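The function point computation described (an unadjusted count scaled by the value adjustment components) can be sketched as follows. The weights are the commonly cited average-complexity values and the 0.65/0.01 formula is the standard FPA adjustment, but verify both against the counting manual you actually use:

```python
# Average-complexity weights for the five information-domain counts (assumed).
FP_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def function_points(counts, adjustment_factors):
    """counts: dict keyed like FP_WEIGHTS.
    adjustment_factors: the 14 value-adjustment ratings, each 0..5."""
    ufp = sum(counts[k] * FP_WEIGHTS[k] for k in counts)  # unadjusted FP
    vaf = 0.65 + 0.01 * sum(adjustment_factors)           # value adjustment factor
    return ufp * vaf
```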
The document discusses organization and team structures for software development organizations. It explains the differences between functional and project formats. The functional format divides teams by development phase (e.g. requirements, design), while the project format assigns teams to a single project. The document notes advantages of the functional format include specialization, documentation, and handling staff turnover. However, it is not suitable for small organizations with few projects. The document also describes common team structures like chief programmer, democratic, and mixed control models.
Software Project Management (monitoring and control) (IsrarDewan)
Monitoring and controlling are the processes needed to track, review, and regulate the progress and performance of the project. They also identify any areas where changes to the project management method are required and initiate those changes.
The COCOMO model is a widely used software cost estimation model that predicts development effort and schedule based on project attributes. It includes basic, intermediate, and detailed models of increasing complexity. The intermediate model estimates effort as a function of source lines of code and cost drivers. The detailed model further incorporates the impact of cost drivers on development phases. COCOMO 2 expands on this with application composition, early design, reuse, and post-architecture models for different project stages.
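The basic COCOMO 81 equations underlying the models above can be sketched with their published mode coefficients (effort in person-months from KLOC, schedule in months from effort):

```python
# Basic COCOMO 81 coefficients: (a, b, c, d) per project mode.
COCOMO81 = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Effort E = a * KLOC^b (person-months); schedule T = c * E^d (months)."""
    a, b, c, d = COCOMO81[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule
```

The intermediate model would further multiply `effort` by the product of the fifteen cost-driver ratings, which is omitted here.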
Software Engineering: A Practitioner's Approach, 8th edition, Pressman solutions ... (Drusilla918)
This document provides an overview of software maintenance. It discusses that software maintenance is an important phase of the software life cycle that accounts for 40-70% of total costs. Maintenance includes error correction, enhancements, deletions of obsolete capabilities, and optimizations. The document categorizes maintenance into corrective, adaptive, perfective and preventive types. It also discusses the need for maintenance to adapt to changing user requirements and environments. The document describes approaches to software maintenance including program understanding, generating maintenance proposals, accounting for ripple effects, and modified program testing. It discusses challenges like lack of documentation and high staff turnover. The document also introduces concepts of reengineering and reverse engineering to make legacy systems more maintainable.
Risk management involves identifying potential problems, assessing their likelihood and impacts, and developing strategies to address them. There are two main risk strategies - reactive, which addresses risks after issues arise, and proactive, which plans ahead. Key steps in proactive risk management include identifying risks through checklists, estimating their probability and impacts, developing mitigation plans, monitoring risks and mitigation effectiveness, and adjusting plans as needed. Common risk categories include project risks, technical risks, and business risks.
Estimating involves forecasting the time and cost to complete project deliverables. There are two main types of estimates: bottom-up estimates require more effort but rely on those familiar with the work, while top-down estimates can be made by managers without direct experience. Software cost and effort estimation is not an exact science due to many variable factors. Key parameters that affect estimates include resources, time, human skills, and cost. Common software estimation techniques include top-down and bottom-up methods such as the three-point estimation technique.
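The three-point technique mentioned above combines optimistic, most likely, and pessimistic values; a common PERT-style weighting is:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT-style expected value E = (O + 4M + P) / 6
    and standard deviation SD = (P - O) / 6."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev
```

For instance, estimates of 4, 6, and 14 person-days give an expected value of 7 person-days, pulled slightly toward the pessimistic tail.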
This document discusses major factors that influence software cost estimation. It identifies programmer ability, product complexity, product size, available time, required reliability, and level of technology as key factors. It provides details on how each factor affects software cost, including equations to estimate programming time and effort based on variables like source lines of code and developer months. Program complexity is broken into three levels: application, utility, and system software. The document also discusses how underestimating code size and inability to compress development schedules can impact cost estimates.
This presentation covers the design concepts of software engineering. It is helpful when designing a new product; the design concepts given in the slides should be taken into account.
2.6 Empirical estimation models & The make-buy decision.ppt (THARUNS44)
The document discusses empirical estimation models, including the structure of estimation models, the COCOMO II model, and the software equation. It also covers making a make/buy decision by creating a decision tree to calculate the expected cost of building, reusing, buying, or contracting a software project. Outsourcing is discussed as either a strategic or tactical decision.
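The make/buy decision tree described above reduces to an expected-cost comparison across branches; all probabilities and costs in this sketch are hypothetical placeholders, not values from the document:

```python
def expected_cost(outcomes):
    """Expected cost of one decision-tree branch, given (probability, cost) pairs."""
    return sum(p * cost for p, cost in outcomes)

# Hypothetical branches: each lists (probability, cost) outcomes for that choice.
paths = {
    "build": [(0.30, 380_000), (0.70, 450_000)],  # simple vs. difficult build
    "buy":   [(0.40, 210_000), (0.60, 310_000)],  # minor vs. major changes
    "reuse": [(0.40, 275_000), (0.60, 490_000)],  # minor vs. major rework
}
best = min(paths, key=lambda k: expected_cost(paths[k]))  # lowest expected cost wins
```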
The document discusses some key issues with conventional software management approaches like the waterfall model. It notes that software development is unpredictable and that management discipline is more important for success than technology. Some problems with the waterfall model are late risk resolution, adversarial stakeholder relationships due to rigid documentation requirements, and a focus on documents over engineering work. The document also provides metrics on the relative costs of development versus maintenance and how people are a major factor in productivity.
The document discusses project planning in software engineering. It defines project planning and its importance. It describes the project manager's responsibilities which include project planning, reporting, risk management, and people management. It discusses challenges in software project planning. The RUP process for project planning is then outlined which involves creating artifacts like the business case and software development plan. Risk management is also a key part of project planning.
Decomposition Technique in Software Engineering (Bilal Hassan)
The document discusses different techniques for estimating software project costs and effort, including decomposition, sizing, and function point analysis. It provides an example of estimating the lines of code and function points for a mechanical CAD software project. Estimates are developed by decomposing the problem into smaller elements and tasks, and estimating the effort required for each. The accuracy of estimates depends on properly sizing the software and having reliable past project metrics.
The document discusses different types of software metrics that can be used to measure various aspects of software development. Process metrics measure attributes of the development process, while product metrics measure attributes of the software product. Project metrics are used to monitor and control software projects. Metrics need to be normalized to allow for comparison between different projects or teams. This can be done using size-oriented metrics that relate measures to the size of the software, or function-oriented metrics that relate measures to the functionality delivered.
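The size- and function-oriented normalization described above amounts to dividing a raw measure by the chosen size denominator; defect counts serve here as an assumed example measure:

```python
def per_kloc(measure, kloc):
    """Size-oriented normalization: a raw measure (e.g. defects)
    per thousand lines of code."""
    return measure / kloc

def per_function_point(measure, fp):
    """Function-oriented normalization: the same measure per function point."""
    return measure / fp
```

Either denominator allows comparison across projects of different sizes; the choice depends on whether LOC counts or FP counts are available and trusted.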
The document discusses critical systems and system dependability. It defines critical systems as systems where failure could result in significant economic losses, damage, or threats to human life. It describes four dimensions of dependability for critical systems: availability, reliability, safety, and security. It emphasizes that critical systems require trusted development methods to achieve high dependability.
The document discusses staffing level estimation over the course of a software development project. It describes how the number of personnel needed varies at different stages: a small group is needed for planning and analysis, a larger group for architectural design, and the largest number for implementation and system testing. It also references models like the Rayleigh curve and Putnam's interpretation that estimate personnel levels over time. Tables show estimates for the distribution of effort, schedule, and personnel across activities for different project sizes. The key idea is that staffing requirements fluctuate throughout the software life cycle, with peaks during implementation and testing phases.
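The Rayleigh curve referenced above can be sketched in one common parameterization, with staffing peaking at t_peak and the area under the curve equal to the total effort (Putnam's model uses the same family of curves, though his parameterization differs):

```python
import math

def rayleigh_staffing(t, total_effort, t_peak):
    """Staffing level at time t on a Rayleigh curve whose peak is at t_peak
    and whose total area equals total_effort."""
    return (total_effort / t_peak ** 2) * t * math.exp(-t ** 2 / (2 * t_peak ** 2))
```

The curve rises from zero, peaks during implementation/testing, and decays slowly, matching the staffing profile the summary describes.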
This document discusses several software design techniques: stepwise refinement, levels of abstraction, structured design, integrated top-down development, and Jackson structured programming. Stepwise refinement is a top-down technique that decomposes a system into more elementary levels. Levels of abstraction designs systems as layers with each level performing services for the next higher level. Structured design converts data flow diagrams into structure charts using design heuristics. Integrated top-down development integrates design, implementation, and testing with a hierarchical structure. Jackson structured programming maps a problem's input/output structures and operations into a program structure to solve the problem.
Estimation of resources, cost, and schedule for a software engineering effort requires experience, access to good historical information, and the courage to commit to quantitative predictions when only qualitative information exists. Covers Halstead's measure, the COCOMO model, and the COCOMO II model among the estimation techniques used for software development and maintenance.
The document provides an overview of software project estimation techniques. It discusses that estimation involves determining the money, effort, resources and time required to build a software system. The key steps are: describing product scope, decomposing problems, estimating sub-problems using historical data and experience, and considering complexity and risks. It also covers decomposition techniques, empirical estimation models like COCOMO II, and factors considered in estimation like resources, feasibility and risks.
This document discusses software project management and estimation techniques. It covers:
- Project management involves planning, monitoring, and controlling people and processes.
- Estimation approaches include decomposition techniques and empirical models like COCOMO I & II.
- COCOMO I & II models estimate effort based on source lines of code and cost drivers. They include basic, intermediate, and detailed models.
- Other estimation techniques discussed include function point analysis and problem-based estimation.
SE - Lecture 11 - Software Project Estimation.pptxTangZhiSiang
This document discusses software project estimation. It begins by outlining the major activities of software project planning, which includes estimation. It then describes the estimation process, which involves predicting time, cost, and resources required. Several estimation techniques are discussed, including using historical metrics, task breakdown, size estimates, and automated tools. Accuracy depends on properly defining scope, available metrics, and team abilities. The document provides examples of using lines of code and function point approaches to estimate effort and cost.
FUNDAMENTALS OF software developement and a detail outcome of the software based on the project management and the various metrics and measurements development in software engineering
This document describes a new methodology called Extreme Software Estimation (XSoft Estimation) for accurately estimating software projects. XSoft Estimation uses COSMIC-Full Function Points (FFP) to measure software size and then applies a model of Development Effort = Size * Variable to estimate effort, cost, and schedule. The methodology was tested on 5 projects measuring their size in CFP units and comparing actual development time between expert and skilled teams, different programming languages and layers. The results showed expert teams and some languages/layers took significantly less time than others for the same sized functionality. XSoft Estimation aims to improve on past methods by basing estimates directly on measured functionality using COSMIC FFP.
Abstract The management of software cost, development effort and project planning are the key aspects of software development. Throughout the sixty-odd years of software development, the industry has gone at least four generations of programming languages and three major development paradigms. Still the total ability to move consistently from idea to product is yet to be achieved. In fact, recent studies document that the failure rate for software development has risen almost to 50 percent. There is no magic in managing software development successfully, but a number of issues related to software development make it unique. The basic problem of software development is risky. Some example of risk is error in estimation, schedule slips, project cancelled after numerous slips, high defect rate, system goes sour, business misunderstanding, false feature rich, staff turnover. XSoft Estimation addresses the risks by accurate measurement. A new methodology to estimate using software COSMIC-Full Function Point and named as EXtreme Software Estimation (XSoft Estimation). Based on the experience gained on the original XSoft project develpment, this paper describes what makes XSoft Estimation work from sizing to estimation. Keywords: -COSMIC function size unit, XSoft Estimation, XSoft Measurement, Cost Estimation.
How Should We Estimate Agile Software Development Projects and What Data Do W...Glen Alleman
Estimating techniques for an acquisition program progresses from analogies to actual cost method as the program matures and more information is known. The analogy method is most appropriate early in the program life cycle when the system is not yet fully defined.
The document discusses software cost estimation and introduces the COCOMO model. It describes that COCOMO takes into account project attributes, product attributes, personnel attributes, and hardware attributes to predict development effort. It also explains that algorithmic cost models can be used to quantitatively analyze options by comparing the costs of different development strategies. Finally, it notes that project duration is independent of team size, and adding people too quickly can actually lead to schedule delays.
The peer-reviewed International Journal of Engineering Inventions (IJEI) is started with a mission to encourage contribution to research in Science and Technology. Encourage and motivate researchers in challenging areas of Sciences and Technology.
The document discusses software project management. It defines a software project as the complete process of software development from requirements gathering through testing and maintenance. A software project manager closely monitors the development process, prepares plans, arranges resources, and manages communication between team members. Software project management involves planning, scope management, estimation of size, effort, time and cost, and other activities. Estimation techniques include decomposition by functions or activities and empirical models. Lines of code is a common size metric but does not consider complexity. Effort estimation forecasts time required and project estimation uses a stepwise decomposition approach.
This document provides guidance on estimating the effort required for a software development project. It discusses estimating human effort by rating functions as easy, medium, hard, or complex and assigning effort estimates in days. Additional activities like analysis, design, and testing are estimated as percentages of the build effort. Hardware requirements like processor power, disk space, and RAM are also addressed at a high level. The overall message is that project estimation is imprecise but essential, and estimates should be revisited regularly as more information becomes available.
The document discusses various techniques for estimating software effort, including parametric models, expert judgment, analogy, and bottom-up and top-down approaches. It describes the bottom-up approach as breaking a project into tasks, estimating effort for each, and summing totals. Top-down uses parametric models relating effort to system size and productivity factors. Function point analysis and COSMIC function points are presented as top-down methods to measure system size independently of programming language.
The document provides information on project planning and scope determination activities. It describes conducting a preliminary meeting between the customer and developer to determine the overall goals and functionality of the proposed software system through a set of context and follow up questions. It also discusses determining the technical, cost, time and risk feasibility of the project. The document outlines estimating the required resources including human resources with the necessary skills, reusable software components, and development environment and network resources. It provides decomposition techniques for estimating the cost and effort of the project by breaking it down into major functions and activities.
This document outlines the 10 step process for software project planning. It begins with selecting the project and identifying its scope and objectives. It then covers identifying the project infrastructure, analyzing project characteristics, and identifying products and activities. Steps also include estimating effort for each activity, identifying risks, allocating resources, and reviewing/publicizing the plan. Execution then involves lower level planning. The document also discusses software effort estimation techniques such as algorithmic models, expert judgment, analogy, and top-down and bottom-up approaches.
Estimation determines the resources needed to build a system and involves estimating the software size, effort, time, and cost. It is based on past data, documents, assumptions, and risks. The main steps are estimating the software size, effort, time, and cost. Software size can be estimated in lines of code or function points. Effort estimation calculates person-hours or months based on software size using formulas like COCOMO-II. Cost estimation considers additional factors like hardware, tools, personnel skills, and travel. Techniques for estimation include decomposition and empirical models like Putnam and COCOMO, which relate size to time and effort.
AN APPROACH FOR SOFTWARE EFFORT ESTIMATION USING FUZZY NUMBERS AND GENETIC AL...csandit
One of the most critical tasks during the software development life cycle is that of estimating the effort and time involved in the development of the software product. Estimation may be performed by many ways such as: Expert judgments, Algorithmic effort estimation, Machine
learning and Analogy-based estimation. In which Analogy-based software effort estimation is the process of identifying one or more historical projects that are similar to the project being developed and then using the estimates from them. Analogy-based estimation is integrated with Fuzzy numbers in order to improve the performance of software project effort estimation during
the early stages of a software development lifecycle. Because of uncertainty associated with attribute measurement and data availability, fuzzy logic is introduced in the proposed model.But hardly a historical project is exactly same as the project being estimated due to some distance associated in similarity distance. This means that the most similar project still has a
similarity distance with the project being estimated in most of the cases. Therefore, the effort needs to be adjusted when the most similar project has a similarity distance with the project being estimated. To adjust the reused effort, we build an adjustment mechanism whose
algorithm can derive the optimal adjustment on the reused effort using Genetic Algorithm. The proposed model Combine the fuzzy logic to estimate software effort in early stages with Genetic algorithm based adjustment mechanism may result to near the correct effort estimation.
AN APPROACH FOR SOFTWARE EFFORT ESTIMATION USING FUZZY NUMBERS AND GENETIC AL...cscpconf
One of the most critical tasks during the software development life cycle is that of estimating the effort and time involved in the development of the software product. Estimation may be performed by many ways such as: Expert judgments, lgorithmic effort estimation, Machine learning and Analogy-based estimation. In which Analogy-based software effort estimation is
the process of identifying one or more historical projects that are similar to the project being developed and then using the estimates from them. Analogy-based estimation is integrated with Fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of a software development lifecycle. Because of uncertainty associated with tribute measurement and data availability, fuzzy logic is introduced in the proposed model. But hardly a historical project is exactly same as the project being estimated due to some distance associated in similarity distance. This means that the most similar project still has a similarity distance with the project being estimated in most of the cases. Therefore, the effort needs to be adjusted when the most similar project has a similarity distance with the project being estimated. To adjust the reused effort, we build an adjustment mechanism whose
algorithm can derive the optimal adjustment on the reused effort using Genetic Algorithm. The proposed model Combine the fuzzy logic to estimate software effort in early stages with Genetic algorithm based adjustment mechanism may result to near the correct effort estimation.
An approach for software effort estimation using fuzzy numbers and genetic al...csandit
One of the most critical tasks during the software development life cycle is that of estimating the
effort and time involved in the development of the software product. Estimation may be
performed by many ways such as: Expert judgments, Algorithmic effort estimation, Machine
learning and Analogy-based estimation. In which Analogy-based software effort estimation is
the process of identifying one or more historical projects that are similar to the project being
developed and then using the estimates from them. Analogy-based estimation is integrated with
Fuzzy numbers in order to improve the performance of software project effort estimation during
the early stages of a software development lifecycle. Because of uncertainty associated with
attribute measurement and data availability, fuzzy logic is introduced in the proposed model.
But hardly a historical project is exactly same as the project being estimated due to some
distance associated in similarity distance. This means that the most similar project still has a
similarity distance with the project being estimated in most of the cases. Therefore, the effort
needs to be adjusted when the most similar project has a similarity distance with the project
being estimated. To adjust the reused effort, we build an adjustment mechanism whose
algorithm can derive the optimal adjustment on the reused effort using Genetic Algorithm. The
proposed model Combine the fuzzy logic to estimate software effort in early stages with Genetic
algorithm based adjustment mechanism may result to near the correct effort estimation.
3. Software Project Estimation
Software cost and effort estimation will never be an exact science. Too many
variables—human, technical, environmental, political—can affect the ultimate
cost of software and effort applied to develop it.
However, software project estimation can be transformed from a black art to a
series of systematic steps that provide estimates with acceptable risk.
To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project (obviously, we can achieve 100
percent accurate estimates after the project is complete!).
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and
effort estimates.
4. Use one or more empirical models for software cost and effort estimation.
4. Software Project Estimation
Unfortunately, the first option, however attractive, is not practical. The second
option can work reasonably well, if the current project is quite similar to past
efforts and other project influences (e.g., the customer, business conditions, the
software engineering environment, deadlines) are roughly equivalent.
Unfortunately, past experience has not always been a good indicator of future
results. The remaining options are viable approaches to software project
estimation.
Decomposition techniques take a divide-and-conquer approach to software
project estimation. By decomposing a project into major functions and related
software engineering activities, cost and effort estimation can be performed in a
stepwise fashion.
Empirical estimation models can be used to complement decomposition
techniques and offer a potentially valuable estimation approach in their own
right. A model is based on experience (historical data) and takes the form
d = f(v_i), where d is one of a number of estimated values
(e.g., effort, cost, project duration) and the v_i are selected independent
parameters (e.g., estimated LOC or FP).
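As a concrete instance of d = f(v_i), the Basic COCOMO 81 equations for an organic-mode project can be sketched as follows. The coefficients (a = 2.4, b = 1.05, c = 2.5, d = 0.38) are the published organic-mode values; the 33.2 KLOC input is only an illustrative size estimate:

```python
def basic_cocomo_organic(kloc: float) -> tuple[float, float]:
    """Basic COCOMO, organic mode: d = f(v_i) with v_i = estimated KLOC.

    Returns (effort in person-months, duration in calendar months).
    """
    effort = 2.4 * kloc ** 1.05      # E = a * (KLOC)^b
    duration = 2.5 * effort ** 0.38  # D = c * E^d
    return effort, duration


effort, duration = basic_cocomo_organic(33.2)  # hypothetical 33.2 KLOC product
print(f"{effort:.1f} person-months over {duration:.1f} months")
```

The model is only as good as the historical data behind its coefficients, which is why calibration to local project data matters before trusting the numbers.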
5. Software Project Estimation
Automated estimation tools implement one or more decomposition techniques
or empirical models and provide an attractive option for estimating. In such
systems, the characteristics of the development organization (e.g., experience,
environment) and the software to be developed are described. Cost and effort
estimates are derived from these data.
Each of the viable software cost estimation options is only as good as the
historical data used to seed the estimate. If no historical data exist, costing rests
on a very shaky (unstable) foundation.
6. Decomposition Techniques
Software project estimation is a form of problem solving, and in most cases, the
problem to be solved (i.e., developing a cost and effort estimate for a software
project) is too complex to be considered in one piece. For this reason, you
should decompose the problem, recharacterizing it as a set of smaller (and
hopefully, more manageable) problems.
Estimation uses one or both forms of partitioning, i.e., decomposition of the
problem and decomposition of the process. But before an estimate can be
made, you must understand the scope of the software to be built and generate
an estimate of its “size.”
1. Software Sizing:
The accuracy of a software project estimate is predicated on a number of things:
(1) the degree to which you have properly estimated the size of the product to
be built; (2) the ability to translate the size estimate into human effort, calendar
time, and dollars (a function of the availability of reliable software metrics from
past projects); (3) the degree to which the project plan reflects the abilities of
the software team; and (4) the stability of product requirements and the
environment that supports the software engineering effort.
7. Decomposition Techniques
Software Sizing (cont.)
If a direct approach is taken, size can be measured in lines of code (LOC). If an
indirect approach is chosen, size is represented as function points (FP).
Four different approaches to the sizing problem:
1. “Fuzzy logic” sizing: This approach uses the approximate reasoning
techniques that are the cornerstone of fuzzy logic. To apply this approach,
the planner must identify the type of application, establish its magnitude on
a qualitative scale, and then refine the magnitude within the original range.
2. Function point sizing: The planner develops estimates of the information
domain characteristics (inputs, outputs, data files, inquiries, and external
interfaces).
3. Standard component sizing: Software is composed of a number of different
“standard components” that are generic to a particular application area. For
example, the standard components for an information system are subsystems,
modules, screens, reports, interactive programs, batch programs, files, LOC, and
object-level instructions. The project planner estimates the number of
occurrences of each standard component and then uses historical project data
to estimate the delivered size per standard component.
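Standard component sizing reduces to counting occurrences and multiplying by historical averages. A minimal sketch, where both the component counts and the delivered-LOC-per-component figures are hypothetical stand-ins for an organization's own historical data:

```python
# Hypothetical historical data: average delivered LOC per standard component.
loc_per_component = {"screen": 550, "report": 950, "batch_program": 2100}

# The planner's estimated number of occurrences of each component.
estimated_counts = {"screen": 12, "report": 7, "batch_program": 3}

# Estimated delivered size = sum over components of (count * historical size).
estimated_size = sum(n * loc_per_component[c] for c, n in estimated_counts.items())
print(estimated_size)  # total estimated delivered size in LOC
```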
8. Decomposition Techniques
Software Sizing (cont.)
4. Change sizing: This approach is used when a project encompasses the use of
existing software that must be modified in some way as part of a project. The
planner estimates the number and type (e.g., reuse, adding code, changing
code, deleting code) of modifications that must be accomplished.
Problem-Based Estimation
Lines of code and function points were described as measures from which
productivity metrics can be computed.
LOC and FP data are used in two ways during software project estimation:
(1) as estimation variables to “size” each element of the software and
(2) as baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
9. Decomposition Techniques
Problem-Based Estimation cont..
LOC and FP estimation are distinct estimation techniques, yet both have a
number of characteristics in common. You begin with a bounded statement of
software scope and decompose it into problem functions that can each be
estimated individually. LOC or FP (the estimation variable) is then
estimated for each function. Alternatively, you may choose another component
for sizing, such as classes or objects, changes, or business processes affected.
Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the
appropriate estimation variable, and cost or effort for the function is derived.
Function estimates are combined to produce an overall estimate for the entire
project.
In general, LOC/pm or FP/pm averages should be computed by project domain.
That is, projects should be grouped by team size, application area, complexity,
and other relevant parameters.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning.
10. Decomposition Techniques
Problem-Based Estimation cont..
When LOC is used as the estimation variable, decomposition is absolutely
essential and is often taken to considerable levels of detail. The greater the
degree of partitioning, the more likely reasonably accurate estimates of LOC can
be developed.
For FP estimates, decomposition works differently. Rather than focusing on
function, each of the information domain characteristics—inputs, outputs, data
files, inquiries, and external interfaces—as well as the complexity adjustment
values is estimated. The resultant estimates can then be used to derive an FP
value that can be tied to past data and used to generate an estimate.
Regardless of the estimation variable that is used, you should begin by
estimating a range of values for each function or information domain value.
Using historical data or (when all else fails) intuition, estimate an optimistic,
most likely, and pessimistic size value for each function or count for each
information domain value. An implicit indication of the degree of uncertainty is
provided when a range of values is specified.
11. Decomposition Techniques
Problem-Based Estimation cont..
A three-point or expected value can then be computed. The expected value for
the estimation variable (size) S is computed as a weighted average of the
optimistic (sopt), most likely (sm), and pessimistic (spess) estimates:
S = (sopt + 4sm + spess) / 6
which gives heaviest credence to the most likely estimate.
Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied.
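A minimal sketch of the three-point computation followed by application of a productivity baseline (the three size estimates and the 620 LOC/pm rate are hypothetical):

```python
# Three-point (expected value) size estimate, then effort via a
# historical productivity baseline. All numbers are illustrative.

def expected_size(s_opt, s_m, s_pess):
    """Weighted (beta-distribution) average: heaviest weight on the most likely value."""
    return (s_opt + 4 * s_m + s_pess) / 6

s = expected_size(4600, 6900, 8600)  # optimistic, most likely, pessimistic LOC
loc_per_pm = 620                     # historical baseline productivity (LOC per person-month)
effort_pm = s / loc_per_pm           # person-months for this function
print(round(s), round(effort_pm, 1))
```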
12. Decomposition Techniques
Process-Based Estimation :
The most common technique for estimating a project is to base the estimate on
the process that will be used. That is, the process is decomposed into a
relatively small set of tasks and the effort required to accomplish each task is
estimated.
Like the problem-based techniques, process-based estimation begins with a
delineation of software functions obtained from the project scope. A series of
framework activities must be performed for each function.
Once problem functions and process activities are melded, you estimate the
effort (e.g., person-months) that will be required to accomplish each software
process activity for each software function.
Costs and effort for each function and framework activity are computed as the
last step. If process-based estimation is performed independently of LOC or FP
estimation, we now have two or three estimates for cost and effort that may be
compared and reconciled.
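The process-based approach can be sketched as an effort matrix in which person-months are estimated for each framework activity applied to each function, then summed (the function names, activities, and figures below are hypothetical):

```python
# Process-based estimation sketch: effort per (function, activity)
# cell is estimated, then summed per function and for the project.

activities = ["analysis", "design", "code", "test"]

effort = {  # function -> person-months per activity, in the order above
    "ui_control":    [0.50, 2.50, 0.40, 5.00],
    "db_management": [0.75, 4.00, 0.60, 6.00],
    "graphics":      [0.50, 4.00, 0.70, 2.00],
}

per_function = {f: sum(row) for f, row in effort.items()}
total_pm = sum(per_function.values())
print(per_function, round(total_pm, 2))
```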
13. Decomposition Techniques
Estimation with Use Cases :
Use cases provide insight into software scope and requirements. However,
developing an estimation approach with use cases is problematic for the
following reasons :
• Use cases are described using many different formats and styles—there is no
standard form.
• Use cases represent an external view (the user’s view) of the software and can
therefore be written at many different levels of abstraction.
• Use cases do not address the complexity of the functions and features that are
described.
• Use cases can describe complex behavior (e.g., interactions) that involve many
functions and features.
14. Decomposition Techniques
Reconciling (Integrating) Estimates:
The estimation techniques discussed result in multiple estimates that must be
reconciled to produce a single estimate of effort, project duration, or cost.
As an example, suppose the total estimated effort for a CAD software project
ranges from a low of 46 person-months (derived using a process-based
estimation approach) to a high of 68 person-months (derived with use-case
estimation). The average estimate (using all four approaches) is 56
person-months. The variation from the average estimate is approximately 18
percent on the low side and 21 percent on the high side.
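The reconciliation arithmetic can be reproduced directly; 46 and 68 person-months are the extremes quoted above, while the two intermediate estimates are hypothetical values chosen so the average comes out at 56:

```python
# Reconciling multiple estimates: compute the average and the
# percentage deviation of the extremes from it.

estimates = {
    "process_based": 46,
    "loc_based": 56,    # hypothetical
    "fp_based": 54,     # hypothetical
    "use_case": 68,
}

avg = sum(estimates.values()) / len(estimates)
low_pct = (avg - min(estimates.values())) / avg * 100
high_pct = (max(estimates.values()) - avg) / avg * 100
print(avg, round(low_pct), round(high_pct))
```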
Widely divergent estimates can often be traced to one of two causes:
(1) the scope of the project is not adequately understood or has been
misinterpreted by the planner, or (2) productivity data used for problem-based
estimation techniques is inappropriate for the application, obsolete or has been
misapplied. You should determine the cause of divergence and then reconcile
the estimates.
15. Empirical Estimation Model
The empirical data that support most estimation models are derived from a
limited sample of projects. For this reason, no estimation model is
appropriate for all classes of software and in all development environments.
Therefore, you should use the results obtained from such models judiciously.
An estimation model should be calibrated to reflect local conditions. The
model should be tested by applying data collected from completed projects,
plugging the data into the model, and then comparing actual to predicted
results. If agreement is poor, the model must be tuned and retested before it
can be used.
The Structure of Estimation Models :
A typical estimation model is derived using regression analysis on data
collected from past software projects.
16. Empirical Estimation Model
The Structure of Estimation Models cont..
The overall structure of such models takes the form
E = A + B × (ev)^C
where A, B, and C are empirically derived constants, E is effort in
person-months, and ev is the estimation variable (either LOC or FP).
In addition to the relationship noted in the equation, the majority of estimation
models have some form of project adjustment component that enables E to be
adjusted by other project characteristics (e.g., problem complexity, staff
experience, development environment).
Among the many LOC-oriented estimation models proposed in the literature are:
E = 5.2 × (KLOC)^0.91 (Walston-Felix model)
E = 5.5 + 0.73 × (KLOC)^1.16 (Bailey-Basili model)
E = 3.2 × (KLOC)^1.05 (Boehm simple model)
E = 5.288 × (KLOC)^1.047 (Doty model for KLOC > 9)
17. Empirical Estimation Model
The Structure of Estimation Models cont..
FP-oriented models have also been proposed. These include:
E = −91.4 + 0.355 FP (Albrecht and Gaffney model)
E = −37 + 0.96 FP (Kemerer model)
E = −12.88 + 0.405 FP (small project regression model)
A quick examination of these models indicates that each will yield a different
result for the same values of LOC or FP. The implication is clear. Estimation
models must be calibrated for local needs!
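Evaluating several commonly cited LOC-oriented model forms for a single (hypothetical) size value illustrates the point: each yields a noticeably different effort estimate, which is exactly why local calibration is needed.

```python
# Comparing published LOC-oriented effort models for one project size.
# The 33.2 KLOC size is hypothetical; coefficients are the commonly
# cited Walston-Felix, Bailey-Basili, and Boehm (simple) forms.

kloc = 33.2

walston_felix = 5.2 * kloc ** 0.91
bailey_basili = 5.5 + 0.73 * kloc ** 1.16
boehm_simple = 3.2 * kloc ** 1.05

# Same input, widely different person-month predictions.
print(round(walston_felix), round(bailey_basili), round(boehm_simple))
```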
18. Empirical Estimation Model
The COCOMO II Model :
Barry Boehm introduced a hierarchy of software estimation models bearing the
name COCOMO, for COnstructive COst MOdel. The original COCOMO model became
one of the most widely used and discussed software cost estimation models in
the industry. It has evolved into a more comprehensive estimation model,
called COCOMO II.
Like its predecessor, COCOMO II is actually a hierarchy of estimation models
that address the following areas:
Application composition model :
Used during the early stages of software engineering, when prototyping of
user interfaces, consideration of software and system interaction,
assessment of performance, and evaluation of technology maturity are
paramount.
19. Empirical Estimation Model
The COCOMO II Model cont..
Early design stage model :
Used once requirements have been stabilized and basic software
architecture has been established.
Post-architecture-stage model : Used during the construction of the
software.
Like all estimation models for software, the COCOMO II models require
sizing information. Three different sizing options are available as part of the
model hierarchy: object points, function points, and lines of source code.
Like function points, the object point is an indirect software measure that is
computed using counts of the number of (1) screens (at the user interface),
(2) reports, and (3) components likely to be required to build the
application.
Each object instance (e.g., a screen or report) is classified into one of three
complexity levels (i.e., simple, medium, or difficult).
20. Empirical Estimation Model
The COCOMO II Model cont..
Once complexity is determined, the number of screens, reports, and
components are weighted. The object point count is then determined by
multiplying the original number of object instances by the weighting factor in
the figure and summing to obtain a total object point count.
When component-based development or general software reuse is to be
applied, the percent of reuse (%reuse) is estimated and the object point count is
adjusted:
NOP = (object points) × [(100 − %reuse) / 100]
where NOP is defined as new object points.
To derive an estimate of effort based on the computed NOP value, a
“productivity rate” (PROD) must be derived. The productivity rate varies with
the level of developer experience and the maturity of the development
environment.
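A minimal sketch of the application composition calculation described above; all counts, weights, the %reuse figure, and the PROD value are hypothetical placeholders:

```python
# COCOMO II application composition sketch: weighted object-point
# counts are adjusted for reuse to give NOP, then divided by a
# productivity rate (PROD) to estimate effort.

object_points = 7 * 1 + 5 * 5 + 3 * 10  # simple screens + medium reports + components, weighted

percent_reuse = 20
nop = object_points * (100 - percent_reuse) / 100  # new object points

prod = 13                  # NOP per person-month, looked up for team/environment
effort_pm = nop / prod
print(nop, round(effort_pm, 1))
```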
21. Empirical Estimation Model
The COCOMO II Model cont..
An estimate of project effort is computed using: estimated effort = NOP / PROD.
The Software Equation :
The software equation is a dynamic multivariable model that assumes a specific
distribution of effort over the life of a software development project. The model
has been derived from productivity data collected for over 4000 contemporary
software projects. Based on these data, we derive an estimation model of the
form
E = [LOC × B^0.333 / P]^3 × (1 / t^4)
where E is effort in person-months or person-years, t is the project duration
in months or years, B is a “special skills factor,” and P is a “productivity
parameter.”
22. Empirical Estimation Model
The Software Equation cont..
Typical values might be P = 2000 for development of real-time embedded
software, P = 10,000 for telecommunication and systems software, and
P = 28,000 for business systems applications.
You should note that the software equation has two independent parameters:
(1) an estimate of size (in LOC) and
(2) an indication of project duration in calendar months or years.
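A sketch of the software equation, commonly written E = [LOC × B^0.333 / P]^3 × (1 / t^4); the size, skills factor, productivity parameter, and duration values below are hypothetical:

```python
# Software equation sketch: effort as a function of size, a special
# skills factor B, a productivity parameter P, and duration t.

loc = 33_200   # estimated size in LOC
b = 0.28       # special skills factor (grows slowly with size)
p = 12_000     # productivity parameter (e.g., systems software class)
t = 1.3        # project duration in years

effort = (loc * b ** 0.333 / p) ** 3 * (1 / t ** 4)  # effort in person-years
print(round(effort, 2))
```

Note how effort falls off very steeply as t grows, reflecting the model's assumed effort distribution over time.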
A set of simplified equations can be derived from the software equation.
Minimum development time is defined as:
tmin = 8.14 (LOC/P)^0.43 in months, for tmin ≥ 6 months
E = 180 B t^3 in person-months, for E ≥ 20 person-months (t in years)
23. Estimation For Object Oriented Projects
It is worthwhile to supplement conventional software cost estimation methods
with a technique that has been designed explicitly for OO software.
Some Approaches :
1. Develop estimates using effort decomposition, FP analysis, and any other
method that is applicable for conventional applications.
2. Using the requirements model, develop use cases and determine a count.
Recognize that the number of use cases may change as the project progresses.
3. From the requirements model, determine the number of key classes.
4. Categorize the type of interface for the application and develop a
multiplier for support classes:
No GUI — 2.0
Text-based user interface — 2.25
GUI — 2.5
Complex GUI — 3.0
Multiply the number of key classes (step 3) by the multiplier to obtain an
estimate for the number of support classes.
24. Estimation For Object Oriented Projects
5. Multiply the total number of classes (key + support) by the average number of
work units per class. Lorenz and Kidd suggest 15 to 20 person-days per class.
6. Cross-check the class-based estimate by multiplying the number of use cases
by the average number of work units per use case.
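The class-based calculation in steps 3 through 6 can be sketched as follows; the key-class count, interface multiplier, and the 18 person-days-per-class figure are hypothetical (the text cites 15 to 20 person-days per class):

```python
# Class-based OO estimation sketch (Lorenz-Kidd style).

key_classes = 20      # from the requirements model (step 3)
multiplier = 2.5      # support-class multiplier for the interface type (step 4)

support_classes = key_classes * multiplier            # step 4
total_classes = key_classes + support_classes         # step 5
effort_person_days = total_classes * 18               # work units per class (step 5)
print(support_classes, total_classes, effort_person_days)
```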
25. Overview of Project Scheduling
Although there are many reasons why software is delivered late, most can be
traced to one or more of the following root causes:
• An unrealistic deadline established by someone outside the software team
and forced on managers and practitioners.
• Changing customer requirements that are not reflected in schedule changes.
• An honest underestimate of the amount of effort and/or the number of
resources that will be required to do the job.
• Predictable and/or unpredictable risks that were not considered when the
project commenced.
• Technical difficulties that could not have been foreseen in advance.
• Human difficulties that could not have been foreseen in advance.
• Miscommunication among project staff that results in delays.
• A failure by project management to recognize that the project is falling behind
schedule and a lack of action to correct the problem.
26. Overview of Project Scheduling
How to overcome this situation :
1. Perform a detailed estimate using historical data from past projects.
Determine the estimated effort and duration for the project.
2. Using an incremental process model, develop a software engineering strategy
that will deliver critical functionality by the imposed deadline, but delay other
functionality until later. Document the plan.
3. Meet with the customer and, using the detailed estimate, explain why the
imposed deadline is unrealistic. Be certain to note that all estimates are based
on performance on past projects. Also be certain to indicate the percent
improvement that would be required to achieve the deadline as it currently
exists.
4. Offer the incremental development strategy as an alternative.
27. Overview of Project Scheduling
Project Scheduling :
Software project scheduling is an action that distributes estimated effort across
the planned project duration by allocating the effort to specific software
engineering tasks. It is important to note, however, that the schedule evolves
over time. During early stages of project planning, a macroscopic schedule is
developed.
This type of schedule identifies all major process framework activities and the
product functions to which they are applied. As the project gets under way, each
entry on the macroscopic schedule is refined into a detailed schedule. Here,
specific software actions and tasks (required to accomplish an activity) are
identified and scheduled.
Scheduling for software engineering projects can be viewed from two rather
different perspectives. In the first, an end date for release of a computer-based
system has already (and irrevocably) been established. In the second, rough
chronological bounds have been discussed, but the end date is set by the
software engineering organization.
28. Overview of Project Scheduling
Project Scheduling :
Basic Principles :
Like all other areas of software engineering, a number of basic principles guide
software project scheduling:
Compartmentalization : The project must be compartmentalized into a number
of manageable activities and tasks. To accomplish compartmentalization, both
the product and the process are refined.
Interdependency : The interdependency of each compartmentalized activity or
task must be determined. Some tasks must occur in sequence, while others can
occur in parallel. Some activities cannot commence until the work product
produced by another is available. Other activities can occur independently.
Time allocation : Each task to be scheduled must be allocated some number of
work units (e.g., person-days of effort). In addition, each task must be assigned a
start date and a completion date that are a function of the interdependencies
and whether work will be conducted on a full-time or part-time basis.
29. Overview of Project Scheduling
Project Scheduling :
Basic Principles cont..
Effort validation : Every project has a defined number of people on the software
team. As time allocation occurs, you must ensure that no more than the
allocated number of people has been scheduled at any given time. For example,
consider a project that has three assigned software engineers (e.g., three
person-days are available per day of assigned effort). On a given day, seven
concurrent tasks must be accomplished. Each task requires 0.50 person-days of
effort. More effort has been allocated than there are people to do the work.
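The effort-validation check in the example above reduces to simple arithmetic: three engineers provide 3 person-days per day, while seven concurrent half-day tasks demand 3.5.

```python
# Effort validation: compare person-days demanded on a given day
# against person-days available from the assigned team.

available_pd = 3          # person-days per day (three engineers)
concurrent_tasks = 7
effort_per_task = 0.5     # person-days each

demand_pd = concurrent_tasks * effort_per_task
over_allocated = demand_pd > available_pd
print(demand_pd, over_allocated)
```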
Defined responsibilities : Every task that is scheduled should be assigned to a
specific team member.
Defined outcomes : Every task that is scheduled should have a defined
outcome. For software projects, the outcome is normally a work product (e.g.,
the design of a component) or a part of a work product. Work products are
often combined in deliverables.
30. Overview of Project Scheduling
Project Scheduling :
The Relationship Between People and Effort
In a small software development project a single person can analyze
requirements, perform design, generate code, and conduct tests. As the size of a
project increases, more people must become involved.
31. Overview of Project Scheduling
Project Scheduling :
The Relationship Between People and Effort cont..
The curve provides an indication of the relationship between effort applied and
delivery time for a software project. A version of the curve, representing project
effort as a function of delivery time, is shown in the figure.
The curve indicates a minimum value td that represents the least cost for
delivery (i.e., the delivery time that will result in the least effort expended).
As we move left of td (i.e., as we try to accelerate delivery), the curve rises
nonlinearly. Although it is possible to accelerate delivery, the curve rises very
sharply to the left of td.
32. Overview of Project Scheduling
Project Scheduling :
Effort Distribution :
A recommended distribution of effort across the software process is often
referred to as the 40–20–40 rule. Forty percent of all effort is allocated to
front-end analysis and design. A similar percentage is applied to back-end
testing. You can correctly infer that coding (20 percent of effort) is
deemphasized.
This effort distribution should be used as a guideline only. The
characteristics of each project dictate the distribution of effort. Work
expended on project planning rarely accounts for more than 2 to 3 percent
of effort. Customer communication and requirements analysis may comprise
10 to 25 percent of project effort. A range of 20 to 25 percent of effort is
normally applied to software design. Time expended for design review and
subsequent iteration must also be considered.
A range of 15 to 20 percent of overall effort is typically applied to coding,
while testing and subsequent debugging can account for 30 to 40 percent of
software development effort.
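Applying the guideline to a hypothetical 56 person-month project, using midpoints of the ranges quoted above (the fractions are illustrative, not prescriptive):

```python
# Effort distribution sketch: allocate a total effort estimate across
# phases using midpoint fractions of the quoted ranges.

total_pm = 56  # hypothetical total effort, person-months

fractions = {
    "planning": 0.03,
    "communication_and_analysis": 0.17,
    "design": 0.22,
    "coding": 0.18,
    "testing_and_debugging": 0.35,
}

allocation = {phase: total_pm * f for phase, f in fractions.items()}
print({p: round(pm, 1) for p, pm in allocation.items()})
```

The fractions here sum to 0.95, leaving a small margin for deployment and other activities.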
33. Scheduling
Scheduling of a software project does not differ greatly from scheduling of any
multitask engineering effort. Therefore, generalized project scheduling tools and
techniques can be applied with little modification for software projects.
Program evaluation and review technique (PERT) and the critical path method
(CPM) are two project scheduling methods that can be applied to software
development.
Both techniques are driven by information already developed in earlier project
planning activities: estimates of effort, a decomposition of the product function,
the selection of the appropriate process model and task set, and decomposition
of the tasks that are selected.
Interdependencies among tasks may be defined using a task network. Tasks,
sometimes called the project work breakdown structure (WBS), are defined for
the product as a whole or for individual functions.
34. Scheduling
Both PERT and CPM provide quantitative tools that allow you to
(1) determine the critical path—the chain of tasks that determines the duration
of the project,
(2) establish “most likely” time estimates for individual tasks by applying
statistical models, and
(3) calculate “boundary times” that define a time “window” for a particular task.
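Point (1) can be sketched with a minimal longest-path computation over a small hypothetical task network (a directed acyclic graph); the task names and durations are invented for illustration:

```python
# Minimal critical-path sketch: the project duration equals the
# longest path of earliest-finish times through the task network.

tasks = {  # task -> (duration in days, predecessor tasks)
    "scoping":  (3, []),
    "planning": (2, ["scoping"]),
    "analysis": (5, ["scoping"]),
    "design":   (7, ["planning", "analysis"]),
    "coding":   (6, ["design"]),
    "testing":  (4, ["coding"]),
}

earliest_finish = {}

def finish(task):
    # earliest finish = own duration + latest earliest-finish among predecessors
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[task]

project_duration = max(finish(t) for t in tasks)
print(project_duration)  # length of the critical path, in days
```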
Time Line Charts :
When creating a software project schedule, you begin with a set of tasks (the
work breakdown structure). If automated tools are used, the work breakdown is
input as a task network or task outline. Effort, duration, and start date are then
input for each task.
In addition, tasks may be assigned to specific individuals. As a consequence of
this input, a time-line chart, also called a Gantt chart, is generated. A time-line
chart can be developed for the entire project. Alternatively, separate charts can
be developed for each project function or for each individual working on the
project.
36. Scheduling
Time Line Charts cont..
The time-line chart depicts a part of a software project schedule that
emphasizes the concept scoping task for a word-processing (WP) software
product. All project tasks (for concept scoping) are listed in the left-hand
column. The horizontal bars indicate the duration of each task.
When multiple bars occur at the same time on the calendar, task concurrency is
implied. The diamonds indicate milestones.
Once the information necessary for the generation of a time-line chart has been
input, the majority of software project scheduling tools produce project tables—
a tabular listing of all project tasks, their planned and actual start and end dates,
and a variety of related information. Used in conjunction with the time-line
chart, project tables enable you to track progress.
37. Tracking the Schedule
If it has been properly developed, the project schedule becomes a road
map that defines the tasks and milestones to be tracked and controlled
as the project proceeds. Tracking can be accomplished in a number of
different ways:
• Conducting periodic project status meetings in which each team
member reports progress and problems
• Evaluating the results of all reviews conducted throughout the
software engineering process
• Determining whether formal project milestones (the diamonds shown
in Figure 27.3) have been accomplished by the scheduled date
•Comparing the actual start date to the planned start date for each
project task listed in the resource table (Figure 27.4)
• Meeting informally with practitioners to obtain their subjective
assessment of progress to date and problems on the horizon
• Using earned value analysis (Section 27.6) to assess progress
quantitatively.
38. • In reality, all of these tracking techniques are used by experienced
project managers.
• Control is employed by a software project manager to administer
project resources, cope with problems, and direct project staff. If
things are going well (i.e., the project is on schedule and within
budget, reviews indicate that real progress is being made and
milestones are being reached), control is light. But when problems
occur, you must exercise control to reconcile them as quickly as
possible. After a problem has been diagnosed, additional resources
may be focused on the problem area: staff may be redeployed or the
project schedule can be redefined.
• When faced with severe deadline pressure, experienced project
managers sometimes use a project scheduling and control technique
called time-boxing. The time-boxing strategy recognizes that the
complete product may not be deliverable by the predefined deadline.
Therefore, an incremental software paradigm (Chapter 2) is chosen,
and a schedule is derived for each incremental delivery.
39. Tracking Progress for an OO Project
• Although an iterative model is the best framework for an OO
project, task parallelism makes project tracking difficult. You
may have difficulty establishing meaningful milestones for an
OO project because a number of different things are happening
at once. In general, the following major milestones can be
considered “completed” when the criteria noted have been
met.
Technical milestone: OO analysis completed
• All classes and the class hierarchy have been defined and
reviewed.
• Class attributes and operations associated with a class have been
defined and reviewed.
• Class relationships (Chapter 6) have been established and
reviewed.
• A behavioral model (Chapter 7) has been created and reviewed.
• Reusable classes have been noted.
40. Technical milestone: OO design completed
• The set of subsystems has been defined and reviewed.
• Classes are allocated to subsystems and reviewed.
• Task allocation has been established and reviewed.
• Responsibilities and collaborations have been identified.
• Attributes and operations have been designed and reviewed.
• The communication model has been created and reviewed.
Technical milestone: OO programming completed
• Each new class has been implemented in code from the design model.
• Extracted classes (from a reuse library) have been implemented.
• Prototype or increment has been built.
41. Technical milestone: OO testing
• The correctness and completeness of OO analysis and design models
has been reviewed.
• A class-responsibility-collaboration network has been developed and
reviewed.
• Test cases are designed, and class-level tests have been conducted for
each class.
• Test cases are designed, and cluster testing is completed and the
classes are integrated.
• System-level tests have been completed.
42. Scheduling for WebApp Projects
• WebApp project scheduling distributes estimated effort across the
planned time line (duration) for building each WebApp increment.
• This is accomplished by allocating the effort to specific tasks. It is
important to note, however, that the overall WebApp schedule
evolves over time.
• During the first iteration, a macroscopic schedule is developed.
• This type of schedule identifies all WebApp increments and projects
the dates on which each will be deployed. As the development of an
increment gets under way, the entry for the increment on the
macroscopic schedule is refined into a detailed schedule. Here,
specific development tasks (required to accomplish an activity) are
identified and scheduled.
43. • As an example of macroscopic scheduling, consider the
SafeHomeAssured.com WebApp. Recalling earlier
discussions of SafeHomeAssured.com, seven increments
can be identified for the Web-based component of the
project:
• Increment 1: Basic company and product information
• Increment 2: Detailed product information and downloads
• Increment 3: Product quotes and processing product orders
• Increment 4: Space layout and security system design
• Increment 5: Information and ordering of monitoring
services
• Increment 6: Online control of monitoring equipment
• Increment 7: Accessing account information
44. • The team consults and negotiates with stakeholders
and develops a preliminary deployment schedule
for all seven increments. A time-line chart for this
schedule is illustrated in Figure.
46. • It is important to note that the deployment dates
(represented by diamonds on the time-line chart)
are preliminary and may change as more detailed
scheduling of the increments occurs. However, this
macroscopic schedule provides management with
an indication of when content and functionality will
be available and when the entire project will be
completed.
• As a preliminary estimate, the team will work to
deploy all increments with a 12-week time line. It’s
also worth noting that some of the increments will
be developed in parallel (e.g., increments 3, 4, 6
and 7). This assumes that the team will have
sufficient people to do this parallel work.
47. • Once the macroscopic schedule has been developed, the team is
ready to schedule work tasks for a specific increment. To accomplish
this, you can use a generic process framework that is applicable for all
WebApp increments. A task list is created by using the generic tasks
derived as part of the framework as a starting point and then
adapting these by considering the content and functions to be
derived for a specific WebApp increment.
• Each framework action (and its related tasks) can be adapted in one
of four ways:
• (1) a task is applied as is, (2) a task is eliminated because it is not
necessary for the increment, (3) a new (custom) task is added, and (4)
a task is refined (elaborated) into a number of named subtasks that
each becomes part of the schedule.
48. • To illustrate, consider a generic design modeling action for
WebApps that can be accomplished by applying some or all
of the following tasks:
• Design the aesthetic for the WebApp.
• Design the interface.
• Design the navigation scheme.
• Design the WebApp architecture.
• Design the content and the structure that supports it.
• Design functional components.
• Design appropriate security and privacy mechanisms.
• Review the design.
49. • As an example, consider the generic task Design the Interface as it is
applied to the fourth increment of SafeHomeAssured.com. Recall that the
fourth increment implements the content and function for describing the
living or business space to be secured by the SafeHome security system.
Referring to Figure 27.5, the fourth increment commences at the beginning
of the fifth week and terminates at the end of the ninth week.
• There is little question that the Design the Interface task
must be conducted.
• The team recognizes that the interface design is pivotal to
the success of the increment and decides to refine
(elaborate) the task. The following subtasks are derived for
the Design the Interface task for the fourth increment:
50. • Develop a sketch of the page layout for the space design page.
• Review layout with stakeholders.
• Design space layout navigation mechanisms.
• Design “drawing board” layout.
• Develop procedural details for the graphical wall layout function.
• Develop procedural details for the wall length computation and
display function.
• Develop procedural details for the graphical window layout function.
• Develop procedural details for the graphical door layout function.
• Design mechanisms for selecting security system components
(sensors, cameras, microphones, etc.).
• Develop procedural details for the graphical layout of security system
components.
• Conduct pair walkthroughs as required.
51. • These tasks become part of the increment schedule for the fourth
WebApp increment and are allocated over the increment development
schedule. These can be input to scheduling software and used for tracking
and control.