This document discusses graphics hardware components. It describes various graphics input devices such as the mouse, joystick, and light pen, and how each is classified as either analog or digital. It then covers common graphics output devices such as CRT displays, plasma displays, LCDs, and 3D viewing systems. It provides details on the internal components and working of CRT displays. It also discusses graphics storage formats and the architecture of raster and random graphics systems.
It gives detailed information about Three-Dimensional Display Methods, Three-Dimensional Graphics Packages, Interactive Input Methods and the Graphical User Interface, Input of Graphical Data, Graphical Data: Input Functions, and Interactive Picture Construction.
Transformation:
Transformations are a fundamental part of computer graphics. A transformation is the movement or alteration of an object in the Cartesian plane.
Types of transformation
Why we use transformation
3D Transformation
3D Translation
3D Rotation
3D Scaling
3D Reflection
3D Shearing
2D Transformations by Amit Kumar (MAIMT) - Amit Kapoor
Transformations are operations that change the position, orientation, or size of an object in computer graphics. The main 2D transformations are translation, rotation, scaling, reflection, shear, and combinations of these. Transformations allow objects to be manipulated and displayed in modified forms without needing to redraw them from scratch.
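The basic 2D transformations listed above can be sketched with 3×3 homogeneous matrices, so that translation composes by matrix multiplication like the rest. The helper names below are illustrative, not from the document:

```python
import math

def translate(tx, ty):
    # 3x3 homogeneous translation matrix
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    # Counterclockwise rotation about the origin by theta radians
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    # Scaling about the origin
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def apply(m, p):
    # Apply a 3x3 homogeneous matrix to a 2D point
    x, y = p
    v = (x, y, 1)
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(2))

print(apply(translate(3, 4), (1, 2)))  # (4, 6)
```

Because all three share the homogeneous form, a sequence of transformations reduces to one matrix product applied to every vertex, which is why objects need not be redrawn from scratch.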
RGB stands for red, green, and blue. This color model is used in computer monitors, television sets, and theater; it is an additive color model.
CMYK refers to the four inks used in some color printing: cyan, magenta, yellow and key (black).
Hidden Surface Elimination Using the Z-Buffer Algorithm - rajivagarwal23dei
The document discusses hidden surface removal techniques used in 3D computer graphics. It introduces the hidden surface problem that arises when non-transparent objects obscure other objects from view. It describes object space and image space methods for identifying and removing hidden surfaces. The z-buffer algorithm is discussed as a commonly used image space method that works by comparing depth values in a z-buffer to determine which surfaces are visible at each pixel location.
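The per-pixel depth comparison at the heart of the z-buffer method can be sketched as a toy (the fragment tuples, colors, and buffer size here are invented for the example; smaller depth means closer to the viewer):

```python
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

def render(fragments):
    # fragments: iterable of (x, y, depth, color)
    zbuf = [[FAR] * WIDTH for _ in range(HEIGHT)]    # depth buffer, init to "far"
    frame = [[None] * WIDTH for _ in range(HEIGHT)]  # color buffer
    for x, y, depth, color in fragments:
        if depth < zbuf[y][x]:   # closer than what is stored: overwrite
            zbuf[y][x] = depth
            frame[y][x] = color
    return frame

frags = [(1, 1, 5.0, "red"), (1, 1, 2.0, "blue"), (2, 2, 7.0, "green")]
print(render(frags)[1][1])  # blue wins at (1, 1) because 2.0 < 5.0
```

Note that the result is independent of the order in which surfaces arrive, which is the key practical advantage of the method.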
Comprehensive coverage of fundamentals of computer graphics.
3D Transformations
Reflections
3D Display methods
3D Object Representation
Polygon surfaces
Quadratic Surfaces
Projection is the transformation of a 3D object into a 2D plane by mapping points from the 3D object to the projection plane. There are two main types of projection: perspective projection and parallel projection. Perspective projection uses lines that converge to a single point, while parallel projection uses parallel lines. Perspective projection includes one-point, two-point, and three-point perspectives. Parallel projection includes orthographic projection, which projects lines perpendicular to the plane, and oblique projection, where lines are parallel but not perpendicular to the plane.
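The two projection families can be illustrated with a minimal sketch; the function names and the placement of the view plane are assumptions made for the example:

```python
def orthographic(point):
    # Parallel (orthographic) projection onto the plane z = 0: drop z.
    x, y, z = point
    return (x, y)

def perspective(point, d):
    # Perspective projection with the center of projection at the origin
    # and the view plane at z = d: x' = x*d/z, y' = y*d/z (assumes z != 0).
    x, y, z = point
    return (x * d / z, y * d / z)

p = (2.0, 4.0, 10.0)
print(orthographic(p))       # (2.0, 4.0)
print(perspective(p, 5.0))   # (1.0, 2.0): farther objects shrink
```

The division by z is what produces foreshortening in perspective projection; orthographic projection has no such division, so proportions are preserved.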
Visible-surface detection identifies those parts of a scene that are visible from a chosen viewing position. Visible-surface detection algorithms are broadly classified according to whether they deal with object definitions directly or with their projected images; these two approaches are called object-space methods and image-space methods, respectively. An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labeled visible. In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane.
Visible Surface Detection in Computer Graphics - anku2266
Visible surface detection aims to determine which parts of 3D objects are visible and which are obscured. There are two main approaches: object space methods compare objects' positions to determine visibility, while image space methods process surfaces one pixel at a time to determine visibility based on depth. Depth-buffer and A-buffer methods are common image space techniques that use depth testing to handle occlusion.
Video monitors use cathode ray tubes to display output. In a cathode ray tube, an electron gun fires a beam of electrons that is focused and deflected to hit phosphor on the screen, causing it to glow. The beam rapidly redraws the image to keep the screen illuminated, in a process called refresh. Key components of the electron gun include a heated cathode that emits electrons, an accelerating anode that speeds up the electrons, and control and focusing systems that shape the beam. When electrons hit phosphor, their energy causes the phosphor to glow briefly.
This document provides an outline for a seminar on computer graphics. It begins with basics of computer graphics including definitions, classifications, and principles. It then covers topics like computer-aided design, presentation graphics, computer art, entertainment, education and training, and visualization. Graphics devices, output primitives, displays, and input devices are discussed. Drawing points, lines, polygons, and transformations are explained. 3D concepts like parallel projection, perspective projection, and object representations are introduced. The document also covers color models, animations, graphics processing units, and the OpenGL graphics library. It provides examples of functions for initializing and creating windows in OpenGL.
This document discusses methods for identifying and removing hidden surfaces when rendering 3D scenes to create a realistic 2D image. It describes two approaches: object-space methods that compare whole objects, and image-space methods that decide visibility point-by-point. It focuses on the depth buffer/z-buffer method, which processes surfaces one point at a time, comparing depth values to determine visibility and store the color of visible points. It also discusses using scan line coherence to solve hidden surfaces one scan line at a time from top to bottom.
Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
Here are some useful examples and methods of image enhancement:
Filtering with morphological operators, histogram equalization, noise removal using a Wiener filter, linear contrast adjustment, median filtering, unsharp mask filtering, contrast-limited adaptive histogram equalization (CLAHE), and decorrelation stretch.
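As a sketch of one of these methods, plain histogram equalization on a tiny 8-level grayscale "image" might look like this (a self-contained illustration, not tied to any particular library):

```python
def equalize(pixels, levels=8):
    # Standard histogram equalization: map each gray level through the
    # scaled cumulative distribution function (CDF) of the image.
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF value
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

img = [1, 1, 1, 2, 2, 3, 3, 7]
print(equalize(img))  # [0, 0, 0, 3, 3, 6, 6, 7]
```

The effect is to spread the occupied gray levels across the full range, which raises contrast in images whose histogram is bunched together.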
The z-buffer is a visible-surface detection method, also known as the depth-buffer method. It detects the visible surface at each pixel by comparing the distances of objects from the projection plane.
3D Transformation in Computer Graphics - SHIVANI SONI
This document discusses different types of 2D and 3D transformations that are used in computer graphics, including translation, rotation, scaling, shearing, and reflection. It provides the mathematical equations and transformation matrices used to perform each type of transformation on 2D and 3D points and objects. Key types of rotations discussed are roll (around z-axis), pitch (around x-axis), and yaw (around y-axis). Homogeneous coordinates are introduced for representing 3D points.
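The roll, pitch, and yaw rotation matrices mentioned above can be written out as a short sketch (helper names are illustrative; angles are in radians):

```python
import math

def rot_z(a):  # roll: rotation about the z-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(a):  # pitch: rotation about the x-axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # yaw: rotation about the y-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def apply(m, v):
    # Multiply a 3x3 matrix by a 3D vector
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Rotating (1, 0, 0) by 90 degrees about z takes it to (0, 1, 0)
print(apply(rot_z(math.pi / 2), (1, 0, 0)))
```

Each matrix leaves its own axis fixed and rotates the other two coordinates, which is easy to verify by inspecting the row that is all zeros except for a 1.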
Computer Graphics - Hidden Line Removal Algorithm - Jyotiraman De
This document discusses various algorithms for hidden surface removal when rendering 3D scenes, including the z-buffer method, scan-line method, spanning scan-line method, floating horizon method, and discrete data method. The z-buffer method uses a depth buffer to track the closest surface at each pixel. The scan-line method only considers visible surfaces within each scan line. The floating horizon method finds the visible portions of curves using a horizon array. The discrete data method handles surfaces defined by discrete points rather than mathematical equations.
The document summarizes raster scan and random scan displays. Raster scan displays use an electron beam that sweeps across the screen from top to bottom to generate pixels based on values stored in a refresh buffer. Random scan displays directly draw images using an electron beam without a fixed pattern, storing only line drawing instructions. The key differences are that raster scan is used for realistic images due to storing intensity values while random scan has higher resolution but is limited to line drawings. Both use a cathode ray tube containing an electron gun, deflection coils, and phosphor screen.
This document discusses various techniques for image segmentation. It describes two main approaches to segmentation: discontinuity-based methods that detect edges or boundaries, and region-based methods that partition an image into uniform regions. Specific techniques discussed include thresholding, gradient operators, edge detection, the Hough transform, region growing, region splitting and merging, and morphological watershed transforms. Motion can also be used for segmentation by analyzing differences between frames in a video.
An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object.
Surface rendering means a procedure for applying a lighting model to obtain pixel intensities for all the projected surface positions in a scene.
A surface-rendering algorithm uses the intensity calculations from an illumination model to determine the light intensity for all projected pixel positions for the various surfaces in a scene.
Surface rendering can be performed by applying the illumination model to every visible surface point.
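A minimal sketch of applying such a model at a single surface point, assuming an ambient term plus Lambertian diffuse reflection and unit-length vectors (coefficient names follow the usual ka/kd convention, an assumption here):

```python
def intensity(ka, kd, ia, il, normal, to_light):
    # I = ka*Ia + kd*Il*(N . L), with the diffuse term clamped to zero
    # when the light is behind the surface. normal and to_light are
    # assumed to be unit vectors.
    ndotl = sum(n * l for n, l in zip(normal, to_light))
    return ka * ia + kd * il * max(ndotl, 0.0)

# Light hitting the surface head-on gives the full diffuse contribution
print(intensity(0.1, 0.7, 1.0, 1.0, (0, 0, 1), (0, 0, 1)))
```

A surface-rendering algorithm would evaluate this at every visible projected point, typically combined with a specular term and attenuation in a fuller model.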
The document discusses concepts related to basic illumination models. It covers key components like ambient light, diffuse illumination, and specular reflection that contribute to how objects are illuminated. It notes that illumination models try to approximate real world lighting in a realistic but not perfectly accurate way. The document also discusses challenges like accounting for all light rays reflected between nearby objects and having multiple light sources and viewing directions in a scene.
The document discusses how more complex geometric transformations can be performed by combining basic transformations through composition. It provides examples of how scaling and rotation can be done with respect to a fixed point by first translating the object to align the point with the origin, then performing the basic transformation, and finally translating back. Mirror reflection about a line is similarly described as a composite of translations and rotations.
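The fixed-point composition described above can be sketched as a matrix product: translate the fixed point to the origin, apply the basic transformation, then translate back. Names are illustrative:

```python
def matmul(a, b):
    # Product of two 3x3 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def scale_about(sx, sy, px, py):
    # Matrices compose right-to-left: the rightmost factor acts first.
    return matmul(translate(px, py),
                  matmul(scale(sx, sy), translate(-px, -py)))

def apply(m, p):
    x, y = p
    v = (x, y, 1)
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(2))

print(apply(scale_about(2, 2, 1, 1), (1, 1)))  # (1, 1): the fixed point stays put
print(apply(scale_about(2, 2, 1, 1), (3, 1)))  # (5, 1): distance from (1,1) doubles
```

Rotation about a fixed point follows the same pattern with the rotation matrix in the middle factor.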
This document discusses various 3D transformations including translation, rotation, scaling, reflection, and shearing. It provides the transformation matrices for each type of 3D transformation. It also discusses combining multiple transformations through composite transformations by multiplying the matrices in sequence from right to left.
A graphics monitor is a display that can show graphics in addition to text. Graphics monitors are used in applications like air traffic control, medical imaging, and CAD. A workstation is a powerful computer optimized for visualization and manipulation of complex data like 3D modeling, simulation, and image rendering. Workstations have specifications like 64MB or more of RAM, high-resolution graphics screens, large displays, and built-in network support. They are used for graphics-intensive applications like 3D design, video editing, and CAD. A server handles data requests from other computers on a network and hosts applications, while a workstation is a personal computer used for graphics applications and intensive programs by professional users.
This document provides an overview of 3D transformations, including translation, rotation, scaling, reflection, and shearing. It explains that 3D transformations generalize 2D transformations by including a z-coordinate and using homogeneous coordinates and 4x4 transformation matrices. Each type of 3D transformation is defined using matrix representations and equations. Rotation is described for each coordinate axis, and reflection is explained for each axis plane. Shearing is introduced as a way to modify object shapes, especially for perspective projections.
This document discusses 3D transformations and projections. It describes two main projection methods: parallel projection and perspective projection. Parallel projection preserves proportions but does not provide a realistic 3D representation. Perspective projection maps 3D points along converging lines to a vanishing point, resulting in foreshortening effects where objects appear smaller the farther they are from the viewing plane. The document outlines different types of parallel and perspective projections.
The document describes different algorithms for filling polygon and area shapes, including scanline fill, boundary fill, and flood fill algorithms. The scanline fill algorithm works by determining intersections of boundaries with scanlines and filling color between intersections. Boundary fill works by starting from an interior point and recursively "painting" neighboring points until the boundary is reached. Flood fill replaces a specified interior color. Both can be 4-connected or 8-connected. The document also discusses problems that can occur and more efficient span-based approaches.
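A 4-connected flood fill along the lines described above might be sketched as follows (an iterative, stack-based variant rather than the recursive formulation, to avoid deep recursion; the grid and colors are invented for the example):

```python
def flood_fill(grid, x, y, new_color):
    # 4-connected flood fill: replace the color of the start pixel, and of
    # every pixel reachable through same-colored neighbors, with new_color.
    old = grid[y][x]
    if old == new_color:
        return grid
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old:
            grid[cy][cx] = new_color
            # Push the four axis-aligned neighbors
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return grid

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
flood_fill(img, 0, 0, 2)
print(img)  # only the top-left connected 0-region becomes 2
```

An 8-connected variant would additionally push the four diagonal neighbors, which changes which regions count as connected.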
This document discusses solid state chemistry and provides information on various topics within the subject. It begins by defining the three states of matter and what distinguishes a solid. It then describes the two main types of solids - crystalline and amorphous - and provides details on their structures and properties. Various types of crystal structures are also outlined, including ionic, covalent, molecular and metallic crystals. The document concludes by discussing Bragg's equation and important solid materials like diamond, graphite and fullerenes.
This document discusses various graphics input and output devices. It covers video display devices like cathode ray tubes and flat panel displays. It describes the basic components of CRTs including the electron gun and phosphor screen. The document also discusses raster scan displays, random scan displays, and color CRT monitors. Finally, it covers common input devices such as keyboards, mice, trackballs, joysticks, data gloves, digitizers, image scanners, and touch panels.
This document provides an overview of database system concepts and architecture. It discusses data models, schemas, instances, and states. It also describes the three-schema architecture, data independence, DBMS languages and interfaces, database system utilities and tools, and centralized and client-server architectures. Key classifications of DBMSs are also covered.
This document introduces databases and database management systems (DBMS). It defines key terms like data, database, and DBMS. It describes typical DBMS functionality including defining and constructing databases, and allowing querying, updating, and concurrent access. Example database applications are given ranging from traditional to more recent ones like multimedia and geographic databases. Main characteristics of the database approach are outlined. Database users are categorized and advantages of the database approach are summarized.
This document discusses various algorithms for polygon scan conversion and filling, including:
- The scan line polygon fill algorithm which determines pixel color by calculating polygon edge intersections with scan lines and using an odd-even rule.
- Methods for handling special cases like horizontal edges and vertex intersections.
- Using a sorted edge table and active edge list to incrementally calculate edge intersections across scan lines.
- Flood fill and depth/z-buffer algorithms for hidden surface removal when rendering overlapping polygons.
A polygon is a closed two-dimensional shape with straight sides. It can be defined by an ordered sequence of vertices and the edges connecting consecutive vertices. The scan-line polygon fill algorithm uses an odd-even rule to determine whether a point is inside or outside the polygon by counting edge crossings along a scan line from that point to infinity. Boundary fill and flood fill are two area-filling algorithms that color the interior of a polygon or region by recursively filling neighboring pixels of the same color.
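The odd-even rule can be sketched as a point-in-polygon test (a simplified version that ignores the vertex and horizontal-edge special cases mentioned earlier):

```python
def inside(point, polygon):
    # Odd-even rule: cast a ray from the point toward +infinity along x
    # and count crossings with polygon edges; an odd count means inside.
    x, y = point
    n = len(polygon)
    crossings = 0
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside((2, 2), square))  # True
print(inside((5, 2), square))  # False
```

The strict-inequality test `(y1 > y) != (y2 > y)` is what keeps an edge from being counted twice when the ray passes near a shared vertex.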
This document discusses analysis of algorithms. It covers computation models like the Turing machine and RAM models. It then discusses measuring the time complexity, space complexity, and order of growth of algorithms. Time complexity is measured based on the number of basic operations, such as comparisons. Space complexity depends on the memory required. Order of growth classifies algorithms based on how their running time grows with input size n, such as O(n) or O(log n). Asymptotic notations like Big O, Omega, and Theta are used to represent the asymptotic time complexity of algorithms.
The document discusses algorithms and their analysis. It defines an algorithm as a step-by-step procedure to solve a problem and get a desired output. Key aspects of algorithms discussed include their time and space complexity, asymptotic analysis to determine best, average, and worst case running times, and common asymptotic notations like Big O that are used to analyze algorithms. Examples are provided to demonstrate how to determine the time and space complexity of different algorithms like those using loops, recursion, and nested loops.
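As a worked example of order of growth, a binary search that counts its own comparisons makes the O(log n) behavior concrete (a sketch, not taken from the document):

```python
def binary_search(arr, target):
    # Returns (index or -1, number of comparisons). Each iteration halves
    # the remaining range, so comparisons grow as O(log n), versus O(n)
    # for a linear scan of the same sorted list.
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

arr = list(range(1024))
idx, steps = binary_search(arr, 1000)
print(idx, steps)  # found well within ~log2(1024) = 10 comparisons
```

For n = 1024, a linear search could need up to 1024 comparisons in the worst case, while binary search never needs more than about 10: the difference the asymptotic notations summarize.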
This document provides an overview of database concepts. It discusses the traditional approach to data management versus the database approach. The traditional approach leads to problems like data redundancy, inconsistency, and inability to share data. A database management system addresses these issues by allowing centralized data storage and shared access. Key topics covered include data modeling, the relational database model, database administration, popular DBMSs, and emerging concepts like data warehousing, data mining and business intelligence.
This document discusses style sheet languages like CSS that are used to control the presentation of XML documents. CSS allows one to specify things like fonts, colors, spacing etc. for different elements in an XML file. A single XML file can then be formatted in multiple ways just by changing the associated CSS stylesheet without modifying the XML content. The document provides examples of using CSS selectors, rules and properties to style elements in an XML file and controlling presentation aspects like layout of elements on a page. It also discusses how to link the CSS stylesheet to an XML file using processing instructions.
This document discusses XML web services and their components. It defines XML web services as software services exposed on the web through the SOAP protocol and described with WSDL and registered in UDDI. It describes how SOAP is used for communication, WSDL describes service interfaces, and UDDI allows for service discovery. Examples of web services are provided. The architecture of web services is shown involving clients, services, and standards. Finally, it discusses how XML data can be transformed to HTML for display in web pages using XSLT transformation rules.
This document discusses different methods of data representation in GIS, including data collection, input, and output devices. It covers three main types of data input: sample ground data, topographic maps, and satellite digital data. Common input devices include digitizers, scanners, keyboards, and disk drives, while common output devices are plotters, printers, visual display units, and tape drives. The document then focuses on different data input methods like keyboard entry, digitizing, and scanning, outlining their processes, advantages, and limitations.
This document provides an overview of thermodynamics concepts including:
1. The first law of thermodynamics states that the change in internal energy of a system is equal to the heat transferred plus work done.
2. Enthalpy (H) is a state function that takes into account both internal energy changes and work related to pressure-volume changes during chemical reactions.
3. Hess's law states that the total enthalpy change for a reaction is equal to the sum of the enthalpy changes for the individual steps of that reaction.
The document summarizes key concepts in atomic structure:
- John Dalton proposed atoms as the smallest indivisible particles containing electrons, protons and neutrons.
- Rutherford's nuclear model presented atoms as mostly empty space with a dense positively charged nucleus.
- Bohr's model improved on this by proposing electrons orbit in fixed shells with discrete energies, explaining atomic spectra.
- Planck and Einstein established the particle-like nature of electromagnetic radiation as photons.
1) The chapter discusses tools for studying chemical reactions including equilibrium constants, free energy change, enthalpy, entropy, bond dissociation energy, kinetics and activation energy.
2) It then examines the chlorination of methane as a free-radical chain reaction involving initiation, propagation and termination steps.
3) Key concepts covered include how reaction rate depends on factors like temperature, activation energy and reaction order. Transition state theory and reaction energy diagrams are also explained.
This document provides an introduction to databases and database management systems (DBMS). It discusses key concepts such as the main components and users of a database including end users, database administrators, and designers. It also summarizes the main characteristics of the database approach like data abstraction, multiple views, and transaction processing. Some advantages of using a DBMS are controlling redundancy, restricting access, and enforcing integrity constraints. The document also outlines scenarios where a DBMS may not be needed.
This document summarizes an optical mark recognition (OMR) based attendance system. It introduces OMR, describes the system's phases including input, processing, and output stages, and discusses advantages like cost efficiency and speed as well as disadvantages such as potential proxy marking issues. The conclusion states that OMR based attendance systems provide a robust, low-cost technique that can be widely adopted.
This document provides an overview of computer graphics. It discusses what computer graphics is, the basic components of a computer graphics system including display devices like CRT monitors. It describes the two main techniques for displaying images on a CRT - vector/random scan and raster scan. The document also discusses color CRT monitors and the two techniques used - beam penetration and shadow mask. It outlines several applications of computer graphics like user interfaces, modeling, simulation and animation.
Model 1 multimedia graphics and animation introduction (1)Rahul Borate
Graphics controller
9 Refreshing of screen is
required.
Refreshing of screen is not required.
10 Suitable for TV, monitor. Suitable for CAD/CAM application,
scientific visualization.
The document discusses various input devices used for graphics workstations, including keyboards, mice, trackballs, spaceballs, joysticks, data gloves, digitizers, image scanners, touch panels, light pens, and voice systems. Image scanners work by placing an image on a glass plate and using a scanning unit with light sensors to convert the image to digital pixel data. Touch panels detect screen positions touched by the user using either optical, electrical, or acoustic methods. Light pens allow screen positions to be selected by detecting the light emitted from a CRT screen. Voice systems use speech recognition to accept voice commands by matching input to a predefined dictionary.
Basic fundamental Computer input/output Accessoriessuraj pandey
The document discusses various computer input and output devices. For input, it describes keyboards, mice, joysticks, light pens, touch screens, data gloves, tablets, digitizers, scanners, optical character recognition, optical mark readers, bar code readers, voice recognition, electronic cards, digital cameras, and webcams. For output, it discusses monitors including CRT, LCD, LED, plasma displays, printers, and impact vs non-impact printers.
This document discusses image processing and its various applications and techniques. It defines image processing as processing images in a desired manner and explains it has two aspects: improving visual appearance for humans and preparing images for feature measurement. It describes why image processing is needed such as preparing digital images for viewing and optimizing images for applications. It also outlines different types of image processing like image-to-image, image-to-information, and information-to-image transformations.
This document summarizes various modern surveying equipment used for mapping and construction projects, including:
- Electronic distance measurement (EDM) devices and total stations that integrate EDM to measure distances electronically.
- Automatic and digital levels used to measure elevations and slopes accurately and efficiently.
- Global positioning systems (GPS) that use satellites to determine precise locations on Earth.
- Key principles, components, operations, and uses of total stations are described, which integrate distance measurement, angle measurement, and data recording into one portable instrument.
Computer graphics involves rendering pictures, charts, and graphs on computers rather than just text. It has many applications including movies, games, medical imaging, CAD, education, and simulations. Computer graphics uses pixels - the smallest display elements - to represent images on screens. There are two main types: interactive graphics which allow user input, and passive graphics which do not. Raster scan displays refresh images by sweeping an electron beam across the screen in lines, while random scan displays draw images line by line. Algorithms like DDA and Bresenham's are used to efficiently render lines and circles of pixels.
Modern surveying techniques utilizes advanced electronic equipment for measuring distances, angles, and elevations. This includes digital levels that use electronic image processing of barcoded staff readings, total stations that integrate distance and angle measurements, and electromagnetic distance measurement instruments. Remote sensing involves analyzing sensor data such as satellite imagery to obtain information about areas without direct contact. It has various applications including agriculture, urban planning, hydrology, and disaster management by aiding tasks such as early warning, damage assessment, and recovery efforts.
Computer graphics uses computers to draw and display pictures, graphics, and data in pictorial form. It expresses data visually instead of just text. Computer graphics is used in movies, games, medical imaging, design, education, simulators, art, presentations, image processing, and graphical user interfaces. Pixels are the smallest display elements on a screen, each with an intensity and color value. Interactive graphics allow user input to modify images, while passive graphics do not. Common display devices are CRT monitors which use electron beams to excite phosphors and LCD screens which use pixels to control light transmission. Algorithms like DDA and Bresenham's are used to draw lines on these displays.
The document provides an overview of common computer input and output devices. It describes keyboards, mice, scanners, and sensors as examples of input devices used to capture and send data to a computer. It also discusses monitors, printers, and speakers as examples of output devices that display or convey information from a computer in visual, audio, or physical forms. The document contains detailed descriptions and comparisons of specific input devices like different types of mice, keyboards, scanners, and sensors. It also examines characteristics of output displays like monitor resolution, refresh rates, and types of displays including CRT, LCD, LED, and plasma screens.
The document provides an overview of computer hardware input and output devices. It describes common input devices like the keyboard, mouse, and scanners that allow data to be entered into the computer. It also discusses output devices like monitors, printers, and speakers that allow the computer to display or present information to users. A diagram shows the basic components of a computer system including the central processing unit, memory, hard drive, and connections to input and output devices.
The document provides an overview of computer hardware and input devices. It discusses the basic components of a computer system including the input, processing, storage and output units. It then describes various commonly used input devices such as keyboards, mice, scanners, microphones, webcams and touchscreens. Specific input devices like optical mark readers, bar code readers and digitizers are also explained along with their uses.
This document provides an introduction to computer graphics, including its applications and components. It discusses the different types of graphic display devices such as CRT monitors and their concepts like double buffering. It also covers 2D graphics topics like coordinate systems, line and circle drawing algorithms. The key components of computer graphics are explained as the frame buffer, display controller, and monitor. Interactive and non-interactive computer graphics are defined. Finally, it discusses how the display controller works with the frame buffer and monitor to produce graphical output.
Surveying is considered as one of the oldest field of Civil Engineering. As days passes we can see lot of improvements in technology. In this ppt we can able to see latest instruments used for surveying
Computer Graphics is an advance field in information technology and all about manipulation and rendering of images. This presentation covers all the main concepts in computer graphics including graphics algorithms.
This document outlines the syllabus and content for a course on computer graphics. The 6 units cover topics like primitive algorithms, 2D and 3D transformations, viewing and clipping, curves/surfaces, object rendering and animation. Key concepts discussed include image representation using pixels, bitmap vs vector graphics, applications in design, entertainment, education, and interfaces. Display devices like CRT, LCD and plasma are explained. Coordinate systems and input technologies are also introduced.
This document discusses techniques for filling 2D shapes and regions in raster graphics. It covers seed fill algorithms that start with an interior seed point and grow outward, filling neighboring pixels. Boundary fill and flood fill are described as variations. The document also discusses raster-based filling that processes shapes one scanline at a time. Methods for filling polygons are presented, including using the even-odd rule or winding number rule to determine if a point is inside the polygon boundary.
The document derives Bresenham's line algorithm for drawing lines on a discrete grid. It starts with the line equation and defines variables for the slope and intercept. It then calculates the distance d1 and d2 from the line to two possible pixel locations and expresses their difference in terms of the slope and intercept. By multiplying this difference by the change in x, it removes the floating point slope value, resulting in an integer comparison expression. This is defined recursively to draw each subsequent pixel, using pre-computed constants. The initial p0 value is also derived from the line endpoint coordinates.
The document discusses algorithms for drawing lines and circles on a discrete pixel display. It begins by describing what characteristics an "ideal line" would have on such a display. It then introduces several algorithms for drawing lines, including the simple line algorithm, digital differential analyzer (DDA) algorithm, and Bresenham's line algorithm. The Bresenham algorithm is described in detail, as it uses only integer calculations. Next, a simple potential circle drawing algorithm is presented and its shortcomings discussed. Finally, the more accurate and efficient mid-point circle algorithm is described. This algorithm exploits the eight-way symmetry of circles and uses incremental calculations to determine the next pixel point.
The document provides an introduction to XSLT (Extensible Stylesheet Language Transformations), including:
1) It discusses XSLT basics like using templates to extract values from XML and output them, using for-each loops to process multiple elements, and if/choose for decisions.
2) It covers XPath for addressing parts of an XML document, and functions like contains() and position().
3) The document gives examples of transforming sample XML data using XSLT templates, value-of, and apply-templates.
XML documents can be represented and stored in memory as tree structures using models like DOM and XDM. XPath is an expression language used to navigate and select parts of an XML tree. It allows traversing elements and their attributes, filtering nodes by properties or position, and evaluating paths relative to a context node. While XPath expressions cannot modify the document, they are commonly used with languages like XSLT and XQuery which can transform or extract data from XML trees.
This document provides an overview of XML programming and XML documents. It discusses the physical and logical views of an XML document, document structure including the root element, and how XML documents are commonly stored as text files. It also summarizes how an XML parser reads and validates an XML document by checking its syntax and structure. The document then covers various XML components in more detail, such as elements, attributes, character encoding, entities, processing instructions, well-formedness, validation via DTDs, and document modeling.
XML Schema provides a way to formally define and validate the structure and content of XML documents. It allows defining elements, attributes, and data types, as well as restrictions like length, pattern, and value ranges. DTD is more limited and cannot validate data types. XML Schema is written in XML syntax, uses XML namespaces, and provides stronger typing capabilities compared to DTD. It allows defining simple and complex element types, attributes, and restrictions to precisely describe the expected structure and values within XML documents.
An attribute declaration specifies attributes for elements in a DTD. It defines the attribute name, data type or permissible values, and required behavior. For example, an attribute may have a default value if not provided, be optional, or require a value. Notations can label non-XML data types and unparsed entities can import binary files. Together DTDs and entities provide a schema to describe document structure and relationships.
This document provides an introduction and overview of XML. It explains that XML stands for Extensible Markup Language and is used for data transportation and storage in a platform and language neutral way. XML plays an important role in data exchange on the web. The document discusses the history of XML and how it was developed as an improvement over SGML and HTML by allowing users to define their own tags to structure data for storage and interchange. It also provides details on the pros and cons of XML compared to other markup languages.
This document provides instructions for packaging and deploying a J2EE application that was developed in IBM Rational Application Developer. It describes resetting the database to its original state, exporting the application as an EAR file, using the WebSphere administrative console to install the EAR file on the application server, and testing the application in a web browser. The goal is to simulate taking an application developed in a development environment and deploying it to a production server.
This document provides an overview of key Java enterprise technologies including JNDI, JMS, JPA and XML. It discusses the architecture and usage of JNDI for accessing naming and directory services. It also covers the point-to-point and publish/subscribe messaging models of JMS, the core JMS programming elements like connection factories, connections and destinations, and how applications use these elements to send and receive messages. Finally, it briefly introduces JPA for object-relational mapping and the role of XML.
The document discusses the benefits of using Enterprise JavaBeans (EJBs) for developing Java EE applications. It explains that EJBs provide infrastructure for developing and deploying mission-critical, enterprise applications by handling common tasks like database connectivity and transaction management. The three types of EJBs - session, entity, and message-driven beans - are described as well as how they are contained in EJB containers.
This document provides an overview of JSP and Struts programming. It discusses the advantages of JSP over servlets, the JSP lifecycle, and basic JSP elements like scriptlets, expressions, directives. It also covers creating simple JSP pages, the JSP API, and using scripting elements to include Java code in JSP pages.
This document provides lecture notes on servlet programming. It covers topics like the introduction to servlets, GET and POST methods, the lifecycle of a servlet, servlet interfaces like Servlet, GenericServlet and HttpServlet. It also discusses request dispatching in servlets, session management techniques and servlet filters. Code examples are provided to demonstrate servlet implementation and request dispatching.
The document discusses Java Database Connectivity (JDBC) and provides details about its core components and usage. It covers:
1) The four core components of JDBC - drivers, connections, statements, and result sets.
2) The four types of JDBC drivers and examples of each.
3) How to use JDBC to connect to a database, execute queries using statements, iterate through result sets, and update data. Prepared statements are also discussed.
The document is a set of lecture notes on Enterprise Java from January to June 2014 prepared by Mr. Hitesh Kumar Sharma and Mr. Ravi Tomar. It covers core J2EE technologies, enterprise application architectures like 2-tier, 3-tier and n-tier, advantages and disadvantages of architectures, J2EE application servers, web containers and EJB containers. The notes are to be submitted by B.Tech CS VI semester students specializing in MFT, O&G, OSS and CCVT.
This document provides an overview of Android development. It discusses the Android SDK, Dalvik VM, and differences between Android and Java APIs. It also covers key aspects of building Android apps like activities, intents, services, and UI components. Debugging, optimizations, and the anatomy of an Android app are also briefly discussed.
The document summarizes the different resource types used in an Android application. Resources like animations, colors, drawables, layouts, menus and strings are stored in the res folder and accessed via their respective R classes. The src folder contains Java source code, gen contains the R class, assets stores raw files, and bins has compiled code. Resources support different densities in drawable folders. Layouts define UIs and values contains simple data like strings.
The document discusses the Android application lifecycle, which describes the steps an app goes through from launch to exit. It includes starting, resuming, pausing, stopping and destroying activities. The lifecycle is managed by callbacks in the Activity class like onCreate(), onResume() and onDestroy(). An app's manifest defines its components and launcher activity using tags like <activity>, <intent-filter> and <category>.
SQLLite and Java
SQLite is an embedded SQL database that is not a client/server system but is instead accessed via function calls from an application. It uses a single cross-platform database file. The android.database.sqlite package provides classes for managing SQLite databases in Android applications, including methods for creating, opening, inserting, updating, deleting, and querying the database. Queries return results as a Cursor object that can be used to access data.
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 3)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
Lesson Outcomes:
- students will be able to identify and name various types of ornamental plants commonly used in landscaping and decoration, classifying them based on their characteristics such as foliage, flowering, and growth habits. They will understand the ecological, aesthetic, and economic benefits of ornamental plants, including their roles in improving air quality, providing habitats for wildlife, and enhancing the visual appeal of environments. Additionally, students will demonstrate knowledge of the basic requirements for growing ornamental plants, ensuring they can effectively cultivate and maintain these plants in various settings.
Brand Guideline of Bashundhara A4 Paper - 2024khabri85
It outlines the basic identity elements such as symbol, logotype, colors, and typefaces. It provides examples of applying the identity to materials like letterhead, business cards, reports, folders, and websites.
Cross-Cultural Leadership and CommunicationMattVassar1
Business is done in many different ways across the world. How you connect with colleagues and communicate feedback constructively differs tremendously depending on where a person comes from. Drawing on the culture map from the cultural anthropologist, Erin Meyer, this class discusses how best to manage effectively across the invisible lines of culture.
Post init hook in the odoo 17 ERP ModuleCeline George
In Odoo, hooks are functions that are presented as a string in the __init__ file of a module. They are the functions that can execute before and after the existing code.
How to Create a Stage or a Pipeline in Odoo 17 CRMCeline George
Using CRM module, we can manage and keep track of all new leads and opportunities in one location. It helps to manage your sales pipeline with customizable stages. In this slide let’s discuss how to create a stage or pipeline inside the CRM module in odoo 17.
The Science of Learning: implications for modern teachingDerek Wenmoth
Keynote presentation to the Educational Leaders hui Kōkiritia Marautanga held in Auckland on 26 June 2024. Provides a high level overview of the history and development of the science of learning, and implications for the design of learning in our modern schools and classrooms.
How to Download & Install Module From the Odoo App Store in Odoo 17Celine George
Custom modules offer the flexibility to extend Odoo's capabilities, address unique requirements, and optimize workflows to align seamlessly with your organization's processes. By leveraging custom modules, businesses can unlock greater efficiency, productivity, and innovation, empowering them to stay competitive in today's dynamic market landscape. In this tutorial, we'll guide you step by step on how to easily download and install modules from the Odoo App Store.
5. Graphics Input Devices
• Any device that allows information from
outside the computer to be communicated to
the computer is considered an input device.
• An understanding of the various input devices
is important for constructing high-quality
graphical user interfaces.
• Input devices are of two basic types: analog
and digital.
Graphics-UPES
6. Commonly used Analog Input Devices
(convert a graphic system user’s movements
into changes in voltage)
• Paddle control,
• Trackball,
• Mouse, and
• Joystick
7. Commonly used Digital Input Devices
(are actually analog devices that collect input
information in discrete form)
• Light pen,
• Magnetic pen and tablet,
• Touch Panel, and
• Keyboard
• Digitizers
• Image Scanners
8. Paddle Control
• Simplest of the analog input devices.
• The paddle control varies its resistance,
thereby changing the voltage of the input
circuit in relation to the movement of the
paddle’s control knob.
• Commonly, two paddle controls are used in
graphics system, one to control movement
in the x-direction and one to control
movement in the y-direction.
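As a sketch of the voltage-to-position mapping described above — assuming the paddle's variable resistance is sampled through an ADC into a raw integer (the 10-bit ADC range and 640-pixel screen width below are illustrative assumptions, not values from the slides):

```python
def paddle_to_screen(adc_value, adc_max=1023, screen_width=640):
    """Map a raw ADC reading of the paddle's variable resistor to
    an x (or y) screen coordinate.  ADC range and screen width are
    illustrative assumptions."""
    return round(adc_value / adc_max * (screen_width - 1))
```

With two paddles, the routine would be called once per axis to obtain the full (x, y) position.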
10. Trackball
• Trackball is normally operated by rolling
the ball with the palm of the hand.
• It mechanically combines two variable
resistors in a single device, thus allowing
the user to use one hand to enter both x and
y information with a single device.
12. Mouse
• The mouse, like the trackball, combines two
variable resistors in a single device.
• Wheels or rollers on the bottom of the mouse can
be used to record the amount and direction of
movement. Another method for detecting mouse
motion is with an optical sensor.
• One, two or three buttons are usually included on
the top of the mouse for signaling the execution of
some operation, such as recording cursor position
or invoking a function.
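The amount-and-direction reports recorded by the rollers or optical sensor can be folded into an absolute cursor position. A minimal sketch, with the screen size an assumed value:

```python
def update_cursor(pos, dx, dy, width=640, height=480):
    """Fold one relative movement report (dx, dy) from the mouse's
    rollers or optical sensor into an absolute cursor position,
    clamped to the screen bounds (screen size is an assumption)."""
    x = min(max(pos[0] + dx, 0), width - 1)
    y = min(max(pos[1] + dy, 0), height - 1)
    return (x, y)
```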
14. Joystick
• A joystick consists of a small, vertical lever
(stick) mounted on a base that is used to
steer the screen cursor around.
• The distance that the stick is moved in any
direction from its center position
corresponds to screen-cursor movement in
that direction.
16. Light Pen
• Light pens are used to select screen
positions by detecting the light coming from
the points on the CRT screen.
• They are sensitive to the short burst of light
emitted from the phosphor coating at the
instant the electron beam strikes a particular
point.
• The recorded light-pen coordinates can be
used to position an object or to select a
processing option.
18. Magnetic pen and tablet
• A magnetic pen and tablet are composed of
a two-dimensional wire grid and a
radiowave-emitting stylus.
• The wire grid is a matrix antenna that locates
the position of the stylus by measuring the
intensity of the radio signal received by each
wire in the grid.
20. Touch Panel
• Touch panels allow displayed objects or
screen positions to be selected with the
touch of a finger.
• Optical touch panels make use of a series of
infrared light-emitting diodes (LEDs) and
sensors located around the perimeter of the
display.
• When the user touches the screen, light
beams are broken, indicating the location of
the user’s finger.
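A minimal sketch of how the broken beams locate the finger — assuming the panel controller reports the indices of the interrupted row and column beams, and taking the centre of each broken range:

```python
def locate_touch(broken_rows, broken_cols):
    """Estimate the touched point on an optical touch panel from
    the indices of interrupted horizontal and vertical IR beams,
    taking the centre of each broken-beam range."""
    if not broken_rows or not broken_cols:
        return None  # no beams broken: no touch
    y = sum(broken_rows) / len(broken_rows)
    x = sum(broken_cols) / len(broken_cols)
    return (x, y)
```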
22. Keyboard
• The keyboard is an efficient device for inputting
nongraphic data, such as picture labels associated
with a graphics display.
• Keyboards can also be provided with features to
facilitate entry of screen coordinates, menu
selections, or graphic functions.
• Function keys allow users to enter frequently used
operations in a single keystroke, and cursor-
control keys can be used to select displayed
objects or coordinate positions by positioning the
screen cursor.
24. Digitizers
• A common device for interactively
selecting coordinate positions on an object
is a digitizer.
• These discrete coordinate positions can be
joined with straight-line segments to
approximate the curve or surface shapes.
• Graphic tablets provide a highly accurate
method for selecting coordinate positions
with accuracy of about 0.05 mm.
25. Digitizers
• Many graphic tablets are constructed with a
rectangular grid of wires embedded in the
tablet surface.
• Electromagnetic pulses are generated in
sequence along the wires, and an electric
signal is induced in a wire coil in an
activated stylus or hand cursor to record a
tablet position.
Contd…
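The position-recording step can be sketched as picking the wire with the strongest induced signal in each direction (the signal arrays below are hypothetical samples, one reading per grid wire):

```python
def stylus_position(row_signals, col_signals):
    """Recover the stylus tablet position by picking the grid wire
    (in each direction) with the strongest induced signal."""
    row = max(range(len(row_signals)), key=row_signals.__getitem__)
    col = max(range(len(col_signals)), key=col_signals.__getitem__)
    return (row, col)
```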
27. Image Scanners
• An image scanner records the gray-scale or color
gradations of a given color or black-and-white
photograph and stores them in an array.
• Once the image is stored, we can apply
transformations to rotate, scale, or crop the
picture to fit a particular screen area.
• We can also apply various image
processing methods to modify the array
representation of the picture (e.g. contrast
enhancement).
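As an example of the image-processing methods mentioned, a simple contrast enhancement (a linear contrast stretch) over the stored array might look like this — a sketch assuming the image is a list of rows of grey levels in 0–255:

```python
def contrast_stretch(image):
    """Linearly rescale a 2D grey-scale array (list of rows) so the
    darkest stored pixel maps to 0 and the brightest to 255."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    if hi == lo:               # flat image: nothing to stretch
        return [row[:] for row in image]
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]
```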
29. DataGlove: 3D Interaction Device
• The glove is constructed with a series of
sensors that detect hand and finger
movements.
• Electromagnetic coupling between
transmitting antennas and receiving
antennas is used to provide information
about position and orientation of the hand.
• Inputs from the glove can be used to
position or manipulate objects in a virtual
scene.
31. Graphic Storage Formats
Regardless of the storage medium selected,
the graphics system designer will always
use some combination of the following
basic storage formats:
1. Image-only Storage
2. Display-memory Storage
3. Compressed-memory Storage
4. Information Storage
32. Image-Only Storage
• Here the video image is retained on video
tape, on a video disk, or as a photograph.
• Storage of images in this fashion is
relatively inexpensive.
• Once the image is stored, it is difficult and
expensive to restore it to the computer for
further manipulation.
33. Display-Memory Storage
• Here the bit pattern that represents the
image is copied directly from display
memory to the storage medium.
• A utility program may be used to save
blocks of the computer memory by passing
the starting and ending addresses of the
display memory.
• Drawback: Storing images in this manner
requires a great deal of memory.
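The save operation amounts to copying the block of memory between the two addresses; a minimal sketch, with `memory` standing in for display memory as a byte sequence:

```python
def save_display_memory(memory, start, end):
    """Copy the bit pattern between a starting and ending address
    of display memory into a buffer for storage -- a direct, and
    therefore memory-hungry, snapshot of the image."""
    return bytes(memory[start:end])
```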
34. Compressed-Memory Storage
• Storage space can be greatly reduced by
storing images in compressed format.
• Compression takes advantage of repeated
patterns in display memory.
• Compression routines can be complex,
exploiting long runs of replicated bytes.
• It is less useful when the images to be
saved contain few or no such runs.
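The simplest scheme that exploits runs of replicated bytes is run-length encoding: each run of identical bytes becomes a (count, value) pair. A minimal sketch (this particular encoder is illustrative, not a standard format):

```python
def rle_compress(data):
    """Replace runs of identical bytes with (count, value) pairs."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += [run, data[i]]
        i += run
    return bytes(out)

def rle_decompress(packed):
    out = bytearray()
    for i in range(0, len(packed), 2):
        out += bytes([packed[i + 1]]) * packed[i]
    return bytes(out)

screen = bytes([0] * 100 + [255] * 20 + [0] * 100)  # mostly blank display
packed = rle_compress(screen)   # 6 bytes instead of 220
```

A mostly blank screen compresses dramatically; a screen full of noise would compress poorly or even grow, which is exactly the caveat noted above.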
36. Information Storage
• It retains the information (series of
commands that describe the image) used to
construct the image.
• It can save considerable time and memory if
the image to be stored is composed entirely
of standard objects.
• This approach is not fruitful if nonstandard
objects are used.
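Information storage keeps the commands rather than the pixels, so the image can be rebuilt on any device that understands those commands. A sketch with a hypothetical command list and a logging "device" (all names here are illustrative):

```python
# Instead of pixels, store the commands that rebuild the picture.
display_list = [
    ("line", 0, 0, 100, 0),
    ("line", 100, 0, 100, 100),
    ("circle", 50, 50, 25),
]

def replay(commands, device):
    """Re-execute the stored commands on any output device."""
    for op, *args in commands:
        getattr(device, op)(*args)

class LoggingDevice:
    """Stand-in for a real display: just records what it is told."""
    def __init__(self):
        self.calls = []
    def line(self, x1, y1, x2, y2):
        self.calls.append(f"line {x1},{y1}-{x2},{y2}")
    def circle(self, cx, cy, r):
        self.calls.append(f"circle r={r} at {cx},{cy}")

dev = LoggingDevice()
replay(display_list, dev)
```

Three commands can describe a picture that would take thousands of pixel values to store directly; but if the picture contains nonstandard objects that no command describes, this approach breaks down, as the slide notes.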
39. Graphics Output Devices
• The most commonly used computer output
devices capable of producing graphical
output are:
Raster-scan Cathode Ray Tube (CRT)
Plasma Display
Liquid Crystal Display
3D Viewing using Stereoscopic Systems
Plotters and Printers
40. Raster-Scan CRT
• Interactive computer graphics demands display
devices whose images can be changed quickly.
• Nonpermanent image displays allow an image to
be changed, making possible dynamic movement
of portions of an image.
• Raster-scan CRTs are used in common television
sets. The term “raster” is a synonym for the term
“matrix”. A raster-scan CRT scans a matrix with
an electron beam.
• The basic understanding of CRT’s internal
operations is useful in graphics programming.
41. Components of a raster-scan CRT
• Electron gun
• Control electrode
• Focusing electrode
• Deflection yoke
• Phosphor-coated screen
43. Electron Gun
• It consists of a series of components
(primarily a heater and a cathode) which
together cause electrons to collect at the end
of the electron gun.
• These electrons are then accelerated by
application of an electric field.
44. Control Electrode
• It is used to regulate the flow of electrons.
• It is a metal cylinder that fits over the
cathode.
• A negative voltage on the control electrode
simply decreases the number of electrons
passing through.
• Hence intensity of the electron beam is
controlled by setting voltage levels on the
control electrode.
45. Focusing Electrode
• It is used to create a clear picture by
focusing the electron beam into a narrow
beam.
• The focusing electrode serves this purpose
by exerting an electromagnetic force on
the electrons in the electron beam.
• The effect resembles that of a glass lens on
light waves.
46. Deflection Yoke
• It is used to control the direction of the
electron beam.
• The deflection yoke creates a magnetic field
which will bend the electron beam as it
passes through the field.
• In a conventional CRT the yoke is
connected to a scan generator.
47. • The scan generator sends out an oscillating
sawtooth current that, in turn, causes the
deflection yoke to apply a varying magnetic
field to the electron beam’s path.
• The oscillating current causes the
electron beam to move across the CRT’s
screen in a regular pattern.
48. Phosphor-coated Screen
• The CRT surface is coated with special crystals
called phosphors.
• Phosphors glow when they are hit by a high-
energy electron beam.
• The glow given off by the phosphor during
exposure to the electron beam is known as
fluorescence.
• The continuing glow given off after the beam is
removed is known as phosphorescence. Its
duration is known as the phosphor’s persistence.
49. Principle of Raster-scan Displays
• Picture definition is stored in a memory area
called the refresh buffer or frame buffer.
This memory area holds the set of intensity
values for all the screen points.
• The electron beam is swept across the
screen, one row at a time from top to
bottom, painting the stored intensity values.
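The frame buffer and the beam's sweep can be modeled directly. A minimal sketch, assuming a tiny hypothetical 8x4 display; the beam visits every pixel row by row, top to bottom, emitting the stored intensity at each point:

```python
WIDTH, HEIGHT = 8, 4
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]  # all pixels dark
frame_buffer[1][3] = 255                             # light one pixel

def refresh(buf):
    """One refresh cycle: visit every pixel, one scan line at a time."""
    scan_order = []
    for y in range(len(buf)):              # top to bottom
        for x in range(len(buf[y])):       # left to right
            scan_order.append((x, y, buf[y][x]))  # beam intensity at (x, y)
    return scan_order

sweep = refresh(frame_buffer)
```

Note that the beam visits every screen point on every cycle, whether lit or not; the picture is entirely defined by the intensities held in the buffer.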
50. Refresh and Flicker
Each time the electron beam goes through a
complete cycle of raster scan lines, the
CRT is said to be “refreshed”.
It is very important that the persistence of
the phosphor used and the refresh rate be
matched.
Otherwise, an image on the CRT may
appear to flash rapidly on and off. It is
called “flicker”.
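The matching of persistence and refresh rate comes down to simple arithmetic: at 60 Hz a frame lasts 1000/60 ≈ 16.7 ms, so the phosphor's glow must last at least that long or the eye sees the screen blink. A rough rule of thumb, sketched as a function (the threshold in a real display also depends on brightness and viewing conditions):

```python
def flickers(refresh_hz, persistence_ms):
    """Rough rule: if the phosphor fades before the beam returns,
    the viewer perceives flicker."""
    frame_period_ms = 1000.0 / refresh_hz
    return persistence_ms < frame_period_ms

# Long-persistence phosphor at 60 Hz: steady image.
steady = not flickers(60, 40)
# Short-persistence phosphor at a slow 30 Hz refresh: flicker.
blinks = flickers(30, 10)
```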
51. Horizontal/ Vertical Retrace
• Refreshing of raster-scan displays is done at
the rate of 60 to 80 frames per second.
• At the end of each scan line, the electron
beam returns to the left side of the screen to
begin displaying the next line (horizontal
retrace).
• And at the end of each frame, the electron
beam returns to the top left corner of the
screen to begin the next frame (vertical
retrace).
52. Interlacing
• Interlacing is primarily used with slower
refreshing rates to avoid flicker.
• Here each frame is displayed in two passes
(so the entire screen is displayed in one-half
the time).
• In the first pass, the beam sweeps across
every other scan line from top to bottom.
• Then after the vertical retrace, the beam
sweeps out the remaining scan lines.
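The two-pass order described above is easy to state precisely: the first field covers the even-numbered scan lines, the second field the odd ones, and together they cover the whole screen. A minimal sketch:

```python
def interlaced_order(n_lines):
    """Two passes: even-numbered scan lines first, then the odd ones."""
    first_field = list(range(0, n_lines, 2))
    second_field = list(range(1, n_lines, 2))
    return first_field, second_field

even, odd = interlaced_order(6)
# even -> [0, 2, 4], odd -> [1, 3, 5]: every line is still drawn,
# but each field takes only half the time of a full frame.
```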
54. Plasma Display
It is a flat-panel display in the emissive
category (displays that convert electrical
energy into light).
Plasma displays do not have to be refreshed;
that is, once a pixel is displayed on the
screen, it will remain lit until it is
intentionally turned off.
The electrofluorescent material is nothing but
an array of tiny neon bulbs. Each bulb can be
put into an “on” state or an “off” state, and
remains in the state until explicitly changed to
the other.
56. Liquid Crystal Display
These non-emissive devices produce a
picture by passing polarized light from the
surroundings or from an internal light
source through a liquid crystal material that
can be aligned to either block or transmit
the light.
The glass plate serves as a bounding
surface for the conductive coating.
Conductive coating acts as a conductor so
that a voltage can be applied across the
liquid crystal.
57. Liquid crystal is a substance which will polarize
light when a voltage is applied to it.
Polarized film is a transparent sheet that
polarizes light. Its axis of polarization is kept
90° out of phase with that of the liquid crystal.
Figure: Liquid Crystal Display
58. Plotters
• All plotters behave like slow vector devices
from the graphics programmer’s point of
view.
• Examples:
Flatbed plotter
Drum plotter
• Components of a Flatbed plotter:
Pen – an actual pen that draws on the paper.
59. Write-move mechanism – used to lift and
lower the pen.
Pen cartridge – holds several differently
colored pens. The plotter holds a program
in ROM that instructs it to pick the
corresponding colored pen.
x Driver motor – moves the pen
horizontally across the paper.
y Driver motor – moves the pen vertically
across the paper.
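The interplay of the components above can be sketched as a toy model. This is a hypothetical interface, not a real plotter protocol: `lift`/`lower` stand in for the write-move mechanism, and `move_to` for the x and y driver motors; a stroke appears on paper only while the pen is lowered.

```python
class FlatbedPlotter:
    """Toy model of a flatbed plotter (hypothetical interface)."""
    def __init__(self):
        self.x = self.y = 0
        self.pen_down = False
        self.trace = []                # segments actually drawn on paper

    def lift(self):                    # write-move mechanism: pen up
        self.pen_down = False

    def lower(self):                   # write-move mechanism: pen down
        self.pen_down = True

    def move_to(self, x, y):
        """x/y driver motors move the pen; it draws only when lowered."""
        if self.pen_down:
            self.trace.append(((self.x, self.y), (x, y)))
        self.x, self.y = x, y

p = FlatbedPlotter()
p.move_to(10, 10)      # travel with the pen up: nothing drawn
p.lower()
p.move_to(10, 50)      # one visible vertical stroke
p.lift()
```

This also shows why plotters behave like slow vector devices: the program issues pen movements, not pixel values.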
60. 3D Viewing using Stereoscopic Systems
• In a stereoscopic projection, two views of a
scene are generated from a viewing direction
corresponding to each eye.
• When we simultaneously look at the left
view with the left eye and the right view with
the right eye, the two views merge into a
single image and we perceive a scene with
depth.
• Examples: i) stereoscopic glasses & an
infrared synchronizing emitter, ii) Headset
63. Architecture of a Raster-graphics System
Video Controller
- It performs the basic refreshing operation.
- It accesses the frame buffer directly to refresh the
screen.
- It can retrieve multiple pixel values from the
frame buffer on each pass.
- The multiple pixel intensities are then stored in a
separate register and used to control the CRT
beam intensity for a group of adjacent pixels.
- Double buffering is often used in real-time
animations.
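Double buffering can be sketched with two buffers: the video controller scans out the front buffer while the application renders the next frame into the back buffer, and the two are swapped only when the new frame is complete. A minimal illustration (16 "pixels", names hypothetical):

```python
# Two frame buffers: the controller refreshes from 'front'
# while the application draws the next frame into 'back'.
front = [0] * 16
back = [0] * 16

def draw_next_frame(buf, value):
    """Render off-screen into the back buffer."""
    for i in range(len(buf)):
        buf[i] = value

def swap():
    """Flip which buffer the controller scans out --
    the viewer never sees a half-drawn frame."""
    global front, back
    front, back = back, front

draw_next_frame(back, 255)   # render while 'front' is still displayed
swap()                       # show the new frame only when complete
```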
64. Display Processor
- Its purpose is to free the CPU from the
graphics chores.
- It digitizes a picture definition given in an
application program into a set of pixel
intensity values for storage in the frame
buffer.
- Other functions include generating various
line styles (dashed, dotted or solid),
displaying color areas and performing
manipulations on displayed objects.
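The core chore the display processor takes over is scan conversion: turning a picture definition such as "line from (x1, y1) to (x2, y2)" into pixel intensities in the frame buffer. A minimal sketch using a simple DDA stepper (the real hardware typically uses an integer algorithm such as Bresenham's):

```python
def scan_convert_line(buf, x1, y1, x2, y2):
    """Digitize a line command into pixel intensities (simple DDA)."""
    steps = max(abs(x2 - x1), abs(y2 - y1), 1)
    dx, dy = (x2 - x1) / steps, (y2 - y1) / steps
    x, y = float(x1), float(y1)
    for _ in range(steps + 1):
        buf[round(y)][round(x)] = 1   # set the nearest pixel
        x, y = x + dx, y + dy

fb = [[0] * 8 for _ in range(8)]
scan_convert_line(fb, 0, 0, 7, 3)     # the CPU just issued "line"
# fb now holds one lit pixel per column along the line's path;
# the CPU never touched the frame buffer itself.
```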
66. Architecture of a Random-display System
• Graphics patterns are drawn on a random-scan
(vector) system by directing the electron beam
along the component lines of the picture.
• Graphic commands in the application program are
translated by the graphics package into a display
file stored in the system memory.
• The display processor cycles through each
command in the display file program once during
each refresh cycle.
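The display-file cycle above can be sketched directly: each refresh, the processor replays every command, moving the beam unlit for "move" commands and tracing component lines for "line" commands. The command names and coordinates here are hypothetical:

```python
# The graphics package translates application commands into a
# display file; the display processor replays the whole file
# once per refresh cycle.
display_file = [
    ("move", 10, 10),    # reposition the beam, unlit
    ("line", 10, 90),    # trace a component line of the picture
    ("line", 90, 90),
]

def refresh_cycle(commands):
    """One pass of the display processor over the display file."""
    beam = (0, 0)
    drawn = []
    for op, x, y in commands:
        if op == "line":
            drawn.append((beam, (x, y)))  # beam traces the line directly
        beam = (x, y)
    return drawn

segments = refresh_cycle(display_file)
```

Unlike a raster system, nothing here visits every screen point: the beam goes only where the picture's lines are, which is why a vector display's refresh time grows with picture complexity rather than screen size.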